Welcome!
- Zhuohao (Jerry) Zhang
- zhuohao [at] uw.edu
- @ZhuohaoZhang
My name is Zhuohao (Jerry) Zhang. I am a 5th-year Ph.D. candidate at the University of Washington, working with Prof. Jacob Wobbrock at the ACE Lab. I obtained my M.S. degree in CS from the University of Illinois Urbana-Champaign, where I worked with Prof. Yang Wang, and my B.Eng. degree in CS from Zhejiang University, China. I also worked closely with Prof. Anhong Guo at UMich. During summers, I interned at Apple HCMI (twice), Microsoft Research, Meta Reality Labs, and Adobe Research.
My research focuses on aligning AI systems with human values through structured semantic representations. I study forms of human judgment that are nuanced, context-dependent, and difficult to operationalize, and develop taxonomies, benchmarks, and intermediate languages that make these evaluations computationally explicit. My work began in accessibility, where I examined how AI systems break down in high-stakes contexts for blind and low-vision users. Today, I extend that foundation to broader questions of AI alignment and human-AI collaboration.
My research has been recognized and supported by the Apple Scholars in AI/ML PhD fellowship.
From Fluency to Formalization
Contemporary AI systems can generate fluent outputs, but fluency alone does not imply grounded understanding. Many of the judgments humans care about—whether an action is appropriate, whether an artifact is well-designed, whether an instruction has been faithfully interpreted—are high-dimensional, context-sensitive, and rarely reducible to simple metrics.
My work seeks to formalize these forms of evaluation. I construct structured representations that make explicit three dimensions that AI systems must reason about: the effects of their actions, the quality of the artifacts they assess or produce, and the intent and context embedded in human instructions and experiences. By making these dimensions computationally inspectable, I aim to move AI from surface-level generation toward precise, semantically grounded alignment.
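To make "computationally inspectable" concrete, here is a minimal, hypothetical sketch of what such a structured representation could look like in code. The field names, scales, and helper method are illustrative assumptions for exposition, not the actual schemas or taxonomies used in my projects.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical illustration only: names and scales are assumptions,
# not the schemas used in the projects described on this page.

@dataclass
class ActionEffect:
    """One predicted consequence of an agent action (e.g., a UI operation)."""
    description: str   # what changes, in plain language
    reversible: bool   # can the effect be undone by the agent?
    severity: int      # 0 (cosmetic) .. 3 (destructive or irreversible)

@dataclass
class ArtifactQuality:
    """A single quality judgment about an artifact the system assessed or produced."""
    criterion: str     # e.g., "visual hierarchy"
    score: float       # normalized 0.0 .. 1.0
    rationale: str     # the evidence behind the score, kept explicit

@dataclass
class IntentContext:
    """The human intent and context an instruction or experience is grounded in."""
    stated_goal: str                            # what the person asked for
    implicit_constraints: list[str]             # e.g., "do not delete existing data"
    situational_context: Optional[str] = None   # e.g., time, place, prior activity

@dataclass
class EvaluationRecord:
    """A structured, inspectable record tying the three dimensions together."""
    effects: list[ActionEffect] = field(default_factory=list)
    quality: list[ArtifactQuality] = field(default_factory=list)
    intent: Optional[IntentContext] = None

    def flag_high_risk(self, threshold: int = 2) -> list[ActionEffect]:
        """Surface effects severe enough that a human should review them."""
        return [e for e in self.effects if e.severity >= threshold]
```

Because each judgment carries its rationale, severity, and context explicitly, a record like this can be audited, aggregated, or used as a supervision signal, rather than remaining implicit in free-form model output.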
Selected Research on Human-AI Alignment
AI Safety & Agentic Alignment. Can autonomous agents anticipate the consequences of their own actions? We benchmark LLM agents' ability to reason about the effects of their actions in mobile environments and propose a taxonomy of UI impacts to help prevent unintended side effects.
Value Alignment in Design. Can AI critique design like a human expert? We introduce a large-scale dataset and taxonomy to train models that align with professional design principles, enabling automated, high-quality feedback.
Contextual Alignment for Personal Question Answering. How can AI answer complex personal questions that require integrating fragmented memories? We introduce a contextual augmentation pipeline that structures multimodal episodic data into semantic representations, enabling more faithful and context-aware question answering.
Structured Visual Alignment. Can AI systems reason about design principles the way trained humans do, and preserve those principles when content changes? We introduce a large-scale benchmark of human-annotated “design DNA” representations to evaluate relational and slide-level reasoning in vision-language models. Building on this, we formalize design-preserving generation as a constraint propagation problem, showing that explicit semantic representations enable smaller models to maintain visual coherence under complex content transformations.