Synthetic Humans

This project explores how AI can approximate selected aspects of human experience and behavior to support architectural design evaluation.


About

Synthetic Humans explores a central question: can computational models approximate aspects of human experience in ways that help designers evaluate buildings and cities?

By synthetic humans, we mean AI systems that simulate selected aspects of behavior, preference, attention, and emotional response. The long-term vision is to let designers test not only how a space performs technically, but how it might be perceived and experienced by different people.

The project focuses on early-stage design evaluation, where architects often rely on intuition, precedent, and limited feedback. AI models can help compare alternatives, reveal likely experiential patterns, and support more deliberate conversations about atmosphere, comfort, legibility, and emotion.

Current work studies how language and vision-language models respond to different spatial settings, representation styles, and architectural images. We are especially interested in identifying where these models can support design judgment and where human validation remains essential.
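As a concrete illustration of the kind of study described above, the sketch below shows one way a vision-language model could be asked to rate an architectural image along the experiential dimensions the project names (atmosphere, comfort, legibility, emotion), and how a reply might be validated. The function names, the 1–7 scale, and the JSON reply format are assumptions for illustration, not the project's actual protocol; no model API is called here.

```python
import json

# Experiential dimensions mentioned in the project description (assumed scale: 1-7).
DIMENSIONS = ["atmosphere", "comfort", "legibility", "emotion"]

def build_rating_prompt(dimensions=DIMENSIONS):
    """Compose a VLM prompt asking for 1-7 ratings of a pictured space."""
    scale = ", ".join(dimensions)
    return (
        "You are simulating the experiential response of a building occupant. "
        f"Rate the pictured space on these dimensions: {scale}. "
        "Reply with a JSON object mapping each dimension to an integer "
        "from 1 (low) to 7 (high)."
    )

def parse_ratings(reply_text, dimensions=DIMENSIONS):
    """Validate a model reply; return {dimension: int} or raise ValueError."""
    ratings = json.loads(reply_text)
    for d in dimensions:
        if d not in ratings:
            raise ValueError(f"missing dimension: {d}")
        if not (isinstance(ratings[d], int) and 1 <= ratings[d] <= 7):
            raise ValueError(f"rating out of range for {d}: {ratings[d]}")
    return ratings

# Hypothetical model reply, used in place of a real API response:
reply = '{"atmosphere": 5, "comfort": 4, "legibility": 6, "emotion": 4}'
print(parse_ratings(reply))
```

Structured replies like this make it straightforward to compare design alternatives side by side, while keeping a clear point where human validation can check the model's judgments.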

Papers

2025 Publication

Quantifying Architectural Experience using VLMs: Does AI Dream of Rendered Spaces?

Gal Guz, Nikolas Martelaro, Gerhard Schubert, Jonathan Dortheimer
  • Venue: CAAD Futures 2025