Summary
This paper documents a methodology that integrates generative AI into creative design, highlighting the synergy between artificial intelligence and established analogue and digital tools. Conducted as part of a workshop within an academic postgraduate program, this study examines how AI-generated imagery can inform materiality and digital model-making processes.
A central aspect of this research is the critical exploration of AI-generated imagery, emphasizing the spatial potential that emerges from initial diagrams rather than preconceived architectural typologies. A recurring observation was that Midjourney, as an image-generation tool, often produces results that conform to mainstream architectural aesthetics, reinforcing established stylistic conventions. Consequently, one of the primary challenges was to slow down this process, allowing for a more deliberate exploration of spatial configurations rather than the immediate generation of fully resolved architectural representations.
The text-to-image tool generates an excessive number of variations with ease, necessitating the establishment of selection criteria and their continuous reevaluation. Furthermore, the study highlights Midjourney’s preference for free-form architecture, which frequently influenced the outputs. To counterbalance this tendency, students were encouraged to critically decode the generated images, recognizing them not as finalized architectural designs but as diagrammatic explorations of spatial relationships. The encoding and analysis of these images were integral to identifying patterns, variations, and emergent morphological properties that could inform subsequent design iterations. The techniques utilized in spatial exploration include striping, folding, extracting, perforating, liquidating, and juxtaposition.
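The curation problem described above, many generated variants filtered through explicit and continuously revised criteria, can be sketched as a simple weighted scoring pass. The criteria names, weights, and metric values below are hypothetical illustrations, not those used in the workshop; re-running the pass with new weights models the reevaluation of criteria between iterations.

```python
from dataclasses import dataclass

@dataclass
class Variant:
    """One generated image variant, described by hypothetical decoded metrics (0..1)."""
    name: str
    spatial_ambiguity: float   # how diagrammatic vs. fully resolved the image reads
    stylistic_novelty: float   # distance from mainstream architectural aesthetics
    legibility: float          # how clearly spatial relationships can be decoded

def score(v: Variant, weights: dict) -> float:
    """Weighted sum over the selection criteria."""
    return (weights["ambiguity"] * v.spatial_ambiguity
            + weights["novelty"] * v.stylistic_novelty
            + weights["legibility"] * v.legibility)

def curate(variants: list, weights: dict, keep: int) -> list:
    """Keep only the top-scoring variants for the next design iteration."""
    return sorted(variants, key=lambda v: score(v, weights), reverse=True)[:keep]

variants = [
    Variant("v1", 0.8, 0.6, 0.7),
    Variant("v2", 0.2, 0.9, 0.4),
    Variant("v3", 0.7, 0.7, 0.9),
]
weights = {"ambiguity": 0.5, "novelty": 0.2, "legibility": 0.3}
selected = curate(variants, weights, keep=2)
print([v.name for v in selected])  # → ['v3', 'v1']
```

Adjusting the weight dictionary and re-running the pass is the computational analogue of the "continuous reevaluation" the study describes: the pool of variants stays fixed while the lens applied to it changes.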
This analysis will be presented through six case studies, systematically tracing the process from diagram to image to 3D model.
Objective
The study aims to assess the role of generative AI in creative design workflows, particularly its ability to mediate between two-dimensional digital imagery and three-dimensional design outputs. By positioning AI as a co-creative tool, this research seeks to foster new methodologies for conceptual development and representation while simultaneously questioning the implications of authorship and creative agency in AI-assisted design.
Methodology
A structured three-phase workflow was developed to explore the integration of AI-generated images into architectural modeling:
- Conceptualization through Chronophotography and Drawing: Students initiated the design process by capturing movement-based transformations using chronophotography and hand-drawn diagrams. These initial studies served as foundational elements, providing a conceptual framework for subsequent AI-driven explorations.
- Generative AI for Image Synthesis: Midjourney was employed to process structured text prompts in combination with the visual inputs from Phase 1. While AI-generated images offered diverse spatial compositions, students had to actively intervene by encoding, analyzing, and selectively curating the results to avoid generic or overly deterministic architectural expressions.
- Hybridization and 3D Model Interpretation: A central aspect of integrating generative AI tools into architectural concept generation is exploring ways to create 3D models from image interpretation. Selected AI-generated images were reinterpreted through iterative cycles, incorporating reference images from buildings in an urban context. These hybrids were subsequently translated into both physical and digital models. The process emphasized techniques such as morphing, folding, assembling, articulating, sculpting, extracting, separating, and penetrating, reinforcing the role of AI as a tool for spatial exploration rather than mere form-generation. The technical workflow involved leveraging AI to extract spatial qualities from two-dimensional imagery, facilitating translation into three-dimensional forms through digital sculpting, parametric modeling, and fabrication techniques.
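One minimal way to illustrate the translation from two-dimensional imagery to a three-dimensional form is a heightfield reading: a grid of luminance values is interpreted as depth and stitched into a relief mesh in the Wavefront OBJ format. The toy 4x4 grid and the luminance-as-depth interpretation are assumptions for illustration only, not the workshop's actual pipeline.

```python
def heightfield_to_obj(grid, scale=1.0):
    """Translate a 2D grid of luminance values (0..255) into a Wavefront OBJ
    relief mesh: each cell becomes a vertex whose z is its normalized value,
    and neighboring cells are stitched into quad faces."""
    rows, cols = len(grid), len(grid[0])
    lines = []
    for y in range(rows):
        for x in range(cols):
            z = grid[y][x] / 255.0 * scale
            lines.append(f"v {x} {y} {z:.4f}")
    # OBJ vertex indices are 1-based; stitch quads between adjacent rows
    for y in range(rows - 1):
        for x in range(cols - 1):
            a = y * cols + x + 1
            b = a + 1
            c = a + cols + 1
            d = a + cols
            lines.append(f"f {a} {b} {c} {d}")
    return "\n".join(lines)

# Toy 4x4 "image": brighter values read as a higher relief surface
toy = [
    [0, 64, 128, 255],
    [0, 64, 128, 255],
    [0, 32, 64, 128],
    [0, 0, 32, 64],
]
obj = heightfield_to_obj(toy, scale=2.0)
print(obj.splitlines()[0])  # → v 0 0 0.0000
```

The resulting text can be saved as an .obj file and opened in standard 3D modeling software, where it becomes raw material for the sculpting and folding operations described above.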
Conclusion
This study demonstrates that generative AI, when integrated into architectural workflows, can act as a catalytic force for spatial experimentation rather than merely a means of aesthetic representation. The findings underscore the importance of decoding and encoding AI-generated images to extract meaningful architectural properties rather than accepting the results as finalized forms.
By treating AI-generated imagery as an intermediary step in design workflows, the research positions architects as curators and co-authors of machine-generated possibilities. This iterative process challenges conventional notions of authorship, urging designers to develop adaptive methodologies that seamlessly integrate AI with existing digital and physical modeling techniques.
Moreover, the study highlights the potential of AI in architectural education, expanding the creative toolkit available to designers. The methodological insights gained contribute to ongoing discussions on the evolving role of AI in architecture, reinforcing its potential to mediate between conceptual exploration and material realization.