Teaching and learning in this course is studio-based and research-led, combining short inputs with sustained making, testing, and reflection.
- Skills workshops (contact sessions): brief framing followed by guided exercises and tool demonstrations where needed. Students apply methods immediately in class through short, structured tasks rather than observing passively.
- Seminar-style discussion of set readings: students engage with key texts through online annotation and then develop shared understanding in roundtable discussion, linking ideas directly to the design work and the ethics of AI practice.
- Iterative design development with critique: students progress through cycles of proposing, testing, revising, and re-testing. Feedback is given through structured desk crits and peer exchange, focusing on evidence, decision-making, and clarity of intent.
- Independent, practice-based learning: substantial time is allocated for self-directed experimentation, troubleshooting, and skill development with new tools and workflows. Students are expected to test alternatives, document outcomes, and manage their own iterative process.
- Reflective learning through journaling and provenance: students maintain a weekly development journal that records intentions, methods, results (including failures), and next steps, supported by a transparent record of how AI and other tools were used.
Relational learning underpins this course through an intentionally social, studio-seminar format that values dialogue, critique, and shared problem-solving. Weekly sessions are designed around roundtable discussion, workshops, and desk crits that require students to articulate their reasoning, listen carefully, and respond constructively to alternative interpretations and approaches. Peer-to-peer exchange is treated as a core learning mechanism: students learn not only from their own experiments with AI-augmented workflows, but also from seeing how others test, fail, revise, and justify design decisions using evidence. Small-group activities - particularly in early technical upskilling - support collaborative troubleshooting and reduce barriers to experimentation, while the seminar environment establishes expectations of respectful engagement, intellectual generosity, and accountability to the collective learning culture. These relational dynamics help students develop the professional habits needed to operate in complex, technology-mediated design contexts where judgement is strengthened through critique, collaboration, and shared standards of evidence.
In alignment with principles of Assessment for Learning, this course uses assessment to support ongoing development rather than treating it only as an end-point judgement. Assessment tasks are authentic and aligned with the intended learning outcomes, emphasising iterative design thinking, evidence-led testing, and reflective judgement. A weekly development journal is a core assessment method, requiring students to document intentions, decisions, tests, outcomes (including failures), and next steps, supported by a transparent provenance trail of AI-assisted processes. The design case study portfolio provides an authentic context in which students apply and evaluate AI-augmented workflows through spatial development, performance- and privacy-informed strategies, and clear communication using contemporary media. Across the semester, workshops and structured critique function as continuous feedback loops that help students refine both their design proposals and the quality of their reasoning.
This course uses Technology-Enhanced Learning (TEL) to build students’ critical and practical capability with AI-augmented architectural workflows. A blended approach combines directed self-study (including selected tutorials and reference material) with face-to-face seminars structured around workshops, roundtable discussion, and iterative critique. Students repeatedly move between conceptual framing (through readings), tool-based experimentation (through comparative tests and prototyping), and reflective evaluation (through weekly journaling and transparent provenance of methods and outputs).
Digital and AI-assisted media are used to explore and communicate design intent and inhabitation, while computational methods support the development and testing of responsive non-structural architectural elements. The emphasis is on architectural judgement: understanding what these tools can and cannot credibly demonstrate, and using evidence - rather than visual persuasion alone - to guide design decisions.
Collectively, these teaching and learning methods foster an educational environment rooted in relational trust and collaborative inquiry, informed by ongoing formative assessment and enhanced through the strategic integration of digital technologies. By aligning these practices with the University of Auckland’s signature pedagogies, this course aims to cultivate graduates who are reflective, adaptive, ethically engaged, and professionally skilled, capable of navigating and contributing effectively to the evolving fields of AI and advanced technology within architectural practice.
Marking Rubrics: rubrics draw on the SOLO taxonomy (Structure of the Observed Learning Outcome) to describe levels of performance from surface to deep learning. Rather than using vague qualifiers (e.g. “good” or “excellent”), the rubrics provide specific, actionable criteria at each achievement level, following Orrell’s rubric design guidelines. This approach gives students transparent standards and concrete feedback on how to improve. Each incremental level introduces qualitatively new capabilities, not merely “more of the same”, making distinctions between performance levels meaningful.