Graph Out Loud explores how a large language model (LLM) can become a dialogic partner inside the Grasshopper environment, one that designers engage with naturally through voice, text, and sketches. The project reframes generative tools from transactional prompt boxes into collaborative actors that co-reason with the designer, supporting Schön’s reflective practice (reflection-in-action, reflection-on-action, and metacognition) across the entire workflow rather than only at the end.
At the center of the system is an LLM “Actor” component that lives on the Grasshopper canvas. The actor listens to multimodal input—spoken intent, typed guidance, and quick ink/sketch overlays—and translates that intent into curve creation, parameter edits, component placement, wiring, and solver execution. Every action the actor takes is reversible, annotated, and linked to a provenance log so the designer can see what changed, why it changed, and how to change it again.
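As a rough illustration (not the project’s actual implementation), a reversible, provenance-logged action could be modeled along these lines; ActorAction, ProvenanceLog, and the relaxation_strength value are hypothetical names chosen for the sketch:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, List

@dataclass
class ActorAction:
    """One reversible edit the actor makes on the canvas (illustrative shape)."""
    description: str              # "what changed"
    rationale: str                # "why it changed", tied to the stated goal
    apply: Callable[[], None]     # performs the edit (e.g. set a parameter)
    undo: Callable[[], None]      # restores the previous state
    timestamp: str = field(default_factory=lambda: datetime.now().isoformat())

@dataclass
class ProvenanceLog:
    """Append-only record that keeps every actor move inspectable and undoable."""
    entries: List[ActorAction] = field(default_factory=list)

    def commit(self, action: ActorAction) -> None:
        action.apply()
        self.entries.append(action)

    def rollback(self) -> None:
        if self.entries:
            self.entries.pop().undo()

# Example: the actor nudges a (hypothetical) parameter and logs how to reverse it.
log = ProvenanceLog()
state = {"relaxation_strength": 0.30}
log.commit(ActorAction(
    description="relaxation_strength: 0.30 -> 0.45",
    rationale="designer asked for a smoother relaxed grid",
    apply=lambda: state.update(relaxation_strength=0.45),
    undo=lambda: state.update(relaxation_strength=0.30),
))
log.rollback()  # the designer can always step back
```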
To preserve agency while still harnessing breadth, the actor supports three modes that the designer can switch between at any time:
Co-led: the actor proposes small, contextual edits in a conversational loop while the designer continues to drive.
AI-led (burst): the actor generates diverse option sets on request, useful for canvassing the design space early on.
Manual: the actor observes only, summarizing moves and offering explanations without taking initiative.
These modes keep initiative calibrated, preventing the slide from “partner” to “autopilot.”
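A minimal sketch of how that initiative gating might look, assuming a simple mode flag on the actor; Mode, actor_may_edit, and the burst flag are illustrative, not actual API:

```python
from enum import Enum

class Mode(Enum):
    CO_LED = "co-led"    # actor proposes small contextual edits; designer drives
    AI_LED = "ai-led"    # actor generates option bursts on request
    MANUAL = "manual"    # actor observes, summarizes, and explains only

def actor_may_edit(mode: Mode, designer_requested_burst: bool = False) -> bool:
    """Gate the actor's initiative so 'partner' never drifts into 'autopilot'."""
    if mode is Mode.MANUAL:
        return False                      # observe and explain, never act
    if mode is Mode.AI_LED:
        return designer_requested_burst   # bursts only when explicitly asked for
    return True                           # co-led: small edits inside the loop

assert not actor_may_edit(Mode.MANUAL)
assert actor_may_edit(Mode.AI_LED, designer_requested_burst=True)
```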
Designers up-/down-vote variants, compare pairs, or adjust constraints; those signals are logged with context (graph state, parameters, and verbal rationale) and used to refine the actor’s policy. Over time the actor aligns to a designer’s preferred aesthetics, tolerances, and modeling habits, shifting from generic assistance to personalized guidance without removing the human from the decision loop.
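One way such preference signals could be captured and aggregated, sketched with hypothetical names (PreferenceEvent, variant_scores) and a toy net-win score standing in for the actual policy refinement:

```python
from dataclasses import dataclass
from collections import defaultdict
from typing import Dict, List

@dataclass
class PreferenceEvent:
    """One logged signal: which variant the designer preferred, and in what context."""
    preferred: str      # variant the designer up-voted or picked in a pair
    rejected: str       # the variant it was compared against (or down-voted)
    graph_state: str    # snapshot of the Grasshopper graph at that moment
    rationale: str      # verbal rationale captured from voice or text

def variant_scores(events: List[PreferenceEvent]) -> Dict[str, float]:
    """Toy aggregation: net wins per variant, a stand-in for the learned update."""
    scores: Dict[str, float] = defaultdict(float)
    for e in events:
        scores[e.preferred] += 1.0
        scores[e.rejected] -= 1.0
    return dict(scores)

events = [
    PreferenceEvent("variant_B", "variant_A", "graph@rev12", "B keeps edge lengths tighter"),
    PreferenceEvent("variant_B", "variant_C", "graph@rev13", "C over-smooths the boundary"),
]
print(variant_scores(events))  # {'variant_B': 2.0, 'variant_A': -1.0, 'variant_C': -1.0}
```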
Graph Out Loud treats sketches, voice, and text as first-class citizens. A quick pen stroke can be interpreted as a target curve, attractor region, or “don’t touch” mask. A voice note like “make this a quad grid, relax it, and keep edge lengths under 0.5m” becomes a sequenced set of Grasshopper edits. Text clarifies intent or asks “why” questions; the actor responds with legible explanations tied to specific components and parameter diffs.
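For illustration, the voice note above might be planned as an ordered sequence of edits before anything touches the canvas. The EditOp shape, action strings, and parameters here are assumptions, though QuadRemesh and a Kangaroo2 solver are the kinds of Grasshopper components such a plan would target:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EditOp:
    """One step in the edit sequence the actor derives from a multimodal request."""
    component: str   # component the step operates on (illustrative names below)
    action: str      # what to do with it
    params: dict     # arguments for the step

def plan_from_voice_note() -> List[EditOp]:
    """Hypothetical plan for: 'make this a quad grid, relax it, keep edge lengths under 0.5 m'."""
    return [
        EditOp("QuadRemesh", "place_and_wire", {"input": "selected_surface"}),
        EditOp("Kangaroo2Solver", "place_and_wire", {"goal": "edge_length", "max": 0.5}),
        EditOp("Kangaroo2Solver", "run", {"iterations": 100}),
    ]

for step in plan_from_voice_note():
    print(step.component, step.action, step.params)
```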
To counter opacity, every actor move includes a provenance card: what was changed, which alternatives were considered, and how the choice relates to the stated goal. Multiple candidates are presented side-by-side with handles (clear, minimal controls) for immediate refinement. After bursts of exploration, a reflection panel summarizes what happened (“what we tried, what improved, what regressed”) so designers can reflect-on-action and decide the next step. Periodic micro-prompts nudge metacognitive checks—“Do we value smoothness over area here?”—making the dialogue explicitly reflective rather than purely productive.
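A compact sketch of what a provenance card and the post-burst reflection summary could carry; ProvenanceCard and reflection_summary are hypothetical names, not the shipped data model:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProvenanceCard:
    """Attached to each actor move so the designer can audit the choice."""
    changed: str             # what was changed
    alternatives: List[str]  # which alternatives were considered
    goal_link: str           # how the choice relates to the stated goal

def reflection_summary(cards: List[ProvenanceCard],
                       improved: List[str], regressed: List[str]) -> str:
    """Condenses a burst into 'what we tried, what improved, what regressed'."""
    tried = "; ".join(c.changed for c in cards)
    return (f"Tried: {tried}. "
            f"Improved: {', '.join(improved) or 'nothing yet'}. "
            f"Regressed: {', '.join(regressed) or 'nothing'}.")

cards = [ProvenanceCard("mesh relaxation strength 0.3 -> 0.45",
                        ["keep 0.3", "raise to 0.6"],
                        "designer asked for smoother panels")]
print(reflection_summary(cards, improved=["smoothness"], regressed=["panel area variance"]))
```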
A co-led conversational loop sustains designer agency while still delivering the diversity designers appreciate from AI. Reinforcement learning from human feedback (RLHF) turns generic generation into situated practice, aligning the tool to the individual without hard-coding rules. Most importantly, transparency and provenance are not “nice to haves”: they are the bridge between speed and trust, enabling designers to keep judgment, rather than throughput, as the central value.
Next steps include richer sketch semantics (constraints and intent labels), stronger safety/undo guarantees, team-level preference sharing, and tighter evaluation of agency and reflection metrics (initiative balance, correction rate, time-to-iteration, and designer confidence). Longer term, we aim to extend the actor to multi-robot and fabrication contexts, keeping the same dialogic, reflective backbone from concept to construction.
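As a back-of-the-envelope illustration of the agency metrics, assuming each provenance-log entry records who initiated an edit and whether the designer later corrected it (these field names and formulas are hypothetical, not the project’s final evaluation protocol):

```python
from typing import Dict, List

def agency_metrics(log: List[Dict]) -> Dict[str, float]:
    """Toy metrics over a provenance log: share of designer-initiated edits and
    how often an actor-initiated move was later corrected (undone or overridden)."""
    actor_moves = [e for e in log if e["initiator"] == "actor"]
    designer_moves = [e for e in log if e["initiator"] == "designer"]
    corrections = [e for e in actor_moves if e.get("corrected", False)]
    total = max(len(log), 1)
    return {
        "initiative_balance": len(designer_moves) / total,
        "correction_rate": len(corrections) / max(len(actor_moves), 1),
    }

log = [
    {"initiator": "designer"},
    {"initiator": "actor", "corrected": False},
    {"initiator": "actor", "corrected": True},
]
print(agency_metrics(log))  # {'initiative_balance': 0.33..., 'correction_rate': 0.5}
```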