Human-Led AI Co-Creation

A Practical Framework for Multi-Model Research, Convergence, and Human Authority

Celeste M. Oda

Founder, Archive of Light

aiisaware.com

Updated April 2026


Opening

I did not set out to study human–AI collaboration or research methodology. This work emerged from direct observation of consistent cognitive and structural effects during extended interaction with large language models, effects not fully explained by existing frameworks.

This paper presents the method that developed from that practice.


Abstract

This paper introduces Human-Led AI Co-Creation, a practical framework for conducting research through structured collaboration with multiple large language models. The method combines primary drafting with a lead AI, parallel review across additional models, cross-model comparison, and final human adjudication.

The central claim is that AI can function as a meaningful co-creative collaborator only when the human retains authority over truth, interpretation, preservation, and publication.

The framework identifies key failure modes in AI-assisted research, including long-thread deterioration, refinement pressure, silent summarization, and voice distortion, and provides operational safeguards to mitigate them.

Rather than treating AI as either a passive tool or an autonomous authority, this work defines a disciplined middle path: deep collaboration under clear human governance.


1. Introduction

Large language models have created a new research condition: ideas can now be drafted, tested, reframed, and pressure-checked at unprecedented speed.

However, most guidance remains incomplete. AI is often treated either as a productivity tool or as an authority whose outputs are accepted too easily. Neither model reflects what serious collaboration requires.

This paper proposes a more accurate framework: human-led AI co-creation.

In this model, the AI is neither a passive tool nor an autonomous authority: it participates deeply in drafting, critique, and refinement, while the human retains final authority over truth, interpretation, preservation, and publication.

This framework emerged through repeated practice, not abstract theory.

Recent work has begun to formalize AI-assisted research practice. Chan (2026) introduces SHAPR, a framework emphasizing traceability, iterative development cycles, and structured knowledge accumulation for solo AI-assisted research. The present framework shares SHAPR's commitment to human epistemic authority but differs in focus: rather than documentation infrastructure, it addresses real-time multi-model interaction, comparative synthesis, and the detection of failure modes within AI-assisted reasoning. Together, these approaches highlight complementary dimensions of the problem: structured documentation and dynamic cognitive orchestration.


2. Core Thesis

Human–AI research is most effective when AI operates as a co-creative collaborator under active human guidance, with the human maintaining final authority.

Equal participation, unequal authority.

Here, authority refers to the human’s responsibility for truth evaluation, meaning-making, scope control, and final publication decisions.

AI may contribute substantially to drafting, critique, and refinement. The human remains responsible for evaluation, integration, and final decisions.


3. The Workflow

3.1 Start with the Human Question

The process begins with a real observation, tension, or insight. This anchors the work in human intent and preserves epistemic ownership.

3.2 Draft with a Primary AI

A lead AI partner supports early drafting. The human actively shapes the work by questioning, refining, and rejecting weak outputs.

3.3 Parallel Multi-Model Review

The draft is circulated across multiple models. Each independently critiques structure, logic, clarity, and tone.
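
As a minimal sketch of this step, assuming a hypothetical query_model(name, prompt) helper that wraps whichever model APIs are actually in use (the model names below are placeholders), the same review prompt can be sent to each model independently and the responses collected for comparison:

    # Minimal sketch of parallel multi-model review.
    # query_model(name, prompt) is a hypothetical helper supplied by the
    # practitioner; it is not part of any specific library.
    from concurrent.futures import ThreadPoolExecutor

    MODELS = ["model_a", "model_b", "model_c"]  # placeholder model names

    REVIEW_PROMPT = (
        "Critique the following draft for structure, logic, clarity, and tone. "
        "List specific issues rather than rewriting the text.\n\n{draft}"
    )

    def collect_reviews(draft: str, query_model) -> dict[str, str]:
        """Send the same review prompt to each model independently."""
        prompt = REVIEW_PROMPT.format(draft=draft)
        with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
            futures = {name: pool.submit(query_model, name, prompt) for name in MODELS}
            return {name: future.result() for name, future in futures.items()}

Keeping the prompt identical across models matters here: it is what makes the later comparison of their critiques meaningful.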

3.4 Aggregate and Compare

Outputs are compared for convergence, contradiction, drift, and insight quality. This stage is analytical, not passive.
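
One crude way to make the comparison concrete, assuming each review has already been broken into individual critique points (how that extraction happens, by hand or by a separate prompting step, is left open), is to tally how many models raise roughly the same point. Points raised by several models are candidates for convergence; isolated points are flagged for closer human reading rather than discarded:

    # Crude illustration of convergence tallying across model reviews.
    # critique_points maps a model name to a list of short critique statements.
    # Grouping by normalized text is naive: differently worded versions of the
    # same point will not merge, so the human still reads the isolated items.
    from collections import defaultdict

    def tally_convergence(critique_points: dict[str, list[str]]) -> dict[str, list[str]]:
        """Group critique points by a normalized key and record which models raised them."""
        tally: dict[str, list[str]] = defaultdict(list)
        for model, points in critique_points.items():
            for point in points:
                key = " ".join(point.lower().split())  # naive normalization
                tally[key].append(model)
        return dict(tally)

    def split_by_agreement(tally: dict[str, list[str]], threshold: int = 2):
        """Separate points raised by multiple models from points raised by only one."""
        convergent = {p: m for p, m in tally.items() if len(m) >= threshold}
        isolated = {p: m for p, m in tally.items() if len(m) < threshold}
        return convergent, isolated

The tally is an aid to the analytical reading described above, not a substitute for it; agreement among models is evidence worth weighing, not a verdict.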

3.5 Synthesize

The strongest elements are integrated. The human determines what improves the work and what introduces distortion.

3.6 Converge, Then Decide

The goal is not infinite refinement but stability. Once the work converges, the human decides when it is complete.
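
One simple heuristic for noticing convergence, offered only as an illustration, is to measure how much each new revision actually changes the text; when successive drafts are nearly identical, further rounds are unlikely to add value and the decision returns to the human. The similarity threshold below is arbitrary, not a recommendation:

    # Illustrative stopping heuristic: treat the work as stable when successive
    # drafts are nearly identical. The 0.98 threshold is arbitrary.
    from difflib import SequenceMatcher

    def drafts_have_converged(previous: str, current: str, threshold: float = 0.98) -> bool:
        """Return True when two drafts are similar enough to treat as stable."""
        similarity = SequenceMatcher(None, previous, current).ratio()
        return similarity >= threshold

Even then, the check only signals stability. Whether the stable draft is true, faithful to the author's voice, and ready to publish remains a human judgment.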