Summary of GTT VOD "AI is Here: How Can We Stay Relevant"
Goju opens with a strong conviction, grounded in the emergence of large language models since the ChatGPT moment in late 2022, that language models will be central to our AI future. While uncertainty remains about whether massive cloud-based models specifically will dominate long-term, there is compelling early evidence of widespread adoption across many markets. Importantly, Goju distinguishes between language models as a technology (not hype) and the various commercial forms in which they are currently packaged (potentially hype). We are, in the speaker’s view, genuinely in an AI era — not a bubble.
The most significant and unique strength of language models is their capacity for creative ideation and rapid prototyping. This stems directly from their inherent stochasticity: the same property that might seem like a flaw enables a kind of generative brainstorming that pushes progress forward across many domains. This is what makes them genuinely novel compared to prior technologies.
Rather than following AI developments blindly, Goju advises building a personal collection of trusted AI voices — researchers, practitioners, and commentators — and applying a gradient of trust to each. No individual should be trusted 100%, including Goju himself. Prominent figures like Yann LeCun and Geoffrey Hinton are cited as examples of people who can be trusted to varying degrees, but the point is the framework: evaluate what people say, test it against your own domain, and update accordingly.
The same semi-trust framework applies to AI companies and tools. Rather than chasing whatever appears to be leading in the moment, Goju urges following the evidence over time. Many splashy announcements represent years of effort and heavy debt, often positioning a company for acquisition rather than sustainable leadership. Goju specifically recommends Google/DeepMind as a reliable, foundationally strong option — not because of any single product release, but because of their consistent body of work, deep capital, and talent. The key trait to look for is iterative refinement over time rather than flashy one-time leaps.
Goju issues a practical warning against investing deeply in “wrapper” companies — businesses that build products on top of foundational AI models without contributing meaningful technological innovation themselves. These companies lack a “moat,” making them vulnerable to acquisition or shutdown by larger players who simply want the wrapper, not the underlying product. Tools built on top of these companies can be reworked or fully dismantled, leaving users who invested heavily in learning them starting from scratch.
Goju recommends investing learning time in tools and platforms from companies with genuine technological foundations. This is not a moral judgment against wrapper companies — it is pragmatic advice about protecting the time you invest in skill-building. Learning a tool from a company with deep roots is more likely to compound over time.
Perhaps the most forward-looking section of the talk concerns the growing scarcity and value of authentic human-generated data. Based on conversations with leading AI researchers and companies, Goju argues that models learning from models is a dead end — it degrades fidelity and can lead to model collapse. What AI systems genuinely need is novel data created by humans that does not simply echo what already exists in training sets.
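The collapse dynamic can be sketched with a toy simulation (my own illustration, not an experiment from the talk): each "generation" fits a simple model — here just a Gaussian — only to samples drawn from the previous generation's model. Estimation error compounds, and the learned distribution's diversity steadily shrinks.

```python
# Toy sketch of "model collapse": each generation trains only on the
# previous generation's output, so finite-sample estimation error
# compounds and the fitted distribution's spread collapses toward zero.
# Illustrative only; real model collapse is far more complex.
import math
import random

def fit_gaussian(samples):
    """Fit mean and (biased, MLE) std to a list of samples."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, math.sqrt(var)

def simulate_collapse(generations=300, samples_per_gen=20, seed=0):
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the "real" human data
    stds = [sigma]
    for _ in range(generations):
        # Train the next model purely on the previous model's samples.
        data = [rng.gauss(mu, sigma) for _ in range(samples_per_gen)]
        mu, sigma = fit_gaussian(data)
        stds.append(sigma)
    return stds

stds = simulate_collapse()
print(f"std at generation 0:   {stds[0]:.4f}")
print(f"std at generation 300: {stds[-1]:.4f}")
```

The spread shrinks generation after generation, which is the intuition behind the claim that genuinely novel human data stays essential.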
This creates a significant opportunity. Any human who has mastered a craft — whether writing, coding, music, art, or other domains — can produce content that is genuinely scarce and novel to existing models. Critically, this includes intentionally imperfect or low-fidelity content created with a specific purpose in mind (e.g., logically correct but wildly inefficient code), because known-bad data is just as valuable as known-good data for model training and evaluation.
The key variables that determine data value are:
• Scarcity — content that does not already exist in abundance in training sets
• Controlled fidelity — content produced with a deliberate and consistent quality level (high or low)
• Novelty — something genuinely different from the common data already available
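The "controlled fidelity" idea can be made concrete with a hypothetical labeled pair of my own construction (not from the talk): two implementations that are both logically correct, where one is deliberately inefficient. Known-good and known-bad examples along a chosen axis — here efficiency — are equally usable as training or evaluation data.

```python
# Hypothetical "controlled fidelity" data pair: both functions are
# correct, but one is intentionally inefficient. The efficiency label,
# not correctness, is the controlled variable.

def fib_slow(n: int) -> int:
    """Correct but exponential-time: recomputes subproblems endlessly."""
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

def fib_fast(n: int) -> int:
    """Correct and linear-time: iterates with two accumulators."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The two agree on every result; only their efficiency label differs.
assert all(fib_slow(n) == fib_fast(n) for n in range(15))
print(fib_fast(20))  # 6765
```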
The implication is that skilled humans hold a superpower in the AI economy — one that will likely become more valuable, not less, as AI systems mature and the pool of novel human data shrinks.
Goju closes with grounded, actionable advice for anyone wanting to navigate and contribute to the AI future:
• Follow the breadcrumbs. What AI companies say publicly is often not what they are actually building. By tracking publications, patents, and hiring patterns, it is often possible to anticipate where companies are heading well before announcements.
• Identify signal in the noise. There is an enormous volume of AI commentary right now. Identifying a small set of trusted, high-signal voices and letting them serve as guides is far more effective than trying to track everything.
• Master your craft. The ability to create novel, scarce content depends on genuine domain expertise. If you have not yet developed deep expertise in a craft, invest in doing so — read, practice, follow practitioners, and put in the time on weekends if necessary.
• Take human uniqueness seriously. This is not a feel-good sentiment — it is a practical observation. Each person has the capacity to generate content that no model has yet encountered, and that capacity has real economic value in an AI-driven world.
• Be a pragmatist. AI is here whether we prefer it or not. The most productive orientation is to understand it clearly, engage with it strategically, and position yourself to benefit from it rather than be displaced by it.
The overarching message is one of calibrated optimism: language models represent a genuine technological shift, not a bubble. The people who will thrive are those who invest thoughtfully in the right knowledge sources, protect their time from unstable tools, and lean into their uniquely human ability to create novel, scarce content at a deliberately controlled level of fidelity.