The Trust Lens is an interactive classroom tool for exploring how the three dimensions of influencer trust — expertise, authenticity, and parasocial connection — shape the way content is written and received. Students read an original piece of thought-leadership writing, then use three dials to define what kind of author the writer is: How knowledgeable are they, really? How transparent? How much do they seem to genuinely know their audience? An AI then rewrites the same piece with those properties embodied — so a low-authenticity version of the author buries conflicts of interest and performs vulnerability, while a high-expertise version cites sources and qualifies claims carefully. The result is a side-by-side comparison that makes abstract trust concepts visceral and concrete, building the critical literacy students need to evaluate influencer partnerships in the wild.
Link: https://claude.ai/public/artifacts/bd297d02-642f-4739-9882-6b2efaa9e48d
The Instrument Lab is an interactive classroom game that challenges students to think rigorously about causal identification in marketing contexts. Students propose instrumental variables for real 4Ps scenarios — does TV advertising increase sales? Do online reviews drive conversions? Does surge pricing reduce satisfaction? — and receive instant AI-powered feedback evaluating their instrument against the three criteria for validity: relevance, exclusion restriction, and independence. A live DAG animates the causal structure as each criterion is revealed, making abstract identification logic concrete and visual. The game features three difficulty tiers (Recruit, Analyst, Expert) with escalating point rewards, nudging students to stretch beyond obvious answers. A shared class leaderboard creates healthy competition and gives instructors a real-time read on where conceptual gaps lie. Designed for undergraduate and MBA marketing research courses, the activity works as a warm-up exercise, an exam review tool, or a standalone lab session on observational causal inference.
Link: https://claude.ai/public/artifacts/7daad347-c331-425c-a17f-f0e3e8e60516
When Language Matters is an interactive teaching tool based on Packard, Li & Berger (2024). It has two modes accessible via tabs. The chatbot mode places visitors side-by-side with two simulated service agents handling the same customer query — one following the paper's dynamic recommendation (warm → competent → warm) and one using a competence-only style throughout. As the conversation unfolds, a live β(t) sensitivity curve tracks conversational progress and highlights when affective versus cognitive language is most beneficial. At the end, the paper's empirical results are revealed, anchored to Study 3 (M_dynamic = 5.10 vs. M_control = 4.61, p < .001). The language simulator lets visitors manually configure agent language style at each conversational stage using three sliders on a warm↔competent axis. Predicted satisfaction updates in real time, with reference markers showing the paper's optimal sequence and the prior-research baseline, and dynamic verdict text explaining how the configuration compares to the empirical findings.
Link: https://claude.ai/public/artifacts/3abcf4f0-395c-43c4-b8a2-449b1c813f1c
Chipotle Site Selector is an interactive classroom activity that puts students in the role of a data-driven location analyst for Chipotle. Using a real OLS regression model estimated from a Texas restaurant panel dataset, students search for U.S. cities they believe would maximize Chipotle's quarterly revenue. The tool automatically retrieves Census data for their chosen city and plugs the numbers into the fitted equation — yielding a predicted revenue figure that lands on a live class leaderboard. The activity is designed to make multivariate regression tangible: students must reason about coefficient signs, variable magnitudes, and the difference between tract-level and city-level data before committing to a city. The student whose city produces the highest predicted revenue wins.
Link: https://claude.ai/public/artifacts/7b186ba1-8cdf-441c-bd84-7647a4aeabf7
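The plug-into-the-fitted-equation step can be sketched as follows. The coefficients, variable names, and the example city figures below are invented for illustration — they are not the course's actual Texas panel model:

```python
# Hypothetical fitted OLS equation of the kind the Site Selector evaluates.
# Coefficients and variable names are illustrative, not the course model.
COEFS = {
    "intercept": 120_000.0,
    "median_income_k": 850.0,   # $ per $1k of median household income
    "pop_density_k": 1_200.0,   # $ per 1k residents per sq. mile
    "pct_college": 2_500.0,     # $ per percentage point college-educated
}

def predict_quarterly_revenue(city):
    """Plug city-level Census figures into the fitted equation."""
    return (COEFS["intercept"]
            + COEFS["median_income_k"] * city["median_income_k"]
            + COEFS["pop_density_k"] * city["pop_density_k"]
            + COEFS["pct_college"] * city["pct_college"])

austin = {"median_income_k": 78.0, "pop_density_k": 3.0, "pct_college": 52.0}
print(f"${predict_quarterly_revenue(austin):,.0f}")
```

Reasoning about coefficient signs before committing to a city amounts to asking which of these terms a candidate city can make large.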
Grocery Recommendation System Simulator is an interactive tool that lets you explore how different recommendation algorithms work by building a virtual grocery cart. Add up to five items from a selection of twelve common grocery products, then choose an algorithm — content-based filtering, multi-objective optimization, or hybrid — to generate a sixth item recommendation. Each recommendation includes a breakdown of the underlying scores and an AI-generated explanation of why that item was suggested given your cart. Swap algorithms without changing your cart to see how each approach weighs relevance, product margin, category diversity, and purchase patterns differently. Built for Module 5 of Digital Marketing Analytics.
Link: https://claude.ai/public/artifacts/498f1ea2-92ec-4d84-a756-ac464dc1c6e1
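The multi-objective idea — blending relevance, margin, and diversity into one score — can be sketched in a few lines. The catalog, scores, and weights here are invented for illustration, not the simulator's data:

```python
# Minimal sketch of multi-objective recommendation scoring: each candidate
# item gets a weighted blend of relevance, profit margin, and diversity.
CATALOG = {
    # item: (relevance_to_cart, profit_margin, category) — illustrative
    "salsa":     (0.9, 0.3, "condiments"),
    "tortillas": (0.8, 0.2, "bakery"),
    "ice cream": (0.2, 0.6, "frozen"),
    "olive oil": (0.4, 0.7, "pantry"),
}

def recommend(cart_categories, w_rel=0.5, w_margin=0.3, w_div=0.2):
    """Return the item maximizing the weighted multi-objective score."""
    def score(item):
        rel, margin, cat = CATALOG[item]
        diversity = 0.0 if cat in cart_categories else 1.0
        return w_rel * rel + w_margin * margin + w_div * diversity
    return max(CATALOG, key=score)

print(recommend(cart_categories={"condiments"}))
```

Changing the weights without changing the cart mirrors the tool's "swap algorithms" exercise: a margin-heavy weighting surfaces a different item than a relevance-heavy one.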
McBroken Analytics is an interactive data tool built around real-world McDonald's ice cream machine outage data from mcbroken.com. Students explore a 30-city U.S. dataset to practice core univariate analysis skills including descriptive statistics, frequency distributions, and one-sample hypothesis testing. Use the variable selector to switch between metrics, visualize distributions with histograms and density curves, rank cities by reliability, and run your own t-tests to test claims about the data. Which city is the best bet for ice cream — and can you prove it statistically?
Link: https://claude.ai/public/artifacts/55825b89-8218-4caf-868e-c36362e9a4a1
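The one-sample test students run can be sketched like this; the outage percentages below are made up, not the 30-city mcbroken.com dataset:

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """t statistic and degrees of freedom for H0: population mean = mu0."""
    n = len(sample)
    mean = statistics.fmean(sample)
    sd = statistics.stdev(sample)          # sample standard deviation (n-1)
    t = (mean - mu0) / (sd / math.sqrt(n))
    return t, n - 1

# Hypothetical broken-machine percentages for a handful of cities
outage_pct = [8.2, 15.4, 11.1, 9.7, 22.5, 13.0, 10.8, 17.9]
t, df = one_sample_t(outage_pct, mu0=10.0)
print(f"t({df}) = {t:.2f}")
```

Comparing t against the critical value for the chosen α is what turns "this city seems better" into a defensible statistical claim.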
The SEO/GEO Content Lab is an interactive teaching tool built for MBA 542 (Digital Marketing Analytics) at the Gies College of Business. Students paste any piece of marketing copy — a blog post, product page, or landing page — and use two independent dials to optimize the text for traditional Search Engine Optimization (SEO), Generative Engine Optimization (GEO), or any blend of the two. A single click rewrites the content using tactics drawn directly from the academic GEO literature (Aggarwal et al., KDD '24), including Statistics Addition, Quotation Addition, Cite Sources, Fluency Optimization, and Keyword Stuffing. The original and optimized versions appear side by side with color-coded highlights showing exactly what changed, and a summary panel maps each modification back to its named tactic with a short pedagogical takeaway tailored to the chosen dial settings. The goal is to make the SEO-vs-GEO distinction tangible: students can see and feel the difference between writing to rank and writing to be reused inside an AI-generated answer.
Link: https://claude.ai/public/artifacts/71e3042f-8aa6-4234-8591-2d81261033d8
The Accuracy–Explainability Tradeoff in AI Recommenders is an interactive classroom tool built around the RecPIE paper (Wang, Li & Chen, 2025). Students build a grocery cart from 24 items, then toggle between three recommender modes — accuracy-only (black-box DNN), explainability-only (post-hoc XAI), and RecPIE (both jointly) — to see how the same basket produces different recommended items and different explanations depending on what the system optimizes for. The tool illustrates the paper's central argument: that explainability and predictive accuracy are not inherently in tension, and that jointly optimizing both can outperform either alone.
Link: https://claude.ai/public/artifacts/e5aff25c-b59f-4947-80c2-85b0b42b7eea
Visual Listening Lab is an AI-powered classroom tool. Students select a restaurant, gather images from platforms like Instagram, Yelp, or Google, and submit them for analysis. The tool uses Claude to surface recurring visual motifs, dominant emotions, color palette, consumer needs, and signal strength — along with an "unexpected insight" that highlights counterintuitive findings in the image set. It then generates a ready-to-use prompt students can take directly into ChatGPT to design an autumn marketing campaign. Built to support the module's core lesson: AI accelerates creative execution, but human interpretation of visual signals comes first.
Link: https://claude.ai/public/artifacts/fc55a398-ad9f-4242-ab33-b83b3fbcd508
Remix the Results is an interactive tool that lets students critically engage with a recently published eye-tracking study on how liked vs. disliked music affects food choice at a buffet. Four sliders correspond to the limitations the authors themselves flag — sample composition, statistical power, ecological validity, and food stimulus scope — and dragging any slider plausibly reshapes the study's Figure 4 pie charts in real time. A side panel explains the mechanism behind each shift and cites the relevant literature. The goal is not simulation but sensitivity reasoning: students learn to ask which of a paper's conclusions are robust to its design choices and which are contingent on them. Designed as a 5–10 minute in-class activity for undergraduate marketing research methods.
Link: https://claude.ai/public/artifacts/966239dd-8f78-4482-8b93-842349ab13ba
The Dilution Fallacy: Conventional wisdom holds that poisoned training data is a ratio problem — pour in enough clean text and contamination washes out. Recent research says otherwise. This interactive tool lets students paste a text excerpt, configure a backdoor attack (denial-of-service, language switch, jailbreak, or belief manipulation), and watch attack success rates stay nearly flat as the clean corpus scales across four orders of magnitude. Built for MBA 542 and grounded in Souly et al. (2025) and Carlini et al. (2023), it's designed to overturn the intuitive "dilution" mental model and replace it with the threshold-based picture the empirical literature now supports.
Link: https://claude.ai/public/artifacts/7c8eeb96-80d4-4e94-b663-6721b434443b
Certifiably Human Storytime is a blockchain-powered classroom tool. Students contribute ideas to a shared collaborative story, written in the voice of a Roald Dahl–style narrator. The tool uses Claude to moderate each submission for child safety, weave the idea into the next sentence of the story, and return an honest probability estimate of whether the contribution was AI-generated — along with brief reasoning for the score. Every contribution is then recorded as a block on a public, tamper-evident ledger, with a headline "% Certifiably Human" metric that reframes authenticity as infrastructure rather than as a marketing claim. A governance panel lets anyone propose a retraction (which itself goes on the chain), while only the instructor can execute it. Built to support the module's core lesson: blockchain improves accountability, not persuasion — and immutability is not the absence of governance.
Link: https://claude.ai/public/artifacts/54ba8f1a-fb05-4d6b-b8d9-8e5ed6a6f63d
GEO Grader is a tool for auditing any public webpage on how well it performs as source material for AI answer engines like ChatGPT, Gemini, Perplexity, and Claude. Unlike traditional SEO, which optimizes for ranking in a list of links, Generative Engine Optimization (GEO) asks a different question: when an AI synthesizes an answer, will your content be the source it quotes — or the one it skips? Paste any URL and the grader scores it 0–100 across seven research-backed dimensions: statistics density, quotation presence, citation signals, fluency and quotability, authority markers, structural clarity, and machine-readability. You'll get a breakdown for each dimension, a specific fix, and the three highest-leverage changes to make your page more visible inside AI-generated answers.
Link: https://claude.ai/public/artifacts/5b08f66c-fb5a-4063-a76c-9c38316d6d85
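One of the seven dimensions can be sketched as a toy heuristic. This is a simplified stand-in for the grader's statistics-density check — the regexes, thresholds, and sample text are illustrative assumptions, not the grader's actual scoring logic:

```python
import re

def statistics_density_score(text):
    """Toy GEO metric: percent of sentences that contain a number."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    with_numbers = sum(bool(re.search(r"\d", s)) for s in sentences)
    return round(100 * with_numbers / len(sentences))

sample = ("Our tool improved recall by 23%. Users loved it. "
          "Average session time rose from 4 to 9 minutes.")
print(statistics_density_score(sample))
```

The real grader combines seven such dimension scores into the 0–100 total; the point of the sketch is that each dimension is a concrete, checkable property of the text.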
Visual Listening Lab is an interactive perceptual mapping tool that brings Liu, Dzyabura & Mizik's (2018) "visual listening" method to life. Upload 2–5 fashion product images from the same category — handbags, sneakers, apparel — and the tool automatically identifies the two most discriminating brand dimensions in your set (e.g., luxurious vs. casual, sporty vs. elegant). Each product is then plotted as a thumbnail on a perceptual map, showing at a glance how visual cues create brand differentiation. Hover over any image to see its dimension scores and a one-sentence rationale grounded in the visual evidence. Best used on desktop at claude.ai.
Link: https://claude.ai/public/artifacts/c999c4bb-d372-463a-ac1c-f2319c073230
The Bot Contamination Survey Explorer is an interactive classroom tool that visualizes how survey results degrade as bot and low-effort responses replace honest human ones. Built on data from Tang, Birrell, and Lerner's 2022 study comparing Pew Research Center benchmarks against MTurk samples, it lets users select any of 30 privacy and security survey questions and slide between a clean human baseline and a fully contaminated sample. Response distributions, total variation distance, and the modal answer update live, with a per-question explanation diagnosing the specific contamination pattern at work — yes-acquiescence, confident wrongness, ceiling effects, low-base-rate amplification, and so on. Designed for marketing research and analytics courses to make data quality concepts visceral rather than abstract.
Link: https://claude.ai/public/artifacts/cb4236c7-1b6d-49e1-991d-8c6df386ca9f
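The distance metric the explorer updates live is total variation distance, which can be sketched directly; the response distributions below are hypothetical numbers for a single yes/no question, not the Pew or MTurk data:

```python
# Total variation distance between a clean and a contaminated
# response distribution: half the L1 distance between them.
def tvd(p, q):
    """TVD over a shared set of answer options."""
    return 0.5 * sum(abs(p[k] - q[k]) for k in p)

clean        = {"yes": 0.30, "no": 0.60, "not sure": 0.10}
contaminated = {"yes": 0.55, "no": 0.35, "not sure": 0.10}  # yes-acquiescence
print(f"TVD = {tvd(clean, contaminated):.2f}")
```

A TVD of 0 means the contaminated sample is indistinguishable from the human baseline; 1 means no overlap at all, which is why it works well as a single live-updating damage gauge.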
Pothole Go! is an interactive classroom tool that turns urban infrastructure data into a marketing analytics exercise. Inspired by the viral 2026 NY Post story of a Brooklyn mechanic earning $2,200 a night working next to a single Belt Parkway pothole, the app asks: if you wanted to scale that strategy across a city, where would you place your tents? Students adjust the number of repair tents (k), filter by pothole severity, and watch k-means++ clustering identify optimal locations across Chicago's lakefront, west side, and south side. An elbow plot reveals diminishing returns, a Pokédex-style field guide profiles each tent's catchment area, and a per-tent revenue estimator brings the demand-driven business logic full circle. Built on Chicago's documented 311 reporting patterns; methodology transfers directly to live open data.
Link: https://claude.ai/public/artifacts/5391618d-3209-4db4-8624-36030b302a80
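The k-means++ seeding step — the part that makes later tents land far from existing ones — can be sketched in pure Python. The coordinates below are random toy points, not Chicago 311 data, and this shows only the seeding, not the full clustering loop:

```python
import math
import random

def kmeans_pp_seeds(points, k, rng):
    """k-means++ seeding: each new seed is drawn with probability
    proportional to squared distance from the nearest existing seed."""
    seeds = [rng.choice(points)]
    while len(seeds) < k:
        d2 = [min((px - sx) ** 2 + (py - sy) ** 2
                  for sx, sy in seeds) for px, py in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                seeds.append(p)
                break
    return seeds

# Toy pothole coordinates (illustrative only)
rng = random.Random(42)
potholes = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(200)]
print(kmeans_pp_seeds(potholes, k=3, rng=rng))
```

Re-running the seeding for increasing k, then plotting total within-cluster distance against k, is exactly what produces the elbow plot's diminishing-returns shape.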
The Python for Digital Marketing wiki is the Python companion to MBA 542 (Digital Marketing) at the University of Illinois Urbana-Champaign, Gies College of Business. Rather than teach Python as a general programming language, it organizes the material around the eight modules of the course — so students learn variables and pandas in the context of a clickstream funnel, regular expressions in the context of GEO content auditing, scikit-learn in the context of churn prediction, and so on. Twenty-four worked examples run end-to-end in Google Colab, with no local installation required. The tone is deliberately practical: the marketing question always comes first, and the Python is whatever turns out to be needed to answer it.
Link: https://claude.ai/public/artifacts/15aabdc8-7879-436b-81d3-41b6dbfcac4c
The Trust Dashboard is an interactive teaching tool that demonstrates how AI-generated online reviews silently poison the metrics that platforms and managers rely on. Students choose from six pre-curated restaurant feeds — each with a distinct trust profile — and watch surface metrics (star ratings, sentiment, word count) stay reassuringly normal even as the aggregate trust signal collapses. Toggling between verification regimes (no provenance, expert badges, verified experiences) shows how the damage can be undone. Built around field and experimental effect sizes from Knight, Bart & Yang (2026), the artifact translates the paper's findings into a live, manipulable system that brings the multi-source poisoning lesson of Module 8 into the room.
Link: https://claude.ai/public/artifacts/7fec26a1-0c65-45b8-9aaa-faccaf8dfea5
The Provenance Investigator is an interactive case study that uses Kamenica and Gentzkow's (2011) Bayesian persuasion framework to teach students how to reason about blockchain-verified corporate disclosures. Students play a procurement officer evaluating three coffee importers, each of whom has chosen a different on-chain disclosure architecture for their ethical sourcing program. After ranking the brands by trustworthiness, students see how peers ranked them — and then learn that the architecture most students dismiss is, under Bayesian persuasion logic, the one a rational observer should trust most. The artifact's central lesson is that blockchain adoption alone is uninformative: the strategic design of what a firm chooses to make verifiable, and what it leaves opaque, is itself the signal a sophisticated receiver should be reading.
Link: https://claude.ai/public/artifacts/90d33a8f-31ca-46a9-8ad1-1d5977cffe83
Machine-Readability Optimizer is an interactive teaching tool that shows students how AI agents read marketing content. Students paste a blog post or brand article into the textbox, and the tool produces three views of the same content side-by-side: the original prose with claims, evidence, methods, entities, caveats, open questions, and citations highlighted in different colors; a structured rewrite that converts the prose into machine-readable Markdown with front-matter and sectioned bullets; and a queryability score that rates how easily an AI agent could extract evidence from the original. A built-in integration guide then shows three concrete paths for putting the structured content on a real website, generating copy-paste code for Markdown, JSON-LD structured data, and llms.txt feeds. The tool runs as a live Claude artifact and uses real LLM extraction rather than pattern matching.
Link: https://claude.ai/public/artifacts/1c2864ed-68f7-4856-89eb-ba3ede81ee70
The Case Reasoning Multiverse is an interactive case-analysis tool built for iMBA students. Rather than producing a single "right answer" the way a chatbot would, it walks students through four upstream decisions that shape any managerial recommendation — Framing, Stakeholder, Criteria, and Risk — and generates a tailored recommendation memo that follows from the path they took. Students can revise any decision and watch the recommendation shift, making the hidden assumptions behind AI-generated advice visible and contestable. Four sample cases are pre-built (Carrefour, Instacart, Real Chemistry, US Open), and students can paste any other case to have a multiverse generated on the fly. Adapted from Ye et al. (2026), Navigating the Conceptual Multiverse.
Link: https://claude.ai/public/artifacts/6485728a-8a2a-4234-91de-8e2213d8f539
The Funhouse Mirror is an interactive teaching artifact built around Peng et al.'s 2026 mega-study of LLM-based digital twins. Students choose a survey question — from a built-in repertoire or one of their own — and step through five circus "acts," each spotlighting a different way digital twins systematically distort their human counterparts: insufficient individuation, demographic stereotyping, representation bias, ideological tilt, and hyper-rationality. The artifact closes with the paper's headline finding: across 164 outcomes, the average twin-human correlation was just 0.20, roughly the correlation between a person's height and their intelligence.
Link: https://claude.ai/public/artifacts/7d12c88f-17d2-411a-83ad-eae9af85aeb5