I am an innovation scholar. I care about how early ideas take shape: how we identify problems worth solving, how we generate solutions, and, once we have options, how we figure out which ones are worth pursuing. Innovation can look messy and non-linear from the outside, but I believe there is a lot of structure underneath it. My research works to surface that structure: to understand the cognitive, social, and technological processes that help organizations find more breakthroughs and high-impact ideas.
My work unfolds across three interconnected streams. The first asks a foundational question: what kinds of problems are worth solving? The second examines how the design of evaluation processes — including who evaluates and how — shapes which ideas get recognized and funded. The third explores how AI can augment human judgment in idea generation and evaluation without displacing the kinds of diverse, unexpected insight that humans uniquely contribute.
I. Identifying Problems Worth Solving
What kinds of problems are entrepreneurs trying to solve, and how is AI reshaping which problems get targeted?
Before we can evaluate solutions, we need to understand problems. I am building a structured map of the kinds of problems that early-stage startups are tackling — and examining how the structural properties of those problems predict where AI disruption is most economically consequential.
Reframing Entrepreneurship: How AI Reshapes the Problems that Startups Can Address (with S. Bernstein, M. Chen, and Z. Ma). Drafting manuscript.
Traditional frameworks for studying entrepreneurship center on solutions — the products, services, and business models that ventures bring to market. This project reframes the inquiry around problems: what customer needs do ventures address, and how does the choice of problem predict venture trajectories?
We are developing a multi-dimensional taxonomy of customer problems grounded in literatures on problem structure, AI economics, and information theory, and deploying it at scale using a novel human-LLM hybrid annotation pipeline across a large corpus of AI and non-AI startups. The framework classifies problems along dimensions including information intensity, frequency, and expert dependency — each of which captures a distinct mechanism by which AI reshapes the economics of problem-solving. Mapping this landscape lets us ask: is AI expanding the frontier of economically viable problems entrepreneurs can tackle, or concentrating activity in a narrower set of addressable markets?
Keywords: problem structure, AI economics, entrepreneurship, startup ecosystems, human-LLM annotation
II. Designing Evaluation Processes
How do the composition and structure of evaluation processes shape which innovations get recognized and funded?
A recurring theme in my work is that who evaluates, and how evaluation is structured, profoundly shapes which ideas move forward. Specialists and generalists see different things. Evaluators asked to focus on feasibility alone surface different concerns than those juggling multiple criteria at once. Getting evaluation design right is not just a question of organizational efficiency; it determines which breakthroughs get a chance.
Judging the Problem: A Problem-Centric Approach to Evaluating Early-Stage Ventures (with M. Zhang). Under 1st review.
If, as Stream I argues, the problem a startup addresses is as consequential as its solution, then evaluation processes should be designed to surface it. "Judging the Problem" tests this notion by asking what happens when evaluation is reoriented around problem quality rather than solution quality alone.
Early-stage ventures often struggle to articulate the fundamental problems they aim to solve, yet traditional evaluation approaches may not effectively surface these issues. Through a field experiment with over 200 expert judges evaluating 150 ventures at a leading university's accelerator program, we examine how training judges to focus on problem identification shapes their assessments and feedback quality. We find that a problem-focused approach helps judges better identify promising ventures and provide more actionable guidance.
Keywords: venture evaluation, problem identification, entrepreneurship, feedback quality, field experiment
Beyond Feasibility Filters: How Domain-Spanning Expertise Enables Recognition of Innovation Potential (with Z. Szajnfarber, J. Crusan, and M. Menietti). Strategic Management Journal, 47(5), 1368–1432, 2026. 🏆 2023 TIM Best Paper Award, Academy of Management
Partnering with NASA, we show that evaluators with cross-domain expertise are uniquely positioned to recognize how novel solutions enhance system functionality, while specialists tend to assess novelty and feasibility in isolation. These differences shape which ideas get recognized as promising. By combining human expertise with LLM-based analysis of evaluator comments, the findings offer guidance for designing evaluation processes that better surface high-potential innovations.
Keywords: project evaluation, system architecture, expertise, human-LLM annotation, field experiment
Greenlighting Innovative Projects: How Evaluation Format Shapes the Perceived Feasibility of Early-Stage Ideas (with S. Friis, T. Cai, M. Menietti, G. Weber, and E. Guinan). Revising manuscript.
Focusing evaluators' attention on feasibility alone leads to more comprehensive assessments of implementation challenges, while multi-criteria evaluation better captures interdependencies across dimensions. These findings, drawn from a large-scale evaluation setting at a leading research university, offer guidance for organizations seeking to design more effective innovation funding processes.
Keywords: project evaluation, cognitive attention, feasibility analysis, human-LLM annotation, field experiment
Designing Evaluation Panels: When Does Professional Panel Diversity Predict Startup Success? (with N. Rietzler and Y. Zhang). Under 1st review.
Panel composition is a central but underexamined design choice in startup selection. Drawing on 91,423 evaluations of 19,152 startups in a global accelerator — with stratified random assignment of evaluators — we show that professionally diverse panels produce assessments more strongly associated with subsequent venture outcomes (funding, survival, revenue, acquisition) than homogeneous panels, particularly among high-scoring ventures. Diverse panels succeed not by evaluating in more depth, but by evaluating more broadly: investors emphasize markets, entrepreneurs focus on teams, and executives prioritize the product. Panel composition is thus a strategic lever: rather than seeking one "ideal" evaluator type, accelerators can improve selection by combining complementary professional perspectives.
Keywords: venture evaluation, panel composition, professional diversity, accelerators, organizational selection
Forecasting Impact of Ideas: The Role of Concrete Language in Idea Evaluation (with W. Orwig, M. Zhang, and D. Schacter). 1st revise & resubmit.
How do evaluators assess the long-term impact of early-stage ideas in the absence of clear market evidence? We show that the language used in entrepreneurial pitches plays a critical role. Analyzing hundreds of entrepreneurial pitches from a leading university innovation competition over the past five years, we find that ideas described in more concrete terms are consistently rated as higher impact. This effect operates by enabling evaluators to better envision future outcomes, strengthening their ability to assess an idea’s potential. Together, these findings highlight how subtle features of communication shape evaluation and offer practical guidance for founders seeking to convey the promise of their ideas.
Keywords: concreteness, idea evaluation, forecasting, impact, entrepreneurial pitches
Conservatism Gets Funded? The Role of Negative Information in Expert Evaluations for Novel Projects (with M. Teplitskiy*, H. Ranu, G. Gray, E. Guinan, M. Menietti, and K. Lakhani). Management Science, 68(6), 4478–4495, 2022. *Equal authorship.
Information sharing among expert evaluators introduces a systematic negativity bias in project evaluation. Evaluators are more likely to lower their scores after seeing more critical peer assessments than to raise them after seeing favorable ones, leading to more conservative funding decisions. Analysis of evaluators' written justifications reveals that negative peer input shifts attention toward identifying weaknesses, while positive input does not equivalently amplify strengths. These findings highlight how transparency in evaluation processes can unintentionally bias decisions against novel, high-upside projects.
Keywords: project evaluation, bias, negativity bias, innovation selection, field experiment
Engineering Serendipity: When Does Knowledge Sharing Lead to Knowledge Production? (with I. Ganguli, P. Gaulé, E. Guinan, and K. Lakhani). Strategic Management Journal, 42(6), 1215–1244, 2021.
Knowledge similarity plays a dual role in shaping the outcomes of serendipitous encounters. Analyzing interactions among more than 15,000 scientist pairs and their subsequent publication trajectories, we find that moderate overlap in expertise fosters collaboration and knowledge exchange, while high similarity—such as being in the same field—reduces cross-citation, suggesting competitive dynamics. These findings offer guidance for designing interactions that maximize innovation outcomes.
Keywords: serendipity, knowledge similarity, collaboration, competition, innovation, field experiment
III. Human-AI Collaboration in Idea Generation and Evaluation
How can AI augment human judgment in idea generation and evaluation — and when does it help versus hurt?
The rise of generative AI raises a fundamental question for organizations: how do we effectively combine human judgment with AI assistance in both idea generation and evaluation processes? My research in this stream treats human-AI collaboration as an empirical question — examining when AI augmentation improves the generation and assessment of ideas, when it introduces new biases, and how the design of AI assistance shapes outcomes. A core concern is the risk of over-deference: evaluators who receive AI recommendations may become more confident without becoming more accurate, particularly when AI-generated content is fluent and persuasive.
The Crowdless Future? Generative AI and Creative Problem Solving (with L. Boussioux*, M. Zhang, V. Jacimovic, and K.R. Lakhani). Organization Science, 35(5), 1589–1607, 2024. *Equal authorship.
Generative AI is reshaping how organizations source and develop new ideas. Comparing traditional crowdsourcing with human–AI collaboration in a real-world innovation challenge, we find that while human-generated solutions tend to be more novel, AI-assisted approaches produce ideas that are more feasible and higher quality. Rather than replacing the crowd, AI shifts the tradeoff between novelty and viability, enabling more efficient exploration of the solution space. These findings highlight how organizations can strategically integrate AI into idea generation processes to balance creativity with implementability.
Keywords: idea generation, crowdsourcing, human-AI collaboration, generative AI
The Narrative AI Advantage? A Field Experiment on Generative AI-Augmented Evaluations of Early-Stage Innovations (with L. Boussioux, C. Ayoubi, Y. Chen, C. Lin, R. Spens, P. Wagh, and P. Wang). Under 3rd review. 🏆 1st Place, 2025 Wharton People Analytics White Paper Competition · Best in Track Nominee, ICIS 2025
We examine how different AI assistance formats — from black-box recommendations to narrative explanations — shape human evaluators' decision-making. Using behavioral data and mouse-tracking analysis, we find that the way AI presents its recommendations influences how humans engage with subjective criteria, with meaningful implications for over-reliance and evaluation quality.
Keywords: project evaluation, human-AI collaboration, generative AI, over-deference, field experiment
The Mean-Variance Innovation Tradeoff in AI-Augmented Evaluations (with C. Grumbach and G. von Krogh). 1st revise & resubmit.
AI integration shifts not just the average quality of selected ideas, but the variance — with implications for portfolio diversity and radical innovation. We develop and test a mean-variance framework showing that AI augmentation tends to narrow the selection distribution in ways that may systematically disadvantage unconventional, high-upside proposals.
Keywords: evaluation design, AI augmentation, innovation portfolio, radical innovation