Research Program and Publications

Summary

Should we attribute successes to skill or to luck? We are hard-wired to think that the most successful people or organizations must have done something right to deserve all the attention and rewards they receive. My research provides systematic evidence that luck plays a critical role in exceptional successes, not only in business and innovation but also in music, movies, science, and professional sports. A key finding is that more can be gained by rewarding and learning from the “second best”, which has important implications for organizational design and redistributive policies. My research also shows that there is room for strategizing with luck—not because there are systematic ways to become luckier than others, but because the ways people are fooled by randomness are non-random and predictable.

These ideas have been published in major strategy and management outlets (e.g., the Academy of Management Review, Organization Science (×3), the Academy of Management Annals, Strategy Science, and Leadership Quarterly), practitioner outlets (e.g., California Management Review and Harvard Business Manager), and interdisciplinary outlets (e.g., PNAS, Social Networks, and Behavioral and Brain Sciences). I have received multiple awards for my research (e.g., Best Paper awards from both the Academy of Management and the Strategic Management Society), and my 2019 book summarizes my research on how to quantify and strategize with luck. My current research focuses on how organizations should manage diversity and (re)design themselves in the age of AI.


Successes attract our attention. But should we attribute success to brilliant strategy or to pure luck? The answer depends on the context, but research in the behavioral sciences has identified a context-independent tendency: people tend to overattribute success to the person (e.g., their superior skill or merit) rather than to the situation (e.g., being in the right place at the right time), a tendency known as the fundamental attribution error (Ross & Nisbett, 1991). My research program addresses this fundamental tension in strategy research and practice formally, empirically, and experimentally. It provides novel insights into when and how managers should learn from successes and failures, how organizations and societies should design their incentive systems and redistributive policies, and the opportunities that arise from suboptimal learning. Figure 1 illustrates my research program with four specific streams.

Streams One and Two demonstrate theoretical mechanisms through which randomness in structured environments can produce systematic patterns, as well as how people can be predictably fooled by these stochastic processes. Stream Three proposes a theoretical framework for turning these biases into competitive advantage, with applications to diversity management, organizational change, innovation, and leadership. My fourth research stream extends these ideas to the context of artificial intelligence (AI) and examines how algorithmic bias complicates machine and organizational learning. Taken together, my research offers a comprehensive account of how to quantify and strategize with biases: the bias to mistake luck for skill (Stream One), the bias against chance explanations for complex phenomena (Stream Two), the bias against diversity and change (Stream Three), and an inherent bias in algorithmic predictions (Stream Four). Below, I provide overviews of the four streams and the key findings and implications of my research papers.

Research Stream One: Exceptional Success—Skill or Luck?

A longstanding concern for policy-makers, business practitioners, and scholars is how to evaluate an actor’s merit on the basis of their performance. Merit is difficult to measure directly, so people often infer it from observed performance. This research stream focuses on this inference challenge: it identifies the conditions under which people systematically mistake luck for skill when evaluating successes and failures, and traces the implications of these mistakes.

Top performers are usually perceived to be the most skillful and thus receive the greatest rewards and are promoted and imitated. In my paper published in PNAS (2012), I formally and empirically demonstrate the flaws in the belief that exceptional performers are the most skilled. Exceptional performance often occurs in exceptional circumstances, and top performers are often the luckiest people, having benefited from rich-get-richer dynamics that amplified their initial fortune. Our experiments show, however, that people usually rely on the heuristic of learning from the most successful. This heuristic is likely to lead to disappointment: even if you could imitate everything Bill Gates did, you could not replicate the context he was in, or his initial fortune, which contributed to his exceptional success. More broadly, my research suggests a “less-is-more” effect: the more exceptional a performance is, the less sensible it is to promote or learn from such an outlier.
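The rich-get-richer mechanism can be conveyed with a minimal simulation. This is a sketch in the spirit of the model, not the paper’s actual specification; the mixing weight, trial count, and uniform skill distribution are all illustrative assumptions:

```python
import random

def simulate_success_records(n_agents=5000, n_trials=50, weight=0.8, seed=42):
    """Toy rich-get-richer process: each period, the chance of success
    mixes an agent's latent skill with the agent's own past success rate,
    so early luck compounds. Returns mean skill by total success count."""
    rng = random.Random(seed)
    by_count = {}
    for _ in range(n_agents):
        skill = rng.random()  # latent merit, uniform on [0, 1]
        successes = 0
        for t in range(1, n_trials + 1):
            # Before any outcome exists, seed the past rate at 0.5.
            past_rate = successes / (t - 1) if t > 1 else 0.5
            p = (1 - weight) * skill + weight * past_rate
            successes += rng.random() < p
        by_count.setdefault(successes, []).append(skill)
    return {k: sum(v) / len(v) for k, v in sorted(by_count.items())}

# When the weight on past outcomes is high, the agents with the very best
# records need not have the highest expected skill: their streaks are
# largely self-reinforcing luck rather than merit.
```

Plotting mean skill against the success count in such a simulation is one way to see where expected merit stops rising with performance.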

This less-is-more effect is empirically examined in my paper under revision at Organization Science (an earlier version was included in the 2018 Academy of Management Best Paper Proceedings and won the 2019 Best Paper Prize at the Strategic Management Society). We collected eight datasets from four domains that satisfy the theoretical conditions under which a less-is-more effect can occur: entertainment (Billboard 100 singles, movie box office), science and innovation (journal and patent citations), auto racing (Formula 1 and NASCAR), and firm performance (US public firms’ growth rates and profitability). We found less-is-more effects in five of the eight datasets (Billboard 100, journal citations, Formula 1 racing, NASCAR, and firm growth rate), four of which are statistically significant (all except journal citations). Strong, non-linear statistical regression effects appear in all datasets, but the contexts determine whether and where less-is-more effects occur. For example, the fastest-growing firms usually attract media attention, investment, and benchmarking, such as those on Fortune’s 100 Fastest Growing Companies list. Our results confirm that consecutive growth rates are almost random (Geroski, 2005), but systematic less-is-more effects can still occur: firms with the top current growth rates (>34% per annum) have a significantly lower expected growth rate for the next year than firms with high but less extreme current growth rates (between 32% and 34% per annum). This suggests that exceptional performances are not only regressive, as prior studies emphasize, but can also become predictably worse.

My 2019 Academy of Management Review paper elaborates how to turn the luck bias (the misattribution of luck in salient successes and failures) into opportunities. Many people mistake luck for skill when evaluating outliers. Even those who recognize this bias may fail to act on that knowledge if their decisions depend on others’ understanding and approval (Turco & Zuckerman, 2014). Opportunities may therefore favor those who are less sensitive to what others think, which has important implications for entrepreneurship and the field of behavioral strategy. Opportunities arising from luck biases are protected by strong behavioral barriers, awaiting smart, independent strategists who understand our theory to monopolize the contrarian profit.

Research Stream Two: Chance Models and Explanations in Management

Chance models (mechanisms that explain empirical regularities through unsystematic variance without assuming a priori differences among actors) have a long tradition in the sciences, but they remain marginal in management scholarship. An exception is the work of James G. March and his coauthors (including my postdoctoral mentor, Jerker Denrell), who proposed a variety of chance models that explain important management phenomena, including the careers of top executives, managerial risk taking, organizational anarchy, learning, and adaptation (Cohen, March, & Olsen, 1972; Denrell & March, 2001; Harrison & March, 1984; March & March, 1977; March & Shapira, 1992). Professor March is my research hero, and this stream of research serves as a tribute to the beauty of these random “little ideas” (Liu, Maslach, Desai, & Madsen, 2015) and, by extension, demonstrates how they can be recombined to generate novel predictions and rich implications.

In particular, my 2016 Academy of Management Annals paper offers a systematic review of “luck” by analyzing its applications in the management literature and in neighboring fields such as psychology, sociology, economics, and moral philosophy. I discuss five typical uses of luck as an explanation for performance differences: (a) luck as attribution; (b) luck as randomness; (c) luck as counterfactual; (d) luck as undeserved; and (e) luck as serendipity, as well as a sixth, underexplored perspective, “luck as leveler”, which suggests possible solutions to issues such as social inequality and unwarranted executive compensation. My 2015 paper in Organization Science offers a comprehensive review of the prior literature on how randomness in structured environments can produce systematic outcomes; we also offer a toolbox for researchers to incorporate randomness when building stronger null models for developing and testing management theories. My 2017 paper in Strategy Science suggests that promoting the top-performing executives likely rewards them for their luck rather than their skill. The variance in skill is likely small among top-level executives, implying that extreme performances at this level are likely associated with factors beyond the executives’ control. Higher standards of selection can therefore lead to worse outcomes, because the promoted are the luckier rather than the more skilled.

I also extend Professor March’s chance models to generate novel predictions in an article under revision at Research in the Sociology of Organizations. For example, an empirical regularity among the firms and executives featured in business bestseller lists or awards is a systematic performance decline after being featured (Clayman, 1987; Liu, 2019; Rosenzweig, 2007). Take the yearly list of Barron’s 30 most admirable chief executive officers (CEOs) as an example: there is an inverted V-shaped pattern in the firms’ performances before and after their CEOs are featured. The decline after the feature is steeper than the rise before it, which is puzzling from a statistical regression viewpoint (Harrison & March, 1984). A reanalysis of March and Shapira’s (1992) random walk model provides an alternative explanation: the steep decline may result from these CEOs’ slow adaptation, lack of merit, and self-reinforced risk taking. In particular, the model can reproduce the inverted V-shaped pattern with asymmetrical slope coefficients when the focal actors have a low learning rate (i.e., aspiration adaptation rate) and low skill (i.e., drift rate in the random walk). The mechanism is as follows. Most low-skilled actors are selected out when their accumulated resources hit the lower bound, but a few survive thanks to the favorable outcomes of their risk taking (Levinthal, 1991). Their lucky successes, however, soon generate a vicious cycle: their lack of skill forces them to continue taking excessive risks to meet their high and sticky aspirations, creating a stronger statistical regression effect and a higher mortality rate after they obtain exceptionally high performances.
This result connects several mechanisms in Professor March’s chance models (Harrison & March, 1984; March & March, 1977; March, 1991; March & Shapira, 1992) and generates novel implications, such as the inference that worse-performing CEOs with faster learning rates are likely to be better role models. I am examining this prediction in my work-in-progress project with Chia-Jung Tsay (UCL) using data from firms’ annual reports and interviews with award-winning CEOs.
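The random-walk mechanism above can be sketched in a few lines. This is a simplified parameterization of my own (the shortfall-to-risk multiplier, drift scale, and bounds are illustrative), not the exact March and Shapira (1992) specification:

```python
import random

def simulate_survivor(skill, n_periods=200, start=10.0, floor=0.0,
                      aspiration_rate=0.05, seed=None):
    """Random walk with drift (skill), an absorbing lower bound, and
    aspiration-driven risk taking: falling short of a slowly adapting
    aspiration raises the variance (risk) of the next step."""
    rng = random.Random(seed)
    resources, aspiration = start, start
    path = [resources]
    for _ in range(n_periods):
        shortfall = max(0.0, aspiration - resources)
        risk = 1.0 + 0.5 * shortfall  # risk taking grows with the shortfall
        resources += skill + rng.gauss(0.0, risk)
        if resources <= floor:
            return path, False  # selected out at the lower bound
        # Aspiration adapts toward current resources at the learning rate.
        aspiration += aspiration_rate * (resources - aspiration)
        path.append(resources)
    return path, True

# Low-skill (negative-drift) actors survive mainly through lucky risky
# draws; with a low aspiration_rate, aspirations stay high and sticky
# after a peak, sustaining risk taking and steepening the decline.
```

Averaging many such paths around their peaks is one way to reproduce the asymmetric inverted V-shaped pattern described above.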

Research Stream Three: Analytical Behavioral Strategy

Decades of research in the behavioral sciences and in management portray decision biases as nuisances to be eliminated. This stream of research shows how these biases can instead become a guide to untapped opportunities, combining insights from strategy and behavioral science to exploit irrational biases. I call this approach “analytical behavioral strategy”: drawing on behavioral science to search for contrarian opportunities, then using experiments and data analysis to formulate an exploitation strategy.

My first perspective is a therapeutic one: how to help leaders and executives, as well as their followers and stakeholders, overcome predictable biases in decisions. In addition to conventional de-biasing techniques, I focus on applications of “nudges”, which go with the grain of human nature and overcome a damaging bias by triggering other intuitive processes. My paper published in California Management Review in 2017 translates the policy applications of nudges, developed by Nobel Laureate Richard Thaler, to organizational contexts. For example, I worked with two large UK firms to examine how a de-salience nudge, such as a CV-blind policy, can enhance diversity in recruiting. The results are encouraging, with an important nuance: the nudge increased the number of women and disabled applicants reaching the next round of interviews, but reduced it for Black and Minority Ethnic (BME) applicants. This suggests a limit to nudges: they may attenuate certain unconscious biases while strengthening other forms of discrimination, particularly those related to social inequality and cumulative disadvantage.

My second perspective is a strategic one: how rivals’ predictable biases can become an alternative source of strategic opportunity and profitability for the more informed. This stream of research opens a new avenue for behavioral strategy by providing a strategy-as-arbitrage perspective. The perspective is predicated on the assumption that market failure is necessary for strategic opportunities and superior profit (Denrell, Fang, & Winter, 2003). A valuable resource may be mispriced owing to “behavioral failures” (Gavetti, 2012), such as a failure to recognize resource value because of cognitive distance or inertia (Tripsas & Gavetti, 2000). To promise attractive opportunities, biases must be difficult to spot or act on; otherwise the resulting mispriced resources will attract competition and soon be arbitraged away (Denrell, Fang, & Liu, 2019). Superior profit is realized when a strategist manages to overcome these behavioral failures, through superior intelligence and insight or through luck and exaptation, and acquires undervalued resources ahead of rivals.

I elaborate on applications of this strategy-as-arbitrage perspective to diversity biases (how strategists can exploit others’ failures to recognize and capture the potential diversity bonus of teams; Organization Science, 2020), to luck biases (how strategists can take advantage of the ways others mistake luck for skill; Academy of Management Review, 2019), and to how venture capitalists can exploit inertia and homophily biases when searching for the next big startups (Advances in Strategic Management, 2018). I am currently extending this work to describe how organizational design can help search for contrarian opportunities in the ecology of decision-screening functions (joint work with Jose Arrieta, PhD student at ETH Zurich); we have received a €20k Ernst & Young grant to extend this work by running experiments at ESMT.

Research Stream Four: Algorithmic Bias and Machine Learning Traps

Organized systems that utilize AI (algorithms that learn from data and take actions to maximize the chance of achieving their goals) will undoubtedly be important agents of change in the 21st century. Nevertheless, extensive research on individual and organizational learning suggests that learning does not necessarily equate to improvement (Levitt & March, 1988). Imperfect learning algorithms, combined with biased data, could sustain suboptimal beliefs and actions that trap organizations indefinitely (Levinthal & March, 1993).

There are many ways to study possible traps in algorithmic learning. I build on a reinforcement learning model and a mutual learning model. In my paper under revision at Management Science, I explore how a natural consequence of outcome reinforcement (success tends to increase, and failure to decrease, the chances of future success) can create a machine learning trap. We demonstrate that such reinforcing processes can make a perfect record of repeated successes less impressive than a mixed record with occasional failures. The mechanism is that the relative role of quality changes over time when past outcomes influence the probability of future success. If initial success increases the chances of future success to such an extent that even lucky but unmerited actors are likely to succeed, the role of quality in future success shrinks, making additional successes uninformative. We formalize the conditions under which a mixed record signals opportunities for more informative learning than a perfect one does. Our results imply that ranking algorithms based on past successes, such as preferential attachment, can produce systematically misleading indicators of quality; this matters especially when applications of algorithms diffuse faster than awareness of their constraints and biases.
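The informativeness argument can be made concrete with a small Bayesian calculation. This is a toy two-type version of the mechanism; the quality levels, mixing weight, and record length are illustrative assumptions of mine, not the paper’s parameters:

```python
def record_posterior(record, q_hi=0.7, q_lo=0.3, weight=0.8, prior=0.5):
    """P(high quality | record) in a toy success-breeds-success model:
    the success chance at each trial mixes latent quality with the past
    success rate (seeded at 0.5 before any outcome is observed)."""
    def likelihood(q):
        lik, successes = 1.0, 0
        for t, outcome in enumerate(record, start=1):
            past_rate = successes / (t - 1) if t > 1 else 0.5
            p = (1 - weight) * q + weight * past_rate
            lik *= p if outcome else 1 - p
            successes += outcome
        return lik
    l_hi, l_lo = likelihood(q_hi), likelihood(q_lo)
    return prior * l_hi / (prior * l_hi + (1 - prior) * l_lo)

perfect = record_posterior([1] * 10)     # ten straight successes
mixed = record_posterior([0] + [1] * 9)  # early failure, then nine successes
# Under strong reinforcement (weight = 0.8), the mixed record is the
# stronger signal of quality: successes achieved against an unfavorable
# past rate are diagnostic, while successes that merely ride the
# reinforcement are nearly uninformative.
```

With these parameters the early-failure record yields the higher posterior on high quality, illustrating how a perfect record can be less impressive than a mixed one.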

In my paper under revision for Organization Science, I explore possible traps in a mutual learning model that builds on March (1991) and resembles the process by which intelligent systems employing machine learning both learn from individual human actors and shape the way those individuals learn. Individuals are socialized into an organizational code comprising beliefs, norms, practices, and other aspects of organizational culture. As the many futile attempts to define such culture indicate, this code is an elusive black box. Much like the search algorithms employed by Google and Amazon, it learns from organizational members’ beliefs and actions (e.g., the websites, opinions, or partners that individuals endorse) and, in return, provides suggestions, guidelines, and directions that influence members’ beliefs and actions (e.g., the display of search rankings or shortlisted targets).

One of the main results of March’s (1991) model is that the highest performance is reached when individuals learn slowly (e.g., partially ignoring algorithmic recommendations) while the organizational code learns fast (e.g., quickly aligning its knowledge with selected individuals’ choices). The usual explanation is that slow learning preserves useful diversity among organizational members, giving the potentially beneficial solutions they hold a fair chance to be expressed. Through an analytically tractable model, we demonstrate that this explanation, despite its appealing intuition and compelling logic, is both insufficient and, in some instances, misleading. Simply including and preserving diversity is not necessarily beneficial; the organization also needs a discerning learning mechanism that can differentiate useful diversity from the rest. The benefit of slow learning is conditional on who is the better learner in the mutual learning process: the more discerning entity can afford to let its learning counterpart adapt slowly because it has the capacity to recognize the beneficial differences the latter holds.
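The mutual learning setup can be conveyed in a compact simulation. This is a simplified rendering of March’s (1991) model: the majority-vote code-update rule and all parameter values below are my own simplifications, not the original specification:

```python
import random

def march_mutual_learning(m=30, n=50, p1=0.1, p2=0.9, periods=80, seed=0):
    """Mutual learning between n individuals and an organizational code
    over m binary dimensions of an external reality. Individuals adopt
    the code's beliefs at rate p1 (socialization); the code adopts the
    majority view of better-scoring individuals at rate p2. Returns the
    fraction of reality the code knows at the end."""
    rng = random.Random(seed)
    reality = [rng.choice([-1, 1]) for _ in range(m)]
    code = [0] * m  # 0 = no belief yet
    people = [[rng.choice([-1, 0, 1]) for _ in range(m)] for _ in range(n)]

    def score(beliefs):
        return sum(1 for b, r in zip(beliefs, reality) if b == r)

    for _ in range(periods):
        # Individuals learn from the code (socialization) at rate p1.
        for person in people:
            for d in range(m):
                if code[d] != 0 and person[d] != code[d] and rng.random() < p1:
                    person[d] = code[d]
        # The code learns from individuals who outperform it, following
        # the majority view among those superiors, at rate p2.
        code_score = score(code)
        superiors = [p for p in people if score(p) > code_score]
        for d in range(m):
            if superiors and rng.random() < p2:
                votes = sum(p[d] for p in superiors)
                if votes:
                    code[d] = 1 if votes > 0 else -1
    return score(code) / m
```

Comparing runs with low versus high p1 (averaged over many seeds) is one way to probe when slow individual learning actually helps.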

Our finding has important implications as algorithms combined with Big Data increasingly resemble the organizational code assumed in the March model. The code learns from every one of us, and we, in turn, learn from the code’s output. If the code’s learning process is imperfect or even manipulated, individuals should learn quickly instead of slowly. This conclusion may sound counterintuitive, but it follows logically from our results: when learning algorithms become less discerning, individuals should learn fast, before the algorithms learn the wrong, misleading lessons that would otherwise diffuse and contaminate us all. I am revising both papers for leading management journals and developing a new project to experimentally explore how organizational design and routines are complicated by the adoption of machine learning and artificial intelligence.

Summary

Overall, I have established a unique and comprehensive research program on how stochastic processes impact inferences about performance differences, and I have elaborated on its implications for management and strategy. I also engage a wide audience by disseminating my research insights through media exposure, public talks, and my book. I am beginning to mentor the next generation of scholars. I look forward to continuing my research program and engaging the wider academic community by taking leadership roles after obtaining tenure.