AI will change work as profoundly as the Industrial Revolution—but much faster. My research explores what happens when AI begins to reshape not just tasks, but the organization itself: how decisions are made, how performance is evaluated, how ideas are judged, and what role managers play. The central message across this work is that AI is not just a productivity tool. It is a force that can restructure hierarchy, redesign management control systems, and change who gets to exercise judgment at work. Whether that future becomes more empowering or more controlling depends on how organizations choose to design the systems around it.
Organizations centralize decisions because expertise is scarce, information is fragmented, and coordination is hard. This paper asks whether AI changes that logic. Drawing on field experiments, it examines whether AI can move decision-making closer to the employees with the best local knowledge. The larger question is whether AI can do more than automate work: whether it can help redesign organizations to become more adaptive and empowering.
For decades, organizations have lived with a basic design constraint: important decisions often stay centralized because information is dispersed, expertise is uneven, and coordination is difficult. Frontline employees may hold the richest local knowledge, but that has rarely been enough. Without broader information, interpretive support, and coordination mechanisms, decentralization can come at a cost. Hierarchy persisted not simply because leaders preferred control, but because the system often required it.
This paper asks whether AI begins to loosen that constraint. As AI reduces the cost of accessing, processing, and interpreting information, it may alter a classic tradeoff in organizational design: centralizing authority to economize on scarce expertise versus decentralizing it to make better use of local knowledge. In that sense, the project is not mainly about automation. It is about whether AI changes the underlying logic that has long shaped organizational architecture.
We study this question in two field settings where the stakes of coordination are high and local knowledge matters deeply. In home care, scheduling is not just a logistical problem. It also involves continuity, relational fit, caregiver availability, and sensitivity to the human realities of care. In mental health care, scheduling similarly requires judgment about client needs, therapeutic progress, and timing. In both settings, the traditional assumption was that these decisions needed to remain centralized. Our interventions test whether AI-supported frontline workers can take on greater decision authority without sacrificing decision quality.
The broader argument is that AI may matter less as a tool for replacing people than as a tool for redesigning the organization itself. When embedded in the hands of employees rather than imposed solely from above, AI can function as a form of distributed expertise. It can narrow skill gaps, reduce informational frictions, and make it more feasible to move authority closer to where the best local knowledge resides. That possibility has consequences far beyond scheduling. It speaks to a larger future in which management is no longer defined primarily by the upward flow of decisions, but by the design of systems that allow judgment, information, and coordination to operate differently.
At the same time, the paper does not treat empowerment as automatically beneficial. Expanding decision rights can improve ownership and motivation, but it can also create pressure if employees experience greater authority without sufficient support. That is why the paper examines not only whether AI-supported decentralization improves decision quality, but also whether it changes how employees experience their work. The question is not simply whether AI allows authority to move. It is whether organizations can redesign that authority in a way that actually works.
At its core, this project is about a much larger possibility. If AI can reduce the structural reasons organizations have historically relied on centralized control, then the future of AI at work will not just be about doing the same things faster. It will be about building organizations on a different logic.
Would you be comfortable being evaluated by an AI instead of by a human manager? This study shows that employees do not accept AI-based evaluation automatically. Acceptance depends on how much control people have over their performance and on how the surrounding incentive system is designed.
Performance evaluation only works when employees trust the system judging them. In this study, we examine when people are willing to accept AI instead of a human manager in that role. We show that acceptance depends strongly on controllability: employees are more open to AI when performance is clearly tied to their own actions, but more hesitant when outside factors have a larger influence on results. When people feel that performance is not fully within their control, they are less confident that AI can interpret those outcomes appropriately.
That challenge is especially important because in many business settings, controllability is relatively low. Employees often work in environments where performance is shaped not only by their own effort, but also by market conditions, customer behavior, team interdependence, and other factors outside their control. Our findings show, however, that organizations can actively increase acceptance through the design of the performance evaluation system. In lower-controllability settings, employees become more comfortable with AI when the system includes relative performance information, because that information helps place outcomes in context. This suggests that the success of AI in performance evaluation depends not just on the technology itself, but on whether the broader system is designed to reflect the realities shaping performance. Overall, this study shows that employee acceptance of AI-based evaluation is not automatic. Organizations cannot simply introduce AI and expect employees to trust it. Acceptance has to be built through a performance evaluation system that fits the nature of the job and recognizes the context in which performance occurs.
What if AI could do more than help organizations manage ideas? What if it could help bring out better ones? In this study, we find that when AI is used to review employee ideas, people do not hold back. Under the right incentive system, they share more ideas, and the ideas that rise to the top tend to be better.
Employee creativity is essential for innovation and business success, but ideas only create value when employees are willing to share them. That makes it a central challenge for managers to create systems that encourage idea sharing while also allowing ideas to be reviewed fairly and efficiently. We examine whether AI can help with that challenge by evaluating employee ideas rather than requiring managers to review every submission themselves. This matters because paying employees for each idea they submit can increase idea generation, but in practice it is often difficult to implement because it places a heavy review burden on managers. Before organizations adopt systems like this, though, it is important to know whether employees will respond positively to being evaluated by AI.
Our results suggest that they do. When employees are rewarded for each idea they submit, AI leads them to share more ideas, and those ideas are more useful on average. We do not see the same benefits under tournament-style incentives, which suggests that AI works best when it is paired with the right incentive system. Most importantly, some of the very best ideas are especially likely to emerge when AI evaluation is combined with per-idea rewards. Overall, the results suggest that AI can do more than make idea evaluation easier to scale. It can also help organizations bring out more of the creativity already within their workforce.
This paper examines how human–AI collaboration can best be structured for workplace communication that requires empathy, judgment, and tact. It explores how different ways of dividing the work between people and AI shape both the quality of the message and its personal touch. More broadly, it asks how organizations can use AI in socially complex situations without weakening the human side of communication.
How should human–AI collaboration be structured when workplace communication requires empathy, judgment, and tact? We compare two approaches: an initiator condition, in which AI drafts first and the human refines the message, and a feedback condition, in which the human drafts first and AI provides suggestions. The paper asks how these different forms of collaboration shape the quality, tone, and personal touch of socially complex workplace communication.
At a broader level, the paper asks how organizations should design human–AI collaboration when communication is both operationally important and socially delicate. In practice, that means thinking not just about whether to use AI, but about when it should lead, when it should support, and how those choices shape both the message and the employee experience of producing it.
AI is changing not only how work gets done, but also what organizations should reward. This paper explores how incentive systems need to adapt when humans and AI create value together.
When AI becomes part of the workflow, traditional incentive systems may no longer direct effort in the right way. Some dimensions of performance are easier for AI to support, while others still depend more heavily on human judgment, creativity, and discernment. That makes the central challenge not just technological, but managerial: how should firms motivate people when the value of human effort is changing?
This paper explores what that shift means for performance evaluation and incentive design. In doing so, it moves the conversation beyond AI capability alone and toward a broader question of how organizations can make human–AI collaboration more effective.
As audit firms increasingly use AI systems to analyze transactions and flag risks, the auditor’s role is changing. Auditors are no longer only searching for problems themselves; they are also evaluating what the AI highlights, deciding whether the system’s output is reliable, and asking what the system may have missed.
This paper studies a key risk in AI-assisted auditing: overreliance. When AI tools direct attention toward flagged transactions, auditors may become less likely to notice issues outside the system’s focus. We examine whether auditors can avoid this kind of attentional narrowing when they better understand how the AI system works and when the system presents its output in a more interpretable way.
Using an experiment with professional auditors experienced with AI-assisted audit software, we find that situational awareness matters—but only under the right design conditions. When AI output is clearly structured, labeled, and easier to interpret, auditors who are prompted to think about the system’s limitations are more likely to identify the need for additional audit procedures. When the output is less structured, those prompts are less effective because auditors must spend more cognitive effort simply making sense of the information.
The broader implication is that audit quality in the AI era depends not only on the sophistication of the technology, but also on how AI tools are designed and how auditors are trained to use them. AI can support better judgment, but only when it preserves enough human attention, skepticism, and situational awareness for auditors to look beyond the system’s recommendations.