3 ways to use AI: Are you a cyborg, a centaur, or a self-automator?
What you’ll learn:
- Employees using generative AI fall into one of three categories: cyborgs, who collaborate closely with artificial intelligence; centaurs, who engage with AI in a more constrained way; and self-automators, who offload a task more fully to AI.
- How employees engage with AI affects the quality of their work and what they learn while doing it.
- Companies should be proactive in managing generative AI deployment, not only to get the results they want but also to train employees in the skills they need.
As generative artificial intelligence use becomes widespread across companies and industries, it is important that organizations determine how individual employees will use the technology and whether it will reshape how they learn skills and solve problems.
In a new study co-authored by MIT Sloan professor Kate Kellogg, researchers tracked generative AI use among employees at Boston Consulting Group and identified three distinct ways in which people interacted with AI: offloading all of their work to the technology; continuously iterating with the AI tool; and initiating tightly controlled interactions with the AI, guided by their personal expertise. The researchers labeled these three modes self-automators, cyborgs, and centaurs, respectively.
“There is a lot of talk about human-AI augmentation, and what we were really struck by with our study is that these three different ways of interacting with generative AI have profound implications for employees’ skill development and performance,” said Kellogg. “We can no longer look just at ‘Do people use AI?’ but we need to dissect how do they use it and the implications of this use.”
Cyborgs, centaurs, and self-automators
In the study, 244 junior associates and consultants at BCG were asked to recommend which of three brands of a fictional company should receive strategic investment. They were given fictitious interview notes and business data on each brand and asked to submit a memo to the CEO that was scored on whether the recommendation was correct and how persuasive it was.
The employees were also given access to a custom generative AI platform built on OpenAI’s GPT-4 to use as they wished. Their interactions with the AI were recorded and time-stamped, and they participated in follow-up interviews.
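The study doesn't detail how BCG's custom platform was built, but the recording setup it describes is straightforward to picture. Below is a minimal sketch of one way to wrap the GPT-4 chat API so that every exchange is logged with a timestamp, assuming the OpenAI Python SDK (v1+); the function name `logged_chat` and the JSONL log format are illustrative, not from the study.

```python
# Minimal sketch: time-stamped logging around the GPT-4 chat API.
# Assumes the OpenAI Python SDK (v1+); names and log format are illustrative.
import json
from datetime import datetime, timezone

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def logged_chat(messages: list[dict], model: str = "gpt-4",
                log_path: str = "interactions.jsonl") -> str:
    """Send a chat request and append a time-stamped record of the exchange."""
    response = client.chat.completions.create(model=model, messages=messages)
    reply = response.choices[0].message.content
    with open(log_path, "a") as f:
        f.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": messages[-1]["content"],
            "reply": reply,
        }) + "\n")
    return reply


# Example: one turn of a consultant's conversation
# print(logged_chat([{"role": "user", "content": "Summarize the brand data."}]))
```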
The researchers found that the consultants engaged in three distinct forms of interaction with AI:
- Cyborgs accounted for 60% of consultants. They engaged in what the researchers call “fused co-creation.” Throughout every stage of the project, they collaborated closely with the AI tool — probing its suggestions, allowing it to lead the way, and taking its advice on some occasions while pushing back against it on others.
- Centaurs accounted for 14% of consultants. They practiced a more constrained form of engagement that the researchers call “directed co-creation.” Centaurs knew which questions they wanted answers to and asked more-specific questions to get them. Unlike cyborgs, who engaged in conversational back-and-forth, centaurs maintained structured and controlled interactions with AI, harnessing it as a tool for targeted efficiency.
- Self-automators constituted 27% of consultants. They demonstrated what the researchers describe as “abdicated co-creation,” offloading the task largely or entirely to the AI. They delegated analytical and evaluative thinking to the AI, which produced quick results that were polished but lacked depth.
The researchers note that the 27% figure may underestimate the number of self-automators in the broader knowledge workforce.
“Our study was looking at a highly diligent and motivated population that worked on this task in a focused manner, knowing it was being investigated by researchers,” said Hila Lifshitz, a professor at Warwick Business School and a co-author of the paper. “Many knowledge-work organizations have much more widespread automation, where people are essentially copying information into generative AI, modifying it slightly, and pasting the results into their work. This is why we chose to name them self-automators — to raise awareness that when we do this, over time it can lead to automation.”
Employees who operated as centaurs had the highest accuracy in their business recommendations, outperforming cyborgs and self-automators. When it came to persuasiveness, cyborgs and centaurs were roughly equal, while self-automators lagged in the category.
Implications for employee learning
How the AI tools were used influenced not only the conclusions that consultants reached but also what they learned along the way. Centaurs, with their narrower use of AI, relied on their own domain expertise — and, in turn, increased that expertise through targeted questions, like “What’s the formula for compound annual growth rate?” Their responses in follow-up interviews suggested that their approach was, in part, an effort to avoid becoming overly reliant on AI.
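For reference, that question has a one-line answer: CAGR = (end value / beginning value)^(1/years) − 1. A quick sketch in Python, with sample figures made up for illustration:

```python
def cagr(begin_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate: the constant yearly rate that takes
    begin_value to end_value over the given number of years."""
    return (end_value / begin_value) ** (1 / years) - 1


# Example: revenue grows from $10M to $16M over 4 years
print(f"{cagr(10_000_000, 16_000_000, 4):.2%}")  # ~12.47%
```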
Cyborgs gained little in the way of domain expertise but became much more knowledgeable about solving problems using generative AI. “They weren’t upskilling themselves with domain expertise like centaurs, but they were learning another valuable skill,” Kellogg said. “Organizational leaders ought to be thinking about having their employees doing both things, and how they might structure work to make sure that happens, depending on what needs to be accomplished.”
Self-automators saw no skill gains. The researchers suggest that this does not mean there is no role for automation, but that leaders must carefully and deliberately manage the training of a new generation of employees entering the workforce with generative AI tools at their disposal.
Finding a place for automation
Employees must also decide which tasks should be automated. The answer depends on context, the researchers write, but a few general guidelines can help point employees and leaders in the right direction.
The criticality of the task is the first criterion, Lifshitz said. In the case of this study, recommending a strategic investment to the CEO is a core function of the employees’ work, and errors could be costly. “We’d warn against automating this task, but when it’s something more repetitive, menial, and the risk is low, then those are areas where automation seems OK,” she said.
Companies often leave it to employees to decide which tasks to automate: employees are told to take ownership of their work, and managers are told to make sure employees have enough time to do their jobs.
“But rather than putting the onus on people, there is a lot that organizations can do with systems to help employees make better decisions about when to use AI,” Kellogg said. To begin with, companies should allow new employees to experiment with AI during onboarding and provide them with feedback about their outputs to help them get a clear understanding of where it works for them and where it doesn’t.
As employees develop greater domain expertise, companies can consider adding interfaces that visualize uncertainty in AI responses or default questions that people need to answer, forcing them to step back and think for a moment before diving into a conversation with AI.
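The researchers don’t prescribe how such an interface would work. One hypothetical sketch of a “default questions” gate that forces a moment of reflection before a prompt is sent (the questions and the function name `gated_prompt` are invented for illustration):

```python
# Hypothetical "step back" gate: collect brief answers to reflection
# questions before a prompt is sent to the AI. Questions are illustrative.
REFLECTION_QUESTIONS = [
    "What do I already know about this problem?",
    "What exactly do I want the AI to do?",
    "How will I check whether its answer is right?",
]


def gated_prompt(user_prompt: str) -> dict:
    """Ask the user each reflection question, then bundle the answers with
    the prompt so they can be logged alongside the eventual AI call."""
    reflection = {q: input(q + " ") for q in REFLECTION_QUESTIONS}
    return {"prompt": user_prompt, "reflection": reflection}
```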
“For novices especially — but, really, for everyone in a company — leaders need to think about the best ways to set up workflows and training so that employees can upskill themselves while using generative AI in efficient and useful ways,” Kellogg said.
Kate Kellogg is the David J. McGrath Jr. Professor of Management and Innovation at the MIT Sloan School of Management. Her research focuses on helping knowledge workers and organizations develop and implement predictive and generative AI products to improve decision-making, collaboration, and learning.
Hila Lifshitz is a professor of management at Warwick Business School and a faculty affiliate at the Harvard University Laboratory for Innovation Science. Her research focuses on understanding scientific and technological innovation and knowledge creation processes in the digital age.
The paper’s other authors are Steven Randazzo, a visiting research fellow at the Harvard University Laboratory for Innovation Science and a PhD candidate at the University of Warwick; Fabrizio Dell’Acqua, a postdoctoral researcher at Harvard Business School; Ethan Mollick, SM ’04, PhD ’10, a professor at the University of Pennsylvania; François Candelon, an executive fellow on AI adoption at the Digital Data Design Institute at Harvard University; and Karim R. Lakhani, a professor at Harvard Business School.