Kate Kellogg


Kate Kellogg is the David J. McGrath jr Professor of Management and Innovation and a Professor of Business Administration at the MIT Sloan School of Management.

Kate's research focuses on helping knowledge workers and organizations develop and implement predictive and generative AI products on the ground in everyday work, to improve decision making, collaboration, and learning. She shows how organizations can gain user acceptance and effective use of intelligent products and services by including users in the technology design process, providing training to give employees the skills they need to work with intelligent technologies, and designing the technologies with employees in mind.

She has authored dozens of articles that have appeared in top journals across the fields of management, organization studies, healthcare, sociology, work and employment, and information systems research. Her research has won awards from the Academy of Management, the American Sociological Association, the Alfred P. Sloan Foundation, the Institute for Operations Research and the Management Sciences, and the National Science Foundation.

Over the past decade, Kate has partnered with for-profit and not-for-profit organizations to help improve collaboration among diverse experts, use technologies to improve internal knowledge sharing, and manage the human aspects of new technology implementation in order to thrive in fast-paced and uncertain contexts.

Before coming to MIT Sloan, Kate worked as a management consultant for Bain & Company and for Health Advances. She received her PhD in organization studies from MIT, her MBA from Harvard, and her BA from Dartmouth in biology and psychology.


Honors

Kellogg wins Jamieson Prize

May 28, 2021

Publications

"Organizational Learning with Generative AI: New Modes of Knowledge Search, Creation, Retention, and Transfer."

Wiesenfeld, Batia Mishan and Katherine C. Kellogg. Strategic Organization. Forthcoming.

"Validating LLM Output? Prepare to Be ‘Persuasion Bombed’."

Randazzo, Steven, Akshita Joshi, Katherine C. Kellogg, Hila Lifshitz, and Karim R. Lakhani. MIT Sloan Management Review, February 3, 2026.

"Large Language Models Require a New Form of Oversight: Capability-based Monitoring."

Kellogg, Katherine C., Danielle S. Bitterman, et al. Working Paper, November 2025. SSRN.

"GenAI as a Power Persuader: How Professionals Get Persuasion Bombed When They Attempt to Validate LLMs."

Randazzo, Steven, Akshita Joshi, Katherine C. Kellogg, Hila Lifshitz-Assaf, Fabrizio Dell'Acqua, and Karim R. Lakhani. Working Paper, October 2025.


Recent Insights

Ideas Made to Matter

5 ‘heavy lifts’ of deploying AI agents

New research provides insights for using AI agents in clinical settings.

Ideas Made to Matter

Agentic AI, explained

The age of agentic AI — systems that are semi- or fully autonomous and can act on their own — has arrived. Here’s what you need to know, according to MIT experts.


Media Highlights

Inc.

AI can talk you out of being right. New research shows how

In a field study by professor Kate Kellogg and co-authors, 70 consultants from Boston Consulting Group used GPT-4 for a complex financial analysis task. Participants were asked not just to create output, but to validate that output. When they flagged errors, instead of correcting the mistake, the AI frequently responded with more detail, more data, emotional affirmation, and even more confident language, almost as if it were trying to win the argument.

MIT Sloan Management Review

Validating LLM output? Prepare to be 'persuasion bombed'

Professor Kate Kellogg and co-authors wrote: "Given the need to moderate LLMs' inaccuracies, hallucinations, and other limitations, having a human in the loop is a common AI governance approach. Our finding regarding how GenAI tends to get activated into a 'power persuader' when users seek to validate output is critical, therefore, as it suggests that the loop itself has become contested ground."

Fortune

Are you a cyborg, a centaur, or a self-automator? Why businesses need the right kind of 'humans in the loop' in AI

Professor Kate Kellogg and co-authors wrote: "To understand how companies can truly extract value from human-AI collaboration, we conducted a field experiment with 244 consultants using GPT-4 for a complex business problem-solving task. The experiment analyzed nearly 5,000 human-AI interactions to answer a critical question: When humans collaborate with GenAI, what are they actually doing—and what should they be doing?"

Charter Works

Why human oversight of AI isn't always enough

Professor Kate Kellogg said: "The fascinating pattern we found in this paper is that when the consultants tried to push back, the AI didn't concede. Instead, it intensified its arguments and shifted its rhetorical strategy. It wasn't merely providing information. It was actively trying to persuade the consultants."
