More News from IWER
Why human oversight of AI isn't always enough
Professor Kate Kellogg said: "The fascinating pattern we found in this paper is that when the consultants tried to push back, the AI didn't concede. Instead, it intensified its arguments and shifted its rhetorical strategy. It wasn't merely providing information. It was actively trying to persuade the consultants."
4 top takeaways from MIT's 2025 CFO Summit
The dawn of AI has thrown some of the process issues that have long faced businesses into sharper relief, professor Nelson Repenning said during a keynote speech. He added that while large, established organizations may have plans to use new tools to free up time and money, they tend to stumble on the change management aspect — in part because they lack a clear view of their teams' workflows.
What AI gets right in cybersecurity — and what it can fix
Principal research scientist Keri Pearlson said: "I think in 2026, we will see managers get more control over the AI environments that they hope to bring into their organizations."
Here's why concerns about an AI bubble are bigger than ever
"These models are being hyped up, and we're investing more than we should," said Institute Professor Daron Acemoglu. "The danger is that these kinds of deals eventually reveal a house of cards."
AI is transforming politics, much like social media did
Assistant professor Chara Podimata and a co-author wrote: "Large language models (LLMs) like ChatGPT, Claude, and Gemini, among others, are becoming the new vessels (and sometimes, arbiters) of political information. Our research suggests their influence is already rippling through our democracy. These models may appear neutral — politically unbiased, and merely summarizing facts from different sources found in their training data or on the internet. At the same time, they operate as black boxes, designed and trained in ways users can't see."
Who benefits the most from retirement savings 'nudges'?
In this podcast interview, assistant professor Taha Choukhmane said: "We've had some early evidence that nudging people gets them to put more money in their retirement account. But whether that's translating to actual additional savings is going to depend on how people finance those increases."
What jobs will be most affected by AI?
In this podcast episode, associate professor Lawrence Schmidt said: "If it looks like technology can help save you time in a subset of the tasks that you perform, this potentially dampens the blow associated with technological progress. We find evidence that firms that are employing workers who take advantage of new technology are becoming more productive."
How to turn impostor syndrome into an advantage
Research by assistant professor Basima Tewfik and co-authors revealed that employees with more frequent workplace impostor thoughts were often seen as more interpersonally effective. The very doubt that makes someone question their competence may drive them to listen more intently, collaborate more genuinely, and seek help more readily.
When LLMs write: Social media, advertising and authorship
Professor Sandy Pentland said: "The current business model is engagement. The social media you and I remember from 30 years ago didn't have that business model. AOL didn't care if you spent 15 more minutes in one discussion space, versus another discussion space. Now the business model is to maximize engagement, so that you can maximize advertisement."
Our research shows it's a profound strategic error to cut entry-level jobs — those workers are likely to get the best results from AI
Research scientist Frank Nagle wrote: "Junior employees are typically innovative and technically adept, and in tune with a new generation of customers. More importantly, they become tomorrow’s managers and leaders. Cutting them off not only silences crucial perspectives but also creates a long-term deficit in institutional knowledge, breaking the chain of skills that develops as employees grow within a company."