The Perils and Possibilities of AI

Artificial intelligence (AI) was top of mind at the 2025 MIT Sloan Reunion, and the result was a fascinating study of opposites. In two talks, MIT alumni explored the furthest reaches of AI’s possibilities, from its capacity to do harm to its potential to solve humanity’s toughest problems. Along the way, alumni learned about cutting-edge research and the widening scope of AI’s capabilities.

A “cyber battlefield” over private data

With the advent of generative AI, cyberattacks have become cheaper, easier, and more effective. In a panel titled “From Threat to Shield: How AI is Shaping the Future of Cybersecurity,” moderator Leon Bian, SDM ’05, noted that the statistics are grim.

The cost of launching a successful phishing campaign has fallen by as much as 95 percent. Since ChatGPT was introduced, phishing attacks have surged 1,200 percent. In 2024, 1.35 billion individuals were affected by data compromises, a 211 percent increase year over year. Most alarming are the “mega-breaches,” like the one at Change Healthcare and other data-centric organizations; a single mega-breach can cost an organization as much as $375 million.

“The stakes cannot be higher,” he said.

Emmy Linder, MBA ’10, a cybersecurity and business operations leader, noted that tools like ChatGPT help cyberterrorists obtain the majority of the resources they need for a phishing scheme more quickly and cheaply. Instead of a “low and slow” strategy, now “you’re in the fast and furious world, where it’s a constant attack. They’re all very sophisticated ... they’re so credible that you just easily click on them.”

Emmy Linder | MBA ’10
You’re in the fast and furious world, where it’s a constant attack. They’re all very sophisticated ... they’re so credible that you just easily click on them.

Some countries, including the United States, have legislation in place around generative AI, which provides essential guardrails for its use. “That obviously is not the case in other areas in the world, where not only are there no guardrails, but on the contrary, you can basically do as much bad as you can. Train on bad, do more bad, and keep going to the extent that you can,” she said.

The result is a deep asymmetry between attackers’ use of AI and defenders’ ability to protect against it. Linder predicted that “it will be agentic warfare on a cyber battlefield. And now the question is: how bad? And who will actually win?”

On the flip side, cybersecurity companies and solution providers are also leveraging AI to create better, more autonomous defensive tools. Anupam Sahai, SDM ’00, founder and CEO of ChukraVU Inc., predicted, “It’s going to get worse before it gets better. The current generation of tools, which are based on static signatures and rules, are going to get washed away,” adding, “Unless somebody adopts AI-based behavioral analysis and generative capability for defense, it’s going to be a huge problem.”

Taylor Reynolds, MBA ’15, technology policy director of the MIT Internet Policy Research Initiative at the Computer Science and Artificial Intelligence Laboratory (CSAIL), provided critical context. “There’s a lot of AI skepticism coming out of CSAIL. We’re in a big hype cycle with anything AI, but they’re less convinced that the models are as good as they need to be.”

That said, he noted several proactive measures to take. On the business side, “you need to be concerned about your data being fed into AI models and being used for training somewhere else, and you don’t know how that information is going to get out.”

And, much like a healthy lifestyle preempts future health problems, proactively tending to one’s digital health is essential. Citing Jen Easterly, former director of the Cybersecurity and Infrastructure Security Agency, Reynolds noted, “It’s strong passwords, multi-factor authentication. Do your patching and backups, and watch for links and clicks. Encrypt your data at rest and in transit. Segment your networks. Train your users and have incident response. If you’ve done those, you’ve got 80 percent of your security covered.”

Leon Bian | SDM ’05
The stakes cannot be higher.

AI and the potential for “collective” governance

In a talk titled “The Habermas Machine: AI Can Help Humans Find Common Ground in Democratic Deliberation,” an entirely different discussion took place. Michiel Bakker, SM ’19, PhD ’20, an assistant professor who spent years building frontier AI models in industry before returning to MIT, compared humanity’s collective intelligence to AI’s rapidly increasing capabilities.

“We are, through evolution, a very efficient, effective brain. Collective intelligence has helped us get very far in the world to build great structures, build societies, build knowledge. But there is no fundamental reason why machines can’t do the same or even much more,” he said.

There’s an ongoing debate, at MIT and elsewhere, about whether humanity is collectively on a path to artificial general intelligence (AGI). In its most extreme form, known as artificial superintelligence, such a system would vastly exceed all human minds across all domains. Bakker referenced Dario Amodei, CEO of Anthropic, who refers to this as “a million Einsteins in a data center.”

We are still far from that possibility, but researchers have historically overestimated the timeline to AGI. And there are already areas where AI exceeds human capacity: “It’s still not doing research autonomously, but it’s already answering textbook biology questions better than a biology PhD student,” Bakker noted.

AGI could theoretically deliver scientific breakthroughs and financial abundance, but it is also susceptible to misalignment and misuse, not to mention human disempowerment. Bakker, however, is most interested in its potential in the realm of global problem-solving: aligning AI with our collective human values and using it to improve collective decision-making and governance.

In other words, Bakker asked, “Can we find common ground using large language models (LLMs)?”

Michiel Bakker | SM ’19, PhD ’20, Assistant Professor
Collective intelligence has helped us get very far in the world to build great structures, build societies, build knowledge. But there is no fundamental reason why machines can’t do the same or even much more.

His research has resulted in “The Habermas Machine,” named after Jürgen Habermas, the political philosopher known as the father of deliberative democracy. Starting with a question like “Should the UK introduce universal free childcare from birth?” the Machine uses a finely calibrated set of LLMs to read participants’ differing statements and “map” them into a group statement that maximizes agreement. Participants then rate and rank candidate statements, and the system aggregates that feedback to refine the “winning” statement over successive rounds.
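For readers curious about the mechanics, below is a minimal, hypothetical sketch of the generate-rate-refine loop Bakker described. The `draft_statement` and `collect_rating` functions are stand-ins for a real LLM call and real participant feedback, not the published system’s models.

```python
# Hypothetical sketch of a Habermas Machine-style deliberation loop.
# draft_statement() and collect_rating() are stubs standing in for an
# LLM call and for human participants; swap them out to experiment.
from statistics import mean

def draft_statement(opinions, critiques):
    # Stand-in for an LLM prompt such as: "Write a group statement that
    # maximizes agreement among these opinions, addressing these critiques."
    draft = " / ".join(opinions)
    if critiques:
        draft += " [revised to address: " + "; ".join(critiques) + "]"
    return draft

def collect_rating(opinion, statement):
    # Stand-in for one participant's agreement rating (say, 1 to 10).
    return 7.0

def deliberate(opinions, rounds=2, num_candidates=3):
    critiques = []
    statement = None
    for _ in range(rounds):
        # Draft several candidate group statements...
        candidates = [draft_statement(opinions, critiques)
                      for _ in range(num_candidates)]
        # ...and keep the one with the highest mean participant rating.
        statement = max(candidates,
                        key=lambda s: mean(collect_rating(o, s) for o in opinions))
        # Participants critique the winner; critiques seed the next revision.
        critiques = [f"critique of current draft from participant {i}"
                     for i in range(len(opinions))]
    return statement

opinions = [
    "Universal free childcare would support working parents.",
    "Universal free childcare is too costly for taxpayers.",
]
print(deliberate(opinions))
```

The sketch captures only the structure of the loop; in the system Bakker described, fine-tuned LLMs handle both the drafting of candidate statements and the prediction of how individual participants will rank them.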

The research has already been tested in applied settings: Bakker and fellow researchers worked with the Alliance for Middle East Peace to find statements that Israeli and Palestinian peacebuilders might agree on. And in studies with participants, the Machine outperformed a human mediator in producing statements of agreement.

Its use in real-world scenarios is still nascent. But Bakker is hopeful.

“I’m really motivated by making sure that we have collective governance systems that we can use, both to align artificial intelligence and to make sure that it does what we collectively want—and to help us better understand the world and better make decisions about where the world should go in times of very fast AI progress,” he said.

Check out the MIT Sloan Reunion 2025 website to see more highlights and videos.

For more information, contact Andrew Husband, Sr. Associate Director, Content Strategy, OER, at (617) 715-5933.