In late March, the second annual Social Media Summit@MIT, a virtual gathering hosted by the MIT Initiative on the Digital Economy, brought together experts to discuss fake news, misinformation, algorithmic transparency, and related topics.
Participants included former Facebook data engineer and scientist Frances Haugen, Massachusetts state senator Rebecca Rausch, and MSNBC political analyst Richard Stengel. Also among the invited contributors were two MIT alumnae.
Natalia Levina, PhD ’01, professor of information systems at NYU Stern School of Business, joined David Austin Professor of Management Sinan Aral, PhD ’07, to discuss “The Information War in Ukraine” on an afternoon panel. Subsequently, Rumman Chowdhury, SB ’03, director of Twitter’s ML Ethics, Transparency, and Accountability team, spoke with senior lecturer Renée Richardson Gosline about “Responsible AI” and what its adoption means for industry.
Being active skeptics, not cynics
A native of Kharkiv, Ukraine, located less than twenty miles from the Russian border, Levina drew on her personal experience and her role as a teacher and researcher to make several pertinent observations about social media and Russia’s invasion of her homeland. “This crisis has put direct friendships and relationships back into social media,” she said.
While the social media world “thrives on impersonal or pseudo-personal connections,” Levina explained that, in the Russian invasion of Ukraine, “we’re building these relationships with people we actually know but haven’t talked to for thirty years.” She characterized this “revival of interpersonal networks” through social media as “extremely strong,” adding: “It’s where opportunities for help are going through.”
As for the ongoing information war between Russia, Ukraine, and their respective allies on social media platforms like TikTok and encrypted messaging systems like Telegram, Levina offered a single piece of advice: “We have to be active skeptics, not cynics.”
“To have several significantly different sources of information, and to question things, is what I hope we do and what I hope we teach our kids,” she said. “We have to choose who we can actually trust and not be cynical. This balance between skepticism and cynicism would serve everybody well because there is a real war going on and people are dying. We should not doubt that. Instead, we should try to understand more about it before deciding to trust a single source.”
Making responsible AI a core business value
Turning to industry and its increasing development and use of algorithms and AI, Chowdhury both praised Silicon Valley companies for their culture of optimism and criticized them for "solving the problems they see in front of their faces" and "solving the problems of the privileged."
“A lot of the people in this universe do not have the day-to-day problems of trying to live paycheck to paycheck, trying to feed their families, trying to figure out how to get adequate healthcare on a budget—problems that, frankly, most of America faces,” she said. “Silicon Valley does offer the ability to solve problems. The question is, what problems are we solving?”
Chowdhury argued that AI practitioners should adopt and implement a culture of algorithmic accountability and responsible AI—one that bakes concerns about the impacts of machine learning into the development process itself, instead of waiting until the last minute.
“Carve out meaningful room for responsible AI practices, not as a feel-good function but as a core business value. There is a lot to be said about prioritizing that kind of work, not just in word but in structure and practice,” she said.
"It does not have to mean investing in some complicated systems. It could just mean asking questions like, 'How do we get better community engagement? How do we stress test the models we are building through a different lens? How do we understand how they work in different communities?' I guarantee that you will have a data scientist or someone else in your organization who is very interested in these questions and wants to get involved."
Over 12,000 virtual attendees tuned in to watch the livestream of this year’s summit, which is still available on demand. Videos of the individual sessions will be made available on other platforms at a future date.