Electricity changed the course of industrialization, but when it was first introduced in factories at the turn of the 20th century, there wasn’t much of a transformation. Factory floor layouts remained the same, and while the waterwheels outside were removed, inside the same pulley systems were still being used, only now they were driven by the new power source.
Less than 100 years later, the internet did the same thing to commerce. Instead of considering how this global network could transform the economy, it was used to make the existing way of doing business just a bit more efficient. “When new technologies come along, it’s hard to figure out what kind of impact they’re going to have in the world,” said Dan Huttenlocher, the inaugural dean of the MIT Stephen A. Schwarzman College of Computing. “And I think that’s certainly where we are with respect to AI today.”
Huttenlocher would know. He was part of a team that built one of the first self-driving cars, developed an algorithm for scanning documents that is still in use today, and worked as a scientist at Xerox’s Palo Alto Research Center for 12 years.
Now, Huttenlocher leads a school with the mission of addressing “the opportunities and challenges presented by the ubiquity of computing … perhaps most notably illustrated by the rise of artificial intelligence.”
“We're really still fairly early in both the rise of digital technologies broadly — not just AI — and certainly of their impact,” Huttenlocher said. “In many ways so much of the technological advancement has yet to happen.”
In an interview, Huttenlocher discussed leadership in a digital age, workforce impacts, and the responsibilities and ethics of companies operating in the AI era.
How can global business leaders meet the challenge of taking advantage of the digital age?
I think that the great advantage of a good business leader or a manager in a company is that they understand their business and their market really well. The challenge is also that you understand your business well, so when a new technology comes along, it's harder to think about how this technology might really change the way you do your business.
It takes the right kind of leader who's willing to really say, “OK, we've made the small changes that this new technology enables, and that still puts us behind where we could be in terms of what we could do for the business” … but also paying attention to competitors and saying, “Well, these new upstarts are taking advantage of this technology in ways that we're not.”
Microsoft is an example. That company started in the era of shrink-wrapped software, stuff where you got a box at a store, and you figured out how to install the software from the disc in the box onto your computer. Their intermediate model was that you downloaded the software over the internet. [CEO Satya Nadella] completely transformed Microsoft over the last five years or so into a company that is fundamentally and integrally developing software-as-a-service online.
That took a change of leadership at the company. It took somebody who understood the company really well. It took a leader who knew the status quo was going to be a declining business for them, and that nobody really understood what the future would look like in this new architecture. But he was willing to really energize the company toward understanding what that future was going to look like.
How is artificial intelligence affecting the workforce?
[To use a manufacturing example] if you just think about how modern large-scale manufacturing works today, it used to be that the human managers of a factory could and needed to plan the material flows — the raw materials coming in, the finished products going out and how they were going to meet the delivery needs of the salesforce that was selling the stuff that was coming out of the factory.
You could still in principle run a modern factory that way, but it would probably have something like 10% of the output. Without all of these enterprise resource planning (ERP) systems, and also the factory planning and management systems, the management would be a bottleneck, not the labor, which is a really interesting thing about these factories. If the electronic systems stop working, the labor is idled. And it's not easy stuff to scale: just hiring 10 times as many people to plan how material moves through the factory doesn't scale very well. This is all pre-AI.
It's a bunch of decision-making type things, and AI is going to take that to a whole new level. But what's interesting is that we didn't really replace that many jobs there. It's not like there was an army of planners in these factories; it's just that the factories weren't as efficient. The new ERP technology allowed them to get much more efficient, produce much more, and so on, and I think AI will take that to the next level.
MIT’s stated goal with the college is to strengthen its position in the “responsible and ethical evolution of technologies” that will transform society. Why should global business leaders care about ethical computing and responsibility to customers?
Computing technology affects people's lives so directly today, and we just see it everywhere. We have the social media example, where the initial view was that it was going to make society a better, more well-connected place. And it's made it kind of a less pleasant, more well-connected place in many regards.
There are aspects of social media that are very good — you can keep in touch with people that you weren't necessarily keeping in touch with. We shouldn't lose those things. But there are also very unanticipated negative outcomes.
I would say the best analogy that we have is health care, because health care is a place where research results very quickly get into practice, and the practice directly affects happiness, longevity, all kinds of things. That's happening increasingly in the computing domain, so to me, saying “I don't care about the ethical and societal questions in computing” would be like saying “I don't care about ethical and societal questions in health care.”
There are always irresponsibly managed companies, governments, any kind of organization, but most companies care about their customers having a good experience and wanting to come back. If you're abusive to your customers, it's a one-shot deal. I do think that focus on the customer, on the customer experience, and on the repeat customer is really super important.
How should executives and managers answer the ethical questions about computing and artificial intelligence?
I think any time a business is using computing technology to make decisions that affect its employees or its customers — with little or no direct human oversight of those decisions — those are the places where you have to look. And it's not just that you have to make sure that those [decisions] aren't bad; sometimes they have the opportunity to be better.
For example, look at things like racial bias in bank lending. Making sure that your many thousands of employees are not exhibiting racial bias in their decisions is actually harder than having an appropriately trained and developed decision-making algorithm.
So I think there really are opportunities to do better, but there are certainly also opportunities to do worse. If you had a machine-learning algorithm and you trained it on your biased humans, then it would behave like your biased humans, and you would have encoded that bias.
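The mechanism Huttenlocher describes can be sketched in a toy simulation (everything here — the group labels, score thresholds, and bucket sizes — is hypothetical, invented only for illustration): a trivial model fit to decisions made by a biased human process reproduces the same disparity, because the bias is baked into the training labels.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical "biased humans": loan officers who apply a harsher
# credit-score threshold to group B than to group A.
def biased_human_decision(score, group):
    threshold = 600 if group == "A" else 650
    return score >= threshold

# Synthetic applicants: (credit score, group label).
applicants = [(random.randint(500, 749), random.choice("AB"))
              for _ in range(10_000)]
labels = [biased_human_decision(score, group) for score, group in applicants]

# "Train" a trivial model on the human-labeled data: learn the empirical
# approval rate in each (group, score-bucket) cell.
counts = defaultdict(lambda: [0, 0])  # cell -> [approvals, total]
for (score, group), approved in zip(applicants, labels):
    cell = (group, score // 50)
    counts[cell][0] += approved
    counts[cell][1] += 1

def model_approves(score, group):
    approvals, total = counts[(group, score // 50)]
    return approvals / total >= 0.5

# Audit the learned model: two applicants with the same score of 625
# are treated differently by group, because the training labels were.
print(model_approves(625, "A"))  # True
print(model_approves(625, "B"))  # False
```

The audit at the end is the point: running exactly this kind of same-inputs, different-group check routinely is what makes an appropriately developed algorithm easier to monitor for bias than thousands of individual decision-makers.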
But I think the broader point is you want to make sure that in making these automated decisions that you're treating people well, fairly, that their experience is not just good in the moment ... but that when people reflect on it, it feels like a fair process.
This interview has been edited and condensed.