Amid widespread anxiety about automation and machines displacing workers, the idea that technological advances aren’t necessarily driving us toward a jobless future is good news.
At the same time, “many in our country are failing to thrive in a labor market that generates plenty of jobs but little economic security,” MIT professors David Autor and David Mindell and principal research scientist Elisabeth Reynolds write in their new book “The Work of the Future: Building Better Jobs in an Age of Intelligent Machines.”
The authors lay out findings from their work chairing the MIT Task Force on the Work of the Future, which MIT president L. Rafael Reif commissioned in 2018. The task force was charged with understanding the relationships between emerging technologies and work, helping shape realistic expectations of technology, and exploring strategies for a future of shared prosperity. Autor, Mindell, and Reynolds worked with 20 faculty members and 20 graduate students who contributed research.
Beyond looking at labor markets and job growth and how technologies and innovation affect workers, the task force makes several recommendations for how employers, schools, and the government should think about the way forward. These include investing and innovating in skills and training; improving job quality, including modernizing unemployment insurance and labor laws; and enhancing and shaping innovation by increasing federal research and development spending, rebalancing taxes on capital and labor, and applying corporate income taxes equally.
The first step toward preparing for the future is understanding emerging technologies. In the following excerpt, Autor, an economist, Mindell, a professor of aeronautics, and Reynolds, now the special assistant to the president for manufacturing and economic development, look at artificial intelligence, which is at the heart of both concern and excitement about the future of work. Understanding its capabilities and limitations is essential — especially if, as the authors write, “The future of AI is the future of work.”
To gauge the time it takes to develop and deploy AI and robotic applications, it is worth considering the nature of technological change over time. When people think of new technologies, they often think of Moore’s Law, the apparently miraculous doubling of microprocessor power, or phenomena like the astonishing proliferation of smartphones and apps in the past decades, and their profound social implications. It has become common practice among techno-pundits to describe these changes as “accelerating,” though with little agreement on how to measure that acceleration.
But when researchers look at historical patterns, they often find long gestation periods before these apparent accelerations, often three or four decades. Interchangeable parts production enabled the massive gun manufacturing of the Civil War, for example, but it was the culmination of four decades of development and experimentation. After that war, four more decades would pass before those manufacturing techniques matured enough to enable the innovations of assembly-line production. The Wright brothers first flew in 1903, but despite military applications in World War I, it was the 1930s before aviation saw the beginnings of profitable commercial transport, and another few decades before aviation matured to the point that ordinary people could fly regularly and safely. Moreover, the expected natural evolution toward supersonic passenger flight hardly materialized, while the technology evolved toward automation, efficiency, and safety at subsonic speeds — dramatic progress, but along axes other than the raw measure of speed.
More recently, the basic technologies of the internet originated in the 1960s and 1970s, then exploded into the commercial world in the mid-1990s. Even so, it is only in the past decade that most companies have truly embraced networked computing as a transformation of their businesses and processes. Task Force member Erik Brynjolfsson calls this phenomenon a “J-curve,” suggesting that the path of technological acceptance is slow and incremental at first, then accelerates to break through into broad acceptance, at least for general-purpose technologies like computing. A timeline of this sort reflects a combination of perfecting and maturing new technologies, the costs of integration and managerial adoption, and then fundamental transformations.
Though approximate, four decades is a useful period to keep in mind as we evaluate the relationship of technological change to the future of work. As the science fiction writer William Gibson famously said, “The future is already here, it’s just not evenly distributed.” Gibson’s observation links the slow evolution of mass adoption to what we can see in the world today. Rather than simply making predictions, with their inevitable bias and poor track record, we can look for places in today’s world that are leading technological change and extrapolate to broader adoption. Today’s automated warehouses likely offer a good glimpse of the future, though widespread adoption will take time (and they likely will not be representative of all warehouses). The same can be said for today’s most automated manufacturing lines, and for the advanced production of high-value parts. Autonomous cars are already 15 years into their development cycle but only beginning to achieve initial deployment; we can look at those initial deployments for clues about their likely adoption at scale. Therefore, rather than attempting to research the future, the task force took a rigorous, empirical look at technology and work today and made some educated extrapolations.
AI today, and the general intelligence of work
Most of the AI systems deployed today, while novel and impressive, still fall into the category of what task force member, AI pioneer, and director of MIT’s Computer Science and Artificial Intelligence Laboratory Daniela Rus calls “specialized AI.” That is, they are systems that can solve a limited number of specific problems. They look at vast amounts of data, extract patterns, and make predictions to guide future actions. “Narrow AI solutions exist for a wide range of specific problems,” write Rus, MIT Sloan School professor Thomas Malone, and Robert Laubacher of the MIT Center for Collective Intelligence, “and can do a lot to improve efficiency and productivity within the work world.” Such systems include IBM’s Watson, which beat human players on the American TV game show “Jeopardy!” and whose descendants have been applied in health care, and Google’s AlphaGo program, which bests human players at the game of Go. The systems we explore in insurance and health care all belong to this class of narrow AI, though they draw on different classes of machine learning, computer vision, natural language processing, and other techniques. Other systems in use today include more traditional “classic AI” systems, which represent and reason about the world with formalized logic. AI is not a single thing but rather a variety of different AIs, in the plural, each with different characteristics, none of which necessarily replicates human intelligence.
Specialized AI systems, through their reliance on largely human-generated data, excel at producing behaviors that mimic human behavior on well-known tasks. They also incorporate human biases. They still have problems with robustness, the ability to perform consistently under changing circumstances (including intentionally introduced noise in the data), and trust, the human belief that an assigned task will be performed correctly every single time. “Because of their lack of robustness,” write Malone, Rus, and Laubacher, “many deep neural nets work ‘most of the time’ which is not acceptable in critical applications.” The trust problem is exacerbated by the problem of explainability: today’s specialized AI systems are unable to reveal to humans how they reach their decisions.
The ability to adapt to entirely novel situations is still an enormous challenge for AI and robotics and a key reason why companies continue to rely on human workers for a variety of tasks. Humans still excel at social interaction, unpredictable physical skills, common sense, and, of course, general intelligence.
From a work perspective, specialized AI systems tend to be task-oriented; that is, they execute limited sets of tasks rather than the full set of activities constituting an occupation. Still, all occupations have some exposure. For example, reading radiographs is a key part of radiologists’ jobs, but just one of the dozens of tasks they perform. AI in this case can allow doctors to spend more time on other tasks, such as conducting physical examinations or developing customized treatment plans. In aviation, humans have long relied on autopilots to augment their manual control of the plane; these systems have become so sophisticated at automating major phases of flight, however, that pilots can lose their manual touch for the controls, leading in extreme cases to fatal accidents. AI systems have not yet been certified to fly commercial aircraft.
Artificial general intelligence, the idea of a truly artificial human-like brain, remains a topic of deep research interest but a goal that experts agree is far in the future. A current point of debate around AGI highlights its relevance for work. MIT professor emeritus, robotics pioneer, and Task Force Research Advisory Board member Rodney Brooks argues that the traditional “Turing test” for AI should be updated. The old standard was a computer behind a wall with which a human could hold a textual conversation and find it indistinguishable from another person. This goal was achieved long ago with simple chatbots, which few argue represent AGI.
In a world of robotics, as the digital world increasingly mixes with the physical world, Brooks argues for a new standard for AGI: the ability to do complex work tasks that require other types of interaction with the world. One example might be the work of a home health aide, whose tasks include providing physical assistance to a frail person, observing their behavior, and communicating with family and doctors. Brooks’ idea, whether embodied in this particular job, a warehouse worker’s job, or other kinds of work, captures the sense that today’s intelligence challenges are problems of physical dexterity, social interaction, and judgment as much as they are of symbolic data processing. These dimensions remain out of reach for current AI, which has significant implications for work. Pushing Brooks’ idea further, we might say that the future of AI is the future of work.
Excerpted from “The Work of the Future: Building Better Jobs in an Age of Intelligent Machines” by David Autor, David A. Mindell, and Elisabeth B. Reynolds. Reprinted with permission from the MIT Press. Copyright 2022.