Ideas Made to Matter

When the automatons explode


As automation becomes cheaper and robotics innovation accelerates, how we work and who we work with will change. In this excerpt from their new book, “Machine, Platform, Crowd: Harnessing Our Digital Future” (W.W. Norton & Company), MIT Sloan’s Andrew McAfee and Erik Brynjolfsson identify five areas driving automation and consider where humans fit in the new world of work.

San Francisco-based fast-casual restaurant Eatsa — where customers order, pay for, and receive meals without encountering any employees — wants to do more than virtualize the task of ordering meals; it also wants to automate how they’re prepared.

Food preparation in its kitchens is highly optimized and standardized, and the main reason the company uses human cooks instead of robots is that the objects being processed — avocados, tomatoes, eggplants, and so on — are both irregularly shaped and not completely rigid.

These traits present no real problems for humans, who have always lived in a world full of softish blobs. Most of the robots created so far, however, are much better at handling things that are completely rigid and do not vary from one to the next.

This is because robots’ senses of vision and touch have historically been quite primitive — far inferior to ours — and proper handling of a tomato generally entails seeing and feeling it with a lot of precision. It’s also because it’s been surprisingly hard to program robots to handle squishiness — here again, we know more than we can tell — so robot brains have lagged far behind ours, just as their senses have.

But they’re catching up — fast — and a few robot chefs have already appeared. At one restaurant in China’s Heilongjiang Province, stir-fries and other wok dishes are cooked over a flame by an anthropomorphic purple robot, while humans still do the prep work. 

At the Hannover Messe Industrial Trade Fair in April 2015, the U.K. company Moley Robotics introduced a highly automated kitchen, the centerpiece of which was a pair of multi-jointed robotic arms that descended from the ceiling. These arms emulated movements made by master chefs as they prepared their signature dishes.

At the fair, the arms whipped up a crab bisque developed by Tim Anderson, a winner of the UK’s televised “MasterChef” competition. 

One online reviewer said of the dish, “It’s good. If I was served it at a restaurant I wouldn’t bat an eye.”

Here again, though, food preparation had to be done by a human, and the robot arms had no eyes, so they would fail if any ingredients or utensils were not exactly where they were expected to be.

The most advanced robot cook the two of us have seen is the hamburger maker developed by Momentum Machines, a startup funded by venture capitalist Vinod Khosla.

It takes in raw meat, buns, condiments, sauces, and seasonings, and converts these into finished, bagged burgers at rates as high as 400 per hour. The machine does much of its own food preparation, and to preserve freshness it does not start grinding, mixing, and cooking until each order is placed. It also allows diners to greatly customize their burgers, specifying not only how they’d like them cooked, but also the mix of meats in the patty. We can attest to their deliciousness.

DANCE of the robots

These automatic chefs are early examples of what Gill Pratt, the CEO of the Toyota Research Institute (and our former MIT colleague) calls an unfolding “Cambrian Explosion” in robotics. The original Cambrian Explosion, which began more than 500 million years ago, was a relatively brief period of time during which most of the major forms of life on Earth — the phyla — appeared. Almost all the body types present on our planet today can trace their origins back to this burst of intense evolutionary innovation.

Pratt believes we’re about to experience something similarly transformative with robotic innovation.

As he wrote in 2015, “Today, technological developments on several fronts are fomenting a similar explosion in the diversification and applicability of robotics. Many of the base hardware technologies on which robots depend — particularly computing, data storage, and communications — have been improving at exponential growth rates.”

One of the most important enablers of the Cambrian Explosion was vision — the moment when biological species first developed the ability to see the world. This opened up a massive new set of capabilities for our ancestors. Pratt makes the point that we are now at a similar threshold for machines. For the first time in history, machines are learning to see, and thereby gain the many benefits that come with vision.

Our conversations and investigations point to recent major developments in five parallel, interdependent, and overlapping areas: data, algorithms, networks, the cloud, and exponentially improving hardware. We remember them by using the acronym “DANCE.” 


Data.

Music CDs, movie DVDs, and web pages have been adding to the world’s stock of digitally encoded information for decades, but in the past few years the rate of creation has exploded. IBM estimates, in fact, that 90 percent of all the digital data in the world was created within the last twenty-four months. Signals from sensors in smartphones and industrial equipment, digital photos and videos, a nonstop global torrent of social media, and many other sources combine to put us in an era of “big data” that is without precedent.
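The IBM estimate implies a striking growth rate. A quick sanity check of the arithmetic (our sketch, not a figure from the book):

```python
# If 90% of today's data was created in the last 24 months, then the
# stock two years ago was only the remaining 10% of today's total —
# i.e., the world's data grew roughly tenfold in two years.
total_now = 1.0          # normalize today's total stock to 1
share_recent = 0.90      # IBM's estimate for the last 24 months
stock_two_years_ago = total_now * (1 - share_recent)
growth_factor = total_now / stock_two_years_ago
print(round(growth_factor, 1))  # -> 10.0
```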


Algorithms.

The data deluge is important because it supports and accelerates the developments in artificial intelligence and machine learning described in the previous chapter. The algorithms and approaches that are now dominating the discipline — ones like deep learning and reinforcement learning — share the basic property that their results get better as the amount of data they’re given increases.

The performance of most algorithms usually levels off, or “asymptotes,” at some point, after which feeding them more data improves results very little or not at all. But this does not yet appear to be the case for many of the machine learning approaches in wide use today. Andrew Ng told us that with modern algorithms, “Moore’s law and some very clever technical work keep pushing the asymptote out.”
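The “asymptote” idea can be shown with a toy learning curve. This is purely illustrative — the functional form, ceiling, and scale below are invented for the sketch, not drawn from any real system:

```python
import math

def toy_accuracy(n_examples, ceiling=0.95, scale=1000):
    """Hypothetical learning curve: accuracy climbs quickly with more
    training data, then flattens toward a ceiling (the 'asymptote')."""
    return ceiling * (1 - math.exp(-n_examples / scale))

# Each tenfold increase in data buys a smaller and smaller gain.
for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} examples -> accuracy {toy_accuracy(n):.3f}")
```

Ng’s point is that for today’s deep learning systems, the practical ceiling keeps moving up as compute and data grow, so the flattening has not yet arrived.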


Networks.

Technologies and protocols for communicating wirelessly over both short and long distances are improving rapidly. Both AT&T and Verizon, for example, announced 2016 trials of wireless 5G technology with download speeds as high as 10 gigabits per second. This is fifty times faster than the average speed of LTE networks (the fastest networks currently in wide deployment), and LTE is itself ten times faster than the previous generation, 3G technology. Such speed improvements mean better and faster data accumulation, and they also mean that robots and flying drones can be in constant communication and thus coordinate their work and react together on the fly to quickly changing circumstances.
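Working backward from the multiples quoted above gives a feel for what each generation delivers. A back-of-envelope sketch (the 5 GB file size is our illustrative assumption):

```python
# Generational speed chain implied by the text:
# 5G trial ~10 Gbit/s, which is 50x average LTE, which is 10x 3G.
five_g_gbps = 10.0
lte_gbps = five_g_gbps / 50
three_g_gbps = lte_gbps / 10

# Time to download a hypothetical 5 GB file (8 bits per byte).
file_gbits = 5 * 8
for name, gbps in (("5G", five_g_gbps), ("LTE", lte_gbps), ("3G", three_g_gbps)):
    print(f"{name:>3}: {gbps:g} Gbit/s -> {file_gbits / gbps:.0f} s")
```

The jump from minutes on 3G to seconds on 5G is what makes constant, high-bandwidth coordination among robots and drones plausible.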

The cloud.

An unprecedented amount of computing power is now available to organizations and individuals. Applications, blank or preconfigured servers, and storage space can all be leased for a long time or rented for a few minutes over the internet. This cloud computing infrastructure, largely less than a decade old, accelerates the robotic Cambrian Explosion in three ways.

First, it greatly lowers barriers to entry, since the kinds of computing resources that were formerly found only in great research universities and multinationals’ R&D labs are now available to startups and lone inventors.

Second, it allows robot and drone designers to explore the important trade-off of local versus central computation: which information-processing tasks should be done in each robot’s local brain, and which should be done by the great global brain in the cloud? It seems likely that the most resource-intensive work, such as replaying previous experiences to gain new insights from them, will be done in the cloud for some time to come.

Third, and perhaps most important, the cloud means that every member of a robot or drone tribe can quickly know what every other member does.

As Pratt puts it, “Human beings take decades to learn enough to add meaningfully to the compendium of common knowledge. However, robots not only stand on the shoulders of each other’s learning, but can start adding to the compendium of robot knowledge almost immediately after their creation.”

An early example of this kind of universal “hive mind” is Tesla’s fleet of cars, which share data about the roadside objects they pass. This information sharing helps the company build over time an understanding of which objects are permanent (they’re the ones passed in the same spot by many different cars) and thus very unlikely to run out into the middle of the road.

Exponential improvements in digital hardware.

Moore’s law — the steady doubling in integrated circuit capability every eighteen to twenty-four months — celebrated its fiftieth anniversary in 2015, at which time it was still going strong. Some have suggested recently that the law is running up against the limits of physics and thus the doubling will increasingly slow down in the years to come. This may be true, but even if the tech industry’s scientists and engineers can’t figure out how to etch silicon ever more finely in future decades, we are confident that we’ll continue to enjoy simultaneously lower prices and higher performance from our digital gear — processors, memory, sensors, storage, communications, and so on — for a long time to come.
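The cumulative effect of fifty years of doubling is easy to understate. A rough illustration of the arithmetic (our sketch, using the doubling intervals quoted above):

```python
# Compound growth under Moore's law: capability doubles every
# 18 to 24 months, sustained over 50 years.
def capability_multiple(years, months_per_doubling):
    doublings = years * 12 / months_per_doubling
    return 2 ** doublings

for months in (18, 24):
    mult = capability_multiple(50, months)
    print(f"doubling every {months} months over 50 years -> ~{mult:.2e}x")
```

Even at the slower 24-month pace, that is 25 doublings, or a factor of more than 33 million — which is why even a marked slowdown still leaves enormous room for cheaper, better hardware.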

How can this be? Chris Anderson, CEO of drone maker 3D Robotics, gave us a vivid illustration of what’s going on in the drone industry and, by extension, in many others.

He showed us a metal cylinder about 1 inch in diameter and 3 inches long and said, “This is a gyro sensor. It is mechanical, it cost $10,000, it was made in the nineties by some very talented ladies in an aerospace factory and hand-wound, et cetera. And it takes care of one axis of motion. On our drones we have twenty-four sensors like this. That would have been $10,000 each. That would have been $240,000 of sensors, and by the way, it would be the size of a refrigerator. Instead, we have a tiny little chip or a few tiny little chips that cost three dollars and are almost invisible.”
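Anderson’s numbers make the cost collapse concrete. Tallying the figures from his quote (the per-chip price is his approximate three dollars):

```python
# Gyro-sensor arithmetic from Anderson's example.
sensors_per_drone = 24
mechanical_unit_cost = 10_000   # 1990s hand-wound aerospace gyro, per axis
chip_cost = 3                   # modern chip equivalent (approximate)

mechanical_total = sensors_per_drone * mechanical_unit_cost
print(f"mechanical sensor suite: ${mechanical_total:,}")   # the $240,000 figure
print(f"cost ratio vs. a $3 chip: {mechanical_total // chip_cost:,}x")
```

An eighty-thousand-fold price difference per sensor — before even counting the refrigerator-sized volume saved.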

Anderson’s point is that the combination of cheap raw materials, mass global markets, intense competition, and large manufacturing scale economies is essentially a guarantee of sustained steep price declines and performance improvements.

He calls personal drones the “peace dividend of the smartphone wars, which is to say that the components in a smartphone — the sensors, the GPS, the camera, the ARM core processors, the wireless, the memory, the battery — all that stuff, which is being driven by the incredible economies of scale and innovation machines at Apple, Google, and others, is available for a few dollars. They were essentially ‘unobtainium’ 10 years ago. This is stuff that used to be military industrial technology; you can buy it at RadioShack now.”

Together, the elements of DANCE are causing the Cambrian Explosion in robots, drones, autonomous cars and trucks, and many other machines that are deeply digital. Exponentially cheaper gear enables higher rates of innovation and experimentation, which generate a flood of data. This information is used to test and refine algorithms, and to help systems learn. The algorithms are put into the cloud and distributed to machines via robust networks. The innovators do their next round of tests and experiments, and the cycle continues.

Where the work is dull, dirty, dangerous, and dear

How, then, will robots, drones, and all the other digital machines that move in the physical world spread throughout the economy? What roles will they assume in the coming years? The standard view is that robots are best suited for work that is dull, dirty, and dangerous. We would add to this list one more “D” — namely, “dear,” or expensive. The more of these attributes a given task has, the more likely it is to be turned over to digital machines.

Visiting construction sites to check on progress is an excellent example. These sites are usually dirty and sometimes dangerous, and the work of ensuring that the job is being done according to plan, dimensions are correct, lines are plumb, and so on can be dull. It’s worth it, however, to regularly send a person to the site to perform these checks because small mistakes can amplify over time and become very expensive. It seems, though, that this work could soon be automated.

In the fall of 2015 the ninety-five-year-old Japanese firm Komatsu, the second largest construction equipment company in the world, announced a partnership with the US drone startup Skycatch. The American company’s small aerial vehicles would fly over a site, precisely mapping it in three dimensions. They would continuously send this information to the cloud, where software would match it against the plans for a site and use the resulting information to direct an autonomous fleet of bulldozers, dump trucks, and other earth-moving equipment.

Agriculture, too, could soon be transformed by drones. Chris Anderson asked us to imagine a farm where drones fly over the fields every day, scanning them in the near-infrared wavelengths of light. These wavelengths provide a great deal of information about crop health, and current drone sensors are accurate enough to assess each square foot of land separately (and, given exponential improvement in the sensors, soon it will probably be possible to look at each plant individually).

Flying a plane over the fields every day would be both dull and dear, but both of these barriers vanish with the arrival of small, cheap drones. Information gained from these daily flyovers enables a much deeper understanding of change over time with a given crop, and also enables much more precise targeting of water, fertilizer, and pesticides. Modern agricultural equipment often has the capability to deliver varying amounts of these critical ingredients foot by foot, rather than laying down a uniform amount. Drone data helps make the most of this capability, enabling farmers to move deeper into the era of precision agriculture.

It’s likely that drones will soon also be used by insurance companies to assess how badly a roof was damaged after a tornado, to help guard herds of endangered animals against poaching and remote forests against illegal logging, and for many other tasks. They’re already being used to inspect equipment that would be dull, dirty, dangerous, or dear to get to. Sky Futures, a UK company, specializes in flying its drones around oil rigs in the North Sea, where metal and cement are no match over time for salt water and harsh weather. Sky Futures’ drones fly around and through structures in all conditions so that human roughnecks don’t have to climb and dangle from them in order to see what’s going on.

We see this pattern — machines assuming the dull, dirty, dangerous, or dear tasks — over and over at present:

  • In 2015, Rio Tinto became the first company to utilize a fleet of fully remote-controlled trucks to move all the iron ore at its mine in Western Australia’s Pilbara region. The driverless vehicles run twenty-four hours a day, 365 days a year and are supervised from a control center a thousand miles away. The savings from breaks, absenteeism, and shift changes enable the robotic fleet to be 12 percent more efficient than the human-driven one.
  • Automated milking systems milk about one-quarter of the cows in leading dairy countries such as Denmark and the Netherlands today. Within ten years, this figure is expected to rise to 50 percent.
  • Ninety percent of all crop spraying in Japan is currently done by unmanned helicopters.

Of course, this pattern of machines taking over tasks has been unfolding for many decades inside factories, where engineers can achieve high levels of what our MIT colleague David Autor calls “environmental control,” or “radically simplify[ing] the environment in which machines work to enable autonomous operation, as in the familiar example of a factory assembly line.”

Environmental control is necessary when pieces of automation have primitive brains and no ability to sense their environments. As all the elements of DANCE improve together, however, pieces of automation can leave the tightly controlled environment of the factory and head out into the wide world.

This is exactly what robots, drones, autonomous vehicles, and many other forms of digital machines are doing at present. They’ll do much more of it in the near future.

What humans do in a world full of robots

How will our minds and bodies work in tandem with these machines? There are two main ways. First, as the machines are able to do more work in the physical world, we’ll do less and less of it, and instead use our brains for creative endeavors, and for work that requires empathy, leadership, teamwork, and coaching. This is clearly what’s happening in agriculture, humanity’s oldest industry.

Working the land to bring forth a crop has long been some of the most labor-intensive work done by people. It’s now some of the most knowledge-intensive.

As Brian Scott, an Indiana farmer who writes the blog The Farmer’s Life, puts it, “Do you think when my grandfather was running ... harvesters and combines ... he could’ve imagined how ... today’s machines would be ... driving themselves via invisible GPS signals while creating printable maps of things like yield and grain moisture? Amazing!”

Similarly, workers in the most modern factories no longer need to be physically strong and hardy. Instead, they need to be comfortable with both words and numbers, adept at troubleshooting problems, and able to work as part of a team.

The second way people will work with robots and their kin is, quite literally, side by side with them. Again, this is nothing new; factory workers have long been surrounded by machines, often working in close quarters with them. Our combination of sharp minds, acute senses, dexterous hands, and sure feet has not yet been matched by any machine, and it remains a hugely valuable combination. Andy’s favorite demonstration of it came on a tour of the storied Ducati motorcycle factory in Bologna, Italy. Ducati engines are particularly complex, and he was interested to see how much automation was involved in assembling them. The answer, it turned out, was almost none.

Each engine was put together by a single person, who walked alongside a slow-moving conveyor belt. As the belt passed by the engine parts that were needed at each stage of assembly, the worker picked them up and put them where they belonged, fastening them in place and adjusting as necessary. Ducati engine assembly required locomotion, the ability to manipulate objects in a variety of tight spaces, good eyesight, and a highly refined sense of touch. Ducati’s assessment was that no automation possessed all of these capabilities, so engine assembly remained a human job.

Similar capabilities are required in the warehouses of many retailers, especially those like Amazon that sell products of all shapes, sizes, and consistencies. Amazon has not yet (at least as of when we are writing) found or developed a digitally driven hand or other “grabber” that can reliably pick all products off the shelf and put them in a box.

So the company has hit on a clever solution: it brings the shelves to a human, who grabs the right products and boxes them for shipment. Racks of shelves are whisked around the company’s huge distribution centers by knee-high orange robots originally made by Boston-based Kiva Systems (Kiva was bought by Amazon in 2012).

These robots scoot underneath a rack, lift it up, and bring it to a stationary human. When this person has taken the items needed, the rack-and-robot unit scoots away, and another one takes its place. This arrangement allows the people to use their skills of vision and dexterity, where they have an advantage over machines, and avoid the physical exertion and lost time that comes from walking from one shelf to another.

How much longer will we maintain our advantages over robots and drones? It’s a hard question to answer with any confidence, especially since the elements of DANCE continue to advance individually and together.

It seems, though, that our senses, hands, and feet will be a hard combination for machines to beat, at least for a few more years. Robots are making impressive progress, but they’re still a lot slower than we are when they try to do humanlike things. After all, our brains and bodies draw on millions of years of evolution, which rewarded the designs that best solved the problems posed by the physical world.

When Gill Pratt was a program manager at DARPA, the U.S. Defense Department’s R&D lab, he oversaw its 2015 robotics challenge. Its automaton contestants moved at such a careful pace that he compared being a spectator at the competition to watching a golf match. Still, this represented a big improvement over the original 2012 competition. Watching that one, according to Pratt, was more like watching grass grow. 
