Too Smart for Our Own Good?

Author: Jason Kelly ’95

Editor’s Note: Artificial intelligence is having a moment. Its promise — and peril — have come to the forefront of public consciousness with the advent of tools such as ChatGPT and their potential, for better or worse, to transform human life. Five years ago, AI breakthroughs were on the horizon, giving scientists, philosophers and ethicists much to ponder, as this 2018 Magazine Classic notes: “A technology that has crept forward for decades is gathering speed, causing some big brains to worry that artificial intelligence could alter the natural order of things.”


At a poker tournament last year, an artificial intelligence system called Libratus emptied the metaphorical pockets of four top professional players.

 

They weren’t playing for real money — a good thing for the humans, who divided $200,000 among themselves for their participation but otherwise would have been left with lint and loose change. The best among the four, Dong Kim, lost more than $85,000 in chips to Libratus.

 

 “I felt like I was playing against someone who was cheating, like it could see my cards,” Kim told Wired magazine. “I’m not accusing it of cheating. It was just that good.”

 

No-limit Texas Hold ’em was the game, but Carnegie Mellon University researchers developed Libratus to display decision-making agility in multiple scenarios where it would have incomplete or misleading information, such as negotiating contracts or defending against hackers.

 

In poker, Libratus succeeded, playing with algorithm-randomized unpredictability, mixing risk-taking bluffs and cautious bets in no pattern that its opponents could perceive. As the three-week tournament went on, its strategies only became more impenetrable.

 

Illustrations by Nolan Pelletier

While the pros ate and slept after each day of competition, Libratus incorporated what had happened into its database, each hand contributing to better play. If the humans spotted and exploited a weakness one day, Libratus patched it up by the next. All on its own.

 

After 20 days and 120,000 hands — 60,000 hands played twice simultaneously against two humans who held the opposite cards in order to minimize the role of luck — Libratus finished $1.7 million ahead.
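
The mirrored-deal format is worth a closer look, since it is what lets the final margin be read as skill rather than luck. Below is a small simulation sketch of the idea, with entirely made-up chip numbers and noise levels rather than figures from the tournament: playing each deal twice with the hole cards swapped cancels most of the swing that comes from the cards themselves, while a genuine edge in play survives.

```python
# Illustrative simulation (made-up numbers, not tournament data) of why mirrored
# deals reduce luck: each deal is played a second time with the hole cards
# swapped, so the chip swing caused by the cards largely cancels, while a real
# skill edge shows through.
import numpy as np

rng = np.random.default_rng(1)
n_deals = 60_000
skill_edge = 5.0                                # assumed chips won per deal from skill
card_luck = rng.normal(0, 500, n_deals)         # assumed chip swing from the cards dealt
play_noise = rng.normal(0, 100, (n_deals, 2))   # assumed seat-to-seat variation in play

# Unmirrored play: per-deal results are dominated by card luck.
single = skill_edge + card_luck + play_noise[:, 0]

# Mirrored play: the same deal is also played with the cards swapped, so the
# luck term enters once with each sign and mostly cancels across the pair.
pair_total = (skill_edge + card_luck + play_noise[:, 0]) + \
             (skill_edge - card_luck + play_noise[:, 1])
per_deal_mirrored = pair_total / 2

print("chip swing per deal, single play:  ", round(single.std(), 1))
print("chip swing per deal, mirrored play:", round(per_deal_mirrored.std(), 1))
```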

 

Losing in poker to a computer might seem minor in the scheme of things, just another entry on the long list of human failures to keep up with algorithmic competitors, a list that includes high-profile defeats of world-class players in chess, Jeopardy! and the ancient Chinese board game called Go. By now, the storylines of these exhibitions have become summer-blockbuster predictable: Overconfident humans have their hubris handed to them in a humbling introduction to a new frontier of artificial intelligence.

 

Chess master Garry Kasparov in 1997, Jeopardy! champion Ken Jennings in 2011 and Go hero Lee Sedol in 2016 all experienced this arc. Sedol lost four out of five games to a program developed by Google’s DeepMind called AlphaGo. In a documentary about that competition, directed by Greg Kohs ’88, Sedol became so unnerved at AlphaGo’s creative play that, in the middle of one match, he went outside for a smoke.

 

At times, even AlphaGo’s team of developers didn’t understand its moves, once describing it as “on tilt.” Television commentators broadcasting to hundreds of millions of viewers in Asia expressed slack-jawed disbelief at decisions that seemed to defy centuries of wisdom about the game. AlphaGo was almost always proven right.

 

Impressive as these artificial intelligence systems are, though, their capabilities have pretty strict boundaries. Within the domains where the machines are designed to be smart, they’re very smart. Beyond that, not so much.

 

Libratus could handle only head-to-head poker hands. Adding more players would have required the developers to go back under the hood and upgrade the algorithm. And AlphaGo could never identify a picture of a cat, like any toddler could, let alone perform more complex tasks outside the grid of a Go board.

 

So what’s the big deal? Nobody could beat a calculator on a math test. Usain Bolt couldn’t outrun a car on his own two feet. Why have these events seemed so momentous?

 

Because they’ve demonstrated the increasing ability of artificial intelligence systems to navigate complex, unpredictable environments, to weigh options with incomplete information, and to apply what they learn in intuitive, creative ways. To trespass, in other words, on territory thought to be reserved for humans.

 

“People don’t realize how tough it is to write that kind of program,” Jennings said in a 2013 TED Talk, “that can read a Jeopardy! clue in a natural language like English and understand all the double meanings, the puns, the red herrings, unpack the meaning of the clue.”

 

Yet that’s exactly what Watson, IBM’s AI baby, did against Jennings, mastering all the intricacies of the game, like the chess-playing Deep Blue before it, and AlphaGo and Libratus since.

 

Although each of those computer triumphs represented the culmination of painstaking human effort, they also focused public imagination — and the attention of researchers — on the potential for this technology to damage more than our pride. The capacity of machines to learn, to improve, to bewilder their opponents and creators alike, foreshadows a possible future when artificial intelligence could escape our control altogether.

 

Some really smart people have issued warnings about the destructive forces that could be unleashed in that event. People like Stephen Hawking.

 

Although Hawking acknowledged great potential benefits from advanced artificial intelligence, he also said that realizing such a technological achievement could be “the worst event in the history of our civilization.”

 

Tech icon, Tesla CEO and space-exploration visionary Elon Musk, who co-founded a safety-focused research initiative called OpenAI, sounded an alarm bell in 2014 that still reverberates. In pursuing the development of artificial intelligence that could become superior to humans, he said, “we are summoning the demon.”

 

Hawking and Musk are envisioning what’s known as “artificial general intelligence.” Not robot armies overrunning cities, but technologies that resemble those already in widespread use developing the ability to perform multiple complex tasks, like humans, except with far greater speed and skill.

 

That’s probably pretty far off — expert predictions range from a couple of decades to never — but the nature of the potential threat raises eyebrows. Advanced machines with near-perfect proficiency and autonomous ability to improve could grow so smart and efficient that humans become obsolete or, worse, obstacles to be overcome in a system’s pursuit of its goals.

 

It’s not that they’ll turn on us, Terminator-style. As Hawking put it, “the real risk with AI isn’t malice but competence.” If machines already are good enough to beat the world’s best humans at complicated games, imagine their potential when they possess multifaceted skills and even greater intelligence.

 

Couldn’t you just turn it off? Not necessarily. Shutting down an AI would prevent it from achieving its goals and, with its superior intellect, it could disable that option. How to avert such situations is an active area of study in the development of what researcher Eliezer Yudkowsky has dubbed “friendly AI.”

 

Friend or foe, some believe machine superintelligence — a step beyond human-level capability, but a big one — is coming soon. The hypothesized tipping point when computers exceed the capacity of human minds is known as “the singularity.”

 

For futurist, inventor and author Ray Kurzweil, whose books include 2005’s The Singularity Is Near, the A in AI stands for “accelerating.” Throughout history, technology has progressed at an exponential rate, the pace of each advance far exceeding the one before it. He expects that to continue and the singularity to be here around 2045.

 

Experts surveyed by Oxford University philosopher Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, assigned a 90 percent probability to the notion that human-level artificial intelligence would be achieved by 2075. They estimated that superintelligence — defined as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” — would follow by about the end of the century.

 


Even before technology reaches that hazy horizon, smart machines that exceed human performance on specialized tasks will have a disruptive effect — for better or worse, to be determined — on everyday life. Jobs. Transportation. Communication. Education. Medicine. Name it.

 

Board games, poker and trivia are just the beginning. Those losses serve as unsettling alerts, like the first, faint rumbles of approaching footsteps. Something big is coming. A technology that has crept forward for decades is gathering speed, causing some big brains to worry that artificial intelligence could alter the natural order of things.

 


 

Computer scientist John McCarthy coined the term “artificial intelligence” for a 1956 Dartmouth conference, the first academic gathering to explore the prospect of computers going beyond data processing and number crunching. McCarthy, along with fellow pioneers in the field Marvin Minsky, Nathaniel Rochester and Claude Shannon, believed machines could learn. So they convened researchers to study “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” They were confident about the progress they could make “if a carefully selected group of scientists work on it together for a summer.”

 

It has taken somewhat longer than that optimistic forecast. Over the decades that followed, boom and bust cycles recurred in AI research. A sense that scientists were overpromising led to occasional lapses in commercial and government investment. Those cold spells were called AI winters.

 

We are now in the full bloom of an AI spring. Google, Facebook, Amazon, Apple and Microsoft are leading the way with immense financial and informational resources, gathering and deploying avalanches of data in a race for digital supremacy.

 

Various artificial intelligence technologies, many of them built on “machine learning,” already have filtered into widespread use. Search engines. Virtual assistants like Siri, Alexa and Cortana. Voice to text. Social-media advertising and suggestions of products or entertainment “you might also like” from online shopping and streaming services. Fraud monitoring for banks. Smart appliances that know when you’re home, what room you’re in, and how much heat and light you need. That customer-service rep “typing” in the chat box? Might be a bot.

 


 

Most of these uses have become as ordinary as turning on the television, not the stuff of space-age awe or omens of great technological upheavals ahead for human life. The quiet integration of these now-routine technologies feeds a sense, one that early researchers would have considered mistaken, that artificial intelligence has not lived up to the hype.

 

“As soon as it works,” McCarthy once said, “no one calls it AI anymore.”

 

Machines are now capable of driving cars, diagnosing diseases, buying and selling stocks, evaluating job and loan applications and translating spoken and written languages, among many other talents — often measurably better than experts.

 

Advances in computing power in recent decades, combined with the volume of data available in the internet era, have sped the progress of machine-learning algorithms that benefit from immense processing capacity and access to vast information. These AI systems learn from experience. As a driverless vehicle clocks miles, or a diagnostic program reads X-rays, the system assimilates the knowledge that it uses to refine its performance. There’s another big advantage: the ability to gorge on shared data from other internet-connected systems that perform the same jobs and to metabolize it in a relative instant, leading to dramatic improvement overnight, if not faster.
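
As a rough sketch of what that learning-from-experience loop looks like in code, the fragment below updates a simple model incrementally as each new batch of pooled observations arrives, instead of retraining from scratch. The data is synthetic, and the model choice, batch sizes and features are illustrative assumptions, not any particular company’s system.

```python
# A minimal sketch of incremental ("online") learning: the model folds each new
# batch of shared experience into what it already knows.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()              # a simple learner that supports incremental updates
classes = np.array([0, 1])

# Simulate ten "days" of observations streaming in from a fleet of systems.
for day in range(10):
    X = rng.normal(size=(500, 4))               # 500 new observations, 4 features each
    y = (X[:, 0] + X[:, 1] > 0).astype(int)     # the underlying pattern to be learned
    model.partial_fit(X, y, classes=classes)    # update the model without starting over

# Check how well the accumulated experience generalizes to fresh data.
X_test = rng.normal(size=(2000, 4))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print("accuracy after pooled updates:", round(model.score(X_test, y_test), 3))
```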

 

It’s cool and disconcerting at the same time. Progress is happening so fast in so many areas that it’s hard to keep up with all the new technical capabilities and the ethical implications each one raises.

 

Facial recognition software, for example, has the potential to help identify criminals and terrorists. It also could have unforeseen flaws that ensnare the innocent — or overlook the guilty — if unconscious biases have been encoded into the system through its training data.

 

“Given that systems like that are used in policing very widely, the consequences of those mistaken identifications can be really profound in people’s lives,” says Notre Dame philosopher Don Howard, a fellow and former director of the Reilly Center for Science, Technology and Values.

 


Even short of life-changing criminal accusations, egregious mistakes can happen. A Google photo app sparked a controversy in 2015 when its system misidentified images of African-Americans as gorillas.

 

Such humiliating errors occur because the technology does not “see” like humans do. Algorithms analyze shapes, colors and patterns, comparing new images against what they learned from their training examples in order to classify them. That’s a technical limitation, but it also exposes a risk inherent in companies developing systems without sensitivity to the diversity of populations that will use them.

 

In health care, machine-learning programs designed to improve diagnoses or treatments for heart patients, for example, could be built around partial data. Men have been subjects of coronary disease research for decades longer than women. Important information about the differences between them could be overlooked if algorithms fail to account for that disparity, potentially causing harm.

 

High-tech defense systems already in the field employ machine learning to identify and track incoming missiles, then analyze and execute the appropriate response, all before a soldier would even be aware of the threat. That technology has saved lives, but it also might threaten them if used in an offensive capacity on civilians mistaken for enemy combatants.

 

Researchers like Georgia Tech’s Ronald Arkin argue that, despite those concerns, autonomous weapons systems could become far less prone to mistakes — physical, mental or moral — than soldiers.

 

“People make bad judgments if they’re fatigued, if they haven’t eaten in three days, if they’ve just seen their best friend killed brutally,” Notre Dame’s Howard says. “There are all these stressors in the battle space that are some of the most important drivers of bad moral choices by human combatants.”

 

To retired U.S. Air Force Maj. Gen. Robert H. Latiff ’71, ’74Ph.D., an adjunct professor at the Reilly Center, new technology often tempts military leaders into excessive expectations and unnecessary expense. In Future War: Preparing for the New Global Battlefield, Latiff writes that “this willingness to be seduced by technology and our addiction to it are worrisome.” Navigational skills and subjective judgment, for example, atrophy with over-reliance on high-tech systems that, in battle, could be as fragile as they are sophisticated. How much military strategy and weaponry are we willing to defer to smart machines? What happens when they fail?

 

Across multiple fields, artificial intelligence generates that tension between help and hazard. All this stuff is supposed to improve our lives. It is meant to make people safer from commute to combat; healthier through more precise, personalized medical care; wealthier as more efficient stewards of public and private finances; wiser with educational applications that identify where students need additional help; less susceptible to fraud as programs detect transactional anomalies invisible to individuals; more impartial in hiring decisions, criminal prosecutions and national defense.

 

Yet the potential benefits come with big risks. Of jobs lost to automation. Of privacy sacrificed to unaccountable companies and government agencies collecting the data we emit into the cloud like carbon dioxide. Of life and death decisions — from transportation safety to health care to whether that’s a suicide bomber or a pregnant woman at the checkpoint — surrendered to technology so complex and autonomous that its processes are often opaque even to those who designed it.

 

“Advanced algorithms inscrutable to human inspection increasingly do the work of labeling us as combatant or civilian, good loan risk or future deadbeat, likely or unlikely criminal, hireable or unhireable,” writes Shannon Vallor, a Santa Clara University ethicist, in her book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting.

 

The algorithms can even label us gay or straight. Liberal or conservative. High- or low-IQ. Stanford researcher Michal Kosinski published a study last fall showing that a facial-recognition system could guess a person’s sexuality with notable accuracy in comparisons of dating-site photos. That’s not the same thing as saying the algorithm could pick out the gay people in a crowd, or even from a single picture — the test wasn’t that simple, the results not that clear-cut.

 

Still. Nuances notwithstanding, tech website The Verge noted that the study raised fears that the development of similar technology “will open up new avenues for surveillance and control, and could be particularly harmful for marginalized people.”

 

The specter of Big Brother arose not so much from the technology’s capability to make such accurate guesses — although that’s alarming enough — as from the inability of the researchers themselves to understand why.

 

Kosinski altered the faces studied to see how that affected the results. He averaged the facial proportions of those the system identified as most likely to be gay and straight, comparing that to typical masculine or feminine traits. He teased out the possible influence of cultural signifiers like the greater tendency of straight men to wear baseball caps. Nothing conclusive.

 

What’s left is speculation that the algorithms identified something invisible to the naked eye.

 

“Humans might have trouble detecting these tiny footprints that border on the infinitesimal,” Kosinski told The New York Times Magazine. “Computers can do that very easily.”

 

If how computers do that cannot be discerned, that’s called a “black box” problem. When you apply for a loan, say, it’s illegal for the bank to consider your race when calculating your creditworthiness. An article in the MIT Technology Review explained how an artificial intelligence system could, unbeknownst to programmers and regulators alike, violate that law and actually entrench prejudice that data-based tools are supposed to help eradicate: “Algorithms can learn to recognize and exploit the fact that a person’s education level or home address may correlate with other demographic information, which can effectively imbue them with racial and other biases.”
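
The mechanism described in that MIT Technology Review passage can be made concrete with a toy model. The sketch below uses entirely synthetic data and invented feature names: a lending model that is never shown the protected attribute still reproduces a biased approval gap, because a correlated proxy feature smuggles the information in.

```python
# Synthetic illustration of proxy bias: the protected attribute is withheld from
# the model, but a correlated "neighborhood" feature lets the model rediscover
# the bias baked into the historical approvals it learns from.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

group = rng.integers(0, 2, n)                  # protected attribute (never given to the model)
neighborhood = group + rng.normal(0, 0.3, n)   # proxy feature strongly correlated with group
income = rng.normal(50, 10, n)                 # legitimate signal

# Historical approvals that were themselves biased: group 1 faced a higher bar.
approved = ((income > 48) & (group == 0) | (income > 55) & (group == 1)).astype(int)

X = np.column_stack([income, neighborhood])    # note: 'group' is deliberately excluded
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
print("predicted approval rate, group 0:", round(pred[group == 0].mean(), 2))
print("predicted approval rate, group 1:", round(pred[group == 1].mean(), 2))
# The gap persists: the model learned the bias through the proxy, not the label it never saw.
```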

 

Concerns about such unintended consequences with unidentifiable causes have made “explainable AI” a research priority. As of this year, a law in the European Union requires technology to be transparent enough for humans to understand its decisions and actions. Failing to meet the legislation’s intentionally broad standard could lead to billions in fines for big data companies.

 

The implications require cross-pollination between science and philosophy to establish what we want from these new tools before they redefine our lives. Vallor writes that “a contemporary theory of ethics — that is, a theory of what counts as a good life for human beings — must include an explicit conception of how to live well with technologies, especially those which are still emerging and have yet to become settled, seamlessly embedded features of the human environment.” 

 

Attention to AI ethics in general seems to be increasing. Researchers Kate Crawford of Microsoft and Meredith Whittaker of Google founded the AI Now Institute at New York University to raise awareness of the technology’s social implications. That includes holding the math that underlies machine learning to high standards of clarity and fairness.

 

Max Tegmark, an MIT physicist, established the Future of Life Institute with his wife, Meia, and a few colleagues. “We felt that technology was giving life the power either to flourish like never before or to self-destruct,” Tegmark writes in Life 3.0: Being Human in the Age of Artificial Intelligence, “and we preferred the former.”

 

Notre Dame’s Reilly Center, the Deloitte Center for Ethical Leadership in the business school and the Interdisciplinary Center for Network Science & Applications (iCeNSA) each engage moral issues related to new technology. Many of the questions are “age-old,” says Christopher Adkins, director of the Deloitte Center — questions about human dignity, personal autonomy and integrity, the meaning of work, now applied in a context where much of who we are and what we do has been uploaded and shared, collected and analyzed, then used to develop powerful new technologies.

 

“This is going to sound a little old-school, but I intend it to,” Adkins says. “What are the virtues of being a leader when it comes to AI? What are the habits of living in a world with so much data?”

 

The answers to those and other questions that artificial intelligence raises — such as how to enhance people’s lives and reduce the risk to their livelihoods as new technology emerges — remain elusive. Technical progress will not pause for philosophical pondering, so implications have to be considered in real time, with scientists and ethicists in dialogue with business, government and military leaders.

 

That’s how Nitesh Chawla sees it. Director of iCeNSA, Chawla is an upbeat, eyes-forward kind of guy, all about devising what’s new, the next thing. Not for his own scientific satisfaction, though. That’s an important caveat. Too often, he says, in the rush toward technological achievement, researchers lose sight of the bigger picture.

 

“We have to keep innovating. We have to keep inventing. We have to keep thinking about what the future will be and creating that. At the same time,” Chawla says, “we have to think that innovation is about people and have that frame of reference always as we develop these things.”

 

Otherwise, the machines might not concern themselves with the impact of their actions on people. Without careful attention to that human objective, the best-case scenario, as Elon Musk sees it, would relegate us to the status of pets to machine superintelligence.

 

The worst case? We pass through the Silicon Valley of the shadow of death.

 


 

Jerry Kaplan, an author and lecturer on the impact of emerging technologies, thinks the word “intelligence” itself, used in the context of technology, has freaked people out a little bit. The fear comes from the anthropomorphic analogy embedded in the concept. Intelligence is supposed to be a humans-only domain, velvet-roping us into an exclusive club.

 

Assigning that quality to machines feels like an infringement on — and maybe the beginning of the end of — human supremacy. Concerns about this fall along a spectrum from the philosophical to the apocalyptic. From: Will machine intelligence alter our understanding of what it means to be human? To: Will we survive the ascent of autonomous technology?

 

In his 2016 book, Artificial Intelligence: What Everyone Needs to Know, Kaplan imagines a past where early airplanes were called “artificial birds” and considers how concerns about the future of smart flying machines might have echoed today’s prophets of technological doom.

 

“What will happen when planes learn to make nests? Develop the ability to design and build their own progeny? Forage for fuel to feed their young?”

 

Sounds absurd, Kaplan says, but he insists it’s not that different from the way wary people today interpret the word “intelligence” as it applies to machines. He thinks the metaphor, and all the metaphysical pondering it prompts, distracts from more pressing concerns.

 

The late Dutch computer scientist Edsger Dijkstra expressed a similar view. “The question of whether machines can think,” Dijkstra said, “is about as relevant as the question of whether submarines can swim.”

 

Notre Dame’s Howard is another restrained voice amid the megaphoned warnings and existential hand-wringing. These fast-advancing technologies pose genuine dangers, he believes, but to suggest at this early stage that they’re hastening the end of days obscures more immediate issues.

 

Howard calls that extreme view the “apocalyptic fallacy.” Once the specter of annihilation arises, however distant and implausible, it makes any discussion of intermediate threats or advantages seem trivial.

 

“If you immediately jump to the possibility that there is some highly improbable but infinitely awful outcome,” Howard says, “then that just sort of swamps all of your reasoning about the real risks.”

 

For Howard, the “real risks” include, first and foremost, widespread job loss to autonomous technology. There are, for example, about 3.5 million truck drivers, placing that occupation among the largest job categories in the United States. The advent of driverless vehicles threatens to leave them unemployed. Cab and delivery drivers face the same fate. Uber already dispatches autonomous cars to ferry passengers in Pittsburgh.

 

“Is the economy creating other jobs, equally well-paying, that can draw from the skill set of the truck driver?” Howard says. “I’m not sure our economy is capable of reabsorbing that many people thrown out of work.”

 

In the case of driverless cars, that’s only part of the equation. Alongside the human cost from the expected economic disruption, there’s a potential benefit that Howard considers far greater, one that makes the universal implementation of self-driving vehicles a “moral imperative.”

 

About 35,000 Americans die in automobile accidents every year. More than one million are killed worldwide. If everyone rode in an autonomous vehicle, Howard says, the number of deaths could be drastically reduced.

 

Liability questions have held back the development of autonomous vehicles. Who should be held responsible in an accident? The owner of the car, whether an individual or a company? The automaker that built it?

 

There’s also reluctance among some people to relinquish the control they feel behind the wheel, which overlaps with concern about how autonomous technology would perform in dangerous situations. How would it weigh the ethical considerations involved in harrowing moments? Better than humans would, according to the writer David Roberts of Vox, if we just let the cars figure it out for themselves, rather than trying to establish and program the appropriate reactions.

 

“Autonomous vehicles will very quickly have seen more, and experienced more, than any human driver possibly could,” Roberts writes. “And unlike most human drivers, they will deliberately comb through that experience for lessons and apply them consistently.”

 

To Howard, the lives saved would be worth the jobs lost.

 

“Not being mindless of the other social costs and consequences, but . . . I think it’s a compelling moral imperative that we move as rapidly as we possibly can toward the full deployment of self-driving technology,” he says. “We will save a lot of lives in that way.”

 

It’s not just commercial drivers at risk of losing their jobs. According to an Oxford study, technology will not discriminate based on the color of your collar: Blue and white alike will face mechanized obsolescence. Accountants and groundskeepers, cashiers and insurance underwriters, sheet metal workers and loan officers are among the nearly half of all U.S. workers whose jobs the study found to be subject to automation over the next 20 years. The effect is evident in some fields already.

 

Watch the credits of an old episode of The Simpsons, Howard tells his students, and you’ll see a long scroll of Korean names: the show’s animators. Then, he says, fast-forward through a more recent episode. That list of names is no longer there. The people have been replaced by intelligent machines capable of rendering Springfield, USA, and its inhabitants at least as well as their living, breathing predecessors did.

 

“What happened to all of those people and their careers as graphic artists?” Howard wonders.

 

Same goes for the staffs of large architecture and engineering firms. Not so long ago, such companies had vast rooms filled with entry-level employees laboring at drafting tables to convert their bosses’ visions into blueprints by hand.

 

An optimistic interpretation of this development is rooted in history. New technology has often inspired fears of lost livelihoods and a cascading economic calamity to follow. The worst predictions have never come true — at least not at a macro scale.

 

British economist David Ricardo first considered “the influence of machinery on the interests of the different classes of society” in 1821. Similar concerns have ebbed and flowed ever since, as advances like assembly lines and combines have automated manufacturing and farming.

 

At the turn of the 20th century, nearly 40 percent of Americans worked in agriculture. By 2000, only 2 percent did. Better technology made farming more efficient, but also contributed to the creation of new types of jobs for the people who no longer tended the fields.

 

Less than a century ago, a “calculator” was a human job title. Now it refers to a machine capable of solving complex math problems in an instant, far exceeding a brain’s processing speed. We didn’t wring our hands over the jobs made machine-redundant by calculating technology.

 

Even more recently, ATMs loomed as the end of the line for bank tellers. And the number of tellers needed at an individual branch did fall, dropping on average from 21 to 13. With those cost savings, though, banks opened additional locations to provide more convenient customer service, increasing the overall demand for employees.

 

So the economy has weathered major technological advances without catastrophic effects. That’s one way to look at it. Adkins, the Deloitte Center director, leans that way. The complementary combination of human hearts and machine minds gives him hope for a productive future coexistence.

 

Thomas Friedman, the New York Times columnist, coined the term STEMpathy to describe a vision of people and computers working together. Even if an artificial intelligence system diagnoses a disease faster and with more precision than a doctor could, he writes, relating bad news to a patient still requires person-to-person compassion. The idea is that no matter how much mental work we outsource to machines, the emotional and the psychological — the heart and the soul — resist digital enhancement. Friedman’s view grows out of the hopeful vision of Dov Seidman, a philosopher, lawyer, business executive and author, who believes the new economy will need “hired hearts.”

 

Another perspective is that technological changes are opening a wider skills chasm between jobs at risk and new ones that might emerge. Howard is among those who sense something different in the technological revolution at hand.

 

Even at a time of almost full employment, he says, jobs tend to be less secure, pay and benefits less substantial today than they were 25 years ago. Historically, “increases in productivity in an economy were tracked very closely by increases in income for workers,” he says. Sometime during the early 1990s, “those two curves started to diverge.” Productivity has continued its steady rise, while wages have stagnated.

 

“What broke that link between productivity and income growth?” Howard says. “I’m not the only one who thinks that the key to that is automation.”

 

He concedes that his position might be “an example of apocalyptic thinking, but the trend lines are already there.”

 

Beyond employment, trend lines in the automation of many facets of human life point sharply upward. Toward what, exactly, remains uncertain. The ambiguity invites speculation that tends toward the extremes. As billionaire entrepreneur Peter Thiel told Vanity Fair, artificial intelligence “encapsulates all of people’s hopes and fears about the computer age.”

 

Scientists and philosophers, engineers and ethicists are now engaged in the intertwined technical and metaphysical work of determining which of those hopes and fears will be realized.

 


Jason Kelly is an associate editor of this magazine.