Being Human in the Age of Artificial Intelligence
Artificial intelligence (AI) will probably be the most important agent of change in the 21st century. It will transform our economy, our culture, our politics and even our own bodies and minds in ways most people can hardly imagine. If you hear a scenario about the world in 2050 and it sounds like science fiction, it is probably wrong; but if you hear a scenario about the world in 2050 and it does not sound like science fiction, it is certainly wrong.
Technology is never deterministic: it can be used to create very different kinds of society. In the 20th century, trains, electricity and radio were used to fashion Nazi and communist dictatorships, but also to foster liberal democracies and free markets. In the 21st century, AI will open up an even wider spectrum of possibilities. Deciding which of these to realise may well be the most important choice humankind will have to make in the coming decades.
This choice is not a matter of engineering or science. It is a matter of politics. Hence it is not something we can leave to Silicon Valley – it should be among the most important items on our political agenda. Unfortunately, AI has so far hardly registered on our political radar. It has not been a major subject in any election campaign, and most parties, politicians and voters seem to have no opinion about it. This is largely because most people have only a very dim and limited understanding of machine learning, neural networks and artificial intelligence. (Most generally held ideas about AI come from sci-fi films such as The Terminator and The Matrix.) Without a better understanding of the field, we cannot comprehend the dilemmas we are facing: when science becomes politics, scientific ignorance becomes a recipe for political disaster.
Max Tegmark’s Life 3.0 tries to rectify the situation. Written in an accessible and engaging style, and aimed at the general public, the book offers a political and philosophical map of the promises and perils of the AI revolution. Instead of pushing any one agenda or prediction, Tegmark seeks to cover as much ground as possible, reviewing a wide variety of scenarios concerning the impact of AI on the job market, warfare and political systems.
Life 3.0 does a good job of clarifying basic terms and key debates, and of dispelling common myths. While science fiction has caused many people to worry about evil robots, for instance, Tegmark rightly emphasises that the real problem is the unforeseen consequences of developing highly competent AI. Artificial intelligence need not be evil and need not be encased in a robotic frame in order to wreak havoc. In Tegmark’s words, “the real risk with artificial general intelligence isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”
As for the obsession with robots, we should remind ourselves that a surveillance system – one that constantly tracks people and uses Big Data algorithms to analyse their behaviour and personality – can destroy our privacy, our individuality and our democratic institutions without any need for Terminator-style killer machines.
Naturally enough Tegmark’s map is not complete, and in particular it does not give enough attention to the confluence of AI with biotechnology. The 21st century will be shaped not by infotech alone, but rather by the merger of infotech with biotech. AI will be of crucial importance precisely because it will give us the computing power necessary to hack the human organism. Long before the appearance of superintelligent computers, our society will be completely transformed by rather crude and dumb AI that is nevertheless good enough to hack humans, predict their feelings, make choices on their behalf, and manipulate their desires.
Once an algorithm knows you better than you know yourself, institutions such as democratic elections and free markets become obsolete, and authority shifts from humans to algorithms. Instead of fearing assassin robots that try to terminate us, we should be concerned about hordes of bots that know how to press our emotional buttons better than our mother, and that use this uncanny ability to try to sell us something. It might be apocalypse by shopping.
Yet the real problem of Tegmark’s book is that it soon bumps up against the limits of present-day political debates. The AI revolution turns many philosophical problems into practical political questions and forces us to engage in “philosophy with a deadline” (as the philosopher Nick Bostrom called it). Philosophers have been arguing about consciousness and free will for thousands of years, without reaching a consensus. This mattered little in the age of Plato or Descartes, because in those days the only place you could create superintelligences was in your imagination. Yet in the 21st century, these debates are shifting from philosophy faculties to departments of engineering and computer science. And whereas philosophers are patient people, engineers are impatient, and hedge fund investors are more restless still. When Tesla engineers come to design a self-driving car, they cannot wait while philosophers argue about its ethics.
Consequently, Tegmark soon leaves behind familiar debates about the job market, privacy and weapons of mass destruction, and ventures into realms that hitherto were associated with philosophy, theology and mythology rather than politics. This can hardly be avoided. For the creation of superintelligent AI is an event on a global or even cosmic rather than a national level. For 4 billion years, life on Earth evolved according to the laws of natural selection and organic chemistry. Now science is about to usher in the era of non-organic life evolving by intelligent design, and such life may well eventually leave Earth to spread throughout the galaxy. The choices we make today may have a profound impact on the trajectory of life for countless millennia and far beyond our own planet.
Though Tegmark is probably correct in taking things to this cosmic level, I fear that many, if not most, of his prospective readers will not follow him there. Our political systems, and indeed our individual minds, are just not built to think on such a scale. Current political mechanisms barely manage to make decisions on the scale of decades – how can they make decisions on the scale of millennia? Who has time to worry about AI taking over the planet when you have to deal with Donald Trump and Brexit?
In the case of the AI revolution, as so often before in human history, we will probably make the most profound decisions on the basis of myopic short-term considerations. The future of life on Earth will be decided by small-time politicians spreading fears about terrorist threats, by shareholders worried about quarterly revenues and by marketing experts trying to maximise customer experience.
Courtesy: The Guardian