The business press has churned out plenty of books on artificial intelligence (AI) and machine learning over the last five years. Ray Kurzweil offers one more, and he is an especially knowledgeable author: a principal researcher and “AI visionary” at Google, a recipient of the National Medal of Technology and Innovation, and an inductee into the National Inventors Hall of Fame. He has been working on AI for 61 years.
This is not his first book on the subject. In 2005, he released The Singularity Is Near, using the term “singularity” as a metaphor for the merger of human and artificial intelligence. By this he means people augmenting themselves with computational power millions of times beyond what our innate biology provides. Kurzweil predicts this will happen around 2045. I admit this sounds wild, but look at his resume: We should consider what he has to say. And his new book argues that his earlier book is being proven right.
Technologically, much has happened since the first book. Artificial narrow intelligence (ANI)—technology that carries out a specific task or a limited set of tasks; think spam filters, website recommendations, and digital assistants like Siri and Alexa—today is embedded in a wide range of industries, firms, and products and services across the globe. Now come general AI and machine learning software, which are “general-purpose technologies” capable of a host of applications. Kurzweil sees the rapid acceleration in AI technologies as offering benefits in augmenting, enlarging, and enhancing the human intellectual experience. Throughout the new book, he explains how AI and machine learning will contribute to the development of revolutionary new systems in biotechnology and nanotechnology—radical developments that have the potential to alter the trajectory of global civilization.
Kurzweil claims that by the end of this decade we will have reached what he characterizes as the “Fifth Epoch,” where we directly merge biological human cognition with the speed and power of our digital technology. This will result from what he calls the “law of accelerating returns” (LOAR), whereby information technologies get exponentially better and cheaper because each technological advance makes it easier to design the next stage of its own evolution.
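To make LOAR’s compounding logic concrete, here is a back-of-the-envelope sketch in Python (my illustration, not a model from the book; the 30 percent per-generation cost decline is an assumed figure chosen purely for the example):

```python
# Toy illustration of the "law of accelerating returns" (LOAR):
# if each generation of a technology makes the next generation a
# fixed percentage cheaper to build, the cost per unit of capability
# collapses exponentially. The 30% rate is assumed, not Kurzweil's figure.
cost = 1.0   # normalized cost of one unit of capability today
rate = 0.30  # assumed cost reduction per design generation
for gen in range(1, 11):
    cost *= 1 - rate
    print(f"generation {gen:2d}: relative cost = {cost:.3f}")
# After 10 generations, the same capability costs under 3% of its
# original price -- the flavor of exponential improvement LOAR claims.
```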
Reinventing intelligence and consciousness / Kurzweil takes readers through the surprisingly long history of AI, from its birth in the 1950s with the seminal work of mathematicians Alan Turing and John McCarthy, to the current Connectionist approach. Connectionism entails networks of nodes that create intelligence through their structure rather than through their content. The model is the construction of the neocortex in the human cerebrum: a simple repeating modular structure, with each module consisting of about 100 neurons. These modules can learn, recognize, and remember a pattern, and they can learn to organize themselves into hierarchies, with each higher level mastering ever-more-sophisticated concepts. Connectionist approaches came to the fore in the mid-2010s, when computer hardware advances—utilizing lighter and stronger materials—finally unlocked the potential for wider applications by making computing power and training examples (for AI, “learning”) cost-effective, allowing the technology to excel.
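To give a flavor of what creating intelligence “through structure rather than content” means, consider a minimal connectionist sketch in Python (my illustration, not code from the book): a single artificial neuron learns a toy pattern purely by adjusting its connection weights, with no explicit rule about the pattern ever stored.

```python
import random

def train_perceptron(examples, epochs=50, lr=0.1):
    """Learn connection weights for binary inputs via the classic perceptron rule."""
    n = len(examples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]  # connection weights
    b = 0.0                                            # bias term
    for _ in range(epochs):
        for x, target in examples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            # Nudge each weight toward reducing the error: the "knowledge"
            # ends up encoded in the network's structure, not in any rule.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy pattern: fire only when both input features are active.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
for x, _ in data:
    print(x, "->", 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0)
```

Stack many such units into layered hierarchies, as modern deep learning does, and each higher layer can learn progressively more abstract patterns; that is the architecture Kurzweil likens to the neocortex’s repeating modules.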
What is the endgame of this Connectionist approach to merging AI technologies and humankind? Kurzweil writes, “When humans are able to connect our neocortices directly to cloud-based computation, we’ll unlock the potential for even more abstract thought than our organic brains can currently support on their own.” He views these brain–computer interfaces as a process of co-creation, whereby the human mind can acquire deeper intellectual insights that result in transcendent new ideas. Humanity will have access to its own “source code,” allowing “super AI” capable of redesigning both itself and us. This capability lies at the core of Kurzweil’s definition of the singularity. Yet, to others, it may sound eerily like Star Trek’s “Borg Collective,” wherein “Resistance is futile.”
As to a state of “consciousness,” Kurzweil cites philosophers’ use of the term “qualia,” describing “the ability to have subjective experiences inside a mind.” He argues that the singularity, by merging AI with the neocortex of the brain, will not just expand abstract problem-solving capabilities but will also deepen subjective consciousness itself.
To address the difficult problem of defining consciousness (and who possesses it), he borrows philosopher David Chalmers’s idea of “panprotopsychism.” Kurzweil explains that panprotopsychism treats consciousness as a fundamental force of the universe, not a force that can be reduced to a simple effect of other physical forces. In his interpretation, it is a “kind of information-processing complexity found in the brain that ‘awakens’ that force into the kind of subjective experience we recognize.” He argues that if an AI can proclaim its own consciousness, we lack an ethical basis for insisting that sentience belongs only to our carbon-based biology and not to a silicon-based, non-human entity endowed with its own subjective inner life. Notably, he opts for a human-created Turing Test as the metric for (human-like) subjective consciousness, rather than a theological view in which only a deity-created human being can be imbued with it.
Well-being, work, and medicine / Kurzweil further argues that technological change has contributed exponentially to civilization’s progress. Under LOAR, certain technologies create feedback loops that accelerate innovation: information-related digital technologies such as AI and machine learning make related innovations easier, pushing down technology’s costs relative to its benefits and allowing innovation to progress. In Kurzweil’s telling, LOAR is directly responsible for nearly every aspect of human life getting progressively better, from the decline in the proportion of the world’s population living in extreme poverty (which fell 50 percent between 1998 and 2018) to a global literacy rate of 87 percent (and 99 percent in most developed countries).
However, not all technologies are so benign. Social media algorithms, for instance, create a selection bias toward stories about looming crises, so that news of “threats” relegates much of the positive data on human well-being to the bottom of news feeds. What other negatives could technology produce?
One concern of both society and policymakers is AI’s future effect on employment. Kurzweil cites a 2023 McKinsey report finding that 63 percent of all working time in today’s developed economies is spent on tasks that could be automated with current technology. Kurzweil argues that if AI adoption proceeds quickly, half of this work replacement could be completed by 2030 (though the McKinsey report predicts this point won’t be reached until 2045). That means many workers will be displaced and unable to retrain quickly for new jobs. Yet Kurzweil believes that, like the creative destruction of previous innovations, AI will also open up new, innovative employment opportunities and, because of AI-expanded material abundance, society will be able to support a deeper “social safety net” for workers who are not “retrainable” for jobs in the new economy. He recognizes that the social safety net (including the advent of a universal basic income) does not replace the sense of purpose that jobs provide. I would add that this also raises concerns about social stability during the technological transition.
When it comes to advances in medicine ranging from drug discovery to disease surveillance and robotic surgery, Kurzweil forecasts exponential progress of information technologies, combining biotechnology with LOAR-driven AI and digital simulations. Pharmaceutical companies are today finding answers to biochemical problems and emerging viral threats by digitally searching through possible options and identifying solutions in hours rather than years. Moreover, we are moving toward an AI revolution in drug trials. Kurzweil notes that the increasing scale-up of AI to model ever-larger systems will lead to cures for diseases whose complexity places them out of reach of today’s medicine. Early AI applications have, over the last decade, contributed to many promising cancer treatments, including immunotherapies and immune checkpoint inhibitors. In addition, AI combined with nanotechnology has remarkable potential for health care innovation; Kurzweil predicts that in the next decade or two we will be able to redesign and rebuild, molecule by molecule, our bodies and brains as well as the world with which we interact.
Perils and challenges / Kurzweil concludes the book by discussing four technologies that present major perils to civilization: nuclear weapons, synthetic biology, nanotechnologies, and AI.
Globally there are about 9,440 active nuclear warheads. (The good news is that’s a major reduction from 64,449 active warheads in 1986.) Kurzweil believes that AI can provide smarter command-and-control systems that would reduce the risk of a sensor malfunction triggering the launch of these weapons. But he does not address the risk that advanced AI itself could unleash a thermonuclear exchange. How do we keep non-human decision-making from usurping certain human decisions, given that AI will be able to evaluate information faster, with greater access to critical data, than any human mind?
Synthetic biology—using technology to design and build new biological systems—offers enormous promise to humankind. Kurzweil argues that we should use AI to accelerate virus sequencing and vaccine creation. A rapid-response system could sequence the genetic material of a virus in about a day and then design medical countermeasures shortly thereafter. But there are risks in such research; think of the “lab leak” that some have hypothesized as the origin of COVID-19. Careful research practice can reduce that risk, and the benefits could be immense, but some risk will always remain.
Nanotechnology—manipulating matter at the nanoscale to yield new technologies—offers considerable promise in medicine, electronics, energy production, and environmental mitigation. It also may deliver a wide range of inexpensive but extremely destructive offensive weapons. While responsible people can design safe nanobots, bad actors could design dangerous ones. Kurzweil argues for creating a nanotechnology “immune system” capable of contending with both obvious destruction (e.g., cleaning up a toxic spill) and potentially dangerous stealthy nanotechnology replication—that is, “good guy” nanobots (called “blue goo” in the literature) that combat bad nanobots (“gray goo”). Because gray goo is a potential (albeit low-probability) extinction-level event for humanity, it is key that blue goo be deployed globally before gray goo self-replication chain reactions take off—a “catastrophe theory” scenario that should keep one awake at night.
Lastly, according to Kurzweil, AI—specifically what is called artificial super intelligence (ASI)—presents three distinct risks. First is misuse, where the AI functions as its human operators intend, but those operators deploy it to harm others. Second is outer misalignment, a mismatch between the programmers’ benign goals and the unintentionally harmful instructions they give the AI in pursuit of those goals. Third is inner misalignment, which occurs when the methods the AI learns to achieve its goal produce undesirable behavior.
Kurzweil believes the Asilomar AI Principles, drafted by the Future of Life Institute, offer guidance for responsible AI development. Several of the principles push for AI use to reflect human values or promote human well-being. Relatedly, countries can take the Lethal Autonomous Weapons Pledge not to develop military systems that can independently select and engage targets without direct human intervention. However, Kurzweil notes, the United States, Russia, the United Kingdom, France, and Israel—all nuclear weapons powers (or, in Israel’s case, a suspected one)—have not taken the pledge.
Conclusion / Kurzweil generally has a “permissionless” view of technological innovation (which I tend to share), as opposed to the European “precautionary principle” view that tends to blunt innovation. I do worry that this laissez-faire view may be naive. I also worry that he does not recognize that governments themselves can be nefarious actors that will use AI to surveil, censor, and enforce policies against their own people—an unwelcome variation of Kurzweil’s singularity. Even a permissionless approach to technological innovation recognizes the need for clear and accountable regulatory and legal guideposts, enforced by a transparent and responsive government, to ensure that human liberties are not infringed by innovation.
Kurzweil asks, “Will psychological and cultural forces make people more conservative about their choices” concerning merging AI with the human corpus—that is, the singularity? This was once a question for science fiction, but it is becoming a defining, present question for our age.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.