Human history has been shaped by a series of transformative engineering breakthroughs that expanded our productivity and amplified our physical capabilities—first through steam power, then diesel, and later electricity. From the First Industrial Revolution, powered by steam, to the Third, which introduced the computer and the internet, each leap has fundamentally reshaped society.
Artificial intelligence (AI) may represent the most significant technological revolution yet—a true paradigm shift. The reason is simple: earlier breakthroughs enhanced our physical strength and efficiency. AI, by contrast, has the potential to replicate or even replace aspects of our cognitive abilities. As a result, many existing jobs may disappear.
The concept of artificial intelligence first emerged more than 60 years ago, although thinking machines and robots had long been staples of science fiction. The term was coined in 1956, two decades after the British mathematician Alan Turing described the universal computing machine—essentially the theoretical foundation of modern computing. Although today’s machines are vastly more powerful, they remain evolutions of Turing’s original design. Computers do not think independently; they execute instructions written in code.
AI—particularly Artificial General Intelligence (AGI), often referred to as “strong AI”—is fundamentally different. In this case, software would simulate human-level intelligence, and machines would be capable of self-directed learning. They could develop and refine their capabilities through observation, trial and error, and communication.
Whether AGI would also develop consciousness or self-awareness remains an open question. Science fiction often associates AGI with traits such as consciousness, sentience, sapience, and self-awareness—qualities typically observed in living beings. However, the assumption that a machine displaying human-level intelligence must also possess a mind or consciousness is far from settled. AGI refers only to the level of intelligence demonstrated, regardless of whether it includes subjective awareness.
Some observers warn of a dystopian scenario in which a machine improves itself at such a pace that it rapidly surpasses human comprehension. The ability to reprogram and enhance its own capabilities—known as “recursive self-improvement”—could enable a system to become progressively better at improving itself, creating a rapidly accelerating cycle. This could ultimately lead to an intelligence explosion and the emergence of superintelligence.
The coming of computers with true humanlike reasoning remains decades in the future, but when the moment of “artificial general intelligence” arrives, the pause will be brief. Once artificial minds achieve the equivalence of the average human IQ of 100, the next step will be machines with an IQ of 500, and then 5,000. We don’t have the vaguest idea what an IQ of 5,000 would mean. And in time, we will build such machines–which will be unlikely to see much difference between humans and houseplants.
– David Gelernter, attributed, “Artificial intelligence isn’t the scary future. It’s the amazing present.”, Chicago Tribune, January 1, 2017
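The accelerating cycle of recursive self-improvement can be illustrated with a toy numerical model. This is an illustration only, not a claim about how real systems would behave: the model simply assumes each generation improves itself at a rate proportional to its current capability.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumption: each generation's improvement rate is proportional to
# its current capability, i.e. c_{n+1} = c_n * (1 + k * c_n).

def self_improvement_trajectory(c0: float, k: float, steps: int) -> list[float]:
    """Return capability levels for successive generations."""
    levels = [c0]
    for _ in range(steps):
        c = levels[-1]
        levels.append(c * (1 + k * c))
    return levels

trajectory = self_improvement_trajectory(c0=1.0, k=0.1, steps=10)
# Early gains are modest, but because the improvement factor itself
# grows with capability, the per-generation growth keeps accelerating.
```

Under these assumptions the growth is faster than exponential: each generation not only improves, but improves at an ever-increasing rate—the "rapidly accelerating cycle" the paragraph above describes.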
Despite decades of global research, AGI has not yet been achieved. At the same time, substantial investment is being directed toward quantum computing. Although still in its early stages, quantum technology—if fully realized—could solve certain classes of problems exponentially faster than today’s classical computers, dramatically accelerating developments in artificial intelligence, big data, and machine learning.
In contrast to AGI, “weak AI” or “narrow AI” is already embedded in everyday life. Consumers have interacted with AI-driven systems for years, from smartphone personal assistants to recommendation algorithms. AI is also widely used in social media platforms to identify and remove false information, hate speech, and extremist content.
Weak or “narrow” AI, in contrast, is a present-day reality. Software controls many facets of daily life and, in some cases, this control presents real issues. One example is the May 2010 “flash crash” that caused a temporary but enormous dip in the market.
— Ryan Calo, Center for Internet and Society, Stanford Law School, August 30, 2011
Many argue that AI will enhance society and simplify daily life, and that jobs lost to automation will ultimately be replaced by new forms of employment. Others fear a dystopian future in which technological progress undermines economic and social stability.
These perspectives remain informed speculation. No one can say with certainty how the AI revolution will ultimately reshape society. What is clear, however, is that businesses are drawn to the prospect of a 24-hour workforce—never sick, never fatigued, and never in need of vacation.
Historical automation offers only limited guidance. While creative destruction has long characterized technological progress—old roles disappearing as new industries emerge—there is uncertainty as to whether AI-driven disruption will follow the same pattern.
One can only hope that companies consider broader societal implications alongside profitability. Technological unemployment driven by AI may lack the “creative” element traditionally associated with creative destruction.
What distinguishes AI from earlier technological shifts—such as the introduction of robots in manufacturing—is its cognitive dimension. With AI support, even professions that currently require advanced degrees may become partially or fully automated. For example, JPMorgan’s AI system COIN uses text analysis to complete in seconds contract-review work that reportedly consumed some 360,000 hours of legal labor each year.
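Systems like COIN are proprietary, but the core idea—machine reading of contract language—can be sketched in miniature. The patterns, clause names, and sample text below are illustrative assumptions, not JPMorgan's actual method.

```python
import re

# Minimal sketch of automated contract review (illustrative only):
# scan agreement text for clauses matching known patterns and surface
# the details for a human reviewer, instead of reading page by page.

CLAUSE_PATTERNS = {
    "interest_rate": re.compile(r"interest rate of ([\d.]+)%", re.IGNORECASE),
    "termination": re.compile(r"terminat\w+ upon ([^.]+)\.", re.IGNORECASE),
}

def extract_clauses(text: str) -> dict[str, list[str]]:
    """Return matched clause details keyed by clause type."""
    return {name: pattern.findall(text)
            for name, pattern in CLAUSE_PATTERNS.items()}

sample = ("The loan carries an interest rate of 4.25% per annum. "
          "This agreement may be terminated upon 30 days written notice.")
found = extract_clauses(sample)
# found["interest_rate"] -> ["4.25"]
# found["termination"]   -> ["30 days written notice"]
```

Even this crude pattern matching hints at why the cognitive dimension matters: the bottleneck being automated is reading and judgment, not physical labor. Production systems use far more sophisticated natural-language models, but the division of labor—machine extraction, human verification—is the same.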
A broad public discussion may therefore be necessary—one that addresses political, philosophical, and even religious dimensions. Few fully understand the opportunities this technology presents, and even fewer grasp its risks. What is certain is that the consequences will include both benefits and drawbacks.
Innovation cannot be halted. History shows that new technologies can generate economic growth and unlock new possibilities. Technology may eliminate certain roles, but it can also free up human resources for new and potentially more meaningful pursuits.
What we should be more concerned about is not necessarily the exponential change in artificial intelligence or robotics, but about the stagnant response in human intelligence.
– Anders Sorman-Nilsson, “Will Artificial Intelligence Take Our Jobs? We Asked A Futurist”, HuffPost, February 16, 2017
