We are living in a pivotal period of human history; we are about to transform mankind and life itself. Technology is no longer something that happens outside of us; rather, it is about to move inside of our minds and our bodies.
Machines that can ‘think’ will inevitably change what and how we think, and the rise of artificial intelligence (AI) is sure to ignite a global debate on what human intelligence actually is, and what it means to be human.
We are standing at the take-off point of the exponential curve – this is one of the key realisations delineated in my last book, Technology vs Humanity.
The next 20 years will bring more change to humanity than the previous 300 years.
In this coming ‘machine society’, pretty much everything that can be connected, digitised, automated, virtualised or robotised will be. But in a world where data is the new oil and AI is the new electricity (riffing off Andrew Ng here), what role will we – mere, linear, inefficient and complicated humans – play?
Computers and machines are now becoming capable of seeing, hearing, speaking, and even thinking and learning, albeit in their very un-human, machine-like ways. But when the machines we build become smarter than us – in the sense of almost limitless computational power – what will happen to mankind? How will we ensure human flourishing?
Clearing up the confusion about intelligence
I am concerned that because we have struggled to define what human intelligence actually is, we may now be tilting our debate about AI too much towards either outright techno-optimism or general hyperbole and fear-based narratives.
The American Psychological Association states that there are various types of human intelligence and that “Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning and to overcome obstacles by taking thought”. Harvard psychologist Howard Gardner argued that there are eight kinds of intelligence – and most of them are not based on computational firepower, such as how many calculations per second our brain – or a computer simulating a brain – can perform.
I suggest that when we talk about AI we need to take into account that human intelligence is not just the result of our brain’s massive computing power and its amazing energy efficiency. Rather, our unique intelligences are much more complex than that, and machines are unlikely to ‘become like us’ anytime soon (which certainly does not mean they won’t be dangerous well before that time). As Martin Seligman, Moshé Feldenkrais and many others have similarly stated: we think with the body, not the brain (for more details, see my YouTube talks on AI).
Are humans really just data? Are organisms algorithms?
Human intelligence is, among many other things, defined by our individual, neural, social, emotional, intellectual and experiential intelligences. Human intelligence, in short, is the capacity to acquire capacity. Computing power is only the beginning.
I believe that we are not just ‘fancy machines’ or ‘wetware’ in desperate need of an upgrade. I don’t think that organisms are algorithms, a paradigm frequently presented in Silicon Valley – at least not algorithms as we define them today.
The Austrian-born scientist Hans Moravec’s paradox posits: “Things that machines find difficult are easy for humans, and vice versa.”
But if that’s true, why not attempt to combine man and machine to achieve that perfect symbiosis? This is the plot of an increasing number of Silicon Valley ‘leaders’, inventors, investors, pundits, transhumanists and many technophile futurist colleagues. They believe that man and machine must ultimately converge, in the not-too-distant future, because humans are really just technology, anyway.
Frankly, I think this concept is the epitome of machine-thinking, fuelled by a reductionist approach to humanity that seems to thrive mostly in societies that focus solely on profit and growth (or the state’s power, as in China).
AI vs IA: never mistake a clear view for a short distance
AI is often defined as “the ability of machines to accomplish a task that humans used to do” – and this is where it gets confusing if we start with an overly simplistic definition of intelligence before we even apply that concept to our creations.
Faced with this dilemma, let’s consider the clarification offered by Luciano Floridi, currently Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab at the University of Oxford. “Artificial intelligence can outperform human intelligence when understanding emotional states, intentions, interpretations, deep semantic skills, consciousness, self-awareness or flexible intelligence is not required,” he says.
To this astute summary I would add my own observation that 95 per cent of what is currently presented as ‘AI’ is not really artificial intelligence in the sense of ‘thinking machines’ that could rival human intelligence in its complexity, but rather just intelligent assistance (IA) – and I don’t really see this changing until quantum computers are here. IA is really the low-hanging fruit that drives digital transformation.
Yet within just a few years even IA (sometimes also referred to as ‘narrow AI’) will certainly have a bigger impact on society than the invention of the PC, the Internet and the rise of the mobile phone, mostly because IA will beyond a doubt cause a vast, if hopefully temporary, wave of technological unemployment and will bring about a total reset of education and training. While this is clearly a social and economic challenge, it is not yet a truly existential risk (unlike reaching the point of artificial general intelligence, or AGI).
Some future principles related to AI
Allow me to share my key recommendations at this point, riffing off some of the principles set forth in my 2016 book, Technology vs Humanity.
- Today, the greatest danger is not that machines will take over and treat us like pets (if we’re lucky), but that we will become too much like them – thinking and acting like machines, because it’s all so convenient.
- Technology has no ethics because it does not exist as we do – it only simulates our existence. People and societies without ethics are doomed, and going too deep into too many amazing simulations will turn us into machines ourselves.
- Artificial intelligence is algorithmic and thereby devoid of most things that matter to humans and that define human intelligence (what I call the Androrithms). Sure, an AI can read a million books in a minute, but it will never feel a single emotion while doing so; it has vast data-sucking capabilities but cannot comprehend information that is not data to begin with (something that humans do without even thinking).
- We should not confuse algorithmic superiority with truly superior intelligence, and we should not anthropomorphise machines too much, i.e. treat them as our equals.
- We should not equip machines with the ability to simulate human intelligences such as social, emotional or kinetic intelligence.
Black box challenges
The spectrum of human intelligences extends much further and deeper than we currently know. Yet if we build machines that are infinitely more intelligent (in the computational sense, at least) than we are, they will very likely explore this spectrum in ways we can’t imagine – and then quickly exceed us in unfathomable ways. This black-box problem (i.e. machines solving problems in ways that surpass our own capabilities, to the point where we can no longer follow or question their decisions) will loom larger than we could ever have imagined.
If we then bring into existence an AI that has the potential to rewrite or upgrade itself (i.e. to achieve recursive self-improvement), we may face the era of super-intelligence before we’ve had a chance to ensure that humans can stay in the loop – and that won’t end well.
How will we avoid an AI arms race?
Today AI and related areas such as machine learning and deep learning are not only the world’s hottest investment sector, but also the subject of many ambitious claims made by the governments of the United States, China, India and Russia. AI is a tremendous magnet for money and power. Are we headed towards a ‘Cold War 2’ – an AI arms race that could quickly spread to related fields such as biotechnology, in particular human genome editing? We must work together to prevent this, urgently.
Digital ethics & human values
In less than 20 years we will reach a point where almost everything will be technically feasible. Witness the rise of quantum computing, the exponential progress of nanotechnology and materials science (3D printing is just one example), and the coming convergence of technology and biology. All of these changes will happen gradually, then suddenly (exponentially), which represents a huge challenge to human thinking.
When technology has near-infinite power, the key question will no longer be how or whether something can be done, but why it should be done – and by whom.
Who will be ‘mission control for humanity’?
As we go from feasibility to responsibility, from engineering to ethics, will this world be heaven or hell?
One thing is certain: when everything is hyper-connected and machines are smart beyond human comprehension, there is only one possible future for us – we must become more human, not less.
Embrace technology but don’t become it
by Gerd Leonhard, Futurist, Humanist, Keynote Speaker and Author of Technology vs Humanity. With editing support by Oliver Pickup and Philip Hunziker