What do you think about “It”? Is “It” the end of civilization? Is “It” eventually going to rule over humankind?

“It”, “It”, “It” … the formless and shapeless energy of “It” dominates business meetings, think-tanks, academia, conferences and media, with a swagger and a jaunty spring to its step these days. It’s now unforgivably unfashionable for an important policy, business, or even mild personal decision to be made without a servile, obsequious acknowledgement of “Its” presence.

Lest there be any doubt about what “It” is, well, we are obviously speaking about Artificial Intelligence (AI) and the question of AI taking over from humans.

To start with, well, by Jove, you absolutely need to have a spontaneous coin-in-the-slot answer or, even better, a carefully considered ponderous opinion or an incisive perspective on the question. You have no choice: you cannot just irresponsibly shrug your shoulders and say “I don’t quite know yet,” because that might render you unworthy of inclusion at the next meeting, the next interview, the next conference, the next strategy session, the next something-to-which-we-attribute-preposterously-asymmetric-importance.

And therefore, most of us do what we have perfected so well over millennia — we nervously read some random literature, we listen to one illuminated luminary or another, we write down, rehearse, practice, and perfect our lines on the topic. And the more prognostically dark our lines are, the more likely we are to invite the attention and respect that we crave every conscious moment of our lives. Sadly, and unforgivably, our brain does not permit itself the basic courtesy and, let us just say, plain good manners of letting “things” such as AI soak a bit while we wait and locate our authentic (not necessarily impressive) point of view on them.

How pathetically reverential and obediently conformist we have become! And why and for what? So that we can somehow fit into a dim and dull and sanctioned societal mold that has formed over generations? This constant day-to-day inauthenticity savages our hearts and burns our brains. But we cheerfully scrape the skin off our faces and determinedly shred ourselves into ribbons, nervously copying, desperately conforming, cheerfully drowning in the voices and words and opinions and dictums of others and parroting them with a carefully cultivated and practiced conviction.

With that as the background, we now return to the Season’s fashionable question of whether AI will eventually “take over” from humankind. I have patiently let the question soak in my modest brain for a bit without aggressively soliciting a point of view from it, just staying with a simple and worshipful reverence for my experiential facts and actualized personal experience (not someone else’s facts or personal experience, which are to be greatly respected, but they are not mine — I have not felt them and am not able to feel most of them, so they are not authentic to me). And this is the best I can make of this whole AI business — a simple, consciously illiterate, layman’s point of view that I neither pretend nor purport to be accompanied by a shred of penetrative intelligence.

It strikes me that the answer to the Season’s Question lies in a simple fact: AI lacks the one energy that has propelled the evolution (devolution?) of civilization and humankind over centuries, the energy of “self-interest.” It’s the energy that seeks self-gratification, that drives and has driven all human action. Even selflessness is just another manifestation of self-interest. Both selflessness and selfishness flow from the same energy source of self-interest seeking the exhilaration of self-gratification. It is self-interest that takes some of us to a temple or to a church or to a mosque or to a synagogue, or indeed to the mountains or to the water. And it is that same self-interest that deposits some of us into a bar or a pub with, let’s say, somewhat dull and disagreeable company in the evenings. In a strange and simple manner, all pursuits and manifestations of self-interest have the same feel, the same grain, the same texture — everything is the same, everything is perfect, including being perfectly imperfect at times, everything is as it should be.

I would therefore confidently predict (in the usual annoying often-wrong-but-never-in-doubt tone) that AI will always be under the thumb of human beings (and the Heavens know that there are all hues and colors of our species out there that can make a great many good things and also a great many bad things happen, as we have seen over time). Self-interest has been and is and will always be the root of all human action, and that is a depthless chasm that AI can never bridge.

In my view, fine folks, we are and will always be “safe” in our own rotten hands, as we mindlessly drive the degeneracy of human consciousness to unspeakably depthless levels. And yes, AI could “take over,” because history has shown that our hands are capable of anything, including the inexplicable instinct to self-injure.

In case you have unwittingly suffered through reading this entire piece, I unconditionally apologize to you, and I grant you all rights and reactions related to it, from nauseated incredulity and depthless contempt to tacit agreement and mindless repetition, from bafflement to kindred understanding, from mirth to abuse, and all such polarized emotions. I personally have no interest in what befalls this piece of authorship (and that is entirely and predictably driven by my own self-interest!).

(This article strictly represents the writer’s own personal views, most certainly not the views of his organization, and knowingly not the views of anyone else.)

Author(s)

  • Srini Rao

    Srini Rao, Global Leader, Professional Services industry