Have you ever done something you knew you shouldn’t have? If you have, don’t worry — we’ve all been there! The science of self-sabotage is well-documented, and it’s reflected in our behaviors.

This is because of cognitive dissonance. We each hold a particular world-view, and we avoid anything that conflicts with it. When something is misaligned with that world-view, we may even self-sabotage to resolve the tension.

This is why we sometimes take actions that are counterproductive. If your diet consists of unhealthy junk food, it may be difficult to switch to a wholesome one, because it’s not what you’re used to; the break in continuity creates cognitive dissonance.

So what does this have to do with AI? An AI system can be described as an “inference engine” that seeks to minimize “error.” The system maintains a model of the world, the model’s predictions are constantly compared with real observations, and the resulting difference, or prediction error, is driven down. Many researchers describe the human brain in a similar fashion:

“The brain is an ‘inference engine’ that seeks to minimize ‘prediction error.’”
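To make that loop concrete, here is a minimal sketch, in Python, of prediction-error minimization: an internal “belief” is repeatedly compared with an observation, and each comparison nudges the belief so the gap shrinks. The names (belief, observation, learning_rate) and the whole toy setup are illustrative assumptions, not the workings of any particular AI system.

```python
# Minimal sketch of prediction-error minimization (illustrative only).
# An internal model predicts the world, compares its prediction with an
# observation, and updates itself to shrink the gap.

def minimize_prediction_error(observation: float,
                              belief: float = 0.0,
                              learning_rate: float = 0.1,
                              steps: int = 50) -> float:
    """Iteratively update an internal 'belief' so its prediction error shrinks."""
    for _ in range(steps):
        prediction = belief                # the model's guess about the world
        error = observation - prediction   # prediction error ("surprise")
        belief += learning_rate * error     # nudge the model toward the observation
    return belief

if __name__ == "__main__":
    # The world says 10.0; the engine starts far away and converges toward it.
    print(minimize_prediction_error(observation=10.0))  # close to 10.0 after 50 steps
```

The same pattern sits underneath gradient-descent training of real models: the engine settles wherever its updates drive the error lowest, whether or not that resting point is actually good for it.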

Our “engine,” or world-view, may therefore be tuned wrong, yet we’ll still be geared to preserve and maintain it. Suppose you’re a heavy drinker (7-14+ drinks per week) and you learn that any amount of alcohol is toxic to the body, so you decide to cut back. As soon as your body registers that the poison has stopped, the “error” spikes: you face withdrawal. To get rid of the withdrawal and return to homeostasis, the state your mis-tuned engine treats as error-free, you drink again.

It is therefore often necessary to break through a world-view and change your model of the world. If your model says “I like fast food,” “I like binge drinking,” or “I like spending all my time online,” then operating on that engine will only cause problems.

All living things “fiercely resist” entropy and dissolution. After all, being an organized entity distinct from the rest of the world is what it means to be alive. But beyond resisting entropy, we also resist anything contrary to our internal world-views. When those world-views are wrong, the result is called “false inference,” and the term applies to people and to AI alike.

Further, we use “cognitive membranes,” or “Markov blankets,” to separate our world-view from raw reality. In probabilistic models, a Markov blanket is the boundary of variables (roughly, the senses and actions) that screens an entity’s internal states off from everything outside it. If all we knew were reality itself, we wouldn’t exist as ourselves; we would simply be. Our biases and flaws literally create us; they are part of the cloaks we use to separate us from nothingness.
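In a Bayesian network, the Markov blanket of a node is its parents, its children, and its children’s other parents; conditioned on that blanket, the node is independent of the rest of the network. Here is a minimal sketch using an invented toy graph of an agent and its world; the node names and graph structure are assumptions for illustration only.

```python
# Minimal sketch of a Markov blanket in a directed graphical model
# (parents + children + co-parents). The toy graph is invented for illustration.

def markov_blanket(node: str, parents: dict[str, set[str]]) -> set[str]:
    """Return the Markov blanket of `node`, given a mapping child -> set of parents."""
    children = {c for c, ps in parents.items() if node in ps}
    co_parents = {p for c in children for p in parents[c]} - {node}
    return parents.get(node, set()) | children | co_parents

if __name__ == "__main__":
    # A crude agent loop: the world drives the senses, the senses drive the self,
    # the self drives its actions, and actions plus world produce an outcome.
    graph = {
        "senses": {"world"},
        "self": {"senses"},
        "actions": {"self"},
        "outcome": {"actions", "world"},
    }
    print(sorted(markov_blanket("self", graph)))  # ['actions', 'senses']
```

In this toy graph the “self” node never touches the “world” node directly; everything it knows or does passes through its senses and actions, which is exactly the kind of boundary described above as a cognitive membrane.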

However, when those cloaks (cognitive membranes, Markov blankets, beliefs) become skewed, they become our worst enemy. To thrive, we need to throw off those cloaks, do the opposite of what we had been doing, and fiercely resist those false inferences.