For years, some mental health experts have warned that the rise of disinformation and so-called “fake news” can have a serious effect on mental health. It’s “often inflammatory in nature and can elicit feelings of anger, suspicion, anxiety, and even depression by distorting our thinking,” psychiatrist Dr. Vasilis Pozios told Psycom. 

People who recognize the falsehood of disinformation can also experience “anger and frustration, especially if the reader/viewer feels powerless in the face of attempts to manipulate public opinion by way of fake news,” Pozios added.

The Covid-19 pandemic has exacerbated the problem. Numerous studies have found that disinformation about the virus not only leads people to take dangerous, unproven steps to try to protect their health, but also causes psychological problems. “Panic, depression, fear, fatigue, and the risk of infection influence psychological distress and emotional overload,” one study found. Another noted: “Repeated exposure to this stream of misinformation may affect the construct of external reality. This may lead to a delusion-like experience, which has been linked to anxiety and social media overuse.”

Through my work at the Center for Human-Compatible Artificial Intelligence (CHAI), I focus on ways to combat the spread of false information. Action is needed from governments, Big Tech leaders, businesses, and individuals. The more we do to tackle the “crisis of trust and truth,” the better job we’ll do of protecting mental health.

Educate

There’s much discussion about the need to help people discern fact from fiction and determine what counts as a reliable source. But the education we need goes beyond this.

It’s time to educate people about how the new information ecosystem works.

In previous generations, people got information from newspapers, radio, and TV. Today’s information ecosystem differs in a crucial way: those forms of media don’t “read” you while you’re reading them. Although reading websites and social media may feel like a one-way process, these platforms continually gather information about us.

People need to learn how Facebook, Amazon, Google, Twitter, Apple, TikTok, and Instagram “personalize” their news and information feeds using algorithms. Even tech-savvy young people often don’t know how this works, researchers have found. Kids should be taught from a young age how these algorithms work, with greater detail introduced in later grades. Adults should be educated as well. One way for this to happen is for businesses to make annual courses in media literacy and disinformation part of workplace learning.
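To make the mechanism concrete, here is a minimal Python sketch of the kind of engagement-driven ranking these platforms rely on. It is only an illustration: the class names, fields, and scoring rule are my own assumptions for teaching purposes, not any platform’s actual code.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    topic: str
    predicted_engagement: float  # predicted click/share probability (illustrative)

@dataclass
class UserProfile:
    # Running count of how often this user engaged with each topic:
    # the "reading you while you read" data the platform accumulates.
    topic_engagement: dict = field(default_factory=dict)

    def record_click(self, topic: str) -> None:
        self.topic_engagement[topic] = self.topic_engagement.get(topic, 0) + 1

def rank_feed(user: UserProfile, candidates: list[Post]) -> list[Post]:
    """Order posts by expected engagement for this particular user.

    Content resembling what the user already reacted to gets boosted,
    which is how sensational or false material can come to dominate a feed.
    """
    def score(post: Post) -> float:
        affinity = user.topic_engagement.get(post.topic, 0)
        return post.predicted_engagement * (1 + affinity)

    return sorted(candidates, key=score, reverse=True)

# A user who has clicked on "miracle cure" posts keeps getting served more of them.
user = UserProfile()
for _ in range(5):
    user.record_click("miracle-cure")

feed = rank_feed(user, [
    Post("a", "miracle-cure", predicted_engagement=0.4),
    Post("b", "public-health", predicted_engagement=0.6),
])
print([p.post_id for p in feed])  # ['a', 'b'], even though 'b' is intrinsically more engaging
```

The point of the example is simply that personalization is a feedback loop: every interaction the system observes changes what it shows next.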

Regulate

In the wake of recent testimony from Facebook whistleblower Frances Haugen, calls have grown for the government to regulate major social media platforms. It’s high time. Our existing laws were not designed for an era in which lies could spread so quickly, and with such power, to so many people. 

Which types of regulation are most important? They fall into two categories. First, there must be transparency. Big Tech platforms should be required to share their methods and data with nonprofits or universities that can study the psychological and social effects of these algorithms. It’s time to end the era of secrecy around such powerful forces shaping our society.

Second, Big Tech platforms must be held liable for the harms they cause when they use their algorithms to spread dangerous disinformation. Even if the platforms themselves don’t create the content, their algorithms ensure that vulnerable people will be influenced by it — from lies about vaccines to hateful rhetoric about minority groups and much more. This means amending Section 230 of the Communications Decency Act. (For much more on this, see the report I co-authored with my Berkeley colleague Camille Carlton, Technology Solutions to Promote Information Integrity.)

All of us can be part of the effort to call on lawmakers to enact such changes.

Innovate

Finally, developers should create new types of algorithms that operate differently and cause less harm. For example, at CHAI we call for “explicit uncertainty” at the core of how these systems operate. Rather than being given a fixed objective of keeping people addicted in order to satisfy an ad model, platforms should be built with no fixed objective. Instead, they should be designed to conform to how individuals choose to use them, and programmed in such a way that they can be redirected quickly if they are found to be causing harm.
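As a rough illustration of that idea, here is a hypothetical Python sketch of a recommender that keeps an explicit, updatable belief about what the user actually wants and holds back content when it is unsure. The structure and names are my own simplification for explanation, not CHAI’s proposal or any platform’s implementation.

```python
import random

class UncertainRecommender:
    """Toy recommender with explicit uncertainty about the user's objective.

    Instead of optimizing a fixed engagement target, it maintains a belief
    over which topics the user genuinely values, updates that belief only
    from explicit feedback, and withholds content it is not confident about.
    """

    def __init__(self, topics):
        # Start maximally uncertain: a uniform belief over all topics.
        self.belief = {t: 1.0 / len(topics) for t in topics}

    def update(self, topic: str, wanted: bool) -> None:
        """Explicit feedback ('show me more' / 'show me less') shifts the belief."""
        self.belief[topic] *= 2.0 if wanted else 0.5
        total = sum(self.belief.values())
        self.belief = {t: p / total for t, p in self.belief.items()}

    def recommend(self, candidates, confidence_threshold: float = 0.5):
        """Recommend only topics the system is reasonably sure the user values.

        While the belief is too uncertain it recommends nothing, so the
        system can be corrected before it starts amplifying harmful content.
        """
        eligible = [c for c in candidates if self.belief.get(c, 0.0) >= confidence_threshold]
        return random.choice(eligible) if eligible else None

rec = UncertainRecommender(["health", "politics", "sports"])
print(rec.recommend(["health", "politics"]))  # None: too uncertain to push anything yet
rec.update("health", wanted=True)
rec.update("politics", wanted=False)
print(rec.recommend(["health", "politics"]))  # "health", now that the belief supports it
```

The key design choice is that the system’s objective is learned from the user’s explicit corrections rather than fixed in advance, which is what allows it to be redirected quickly if it turns out to be causing harm.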

It is, of course, a complex task. But it can be done. The same ingenuity that led to the creation of these platforms can be used to combat disinformation and protect users’ psychological well-being. The more people from all walks of life join this effort, the more we’ll fight back against the scourge of disinformation — and the more likely we’ll all be to thrive.