This week, lawyers from Twitter, Google and Facebook appeared before a Senate Judiciary subcommittee to address their platforms’ role in influencing the 2016 election. Much of the hearing centered on the proliferation of fake news on their sites, according to the New York Times.

It’s taken these internet behemoths months to acknowledge just how many people saw fake news or Russian-backed ads on their sites. On Monday, Facebook said more than 126 million users may have seen “inflammatory political ads bought by a Kremlin-linked company,” according to the New York Times. Google said the same company uploaded more than 1,000 videos to YouTube, and Twitter said it had published more than 131,000 messages on its platform.

This is just the start of what will no doubt be a long investigation into how some of our most-used websites influence our lives (and important political outcomes) offline. Still, the hearing did clarify what many already suspected: these tech companies don’t seem to know how to control the machines they’ve created.

While many people argue that the burden to fix these problems should fall on the tech giants themselves, part of the fake news situation is on us. That, as Brian Resnick writes for Vox, is because of the illusory truth effect.

The research-backed psychological effect works like this: the more often you see something, the more likely you are to think it’s true. Even if it’s definitely not.

This “mental heuristic” (a shortcut your brain takes) actually serves us most of the time, Resnick writes, noting that it’s how we remember basic facts about our world, like gravity. But false things can get swept up in it, too.

Resnick cites a recent study from Yale where participants were shown six fake news headlines that looked like regular Facebook posts and asked to rate how accurate they were. Participants were then distracted with an unrelated task, and afterwards saw a new list of headlines to evaluate that included the fake ones they’d seen earlier. Supporting previous findings, when subjects—both Republicans and Democrats—had seen a fake news headline before, they were more likely to think it was true—even a week later, as confirmed by a follow-up experiment.

About 10 percent of participants said a headline was true when they’d seen it before, around 5 percentage points more than among those who hadn’t seen it before.

That may seem like a small increase, but Resnick puts it in perspective: “Facebook and Google reach just about every person in the United States. A 5 percent increase in the number of people saying a fake news headline is true represents millions of people.”

Think about it this way: as of August 2017, two-thirds of Americans reported getting at least some of their news from social media, according to Pew Research Center. That’s a lot of people turning to an unregulated news service, one that’s also been infiltrated by nefarious organizations intent on disrupting our democracy.

“Every time a lie is repeated, it appears slightly more plausible to some people,” Resnick writes. That’s true even for experts, meaning that literally everyone is subject to the illusory truth effect, regardless of how well-versed they are in a subject.

There are other complications too, namely having to do with how our memories work: we might not be able to undo the damage that seeing fake news does to us. Resnick spoke with memory expert Roddy Roediger, a psychologist at Washington University in St. Louis, who told him “when you see a news report that repeats the misinformation and then tries to correct it, you might have people remembering the misinformation because it’s really surprising and interesting, and not remembering the correction.”

Some people and organizations, including Facebook, have advocated for adding banners or flags to indicate whether an article has been disputed. But that might not do the trick either: another study Resnick cites found that this type of alert doesn’t make much of a difference when it comes to the illusory truth effect. Even if we’ve been warned that a story is potentially untrue, we’ll still be more likely to believe it if we’ve seen it before.

Resnick calls on social media publishers to step up, as they are “the newspapers of today.” Or as Alexis Madrigal writes for The Atlantic, we need to stop offloading blame onto algorithms. “The machines have shown they are not up to the task of dealing with rare, breaking news events, and it is unlikely that they will be in the near future. More humans must be added to the decision-making process,” Madrigal writes, “the sooner the better.”

Read more on Vox.