Facebook has been having a rough month (see: extremely not-okay ad targeting and probably selling ads to Russians trying to meddle in the US election, for a start). The latest misstep came to light after Guardian reporter Olivia Solon tweeted that Facebook was using an “engaging” Instagram post of hers in an ad promoting Instagram on Facebook (which just so happens to own Instagram). The problem, to put it mildly, was that Solon’s most engaged post was a screenshot of a rape threat she’d received via email.

Solon posted a screenshot of the violent threat last year. “Sadly this is all too common for women on the internet,” read Solon’s caption for the post, which, according to the Guardian’s Sam Levin, received three likes and more than a dozen comments.

The ad was brought to the internet’s attention after Solon tweeted a photo of the ad, which included the original screenshot underneath the words “See Olivia Solon’s photo and posts from friends on Instagram.” In an email, Solon told me her younger sister and brother also saw the ad.

Among the other serious ethical problems this situation raises, it underscores how “engagement” as decided by a machine can produce some pretty horrible experiences for the people actually engaging with the content. (For instance, a piece on The Verge from a few years back details how Facebook’s scrapbook-like features—like Year in Review or On This Day—sometimes resurface painful memories.)

It’s worth noting that artificial intelligence and algorithms can be used for good, like potentially spotting and providing support to suicidal users. But the main problem is that, for now, those algorithms lack the nuance of a human reviewer. Which is, of course, the whole point of an algorithm: removing the human from the loop.

That’s something Facebook recognizes: following the ProPublica report detailing that Facebook allowed advertisers to directly target people interested in extremely offensive subjects, including anti-Semitism, COO Sheryl Sandberg announced (via Facebook post) that the company will tighten how its ad-targeting tools work and add “more human review and oversight to our automated processes.”

The role that technology giants should play in monitoring their platforms, and making them habitable for all users, is murky. And obviously, humans aren’t perfect either. We have biases and make decisions, consciously or not, based on our subjective experiences—something algorithms are largely designed to eliminate. But at least for now, bringing humans back into the mix seems like a good place to start.

Solon told me the incident “highlights an ongoing problem of algorithmic accountability,” adding that “the oft-repeated excuse from tech companies of ‘Sorry, the algorithm did it’ is starting to wear very thin.”