Back in the day (as in a decade ago), my only presence on social media was a MySpace page. It had a nice animated background featuring some beans (it was the era of being “random”) and automatically played music from my teenage years (probably Bright Eyes) when you visited. It wasn’t the most efficient tool for staying connected, but it was gleefully undramatic, pleasant, and, dare I say it, nice.

“Nice” is not a word many would use to describe social media today. The world-at-your-fingertips reality we inhabit is a double-edged sword. The sharper side of that blade includes the limitless hate that breeds online, the horror show that is the comments section, cyberbullying, the proliferation of fake news, and rampant trolling, for starters.

The role that platforms like Instagram and Facebook should take in curbing unpleasant and potentially dangerous behavior online is, to put it mildly, complicated. The question ranges from the technical (making software nuanced enough to parse out hate without sanitizing political discourse) to the philosophical: who is responsible for deciding what counts as “hate” and “political discourse” in the first place?

And on a more basic level, is it really possible to make social media nice?

Instagram, at least, is trying to make that happen. In a recent Wired interview, Instagram CEO Kevin Systrom outlined a two-step plan to make Instagram, in essence, the nicest darn social media platform on the web. The first part of that plan rolled out in late June and uses artificial intelligence to spot and remove hateful comments. (Interestingly, Taylor Swift’s account served as an early test case, with the system deleting snake emojis from her comments section, and it actually worked.) Or, as Nicholas Thompson wrote in Wired, the “service launched a series of tools that roughly model what would happen if an empathetic high school guidance counselor hacked your phone.”
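
To make the mechanism concrete, here’s a minimal sketch of what comment filtering of this kind can look like. It assumes a hypothetical toxicity_score classifier; the names, word list, and threshold are illustrative stand-ins, not Instagram’s actual implementation:

```python
# A minimal, illustrative sketch of AI-based comment filtering.
# toxicity_score() is a hypothetical stand-in for a trained text
# classifier; a real system learns these patterns from labeled data.

def toxicity_score(comment: str) -> float:
    """Return a toy 0-1 toxicity score for a comment."""
    banned = {"🐍", "hate"}  # toy markers, echoing the snake-emoji test case
    return 1.0 if any(marker in comment for marker in banned) else 0.0

def filter_comments(comments: list[str], threshold: float = 0.8) -> list[str]:
    """Keep only comments scored below the toxicity threshold."""
    return [c for c in comments if toxicity_score(c) < threshold]

print(filter_comments(["Love this photo!", "🐍🐍🐍"]))  # ['Love this photo!']
```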

The second phase has yet to be rolled out, but it will be the rough equivalent of having an optimistic, arguably delusional friend curating your feed. The idea is that the app will spot positive comments and prioritize them in your feed in hopes of creating a domino effect of digital goodwill. In short, the company hopes that having nice things in your feed will make you nicer to others and make everyone more willing to join the conversation.
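
Again, purely as an illustration and not a description of Instagram’s unreleased system, ranking comments by positivity might look something like this sketch, where positivity_score is a hypothetical sentiment model:

```python
# An illustrative sketch of positivity-based ranking. positivity_score()
# is a hypothetical stand-in for a sentiment model; a deployed system
# would use a trained model rather than a tiny word list.

def positivity_score(comment: str) -> float:
    """Return a toy 0-1 positivity score based on a small word list."""
    happy_words = {"love", "great", "beautiful"}  # toy proxy for learned sentiment
    words = comment.lower().split()
    return sum(w in happy_words for w in words) / max(len(words), 1)

def rank_comments(comments: list[str]) -> list[str]:
    """Order comments from most to least positive, so goodwill surfaces first."""
    return sorted(comments, key=positivity_score, reverse=True)

print(rank_comments(["meh", "What a beautiful shot, love it!"]))
```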

It’s not clear whether this will really work, but most people would agree that making the platform nicer, especially given that many of Instagram’s 700 million users are teens, is a worthwhile effort. It is no simple task, though, and possibly a futile one. As Thompson points out, if phase two works, “Instagram could become one of the friendliest places on the internet. Or maybe it will seem too polished and controlled. Or maybe the system will start deleting friendly banter or political speech.” What Thompson seems to be getting at is that even if the tech works, its actual implications are hazy.

Systrom was clear that in trying to curb mean comments, Instagram will only be going after the “really, really bad stuff. I don’t think we’re trying to play anywhere in the area of gray,” he told Thompson. But even going after the “really, really bad” stuff is more loaded than it might seem, as evidenced by the fact that the algorithms Instagram and Facebook use to identify hate speech have already censored reclamatory language—often from marginalized users like members of the LGBTQ community—because it was incorrectly identified as hate speech.

Machines aren’t very good at spotting nuance, but even if they get better, hiding the hate and boosting the feel-good stuff might not be enough to make a social media platform nice. (Not to mention that algorithms are built by people who, consciously or not, carry biases.)

And not to attach self-serving motives to good deeds, but as Thompson points out, a kind platform has capitalistic advantages as well. The kinds of changes these platforms are trying to make are “as good for business as they are for the soul,” he writes. People spend time, and advertisers spend money, on platforms that encourage maximum use, i.e., time spent online, and wouldn’t we all spend more time on a platform where mean things didn’t happen?

Vying for our attention spans through our social feeds, even if they’re full of positivity, is the biggest problem of all, according to former Google ethicist Tristan Harris. Harris runs the nonprofit Time Well Spent, which partnered with Thrive Global to create an ongoing section dedicated to spreading awareness about the addictive tactics apps like Facebook and Instagram use to keep us coming back for more.

When Thompson asked Systrom about Harris, Systrom shrugged off the critique: “Sorry I’m laughing just because I think the idea that anyone here tries to design something that is maliciously addictive is so far-fetched. We try to solve problems for people, and if that means they like to use the product, I think we’ve done our job well.”

But making the platform nice doesn’t necessarily equate to a job well done. Harris told me over the phone that “the thing that Kevin is reacting to is the frame of maliciously trying to addict people. Obviously that is not true—in most cases.” The problem, according to Harris, is that the entire tech industry overlooks the daily onslaught of information, kind or otherwise, that social media platforms profit from.

Even if these measures succeed, and we end up exclusively scrolling through photos of rainbows and butterflies and never have to see a mean comment again, would that fix the other effects of social media, ones that algorithms can’t measure? As Harris told me, “If they really care about people’s safety, and cleaning things up, they ought to care about the protecting and caring of people’s minds, and the safety of people’s social comparison.”

While it’s unlikely that we’ll see major tech companies actively trying to get us to spend less time on their platforms anytime soon, I’m not sure I’d want a platform that was intentionally cheery. I’m vehemently anti-hate speech, and I think social media platforms should try, even if they fail, to find better ways to curb such behavior and take accountability for the ways they’re currently getting it wrong. But as someone who has used social media for almost as long as I’ve used the internet, I think a too-nice platform would flatten how bold and expansive social media can be at its best.

We already curate beautiful overhead shots of our coffees and artfully designed photos of ourselves looking at art, as if nothing could possibly be awry in our personal or political lives outside the frame. If an entire platform took on such a rosy hue, it’s not hard to imagine people engaging in social media escapism even more than they already do. It’s easier, and perhaps more ignorant, to live in a pleasant, livable simulation of our world than to dwell in the reality of what’s going on offline. And on an incredibly basic level (in all meanings of the word), a sanitized platform sounds boring, like the social media version of Celebration, Florida.

So yes, hate on the internet is a serious problem. But it’s a serious problem in real life, too. And striking the right balance is an ethical problem I’m not sure we’ll solve anytime soon: online spaces need to be friendly enough to encourage discourse, not so open that hate speech flourishes, and not so censored that platforms feel sanitized.