Let’s talk about the assumptions upon which Upworthy is built.

Eli Pariser first crossed my feeds when he delivered a TED talk decrying technologies that tailor our online world to what we want to see instead of the one we maybe should see. For example, Pariser noted that two different people searching Google for the term “Egypt” can see two entirely different sets of results, depending on their location, search history, and more: one sees sunny tourism photos, and the other sees the roiling social and political conflict in the country.

Pariser’s conclusion is that these kinds of “filter bubbles” obscure the real world from us and let us live complacent lives. The subtext is that we should be afraid the machines are controlling what we see, that Big Brother is a robot, and that we don’t even know we’re not seeing everything. Wake up, sheeple, etc.

These are potent concerns, and I’m generally inclined to agree with Pariser that we should all be able to make our own decisions about what kind of content we see. But by clicking on tourism photos of Egypt instead of news articles about the political upheaval there, we actually have made a choice. We’ve made thousands of little choices, in fact, that demonstrate what we actually want. It turns out we maybe don’t want to see as many updates on political turmoil in foreign countries as we do pretty photos of tourist attractions. Our filter bubbles accurately reflect our desires.

If that’s the case, if an algorithm has correctly teased out our natural preferences, what is Pariser’s objection? The simplest explanation for his concern is that he thinks we should all have better preferences. That we, in his eyes, prefer the wrong things.

(As an aside, there’s another explanation for why people are quick to demand more political search results even though they don’t actually click on them. People like to look politically engaged even when they aren’t, and decrying politically blind filter bubbles is an easy way to express that desire. In essence, objecting to the filter bubble is our penance for enjoying the benefit of the filter bubble. (Žižek would recognize this: he describes buying fair-trade coffee from Starbucks as, via the consumerist act itself, buying your redemption from being a consumerist. It’s an idea so beautifully formed that I can’t look directly at it for fear of blinding myself, and I have to stop myself from mentioning it in everything I write.))

Anyway, the big problem is that there’s an underlying assumption to Pariser’s logic that goes unchallenged, probably because most people share it. Anger at Google for presenting a sunnier, more fun vision of Egypt only makes sense if you think it is capital-I “Important” to see stories about Egypt’s political situation.

If I wanted to see the protests in Egypt, I wouldn’t search for “Egypt.” I’d search for “Egypt protests.” And I would get exactly what I wanted. Pariser assumes that I am less, that I am not engaged enough with the world, if, when I search for Egypt, I’d prefer to see pyramids and ancient paintings.

Don’t I get to choose what I find Important? Don’t I get to decide which political causes I pay the most attention to? Don’t I have the right to merely skim the news, get informed, and then go about my day looking at pictures of the Sphinx? Who is Eli Pariser to tell me my sense of what is Important is wrong?

Pariser’s worldview is that what is Important is Obvious and possibly even Objective. I do not share that worldview.

Now, how do you think Eli Pariser would respond to a company that spends all of its time, and makes all of its ad revenue, presenting feel-good stories that make the world look like a better place than it might actually be? Presumably, this is just another “filter bubble,” a curator with its own sense of what is Important. But it ignores strife, poverty, and all sorts of ugly stuff all over the world in favor of presenting a reality that looks a lot nicer. Such a company would be obscuring what Eli Pariser might think is Important.

That company is called Upworthy. And Eli Pariser co-founded it. Which is just further proof: in a sense, Pariser doesn’t really care what you want to see; he wants you to see the things he wants you to see.

People who take Pariser’s anti-filter-bubble position present themselves as advocates for openness, democracy, and making our own decisions. Obviously I’m all for those things. But if Upworthy is meant to be a solution to the filter bubble problem, its solution is “you should all care more about the things I think are Important.” That’s a position I can’t get behind.

There are larger questions hiding in this discussion. For instance, is it morally objectionable to be entirely uninformed about global politics? Is it possible to be politically aware without being entirely informed? Is it morally sound for “True Detective” to be the thing you care most about, even though there’s major turmoil in Ukraine? These are, I think, interesting questions, but they all relate to the fundamental issue: is there such a thing as Objective Importance, things that are Important no matter where you are or what kind of life you live? These are the questions that actually threaten the structural integrity of our filter bubbles, the questions that might make our filter bubbles work for us. [And, since I first wrote this piece, we’ve seen at least some demonstration of how filter bubbles have actual political consequences.]

I’m really interested in all of these ideas. I don’t think Upworthy is.

(This post doesn’t really get to some of the other problems with Upworthy: that it’s a content aggregator that doesn’t add much to the content, that its headlines are misleading clickbait, etc. I’ve complained about all that in other forums, so I didn’t reiterate that here. Sorry! I think Upworthy is a big problem!)
