Algorithmic human engineering

Boredom

A month or two back, it hit me that I haven’t been bored in years. When I was a child in the 90s, boredom was a constant threat: nothing good on TV, no neighbourhood friends around to play with, and so on. VHS tapes, the N64, and dial-up internet could only alleviate boredom so far. I could only watch the same dozen tapes so many times, Nintendo games got old, and the internet wasn’t the bottomless content well it is today; it was also slow.

Today is different. My Roku box provides streaming video on demand. My Facebook feed is always a few taps away via the app on my phone. If I tire of doing real work, YouTube will provide me with videos to watch instead.

You might think, at first, that the main difference between then and now is the amount of content available, or its ease of access. But neither of these is the main issue. With respect to ease of access: as a young kid I had all day to sit in front of the TV; it didn’t matter that I had no smartphone to carry with me. As for the amount of content: Netflix offers a lot of options, but not endlessly many, and it doesn’t take long to scroll through my entire Facebook feed.

Instead, the difference, I think, lies in the algorithmic recommendations. YouTube, for example, isn’t good at alleviating my boredom simply because it has so much content. (And it does have a lot of content! A quick Google search suggests 300 hours of video are uploaded every minute.) Rather, it’s good because whenever I open it up it immediately recommends a few videos it thinks I’ll like, and usually at least two or three of them catch my interest. Similarly, Netflix, Facebook, etc., are all very good at showing me content that engages me.

Low-hum stimulation

All this may seem great, at first. After all, it really sucked to be bored so much as a kid. What isn’t great about smart algorithms learning what I like and constantly feeding me entertainment?

The first issue that strikes me is that the quality of that entertainment just isn’t very good. For example, YouTube is pretty good at figuring out what I’d like to watch, but it inevitably settles into the same shallow rotation of videos from previously watched channels or nearby genres. The videos catch my interest, but they never push me. They never really excite me. They’re never really all that new. They don’t expand my horizons.

So, now, I may never be bored, but I’m never terribly excited, either. Part of it is that the constant barrage of entertainment is numbing, but (I suspect) the bigger part is that what YouTube and other platforms are showing me just isn’t actually all that good. This makes sense: these algorithms are optimized to maximize clicks. They want me clicking through a lot of stuff, over a long period of time. Facebook and YouTube don’t want me to stumble onto great content that catches my attention for hours and takes me so high I need to reflect by myself in silence for hours more, or makes me run outside or to my journal.
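To make that incentive concrete, here is a minimal sketch, in Python, of the kind of objective a click-maximizing recommender optimizes. Everything here is a hypothetical toy: the Video fields and predict_click_probability are stand-ins I’ve invented for illustration, not any platform’s actual model or API.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    channel: str
    watched_channel_before: bool  # hypothetical logged feature
    similarity_to_history: float  # 0.0-1.0, resemblance to my watch history

def predict_click_probability(video: Video) -> float:
    """Toy stand-in for a learned click model: familiar channels and
    near-duplicates of past viewing score highest."""
    score = 0.5 * video.similarity_to_history
    if video.watched_channel_before:
        score += 0.4
    return min(score, 1.0)

def recommend(candidates: list[Video], k: int = 3) -> list[Video]:
    # Rank purely by predicted clicks. Nothing in this objective rewards
    # novelty, depth, or lasting value, so the same shallow rotation of
    # familiar channels and nearby genres wins every time.
    return sorted(candidates, key=predict_click_probability, reverse=True)[:k]
```

Nothing in that objective penalizes showing me the same comfortable genre forever; on the contrary, that is exactly what maximizes it.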

So, never bored, but never terribly excited, either. It’s hard, at this point, not to envision dystopian scenes of humans affixed to their screens, addicted to the low hum of just-good-enough stimulation that keeps them from dreaded boredom but never actually engages them with anything worthwhile, never consuming art or ideas that take them high and expand their horizons. It’s like a life in Nozick’s “experience machine”, except the machine is low-tech and the experiences it affords are pretty lackluster: uninspiring, dull, “meh”.

Re-engineering humanity

You might suggest that the problem is just bad algorithms: with better, more creative algorithms, these platforms could recommend the kind of high-quality content I really want. But this ignores the problem just articulated. Such algorithms wouldn’t be as profitable, since they would generate fewer clicks, and hence platforms have no incentive to develop them.
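To see why, consider a hypothetical “better” variant that blends the toy click model from the earlier sketch with a novelty bonus. The weight is an assumption chosen purely for illustration.

```python
def recommend_with_novelty(candidates: list[Video],
                           novelty_weight: float = 0.6,
                           k: int = 3) -> list[Video]:
    """Hypothetical 'better' recommender: trade some predicted clicks for
    content unlike anything in my watch history. Assumes the Video class
    and predict_click_probability from the sketch above."""
    def score(video: Video) -> float:
        novelty = 1.0 - video.similarity_to_history
        return ((1.0 - novelty_weight) * predict_click_probability(video)
                + novelty_weight * novelty)
    return sorted(candidates, key=score, reverse=True)[:k]
```

By construction, any positive novelty_weight drags the ranking away from the videos the click model scores highest, lowering expected clicks relative to the pure click ranker. A platform graded on clicks has no reason to ship it.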

But there is a bigger concern. I was introduced to this concern through a CBC Ideas podcast, “Re-Engineering Humanity”. The podcast interviews Brett Frischmann and discusses his recent Sir Graham Day Lecture and book Re-Engineering Humanity. Summarizing his concerns, he says (loosely transcribed): “We rely so heavily on digital networked technology … that the degree of freedom we have to be authors of our own lives shrinks … whittled away to a choice about what to plug into, what show to watch on Netflix.” He notes that “more and more the world around you is an engineered set of stimuli that are already predicting and programming responses on your part, and not raising, not giving you the opportunity to reflect, not making it salient, … so you never really think you have reason to deliberate” (loose transcription).

Frischmann here is making several points that intersect with my concerns about boredom. Consider how we all rely on the algorithms used by various platforms. I mostly rely on YouTube to show me new videos; I rarely search, or even think about what I want to watch. This reliance shrinks the space of choices that shape my life: those choices become (as he says) choices about whether to “browse” (read: be fed suggestions by) YouTube, Netflix, Facebook, or Instagram. The algorithms of these platforms predict what I want to see, and they are just good enough at it to provide an addicting reprieve from boredom. My own behaviour is in turn shaped by them: I’m conditioned to open an app as soon as I find myself with nothing to do, to click through the app in a certain way (a way that maximizes my clicks), to follow the recommendations, to watch ads, and not to do any searching myself. All the while, I never bother to reflect on what I’m doing, or on whether there might be something better that the algorithm hasn’t shown me. I simply plug into the app reflexively and “enjoy”, free from boredom for another moment.

Frischmann discusses other examples as well, including how we’re all conditioned to reflexively click “accept” on digital user contracts we never read, or how our adoption of wearable technology like Fitbit conditions us to accept a certain level of surveillance. These are bad, too, but I think the conditioning (the programming, really) by the algorithms on platforms like YouTube is far more pernicious. As the host of the podcast pointed out, this kind of programming doesn’t just shape our tastes in art and music, but also our political views. Think, for example, about how advertisements on Facebook, or the viral memes shown to you, shape your beliefs about important topics like politics, gun laws, healthcare, climate change, or the current pandemic. Sure, many of these posts are “shared” by your friends, but Facebook’s algorithms helped shape your friends list (via suggestions), and Facebook’s algorithms determine which of your friends’ shared posts show up in your newsfeed.

When I was a teen, I discovered new video games through friends. As a young adult recently moved to a new city, I discovered new music through DJs at clubs and on the radio. As an academic, I found my political and professional philosophical views shaped through real-life discussions with colleagues, at both the office and bars, unfiltered by social-media bubbles. Before 2015, when I fell into the trap of algorithmic platforms like YouTube and got a modern smartphone, I spent a lot of time bored, looking for my next interaction with someone who could share something worth experiencing. The lows were low, but the highs were high. The boredom left time to reflect on what mattered to me: when there’s nothing to do but walk off a hangover, you have plenty of time to reflect on the previous day. It was a life filled not with experiences selected by an algorithm designed to keep me clicking, but with experiences curated by other bored, and hence creative, people.