What You Can Do About “The Social Dilemma”

If you haven’t done so already, do yourself a favour and spend 96 minutes tonight watching “The Social Dilemma”. This is a decently put-together Netflix documentary outlining, among other things, how and why the pendulum has swung too far in terms of our use of, and addiction to, social media, driven in large part by algorithms optimized to exploit our data for clicks and profit. Don’t worry, you’ll likely get your 96 minutes back in the form of saved time by not checking your social feed for the rest of the night. Sadly, most people will go back to their regular routine by the time morning rolls around. The dopamine hit that comes with checking their feed is simply too good.

For those of us working in the digital space, there is really nothing new here. However, the fact that this documentary has reached mainstream audiences (ironically, thanks to recommendation algorithms) reminded me just how important it is to continually educate and build up the digital literacy skills of all internet users. The intent of this post is not to provide a synopsis (there are more than enough of those out there), but rather to share some high-level thoughts on what I liked and disliked about it, as well as some concrete actions people can take.

What I Liked About It

  • The people interviewed were former senior social media company employees, investors and/or technologists admitting that things have gotten out of control. This adds a great deal of credibility. I remember watching the U.S. Senate hearings with Mark Zuckerberg and cringing at some of the questions asked by uninformed Senators who were completely unaware of how the Internet works. That is not the case here.
  • It emphasized the sheer scale at which all of our micro-engagements (likes, glances, searches, comments, clicks, hesitations) are monitored across all platforms and mined in real time by AI-driven algorithms optimized to keep us engaged, regardless of the effects this may have on our health or the health of society (see the toy sketch after this list).
  • It covered the business-model dilemma at the heart of all of this: we, “the users”, are the product (i.e. our data), bringing in the profit thanks to all the third-party individuals and organizations (including both state and non-state actors) willing to pay for this data so they can influence us through modern advertising techniques. This can be anything from getting us to purchase a product to small nudges in opinion on a social issue, which lead to behaviour change over time. While it can be argued that the latter can be a good thing if it’s, say, a public health organization basing its messaging on science, it becomes more problematic if it falls into the hands of someone or some organization that has malicious intent or is trying to advance their own idea of what is good for society.
  • It addressed the fact that real people are getting hurt, manipulated, abused, and, in the worst cases, killed as a result of the above.
  • Rather than simply pointing fingers at social media company leaders, it touched on the fact that nobody set out to design these platforms with malicious intent. A lack of ethical control measures? Yes. Willful blindness to issues raised by whistleblowers? Yes. Hubris? Yes. The desire for profit? Yes. But not malicious intent. While that doesn’t justify what these platforms have become and how they are used with malicious intent by others, it does help shift the focus inward to examine our own behaviours, since without us (i.e. the product/data) there would be no platforms.
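To make the “optimized to keep us engaged” point a little more concrete, here is a deliberately tiny Python sketch of a feed ranker whose only objective is engagement. Everything in it (the fields, the weights, the example posts) is invented for illustration; no platform publishes its real ranking code, and the production systems are vastly more complex.

    # Toy illustration only: a feed ranker that scores posts purely by
    # predicted engagement. All fields and weights are invented for this
    # sketch; real platform systems are far more complex.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        predicted_clicks: float      # model's guess at click probability
        predicted_dwell_secs: float  # how long you're expected to linger
        outrage_score: float         # emotionally charged content holds attention

    def engagement_score(post: Post) -> float:
        # Note what is NOT in this objective: accuracy, well-being,
        # or social cost. The only goal is time-on-platform.
        return (2.0 * post.predicted_clicks
                + 0.1 * post.predicted_dwell_secs
                + 1.5 * post.outrage_score)

    def rank_feed(posts: list[Post]) -> list[Post]:
        # Highest predicted engagement first. What you linger on today
        # becomes training data for tomorrow's predictions: the feedback
        # loop the documentary describes.
        return sorted(posts, key=engagement_score, reverse=True)

    feed = [
        Post("local_news", 0.10, 20.0, 0.2),
        Post("rage_bait", 0.35, 45.0, 0.9),
        Post("friend_selfie", 0.25, 10.0, 0.1),
    ]
    for post in rank_feed(feed):
        print(post.author, round(engagement_score(post), 2))

The takeaway from the toy example: nothing in the objective function rewards truth, nuance, or your well-being, so nothing in the ranked output will reflect them either.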

What I Didn’t Like About It

  • I don’t think it went quite far enough in exploring how a certain percentage of social media users (a minority, but often the most active) are simply negative, broken individuals spewing hate at others. That’s a human problem that existed long before social media. Algorithms amplify it, but they can’t be blamed for creating it.
  • It didn’t address the fact that much daily social media usage has moved to so-called “private” social media (i.e. instant messengers). Most of these are driven not by algorithms or recommendation engines but by humans: the forwarding of messages on these platforms can lead to the rapid spread of disinformation. True, there are bot accounts disguised as individuals, but those too are created by humans.
  • It didn’t cover the rapidly growing area of ethical algorithm design and government policy on the ethical use of algorithms. A good example of the latter is the Government of Canada’s Directive on Automated Decision Making.
  • It missed an opportunity to provide tools and resources that people can immediately use to help them reduce the ways in which they contribute to the problem.

What You Can Do To Help Minimize “The Social Dilemma”

  • Turn on and use the screen-time/app-limiting features of your smartphone. On Google/Android phones the built-in app is called Digital Wellbeing; on iPhones it’s Screen Time. Study your own usage and set limits for each app. You can even set a hard stop for all social media past a certain time.
  • Go to your notification settings right now and ask yourself whether you really need to be proactively notified every time “friend/organization X” posts something. What will happen if you miss that selfie? Please, for your own sanity, turn at least three notifications off. Imagine the compounding effect if everyone took this step.
  • If you have young kids (I have three, for the record), seriously ask yourself whether they need to be on public social media. It’s no longer just about “trusting” them to do the right thing. They might be doing everything right, but that doesn’t stop them from being exposed to bad actors and manipulated over time as the algorithms learn their preferences. I realize that many people use the dedicated accounts these platforms provide for kids, but I find that approach only feeds the beast. Imagine all the precise data (i.e. personal behavioural profiles) the machines will have by the time your kids are old enough to create regular “adult” accounts.
  • Discuss, discuss, discuss. Talk about this with your friends, family, and colleagues. We need more people talking about this to keep it on the radar as a pressing issue (not just when there’s a Netflix doc about it). Eventually, the folks who are civically engaged (you, perhaps?) will write to government leaders pushing for more digital literacy education (i.e. self-awareness of our own role), better regulation, and responsible digital engagement built into the public school curriculum. Given the effect algorithms currently have on our lives and on society as a whole, it’s remarkable this hasn’t become a major election issue, other than from the perspective of “influencing” an election.
  • Use basic tools like Snopes.com to verify links before sharing anything that creates an emotional reaction in you. Better yet, do some basic detective work yourself by familiarizing yourself with tools that can be used to detect fake photos, accounts, emails, etc. (refer to slides 22-28 in this presentation I posted).
  • Next time you see something on social media that triggers a reaction, take a deep breath, put down your phone/screen, and go for a walk outside. There is still hope for us.
