Main arguments from the readings

Our readings this week began with a confirmation bias gut check. Clemson University’s online Spot the Troll quiz presents social media accounts and asks users to guess whether each account belongs to a real person or a troll. After each selection, the quiz reveals the answer and highlights the indicators of what is genuine or suspicious about the account.

In his TED Talk, “Beware Online Filter Bubbles”, Eli Pariser identifies the behavior of YouTube and other social media and information platforms that push users toward controversial and/or extremist content to keep them glued to the platform. This essentially prevents people from receiving a balanced flow of information from a variety of sources that could challenge their opinions or worldviews, instead driving them dangerously and irresponsibly further and further down one side of a spectrum of political beliefs or ideologies.

In “Believe”, from Matthew Inman’s beloved comic series The Oatmeal, one character explains the “backfire effect”, the psychological phenomenon of bias against factual information that challenges strongly held worldviews and opinions. The argument goes that we cannot change a person’s core beliefs with facts alone, so in order to effect change in the world, emotionally thoughtful, responsive reactions and approaches to information are needed. This idea is revisited in the academic readings.

Lauren Bryant provides a deeper discussion of filter bubbles, focusing on YouTube’s bias toward suggesting alt-right content, in “The YouTube Algorithm and the Alt-Right Filter Bubble”. She argues that the algorithm deceptively frames alt-right groups and opinions as lying on the same spectrum as more widely held political or pop-culture viewpoints.

In the web report Deep Fakes and Cheap Fakes: The Manipulation of Audio and Video Evidence, Britt Paris and Joan Donovan discuss deep fakes and cheap fakes, the modern AV manipulation technologies heralded as harbingers of the “information apocalypse”, which they describe as sociotechnical means of influencing the interpretation of data. Historically, evidence manipulation is not a new phenomenon because, they claim, truth has always been defined socially, culturally, and politically, to the benefit of the privileged. Deep fakes and cheap fakes can be technically sophisticated or simple, but they are dangerous as tools for changing or maintaining power because they redefine evidence and the expertise needed to define and identify acceptable evidence.

Mike Caulfield offers a student-focused solution to building literacy on the web by teaching the fact-checking capabilities of the internet. He expands upon four practical moves for being a savvy truth-seeker online. First, checking for previous work through fact-checking sites should be a common habit. Second, “going upstream” to the source of information can uncover the cleaner version or facts of a story, wiping away the sensationalism or truth manipulation that sometimes becomes appended to a story as it swims through social media platforms and user biases. Third, he suggests reading laterally; that is, we should find out what is said about websites in order to evaluate their trustworthiness and veracity. Finally, we should circle back when needed to trace claims that falsely simplify information or mislead. Caulfield also urges us to guard against our own confirmation biases.

Next, the introduction to the special issue of Literacy in Composition Studies on literacy, democracy, and fake news discusses the gap between print literacy and the click-and-go literacy of the web. While fake news is not new, the quick virality of misleading or manipulated information is. Rhetoricians, Thomas Miller and Adele Leon claim, are uniquely suited to navigate the differences between the modes of thinking and reasoning we use online and those we use with print text. Our surface presuppositions and expectations are challenged by the nature of networks and online information ecosystems because we overestimate our personal “expert intuition”. They argue that what is needed is a shift to more deliberate, reflective thinking to deal with the overwhelming amount of bias-confirming information hurled at us online.

Finally, David Riche writes of rhetorical vulnerability in “Toward a Theory and Pedagogy of Rhetorical Vulnerability”. Vulnerability, Riche argues, is fundamental to rhetorical interaction because it represents our imperative to respond; what is important is to reflect on that vulnerability. Maneuvering around the term’s negative connotations, Riche traces it back to Gorgias and links vulnerability to the audience’s shared responsibility in the pursuit of persuasion, efficacy, and the common good. In terms of pedagogy, Riche views vulnerability as a kind of risk management for audience encounters, and he encourages discussion- and listening-based approaches that raise awareness of rhetorical vulnerability. Riche also discusses internet trolling in detail, defining it as a genre of rhetoric whose primary aim is to expose rhetorical vulnerability by disrupting the flow of information and communication and drawing as much attention as possible.

Key ideas

Some key ideas for me coming out of these readings concern the new responsibilities of online reading and research. It seems that a sense of wariness and a sense of openness are being encouraged simultaneously. Caulfield’s field guide for web literacy gets a lot of things right, and I think something like it needs to be adapted for young learners who are just starting to use the internet as a resource for school and socializing. This reading dovetailed nicely with Riche, who noted that we need to be mindful of the integrity of sources in addition to the management of our rhetorical vulnerability. In the introduction to the same journal issue, Miller and Leon also remarked on the different modes of reasoning and thinking needed for online communication, and I think these relate to the skills outlined by Caulfield.

I also appreciated the exploration of the politics of evidence and of its interpreters in Paris and Donovan’s report, and I found myself thinking of their work a lot while listening to Radiolab’s most recent episode about Benford’s Law. Apparently, Benford’s Law can potentially identify AV manipulation: numerical values extracted from a digital file are graphed as a frequency distribution of leading digits and compared against the law’s expected curve. It’s amazing. What’s troubling to me, though, is that once it’s established that this law can spot a fake, it won’t take much for manipulators to work around it, drawing an even wider line that demarcates the negotiation of expertise.
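As an illustrative aside (my own sketch, not from the episode or the readings), the first-digit check behind Benford’s Law is simple to state: naturally occurring numbers tend to start with digit d with probability log10(1 + 1/d), so a leading-digit histogram that strays far from that curve can flag tampered data. The sample data (powers of two, which famously follow Benford) and the comparison loop below are my own assumptions for demonstration:

```python
import math
from collections import Counter

def benford_expected(digit):
    """Benford's Law: probability that a value's leading digit is `digit` (1-9)."""
    return math.log10(1 + 1 / digit)

def leading_digit_freqs(values):
    """Observed frequency of each leading digit (1-9) in a list of positive numbers."""
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
    counts = Counter(digits)
    total = sum(counts.values())
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

# Hypothetical "natural" data: powers of 2 are a classic Benford-conforming set.
# In a real forensic check, these would be magnitudes extracted from a media file,
# and a large deviation from the expected curve could flag manipulation.
natural = [2 ** n for n in range(1, 200)]
observed = leading_digit_freqs(natural)
for d in range(1, 10):
    print(d, round(observed[d], 3), round(benford_expected(d), 3))
```

A real detector would also need a goodness-of-fit statistic (e.g., a chi-squared test) to decide how much deviation counts as suspicious; the loop here just prints the observed and expected frequencies side by side.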

Discussion questions

Here are my topics for the Slack discussion.

  1. Bryant quoted Susan Wojcicki’s response to questions about YouTube’s company stance on morality and responsibility, to which she responded, “we don’t want it to be necessarily us saying, we don’t think people should be eating donuts, right. It’s not our place to be doing that”. What was your reaction to reading this? If “engagement monsters” like YouTube behaved more like information-commons libraries, how do you think their response would be different?
  2. Riche writes that awareness of rhetorical vulnerability can emphasize the good intentions of the speaker and the shared responsibility of the audience. However, even though he notes that disappointment is possible, this pursuit of the common good seems to assume the good faith of both sides. How then do we reckon with rhetorical vulnerability in light of agenda-driven or profit-driven communication (e.g., being pushed into filter bubbles) and intentionally false or misleading information?
  3. Talk about your Spot the Troll quiz score.
This week’s readings

  1. Clemson University’s Spot the Troll online quiz: https://spotthetroll.org/
  2. Eli Pariser’s TED Talk, “Beware Online Filter Bubbles”: https://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles?language=en
  3. The Wall Street Journal, “Blue Feed, Red Feed”: https://graphics.wsj.com/blue-feed-red-feed/
  4. Inman “Believe”: https://theoatmeal.com/comics/believe (select the Regular or Classroom-friendly version; Regular includes vulgarities)
  5. Lauren Valentino Bryant, “The YouTube Algorithm and the Alt-Right Filter Bubble”
  6. Data & Society’s Deepfakes & Cheap Fakes report
  7. Mike Caulfield, Web Literacy for Student Fact-Checkers and Other People Who Care about Facts: https://webliteracy.pressbooks.com/
  8. Literacy in Composition Studies Special Issue on Literacy, Democracy, and Fake News: (a) skim Miller & Leon’s introduction and (b) read Riche: https://licsjournal.org/index.php/LiCS/issue/view/13