search automation bias (SAB)

    November 10th, 2022

    The below is from a short Twitter thread of mine from 2017-05-23:

    Is there a term to refer to the seeming authoritativeness of algorithmic expertise of a list of organic and inorganic search results?

    The clean aesthetic of search results that reduces the cost and shortcuts the processing requirements when reading Google answer boxes?

    Or stopping at the first result?

    Our performed unwillingness to do the costly contextualization of the results of a given query with how we know - if asked - it works?

    How do we reference the difficulty in disentangling what “Google says” from what a page, which Google says the web values, says?

    What do we call the production of epistemic automaticity from Google’s algorithms’ automated search results?

    2022-11-10 Notes:

    Search automation bias (SAB) is one possible term (though my questions above point to ideas probably worth disentangling; for instance, my larger questions are broader than “position bias”—pointing also to the “clean aesthetic”, thinking past the context and construction of search, and differentiating the search engine’s claimed credencing from the whole experience of query => “search media”1 => reading the page2).

    I put some remarks from others’ research below. I’d love to hear of other terms or ways of thinking through this (or of doing search differently), whether newly proposed by you or in literature/practice that I’ve missed or failed to recall. SAB seems increasingly common in the Google Web Search paradigm—with instant answers, rich components, and featured snippets—though perhaps less so where people perceive/believe spam to have clouded the results3. I think the risks from SAB are likely lower for some types of searches or searches within particular contexts (like what I study in my dissertation: data engineers searching for work) and perhaps higher in others (see, for example, a working paper from Lurie & Mulligan4 and their section on defining the search results that are “likely to mislead”) or in searches with representational harms (Noble 2018)5.

    I wrote the 2017 thread above while working on Mulligan & Griffin (2018)6—re Google returning Holocaust-denier search pages at the top of the search results for the query [did the holocaust happen] (Cadwalladr 2016)7.

    See also:

    Vaidhyanathan (2011, 15)8

    our habits (trust, inertia, impatience) keep us from clicking past the first page of search results

    Sundin & Carlsson (2016)9

    … and if you put trust in Google’s relevance criteria, as a consequence you outsource critical assessment of information to the information infrastructure and, more precisely, to the algorithms of the search engines

    Noble (2018, 116)10

    …search results belie any ability to intercede in the framing of a question itself. [ . . . ] What we find in search engines about people and culture is important. They oversimplify complex phenomena. They obscure any struggle over understanding, and they can mask history. Search results can reframe our thinking and deny us the ability to engage deeply with essential information and knowledge we need, knowledge that has traditionally been learned through teachers, books, history, and experience. [em added]

    Haider & Sundin (2019, 24)11

    shifting the locus of trust from people and institutions to a technology that aims at merging and relocating it

    Haider & Sundin (2019, 33-34)12 — see the full paragraph (which pulls together White (2016, p. 65)13 on “position bias” (White: “also referred to as ‘trust’ bias or ‘presentation bias’”); Pan et al. 200714; Schultheiß et al. 201815; and Höchstötter & Lewandowski 200916), starting with:

    Another line of research investigates what people choose from the search engine results page – often referred to as SERP – and why they choose as they do. This work convincingly shows that how people choose links is primarily based on where these links are located on the search engine results page.

    Tripodi (2022, 116)17:

    …conflates the explorative search processes—searches that embody learning and investigating—with queries focused on fact retrieval and verification.[40] Further, as Google has worked to “oversimplify complex phenomena” and to prioritize profits over societal engagement with complicated ideas, the tech giant has transformed itself from an exploratory platform into one designed around verification…[41]


    Fn40. [Marchionini 2006]18

    Fn41. [Noble 201819; Haider & Sundin 201920]

    Narayanan & De Cremer (2022, 2)21:

    users of search engines act as if search engine algorithms are providers of testimony, and acquire or alter beliefs on the basis of this apparent testimony


    2022-11-11 Edit: Added line from Sundin & Carlsson (2016)22


    Notes and References


    Footnotes

    1. Metaxa, Park, Landay, & Hancock’s Search Media and Elections: A Longitudinal Investigation of Political Search Results in the 2018 U.S. Elections (2019), in Proc. ACM Hum.-Comput. Interact. doi:10.1145/3359231 [🚨 paywalled, author copy at Stanford’s Social Media Lab] [metaxa2019search]↩︎

    2. Distinguishing the search results from the complete search, including “the complex impact a page of results can have on users” (Metaxa et al. 2019)[metaxa2019search]. Alternately worded, in Mulligan & Griffin (2018, 567)[mulligan2018rescripting]:

      results-of-search (the results of the entire query-to-conception experience of conducting a search and interpreting search results)

      ↩︎
    3. Are the claims that Google is dying (Brereton 2022)[brereton2022google] showing up in research on the use of Google?

      Chayka (2022)[chayka2022google]:

      Brereton’s post—which ended “Google is dead. Long live Google + ‘site:reddit.com’”—became the No. 10 most upvoted link ever on the tech-industry discussion board Hacker News. No. 11 is a complaint about Google’s search results looking too similar to its ads, while No. 12 is a link to an alternative, indie search engine. Clearly, others share Brereton’s sense of search-engine discontentment. [Algolia link not in the original]

      ↩︎
    4. Lurie & Mulligan’s Searching for Representation: A sociotechnical audit of googling for members of U.S. Congress [DRAFT] (2021) [lurie2021searching_draft]↩︎

    5. Noble’s Algorithms of Oppression How Search Engines Reinforce Racism (2018), from New York University Press. [book] [noble2018algorithms]↩︎

    6. Mulligan & Griffin’s Rescripting Search to Respect the Right to Truth (2018), in The Georgetown Law Technology Review. [direct PDF link] [mulligan2018rescripting]↩︎

    7. Cadwalladr’s Google is not ‘just’ a platform. It frames, shapes and distorts how we see the world (2016), in The Guardian. [cadwalladr2016googleb]↩︎

    8. Vaidhyanathan’s The Googlization of everything:(and why we should worry) (2011), from University of California Press. [book] doi:10.1525/9780520948693 [vaidhyanathan2011googlization]↩︎

    9. Sundin & Carlsson’s Outsourcing trust to the information infrastructure in schools (2016), in JD. doi:10.1108/JD-12-2015-0148 [🚨 paywalled, author copy available at Lund University’s Research Portal] [sundin2016outsourcing]↩︎

    10. Noble’s Algorithms of Oppression How Search Engines Reinforce Racism (2018), from New York University Press. [book] [noble2018algorithms]↩︎

    11. Haider & Sundin’s Invisible Search and Online Search Engines: The ubiquity of search in everyday life (2019), from Routledge. [open access book] doi:10.4324/9780429448546 [haider2019invisible]↩︎

    12. Haider & Sundin’s Invisible Search and Online Search Engines: The ubiquity of search in everyday life (2019), from Routledge. [open access book] doi:10.4324/9780429448546 [haider2019invisible]↩︎

    13. White’s Interactions with Search Systems (2016), from Cambridge University Press. [book; author copy at ] doi:10.1017/CBO9781139525305 [white2016interactions]↩︎

    14. Pan et al.’s In Google we trust: Users’ decisions on rank, position, and relevance (2007), in Journal of computer-mediated communication. doi:10.1111/j.1083-6101.2007.00351.x [pan2007google]↩︎

    15. Schultheiß, Sünkler, & Lewandowski’s We still trust Google, but less than 10 years ago: An eye-tracking study (2018), in Information Research. [schultheiß2018still]↩︎

    16. Höchstötter & Lewandowski’s What users see – Structures in search engine results pages (2009), in Information Sciences. doi:10.1016/j.ins.2009.01.028 [pre-print] [höchstötter2009users]↩︎

    17. Tripodi’s The Propagandists’ Playbook: How Conservative Elites Manipulate Search and Threaten Democracy (2022), from Yale University Press. [book] [tripodi2022propagandists]↩︎

    18. Marchionini’s Exploratory search: From Finding to Understanding (2006), in Commun. ACM. doi:10.1145/1121949.1121979 [🚨 paywalled, author copy at ResearchGate] [marchionini2006exploratory]↩︎

    19. Noble’s Algorithms of Oppression How Search Engines Reinforce Racism (2018), from New York University Press. [book] [noble2018algorithms]↩︎

    20. Haider & Sundin’s Invisible Search and Online Search Engines: The ubiquity of search in everyday life (2019), from Routledge. [open access book] doi:10.4324/9780429448546 [haider2019invisible]↩︎

    21. Narayanan & De Cremer’s “Google Told Me So!” On the Bent Testimony of Search Engine Algorithms (2022), in Philosophy & Technology. doi:10.1007/s13347-022-00521-7 [🚨 paywalled, email author for a copy] [narayanan2022google]↩︎

    22. Sundin & Carlsson’s Outsourcing trust to the information infrastructure in schools (2016), in JD. doi:10.1108/JD-12-2015-0148 [🚨 paywalled, author copy available at Lund University’s Research Portal] [sundin2016outsourcing]↩︎