This is an incomplete doc about search audits (also called search engine audits). It is currently overly self-referential.
Search audits, a form of “algorithm audit”, are systematic approaches to evaluating the performance of a search tool by documenting interactions with the tool in various ways. These audits may be exploratory, like Gerhart’s (2004) development of a methodology and exploration of five controversial subtopics across three search engines (Google, Teoma, and AllTheWeb) and two “multi-searchers”. Or they may examine a single topic much more intensively, like Urman et al. (2022), who constructed 200 virtual agents to search seven queries on one topic across six search engines.
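To make the basic shape of such an audit concrete, here is a minimal sketch (in Python) of the data-collection step: repeatedly issuing a fixed set of queries to multiple engines and archiving the raw results pages for later parsing and annotation. The query set, engine URL templates, and politeness settings are illustrative assumptions rather than details drawn from any study cited here, and real audits (like Urman et al.’s agent-based design) typically use instrumented browsers or dedicated infrastructure rather than plain HTTP requests.

```python
import csv
import time
from datetime import datetime, timezone

import requests

# Hypothetical query set and engine URL templates, for illustration only.
# A real audit would need to respect each engine's terms of service and
# would likely use official APIs or instrumented browsers.
QUERIES = ["climate change causes", "is fluoride safe"]
ENGINES = {
    "duckduckgo_html": "https://html.duckduckgo.com/html/?q={query}",
    "bing": "https://www.bing.com/search?q={query}",
}


def collect_snapshots(out_path="audit_snapshots.csv", pause_seconds=5):
    """Fetch and archive raw result pages for each (engine, query) pair.

    The raw HTML is stored with a timestamp so ranked results can be parsed
    and annotated later; that later coding step is where most audit-specific
    judgment (e.g., labeling a result as "controversial") happens.
    """
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp_utc", "engine", "query", "status", "html"])
        for engine, template in ENGINES.items():
            for query in QUERIES:
                url = template.format(query=requests.utils.quote(query))
                resp = requests.get(
                    url,
                    timeout=30,
                    headers={"User-Agent": "search-audit-sketch/0.1"},
                )
                writer.writerow([
                    datetime.now(timezone.utc).isoformat(),
                    engine,
                    query,
                    resp.status_code,
                    resp.text,
                ])
                time.sleep(pause_seconds)  # be polite; avoid hammering the engines


if __name__ == "__main__":
    collect_snapshots()
```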
See also Metaxa et al. (2021) for a more comprehensive treatment of audits of search engines. Kulshrestha et al. (2017) (also below) and Trielli & Diakopoulos (2018) note the importance of examining the role of “input bias” (from the corpus and from the user, respectively).
Some related non-audit work takes a much smaller set of query-result pairs, perhaps a single example, and conducts an extensive theoretical and functional examination of the causes, effects, and implications (see, for example, Noble (2018), Mulligan & Griffin (2018), or Haider & Rödl (2023)). Other work may focus on aspects of the search experience outside of the search engine results page, like how the search engine presents itself in response to searcher complaints (Griffin & Lurie, 2022).
Here is a very incomplete list of search audits in academic research (the authors may not always use the term “audit”):
Press reporting on search audits includes the following (these audits may be commissioned or performed by the journalists themselves or by external entities):
Gerhart, S. (2004). Do web search engines suppress controversy? First Monday, 9(1). https://doi.org/10.5210/fm.v9i1.1111 [gerhart2004web]
Griffin, D., & Lurie, E. (2022). Search quality complaints and imaginary repair: Control in articulations of Google Search. New Media & Society, 0(0), 14614448221136505. https://doi.org/10.1177/14614448221136505 [griffin2022search]
Haider, J., & Rödl, M. (2023). Google search and the creation of ignorance: The case of the climate crisis. Big Data &Amp; Society, 10(1), 205395172311589. https://doi.org/10.1177/20539517231158997 [haider2023google]
Kulshrestha, J., Eslami, M., Messias, J., Zafar, M. B., Ghosh, S., Gummadi, K. P., & Karahalios, K. (2017). Quantifying search bias: Investigating sources of bias for political searches in social media. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 417–432. [kulshrestha2017quantifying]
Metaxa, D., Park, J. S., Robertson, R. E., Karahalios, K., Wilson, C., Hancock, J., & Sandvig, C. (2021). Auditing algorithms: Understanding algorithmic systems from the outside in. Foundations and Trends® in Human–Computer Interaction, 14(4), 272–344. https://doi.org/10.1561/1100000083 [metaxa2021auditing]
Mulligan, D. K., & Griffin, D. (2018). Rescripting search to respect the right to truth. The Georgetown Law Technology Review, 2(2), 557–584. https://georgetownlawtechreview.org/rescripting-search-to-respect-the-right-to-truth/GLTR-07-2018/ [mulligan2018rescripting]
Noble, S. U. (2018). Algorithms of oppression how search engines reinforce racism. New York University Press. https://nyupress.org/9781479837243/algorithms-of-oppression/ [noble2018algorithms]
Trielli, D., & Diakopoulos, N. (2018). Defining the role of user input bias in personalized platforms. Paper presented at the Algorithmic Personalization and News (APEN18) workshop at the International AAAI Conference on Web and Social Media (ICWSM). https://www.academia.edu/37432632/Defining_the_Role_of_User_Input_Bias_in_Personalized_Platforms [trielli2018defining]
Urman, A., Makhortykh, M., & Ulloa, R. (2022). Auditing the representation of migrants in image web search results. Humanities and Social Sciences Communications, 9(1), 5. https://doi.org/10.1057/s41599-022-01144-1 [urman2022auditing]