Semantic Scholar surfaced a short paper (“Sealed Knowledges: A Critical Approach to the Usage of LLMs as Search Engines”) in my “Research Feed”, with this as its automated TLDR:

    It is proposed that doubting the outputs of LLMs can function as a feminist intervention that resists the marginalization and sealing of certain knowledges and perspectives through the usage of LLMs as chatbots.

    Lindemann (2023a):

    This research examines the implications of the usage of large language models (LLMs) as search engines on knowledge. Drawing on feminist theories of knowledge, I argue that LLMs used to generate direct answers to search engine inquiries both rely on and reinforce a disembodied and non-situated view of knowledge. This, it is argued, can lead to a “sealing” of non-dominant knowledges. Through this sealing of knowledges, marginalized voices may be heard even less than before. Lastly, drawing on the works of feminist theorists such as Donna Haraway and Sara Ahmed, the research proposes that doubting the outputs of LLMs can function as a feminist intervention that resists the marginalization and sealing of certain knowledges and perspectives through the usage of LLMs as chatbots. This research as part of a wider discourse on the usage of LLMs as search engines is crucial considering the current trend of major search engine providers to integrate LLMs for the production of direct answers into their search engines.

    I found the author, PhD student Nora Freya Lindemann (ResearchGate | Twitter), sharing a poster version (Lindemann, 2023b) on Twitter.

    I found the paper and poster very interesting, a provocative feminist engagement with Shah & Bender (2022), and shared the following reply:

    @danielsgriffin via Twitter on Aug 29, 2023

    This is great! I’m really curious to read what you find in how different platforms & tools *may* make strides to support *unsealing knowledge*, whether through articulations that help users doubt & dig deeper, providing multiple drafts, or RAG adaptations by marginalized voices?
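
    To make the last part of that question more concrete: one way a retrieval-augmented generation (RAG) adaptation might support unsealing is to answer only from a community-curated corpus and to return the retrieved passages alongside the generated text, so readers can see where a claim comes from and push back on it. The sketch below is my own illustration, not anything proposed in Lindemann (2023a); the corpus, the toy `retrieve` scoring, and the prompt wording are all assumptions.

    ```python
    # Hypothetical sketch (not from Lindemann, 2023a): a RAG-style answer that
    # keeps its sources visible so users can doubt and dig deeper.
    from dataclasses import dataclass

    @dataclass
    class Doc:
        source: str  # who is speaking / where the passage comes from
        text: str

    # Assumption: a small corpus curated by the community whose knowledge
    # would otherwise be "sealed" out of a single direct answer.
    CORPUS = [
        Doc("community-archive/oral-histories", "Accounts of local organizing practices."),
        Doc("community-archive/zines", "Self-published perspectives on platform moderation."),
    ]

    def retrieve(query: str, corpus: list[Doc], k: int = 3) -> list[Doc]:
        """Toy lexical retrieval: rank passages by word overlap with the query."""
        q = set(query.lower().split())
        return sorted(corpus, key=lambda d: -len(q & set(d.text.lower().split())))[:k]

    def answer_with_sources(query: str, llm) -> str:
        """Return a drafted answer *plus* the passages it drew on, rather than
        a single, source-less direct answer. `llm` is any text-in/text-out callable."""
        hits = retrieve(query, CORPUS)
        context = "\n\n".join(f"[{d.source}]\n{d.text}" for d in hits)
        draft = llm(
            "Answer using only the passages below, and say so if they do not "
            f"cover the question.\n\n{context}\n\nQuestion: {query}"
        )
        cited = "\n".join(f"- {d.source}" for d in hits)
        return f"{draft}\n\nDrawn from (check and doubt these):\n{cited}"
    ```

    The point of this design is the return value: the retrieved sources travel with the drafted answer instead of being sealed behind it, and the prompt asks the model to say when the corpus does not cover the question.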


    term: sealed

    The ‘sealing’/‘sealed’ language comes from the German “Versiegelte Oberflächen” (“sealed surfaces”) in Mühlhoff (2018).

    References

    Cotter, K. (2021). “Shadowbanning is not a thing”: Black box gaslighting and the power to independently know and credibly critique algorithms. Information, Communication & Society, 0(0), 1–18. https://doi.org/10.1080/1369118X.2021.1994624 [cotter2021shadowbanning]

    Cotter, K. (2022). Practical knowledge of algorithms: The case of BreadTube. New Media & Society, 1–20. https://doi.org/10.1177/14614448221081802 [cotter2022practical]

    Eslami, M., Karahalios, K., Sandvig, C., Vaccaro, K., Rickman, A., Hamilton, K., & Kirlik, A. (2016). First I "like" it, then I hide it: Folk theories of social feeds. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2371–2382. https://doi.org/10.1145/2858036.2858494 [eslami2016first]

    Eyert, F., Irgmaier, F., & Ulbricht, L. (2022). Extending the framework of algorithmic regulation. The Uber case. Regulation & Governance, 16(1), 23–44. https://doi.org/10.1111/rego.12371 [eyert2022extending]

    Lindemann, N. F. (2023a, August). Sealed knowledges: A critical approach to the usage of LLMs as search engines. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. https://doi.org/10.1145/3600211.3604737 [lindemann2023sealed_paper]

    Lindemann, N. F. (2023b). Sealed knowledges: A critical approach to the usage of LLMs as search engines. https://doi.org/10.13140/RG.2.2.18050.04800 [lindemann2023sealed_poster]

    Mühlhoff, R. (2018). Digitale Entmündigung und User Experience Design [Digital incapacitation and user experience design]. Leviathan, 46(4), 551–574. https://philpapers.org/archive/MHLDEU.pdf [mühlhoff2018digitale]

    Shah, C., & Bender, E. M. (2022, March). Situating search. ACM SIGIR Conference on Human Information Interaction and Retrieval. https://doi.org/10.1145/3498366.3505816 [shah2022situating]