    But how do we ground our relations to hallucination?

    August 3rd, 2023

    Reading Klosterman’s “But What If We’re Wrong?” and pursuing “hallucination”. Reflecting on Leahu (2016), Munk et al. (2022), and Rettberg (2022), and trying to “see the world we already inhabit” (Powles & Nissenbaum, 2018).

    Added 2023-08-04 12:07:04:

    @goodside via Twitter on Aug 4, 2023

    “we can’t trust LLMs until we can stop them from hallucinating” says the species that literally dies if you don’t let them go catatonic for hours-long hallucination sessions every night

    References

    Leahu, L. (2016). Ontological surprises: A relational perspective on machine learning. Proceedings of the 2016 ACM Conference on Designing Interactive Systems, 182–186. https://doi.org/10.1145/2901790.2901840

    Munk, A. K., Olesen, A. G., & Jacomy, M. (2022). The thick machine: Anthropological AI between explanation and explication. Big Data & Society, 9(1), 20539517211069891. https://doi.org/10.1177/20539517211069891

    Powles, J., & Nissenbaum, H. (2018). The seductive diversion of “solving” bias in artificial intelligence. OneZero. https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53

    Rettberg, J. W. (2022). Algorithmic failure as a humanities methodology: Machine learning’s mispredictions identify rich cases for qualitative analysis. Big Data & Society, 9(2), 20539517221131290. https://doi.org/10.1177/20539517221131290