trace-watery-biases

    October 19th, 2022

    Looking at a recent paper from Jill Walker Rettberg [1]—discussing a method of “algorithmic failure”, which “uses the mispredictions of machine learning to identify cases that are of interest for qualitative research”—reminded me of a card in my notes: trace-watery-biases. Rettberg’s use is quite distinct, but it reminds me to re-frame search “failures” and failed searches as opportunities to better understand articulations, constructions, and expectations of search.
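
    To make the shape of that method concrete for myself: a minimal sketch, assuming a corpus that has already been hand-coded, and using a simple scikit-learn classifier rather than the models Rettberg or Munk et al. actually work with (the function name and model choice here are mine, not theirs). Fit the classifier, predict out-of-fold, and keep the cases where the machine and the human coder disagree; those mispredictions are the candidates for close qualitative reading.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_predict
        from sklearn.pipeline import make_pipeline

        def surface_mispredictions(texts, labels):
            """Return (text, human_code, machine_code) for every case a simple
            classifier gets wrong: the mispredictions worth reading closely."""
            model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
            # Out-of-fold predictions, so each text is scored by a model
            # that never saw it during fitting.
            predicted = cross_val_predict(model, texts, labels, cv=5)
            return [
                (text, human, machine)
                for text, human, machine in zip(texts, labels, predicted)
                if human != machine
            ]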

    The card was prompted back in 2018 by a line from Heather M. Roff [2], highlighted on Twitter by Suresh Venkatasubramanian [3]:

    All AIs can do is to bring into relief existing tensions in our everyday lives that we tend to assume away.

    Around the same time, a stray bit of a line from Julia Powles and Helen Nissenbaum jumped out at me [4]:

    The tales of bias are legion: online ads that show men higher-paying jobs; delivery services that skip poor neighborhoods; facial recognition systems that fail people of color; recruitment tools that invisibly filter out women. A problematic self-righteousness surrounds these reports: Through quantification, of course we see the world we already inhabit. Yet each time, there is a sense of shock and awe and a detachment from affected communities in the discovery that systems driven by data about our world replicate and amplify racial, gender, and class inequality. [links omitted, emphasis added]

    The “we see the world we already inhabit” (though of course some already see and feel that world) made me recall the joke about the fish and water; here it is as told by David Foster Wallace:

    There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says, “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes, “What the hell is water?”

    And I recalled, at the time, a fascinating 2016 paper by Lucian Leahu [5] that looks at “failures” of neural networks as opportunities for tracing relations and for “ontological surprise”, seeing new sorts of categories and ways of being:

    Other networks, however, surprised the engineers conducting these experiments. One such example is that of a network trained to detect dumbbells. Indeed, the images output by the neural net contained dumbbells, however they also depicted human arms operating them—see Figure 3. In other words, from the network’s perspective the essential characteristics of a dumbbell include human arms.

    [ . . . ]

    The network encodes a central aspect pertaining to dumbbells: the fact that they can be operated in particular ways by arms. My interpretation is rooted in a relational worldview. From this perspective, what characterizes an entity is not the attributes of that entity but the relations that perform the object as such: the relations through which an object’s identity is performed—a weight emerges as a dumbbell by being used as a training weight, typically by lifting it with one’s arms. In sum, the network in this experiment can be thought to encode a relation between dumbbells and arms; indeed, a constitutive relation. Viewed from a relational point of view, then, this experiment is a success and motivates a further analysis on how we may use neural networks beyond identifying objects.

    +

    The dumbbells experiment surprised the engineers with the presence of human arms in the output images. Although this particular relation (dumbbells-arms) is not particularly surprising, this experiment suggests that it might be possible for machine learning technologies to surprise us by tracing unexpected, indeed emerging, relations between entities. Such tracings may shift ontologies, the ways we understand and categorize the world. This would require us to be open to ontological surprises: e.g., relations and/or categories that 1) are native to specific configurations of machine learning technologies, the contexts in which they are applied, and the human labor necessary to operate the technologies and interpret their results and 2) might be different than what we expect.
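
    The experiment Leahu describes, in which the network is asked to produce images of what it takes a “dumbbell” to be, belongs to the family of feature-visualization or activation-maximization techniques. Below is a minimal sketch of the core move, gradient ascent on an input image toward a single class score, assuming a pretrained torchvision classifier; the class index and the hyperparameters are illustrative, and the published experiments add image priors and regularizers that this omits, which is why their outputs look far more structured.

        import torch
        from torchvision import models

        # A pretrained ImageNet classifier (GoogLeNet, the architecture behind
        # the original dumbbell visualizations).
        model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1).eval()

        TARGET = 543  # "dumbbell" in the standard ImageNet-1k class list

        # Start from noise and ascend the gradient of the target class score,
        # nudging the image toward whatever the network treats as that class.
        img = torch.rand(1, 3, 224, 224, requires_grad=True)
        optimizer = torch.optim.Adam([img], lr=0.05)

        for _ in range(200):
            optimizer.zero_grad()
            score = model(img)[0, TARGET]
            loss = -score + 1e-4 * img.norm()  # maximize the score, keep pixels tame
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                img.clamp_(0, 1)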



    Footnotes

    1. Rettberg’s “Algorithmic failure as a humanities methodology: Machine learning’s mispredictions identify rich cases for qualitative analysis” (2022), in Big Data & Society. https://doi.org/10.1177/20539517221131290 [rettberg2022algorithmic]

      • HT:
      • I replied in-thread:
      • I particularly appreciated “§ Using machine learning against the grain”.

      Rettberg highlights and builds on Munk et al.’s “The Thick Machine: Anthropological AI between explanation and explication” (2022), in Big Data & Society. https://doi.org/10.1177/20539517211069891 [munk2022thick]↩︎

    2. Roff’s “The folly of trolleys: Ethical challenges and autonomous vehicles” (2018), from Brookings. https://www.brookings.edu/research/the-folly-of-trolleys-ethical-challenges-and-autonomous-vehicles/ [roff2018folly]↩︎

    3. Suresh Venkatasubramanian’s tweet highlighting the Roff line.↩︎
    4. Powles & Nissenbaum’s “The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence” (2018), from OneZero. https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53 [powles2018seductive]

      • I shared about this piece contemporaneously:
      ↩︎
    5. Leahu’s “Ontological Surprises: A Relational Perspective on Machine Learning” (2016), in Proceedings of the 2016 ACM Conference on Designing Interactive Systems. https://doi.org/10.1145/2901790.2901840 [leahu2016ontological]↩︎