still searching for gorillas

    An article yesterday in the New York Times, from Nico Grant and Kashmir Hill, follows up on the racist image labeling in Google’s Photos app from 2015 (mentioned in class): Google’s Photo App Still Can’t Find Gorillas. And Neither Can Apple’s. (2023)

    Original case: Jacky Alciné’s tweeted complaints about Google Photos labeling photos of him and his friend as containing ‘gorillas’ (the original tweets have since been deleted, but they were widely reported, are of public interest, and are available on Archive.org). Google’s response was to disallow such labeling/searching in the tool.

    The article recounts an evaluation of Google Photos, Apple Photos, photo search in Microsoft OneDrive, and Amazon Photos for searches for [gorillas] and other primates ([baboons], [chimpanzees], [orangutans], [monkeys], [lemurs]), as well as other animals more broadly, including [cats] and [kangaroos].1 The key claim is that while the Google and Apple tools work well for many animals, they do not return any images for primates (except for lemurs). (This is also distinct from the performance of, say, Google Images on similar searches.)

    (1)

    Consumers may not need to frequently perform such a search

    • Questions:
      • Why limit the concern to “consumers”2? Students? Researchers?3
      • “may not”? How might we learn this?
      • Why limit the concern to “need”?

    (2) Grant & Hill appear to have interviewed Alciné (“dismayed to learn that Google has still not fully solved the problem and said society puts too much trust in technology”). Did they also reach out to Yonatan Zunger (no longer at Google), given his widely reported, prominent role in Google’s initial response?

    (3) Grant & Hill quote a Google spokesperson by name, along with other company responses. Consider the function of those comments, recalling perhaps the discussion of the Google Search Liaison in Griffin & Lurie (2022).

    (4) Why did Google fail earlier?

    In the gorilla incident, two former Google employees who worked on this technology said the problem was that the company had not put enough photos of Black people in the image collection that it used to train its A.I. system. [emphasis added]

    • Compare that to Noble’s (2018) analysis (of different but related search failures), which points to poor engineering training and racist exclusion.

    (5)

    The Fix?

    While Google worked behind the scenes to improve the technology, it never allowed users to judge those efforts.

    • Another interesting (and somewhat vague) line, highlighting for me the missing discussion of whether aiming for inclusion (here or in general) might ‘improve’ the technology.

    (6)

    the poisoned needle in a haystack

    • Note how this language (quoted from M. Mitchell) may function rhetorically quite similarly to “data void” or “evil unicorn”, though situated differently here.

    See also

    Further discussion of the case

    • Noble (2018) [p. 82]4: “What we know about Google’s responses to racial stereotyping in its products is that it typically denies responsibility or intent to harm, but then it is able to “tweak” or “fix” these aberrations or “glitches” in its systems.”
    • Sundin et al. (2021, p. 4): “Throughout the years, Google has had to deal with various instances in which its search results were criticized for advancing racist, sexist, anti-Semitic, or otherwise offensive values. The impression is that Google employs a haphazard whack-a-mole approach. It only reacts in response to media reports and only if these are publicized widely enough to constitute a problem for their brand. The specific issue is addressed—but only after a delay where the problem is explained away and blamed on users.” [internal endnote omitted]
    • Raji et al. (2022)

    Earlier reporting

    • Machkovech (2015)
    • Simonite (2018)

    Comments elsewhere

    • On Twitter:
      • Nico Grant (author): “raises broader questions about the underlying AI, computer vision, which has permeated throughout our world”
      • Kashmir Hill (author): “raises questions about other unfixed, or unfixable, flaws lurking in services that rely on AI”
      • J. Khadijah Abdurahman: “google search and the apps reliant on its api, collective inability to remedy the canonical example of digital information science ecosystems being structured by antiblackness, except via a manual override indicates there’s larger socio-technical issue at play”
      • M. Mitchell: “I was a ‘proponent’ of removing the ‘gorilla’ label for photo tagging.”

    Footnotes

    1. N.b. They do not indicate whether they also searched the singular form(s). This is probably not of importance, but something I always like to note. ↩︎

    2. This reminds me of the language of ‘consumer search’ that Neeva used when reporting their recent pivot.↩︎

    3. Perhaps there are specialty tools available to conservationists and others?↩︎

    4. Noble includes a screenshot from Alciné’s tweet (p. 7), but incorrectly identifies it as a Google Images search result. The larger analysis from Noble is very applicable to this situation.↩︎

    5. Related post: more than a party trick?↩︎

    References

    Grant, N., & Hill, K. (2023). Google’s photo app still can’t find gorillas. And neither can Apple’s. The New York Times. https://www.nytimes.com/2023/05/22/technology/ai-photo-labels-google-apple.html [grant2023google]

    Griffin, D., & Lurie, E. (2022). Search quality complaints and imaginary repair: Control in articulations of Google Search. New Media & Society, 0(0), 14614448221136505. https://doi.org/10.1177/14614448221136505 [griffin2022search]

    Machkovech, S. (2015). Google dev apologizes after Photos app tags black people as ‘gorillas’. Ars Technica. https://arstechnica.com/information-technology/2015/06/google-dev-apologizes-after-photos-app-tags-black-people-as-gorillas/ [machkovech2015google]

    Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://nyupress.org/9781479837243/algorithms-of-oppression/ [noble2018algorithms]

    Raji, I. D., Kumar, I. E., Horowitz, A., & Selbst, A. (2022, June). The fallacy of AI functionality. 2022 ACM Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3531146.3533158 [raji2022fallacy]

    Seaver, N. (2018). What should an anthropology of algorithms do? Cultural Anthropology, 33(3), 375–385. [seaver2018should]

    Shen, H., DeVos, A., Eslami, M., & Holstein, K. (2021). Everyday algorithm auditing: Understanding the power of everyday users in surfacing harmful algorithmic behaviors. Proc. ACM Hum.-Comput. Interact., 5(CSCW2). https://doi.org/10.1145/3479577 [shen2021everyday]

    Simonite, T. (2018). When it comes to gorillas, Google Photos remains blind. Wired. https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/ [simonite2018gorillas]

    Sundin, O., Lewandowski, D., & Haider, J. (2021). Whose relevance? Web search engines as multisided relevance machines. Journal of the Association for Information Science and Technology. https://doi.org/10.1002/asi.24570 [sundin2021relevance]