
    Tech Policy Press on Choosing Our Words Carefully


    This episode features two segments. In the first, Rebecca Rand speaks with Alina Leidinger, a researcher at the Institute for Logic, Language and Computation at the University of Amsterdam, about her research, conducted with coauthor Richard Rogers, into which stereotypes are moderated and under-moderated in search engine autocompletion. In the second segment, Justin Hendrix speaks with Associated Press investigative journalist Garance Burke about a new chapter in the AP Stylebook offering guidance on how to report on artificial intelligence.

    HTT: Alina Leidinger (website, Twitter)

    The paper in question: Leidinger & Rogers (2023)


    Warning: This paper contains content that may be offensive or upsetting.

    Language technologies that perpetuate stereotypes actively cement social hierarchies. This study enquires into the moderation of stereotypes in autocompletion results by Google, DuckDuckGo and Yahoo! We investigate the moderation of derogatory stereotypes for social groups, examining the content and sentiment of the autocompletions. We thereby demonstrate which categories are highly moderated (i.e., sexual orientation, religious affiliation, political groups and communities or peoples) and which less so (age and gender), both overall and per engine. We find that under-moderated categories contain results with negative sentiment and derogatory stereotypes. We also identify distinctive moderation strategies per engine, with Google and DuckDuckGo moderating greatly and Yahoo! being more permissive. The research has implications both for the moderation of stereotypes in commercial autocompletion tools and for large language models in NLP, particularly the question of which content deserves moderation.


    Leidinger, A., & Rogers, R. (2023). Which stereotypes are moderated and under-moderated in search engine autocompletion? Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1049–1061. https://doi.org/10.1145/3593013.3594062