There are two recent examples of search autocompletes (search predictions or search suggestions) in Google Search being used to encourage people to think or feel a particular way. Even if these uses are effective or well-meaning, I think the claims are unfounded (or at least incomplete), because (1) search queries are not a perfect predictor of search intent and (2) autocompletes are not a perfect picture of actual searches.
Google[flights to => flights to israel]
Google[why do => why does the us support israel]
It was easy to verify the actual autocompletes via searches of my own. Others also verified them on social media.
While perhaps discursively powerful (and maybe reflective of actual underlying reality), the claims made from these Google autocompletes or suggested searches are not, I think, well-grounded. Autocomplete is designed to reduce keystrokes. While it can be used, as Francesca Tripodi (website | Twitter) notes, to add context and texture “to what’s going on in the world” (2022), it is not a comprehensive reflection of searching.
Search queries are not a perfect predictor of search intent
Autocompletes are not a perfect picture of actual searches
Via Google Search Help > How Google autocomplete predictions work:
“These systems try to identify predictions that are violent, sexually explicit, hateful, disparaging, or dangerous, or which lead to such content.”
It is possible that more popular topical queries, expressing a wide range of [why do…] completions, are removed by these systems.
See commentary from Rosie Graham (website | Twitter) (2023) on these “exemption policies” (quoted below, in part).
Note: Some autocompletes depend more or less on localization. You can explore this by changing your location through a VPN (as suggested down-thread in the first tweet) or, in Chromium browsers, through developer tools: Documentation > Chrome DevTools > Sensors: Emulate device sensors > Override geolocation
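If you would rather probe localization from code, here is a minimal sketch against Google’s unofficial suggest endpoint (the same one browsers use for search-box predictions). The hl (language) and gl (country) parameters are my assumptions about how to vary the locale; the endpoint is undocumented, and Google may ignore these parameters or change the response format at any time.

# Sketch: probe how predictions shift with locale, via Google's
# unofficial suggest endpoint. hl/gl are assumed locale knobs.
import requests

def get_predictions(query, hl="en", gl="us"):
    """Fetch autocomplete predictions for `query` under a given locale."""
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": query, "hl": hl, "gl": gl},
        timeout=10,
    )
    resp.raise_for_status()
    # The firefox client returns OpenSearch-style JSON:
    # [query, [prediction, prediction, ...]]
    return resp.json()[1]

print(get_predictions("flights to", gl="us"))
print(get_predictions("flights to", gl="gb"))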
suggests
See Ronald Robertson (website | Twitter)’s suggests library for exploring Google autocompletes programmatically (Robertson et al., 2019).
>>> import suggests
>>> s = suggests.get_suggests('flights to', source='google')
2023-10-10 13:39:09,681 | 35036 | INFO | suggests.logger | google | flights to
>>> s['suggests']
['flights to israel', 'flights to vegas', 'flights to hawaii', 'flights to tel aviv', 'flights to vegas from seattle', 'flights to san diego', 'flights to miami', 'flights to las vegas', 'flights to maui', 'flights to hawaii from seattle']
>>> s = suggests.get_suggests('why do', source='google')
2023-10-10 13:38:16,401 | 35036 | INFO | suggests.logger | google | why do
>>> s['suggests']
['why does the us support israel', 'why does iran hate israel', 'why do dogs eat grass', 'why do dogs lick you', 'why do i sweat so much', 'why does my stomach hurt', 'why does egypt blockade gaza', 'why do cats knead', 'why do cats purr', 'why do cats make biscuits']
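Robertson et al. (2019) go beyond one-shot queries like these: they build suggestion networks by recursively feeding returned suggestions back in as new queries. Below is a rough sketch of that idea, assuming only the get_suggests call demonstrated above; the breadth-first traversal and depth limit are my own choices, not the paper’s method verbatim.

# Sketch: recursive algorithm interrogation. Each returned suggestion
# becomes a new query, yielding (query, suggestion) edges of a small
# suggestion network.
import suggests

def suggestion_network(root, depth=1, source="google"):
    seen, frontier = {root}, [root]
    edges = []
    for _ in range(depth):
        next_frontier = []
        for query in frontier:
            for s in suggests.get_suggests(query, source=source)["suggests"]:
                edges.append((query, s))
                if s not in seen:
                    seen.add(s)
                    next_frontier.append(s)
        frontier = next_frontier
    return edges

# depth=2 issues one request per first-level suggestion; be gentle.
edges = suggestion_network("why do", depth=2)
print(len(edges))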
Bing[why do…]
Yandex[why do…]
Kagi[why do…]
Swisscows[why do…]
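To spot-check some of these engines programmatically, the sketch below hits Google’s and Bing’s publicly known (but unofficial and undocumented) suggest endpoints, which both return OpenSearch-style JSON. I am not aware of equivalent public endpoints for Kagi or Swisscows, so those I checked in the browser.

# Sketch: compare [why do] predictions across engines via their
# unofficial OpenSearch-style suggest endpoints.
import requests

# (url, fixed params, name of the query parameter)
ENDPOINTS = {
    "google": ("https://suggestqueries.google.com/complete/search",
               {"client": "firefox"}, "q"),
    "bing": ("https://api.bing.com/osjson.aspx", {}, "query"),
}

def compare(text):
    results = {}
    for engine, (url, extra, qparam) in ENDPOINTS.items():
        resp = requests.get(url, params={**extra, qparam: text}, timeout=10)
        resp.raise_for_status()
        # Both endpoints return: [query, [suggestion, ...]]
        results[engine] = resp.json()[1]
    return results

for engine, predictions in compare("why do").items():
    print(engine, predictions[:5])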
Caulfield, M. (2019). Data voids and the Google this ploy: Kalergi plan. https://hapgood.us/2019/04/12/data-voids-and-the-google-this-ploy-kalergi-plan/
Graham, R. (2023). The ethical dimensions of Google autocomplete. Big Data & Society, 10(1). https://doi.org/10.1177/20539517231156518
Robertson, R. E., Jiang, S., Lazer, D., & Wilson, C. (2019). Auditing autocomplete: Suggestion networks and recursive algorithm interrogation. Proceedings of the 10th ACM Conference on Web Science, 235–244. https://doi.org/10.1145/3292522.3326047
Tripodi, F. (2022). The propagandists’ playbook: How conservative elites manipulate search and threaten democracy. Yale University Press. https://yalebooks.yale.edu/book/9780300248944/the-propagandists-playbook/