1. Introduction

    tags: diss
    December 16th, 2022

    “Probably 90% of my job is Googling things,” Christina said. Then she chuckled.

    Amar, seemingly also amused, laughed as he said, “I’ve really never thought about it myself, even though it’s kind of like 90% of my job to just look things up.”

    And Noah, too, described searching the web as a central aspect of doing the work of a data engineer:

    If I found out that some coworkers were doing it a lot less than me, I would actually wonder if they weren’t doing their job as effectively as they could. Which is not to suggest that I’m the greatest googler3 of all time. But I consider it a core of doing my job. You have to be able to search.

    The quotes above are from interviews conducted for this research. Christina, Amar, and Noah are data engineers.4 They work in enterprise software, social media, and media streaming. Like many who write code, and all the data engineers that I talked with across a range of industries, they heavily rely on general-purpose web search engines to do their job.

    I take a close look—through interviews and digital ethnography—at this heavy reliance. How are they able to search? Does this work for them? How does it work for them? How do they learn to search the web as data engineers? Do they really “just google it”? Are their employers OK with this? What can we learn from their success?

    I do find that data engineers have generally been successful in making use of web search at work. I present this not only as a case of data engineers’ heavy reliance, but of a successful heavy reliance. This is not to suggest that they are successful in every search or that every data engineer finds success in their ways of searching. Rather, I find that data engineers’ work practices incorporate and facilitate searching the web, and that this contributes to successful work performance. The practical success I describe, and seek to understand, is not determined by ‘gold standard’ metrics or some objective standpoint, but by how they have embraced web search and present it as useful (or at least a “satisfactory accomplishment” (Thompson, 1967)) and essential, with little complaint. It is success for their purposes: it comes in gradations, is located in practice, and is relative to alternatives (see de Laet & Mol (2000)). My focus has not been on the boundary line between success and failure, but on what data engineers do to make their use of search successful in their own eyes.

    Why look at the use of general-purpose web search engines? People go to search engines to find all sorts of information, from the most mundane or trivial to the deeply significant. People use search engines as they seek to determine whether they are pregnant (Kraschnewski et al., 2014), to navigate unwanted pregnancy (Guendelman et al., 2022; Mejova et al., 2022), or to cope with pregnancy loss (Andalibi & Bowen, 2022; Andalibi & Garcia, 2021). People use search to find and make sense of health information (Mager, 2009), including whether to have their newborns receive the lifesaving vitamin K shot (DiResta, 2018). People use search engines to learn about recycling programs (Haider, 2016) and sustainability (Haider et al., 2022). People use search engines to “fact check” the news (Tripodi, 2018) or to “just” google it (Toff & Nielsen, 2018). People use search engines to critique politicians (Gillespie, 2017) and to find which one to vote for (Mustafaraj et al., 2020). People use search engines for everyday life (Haider & Sundin, 2019) and to learn about things of the most pressing societal relevance (Sundin et al., 2021).

    Search engines shape the web—what we find and what people will write (Introna & Nissenbaum, 2000). But web search is a relatively new technology. People continue to negotiate its role, map out its limitations, and imagine alternative designs and practices. I took up this research in the context of high-profile failures of search engines and of those searching, and amid discussions about how web search, or the people using it, might do better. Search engines promote racist and sexist representations of people (Dave, 2022; Noble, 2018; Urman & Makhortykh, 2022). Search engines sometimes lend credibility to false and hateful beliefs (Mulligan & Griffin, 2018). Search engines are manipulated to promote propaganda (Tripodi, 2022b).

    I should qualify some of my comments above. I use a sociotechnical lens, drawing on many approaches, to see searching as a product of the interplay between the technology and the people. Search engines (including the technology and the people managing and making it) and the people who use them do those things together. Search engines and their uses are shared creations. People at the search engines, the websites, the regulatory agencies and politicians’ offices, the newspapers and the schoolhouse, and the searchers themselves, together, make search engines what they are. People make or allow racist and sexist representations of others. People say, or let others say, “Google told me so!” (Narayanan & De Cremer, 2022). People manipulate others or permit such manipulation. I follow others who point to how “search engines, and Google’s powerful position in particular, are negotiated and stabilized in social practices” (Mager, 2012).

    This is a case study showing and analyzing successful web search practices. Rather than join the extensive literature documenting the ways web search engines are harmful, fail, are mistaken, misused, or abused, I describe where web search is made to work. I do not discuss how people should search the web in the abstract. I discuss how these people situated web search to be useful in their work.

    First, some complicating context. On the face of it, web search for data engineers is a private and solitary activity. Data engineers’ direct web search activity is generally not shared with peers, not monitored or managed by their supervisors in performance evaluations, and not facilitated by special tools that might guide or correct them. The direct web search activity, which I will refer to as the ostensible web search, is not publicized or socialized. Data engineers are not applying their coding skills to their searching tasks. They are not sharing the data about their searching activity. Nor, it seems, are their managers surveilling their searching performance.

    For data engineers, the key ingredients in situating web search practices for success are:

    • admitting searching as a tool appropriate for the work,
    • practices for repairing fruitless searches,
    • supporting query generation and results evaluation, and
    • providing privacy for searching.

    These ingredients promote the situated learning of search and promote, rely on, and align with search as extended.

    What does it mean to say search is extended? Earlier work distinguished between the search results and the results-of-search (Mulligan & Griffin, 2018). The search results are the ranked webpages, the snippets describing them, the advertisements, and the other content returned by the search engine for a query. The results-of-search are not the set of pages, but the results of the search itself. In the search breakdown described in Mulligan & Griffin (2018), where Holocaust-denier pages ranked at the top for the query [did the holocaust happen], the results-of-search included “what searchers experience Google as communicating about those sites” (p. 571). This distinction was developed with reference to Bucher (2017)’s “algorithmic imaginary”—“ways of thinking about what algorithms are, what they should be, how they function and what these imaginations in turn make possible” (pp. 39-40) and her use of Introna (2016)’s argument that “[t]he doing of the algorithm is constituted by the temporal flow of action and is not ‘in’ the particular line of code, as such” (pp. 21-22). We can identify the immediate output of the search algorithms and the design of a search engine results page (SERP). But that is not, nor does it constitute, the result of the search.

    The observation that search is extended follows from the above, a straightforward consequence of using a sociotechnical lens (in my case, using the Mulligan & Nissenbaum (2020) Handoff analytic). To say that searching is extended (and extendable) refers to searching not being a singular or separable moment. The doing, or performance, of a web search includes the impetus to search, the generation of a query, the time and place to type, paste or speak the query into a search box and look at the search results, and the evaluation of results that continues long after the clicking, scrolling, and reading is finished. Looking at search as extended provides exploratory and explanatory advantages, as I will show.

    But there are some problems. While data engineers’ use of web search is central to the work and is generally successful, that does not mean that such searching has been solved5. The key problem is that some of the same factors that drive the successful use of search in data engineering work (namely, how search is admitted, how search failures are repaired, and how privacy for searching is provided) are perverted in some environments. In some companies, or pockets within them, the acceptance of search is contorted to produce environments where data engineers are hesitant to openly ask questions and find themselves struggling and flailing about, searching again and again, in fear of being misjudged or mistreated for asking a question. The privacy that protects space to learn and stretch one’s knowledge is expanded to block effective collaboration within the company. These negative effects particularly shape the experience of people already marginalized within technology work. The penchant for searching and privacy can become excessive, sometimes putting newcomers and women under suspicion both for not searching enough and for having to search all the time. This is an important part of the story, and the failure here is seen more keenly when the success is clearly explained.

    Background

    Here I will share the origins of this research while also introducing some of the research shaping my questions and my understanding of the importance of this topic.

    In a class in the fall of 2018, Professor Jenna Burrell talked about a moment of surprise that led her to consider and look for all the ways you could share a phone6. This led me to consider exploring how people share or share about search. I could look for the competing articulations of search, the claims of definition and legitimacy, in the “public dialogue” (Gillespie, 2014), in how they are “articulated, experienced and contested in the public domain” (Bucher, 2017, p. 40).

    Consequently, for a project in that class I looked, using Twitter search, for people talking about web search on Twitter. I stumbled on numerous accounts of people in coding roles discussing their heavy reliance on web search in their work. Many proclaimed (though sometimes nervously) the central role that general-purpose web searching played in their work. I had been looking for examples of people talking about and sharing searching, and I was struck. I wondered: if they seemed to so successfully incorporate general-purpose web search into their work, might I learn something by looking closer?

    Could I look at how these people in coding roles talked about and shared search to better understand what searching is and could be? Do they “just google it”? How does knowledge of the mechanisms of search inform their searching practices? How is responsibility assigned? I was initially interested in how individuals and organizations addressed the epistemic risks involved in such a reliance on web search7. Here was a group of people seemingly heavily reliant on web search. This same group might also have a better awareness of some of the difficulties in web search, and some of them might be able to marshal a range of responses.

    In imagining a marshaling of responses, I thought that people who wrote code for their work tasks might also write code to facilitate their searching-for-work tasks. Would I find examples of innovation from “lead users” grafted onto web search engines to improve some aspect, like those discussed by von Hippel (1988)? Was the technology flexible enough to permit such modifications, like those discussed in Leonardi (2011)?

    Starting with that final project in the winter of 2018, four years ago, I have explored the broad contours of these questions about people who write code searching the web. All the while I was also coding myself. I have written Python code for personal or school projects since applying to the I School in 2014. I also taught the I School’s Summer Python Boot Camp for incoming graduate students for three years (including to seasoned programmers learning a new language). This document itself was produced with the aid of several scripting tools I’ve written and many code-related web searches.

    To make the project tractable, I narrowed my focus to a subset of those who write code for work: data engineers.8 I selected this site because it seemed likely to include people who were relatively technically sophisticated, and because it appeared to be a particularly dynamic field requiring a significant amount of learning on the fly, and so a heavy reliance on search. Data engineers are also involved in constructing tools to control, replace, or surveil other people, practices, and tools. So they would likely be able to both refashion their tools and practices around search—if they saw that as beneficial—and be attentive if tools to control, replace, or surveil were directed at them.

    I saw them as perhaps more likely to have sophisticated understandings of the underlying technologies and an appreciation for the uses and misuses of data and the automation built on or around it. This led me to think it may be a particularly valuable site for considering the role of technical literacy.

    The COVID-19 pandemic constrained my research approaches. I developed a plan to build on my prior work (informed by digital ethnography) with a study focused on in-depth interviewing of data engineers. I submitted my prospectus and passed my qualifying exam soon after my first son was born and took the next semester off. Then, in the summer of 2021, I started interviewing data engineers.

    Overview

    In Methods and Methodologies, I first introduce my two methodological frames for better understanding data engineers’ use of web search. I describe my use of the Handoff analytic (Mulligan & Nissenbaum, 2020; Goldenfein et al., 2020) and how it focused my attention on the larger and longer configurations of the sociotechnical systems—the extensions—of data engineers’ web search practices and on the various components and their modes of engagement, with particular attention to how the engagements between components reveal, or make salient, the practical achievements of the data engineers. Then I discuss the legitimate peripheral participation (LPP) analytic (Lave & Wenger, 1991) and how I use it to identify and understand the data engineering practices of situated learning. These frames, or lenses, help me recognize the role of various social and technical factors in the construction of successful uses of web search. I then discuss my methods. I describe how I developed this research site as multi-sited and networked. This methods section also covers my document analysis, interviewing, sampling and recruitment, coding and memoing, and my attention to surprise and use of member checks. I close this chapter with reflections on my positionality and study limitations.

    There are then four analytical chapters, discussed below, followed by a conclusion.

    Admitting searching

    The first chapter discusses how data engineers come to learn how to search for work, with an application of the LPP analytic (Lave & Wenger, 1991). There are two core ideas. The first is a finding, an empirical observation that there is limited explicit instruction, discussion, demonstration, or collaboration in the moments of web search in data engineering. Given the minimal instruction, I looked for “search talk,” where data engineers might discuss their search queries, their evaluation of search results, or how they reformulate queries or follow links in pursuit of an answer. I do not find much “search talk” for new data engineers to learn from; rather, questions about it revealed that they see web search as a sensitive topic.

    I then describe a key type of talk about search that I do find: search confessions. This is the second core idea of this chapter, an argument. Search confessions are the self-deprecating (yet proud) or hyperbolic remarks data engineers make about their extensive reliance on web search and their web searching practices. I describe how these confessions legitimate their searching, shape norms of use, and direct others to also rely on web search. The informal nature of this legitimation, though, does not fully delineate appropriate use or the limitations of using web search for work. Search confessions also do not fully address perceptions that such reliance is a sign of weakness or is otherwise shameful. This limits the full inclusion of search into the workplace and affects organizations’ abilities to build inclusive learning environments.

    Extending searching

    Next, I turn to look at how the two analytic frames help show how the occupational, professional, and technical components of the work practices of data engineers effectively extend web searching to include activity well before and well after the time the data engineer is in the search bar or on the search results page. This claim that a sociotechnical practice is extended is not unique, but it undergirds the two core arguments of this chapter. First, it is not that the data engineers know more about the technical mechanisms of search, but that their work tools, practices, and domain expertise make search work for their work purposes. This stands in contrast to claims that appear throughout the literature on search engines that view individual ignorance of search mechanisms as contributing to failed searches and search literacy as a necessary, if independently insufficient, path towards mitigating search failures.

    Second, the occupational, professional, and technical components, as extensions of web search, provide sites and activities for new data engineers to gradually increase participation in the search work of data engineers. Even though the search activity in the search box and on the search results page itself is not shared, the new data engineer can participate in the larger work practices that scaffold web search.

    I find this extension in looking at how data engineers engage in two core aspects of web search: (1) where their queries come from and (2) how they evaluate search results. The occupational, professional, and technical components provide scaffolding, or helpful guiding structure, for their web searching in both aspects, and this scaffolding significantly supplements their domain knowledge. I show how data engineers find some of their queries in the tools they use (in the names of the functions they are struggling with, or in the exception messages returned when there is an error). While they have learned to quickly evaluate which links on a results page are likely authoritative or helpful, they also engage in collaborative evaluation of search results with both their systems (running a test of the changes suggested by the search, building prototypes, automated testing) and their peers (meetings and code review processes). In a supplementary finding from looking at how the components of the data engineer’s work interact, I identify how data engineers’ larger work practices also provide ways of decoupling data engineering performance from potential issues introduced through web search, in cases where their search evaluation may fail. I also note how this success in searching is not all-encompassing: it is limited to only some types of system qualities and does not include topics outside their direct remit (like ethical or legal aspects of the systems they are designing).
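
    To make the query-generation point concrete, here is a minimal hypothetical sketch (the library, error, and query are illustrative, not drawn from the interviews) of how an exception message can become, nearly verbatim, a web search query:

        import json

        # A malformed payload from an upstream system (hypothetical).
        raw = '{"user_id": 1, "ts": "2022-01-01"'

        try:
            json.loads(raw)
        except json.JSONDecodeError as err:
            # The exception message itself is a ready-made query, often pasted
            # near-verbatim into a general-purpose search engine.
            query = f"python json {err.msg}"
            print(query)  # prints: python json Expecting ',' delimiter

    Evaluating what such a search turns up then often happens back in the same tooling, for instance by running the suggested change against a test.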

    Repairing searching

    Data engineers are not always successful in their searches. I look at how data engineers repair failed searches. For the purposes of this chapter, failed searches are those where the searcher does not find workable answers to their questions. Data engineers have developed practices, similar to those found elsewhere in coding work, for addressing such search failures. Building on the two prior chapters, the first core argument of this chapter is that these practices provide both additional talk about search, further legitimating it within their work, and opportunities for learning data engineers to participate in extensions of web searching.

    While not focused on the search queries themselves or the moments of searching, the asking and answering of questions among colleagues provides a key opportunity for data engineers to learn about how their colleagues search, including sometimes when, where, and for how long. The repair attempts involve carefully packaging questions, and answering them, in ways that are sensitive to respecting each other’s knowledge and place as experts in their field. Here I analyze these interactions around packaging questions, performing competence, and prompting renewed searching. This repair work, in addition to filling in where web search is not working for them, also provides teaching moments for discussing problem solving and searching writ large (as well as serving as a small site for coworkers to coordinate around what they each do and do not know).

    The second core argument of this chapter is that the search repair practices, as a whole, constitute the articulation work necessary to support such heavy reliance on web search. The search repair practices are a way to decouple from web searching itself, providing repairs necessary to retain such reliance.

    Owning searching

    The web search activity of data engineers remains solitary and private. In fact, the data engineers’ managers and employers do not seem to manage the search activity itself, despite its importance to the data engineers’ work and the extensive behavior trace data available. I did not find tracking of search records or technology-enabled management of the search activity. Why? In this chapter I develop a sustained argument building on two core findings. First, individual data engineers identify themselves as responsible for their web searching. Second, management has not pursued a strategy of technocratizing search, or the intentional application of technique to influence search activities themselves.

    I present data engineers describing their searching as solitary, speedy, and secret. Search is presented as done alone, apart from colleagues, and on one’s own, apart from the help of others. Search is hoped to provide a faster solution to problems than the alternatives. Search activity itself is also done in secret and kept secret. I recount the lack of technocratization of search in the work of data engineers and discuss aspects of technocratization elsewhere in search. I then review a range of literature to make further sense of this, looking at the design and articulation of search engines as single-user tools made to protect privacies and at the history of “rugged individualism” in coding-related fields (Ensmenger, 2015), and discussing norms of both generalized reciprocity and self-reliance in open source communities (norms and communities that overlap with those of many data engineers).

    I build a sustained argument from the analysis of the secrecy and the firm’s seeming lack of ownership of the data engineers’ searching activity. To do so I appeal both to organizational theory related to learning and to research on the learning benefits of privacy. Organizations have distributed search responsibilities to individuals (Girard & Stark, 2002; Stark, 2009), and both “skilling up” and “keeping up” are the responsibilities of individual workers (Avnoon, 2021; Kotamraju, 2002; Neff, 2012). I analyze this as a strategic choice to pursue flexibility in the face of uncertainty (perhaps made mimetically (DiMaggio & Powell, 1983)9). I argue this pursuit of flexibility and the associated responsibility creates a feedback loop of solitary and secretive searching that, while perhaps generally successful, limits the learning of the organization, particularly the sort of inclusive learning necessary to welcome newcomers who are different from those data engineers in positions of power.

    So what?

    The conclusion looks at the eight core findings and arguments from the preceding chapters and highlights two further arguments developed across the chapters. While the general story of this research is an examination of the factors that support successful use of web search by data engineers, around each of those factors lie two risks: risks to organizational search performance and risks to an inclusive learning environment. First, while generally successful for their purposes, the effectiveness of data engineering search practices is limited by the ambiguity of the search confessions, the taken-for-grantedness of web search and of the occupational, professional, and technical components supporting it, and firms’ hands-off approach to both search repair and responsibility for searching. Second, the silences and secrecies around search and the way search is framed as an individual responsibility can produce an unwelcoming learning environment for new data engineers, limiting who makes effective use of web search and who learns to fully participate as a data engineer.

    In the conclusion I develop reflections for practice, design, and research built on key takeaways and make an argument for why this research matters. The dissertation closes with an appeal for reimagining search. I suggest that if we can better see how searching the web is already shared, we might find ways to share it better.

    Bibliography

    Andalibi, N., & Bowen, K. (2022). Internet-based information behavior after pregnancy loss: Interview study. JMIR Form Res, 6(3), e32640. https://doi.org/10.2196/32640 [andalibi2022internet]

    Andalibi, N., & Garcia, P. (2021). Sensemaking and coping after pregnancy loss. Proc. ACM Hum.-Comput. Interact., 5(CSCW1), 1–32. https://doi.org/10.1145/3449201 [andalabi2021sensemaking]

    Avnoon, N. (2021). Data scientists’ identity work: Omnivorous symbolic boundaries in skills acquisition. Work, Employment and Society, 0(0), 0950017020977306. https://doi.org/10.1177/0950017020977306 [avnoon2021data]

    Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44. https://doi.org/10.1080/1369118X.2016.1154086 [bucher2017algorithmic]

    Burrell, J. (2010). Evaluating shared access: Social equality and the circulation of mobile phones in rural Uganda. Journal of Computer-Mediated Communication, 15(2), 230–250. https://doi.org/10.1111/j.1083-6101.2010.01518.x [burrell2010social]

    Dear, P. (2001). Science studies as epistemography. The One Culture, 128–141. [dear2001science]

    de Laet, M., & Mol, A. (2000). The Zimbabwe Bush Pump: Mechanics of a fluid technology. Social Studies of Science, 30(2), 225–263. https://doi.org/10.1177/030631200030002002 [delaet2000zimbabwe]

    DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 147–160. [dimaggio1983iron]

    DiResta, R. (2018). The complexity of simply searching for medical advice. https://www.wired.com/story/the-complexity-of-simply-searching-for-medical-advice/ [diresta2018complexity]

    Ensmenger, N. (2015). “Beards, sandals, and other signs of rugged individualism”: Masculine culture within the computing professions. Osiris, 30(1), 38–65. http://www.jstor.org/stable/10.1086/682955 [ensmenger2015beards]

    Gillespie, T. (2014). The relevance of algorithms (T. Gillespie, P. J. Boczkowski, & K. A. Foot, Eds.; pp. 167–193). The MIT Press. https://doi.org/10.7551/mitpress%2F9780262525374.003.0009 [gillespie2014relevance]

    Gillespie, T. (2017). Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem. Information, Communication & Society, 20(1), 63–80. https://doi.org/10.1080/1369118X.2016.1199721 [gillespie2017algorithmically]

    Girard, M., & Stark, D. (2002). Distributing intelligence and organizing diversity in new-media projects. Environment and Planning A, 34(11), 1927–1949. [girard2002distributing]

    Goldenfein, J., Mulligan, D. K., Nissenbaum, H., & Ju, W. (2020). Through the handoff lens: Competing visions of autonomous futures. Berkeley Technology Law Journal, 35, 835. https://doi.org/10.15779/Z38CR5ND0J [goldenfein2020through]

    Guendelman, S., Pleasants, E., Cheshire, C., & Kong, A. (2022). Exploring Google searches for out-of-clinic medication abortion in the United States during 2020: Infodemiology approach using multiple samples. JMIR Infodemiology, 2(1), e33184. https://doi.org/10.2196/33184 [guendelman2022exploring]

    Haider, J. (2016). The structuring of information through search: Sorting waste with Google. Aslib Journal of Information Management, 68(4), 390–406. https://doi.org/10.1108/AJIM-12-2015-0189 [haider2016structuring]

    Haider, J., Rödl, M., & Joosse, S. (2022). Algorithmically embodied emissions: The environmental harm of everyday life information in digital culture. IR, 27. https://doi.org/10.47989/colis2224 [haider2022algorithmically]

    Haider, J., & Sundin, O. (2019). Invisible search and online search engines: The ubiquity of search in everyday life. Routledge. https://doi.org/10.4324/9780429448546 [haider2019invisible]

    von Hippel, E. (1988). The sources of innovation. Oxford University Press. [vonhippel1988sources]

    Introna, L. D. (2016). Algorithms, governance, and governmentality. Science, Technology, & Human Values, 41(1), 17–49. https://doi.org/10.1177/0162243915587360 [introna2016algorithms]

    Introna, L. D., & Nissenbaum, H. (2000). Shaping the web: Why the politics of search engines matters. The Information Society, 16(3), 169–185. https://doi.org/10.1080/01972240050133634 [introna2000shaping]

    Kotamraju, N. P. (2002). Keeping up: Web design skill and the reinvented worker. Information, Communication & Society, 5(1), 1–26. https://doi.org/10.1080/13691180110117631 [kotamraju2002keeping]

    Kraschnewski, J. L., Chuang, C. H., Poole, E. S., Peyton, T., Blubaugh, I., Pauli, J., Feher, A., & Reddy, M. (2014). Paging “Dr. Google”: Does technology fill the gap created by the prenatal care visit structure? Qualitative focus group study with pregnant women. J Med Internet Res, 16(6), e147. https://doi.org/10.2196/jmir.3385 [kraschnewski2014paging]

    Kruschwitz, U., Hull, C., & others. (2017). Searching the enterprise (Vol. 11). Now Publishers. [kruschwitz2017searching]

    Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press. https://www.cambridge.org/highereducation/books/situated-learning/6915ABD21C8E4619F750A4D4ACA616CD#overview [lave1991situated]

    Leonardi, P. M. (2011). When flexible routines meet flexible technologies: Affordance, constraint, and the imbrication of human and material agencies. MIS Quarterly, 35(1), 147–167. http://www.jstor.org/stable/23043493 [leonardi2011flexible]

    Mager, A. (2009). Mediated health: Sociotechnical practices of providing and using online health information. New Media & Society, 11(7), 1123–1142. https://doi.org/10.1177/1461444809341700 [mager2009mediated]

    Mager, A. (2012). Algorithmic ideology. Information, Communication & Society, 15(5), 769–787. https://doi.org/10.1080/1369118X.2012.676056 [mager2012algorithmic]

    Mejova, Y., Gracyk, T., & Robertson, R. (2022). Googling for abortion: Search engine mediation of abortion accessibility in the United States. JQD, 2. https://doi.org/10.51685/jqd.2022.007 [mejova2022googling]

    Mulligan, D. K., & Griffin, D. (2018). Rescripting search to respect the right to truth. The Georgetown Law Technology Review, 2(2), 557–584. https://georgetownlawtechreview.org/rescripting-search-to-respect-the-right-to-truth/GLTR-07-2018/ [mulligan2018rescripting]

    Mulligan, D. K., & Nissenbaum, H. (2020). The concept of handoff as a model for ethical analysis and design. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.15 [mulligan2020concept]

    Mustafaraj, E., Lurie, E., & Devine, C. (2020). The case for voter-centered audits of search engines during political elections. FAT* ’20. [mustafaraj2020case]

    Narayanan, D., & De Cremer, D. (2022). “Google told me so!” On the bent testimony of search engine algorithms. Philos. Technol., 35(2), E4512. https://doi.org/10.1007/s13347-022-00521-7 [narayanan2022google]

    Neff, G. (2012). Venture labor: Work and the burden of risk in innovative industries. MIT Press. https://mitpress.mit.edu/books/venture-labor [neff2012venture]

    Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://nyupress.org/9781479837243/algorithms-of-oppression/ [noble2018algorithms]

    Stark, D. (2009). The sense of dissonance: Accounts of worth in economic life. Princeton University Press. [stark2009sense]

    Sundin, O., Lewandowski, D., & Haider, J. (2021). Whose relevance? Web search engines as multisided relevance machines. Journal of the Association for Information Science and Technology. https://doi.org/10.1002/asi.24570 [sundin2021relevance]

    Thompson, J. D. (1967). Organizations in action: Social science bases of administrative theory (1st ed.). McGraw-Hill. [thompson1967organizations]

    Toff, B., & Nielsen, R. K. (2018). “I just Google it”: Folk theories of distributed discovery. Journal of Communication, 68(3), 636–657. https://doi.org/10.1093/joc/jqy009 [toff2018just]

    Tripodi, F. (2018). Searching for alternative facts: Analyzing scriptural inference in conservative news practices. Data & Society. https://datasociety.net/output/searching-for-alternative-facts/ [tripodi2018searching]

    Tripodi, F. (2022b). The propagandists’ playbook: How conservative elites manipulate search and threaten democracy. Yale University Press. https://yalebooks.yale.edu/book/9780300248944/the-propagandists-playbook/ [tripodi2022propagandists]

    Urman, A., & Makhortykh, M. (2022). “Foreign beauties want to meet you”: The sexualization of women in Google’s organic and sponsored text search results. New Media & Society, 0(0), 14614448221099536. https://doi.org/10.1177/14614448221099536 [urman2022foreign]


    3. While “Googler” is sometimes used to refer to Google employees, it is also commonly used, rendered here in lowercase, to refer to someone who uses Google or other search engines. ↩︎

    4. All names of my research participants are pseudonyms. ↩︎

    5. Kruschwitz et al. (2017) use the “solved” language in the opening line of the abstract to their book on enterprise search: “Search has become ubiquitous but that does not mean that search has been solved.” ↩︎

    6. This surprise led to Burrell (2010). ↩︎

    7. The epistemic risk is meant to convey risk in relation to beliefs or ways of believing, not prescriptivist epistemology (see Dear (2001) on epistemography). The phrasing above might be better termed the risks in the ways of knowing: the costs and consequences of different cognitive practices entangled in search. My initial interest was not in finding, or determining whether one found, truth. I was focused on the larger effects from the various ways in which search is used by people to come to know or to believe or to perform a knowledge or skill. ↩︎

    8. This choice is discussed further in [Methods and Methodology]. ↩︎

    9. As DiMaggio & Powell (1983) argue (p. 151): “Modeling” after, or mimicry of, other organizations “is a response to uncertainty.” ↩︎