Here is the text of my prospectus for applying to the 2021 Information Operations Fellowship with Twitter’s Trust & Safety Team. (edited for a missing citation, modified & expanded presentation of citations, links to personal websites added, footnote re META added)
I recalled this while thinking through the approach to full text search taken by Mastodon. See my post from 2022-11-05: Searching Mastodon?
In light of Elon Musk’s recent tweet, analysis and discussion of how Twitter search might be used for harassment (and of how search might be resisted, or friction added to searching) are needed all the more: 2022-11-05 15:31
Search within Twitter reminds me of Infoseek in ‘98! That will also get a lot better pronto.
Who is left to build guardrails to direct the use of Twitter search? Who is left to monitor and mitigate the harms?
See the conversation under Taylor Lorenz’s (@taylorlorenz@mastodon.social) recent comment re search and discovery on Mastodon. The desire to find what one wants seems to propel people to bowl right over what appear to be deliberately established community protocols and norms that also consider the desires of those wanting to be found (or not).
[I’ll confess I made some lighthearted comments about Infoseek - 1, 2]
Applicants should submit, along with standard application materials, a 1-2 page prospectus that lays out the scope and objectives of a proposed investigative project or projects. The prospectus should include:
- A short description of the proposed project and motivations behind it
- Project objectives and optimal outcomes for the applicant
- Expectations around materials, resources, and data access needed to complete the work
- Demonstration of any preparation or pre-work already completed in relation to the project or projects
This project will examine behaviors around Twitter Search that may be related to harmful or empowering uses of the tool. Using a two-pronged approach, this project will explore both mentions of searching Twitter in tweets and search behavior on Twitter Search itself. Mentions of searching Twitter in tweets will be inductively coded to better understand how Twitter Search is used (and resisted) to enable or mitigate harmful behavior. Twitter Search log data will be analyzed for particular subsets of users in order to connect patterns of search behavior with other platform behavior.
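As a rough illustration of the first prong, here is a minimal sketch of a keyword pass that could seed manual inductive coding of tweets mentioning Twitter search. The codebook categories, keywords, and example tweets are all hypothetical assumptions for illustration, not the project’s actual coding scheme:

```python
# Illustrative first pass: flag candidate codes for tweets that mention
# searching Twitter, to be refined by manual inductive coding.
# Categories, keywords, and example tweets are hypothetical.

CODEBOOK = {
    "harm": ["dox", "harass", "stalk"],
    "resistance": ["delete", "lock", "private", "scrub"],
    "empowerment": ["found", "learned", "connect"],
}

def candidate_codes(tweet_text):
    """Return the codebook categories whose keywords appear in the tweet."""
    text = tweet_text.lower()
    return sorted(
        category
        for category, keywords in CODEBOOK.items()
        if any(keyword in text for keyword in keywords)
    )

tweets = [
    "I searched Twitter for my name and found people harassing me",
    "Locked my account so Twitter search can't surface old tweets",
]
for t in tweets:
    print(candidate_codes(t))
```

A pass like this would only surface candidates; the inductive coding itself would revise the categories as new themes emerge from the tweets.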
In a 2019 Twitter conversation-interview, Jack Dorsey told Kara Swisher that search was one of the four spaces where abuse happens the most on Twitter.1
Examining the use and mentions of Twitter Search is one avenue for exploring where harmful activity might be disrupted and where legitimate user control might be expanded. Such an examination may suggest design or documentation changes that “improve the integrity, relevancy, and authenticity of the public conversation”.2 That line is taken from Jutta Williams’s3 thread announcing Dr. Sarah Roberts’s consulting at Twitter. When Williams announced that Roberts would be consulting at Twitter this summer, she shared that Roberts would “research the role of user agency in algorithmic decision making and how communities might be best benefited by more choice.”4 That aim resonated with research I’ve previously conducted with colleagues about control on Twitter (discussed below).
Beyond the choice of what to tweet, the search tool on Twitter is probably the aspect of the interface that presents the most expansive set of possible choices to people. The tool can be used to learn various things, connect with others, and facilitate harmful or empowering behavior.
Dr. Sarita Schoenebeck and Lindsay Blackwell have an article forthcoming in the Yale Journal of Law & Technology that “propose[s] several key shifts for social media governance to better recognize and repair harm”. One suggestion is that a shift of focus from content to behavior “will allow social media platforms to become more proactive in their governance, implementing interventions that discourage harmful behaviors before they manifest on the platform.”5
Closely exploring discussions around and practices of Twitter search in relation to both empowerment and harm may reveal paths towards concrete interventions.
This project intersects with two of the Twitter Trust & Safety focus areas for 2021:
- Developing understanding of emergent harm networks on Twitter - driving signals development and approaches to addressing them through policy and enforcement
- Technical analysis to better understand how our products are abused and exploited by coordinated actor groups.
The project also broadens the scope beyond those causing the harm, spam, manipulation, and abuse, to examine how users of Twitter take action, on their own or in community, to counteract those actors.
Familiarity with the operations of a company exploring these questions will greatly improve my ability to conduct academic research in the public interest and advocate for effective design or policy changes. It would also help me better prepare my students for similar work in industry or government.
My PhD research has looked at tradeoffs in assignments of responsibility related to the use of algorithmic tools, particularly search. My co-authored 2018 law review article looked at the Holocaust denial search results on Google—revealing differences in interpretations of the search results.6 We argued that Google had an obligation to respect the right to truth regarding gross violations of human rights. In conclusion, we showed that close examination of how people use search reveals space for action that does not run contrary to the company’s longstanding values and engineering commitments. Google could act in the seams to equip people to search more responsibly.
My co-authored 2019 paper looked at how people discussed the “Twitter algorithm” on Twitter (using the Twitter API to collect tweets with that phrase or variants).7 We focused on one theme we identified: the value and utility of control according to platform users. Users discussed many ways in which they exercised control against or with “the algorithm” to improve their experience on the platform. In the end, like the announcement from Williams mentioned above, “We argue[d] for the need [ . . . ] to consider support for users who wish to enact their own collective choices.”
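The kind of collection described above can be sketched against the Twitter API v2 recent-search endpoint. The endpoint path and query operators below are the documented v2 ones, but the phrase variants, function names, and bearer-token placeholder are illustrative assumptions; the 2019 study itself used an earlier version of the API:

```python
import json
import urllib.parse
import urllib.request

# Phrase variants in the spirit of the 2019 study (illustrative, not the
# study's exact list).
VARIANTS = ['"twitter algorithm"', '"the algorithm" twitter']

def build_query(variants):
    """OR together the phrase variants and exclude retweets."""
    return "(" + " OR ".join(variants) + ") -is:retweet"

def search_recent(bearer_token, query, max_results=10):
    """Call the Twitter API v2 recent-search endpoint (tweets from the last 7 days)."""
    params = urllib.parse.urlencode({"query": query, "max_results": max_results})
    req = urllib.request.Request(
        "https://api.twitter.com/2/tweets/search/recent?" + params,
        headers={"Authorization": "Bearer " + bearer_token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("data", [])

query = build_query(VARIANTS)
# tweets = search_recent("YOUR_BEARER_TOKEN", query)  # requires API credentials
```

Recent search only covers the previous seven days; a longitudinal collection like the study’s would need the full-archive endpoint or sustained periodic collection.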
Though it was not the focus of the study, we noted elements related to the use of Twitter Search.
Finally, my dissertation research, currently in progress, looks at the use of web search by data engineers at work. Beyond shared questions around how responsibility is assigned in relation to search and how people understand platform ecosystems, the core connection between that work and this proposal is that both are premised on the belief that there is value in looking closely at the beliefs and behaviors implicated in uses of search tools. I suggest exploring search practices as a lens or lever for examining and shaping the broader experience.
Kara Swisher (@karaswisher@twitter.com): 2019-02-12 14:15
Ok but I really want to drill down on HOW. How much downside are you willing to tolerate to balance the good that Twitter can provide? Be specific. #KaraJack
Jack Dorsey (@jack@twitter.com): 2019-02-12 14:19
This is exactly the balance we have to think deeply about. But in doing so, we have to look at how the product works. And where abuse happens the most: replies, mentions, search, and trends. Those are the shared spaces people take advantage of #karajack
Jutta Williams (@williams_jutta@twitter.com): 2021-06-23 09:36
By giving people more choice, we aim to improve the integrity, relevancy, and authenticity of the public conversation. This is the driving hypothesis we shared in April that I’m really excited to start exploring alongside @ubiquity75 [Sarah Roberts]. More to come!
[Added footnote on 2022-11-05: Williams was at the time the Staff Product Manager for Twitter’s META (ML Ethics, Transparency & Accountability) team. The team, founded by Ari Font and led by Rumman Chowdhury, was profiled on the same day as the tweet above (June 23rd, 2021) by Anna Kramer in Protocol: “How Twitter hired tech’s biggest critics to build ethical AI”. All but one member of META were laid off on November 4th, 2022 (ref: Luca Belli). Prior to that instantiation, Ayşe Naz Erkan and Luca Belli had created the team’s initial version. (refs: Ariadna Font Llitjós, Meg Young, Nick Matheson, Kristian Lum)] ↩
Jutta Williams (@williams_jutta@twitter.com): 2021-06-23 09:35
She’ll research the role of user agency in algorithmic decision making and how communities might be best benefited by more choice. She’ll work with people on Twitter, researchers, regulators, consumer advocates, and our team to help us figure out where to focus our work.
Schoenebeck & Blackwell’s “Reimagining Social Media Governance: Harm, Accountability, and Repair” (2021), in Yale Journal of Law & Technology. link [schoenebeck2021reimagining] ↩
Mulligan & Griffin’s “Rescripting Search to Respect the Right to Truth” (2018), in The Georgetown Law Technology Review. link [mulligan2018rescripting] ↩
Burrell et al.’s “When Users Control the Algorithms: Values Expressed in Practices on Twitter” (2019), at CSCW. https://doi.org/10.1145/3359240 [burrell2019control] ↩
van der Nagel’s “‘Networks that work too well’: intervening in algorithmic connections” (2018), in Media International Australia. https://doi.org/10.1177/1329878X18783002 [nagel2018networks] ↩