Can you write about examples of LLM hallucination without poisoning the web?

    This is a standing query / open question.

    Can you write about examples of LLM “hallucination” without poisoning the web? How do you discuss misinformation without spreading it? How do you link to the outputs of chatbots and generative search engines without deceiving folks?

    My research on the lack of diligence from major search engines has gone from examining the unmarked results from white supremacist Holocaust-denier websites that Google ranked at the top for Carole Cadwalladr’s [did the holocaust happen] query (Mulligan & Griffin, 2018) to now looking at Bing, Google, and others not marking the top rankings they give to results from chatbots and generative search engines (from melting eggs to imaginary Claude Shannons).

    Note: I do not think search engines should de-index this sort of content. I think people can learn from seeing what others do with these new tools, whether that means showing what to do, what not to do, or why these tools should be refused or restricted in certain situations. But perhaps search engines should provide more notice/labeling/marking?

    Round-up: