Claude 2 on my Claude Shannon hallucination test

    Last updated: October 6th, 2023

    Added September 28, 2023 11:18 PM (PDT)

    It appears that my attempts to stop search systems from adopting these hallucinated claims have failed. I shared screenshots on Twitter of various search systems, newly queried with my Claude Shannon hallucination test, each either highlighting an LLM response, returning multiple LLM response pages in the results, or citing my own page as evidence for such a paper. I ran those tests after briefly testing the newly released Cohere RAG.

    Added October 06, 2023 10:59 AM (PDT)

    An October 5 article by Will Knight in Wired discusses my Claude Shannon “hallucination” test: Chatbot Hallucinations Are Poisoning Web Search

    A round-up here: Can you write about examples of LLM hallucination without poisoning the web?

    Reminder: I think “hallucination” of the sort I will show below is largely addressable with current technology. But to guide our practice, it is useful to remind ourselves where it has not yet been addressed.
    @AnthropicAI via Twitter on Jul 11, 2023

    Introducing Claude 2! Our latest model has improved performance in coding, math and reasoning. It can produce longer responses, and is available in a new public-facing beta website at http://claude.ai in the US and UK.