Human-centered Evaluation and Auditing of Language Models Workshop

    January 30th, 2024
    By Ziang Xiao, Wesley Hanwen Deng, Michelle S. Lam, Motahhare Eslami, Juho Kim, Mina Lee, and Q. Vera Liao, December 20, 2023

    This workshop aims to address the current “evaluation crisis” in LLM research and practice by bringing together HCI and AI researchers and practitioners to rethink LLM evaluation and auditing from a human-centered perspective. Recent advances in Large Language Models (LLMs) have significantly impacted numerous real-world applications and will impact many more. However, these models also pose significant risks to individuals and society. To mitigate these risks and guide future model development, responsible evaluation and auditing of LLMs are essential.