"We also asked participants about their ChatGPT expertise"

    August 21st, 2023

    Kabir et al. (2023, p. 5)

    The participants were recruited by word of mouth. Participants self-reported their expertise by answering multiple-choice questions with 5 options— Novice, Beginner, Competent, Proficient, and Expert. Regarding their programming expertise, 8 participants were proficient, 3 were competent, and 1 was beginner. We also asked participants about their ChatGPT expertise—3 participants were proficient, 6 participants were competent, 1 participant was a beginner, and 2 were novices. Additionally, we asked participants about their familiarity with SO and ChatGPT by asking how often they use them. They self-reported this by answering multiple-choice questions with 5 options— Never, Seldom, Some of the Time, Very Often, and All the Time. For SO, 4 participants answered all the time, 5 answered very often, 2 answered some of the time, and 1 answered seldom. For ChatGPT, 3 answered very often, 3 answered some of the time, 2 answered seldom, and 4 answered never. [emphasis added; italics in original]

    References

    Kabir, S., Udo-Imeh, D. N., Kou, B., & Zhang, T. (2023). Who answers it better? An in-depth analysis of ChatGPT and Stack Overflow answers to software engineering questions. http://arxiv.org/abs/2308.02312