
    "And what matters is if it works."

    This is a comment on Kabir et al. (2023), following a theme in my research. @NektariosAI is replying to @GaryMarcus, who had written: “the study still confirms something I (and others) have been saying: people mistake the grammaticality etc of LLMs for truth.”
    @NektariosAI via Twitter on Aug 10, 2023

    I understand. But when it comes to coding, if it’s not true, it most likely won’t work. And what matters is if it works. Only a bad programmer will accept the answer without testing it. You may need a few rounds of prompting to get to the right answer and often it knows how to correct itself. It will also suggest other more efficient approaches.
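    The tweet's claim, that a wrong answer "most likely won't work" and that testing catches it, can be illustrated with a minimal, hypothetical sketch. The `median` function below is my own invented stand-in for a plausible-looking but subtly wrong LLM answer; it is not from Kabir et al. or the tweet.

```python
# Hypothetical example: a plausible-looking answer an LLM might give
# for "compute the median of a list". It reads as grammatical,
# idiomatic code, but it is subtly wrong.
def median(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]  # correct only for odd-length lists


# A quick check, the kind of test the tweet says a good programmer
# runs before accepting the answer.
def looks_correct(fn):
    try:
        # Odd length works: median of [1, 3, 2] is 2.
        # Even length fails: median of [1, 2, 3, 4] is 2.5, not 3.
        return fn([1, 3, 2]) == 2 and fn([1, 2, 3, 4]) == 2.5
    except Exception:
        return False


print(looks_correct(median))  # False: the answer isn't true, so it doesn't work
```

    The point is the asymmetry the tweet relies on: fluent-sounding code that is not true tends to fail a cheap, concrete test, so "does it work" is checkable in a way that prose claims are not.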


    Kabir, S., Udo-Imeh, D. N., Kou, B., & Zhang, T. (2023). Who answers it better? An in-depth analysis of ChatGPT and Stack Overflow answers to software engineering questions. http://arxiv.org/abs/2308.02312

    Widder, D. G., Nafus, D., Dabbish, L., & Herbsleb, J. D. (2022, June). Limits and possibilities for “ethical AI” in open source: A study of deepfakes. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. https://davidwidder.me/files/widder-ossdeepfakes-facct22.pdf