A follow-up to: very few worthwhile tasks? (weblink), with the subject tweet shared below.
As I thought more about the @benedictevans tweet (at bottom) from yesterday, I hit on another, and perhaps clearer, connection to a core discussion in my dissertation.
One struggle people have with finding effective uses (or developing fruitful practices) for ChatGPT, etc., is not that “very few worthwhile tasks can be described in 2-3 sentences typed in or spoken in one go”. The struggle is that very few 2-3 sentence prompts/queries, in a particular context, can effectively describe the worthwhile task the writer has in mind.
The problem here is not that worthwhile tasks cannot be described in 2-3 sentences. The problem is finding those few 2-3 sentences, for a particular task context, that connect the searcher with their next steps on their worthwhile task.
The data engineers I interviewed were particularly successful, in part, because their work was organized in such a way that they could grow their queries out of seeds in the code, text, or conversations they were working with.
So, perhaps “in one go” is apt, and we need to think about how to see, and to seed, the construction of effective queries/prompts.
Here is the tweet from the very few worthwhile tasks? post (weblink):
[highlighting added]
The more I look at chatGPT, the more I think that the fact NLP didn’t work very well until recently blinded us to the fact that very few worthwhile tasks can be described in 2-3 sentences typed in or spoken in one go. It’s the same class of error as pen computing.
On Twitter Jun 29, 2023