ChatGPT is going to tempt me to be more skeptical of your work

Lately I’ve been seeing a lot of posts on LinkedIn and elsewhere crowing about how ChatGPT could be used to perform UX tasks. Examples:

The enthusiasm is great, but this level of shortcutting worries me. It’s okay to ask ChatGPT to find things to read about a topic, as long as you’re fine with some of the results being inappropriate or nonexistent. But I don’t think it’s fine to ask it how to do something, or to have it perform research on your behalf. ChatGPT’s emphasis is on delivering something that looks sensible, nothing more.

ChatGPT is not a knowledge model; it’s a language model. If you’d like to dive into just how ChatGPT works, Stephen Wolfram has a great explanation in his article What Is ChatGPT Doing … and Why Does It Work?

The core idea is that ChatGPT is very good at figuring out what a very likely next word might be, based on the prior words it has chosen, the prompt it was given, and word-frequency and proximity data derived from a huge amount of copy scraped from the internet. Since it doesn’t actually know anything, it does a great job of producing plausible-sounding English¹ of the sort you might find anywhere on the net. And since the internet is the training data, the quality of the output is only about as good as the average quality of internet writing, which is not fabulous.
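To make the mechanism concrete, here’s a toy sketch in Python: a bigram model that picks a likely next word purely from frequency counts. It’s a deliberately tiny stand-in for the idea of next-word prediction, nothing like ChatGPT’s actual architecture or scale, and the corpus and function names are invented for illustration.

```python
# Toy illustration of next-word prediction: choose a likely next word
# based only on how often words followed each other in training text.
# A stand-in for the *idea*, not for how ChatGPT actually works.
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts, prev):
    """Sample a plausible next word, weighted by observed frequency."""
    candidates = counts.get(prev)
    if not candidates:
        return None  # never saw this word; a real model degrades more gracefully
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

model = train_bigrams("the cat sat on the mat and the cat ran off")
print(next_word(model, "the"))  # "cat" (likely) or "mat" -- plausible, not "true"
```

Notice that nothing in there checks whether the output is true or sensible. It’s frequency all the way down, which is exactly the point.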

It’s important to remember that there’s no attempt to make sure that what ChatGPT returns is factually accurate. Bloggers and reporters experimenting with ChatGPT have accused it of making things up or “hallucinating,” but this complaint assumes that accuracy should be expected. It should not. ChatGPT is just trying to be plausible.

I’m not saying not to use ChatGPT. It’s great as a memory jogger, or as a way to avoid the tyranny of the blank page. It makes a perfectly shitty first draft that you can then do real work on. But if you just accept what it has to say, you are choosing a below-average and likely nonsensical result. And if you use it as a substitute for doing the work that ChatGPT is simulating the output of, you are lying to yourself and others.

Since ChatGPT produces superficially plausible output, hiring managers are going to need to scrutinize a candidate’s work more closely and quiz candidates more carefully. (Yes, we should already be doing this.)

On a Slack team I’m on, there was a recent debate about whether an engineering manager should accept ChatGPT output as the answer to a coding test, given that during their regular duties a new hire would be allowed to use resources like Stack Overflow (which often provides code snippets), Google Search, or even ChatGPT. What do you think, given the above?

  1. ChatGPT can produce reasonable-looking Python and other languages; a co-worker successfully asked it to return JSON in response to a bit of copy in which someone asked for an appointment on a specific date and time.
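For the curious, here’s roughly what that exchange could look like; the prompt and the completion below are invented for illustration, not a real transcript.

```python
# Hypothetical sketch of the JSON experiment described in the footnote.
# The prompt and the model's completion are invented examples, not real output.
import json

prompt = (
    'Return JSON with "date" and "time" keys for the appointment '
    'requested here: "Could we meet on March 14th at 3pm to go over '
    'the designs?"'
)

# The sort of completion a model might plausibly produce:
completion = '{"date": "2023-03-14", "time": "15:00"}'

appointment = json.loads(completion)  # parses cleanly if the model behaved
print(appointment["date"], appointment["time"])
```

Of course, “plausibly” is doing the work here: the JSON can parse cleanly and still name the wrong date.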