How *Not* To Be Overly Impressed with #ChatGPT


So I am one of those people who is normally critical of AI but fell in love with #ChatGPT once I tried it. I love its tone and… never mind. This post is not about that. Although for a lovely post about letting oneself get excited, read this.

This post is about a lightbulb moment I had, about the “trick” that #ChatGPT (unintentionally?) plays on us. Or rather, it doesn’t trick us itself, intentionally or otherwise, because it’s not human. I mean the trick we experience when *we* think ChatGPT is doing something more intelligent than we’d logically expect it to be able to do. We tend to assume that’s just because we don’t understand how AI and machine learning work. That’s always possible, but I don’t think it’s always the case, you know?

So a post on a mailing list finally mobilized me to investigate something. You know the paper that blew our minds, about how #ChatGPT could respond relatively OK to some Wharton MBA exam questions? It suddenly occurred to me that:

a. It’s probably not the case that #ChatGPT gives the exact same (correct?) answer each time. So I tested the prompt, and sure enough, across 6 regenerations it gave 2 different answers, neither of which matched the one the professor got in the paper. I could have kept going until it got it, but that’s not the point, right? The point is, it doesn’t know the right answer; it just knows some possible answers and will give a different one each time, unless there is only one answer out there in the world and that is the only one it has learned. (A sketch of how you might automate this regeneration test follows this list.) This reminded me that my 11-year-old daughter realized that if you want to verify something, you use a search engine, not ChatGPT, because it can fabricate stuff; so ….

b. I Googled the prompt. And sure enough, I got two links: one to a site we all know students go to for help with assignments (acronym CH), and another called HBR Case Study Solutions.
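For the curious, here is a minimal sketch of how one might automate the regeneration test from (a) using the OpenAI Python SDK. This is my own illustration, not the paper’s setup: the model name and the placeholder prompt are assumptions, and the idea is simply to sample the same prompt several times and see how much the answers vary.

```python
# A minimal sketch (not the paper's setup): send the same exam prompt
# several times and see how much the answers vary between runs.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key
# in the OPENAI_API_KEY environment variable; the model name and the
# placeholder prompt below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

PROMPT = "<paste the exam question here>"
N_RUNS = 6  # matches the six regenerations in the post

answers = []
for _ in range(N_RUNS):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: any chat model will do
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # default sampling, so runs can differ
    )
    answers.append(resp.choices[0].message.content.strip())

# Free-text answers rarely match verbatim, so print them all and judge
# by eye whether the model keeps landing on the same final answer.
for i, answer in enumerate(answers, 1):
    print(f"--- run {i} ---\n{answer}\n")
```

If the model actually “knew” the answer, you would expect essentially the same final result six times; divergent runs are the tell.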

My interpretation is simply that #ChatGPT had access to what I found on Google, the same material students probably already access via Google, and it trained on that. That’s how it was answering: by paraphrasing answers it had seen before, not by logically deducing anything.

Here you go: the Google Doc documenting my experiment (open for commenting, or you can comment below).

This is not to say that there will never be an AI model that can solve unique, original case studies in operations management (there is already AI that solves math problems, for example). But it is to say that that model is not #ChatGPT. #ChatGPT was trained on a lot of text, and that particular exam question? It had seen something like it before, and that is why it could answer it with some degree of coherence. I think!

What do you think?

Header image: Image by Sergei Tokmakov, Esq. https://Terms.Law from Pixabay
