This post is an adaptation of something I wrote on a mailing list in response to Jon Ippolito's "AI Sandwich" idea, which he describes in his post "How To Teach with AI and Still Put People First".
The value of an AI sandwich (i.e., in an assignment, encouraging or allowing students to use AI at the start for brainstorming or outlining, then asking them to do some human-only work, then allowing them to use AI again for data analysis or the final writeup) is highly contextual.

Context truly matters here. Here are a few thoughts that I think can help us work out when AI use may be appropriate in an educational setting and when it may not. I use these to help educators, and also students, decide whether some type of AI sandwich is useful or harmful. The location of AI in the process can also differ from Jon's suggestion of AI/human/AI; it may actually be the other way round, i.e. human/AI/human. I think that whenever we encourage AI use, we do not emphasize #4 below sufficiently.
1. What are the learning outcomes of this particular course or assignment? Are they still relevant in the Age of AI, or should we adapt them now? [This is one our assistant director always brings up.]
2. What would students need to do outside of an educational context (in their personal or professional lives, or as citizens) based on what they're learning in this course? What uses of AI now would help or hinder them in developing those competencies or values?
3. What would drive students to use AI or other unauthorized help? To save time? Because they cannot do it on their own? Because AI seems to do it better? Because we are emphasizing product over process? Because they're uninterested in what we're asking them to do? Because "why not"?
4. What kind of harm might AI use introduce into this assessment, in terms of reproducing biases, losing the thread of knowledge creation (who came up with the theories/ideas that AI is giving us?), and interrupting students' own thinking process a little too early? What about the native speakerism that makes non-native speakers feel their writing will never be good enough if it doesn't sound like what AI produces?
5. What kind of benefit might AI offer in some contexts, e.g. saving time on writing so students/people can focus on the important work AI cannot do, like collecting data, running actual experiments in a lab, or doing work in the community? This is Jon's initial premise, right?
6. What basic skills and foundational knowledge do students need in order to use AI well and judge the quality of its output? Undergraduate students, for the most part, won't have the skills to tell whether AI's output on a topic they're just learning is good quality, or whether it summarized a paper well enough. If they don't know how to brainstorm, they may always resort to AI and never learn how to come up with their own ideas. Is it worth it (in our course) to teach students how to come up with their own ideas first, then use AI and compare? (This is something Jon did as an exercise with educators for Equity Unbound and AUC last month; recording here.)
7. Is there a point where crafting really good prompts becomes, in itself, part of the writing process, and learning to write a good prompt would itself be the key thing? (Some people say no.) I discovered that if I gave GPT-4 raw focus group data that was written clearly and prompted it carefully, it could summarize that data well. But I could only know it had done so well because I was myself the moderator of the focus group; if I hadn't been, I would have needed to revise the summary extensively. Does that make sense? In some use cases, AI is useful when the human element is really strong. (A rough sketch of this workflow follows after the list.)
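For anyone who wants to try something similar, here is a minimal, purely illustrative sketch of what that kind of careful summarization prompting could look like. It assumes the OpenAI Python client; the file name, prompt wording, and model choice are my own placeholders, not a recipe:

```python
# Minimal sketch, not a recipe: assumes `pip install openai` and an
# OPENAI_API_KEY in the environment. File name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

# Raw, clearly written focus group data (hypothetical file)
transcript = open("focus_group_notes.txt").read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You summarize qualitative focus group data. "
                "Group responses by theme and quote participants "
                "verbatim where possible. Do not invent content."
            ),
        },
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
# The output still needs checking by someone who was in the room:
# the human element has to be strong for the summary to be trustworthy.
```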
Sooooooo I think AI sandwiches are worth experimenting with and discussing in a meta way with students. Deciding which parts need to be human and which can be done faster, better, or just as well with AI is important, as is fostering critical judgment of AI's output. In the session linked above, Jon also differentiates between tasks that have one accurate, correct answer (for which AI is not a good way to go) and tasks that are more creative and divergent, where AI might give us good ideas that we would not come up with ourselves, or that would take us longer to come up with on our own or through multiple web searches. I love the botshit article: it shows four use cases for AI and the amount of human intervention needed to avoid botshit (i.e. humans using AI hallucinations uncritically).
A personal example: since I will never hire a graphic designer to create my blog banners, I sometimes make them myself with images from online, or edit something in Canva or such. I now sometimes use AI to create something more unique and creative, exactly what I want, based on a prompt. Sometimes it works perfectly and sometimes the result is horrible; I learned this by trying different things out. Take this blog post's banner: I gave DALL-E a couple of prompts for how I imagined visualizing an AI sandwich, and I just ran it multiple times. See this slide deck for my process:
These are my preliminary thoughts. Each of these questions would be answered differently depending on context, IMHO.
Header image generated by DALL-E 3 via poe.com, but not the last one it included. See the slide deck above for all the different images and prompts.