How Will Our Imagination Be Affected by AI? Musings


“Imagination is the central formative agency in human society. . . . It’s because we can imagine different futures that we can struggle against the present state of things.” —Ngũgĩ wa Thiong’o

There is a lot of controversy over whether widespread, frequent AI use is likely to enhance creativity and imagination or diminish them. I don’t think the answer is straightforward — it’s quite nuanced — but I do have a clear perspective on this, based on my experience using generative AI for text and image generation over the past 18 months or so.

We need to exercise our imagination

Long story short: if we stop exercising our imagination and use AI all the time, then of course AI use could diminish our capacity for imagination and creativity. And if we don’t already have confidence in our own creativity, we’ll end up relying on AI that creates seemingly new things that are really only synthesized, kaleidoscopic recreations of what already existed in its dataset. It’s a white, male, Western, Northern-biased dataset, occasionally peppered with some human anti-bias hard-coding or something, which ends up making it a little… ummm… prudish at times, censoring according to implicit values the developers deemed important, which may or may not align with ours.

My biggest concern is with non-native speakers of English who have no confidence in their writer’s voice or their writing ability, who may not enjoy writing and therefore relegate some of their writing to AI, become dependent upon it, and can no longer express themselves without its help. This is a bigger problem if AI sort of grooms them to sound more like itself: soulless, formulaic English. But of course, much of our schooling and university writing instruction does exactly this. So using AI to fix grammatical mistakes, help paraphrase, or make sense of a difficult text sounds like a reasonable way to use it as a “ladder”, but we have to enter into this with eyes wide open: constantly using the ladder means learners may never learn to jump higher on their own.

The Hybrid or Augmented Human-AI imagination 

My most frequent use case for AI is image generation. I’m not a visual artist or graphic designer, so it would take me a long time to create the exact visual I want, but I can write a good prompt for an AI tool, get multiple versions of an idea, and pick the one I prefer. It’s a huge time saver, and occasionally, analyzing the AI-generated image for some of its faults is even a plus. Sometimes simple prompts give me what I want; other times it takes a bit more nuance and detail.

What I mean to demonstrate here is that when something is not a priority for us, when we are old enough to know we won’t invest time learning it in the near future, and when we can’t afford to pay a professional for a low-impact use case (like the featured image of my blog), I think AI use can augment our imagination.

In all cases, I would have built on other people’s work anyway, getting CC-licensed images from Pixabay or the like, and I still do that a lot. I also take my own photos.

Where accuracy and bias are not a concern, I am happy to try AI for image generation — but not for scientific information or historical accuracy. And I consider myself aware enough to detect bias in human imagery.

For brainstorming, it can go either way. If you have complete writer’s block and don’t know where to start, many people find AI helpful. Or if you already have an idea and want to take it further a bit more quickly, AI can help.

One way I’ve used AI that I’m torn about is developing a syllabus. First, if it is my own course, I should be able to do this on my own. Second, there is value in taking the time to browse through five or six other educators’ versions of a syllabus for a similar course, picking what suits me, and also checking out who these people are, where they teach, and whether I know them and can follow up with them. There’s value in that, long term and short term; it is worth the time investment. Compare that to an AI-generated syllabus, which is probably an amalgam of hundreds of syllabi that acknowledges neither the efforts of the people who wrote them nor their positionality, and which may not end up fully coherent or contain real, good-quality references. However, when I put my educational developer hat on, AI can give me ideas of how someone else’s syllabus for a subject I don’t know MIGHT look.

I’d still rather see a real one, though, and be able to say this is the syllabus of so-and-so at university X. Then we can assess the credentials of the person and the university, you know?

I sometimes write a blurb and a boring title for a workshop I am about to give, then ask an LLM to generate 10 interesting titles for me to choose from. Occasionally one or two help me take the title a step further than I would have alone, but not always. It sometimes comes up with sleazy, hyperbolic titles that I would feel awful using: so not “me”. And because workshops are so much of what I do and who I am, it matters that my values and personality come through in how I name and describe them. Giving AI my own material to work with often brings the result closer to something I would write or say, but am I also giving the LLM my ideas and writing voice for free? It’s not rocket science to me, but for someone else it might be.

Human creativity augmented with AI would only be beneficial, IMHO, if the weight of human judgment before and after AI use is high. In the same way it’s important to estimate something mentally before you put it into the calculator, so you know roughly what should come out, I think with AI we need to do this constantly: not just to preserve our imagination, but also to preserve our criticality and our awareness of epistemic bias.

Instances Where I Believe AI Would Limit Imagination 

Pretty much any time we’re asked for our opinion and we use AI instead, that’s a red flag for me. AI doesn’t know your history and positionality (as my friend Rima has said), so it cannot represent what you think about something. Even if it collected all the data analytics about you, those are only the measurable, machine-readable dimensions of you, and AI will assume you conform to the norm of what people with similar characteristics would do. Two instances of this happened in front of me recently, of someone using AI instead of giving their opinion. One was reviewer 2 for a paper on AI I had co-authored. Their response was clearly formulaic and out of context; it also seems they didn’t upload our entire paper, because the feedback said we lacked a discussion and conclusion section altogether, which wasn’t true. The other instance was a survey in which several respondents used AI to answer a question about what they think a particular community of educational researchers and practitioners should do in the future. The answers were far longer than everyone else’s and had that generic, redundant, repetitive feel of poorly prompted AI. If you aren’t interested in the paper, don’t peer review it; if you aren’t interested in the survey question, don’t answer it. They’re not graded assignments. Ha.

Back to the original point: when asked for our opinion, I don’t think we should use AI. This can be most dangerous for a young learner used to an authoritarian schooling or political system. They’re not used to being asked for or expressing their opinion as it is. They may find the AI-written opinion impressive, but it cannot possibly represent what they really think, can it?

In a sense, our ideas and opinions build on the ideas of others, of course, but they also incorporate something of ourselves. AI doesn’t have that latter part, and when it does the former part, it doesn’t bring with it the life experience you gained while learning of others’ ideas. This shortcut helps nobody if you’re doing something worthwhile.

I want to say that if we keep wanting to use AI for something (in its entirety, not just in a small part) instead of doing it ourselves, we should probably question whether the thing itself is worth doing at all (inspired by Anne Marie Scott). If the answer is that it is definitely worth doing, then we need to ask why we’re not capable of doing it efficiently on our own, and whether doing it a lot on our own might help us become more efficient at it. For example, I write emails much faster than most people (I do most written and verbal things faster than most people, maybe because I type fast and also because my brain gets distracted if I don’t finish the thought fast enough). But I’m sure I used to take longer, and people who don’t send as many emails do take longer.

In the same way that authoritarian regimes or educational systems can limit our capacity to imagine beyond the box we’re placed in, AI is just a new, shinier box that appears to give us more possibilities, but they’re still within a box. Like a kaleidoscope, it gives a different, unique pattern each time you look, but it’s still limited to the ingredients it contains, whether that’s the dataset of a pretrained GenAI model or even the whole of the internet. Not all knowledge and wisdom exist in machine-readable form, not everything valuable to learn is cognitive, and not all of it can be digitized.

We need to ask ourselves a lot of questions before we decide to use AI to expand our imagination, because that’s not what it’s really for. I’m not sure what it is really for, even, because it’s not that good at many things I deem important anyway. I understand why university students lack the judgment to see this; I just don’t understand why some educators still don’t. Not that GenAI is completely useless. Just that there are so many important things it either can’t do, won’t do well, or won’t do in the way we would have done them. And we need to maintain confidence in our own capacity to create new and original things.

Featured image: a person in a cage, with colorful brains, birds, and fields around and outside, created with DALL-E 3 via poe.com using the prompt “limiting imagination”, which, across 3 prompts, kept giving me cages or locks/chains.
