One Year into ChatGPT: Resources & Possible Directions for Educators in 2024

Estimated reading time: 8 minutes, 43 seconds

I haven’t blogged in a while – the world has been up in flames, my heart has been hurting, I’ve been feeling helpless, and no words can express how I’ve felt the past three months (though I tried here). The Palestine issue is urgent, and I’ll blog about it when I can, including its intersection with AI…

But this post is about AI and getting ready for the new semester knowing what we now know, roughly one year into ChatGPT taking over our classrooms and media feeds.


Before I review some useful resources to get you started, I want to say that I feel the priorities for 2024 in terms of AI in Education are:

  1. Teachers need to focus on enhancing their own critical AI literacy, particularly around what is possible with AI and exposure to the variety of uses available to students. Whether or not a teacher believes AI can or could or should be integrated into their teaching (this choice is highly contextual and differs by teaching philosophy, subject matter, and type of student in their class), they need to be aware of what is available to students for authorized or unauthorized use. Here is some good updated guidance for the beginning of the semester from Rutgers.
  2. Teachers are better off redesigning their assessments, or at least their assessment criteria, into forms on which students could not perform well using AI tools alone (below are some examples of how to support students in learning the limitations of AI tools), unless they are willing to either integrate AI tools or do pen-and-paper assessments. Basically, every teacher’s choice remains similar to before, and is one of:
    1. Make AI use impossible (e.g. by doing pen and paper assessments in class). This will likely remain the case for at least some foundational courses in languages and writing;
    2. Discourage AI use by redesigning assessments into forms on which AI would not perform well (e.g. more experiential learning, more assessments based on personal reflection and writing voice, live oral verification following written submissions). This is possible across the board but requires a lot of effort from teachers. Students may still use AI in unauthorized ways, but they would not be able to perform well on such tasks with AI alone;
    3. Allow AI use within boundaries (e.g. for certain parts of an assignment like brainstorming, or for certain assignments, like literature review, but not others). This is useful as a step towards critical AI literacy when students may not yet know when AI can be useful and when it can be harmful; or
    4. Allow indiscriminate AI use (I suggest keeping the “citation” element for now, though). This is useful once students have developed sufficient AI literacy and are more advanced in their studies, with basic reading and writing skills and foundational knowledge in their discipline.
  3. Teachers will likely benefit from taking some time to promote student critical AI literacy – particularly with regards to the way AI hallucinates or fabricates, or really, more eloquently put, produces “botshit” (see the botshit article slide deck by Hannigan et al) – although institutions will likely eventually ensure a foundation for all students such that teachers will not need to repeat this in every classroom (similar to foundations in information literacy, perhaps augmented by what Shah and Bender explain here, basic research skills and citation). It is also important within critical AI literacies to raise awareness of bias in AI output, which can reproduce epistemic injustice about marginalized cultures and populations in the world, as well as ethical concerns related to exploitation of human labor, impact on the environment, and intellectual property rights of creators of content used to train AI models. It is also important that none of us follow the “AI hype” (see the AI Hype Wall of Shame). It is essential that we “[see] AI not as a neutral technology but as political technology that can be used to serve particular policy objectives and ideologies.” (Ben Williamson).
  4. AI detectors are still very inaccurate and not worth using because of the high degree of false positives and false negatives, and the bias towards falsely labeling non-native speakers’ human writing as AI. Check out the latest article testing several detectors.
  5. Institutions and teachers will need to be clear on AI policies/guidelines (here is a good article on ways to do so; here is an article on how my institution has approached it). In the meantime, we need to continue to critique the political and epistemic implications of integrating AI into our lives, and question the ethical implications.

Review of AI Education Resources

AI Pedagogy Project from Harvard Meta Lab

[Disclaimer, I am on their advisory board]

This resource has three sections worth looking at:

  1. The AI starter and LLM tutorial for beginners. These can be useful for making sure teachers or students have the basics of AI. I recommend using the tutorial with students, and both the starter and the tutorial for teachers who are totally new to AI. This will likely get “old” as the technology spreads and updates.
  2. Resource section – probably overwhelming, but a good place to go for people who want to dig deeper
  3. Assignments section – this is what is most worth looking at, because these are curated assignments drawn from some of the best-quality assignments out there and from other spaces. Focused on the humanities, each assignment is labeled/tagged to make it searchable.

Examples of interesting assignments I found here include:

  • Debating the Ethics of Generative AI by Mohsin Yousufi. This is a simple, clear, straightforward discussion guide to help students (or honestly teachers even) think through three options for AI policies: ban use; use with boundaries; use freely with disclosure. Probably worth doing this activity in writing/language teaching departments, and in any course that plans to integrate AI literacy, but possibly after students have learned a little bit about the limitations, concerns or dangers of AI first so they can take a critical stance.
  • Illustrate a Hoax by Juliana Castro. This is a fun assignment that encourages students to create images to provide fake evidence for known hoaxes – and uses this as an opportunity to both explore these tools and question the impact of AI on misinformation going forward and how it is different from other forms of image manipulation. Useful for media literacy, critical AI literacy type courses.
  • A Tale of Two Critiques by Anna Mills. This is an assignment where students read a primary source, and ask ChatGPT to write a critique of it, and then compare the output of ChatGPT to a previously human-written critique of the same article. Students would hopefully develop a critical approach to looking at output of ChatGPT and figure out its limitations and where human writing stands out against it. Useful for any class that involves critical reading or writing.
  • AI Misinformation Campaign by Daniel Stanford. This is a relatively quick activity to show students how easily AI can create misinformation intentionally (though I think the bigger issue is when AI creates misinformation unintentionally!). I tried it on the vaccine/autism issue and struggled to make Claude respond – which is something the activity designer anticipated and asked students to think about workarounds (here is the chat thread on Poe of how I got it to do it in the end). This is useful as a reflection on credibility of AI output and also of prompt engineering around some of the AI censorship.
  • Critical Analysis Across AI Tools & Stereotypes by Peter Hartley. Invites students to try different image AI tools to see if they perpetuate stereotypes and whether one can prompt them to do otherwise. Strangely, version A of the assignment did not produce gender and race stereotypes but option B2 did. I’m not really sure why this is so. Useful for those teaching AI or media literacy, those wanting to teach about bias/stereotypes in AI, and any visual discipline.
  • Rock, Paper, Scissors by Sebastian Rodriguez. For an intro to computer science/Python class, where students use their own code and ChatGPT’s to learn about edge cases and assess ChatGPT’s ability to identify and correct edge cases.
  • Many Sides of Many Coins by Dana Dawson. Invites students to prompt AI in different ways about a controversial issue and reflect on the outputs and how they might impact the future of journalism. Useful mainly for journalism courses, but can work in other writing-heavy courses.

Exploring AI Pedagogy

This site is a collection of applications of AI in teaching, and provides a brief description of the activity/assignment, followed by a reflection from the educator on how it worked out in their own class. The descriptions of the activity or assignment itself are relatively brief, but the reflections are truly helpful to the reader.

Examples of ones I found interesting:

  • Co-learning with AI: Roadmap processing by Stefani Boutelier: An assignment to help learners think about whether and when they would use AI in an assignment. It includes a really useful graphic on the process for students to think through their use of AI at different points, and the second page of the slide deck contains four strategies for verifying AI output and applying critical thinking when using it. Useful for any course that will allow some use of AI.
  • Reading Assistants: ExplainPaper and SciSpace by Marc Watkins. Encourages students to use AI reading assistants (either ExplainPaper or SciSpace) on papers students have historically struggled with, while encouraging students to:
    • Only use the reading assistants in sections that are difficult for them, rather than the whole paper, so as not to completely remove the deep reading component
    • Reflect later on how the reading assistant may have distorted meaning or tone or such

Watkins suggests we use these tools as part of developing critical AI literacy, encouraging students to examine the affordances and limitations of using the tools and the potential impact on the development of their reading skills.

More Resources

Two more resources worth mentioning that I didn’t have time to write a full review of are:

101 Creative Ideas to Use AI in Education

Learn WITH AI Toolkit


Featured image: abstract design with pink, purple and blue clouds – created with DALL E + Canva
