How can we help students be critical of the output AI gives them?
We know that students are using AI in their schoolwork and in their lives, but do they have the skills they need to use AI in ways that serve their learning rather than hinder it? Research suggests that they do not, and that they are looking to faculty to help them develop these skills. In a recent survey, 58% of students (n = 3,839) reported that they do not feel they have sufficient AI knowledge and skills, and 52% reported that they believe over-reliance on AI negatively impacts their academic performance (Digital Education Council, 2024). These data suggest that students realize they need help to become more critical of AI, of its role in their learning, and of their own use of AI tools. They need our help to make sense of the limitations of AI and to be critical of the information that a confident-sounding chatbot gives them when they prompt it.
Understanding what AI can and can’t do
Before we can help our students begin to navigate an educational and professional landscape that includes AI, it's important to have a general understanding of what AI is. Artificial intelligence is a broad term for the ability of a machine or computer to complete tasks in ways that simulate human learning. The AI tools that have received the most attention in the past two years represent a step forward in this technology and are called generative AI because they can create new content based on the data they have been trained on.
While putting a prompt into a generative AI tool can quickly produce impressive results, it’s important to recognize that generative AI doesn’t actually think: instead, it learns patterns from the data it is trained on and uses those patterns to generate human-like text. Rather than producing anything original, it is basically mirroring the characteristics of the data it has seen before. Understanding this distinction is important because it can help us begin to recognize the limitations of what AI can produce—and to appreciate how that differs from what human intelligence can do. It can also help us identify the kind of thinking we want to make sure our students learn to do in our courses and make the case for why they need to do that thinking for themselves.
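For readers who want to see this pattern-mirroring idea made concrete, the sketch below is a deliberately tiny "language model" in Python. It is a toy bigram model, nothing like the neural networks behind modern generative AI tools, but it illustrates the principle described above: generation is a matter of continuing statistical patterns found in training data, and the output can only recombine what the model has seen.

```python
import random
from collections import defaultdict

# Toy training data: the only "knowledge" this model will ever have.
training_text = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog on the mat"
)

# "Training": record which words follow each word in the data.
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start_word, length=10):
    """'Generate' text by repeatedly sampling a plausible next word."""
    output = [start_word]
    for _ in range(length):
        followers = transitions.get(output[-1])
        if not followers:
            break  # a word with no recorded followers: the model is stuck
        output.append(random.choice(followers))
    return " ".join(output)

print(generate("the"))  # e.g., "the dog sat on the mat and the cat saw the"
```

Every word in the output appears in the training text, and the model can sound fluent without understanding anything it says. Real large language models replace this simple word-count table with a neural network trained on enormous amounts of text, which is why their output is so much more convincing, but the generate-by-continuing-patterns principle is the same.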
Helping students think critically about AI in your discipline
Students need practice reflecting on, critiquing, and discovering the limits and biases of large language models. This means engaging with AI in structured ways that help students develop habits of critical thinking about it. Let's look at three examples of these structured experiences with AI in a range of disciplines:
Example: Students critique AI-generated work in a discipline in the social sciences
Students in an education course prompt AI to create lesson plans for the grade level they intend to teach. They then evaluate those lesson plans using course principles such as culturally responsive pedagogical practices, skills differentiation, or alignment with key standards. Students formalize their critique in a short essay in which they describe what they learned about the course principles, the limitations of AI in the discipline of education, and how they plan to use (or not use) AI in their fields in the future.
Example: Students critique AI-generated work in a discipline in the humanities
Students in a sculpture class are learning about installation art. They prompt AI to design a museum space and an installation within that space. They then evaluate the installation space generated by AI using aesthetic, cultural, and critical theories they are studying in their course. Students formalize their evaluation in writing, describing their new understandings of both course concepts and the limitations and potential of AI for artists and for those who design spaces for them.
Example: Students critique AI-generated work in a health care-related discipline
Students in a public health course learn about a commercial algorithm that is used to make health care decisions in the U.S. health care system. They then estimate the level of bias that might be built into the system, share their ideas with each other, and work to reach consensus. The instructor debriefs their ideas and then reveals the findings of one study of the system. The abstract of that study is provided here:
The U.S. health care system uses commercial algorithms to guide health decisions. Obermeyer et al. find evidence of racial bias in one widely used algorithm, such that Black patients assigned the same level of risk by the algorithm are [in reality] sicker than White patients. The authors estimated that this racial bias reduces the number of Black patients identified for extra care by more than half. Bias occurs because the algorithm uses health costs as a proxy for health needs. Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients. Reformulating the algorithm so that it no longer uses costs as a proxy for needs eliminates the racial bias in predicting who needs extra care. (Obermeyer et al., 2019)
Students then work to determine how the bias might have come about and how public health professionals can put precautions in place when training AI systems to work with large data sets.
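For instructors who want students to see this proxy problem mechanically, the following is a minimal, entirely hypothetical simulation in Python. It does not use the real algorithm or data from the study; it simply assumes two groups of patients with identical distributions of true health need but different historical spending, and shows how flagging patients for extra care by predicted cost reproduces the kind of bias the abstract describes.

```python
import random

random.seed(42)  # make the illustration reproducible

# Hypothetical assumption: Groups A and B have IDENTICAL distributions of
# true health need, but historically less money is spent on Group B
# patients at any given level of need (spend_factor < 1).
def simulate_patients(group, n, spend_factor):
    patients = []
    for _ in range(n):
        need = random.uniform(0, 10)                     # true sickness level
        cost = need * spend_factor + random.gauss(0, 1)  # observed spending
        patients.append({"group": group, "need": need, "cost": cost})
    return patients

patients = simulate_patients("A", 1000, spend_factor=1.0) + \
           simulate_patients("B", 1000, spend_factor=0.6)

def share_of_group_b(flagged):
    return sum(p["group"] == "B" for p in flagged) / len(flagged)

top_n = len(patients) // 5  # flag the "riskiest" 20% for extra care

# Biased rule: rank by COST, using spending as a proxy for need.
flagged_by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)[:top_n]

# Fairer rule: rank by true NEED (analogous to the reformulation in the study).
flagged_by_need = sorted(patients, key=lambda p: p["need"], reverse=True)[:top_n]

print(f"Group B share when flagging by cost: {share_of_group_b(flagged_by_cost):.0%}")
print(f"Group B share when flagging by need: {share_of_group_b(flagged_by_need):.0%}")
# By construction the groups are equally sick, so flagging by need yields
# roughly 50% from each group; flagging by cost flags far fewer Group B patients.
```

Students can change the spend_factor or the noise and watch how the disparity responds, which makes the abstract's claim that "the algorithm uses health costs as a proxy for health needs" tangible rather than abstract.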
In each of these examples, students spend time deeply exploring and critiquing the limits of AI in the discipline. These activities take some time, but they pay off greatly in terms of students' understanding of what AI is and their own beliefs about its use in their work. A structured activity like this can take place over the course of a class meeting or could be assigned as homework. Make time for students to articulate their individual ideas about AI and to share them in small-group discussion or in a whole-class setting. Remember, students are seeking guidance and trying to create their own ethical frameworks for the use of AI. This takes time.
As you consider how you might structure an activity like this in your own course, use the reflection prompts below to begin planning:
- What ideas for critically investigating AI in the examples above look potentially helpful for your students?
- How is AI used in your discipline, in the field, or in practice? What kind of work does AI produce that your students could investigate?
- How might you have students evaluate or critique the work that AI generates?
- How could you ask them to reflect on and make use of the experience they had with AI individually and with their peers?
Helping students think critically about AI in your course
Students need practice reflecting on, critiquing, and discovering the limits and biases of AI in relation to the work of your course. This means structuring learning experiences for students in which they critically evaluate the work AI can generate in response to your course assignments or activities. Without these experiences, students may uncritically lean into AI in ways that short-circuit their own growth. Let’s look at three examples of these structured experiences in a range of disciplines.
Example: Students compare AI and human-generated work for a journalism course
The instructor of the course provides students with two outlines for articles on Afro-Indigenous farms. Unbeknownst to the students, one outline is AI-generated, and one is created by a person. The instructor says, "We've been working on planning out the arc of articles, especially in relation to a narrative that weaves the pieces of the story together. Use that focus and other aspects of our learning from the last two weeks to compare these outlines. Work in groups to do this comparative analysis. I want you to assign each outline a score of 0–10, with 0 being a horrible outline and 10 being a perfect outline. Make sure that your group has five reasons for your score." After groups share their ideas and the instructor debriefs the reasons for their critique, the instructor reveals that one of the outlines was generated by AI. It is likely that some students gave that outline a low score. It is possible that some students preferred it; this can often reveal student misunderstandings about what the instructor is actually looking for in an assignment (for example, a clear position, specific kinds of evidence, connections to course principles). The instructor helps students see how the AI outline was limited or fell short of the standards for the course.
Example: Students critique AI-generated work for a history course
The instructor of the course provides students with a short essay that was generated by AI but does not tell them that it is AI-generated. The instructor says: "We've been exploring the use of primary sources in our work on the history of women's crafting. I'm going to give you a three-paragraph essay on quilting in Appalachia, and I'd like you to read it and consider the extent to which the writer is productively using women's voices to support their central idea. In groups, use our writing rubric to help you decide how strong this historian's writing is. Make sure you identify specific aspects of the essay in relation to the scores you give on parts of the rubric." After groups share their ideas and the instructor debriefs the reasons for their critique, the instructor reveals that the essay was generated by AI. It is likely that many students found the essay to be wanting. It is possible that some students thought that aspects of the essay were strong; this can often reveal student misunderstandings about what the instructor is actually looking for in an assignment (for example, a strong thesis, good support for that thesis, connections to course principles). The instructor helps students see how the AI essay was limited or fell short of the standards for the course.
Example: Students critique AI-generated work for a chemistry course
Each week, students in the course Chemistry in Everyday Life submit short reflective journals in which they give personal examples of key chemistry concepts they are learning about. The first week of class, the instructor provides students with a sample journal entry but does not tell them that it is AI-generated. The instructor says, "I want us to learn more about how to create a strong journal entry for the course. Using this sample entry and the checklist I use to evaluate entries, I want you to work in groups: first see if the entry meets my expectations, and then make three suggestions for how to make it stronger." After groups share their ideas and the instructor debriefs the reasons for their critique, the instructor reveals that the entry was generated by AI.
In each of these examples, students spend time deeply exploring and critiquing the limits of AI in relation to the thinking, writing, or problem-solving you want them to be doing in your course. Structured activities like these work best when students engage with them during a class meeting. This is time well spent: students need support as they make ethical decisions and work to resist using AI to cheat in their academic careers. While these activities take some time, they pay off greatly in terms of students' understanding of what AI is and their own beliefs about its use in their education. Students often initially think that the text or work generated by AI is sophisticated and better than what they can create. Help dispel these ideas by describing the value their own individual ideas, voices, and efforts bring to the work they do for you.
These activities can lead to rich discussions about why we might be tempted to lean on AI even when it simply will not help us learn. It is important to help students understand that your goal as their instructor is not to police their work but to ensure that they learn; help them understand that using AI inappropriately will rob them of that opportunity. You can ask students to generate ideas for how to resist turning to AI when they feel tempted, and you can make a genuine appeal in which you explain that you want them to do the work you've assigned because you are invested in their learning, growth, and development. Be sure to share the resources you have to support their learning so that they are not tempted to cheat with AI. Keep in mind, though, that your goal in these kinds of activities is not to focus on cheating, but to focus on the ways AI falls short of the thinking you want them to do, and know they can do, in your course.
References
Digital Education Council. (2024). Digital Education Council global AI student survey 2024: AI or not AI: What students want. https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-student-survey-2024
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342