Should my students be using AI?
The rapid proliferation and widespread availability of generative AI tools in recent years have created a great deal of confusion and anxiety for many teachers in higher education. On the one hand, many faculty have experienced frustration with students’ misuse of AI and have stepped up efforts to police students’ behavior. On the other hand, many faculty feel pressure to have students use AI and have worked quickly to find ways to integrate it into their courses. Both of these approaches are unproductive because they are reactive and driven by the presence of the technology, not by our thinking about student learning.
Is there a better way to make teaching decisions amidst all the AI noise? There is! This guide provides a two-part framework to help you make pedagogically sound decisions about students using AI in your courses.
Part 1: Commit to a deliberative mindset
As scholars, we recognize the value of focused investigation and reflection. It’s important to use that same approach when we are making decisions about the role of AI in our students’ learning. While there is a tremendous amount of pressure to embrace AI as the next big thing, we can choose to stop and think before we make decisions about whether it belongs in our courses.
Give yourself permission to slow down and think
Whenever a new technology emerges, it’s easy to feel that you don’t want to be left behind—and that you don’t want to do your students a disservice by letting them get left behind. However, we must be careful to ensure that this anxiety doesn’t cloud our teaching decisions or rush us into adopting AI in our courses. Yes, AI is widely available, but the availability of a technology has nothing to do with its appropriateness for our teaching. You are not doing your students a disservice by taking time to think through your decision: in fact, you are demonstrating genuine concern for their learning by doing this deliberative work. Instead of deciding “AI’s out there, so we’d better be using it!”, take time to make an informed decision that is grounded in the learning you want students to do in your courses. Part 2 of this resource provides a step-by-step framework to guide that thinking.
Give your students permission to slow down and think
Many of us still subscribe to the notion that our students are “digital natives” who eagerly and easily adapt to new technologies. This is a myth. While traditional-aged college students have undoubtedly grown up surrounded by technology, that does not mean they have an inherent understanding of how to use it well, how to think critically about it, or how to make good decisions about when they should and shouldn’t use it. Findings from the Digital Education Council Global AI Student Survey 2024 suggest that students themselves recognize that they need guidance to understand AI: 58% of respondents (n = 3,839) reported that they do not feel they have sufficient AI knowledge and skills. In the same survey, 52% reported that they believe over-reliance on AI negatively impacts their academic performance, even though 86% reported using AI in their studies.
What do these findings tell us? They tell us that students are using AI—but even they recognize that they are doing so in ways that are potentially harmful to their learning. These findings are a cry for help, and we have a responsibility to provide appropriate guidance: we need to help them make better decisions. In some cases, this means guiding them to use AI effectively in their studies; in other cases, it means helping them understand why using AI is unhelpful or even detrimental to their learning. We need to help them move beyond the mentality that just because they have access to a tool, they should use it.
Give yourself permission to have genuine conversations with students about AI
The survey cited above provides a global perspective on students’ use of AI, but students in your courses may have a different set of experiences. Rather than make assumptions about what they are thinking or how they are using AI, ask them! As you make sense of your own expectations, include them in the conversation. You will likely discover that students have complex and varied feelings about AI and that they are confused or concerned about what is expected of them and about what they can expect in a future where AI continues to grow.
Give yourself permission not to use AI when it doesn’t serve students’ learning
Students are asking for our help, but they are not asking us to teach them how to blindly turn cognitive tasks over to AI. Instead, they need us to help them determine when using AI makes sense—and when it doesn’t. And the reality is that there are many cases in which using AI does NOT make sense for students’ learning.
In some cases, using AI can de-skill our students. In other cases, its use is simply unrelated to the deep disciplinary thinking and problem solving that is at the heart of our courses. Being able to identify when AI use is unproductive can help you think more critically as you make decisions about your own courses. These unproductive uses of AI fall into three broad categories.
When AI “de-skills” students by doing work for them
Instructors may be tempted to have students use AI to do certain kinds of work because they are searching for ways to help students see or discover the right way to think or solve problems. Let’s look at an example of this unproductive use of AI.
Imagine a history instructor who wants students to make decisions about the primary sources they will draw on for their history projects and to find themes in those sources to help focus their interpretations. The instructor tells students to use AI to find five relevant primary sources and to create an annotated bibliography of those sources, and she requires students to report the prompts they used when they turn in the bibliography. Her rationale is that historians will soon use AI to do this work.
But here’s the catch: if the instructor wants students to learn the skills of choosing and annotating sources, the students must do that work themselves! Offloading it to AI means the instructor is “de-skilling” her students. While historians might use AI for some of this work, they would also critically evaluate what AI generates. Clearly the instructor should not have students use AI to do the work for them. If she believes that AI will be part of this kind of historical research in the future, she should change her plans and have students review and critique the work AI does.
When AI “de-skills” students by simplifying work for them
Instructors sometimes hear that AI can help them make cognitively challenging or complex work more accessible for students. While this initially sounds like a good idea, it may lead instructors to have students use AI in ways that are completely unproductive.
For example, imagine an instructor who assigns a published research paper as weekly reading in an introductory genetics course. She has chosen the weekly papers to align with key concepts and topics because she wants students to see that these concepts and topics are central to current research. She wants students to know about this research, but she feels her freshmen and sophomores are not prepared to read and understand the papers themselves. She tells students to feed the papers to AI and have it summarize them.
But there are clearly problems with this decision! Not only is AI doing the work for the students, but the need for simplification points to a deeper issue: if the instructor is assigning work students cannot do, she should slow down and ask herself what the value of this work actually is. In this case, reading original research may not be a learning goal in her course and therefore may not be a necessary assignment. If an assignment can only be completed by offloading it to AI, that is a sign the assignment itself may be unnecessary. The instructor should reconsider what students can meaningfully do with the summaries AI produces, or she should reconsider the assignment altogether.
When students do work with AI that is entertaining but pedagogically unsound
Instructors hear a lot of messages about the importance of diving into AI and playing with it; they are also often encouraged to have their students explore AI. Perhaps in an effort to normalize the technology, or perhaps because they feel they must use it, instructors can be tempted to find “fun” things for their students to do with AI. What might this look like?
Let’s picture an instructor of a public speaking class. He wants students to learn to identify, create, and use figures of speech to good effect in public speaking. He has students prompt AI to create twenty novel metaphors and then asks them to find the most unusual and unexpected metaphors from the list of twenty.
When we think a bit about this use of AI, the problem becomes immediately clear: if the instructor wants students to create metaphors, simply seeing metaphors generated by AI will not help them. Instead, the instructor must design work that helps students develop the skills to use figures of speech themselves. There is little to no connection between this use of AI and the aims he has for his students’ learning. Using AI for novelty’s sake usually results in this kind of meaningless, unproductive activity.
Now that we have considered the mindset that can help you ground your decisions about using AI, let’s look at a set of concrete steps for determining whether AI has a role in students’ learning in your course.
Part 2: Determining whether students should use AI
The rest of this resource provides a framework that you can use to decide whether students should be using AI in your courses. There are no right or wrong answers here: the most important thing is to make sure that your students’ learning is your primary consideration.
The choice to use AI may be pedagogically valid in some contexts, but in other cases the best choice is not to use AI. This means that you will need to think about individual courses, and even individual assignments, differently.
This framework is purposely broken into small steps so that you can work thoughtfully and deliberately, and we encourage you to pause and complete each step as you read through this part of the guide.
Step 1: Identify the key skills you want students to learn
To start, make some notes for yourself about the work that you ask students to do in your course. That work probably entails developing broad categories of skills: problem-solving skills, lab skills, discussion skills, research skills, and so on. After you have noted these broad categories, list the subskills under each category that you want your students to learn. Try to be as concrete as possible.
Below are some examples to help you imagine what this list of skills might look like.
- Broad category: Problem-solving skills
  - Subskill: Identify key dimensions
  - Subskill: Identify most salient contextual factors
- Broad category: Lab skills
  - Subskill: Create concrete plan for entire experiment
  - Subskill: Make predictions about potential measurement error
- Broad category: Discussion skills
  - Subskill: Prepare questions that help peers think
  - Subskill: Create counterarguments in a discussion
Step 2: Identify the role of AI in developing these skills
After you’ve made your list, ask yourself this question for each skill and subskill: Would artificial intelligence help your students develop this skill? The question is not whether students could use AI to do this work, but whether using AI will help them develop the skill you’re targeting. In some cases you may find it easy to answer “yes” or “no”; in other cases, you may have to answer “maybe” until you’ve taken more time to think!
Step 3: Articulate why you made your decisions about using AI
After you’ve considered whether using AI will help students develop the skills you’re targeting, take some time to articulate for yourself why you think the use of artificial intelligence might help or hinder your students’ learning.
In cases where you answer no, you may find yourself writing things like this: “I want my students to be able to edit writing without any assistance from AI. They won’t be able to evaluate the suggestions AI would make unless they have their own strongly developed sense of the components of good writing.”
In cases where you answer yes, you may find yourself writing things like this: “I see AI as a thinking partner, so it is okay for students to use AI to generate initial ideas for essays/projects as long as they are aware that this partner is limited, sometimes wrong or biased, and might hallucinate.”
Step 4: Communicate with students about your decisions about using AI
After you have taken time to make deliberate decisions about the role of AI in your course, it’s important to communicate those decisions to students in ways that help them understand your reasoning. Be clear about your expectations, and let students know that you have made deliberate choices with their learning in mind. There are several places in your course where you can do this.
- Course policies – Course policies that help students understand how to navigate their learning in your course—with or without the help of AI—are crucial. This guide has already walked you through some of the foundational work for drafting effective policies; visit CATLOE’s resource on designing and writing course policies about AI to continue developing your ideas into a policy.
- Assignments – Students are seeing different expectations about AI use in different courses, and chances are that you have different expectations across individual assignments in your own course. Each assignment description you provide should be clear about whether students should use AI to complete their work and why. If you want students to use AI on an assignment, be clear about how and why you want them to use it. Visit CATLOE’s resource on designing assignments so that students use AI responsibly for guidance on writing assignment descriptions that communicate your decisions and help students approach their work effectively.
- Feedback – When students use AI where they shouldn’t, or in ways that don’t align with your expectations, remember that these are opportunities for communication. Rather than making assumptions and punishing students, give them feedback that reminds them that you care about their learning and that you have made deliberate decisions about the role AI does—or doesn’t—play in that learning. If you suspect misuse of AI, we encourage you to visit CATLOE’s resource on responding to suspicions of student cheating with AI to learn how you can create teachable moments when students make unproductive choices.
A final note: Broader ethical considerations
While most of this resource has focused on AI use through a pedagogical lens, it’s important to realize that the decision to use AI at all has ethical implications that go beyond our specific teaching contexts. For example, it is well established that generative AI encodes harmful biases that can influence its outputs and perpetuate existing biases among users. There are very real concerns about the environmental impact of AI due to its high levels of electricity consumption, water use, and carbon emissions. Privacy concerns abound as personal information and intellectual property are used to “train” new AI models; we need to make sure that students are aware of these implications when we ask them to upload their work—or anyone else’s—into AI. The list of concerns goes on. If you decide to have students use AI in your courses, it’s important to be aware of these ethical concerns and to talk with your students about them. In fact, students may raise some of these concerns with you; if so, take advantage of that opportunity to engage in thoughtful discussion about their responses to these concerns and how they might address them.
Resources
Bowen, J. A., & Watson, C. E. (2024). Teaching with AI: A Practical Guide to a New Era of Human Learning. Johns Hopkins University Press.
Digital Education Council. (2024). Global AI student survey 2024.
Glickman, M., & Sharot, T. (2025). How human–AI feedback loops alter human perceptual, emotional and social judgements. Nature Human Behaviour, 9, 345–359. https://doi.org/10.1038/s41562-024-02077-2
Leffer, L. (2023, October 26). Humans absorb bias from AI—and keep it after they stop using the algorithm. Scientific American. https://www.scientificamerican.com/article/humans-absorb-bias-from-ai-and-keep-it-after-they-stop-using-the-algorithm/