What if the problem isn’t AI, but the incentives built into our courses?
Over the past year, much of the conversation in higher education has focused on artificial intelligence and academic integrity. Faculty workshops, institutional policies, and media coverage frequently center on the same concern: students using generative AI to complete assignments, tests, and exams. In online environments especially, this concern is understandable. The tools are widely available, increasingly capable, and often difficult to detect.
However, I increasingly wonder whether we are asking the wrong question.
Rather than focusing exclusively on how students might cheat using AI, perhaps we should ask a more fundamental question: why do students feel the need to cheat in the first place?
If a student genuinely wants to become a data scientist, analyst, or software developer, the motivation to learn statistics, programming, and modeling should be intrinsic. These are foundational skills that cannot be outsourced in a professional setting: no AI tool can interpret data, construct models, or explain analytical decisions on someone's behalf in a real workplace. When students attempt to shortcut the learning process, it raises a deeper question about the structure of our courses and the incentives we create.
Are our assignments designed to cultivate understanding, or simply to measure compliance?
Artificial intelligence did not invent academic dishonesty. Students have always found ways to copy, outsource, or shortcut their work; solution banks, shared answer keys, ghostwritten papers, and copied code all long predate generative AI. AI did not create the problem so much as make it more visible, forcing instructors and institutions to confront an uncomfortable possibility: some assignments may be easier to outsource than we would like to admit.
This does not mean AI is inherently the problem. In many ways, AI simply exposes weaknesses in how learning activities are designed. If an assignment can be completed entirely by an AI system with minimal understanding from the student, such as a generic problem set with a single canonical answer, that may say as much about the design of the assignment as it does about the technology.
Perhaps the conversation in higher education should shift from asking, “How do we stop students from using AI?” to something more productive: “How do we design courses where AI cannot replace thinking?”
I would be interested to hear how other instructors, researchers, and industry professionals are thinking about this issue.
Robert Foreman
Doctoral Student – Educational Technology
Central Michigan University
Email: forem1r@cmich.edu
Phone: 480-415-0783
Website: https://NhanceData.com
