Higher education is moving quickly to adopt generative AI platforms, dashboards, and tools. In many cases, institutions are committing real money, faculty time, and instructional redesign before they understand how these systems function in authentic learning contexts.
When these tools fail, they are often quietly abandoned. Rarely do we see formal evaluations, postmortems, or public explanations. What gets labeled as an “implementation issue” is usually something deeper.
The core problem is not the technology itself. It is that AI gets stabilized too early. Learning with AI is inherently iterative: it requires questioning, adapting, verifying, and reflecting. Universities, however, are structured to demand stability. Procurement cycles, accountability metrics, and scalability pressures push AI into fixed platforms long before the learning processes they are meant to support are understood.
That mismatch produces predictable outcomes. Shallow learning. Cognitive offloading. Tools that slowly lose instructional value. Over time, platforms degrade pedagogically and are quietly sunsetted, with little institutional memory of why they failed.
What we tend to call failure is not individual misuse or technical error. It is structural. When exploratory practices are prematurely locked into products, the learning process collapses first, even if the technology itself still "works."
If higher education wants to learn from AI rather than just deploy it, we need better language for recognizing these breakdowns and more willingness to examine what did not work, not just what looked successful in the moment.
—
Robert Foreman
Doctoral Student, Educational Technology
Central Michigan University
📧 forem1r@cmich.edu
#HigherEducation #EdTech #AIinEducation #LearningDesign #EducationalTechnology #InstructionalDesign #AIandLearning
