In July 2025, MIT’s Project NANDA released the State of AI in Business 2025 Report. The findings are striking: despite an estimated $30–40 billion in enterprise investment, 95 percent of organizations reported no measurable return from their generative AI pilots. This result raises a significant question: what does it actually mean for an AI project to “fail”?
The GenAI Divide
The report introduces the concept of the GenAI Divide: the gap between widespread adoption of AI tools and the limited organizational transformation that follows. Tools such as ChatGPT and Copilot are widely piloted and deployed in some capacity, but most enterprise-grade systems stall before reaching production. According to the findings, the barriers are not primarily infrastructure, regulation, or even model quality. Instead, the failures are most often attributed to resistance to adoption, static tools that cannot adapt or learn, poor workflow integration, and unrealistic expectations from leadership. In short, the challenge is less about technical capability and more about the intersection of learning, culture, and process.
The AAEL Lens
In my doctoral research at Central Michigan University, I am developing a model known as AI-Augmented Exploratory Learning (AAEL). This is a scaffolded, self-directed learning approach in which professionals use AI as a co-creator through iteration, prompting, and refinement. Rather than focusing on static training or one-time adoption of a tool, AAEL emphasizes learning by doing with AI.
Viewed through this lens, the reported failures of enterprise AI projects are better understood as failures of learning rather than of technology. The limitations appear across four dimensions: people, programming, frameworks, and culture. People are rarely trained in how to learn with AI. Programming environments are static, lacking memory and adaptability. Adoption frameworks fail to align workflows with tools. Finally, organizational culture is hindered by unrealistic expectations and a resistance to mistakes.
AAEL suggests that mistakes are not evidence of failure. They are necessary steps in the process of adapting to a new paradigm. In this view, the real measure of success lies in whether organizations can capture learning from mistakes and turn them into improved processes and outcomes.
Why This Matters
The report also describes what it terms the “shadow AI economy,” in which employees succeed with personal tools such as ChatGPT and Claude even as official enterprise deployments stall. This finding supports the central premise of AAEL: when individuals are empowered to explore, iterate, and refine their use of AI, measurable learning and productivity gains emerge. The problem arises when organizations fail to build structures that allow this type of learning to scale beyond individual initiative.
Accessing the Report
You can download the full State of AI in Business 2025 Report here:
https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
Citation: MIT Project NANDA (2025). The State of AI in Business 2025: The GenAI Divide.
Conclusion
The word failure may not be the most accurate way to describe the outcomes of these AI initiatives. Adapting to a paradigm shift necessarily involves trial, error, and iteration. The central issue is not whether mistakes occur, but whether organizations are prepared to learn from them. The AAEL model offers one way of reframing these challenges: shifting attention from adoption metrics toward building the capacity for continuous learning.
Doctoral Contact: forem1r@cmich.edu
Website: NhanceData.com
