Not one studied how people actually learn with AI.


We’re measuring:

• Performance
• Efficiency
• Outcomes

But we’re ignoring:

→ What people do when they don’t know what they’re doing
→ How they iterate with AI
→ How they handle bad outputs
→ How confidence is built… or lost


Let me be more direct:

We’re studying AI like it’s a calculator.
Not like it’s changing how humans think.


So I put that claim to the test in my own workflow.

I built a Python agent to process my Google Scholar alerts, rank relevance, and generate daily research briefings:

👉 https://github.com/robazprogrammer/google_scholar_agent
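To give a flavor of the ranking step, here's a minimal sketch of the idea, not the repo's actual code. The interest terms, the Paper class, and the scoring rule are illustrative stand-ins, and it assumes the alerts have already been parsed into titles and snippets:

```python
# Minimal sketch of the relevance-ranking idea (illustrative, not the repo's code):
# score each paper by keyword overlap with your research interests,
# then print the top hits as a short daily briefing.

from dataclasses import dataclass

# Hypothetical interest terms -- swap in your own research keywords.
INTERESTS = {"learning", "self-efficacy", "iteration", "exploratory", "ai"}

@dataclass
class Paper:
    title: str
    snippet: str

def relevance(paper: Paper) -> int:
    """Count how many interest terms appear in the title or snippet."""
    text = f"{paper.title} {paper.snippet}".lower()
    return sum(term in text for term in INTERESTS)

def briefing(papers: list[Paper], top_n: int = 5) -> str:
    """Rank papers by relevance and format the top hits as a briefing."""
    ranked = sorted(papers, key=relevance, reverse=True)
    return "\n".join(f"{relevance(p)} | {p.title}" for p in ranked[:top_n])

if __name__ == "__main__":
    # Toy entries standing in for parsed Google Scholar alert items.
    alerts = [
        Paper("AI tutoring outcomes", "performance gains in math"),
        Paper("Exploratory learning with AI", "iteration and self-efficacy"),
    ]
    print(briefing(alerts))
```

The full agent also handles the alert parsing and briefing generation; the point here is just that a simple keyword-overlap score is enough to sort a day's alerts by relevance.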

122 papers this week.

Zero that actually examined the learning process with AI.


That’s the gap.

And it matters because AI is doing something education isn’t prepared for:

👉 It’s pushing people into unfamiliar tasks faster than we’ve ever trained them to handle.

Not just faster answers.

Faster exposure to uncertainty.


This is where my work is heading:

AI-Augmented Exploratory Learning (AAEL)

Not:
“AI helps you complete tasks”

But:

→ How humans approach unfamiliar problems
→ How they iterate when the first answer is wrong
→ How they decide whether to trust the output
→ How confidence is built through friction, not by avoiding it


Hot take:

If AI can complete your assignment…
you weren’t measuring learning in the first place.


I’m designing a study around this right now:

→ How professionals use AI to solve unfamiliar tasks
→ AI-only vs. AI-plus-coaching conditions
→ Measuring workflow, iteration, and self-efficacy


Because the real question isn’t:

“Should we allow AI?”

It’s:

👉 What does learning look like when the answer is always available… but understanding isn’t?


Curious:

Are we studying AI…

or avoiding the harder question of how people actually learn?


Robert Foreman
Doctoral Student, Educational Technology
Central Michigan University

📧 forem1r@cmich.edu
🌐 https://nhancedata.com
