RecursiveIntelligence.io

Practical AI Methodology Meets Cognitive Science

Why Some People Love AI and Others Think It's Junk: The Hidden Divide That's Reshaping Our Workforce

10 min read
  • AI methodology
  • critical thinking
  • workforce
  • education

The Moment Everything Changed About How I Learn

I used to hate history.

In high school, I was terrible at remembering dates and facts. While I loved the stories about the past, trying to cram just the facts into my brain and regurgitate them for standardized tests was daunting and took all the fun out of learning. But there was something else that bothered me even more: the way history was presented. The past was painted as a time when people were more perfect, hardworking, braver, and more morally pure than we are today. I felt nothing like these people, and I thought the lesson was that I should strive to be like them. I beat myself up when I couldn't live up to those impossible standards.

Fast forward to college and my first real history class. From the first lecture, we didn't start with dates, facts, or secondhand narratives. We started with primary sources. The entire class textbook consisted of first-hand accounts, and the very first sources we examined were the journals of Christopher Columbus.

We learned how to evaluate primary sources critically — not through academic rote, but through careful analysis. The first thing that struck me when reading Columbus's journals was that this was not the Columbus I had learned about in school. This was a flawed, sometimes scared, often wrong, narcissistic, and deeply morally compromised individual.

I immediately fell in love with history as an academic pursuit.

Later, I took technical courses in instrumentation, software engineering, electronics, and networking. I learned troubleshooting, data analytics, and iterative, error-correcting methods. But I always came back to my early education in history — I eventually majored in it and wanted to be a teacher before switching to technical and engineering studies. That training in historical critical analysis became the bedrock for everything I did later, including troubleshooting and data analytics, because I had learned to question and evaluate sources, data, and information.

I had no idea at the time that this educational transformation would prepare me for something that didn't even exist yet: working effectively with artificial intelligence.

When AI Started "Hallucinating" and I Wasn't Bothered

About a year ago, during my postgraduate coursework, I started using AI intensively for exploratory data analysis in machine learning applications. AI wasn't yet sophisticated at writing long sections of code, so I used it more to augment my coding skills and help with syntax as I was learning Python.

When I tried to get it to write longer sections of code, it would start hallucinating — creating output that looked like code but was essentially junk. Complex functions that seemed plausible but wouldn't work. Variable names that didn't exist. Logic that made no sense.
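To make that failure mode concrete, here is a minimal, hypothetical sketch. The `summarize` helper is invented for illustration; the bug it narrowly avoids mirrors a common hallucination pattern, where a draft confidently calls something like `statistics.average()`, a function that does not exist in Python's standard library (the real names are `statistics.mean` and `statistics.stdev`).

```python
import statistics

def summarize(values):
    # A hallucinated draft might plausibly write statistics.average(values)
    # here. No such function exists in Python's statistics module; the
    # correct calls are statistics.mean() and statistics.stdev().
    return {
        "mean": statistics.mean(values),
        "stdev": statistics.stdev(values),
    }

print(summarize([2, 4, 4, 4, 5, 5, 7, 9]))
```

The fabricated call would raise an AttributeError the moment it ran, which is exactly the kind of error a quick test run catches and an uncritical copy-paste does not.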

But here's what surprised me: I wasn't bothered by this at all.

I quickly adapted to recognize the gap between my skills and AI's abilities, and I adjusted my strategy. Instead of using AI to do all the work for me, I learned to use it as a tool to augment my skills, help me learn, think of alternative strategies, and structure my writing and coding. It became less like writing code alone and more like writing code with a team.

I was able to do this because I had academic training in error-correcting strategies: critical thinking, problem solving, troubleshooting, and iterative improvement. When I spotted hallucinations that were actually critical enough to need correcting, I was seldom bothered or annoyed, no more than I would be with a human colleague. I simply pointed out the error and provided a correction to align our understanding so we could move forward. In worst-case scenarios, I had to start over, just as I would with a person if the conversation had gotten too far off course.

The Realization: AI Isn't Broken, Our Expectations Are

When I started a conversation with an AI, the first output was often just the beginning of the conversation, not the final solution I was looking for. It's not much different from starting a conversation about a new project with a human. I might have a clear vision for the project, but I had to take time to explain it, and it rarely came across fully formed from my first explanation. The other person might ask questions and offer their own insights. They might misunderstand what I said, and I'd have to explain it differently. This was an iterative process, not a question-and-answer exchange or a query-and-result data search.

But then I started noticing something troubling in online forums and even at work: people dismissing large language models entirely simply due to hallucinations. They didn't talk about hallucinations as a small percentage of AI's output, but rather as if the presence of any hallucinations at all meant the whole thing couldn't be trusted.

I wondered for months: why would you dismiss the whole system for one or two occasional errors? Hell, humans hallucinate too. I can think of people I work with who are brilliant technical experts, who can code far better than I can, creating incredible applications that serve critical functions in chemical manufacturing. But they might have strange ideas about history that I know aren't true because I have deep knowledge and academic training in that field. Yet I don't dismiss their entire technical expertise and knowledge because I question their insights on history.

The Hidden Divide: How We Were Taught to Learn

That's when I realized what was really happening. There's a fundamental divide in how people approach information, and it goes back to how we were educated.

I'm not using AI to provide me with facts. I'm using AI to extend my thinking. It's like having a thinking partner: a brilliant but naive expert on far more topics than I could ever master myself. I use AI to bounce ideas off of, to help structure my thinking and my writing or coding, and to speed up my access to knowledge and data sources — not to hand me definitive facts. If I do use it for factual information, I vet those facts and check sources, because I'm trained to do that anyway.

For me, AI doesn't add cognitive load because I'm not naive about sifting through information to determine fact from fiction. I have the tools and background that make that part easy. But I can see where someone without those tools, and with biases toward what is true or false, would become cognitively overwhelmed with the output from an AI when they don't know how to discern what is reliable information and what isn't.

This is especially true when so many people are familiar with search engines for looking up information. You ask a question and you want clickable links to authoritative sources. Search engines have become algorithmically biased toward users' own biases, showing them the information they want to see. When presented with information that you have to think through — information that might challenge your biases — it suddenly becomes a cognitive burden to think through those results with no rigorous error-correcting methodology to guide you.

The Stakes: Why This Matters More Than You Think

This isn't just an academic observation. As philosopher David Deutsch points out, this is a critical issue for our society. Static societies without error-correcting and problem-solving mechanisms will not progress like dynamic and open societies that can adapt and discard bad explanations.

In the same way, right now people with critical thinking skills and error-correcting methodologies are at a major advantage using AI, and I expect we will begin to see wider gaps in worker abilities as AI advances. Users with these skills will become ever more capable of leveraging AI to bridge gaps in their skills and abilities. People without these error-correcting abilities, and with cognitive biases against the output of large language models, will struggle more and more to adapt to this new way of working.

We're not just talking about productivity differences — we're talking about the creation of a two-tier workforce. Those who can work symbiotically with AI will amplify their capabilities exponentially. Those who can't will find themselves increasingly left behind, not because they lack technical skills, but because they lack the cognitive frameworks to work with probabilistic, imperfect, but incredibly powerful tools.

What Needs to Change (And It's Not What You Think)

Most AI training today focuses on how to prompt better, how to use specific tools, or how to integrate AI into existing workflows. That's not enough.

It's critical that we change our strategies for educating our workforce about using AI to focus on learning error-correcting methods like troubleshooting, problem solving, and critical thinking. I firmly believe that once learned, these methods cannot be unlearned and will serve our workforce well in utilizing AI going forward.

Even more critical is making this change in our education system as a whole. It's less efficient and often too late to retrain workers after they leave the public education system with its focus on rote memorization rather than error correction.

We need to move from David Deutsch's "bucket" model of education — where minds are containers to be filled with facts — to his preferred model of minds as error-correcting mechanisms that can evaluate, test, and refine ideas. The students who learned to question Columbus's journals rather than memorize his accomplishments are the ones who will thrive in an AI-augmented world.

A Call to Leaders: The Opportunity Hidden in Plain Sight

If you're a business leader, trainer, or educator reading this, here's what I want you to understand: the hallucination "problem" isn't holding back AI adoption — our approach to training people to work with AI is.

Companies that figure out how to teach their workforce to think critically about AI output, to iterate and improve with AI as a thinking partner, and to distinguish between AI as a fact provider versus AI as a cognitive amplifier will have an enormous competitive advantage.

This isn't just about implementing new technology. It's about developing human capabilities that complement AI rather than compete with it. It's about creating learning organizations that can adapt, error-correct, and continuously improve — the exact capabilities that will matter most in an AI-augmented future.

The divide between organizations that succeed with AI and those that struggle won't be determined by which models they use or how sophisticated their prompts are. It will be determined by whether their people have the cognitive frameworks to work effectively with powerful but imperfect tools.

Where Do We Go From Here?

The good news is that these skills can be taught. Critical thinking, error correction, and iterative problem-solving aren't mystical talents — they're learnable methodologies. But they require a fundamental shift from how most of us were educated.

Instead of teaching people to seek authoritative answers, we need to teach them to evaluate provisional solutions. Instead of training them to avoid errors, we need to train them to correct errors quickly and learn from them. Instead of focusing on tool mastery, we need to focus on cognitive flexibility.

This is the hidden opportunity of our time. While others debate whether AI will replace human workers, the real question is: which humans will be able to work effectively with AI? The answer depends less on technical skills and more on the fundamental approaches to learning and thinking that we can start developing today.

The future belongs to the error-correctors, the question-askers, the people who learn to approach problem solving with a critical eye. The question is: are we ready to adapt to create more of them?

