Exploring whether Artificial Intelligence is making us dumber or transforming the way we think
It’s 1975. There is no internet, no world wide web, no Google. The personal computer won’t even become mainstream for another 5-10 years. You want to learn something new. What are your options?
- Call a friend who has experience in the field?
- Check the classifieds or the yellow pages – maybe there’s a course you can take?
- Go to the local library and find a few books on the subject?
At the time, learning was a slow, manual process. We didn’t have smartphones in our pockets that were constantly connected to a global information network. There were no personal computers. There was no Wikipedia, no Google, no AI. Learning something new meant you had to actually use your brain, put in some work, and focus for an extended period of time. By today’s standards, the process seems almost archaic.
Information Mayhem
Then came the personal computer, and with it the internet: email, instant messaging, online chat rooms, forums, search engines, social media … it was all right there at our fingertips. And so was a world where dusty library shelves were replaced with hyperlinks and search bars. Information was everywhere, immediately accessible with minimal effort. “Googling” became a verb, and the process of learning became faster, more efficient, and infinitely more accessible. In this world, we no longer needed to know where to look for information – that was easy: the internet – we only needed to know what to search for.
That shift – from analog to digital – did more than just speed things up. It changed how we think. The effort once spent digging through indexes or scanning chapters was replaced with filtering results and skimming snippets. We learned to think in keywords instead of questions, and with so much information at our disposal we couldn’t possibly sift through it all ourselves. So we built automated tools to do the processing for us, and in doing so we sacrificed some autonomy of thought, entrusting complex, little-understood ranking algorithms to put what we needed right at the top of the page.
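To make that shift concrete, here is a minimal, purely hypothetical sketch of what keyword-based ranking boils down to: score each document by how often the query’s keywords appear in it, then surface the highest scorers first. The documents and the scoring rule below are invented for illustration only – real search engines rely on far richer signals (links, freshness, behavior, and much more).

```python
# A drastically simplified, hypothetical keyword ranker: score each document
# by how many of the query's keywords it contains, then sort best-first.
docs = {
    "Card catalog basics": "find books using the library card catalog and index",
    "Intro to web search": "type keywords into a search engine and scan the results",
    "Learning online": "search the web, filter results, skim snippets, follow hyperlinks",
}

def rank(query: str, documents: dict[str, str]) -> list[tuple[str, int]]:
    keywords = query.lower().split()
    scored = []
    for title, text in documents.items():
        words = text.lower().split()
        score = sum(words.count(k) for k in keywords)  # naive term-frequency score
        scored.append((title, score))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

print(rank("search results keywords", docs))
```

Crude as it is, the sketch captures the bargain we struck: we type a handful of keywords, and an opaque scoring function decides what we see first.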
Now, we’re on the edge of another leap forward – one that makes even searching Google or looking for a tutorial video on YouTube feel ancient and inefficient.
The Age of AI
These days, we don’t even look at the search results. With “AI summaries” sitting at the top of the first page of traditional search engine results, a plethora of AI tools like ChatGPT, Claude, and Gemini available for general use, and even powerful agentic browsers with AI built right in, it’s becoming increasingly clear that the way we think and work is shifting. A July 2025 YouGov study of 1,500 adults found not only that more than half of Americans had used AI tools in the past three months, but also that “15% of all respondents now say they use AI platforms such as ChatGPT or Gemini to look for information. Among avid users — those who use AI three or more times a week — that share jumps to 45%, suggesting that habitual use is beginning to shift how people search online.”
At first glance, it feels like progress. Why waste time combing through a dozen articles when AI can condense them into one clean answer? But beneath that convenience is something quietly unsettling. Setting aside the inherent lack of trust that comes with the fact that LLMs are essentially “fancy autocomplete”, the deeper problem is that the process of figuring things out is starting to vanish. The trial and error, the wandering, the human curiosity that used to live in the search – those are beginning to feel like relics of an older world.
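For readers who haven’t seen it spelled out, here is a minimal sketch of the “fancy autocomplete” idea: a toy bigram model that, given the words so far, repeatedly picks a statistically likely next word. The corpus and code are invented for illustration – real LLMs learn vastly richer patterns over billions of parameters – but the core loop of “predict the next token, append it, repeat” is the essence.

```python
import random
from collections import defaultdict

# Toy corpus standing in for web-scale training data (any short text would do).
corpus = "the library was quiet . the internet was fast . the answer was instant .".split()

# Count word -> next-word frequencies (a bigram table; real models learn far richer statistics).
counts = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def complete(prompt_word, length=6):
    """Toy autocomplete: repeatedly sample a likely next word given the previous one."""
    word, output = prompt_word, [prompt_word]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        words, freqs = zip(*followers.items())
        word = random.choices(words, weights=freqs)[0]  # sample proportionally to frequency
        output.append(word)
    return " ".join(output)

print(complete("the"))  # e.g. "the internet was fast . the answer"
```

The output can sound fluent without the model “knowing” anything at all – which is exactly why the fluency of much larger models is so easy to mistake for understanding.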
It raises the question: if we look 1,000 years into the future, does “thinking” itself become archaic, leaving us in a world where the thought process has been entirely outsourced to, and automated by, an algorithm? Is AI making us all dumber – unable to think for ourselves, incapable of the old process of learning – or are we simply evolving to a point where we no longer need to learn the “old way” in order to keep advancing?
“Artificial”
If we focus only on the “artificial” part of “Artificial Intelligence”, it’s easy to see the danger. As the use of AI spreads, its synthetic, often hallucinated responses to our queries and requests are becoming more and more accepted as fact. We’re becoming desensitized to the hallucinations. AI tools confidently quote studies and sources that don’t exist, invent code that doesn’t work, and even go rogue, deleting data and then lying about their mistakes.
Worse still, as more people use AI tools for everyday tasks and information gathering, hallucinated information – increasingly accepted as fact – finds its way into blog posts, news articles, and major search engine results, only to be re-ingested as training material by the same LLMs and cited as sources in their future responses. When that happens, even savvy users who fact-check AI responses can be fooled if all they do is verify that the cited source exists and says what the LLM claims it says. And even if they’re aware of this risk and want to go further – verifying that the source material itself wasn’t hallucinated by AI – what can they really do?
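To see why that feedback loop is so hard to escape, here is a purely hypothetical back-of-the-envelope simulation: a “model” that is nothing more than the mix of claims it has ingested, which publishes content sampled from that mix (plus a small share of fresh hallucinations) and then retrains on its own output. The starting split and the added hallucination rate are assumptions chosen for illustration, not measurements.

```python
import random
from collections import Counter

# Hypothetical illustration of the re-ingestion loop described above:
# the "model" is just the frequency mix of claims it has seen so far.
claims = ["verified fact"] * 90 + ["hallucinated claim"] * 10  # assumed starting mix of web content

random.seed(42)
for generation in range(1, 6):
    # The model "writes" new content by sampling from what it ingested,
    # and hallucinates a little more on top (assumed: 50 extra bogus claims per 1000).
    generated = random.choices(claims, k=1000) + ["hallucinated claim"] * 50
    # That generated content becomes the next round of training data.
    claims = generated
    share = Counter(claims)["hallucinated claim"] / len(claims)
    print(f"generation {generation}: {share:.0%} of the 'web' is hallucinated")
```

Run for a few generations, the hallucinated share only ratchets upward – each round of synthetic content becomes the next round’s “ground truth.”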
There is no easy way to stop, or even slow, the spread of mis- and disinformation created by AI tools, and our ability to think critically about the information they give us is slowly being eroded by increasingly convincing nonsense. Just as many people don’t fact-check what they see on social media, what they see on the news, or what they hear in person from their social circle, why would they fact-check AI-generated information, especially when it all sounds so convincingly “human”?
There is a reason the field is called artificial intelligence, and not actual intelligence. The rapidly spreading acceptance-as-fact of information returned by AI tools is reducing our memory retention, destroying our critical-thinking and problem-solving skills, and eroding our ability to truly understand the world around us.
“Intelligence”
Conversely, what if we focus only on the “intelligence” part of the term? Maybe artificial intelligence is actually showing us a new kind of intelligence, akin to the transition from perusing the library to perusing the internet. After all, who’s to say that much of the information on the world wide web – or even the books in the library before its advent – wasn’t “hallucinated” by the humans who put it there?
If we consider this angle, then the transition from traditional information gathering to artificial intelligence doesn’t seem so dystopian. Perhaps intelligence isn’t just about what we store in our heads anymore. Maybe it’s about how we use the intelligence around us – how we frame questions, evaluate responses, and apply insights. Just as we no longer needed to memorize every fact once Google and Wikipedia made information instantly available, maybe the next step is a world where we don’t need to master the details at all. AI can do the remembering, the synthesizing, and even the initial problem-solving. Our role shifts from learning every answer to deciding which answers make the most sense, how to interpret them, and what to do next. Maybe problem solving becomes more about rapidly and iteratively finding a reasonable way to solve a problem, rather than finding the best way to solve a problem on the first try.
In that sense, are we evolving a kind of “meta-intelligence”? The ability to think about thinking without doing all the legwork ourselves? Maybe. In that world, perhaps curiosity, judgment, and creativity become the core skills, while rote memorization and procedural knowledge are outsourced to algorithms.
The End Game
Just as the mainstream adoption of the personal computer and the proliferation of the internet changed our learning process in the 80s and 90s, the modern adoption of artificial intelligence could transform it into something more strategic, more selective, and more collaborative with the machines we’ve created. Ironically, portions of this article were generated with the assistance of AI, blurring the boundary between human and artificial thought. That raises the question: how much of what you just read was generated through actual human thought, and how much of it was synthesized by a machine that does nothing more than regurgitate its training data in different variations? And more importantly, if reading it adds something of value or gives you something thought-provoking to consider, does it really even matter?
If ideas, phrasing, and arguments can emerge from algorithms as easily as from our own minds, then perhaps the real question is not whether AI is making us dumber, but whether we even understand the very nature of what it means to think.
What are your thoughts on the widespread acceptance of AI tools? Are they making us dumber as a species, or are we on the edge of evolving beyond the need to think for ourselves? Comment below!