Does AI Think and Understand?
I recently read James Somers’ essay “The Case That A.I. Is Thinking” in The New Yorker, and I must say, it was one of the most thought-provoking pieces I’ve come across lately. It tackles a topic I’ve been reflecting on for years: the relationship between artificial intelligence, thinking, and consciousness.
1. Consciousness
Let’s start with consciousness. It’s probably one of the most elusive concepts in science and philosophy. For all our advances in neuroscience and cognitive psychology, consciousness remains a black box. We can describe it, measure parts of it, and even induce altered states of it, but we still can’t explain why subjective awareness exists.
Broadly speaking, consciousness is a form of awareness, both of the external world and of our inner states: our emotions, sensations, and selfhood. AI, no matter how sophisticated, doesn’t have that kind of self-reflective awareness. It processes information, but it doesn’t “feel” or “experience” it.
That said, many AI visionaries believe it’s only a matter of time before some form of artificial consciousness emerges, perhaps not human-like, but something parallel. If artificial general intelligence or superintelligence ever arrives, it might develop a sense of internal representation that resembles awareness. But until then, consciousness remains firmly on the human side of the equation.
2. Thinking
Now, thinking is another fascinating topic. Like consciousness, it’s abstract, but unlike consciousness, we understand it far better. Thinking involves reasoning, problem-solving, memory retrieval, and creativity: a whole network of cognitive processes working in concert.
But does ChatGPT think the way we do? Probably not. As Somers notes in his essay, human thinking carries a sense of inner life, what he calls a “Joycean inner monologue or the flow of sense memories in a Proustian daydream.” There’s a texture and depth to human thought that AI lacks.
However, AI does reason, and reasoning, in its simplest sense, means ‘working through a problem step by step’. This kind of logical sequencing is a form of higher-order thinking, even if it isn’t conscious. So, while AI doesn’t think in a biological sense, it certainly engages in what we might call synthetic cognition.
3. Understanding
Then comes understanding, the third piece of this triad. For us humans, consciousness, thinking, and understanding are all connected. They blend together in a holistic way that machines still can’t emulate. AI systems can approximate parts of these processes, but the integration, the “wholeheartedness” of cognition as Dewey (1933) calls it, is missing.
In his book What Is Thought?, Eric Baum (cited in Somers, 2025) defines understanding as compression, and that’s an intriguing idea. As Somers explains, AI models “compress experience just like real neural networks do.” They take massive amounts of data and distill them into compact, predictive structures.
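The compression idea is easy to see in miniature. Here is a toy illustration (my own, not from the essay) using Python’s standard zlib module: text with discoverable patterns compresses far better than random noise, because the compressor, in a crude sense, has “found” the structure.

```python
import random
import string
import zlib

# Patterned text: the same sentence repeated many times.
patterned = ("the cat sat on the mat. " * 100).encode()

# Noise of the same length: letters and spaces chosen at random.
alphabet = string.ascii_lowercase + " "
noise = "".join(random.choice(alphabet) for _ in range(len(patterned))).encode()

# The compressor captures the repeated pattern once; the noise offers
# no structure to exploit, so it stays close to its original size.
print(len(zlib.compress(patterned)))  # small
print(len(zlib.compress(noise)))      # much larger
```

A gzip-style compressor is obviously not a mind, but the analogy is the point: distilling a mass of data into a compact structure that can regenerate (or predict) it is one concrete reading of what Baum means by understanding.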
However, critics like Ted Chiang and Emily Bender dismiss this as mere imitation, what Bender famously called “stochastic parroting,” a process of remixing and regurgitating the data fed into the system.
And they have a point. AI’s knowledge is constrained by its training data and context window. When it can’t find an answer, it sometimes fabricates one, a phenomenon known as hallucination (a term borrowed, with all due respect, from clinical jargon).
Yet, despite these limitations, today’s AI systems show remarkable reasoning abilities. Somers shares a simple but striking example:
On a scorching summer day, his friend Max found himself at a playground where the kids’ sprinkler wasn’t working. Faced with a maze of old pipes and valves, he snapped a photo and asked ChatGPT-4o for help. The model analyzed the image and told him he was looking at a backflow-preventer system used in irrigation. It even pointed out a yellow valve that likely controlled the water flow. Max turned it and water burst forth to the cheers of the kids.
Was that understanding? Or was it an illusion of understanding so convincing we can’t tell the difference anymore?
Neuroscientists like Doris Tsao, Jonathan Cohen, and Douglas Hofstadter (cited in Somers, 2025) argue that these models might actually reveal something profound about how human intelligence works. They suggest that AI, in its pattern recognition and associative reasoning, mirrors aspects of the brain’s own neural processes. In that sense, studying AI might teach us as much about ourselves as it does about machines.
And yet, Somers ends his essay with a caution I share. We may be entering an era where understanding thinking itself could be both our greatest scientific triumph and our most dangerous discovery. If we truly unravel how intelligence works (biological or artificial), we might unlock something we can’t fully control or contain.
For now, I’m content standing on this edge of curiosity and wonder. AI may simulate thinking impressively, but the mystery of consciousness and the beauty of being aware still belong to us.
References
Dewey, J. (1933). How we think. D.C. Heath and Company.
Huckins, G. (2023, October 16). Minds of machines: The great AI consciousness conundrum. MIT Technology Review. https://www.technologyreview.com/2023/10/16/1081149/ai-consciousness-conundrum/
Larson, E. J. (2024, November 9). Generative AI was supposed to collapse… so, why hasn’t it? Colligo.
Somers, J. (2025, November 10). The case that A.I. is thinking. The New Yorker. https://www.newyorker.com/magazine/2025/11/10/the-case-that-ai-is-thinking