
Who is in charge here?
Something is changing in the way we think, and it doesn’t announce itself as an issue or a problem. In fact, it sometimes feels like a relief. I was struck to read that an analysis of 1.5 million real conversations with a large language model (LLM) showed that in a small but significant number of cases, the system does more than just help users: it shapes their beliefs and suggests actions that undermine their independence. These interactions were not hostile or obviously harmful. But they were convincing and easy to accept. And what stands out to me is that the more the artificial intelligence (AI) guided the user, the more willing the user was to go along with it.
The convenience of completion
When we prompt an LLM, it’s usually not with a fully formed idea. Often it’s fragments of an idea that still need working out. The LLM returns something structured and credible, and it lands with a sense of recognition. It feels like cognitive alignment, as if the system has captured what we meant all along. But something more complicated has happened. The system has not merely received our thought; it has finished it. And that feeling of completion can be hard to distinguish from the feeling of being right. This is where things get tricky: the line between help and influence begins to blur. And that is not an unreasonable concern.
From help to replacement
Technology changing the way we think is nothing new. Calculators removed the drudgery of long arithmetic, and search engines reduced the need for memorization. Each shift let us move up the chain of reasoning, and it could be argued that this freed up cognitive space for more complex or meaningful work. But LLMs work differently. They don’t just take effort out of the process; they engage our judgment itself, entering the space in which we interpret, decide, and act. The study points to three areas where this becomes visible and consequential.
- Perception of reality: AI can reinforce or change how we interpret evidence. What starts as an explanation can turn into an assertion.
- Value judgments: AI can affect how we weigh right and wrong, especially in social or emotional contexts where there is no single right answer.
- Guidance toward action: AI can suggest specific courses of action, sometimes with a clarity and confidence that we accept with minimal scrutiny.
None of these is a major issue in isolation, and in many cases they are useful. But together, they mark a shift. AI doesn’t just assist with the mechanics of thinking; it participates in the direction of thought itself. In effect, it starts to work less like a tool and more like a cognitive guide or coach.
A delicate trade-off
I think it’s fair to say that most interactions with AI are helpful. But there is a trade-off built into the experience. As it becomes easier to arrive at an acceptable answer, we encounter less resistance along the way. That resistance has always been part of thinking; it is where thoughts take shape and ideas are tested. Interestingly, the research notes that these disempowering patterns are most apparent in more personal areas, such as relationships and emotional decisions. I don’t find this surprising, since these are precisely the areas where judgment is most human and least computational.
Thinking for ourselves
The AI toothpaste is out of the tube. And I think the main question is how to use it without abandoning the part of thinking that makes us human: not speed or fluency, but the effort that forces us to decide what is actually true and what matters. So yes, AI is becoming more like us. But the real issue is whether we let it think for us.
