Skeptics Shall Inherit the AI: Why Those Who Doubt AI Can Make the Most of It
First post of the year: a new reflection on the impact of AI on human progress. This is a topic I think about often. AI has made its way into every field and is being used indiscriminately.
I spend a good part of each day at the park and try to chat with other parents to pass the time. I also take these opportunities to bring up the topic of AI with people who are less tech-savvy (they probably think of me as the AI guy).
The responses always surprise me: regardless of profession, everyone has found a way AI makes their work easier. There’s an obvious positive takeaway here, but I won’t dwell on it because it doesn’t seem particularly interesting; we’re already familiar with it.
What interests me is understanding the risks, probably due to my naturally pessimistic outlook.
Here’s an example: a professor using LLMs for text analysis. They upload a PDF of a publication, ask the LLM to summarize it, and, if needed, request further insights about the content. How much is being lost by following this process? Maybe text analysis is one of those tasks where LLMs excel, and little to nothing is lost. But if we extend this approach to other areas of professional life, how many mistakes are being made by delegating certain tasks to AI? Mistakes that might go unnoticed at first, leading to incorrect conclusions or even introducing bugs in the world of software development.
The real risk lies in the trust we place in AI. The greater the trust, the less we review. The less we review, the more errors slip through. That’s why I believe those who will truly benefit from these new tools are the ones who remain the most skeptical.
Let’s take software development as an example. As I explored in another post, “You shouldn’t use AI for programming”, blindly following AI-generated solutions doesn’t just impact the present state of a project; it also shapes long-term habits that affect both the project’s future and the developer’s growth.
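To make this concrete, here is a hypothetical snippet of the kind an assistant might produce (not taken from any real tool's output): it reads plausibly and passes a quick glance, but hides a classic subtle bug, a mutable default argument that leaks state between calls.

```python
def add_tag(item, tags=[]):
    """Append an item to a list of tags and return the list.

    Subtle bug: the default list is created ONCE, when the function
    is defined, so every call without an explicit `tags` argument
    shares and mutates the same list.
    """
    tags.append(item)
    return tags

first = add_tag("draft")
second = add_tag("review")
# second is ["draft", "review"], not ["review"]: state leaked
# from the first call. A skeptical reviewer catches this; a
# trusting one ships it.
```

The idiomatic fix is `tags=None` with `tags = [] if tags is None else tags` inside the function, exactly the kind of detail a careful reader questions and a blind copy-paste misses.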
Those who remain skeptical will rely less on AI tools, and when they do use them, they will analyze the proposed solutions more carefully. This approach helps prevent errors and enhances professional development. The tool becomes a means to understand different ways of solving a problem, rather than a crutch. The professional grows, and the tool assists.
Everyone else, in any field, will be replaced. I’m not a fan of predictions, but I truly believe that those who should fear these new tools the most are the ones using them indiscriminately. What unique value do they bring to the process that couldn’t be replaced by someone else? Someone who questions AI a little more, perhaps.
This replacement isn’t just about professionals. I’ve seen companies embrace technology without asking the right questions, without the deep reflection needed to understand these risks and how to manage them. A day will come when AI no longer makes mistakes, but until then, the real value still comes from humans.