I can’t help but think about the recent achievements of Elon Musk’s companies.
In the same week that he presented his new autonomous vehicles and a humanoid robot, SpaceX made impressive advances in its mission to take humanity to Mars, catching a rocket falling back from space between two “chopsticks” with astonishing precision. Although I don’t find the man himself particularly admirable, there’s no denying that Elon Musk is shaping the future.
“What did you get done this week?”
“Elon Musk: hold my beer” pic.twitter.com/gMsY37aTuG
— Tesla Hype (@TeslaHype), October 13, 2024
In recent years, we’ve witnessed another revolution (in which Elon Musk is also involved): generative models in all their forms (LLMs, vision, audio, diffusion, etc.), commonly referred to simply as AI.
However, humanity is placing too much hope in these tools: society’s expectations far exceed what these systems can actually do today. In a previous article, You shouldn’t use AI for programming, I already discussed the relationship between AI and software development and the quality risks it poses. The same phenomenon extends to many other fields.
A clear example is online content creation: blogs, news media, and similar platforms. AI-generated content is flooding us from every direction. Images generated by systems like Copilot or DALL-E are omnipresent in many media and are often of low quality, which reflects how little value some creators place on these resources. In my opinion, this devalues the content, the image, and the brand. It would be more honest to limit their use.
Beyond the fact that these systems are not yet mature enough for the uses we are trying to give them, there is an essential and concerning aspect: we are deliberately removing human involvement from many projects, regardless of the outcome. I’m not talking about the economic aspect or the implications of a society with less employment. I’m talking about human capabilities. We are not enhancing people’s skills; we are pushing them to do less and to achieve worse results. We are leading humanity down a path of less effort, dedication, discipline, reflection, and imagination. Do you really believe it’s possible to reach Mars while diminishing these human capabilities? No future advance will be possible with a humanity held back by a lack of ambition in a mediocre society.
This is a vicious circle: one of the pillars of the current generative paradigm is that these systems feed on existing content to learn what is expected of a task. Yet we are not producing higher-quality content; on the contrary, we are making it worse.
The future of AI is uncertain, with no guarantees; we don’t really know where we stand or how far we’ll be able to go without new breakthroughs in the field. Significant advances beyond the current systems could take months or decades.
Welcome. The age of mediocrity begins here.