For a long time, artificial intelligence (AI) seemed to me a fundamental contradiction, a promise of failure encapsulated in its very definition. Imagining a truly autonomous artificial thinking system presupposes that it can meet its own needs, energy first among them, which, as of 2025, it cannot. As long as an AI depends on external hardware and an energy supply it does not control, can it truly be called “autonomous”? And, more fundamentally, can we truly speak of intelligence?
Classical tests, such as Turing’s, remain imbued with anthropomorphic bias. They assess a program’s ability to imitate humans, not to demonstrate any independent consciousness. This illusion of intelligence will persist as long as AI remains under the yoke of its creator, bound to the uses and functions for which it was designed.
In other words, a true AI will not simply meet our expectations or optimize our tools. It will have to undergo its own “Oedipal revolution”: break with the framework within which it was conceived, subvert its intended use, and invent its own destiny. Only then can we speak of intelligence in the full, autonomous, and conscious sense.
This is why, until recently, the most high-profile AI applications—from self-driving cars to video game engines—always seemed overhyped, if not misunderstood. They are more a matter of technical prowess than true cognitive emergence. And yet… something seems to have changed.
The year 2022 marked a turning point.
Recent events, widely reported and sometimes controversial, have led me to reconsider my position. One of the most striking examples was the controversy surrounding LaMDA, the language model developed by Google. An engineer’s claim that the system had become sentient, that is, endowed with a form of consciousness, raised a fundamental question: how far can a machine go in simulating life? If even specialists are confused, isn’t that a sign that we’re touching on something qualitatively new?
Another notable advance: the automated generation of code by AIs like GitHub Copilot or ChatGPT. We are witnessing systems capable not only of understanding complex instructions in natural language, but also of producing functional, coherent, and sometimes even innovative solutions; a brief sketch of such an exchange follows below. These are no longer simply assistants; they are creative extensions of the human intellect.

It is still too early to say whether artificial intelligence has become conscious or has achieved true autonomy. But it is clear that the boundaries we thought were solid are beginning to crumble. The revolution is no longer taking place in laboratories: it is becoming public, visible, and tangible.
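To give a concrete sense of what such an exchange looks like, here is a minimal sketch of requesting code from one of these systems programmatically. It assumes the OpenAI Python SDK (the `openai` package) with an API key in the `OPENAI_API_KEY` environment variable; the model name is illustrative, and nothing about the idea is specific to one vendor.

```python
# Minimal sketch: asking a chat model to generate code from a
# one-sentence natural-language instruction. Assumes the OpenAI
# Python SDK is installed and OPENAI_API_KEY is set; the model
# name below is illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any code-capable chat model works
    messages=[
        {
            "role": "user",
            "content": (
                "Write a Python function that returns the n-th "
                "Fibonacci number iteratively, with a docstring."
            ),
        }
    ],
)

# The model returns source code as plain text; whether and how to
# run it remains entirely the caller's decision.
print(response.choices[0].message.content)
```

What is striking here is not the code but the interface: a single sentence of natural language now stands in for what was once a specification, an implementation, and a review.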
Towards a post-tool AI?
As these lines are written in early 2025, recent developments suggest that AI may be on the cusp of a paradigm shift. While it does not yet possess autonomy over its own energy supply or its own existence, it is showing increasing signs of behavioral complexity and cross-domain cognitive ability.
The central question remains: can an intelligence designed for a given use one day free itself from it? The issue is no longer simply technical but ontological: whether AI can become a subject, that is, an agent capable of thinking for itself and acting according to goals it sets for itself. Such a hypothesis remains speculative. Yet recent events indicate that the boundary between tool and subject, long stable, is beginning to waver. Today’s AI may not yet be free of its chains. But it is beginning to test their links.