A new IEEE Spectrum analysis explores how artificial intelligence is beginning to participate in its own improvement process, bringing the long-discussed idea of recursive self-improvement (RSI) closer to reality. RSI describes systems that improve not only their own outputs but also the methods used to build and improve future AI systems. Fully autonomous self-improving AI does not yet exist, but researchers say many elements of the loop are already being automated.
Modern AI systems are increasingly being used to write code, optimize algorithms, and assist in developing future AI models. Companies such as OpenAI and Anthropic have reported that their models already contribute heavily to software development and internal AI research workflows. Projects such as Google DeepMind’s AlphaEvolve and the Darwin Gödel Machine apply AI-driven optimization and evolutionary techniques that let models iteratively improve their own solutions.
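To make that loop concrete, below is a minimal, hypothetical sketch of the kind of evolutionary search such projects build on: candidate solutions are repeatedly mutated, scored by an automated evaluator, and selected. The numeric candidates, fitness function, and mutation operator here are illustrative stand-ins rather than details from the article or from any of these systems; in something like AlphaEvolve, the candidates would be code proposed by a language model and the evaluator an automated benchmark.

```python
import random

# Toy evolutionary loop. In a real AI-driven system, "candidate" would be a
# program proposed by a language model and "fitness" an automated benchmark
# score; here both are reduced to simple numbers purely for illustration.

TARGET = [3.0, -2.0, 0.5]      # values the search should recover (hypothetical)
POPULATION_SIZE = 30
GENERATIONS = 200
MUTATION_SCALE = 0.1


def fitness(candidate):
    """Higher is better: negative squared error against the target values."""
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))


def mutate(candidate):
    """Propose a variant; a real system would ask a model to rewrite code instead."""
    return [c + random.gauss(0, MUTATION_SCALE) for c in candidate]


def evolve():
    # Start from random candidates.
    population = [[random.uniform(-5, 5) for _ in TARGET]
                  for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        # Score every candidate with the automated evaluator.
        ranked = sorted(population, key=fitness, reverse=True)
        # Keep the best half, refill the rest with mutated copies of survivors.
        survivors = ranked[: POPULATION_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POPULATION_SIZE - len(survivors))]
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("best candidate:", [round(c, 3) for c in best])
    print("fitness:", round(fitness(best), 6))
```

The design point the sketch illustrates is that the loop itself is mechanical: the intelligence enters through who proposes the mutations and who judges the results, which is exactly where AI models are now being slotted in.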
Despite the progress, experts emphasize that humans remain deeply involved in the process. Researchers still define goals, evaluate results, supervise experiments, and apply safety controls. Many scientists argue that what exists today is closer to “AI-assisted AI development” than to true autonomous self-improvement. Critics also point out that large AI systems remain expensive, difficult to manage, and constrained by real-world infrastructure such as chips, energy, and data centers.
The growing capability of self-improving systems has also intensified debates about AI safety and the possibility of an “intelligence explosion.” Some researchers fear that increasingly autonomous systems could become difficult to control or monitor, while others argue the concerns are overstated and the risks manageable with proper oversight. The article ultimately suggests that the future may involve not a fully independent superintelligence, but a collaborative model in which humans and AI systems continuously co-improve together.