AI agents are gaining traction, but their use remains heavily concentrated in software development, according to a recent analysis of real-world interactions with agentic tools. The research finds that nearly half of all agent-based activity today involves writing, running, and testing code, with tools such as Claude Code and other agents built on public APIs logging the longest and most frequent autonomous sessions in engineering contexts.
Users are increasingly letting AI agents run autonomously for longer stretches to handle complex coding tasks, such as debugging and generating program components. Over a span of a few months, the time agents operated independently nearly doubled, suggesting deepening integration into engineering workflows, especially among software teams that rely on machine-assisted coding for productivity gains.
By contrast, agent use outside of programming — in fields like healthcare, finance, customer support, and cybersecurity — is still limited and largely experimental. Where agents are used beyond code, they tend to be assigned low-risk, easily reversible tasks rather than deep, mission-critical workflows. Analysts and developers alike see this as a sign that agentic AI is still in an early adoption stage outside its core niche.
Experts say broader adoption will require better monitoring, governance, and control mechanisms before agents can be trusted with sensitive or high-stakes work outside tech teams. The current concentration in programming reflects both the relative safety of software tasks, where results can be cheaply tested and corrected, and the practical focus of early agent development on domains where data and error feedback loops are well understood.