Anthropic has released an analysis of millions of interactions with its coding agent, Claude Code, revealing how agent autonomy evolves in practical settings. The study shows that autonomous task durations nearly doubled over three months, and that experienced users both grant agents more autonomy and monitor them more actively. Claude Code often self-limits its autonomy by asking for clarification, and while most agent actions are low-risk, some occur in sensitive domains such as healthcare and finance. The report stresses the importance of post-deployment monitoring and of designing for effective human oversight, and urges policymakers to focus on enabling intervention rather than imposing rigid controls.

Source: Anthropic