
As AI adoption grows, many companies are taking a slower, more controlled approach to autonomous systems. Rather than deploying systems that act on their own, they are focusing on tools that assist human decision-making while keeping tight control over outputs.
This approach is especially clear in sectors where errors carry real financial or legal risk. The question is not just what AI can do, but how its behaviour can be managed, checked, and trusted.
One example comes from S&P Global Market Intelligence, which builds AI tools into its Capital IQ Pro platform. The system is used by analysts to review company filings, earnings calls, and market data. Its AI features are designed to stay grounded in source material.
According to S&P Global Market Intelligence, its AI tools extract insights from structured and unstructured data, such as transcripts and reports, while remaining grounded in verified source data.
AI adoption moves ahead of autonomy
The current wave of AI tools in business is often described as a step toward autonomous agents. These systems may eventually plan tasks, make decisions, and act without direct human input. But most companies are not there yet.
AI adoption is already widespread, with a majority of organisations using AI in at least one part of their business, according to research from McKinsey & Company. At the same time, many organisations have yet to scale AI across the enterprise, showing a disconnect between initial use and broader deployment.
For now, AI typically assists with tasks such as summarising documents or answering queries rather than acting independently.
S&P Global Market Intelligence’s tools enable users to query large datasets through a chat interface, but the results are tied to verified financial content. In many cases, users can refer back to underlying documents, lowering the risk of errors or unsupported outputs.
In its research, the company outlines AI governance as a process in which systems are designed, deployed, and monitored, with attention to fairness, transparency, and accountability.
AI adoption in high-risk sectors
In finance, small errors can have large consequences. That shapes how AI is built and used.
Tools like Capital IQ Pro are designed to support analysts rather than replace them. The system may help surface insights or highlight trends, but final decisions still rest with human users.
The gap between adoption and value is also becoming clearer: many organisations report that AI deployment has yet to translate into measurable business outcomes, according to findings from McKinsey & Company.
While autonomous systems may be able to handle certain tasks, companies often need clear accountability. When decisions affect investments, compliance, or reporting, there must be a way to explain how those decisions were made.
Research from S&P Global notes that organisations are increasingly focused on building governance frameworks to manage AI risks, including data quality issues and model bias.
A step toward future systems
The gap between today’s controlled AI tools and future autonomous systems remains wide.
Interest in more autonomous and agent-driven systems is also growing, even as most organisations remain in early stages of deployment. Systems that can explain their outputs, show their sources, and operate within defined limits are more likely to be trusted.
Autonomous agents may one day handle tasks such as financial analysis, customer support, or supply chain planning with minimal input. But without clear control mechanisms, their use will remain limited.
These themes will feature at AI & Big Data Expo North America 2026 on May 18–19. S&P Global Market Intelligence is listed as a bronze sponsor of the event. The agenda features topics such as AI governance, ethics, and the use of AI in regulated industries.
Balancing capability and control
The push toward autonomous AI is unlikely to slow down. Advances in large language models and agent-based systems continue to expand what AI can do.
At the same time, enterprise users are asking a different question: how to keep those systems under control. S&P Global Market Intelligence’s approach reflects that concern. By keeping AI grounded in verified data and placing humans at the centre of decision-making, it prioritises trust over autonomy.
As systems grow more capable, the ability to govern and control them could become just as important as the tasks they perform.
(Photo by Hitesh Choudhary)
See also: Why companies like Apple are building AI agents with limits
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

