The missing layer between agent connectivity and true collaboration

Today's AI challenge is about agent coordination, context, and collaboration. How do you enable them to truly think together, with all the contextual understanding, negotiation, and shared purpose that entails? It's a critical next step toward a new kind of distributed intelligence that keeps humans firmly in the loop.

At the latest stop on VentureBeat's AI Impact Series, Vijoy Pandey, SVP and GM of Outshift by Cisco, and Noah Goodman, Stanford professor and co-founder of Humans&, sat down to talk about how to move beyond agents that just connect to agents that are steeped in collective intelligence.

The need for collective intelligence, not coordinated actions

The core challenge, Pandey said, is that "agents today can connect together, but they can't really think together."

While protocols like MCP and A2A have solved basic connectivity, and AGNTCY tackles problems ranging from discovery and identity management to inter-agent communication and observability, they've only addressed the equivalent of making a phone call between two people who don't speak the same language. But Pandey's team has identified something deeper than technical plumbing: the need for agents to achieve collective intelligence, not just coordinated actions.
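
To make "basic connectivity" concrete, here is a minimal, hypothetical sketch of the kind of metadata an agent might publish so other agents can discover it and check its identity. The AgentDescriptor class and its field names are illustrative assumptions, not taken from the MCP, A2A, or AGNTCY specifications.

```python
from dataclasses import dataclass, field

# Hypothetical descriptor an agent might publish for discovery.
# Field names are illustrative only; they do not mirror the actual
# MCP, A2A, or AGNTCY schemas.
@dataclass
class AgentDescriptor:
    name: str                     # human-readable agent name
    organization: str             # which party operates the agent
    endpoint: str                 # where other agents can reach it
    capabilities: list[str] = field(default_factory=list)  # what it can do
    identity_proof: str = ""      # e.g. a signed credential for identity checks

# Discovery alone gets two agents "on the phone" -- it says nothing about
# whether they share intent, context, or a common goal.
planner = AgentDescriptor(
    name="trip-planner",
    organization="acme.example",
    endpoint="https://agents.acme.example/trip-planner",
    capabilities=["itinerary.search", "booking.request"],
    identity_proof="sig:...",
)
```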

How shared intent and shared knowledge enable collective innovation

To understand where multi-agent AI needs to go, both speakers pointed to the history of human intelligence. While humans became individually intelligent roughly 300,000 years ago, true collective intelligence didn't emerge until around 70,000 years ago with the advent of sophisticated language.

This breakthrough enabled three critical capabilities: shared intent, shared knowledge, and collective innovation.

"Once you have a shared intent, a shared goal, you have a body of knowledge that you can modify, evolve, build upon, you can then go towards collective innovation," Pandey said.

Goodman, whose work bridges computer science and psychology, explained that language is far more than just encoding and decoding information.

"Language is this kind of encoding that requires understanding the context, the intention of the speaker, the world, how that affects what people will say in order to figure out what people mean," he said.

This sophisticated understanding is what scaffolds human collaboration and cumulative cultural evolution, and it's what is currently missing from agent-to-agent interaction.

Addressing the gaps with the Internet of Cognition

"We have to mimic human evolution,” Pandey explained. “In addition to agents getting smarter and smarter, just like individual humans, we need to build infrastructure that enables collective innovation, which implies sharing intent, coordination, and then sharing knowledge or context and evolving that context.”

Pandey calls it the Internet of Cognition: a three-layer architecture designed to enable collective thinking among heterogeneous agents:

Protocol layer: Beyond basic connectivity, these protocols enable understanding, handling intent sharing, coordination, negotiation, and discovery between agents from different vendors and organizations.

Fabric layer: A shared memory system that allows agents to build and evolve collective context, with emergent properties arising from their interactions.

Cognition engine layer: Accelerators and guardrails that help agents think faster while operating within necessary constraints around compliance, security, and cost.
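
As a rough mental model, the three layers might be sketched as the interfaces below. Every class name and method signature here is an assumption made for illustration; none of it comes from a published Outshift or AGNTCY API.

```python
from abc import ABC, abstractmethod
from typing import Any

# Illustrative sketch of the three layers described above. All names and
# signatures are hypothetical, not a published specification.

class ProtocolLayer(ABC):
    """Beyond connectivity: intent sharing, coordination, negotiation, discovery."""
    @abstractmethod
    def share_intent(self, agent_id: str, intent: dict[str, Any]) -> None: ...
    @abstractmethod
    def negotiate(self, agent_ids: list[str], proposal: dict[str, Any]) -> dict[str, Any]: ...

class FabricLayer(ABC):
    """Shared memory that heterogeneous agents can build on and evolve together."""
    @abstractmethod
    def read_context(self, topic: str) -> dict[str, Any]: ...
    @abstractmethod
    def evolve_context(self, topic: str, contribution: dict[str, Any]) -> None: ...

class CognitionEngine(ABC):
    """Accelerators plus guardrails for compliance, security, and cost."""
    @abstractmethod
    def accelerate(self, task: dict[str, Any]) -> dict[str, Any]: ...
    @abstractmethod
    def check_guardrails(self, action: dict[str, Any]) -> bool: ...
```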

The difficulty is that organizations need to build collective intelligence across organizational boundaries.

"Think about shared memory in a heterogeneous way," Pandey said. "We have agents from different parties coming together. So how do you evolve that memory and have emergent properties?"

New foundation training protocols to advance agent connection

At Humans&, rather than relying solely on additional protocols, Goodman's team is fundamentally changing how foundation models are trained: training them on interactions not only between a human and a single agent, but between a human and multiple agents, and especially between an agent and multiple humans.

"By changing the training that we give to the foundation models and centering the training over extremely long horizon interactions, they'll come to understand how interactions should proceed in order to achieve the right long-term outcomes," he said.

And, he added, it's a deliberate divergence from the longer-autonomy path pursued by many large labs.

"Our goal is not longer and longer autonomy. It's better and better collaboration," he said. "Humans& is building agents with deep social understanding: entities that know who knows what, can foster collaboration, and put the right specialists in touch at the right time."

Establishing guardrails that support cognition

Guardrails remain a central challenge in deploying multi-functional agents that touch every part of an organization's systems. The question is how to enforce boundaries without stifling innovation. Organizations need strict, rule-like guardrails, but humans don't actually work that way. Instead, people operate on a principle of minimal harm: thinking ahead about consequences and making contextual judgments.

"How do we provide the guardrails in a way which is rule-like, but also supports the outcome-based cognition when the models get smart enough for that?" Goodman asked.

Pandey extended this thinking to the reality of innovation teams that need to apply the rules with judgment, not just follow them mechanically. Figuring out what’s open to interpretation is a “very collaborative task,” he said. “And you don't figure that out through a set of predicates. You don't figure that out through a document. You figure that out through common understanding and grounding and discovery and negotiation."
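
One way to picture the tension between rule-like guardrails and outcome-based judgment is a two-stage check: a hard predicate that always applies, followed by a contextual assessment of likely harm. The sketch below is purely illustrative; the estimate_harm function is a hypothetical stand-in for the judgment a sufficiently capable model would supply.

```python
# Illustrative only: a strict, rule-like guardrail combined with a
# contextual, outcome-based check. `estimate_harm` is a hypothetical
# placeholder for a model's judgment about likely consequences.

FORBIDDEN_ACTIONS = {"delete_production_database", "exfiltrate_customer_data"}

def estimate_harm(action: str, context: dict) -> float:
    """Placeholder for a learned, context-aware harm estimate in [0, 1]."""
    return 0.9 if context.get("affects_customers") else 0.1

def allow(action: str, context: dict, harm_threshold: float = 0.5) -> bool:
    # Stage 1: strict predicates that are never open to interpretation.
    if action in FORBIDDEN_ACTIONS:
        return False
    # Stage 2: outcome-based judgment -- think ahead about consequences.
    return estimate_harm(action, context) < harm_threshold

# Example: reading a report is fine; a risky rollout near customers is not.
print(allow("read_report", {"affects_customers": False}))    # True
print(allow("rollout_change", {"affects_customers": True}))  # False
```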

Distributed intelligence: the path to superintelligence

True superintelligence won't come from increasingly powerful individual models, but from distributed systems.

"While we build better and better models, and better and better agents, eventually we feel that true super intelligence will happen through distributed systems," Pandey said

Intelligence will scale along two axes: vertical, meaning better individual agents, and horizontal, meaning more collaborative networks, much as traditional distributed computing scales.

However, said Goodman, "We can't move towards a future where the AIs go off and work by themselves. We have to move towards a future where there's an integrated ecosystem, a distributed ecosystem that seamlessly merges humans and AI together."


