Cisco Live: AI will bring developer workflow closer to the network

At Cisco Live 2025 in San Diego, Cisco CEO Chuck Robbins compared AI’s effect on networking to what the Internet felt like in the 1990s. Back then, the arrival of TCP/IP shattered the boundaries of closed networks, connecting individual computers to a worldwide web of information.

AI is reshaping networking in ways that demand a new degree of programmability, observability, and optimization, Robbins said. And if history is any guide, what starts as an infrastructure concern for network and platform teams will eventually trickle down to developers.

From static routers to programmable platforms

The rise of cloud-native infrastructure made containers and APIs the lingua franca of compute. Now, networking is undergoing a similar transformation. Thomas Graf, CTO of Cisco's Isovalent and creator of Cilium, an open-source, eBPF-based networking, observability, and security platform for cloud-native environments, sees legacy routers and switches becoming programmable platforms in their own right.

“With the launch of the new DPU-enabled Cisco top-of-rack switch… we now have switches that not only perform traditional functions but also bring firewalling and load balancing directly into the switch,” Graf said. “This means replacing legacy firewall boxes with functionality baked into the networking fabric itself.”

Graf describes DPU-enhanced switches as programmable systems, allowing software-defined network services to be delivered from the switch itself rather than in VMs or sidecar containers. Combined with runtime protection tools like Tetragon and other kernel observability frameworks enabled by eBPF, this paves the way for classic network operations—firewalls, segmentation, even observability—to be more flexibly managed in code.

It’s a shift from “ticket ops” to GitOps for networking, Graf said.
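The GitOps pattern Graf describes can be illustrated with a minimal sketch: network intent lives in version control as data, and a CI-style check validates it before anything is pushed to the fabric. The policy schema and field names below are hypothetical, invented for illustration; real systems such as Cilium define their own policy formats.

```python
"""GitOps-style sketch: network segmentation intent as code.

The policy schema here is hypothetical and illustrative only;
production systems (e.g. Cilium network policies) use their own formats.
"""

# Declarative intent, as it might be stored in a Git repository.
POLICY = {
    "name": "payments-segmentation",
    "allow": [
        {"from": "frontend", "to": "payments", "port": 443},
        {"from": "payments", "to": "ledger-db", "port": 5432},
    ],
    "default": "deny",
}

def validate(policy: dict) -> list:
    """CI-style checks run on a pull request, before the policy is applied."""
    errors = []
    if policy.get("default") != "deny":
        errors.append("default action must be deny")
    for rule in policy.get("allow", []):
        if not (0 < rule.get("port", 0) <= 65535):
            errors.append("bad port in rule: %r" % (rule,))
        if rule.get("from") == rule.get("to"):
            errors.append("self-referential rule: %r" % (rule,))
    return errors

if __name__ == "__main__":
    problems = validate(POLICY)
    print("policy OK" if not problems else problems)
```

The point of the pattern is that a rejected pull request, not a misconfigured switch, is where a bad rule gets caught.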

AI agents tackle network debugging

At Cisco Live, Cisco introduced its Foundation AI Security Model, an open-source, purpose-built model for security trained on a curated set of five million security-specific tokens. The company also unveiled the Deep Network Model, optimized for network operations with an agentic UI experience called AI Canvas.

David Zacks, director of innovation, advanced technologies, AI, and machine learning at Cisco, introduced the model. “The AI system isn’t smarter than the network engineer—it just has access to more data,” Zacks said. The ability to collect telemetry at scale, process it using machine reasoning, and surface actionable insights is rapidly becoming table stakes for operating reliable networks, he added.

As these feedback loops mature, it’s only a matter of time before developers start leveraging the same frameworks in pre-production environments, modeling how inference pipelines behave under load and automatically flagging performance cliffs or bottlenecks.
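What "flagging a performance cliff" could look like in practice: scan load-test telemetry for the point where tail latency stops degrading gradually and jumps. The data and threshold below are synthetic, a sketch of the feedback loop rather than any Cisco tool.

```python
"""Sketch: flag a latency 'performance cliff' in pre-production load-test data.

The numbers and the jump threshold are illustrative; a real pipeline
would pull latency percentiles from its observability stack.
"""

def find_cliff(load_steps, p99_latencies_ms, jump_factor=2.0):
    """Return the first load level where p99 latency jumps by more than
    jump_factor relative to the previous step, or None if no cliff."""
    for prev, curr, load in zip(p99_latencies_ms, p99_latencies_ms[1:], load_steps[1:]):
        if prev > 0 and curr / prev > jump_factor:
            return load
    return None

# Synthetic results: requests/sec vs. observed p99 latency in milliseconds.
loads = [100, 200, 400, 800, 1600]
p99 = [12.0, 13.5, 15.0, 48.0, 190.0]

print(find_cliff(loads, p99))  # -> 800
```

Here latency grows slowly up to 400 requests/sec and then triples, so the check flags 800 requests/sec as the cliff, the kind of signal a pre-production gate could act on automatically.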

A new architecture to meet the needs of AI

A repeated theme at Cisco Live has been that the full-stack redesign necessary to support AI is collapsing the traditional boundaries between applications and infrastructure.

“Two things in parallel—the models and the silicon—are becoming more bespoke,” said Cisco’s Jeetu Patel. “Models are getting smaller, silicon more programmable, and the time to market is shrinking. The model becomes part of the application itself. Iterating the app is iterating the model.”

That compression between application logic and inference hardware is triggering an architectural rethink. For AI workloads to perform, developers need visibility into how model design, network bandwidth, and inference placement intersect. Bandwidth-heavy queries from large language models are especially sensitive to latency and congestion—issues that are invisible until they hit the network.

At Cisco Live, sessions have emphasized how AI workflows are now being mapped directly to the network topology itself. Distributing load through pipeline parallelism, optimizing inference placement based on network path characteristics, and pre-caching model shards near compute boundaries are just a few of the strategies being discussed.
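One of those strategies, placing inference based on network path characteristics, reduces to a scoring problem: rank candidate sites by measured latency and bandwidth. The site names, measurements, and weights below are made up for illustration; real schedulers combine far more signals.

```python
"""Sketch: pick an inference placement from measured network path data.

All sites, measurements, and scoring weights are hypothetical,
illustrating the idea of topology-aware placement only.
"""

def placement_score(path, latency_weight=1.0, bw_weight=0.01):
    """Lower is better: penalize round-trip time, reward bandwidth."""
    return latency_weight * path["rtt_ms"] - bw_weight * path["bandwidth_mbps"]

def best_placement(paths):
    """Return the site whose path has the best (lowest) score."""
    return min(paths, key=placement_score)["site"]

candidate_paths = [
    {"site": "rack-a", "rtt_ms": 0.4, "bandwidth_mbps": 100_000},
    {"site": "rack-b", "rtt_ms": 0.9, "bandwidth_mbps": 400_000},
    {"site": "edge-pop", "rtt_ms": 8.0, "bandwidth_mbps": 10_000},
]

print(best_placement(candidate_paths))  # -> rack-b
```

For a bandwidth-heavy inference workload, the higher-throughput path wins even at slightly higher latency; a latency-sensitive workload would simply shift the weights.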

This is infrastructure thinking at the developer level, because performance is no longer just about the GPU, but about where, how, and how fast the data flows.

A convergence of application logic and network control

So then, are we approaching a moment where developers will get direct programmability over the network?

“Network programmability has been a goal for years within the networking community,” said Jim Frey, principal analyst for networking at Enterprise Strategy Group. “There is even a term of art for it, NetDevOps, as well as a growing community, the Network Automation Forum, that is focused on making it a reality.”

“But achieving that goal has been fiendishly difficult due to a lack of standard interfaces and closed, proprietary equipment architectures,” Frey said. “The arrival of AI is changing the rules of the game. Network infrastructure teams, and the equipment providers that supply them, are having to fall back and regroup for this new world, and find a path to programmability that aligns with the rest of the infrastructure domains.”

Given this new reality, the idea that a future control plane will give AI developers declarative access to bandwidth, latency profiles, or even Layer 7 behavior is not far-fetched, according to Cisco. “We’re building for AI not as a workload, but as the platform,” said Patrick LeMaistre, technical solutions architect at Cisco. “That changes everything.”
