DR’s MWC26 takeaway? Value in the telecom stack is moving up into the software layer. Hyperscalers are positioning their clouds as the platform for AI-driven network operations. Satellite networks are turning coverage into infrastructure that operators can source instead of build. And AI is lowering the cost of building telecom software itself. When infrastructure becomes easier to outsource, competitive advantage shifts from owning networks to running the business on top of them. Are you ready?
Gwynne Shotwell made Starlink’s next move clear at MWC26: a “tower in space” connecting directly to smartphones, offering operators global coverage they can source instead of build. The pitch is complementary—it fills gaps where building towers doesn’t pencil out. But sourcing coverage changes the economics more than operators are admitting. Coverage has been a moat for decades. Once it becomes infrastructure any operator can buy, competitive advantage migrates entirely to pricing, services and customer experience—the stack above the network. Operators who’ve invested billions in that physical moat need to be paying attention. 🧐
Benedict Evans framed AI as the next platform shift after web and mobile at MWC26, with one implication operators keep underselling: AI dramatically lowers the cost of building software. That matters because telecom has been held hostage to expensive, slow-moving software stacks dominated by a handful of vendors who profit from that complexity. Amdocs makes almost 80% of its revenue from services—a number that only makes sense when software is too costly and slow for operators to own it themselves. When AI collapses build costs, that dependency weakens. The question isn’t whether to buy AI tools. It’s whether the business case for outsourcing your entire software stack still holds when build costs drop by an order of magnitude.
GSMA announced the “Open Telco AI” initiative at MWC26 to train models on telecom-specific data: networks, protocols, service workflows. The logic seems sound until you realize frontier models already understand telecom. That’s not what’s failing. What’s failing is that billing systems, product catalogs, network inventory, and customer data live in disconnected systems with inconsistent definitions—which makes it impossible for any model, domain-trained or not, to reason coherently across the business. Training a smarter model on top of semantic chaos doesn’t fix the chaos. It just hallucinates more fluently about your specific protocols. The breakthrough in telco AI won’t come from better models. It will come from giving AI a coherent, executable representation of how the operator actually works. Hint: the solution is an ontology.
Google Cloud arrived at MWC26 with an accurate diagnosis: most telcos can’t operationalize AI because their data is trapped in “data swamps” across disconnected OSS and BSS. Google Cloud’s answer is cloud consolidation and graph models to map relationships across networks and operations. But graph models don’t tell AI what those connections mean or what actions are permitted. For example, a graph can tell you a subscriber is linked to a provisioning record and a billing account. It can’t tell you whether that subscriber is eligible for a retention offer, what the margin constraints are, or whether provisioning and billing even agree on who that subscriber is. Mapping relationships is not the same as modeling the business—and AI that can’t tell the difference keeps generating insights you can’t act on. You need… an ontology!
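To make the graph-versus-ontology distinction concrete, here’s a minimal Python sketch. Every name and policy number in it is hypothetical: a bare graph edge records that two records are linked, while an ontology fragment also encodes which source-system fields mean “the same subscriber” and the business rule governing a permitted action.

```python
from dataclasses import dataclass

# A bare graph edge: it says a subscriber record and a billing record
# are connected, but not what the connection means or permits.
graph_edges = {("sub_42", "bill_7"): "linked_to"}

# A hypothetical ontology fragment: same entities, plus semantics --
# which fields across systems refer to the same subscriber, and the
# constraints governing one permitted action (a retention offer).
@dataclass
class RetentionPolicy:
    min_tenure_months: int   # assumed eligibility threshold
    max_discount_pct: float  # assumed margin constraint

ontology = {
    "Subscriber": {
        "same_as": ["billing.account_holder", "provisioning.line_owner"],
        "retention_offer": RetentionPolicy(min_tenure_months=6,
                                           max_discount_pct=15.0),
    }
}

def eligible_for_retention(tenure_months: int, discount_pct: float) -> bool:
    """An agent can act on this rule; it cannot act on a bare edge."""
    policy = ontology["Subscriber"]["retention_offer"]
    return (tenure_months >= policy.min_tenure_months
            and discount_pct <= policy.max_discount_pct)
```

Here `eligible_for_retention(12, 10.0)` is actionable only because the ontology carries the eligibility threshold and margin constraint that a graph edge omits.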
RCR Wireless News published a companion article based on the interview I did at MWC with Editor in Chief Sean Kinney. The skinny: Operators have deployed hundreds of AI agents, but most of them are stuck waiting for human approval on every decision. The breakdown isn’t the model; it’s governance. And the reason governance breaks down is context: billing defines “subscriber” one way, provisioning defines it another, care has a third version. When AI lacks a coherent representation of the business, it guesses. Operators know it guesses. So they pump the brakes. The Totogi Ontology solves this by giving AI one model of the business—a single truth—that all agents reason from. Zain Sudan cut dormant cell resolution time from 48 hours to 30 minutes. StarHub is using it to support live customer sales conversations in real time. Nearly 10 Tier-1 operators are already running on it. Context isn’t a nice-to-have for AI at scale. It’s the prerequisite.
Microsoft’s MWC26 message was blunt: stop running pilots, start showing ROI. AT&T is reporting 5x returns from AI, and Microsoft is pitching Azure—combining AI, data, and governance—as the foundation for scaling those gains. But the architecture has a tell: Fabric as the data foundation, MCP as the agent connectivity layer, Open APIs as the BSS interface. That’s connectivity without comprehension. MCP moves bytes between systems that still speak different languages. When a retention agent queries billing, provisioning, and care and gets three different definitions of the same subscriber, semantic reconciliation happens at inference time—which is exactly where hallucination lives. Microsoft is right that “Show Me the Money” is the question. But autonomous AI acting on semantically incoherent data won’t produce the money at scale. Solve the semantics first, and you’ll be in business.
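A toy Python sketch of that failure mode, with payloads and field names invented for illustration: connectivity dutifully delivers three answers for “the same” subscriber, and because no two systems share a schema, the reconciliation burden lands on the model at inference time.

```python
# Three hypothetical system responses to one retention query.
billing      = {"subscriber_id": "B-1001", "status": "active"}
provisioning = {"line_owner": "MSISDN:971555000000", "state": "SUSPENDED"}
care         = {"customer_id": 77, "lifecycle": "churn-risk"}

def share_a_schema(records: list[dict]) -> bool:
    """True only if every system uses the same keys for the subscriber."""
    key_sets = {frozenset(r.keys()) for r in records}
    return len(key_sets) == 1

# No common identifier, no common status field: the agent must guess
# the mapping at inference time -- exactly where hallucination lives.
share_a_schema([billing, provisioning, care])  # → False
```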
The news that an AWS data center in Dubai was targeted by Iran will do exactly one thing in most telco boardrooms: get printed out and handed to the CTO as evidence that public cloud is too risky for critical workloads. That’s the wrong lesson. AWS has more security engineers, more redundancy, more threat detection infrastructure, and more incident response capability than any telco IT team on the planet. Your on-premises data center does not. The AWS customers who came through this cleanly weren’t just on the public cloud. They designed their workloads for automatic failover across availability zones and regions from day one. Yes, it costs more. It’s also your disaster recovery plan, whether the threat is an Iranian missile or a general outage. Targeted attacks on critical infrastructure are an argument for consolidating ONTO hyperscalers and designing for resilience—not retreating from the cloud. The question isn’t whether the public cloud is safe. It’s whether you can afford to keep pretending the alternative is safer.
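The “designed for failover from day one” posture reduces to a simple routing rule. A minimal Python sketch, with region names and health data invented for illustration (real deployments would put this in a DNS health-check service, not application code):

```python
# Ordered preference: primary region first, then failover targets.
REGIONS = ["me-central-1", "eu-west-1", "us-east-1"]

def pick_region(health: dict[str, bool]) -> str:
    """Route traffic to the first healthy region; fail loudly if none remain."""
    for region in REGIONS:
        if health.get(region, False):
            return region
    raise RuntimeError("no healthy region -- page a human")

# Primary knocked out (missile or mundane outage -- the logic is the same):
pick_region({"me-central-1": False, "eu-west-1": True, "us-east-1": True})
# → "eu-west-1"
```

The point isn’t the ten lines of code; it’s that the decision was made at design time, so the failover needs no boardroom meeting when the primary goes dark.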