StarHub has selected the Totogi Ontology to turn enterprise sales calls into deals the telco can actually close. Conversation intelligence platforms can transcribe a call and score an opportunity—but they can’t tell you whether that deal is priceable, sellable, or fulfillable, because they don’t understand how telco products work. That gap won’t be closed by a feature shipping next quarter; it’s structural. The Totogi Ontology ships with that knowledge pre-built. Every StarHub call gets analyzed in real time and translated into deal recommendations the business can execute immediately. Expected impact: up to a 10% lift in enterprise deal conversion and a 50% reduction in sales training time. And it’ll be in production in four weeks with a single forward-deployed engineer. Four weeks. That’s not a speed claim; it’s proof that the knowledge is built into the Totogi platform. Voilà!
Don’t miss the Totogi testimonial from Tier-1 operator Zain in my latest YouTube talk (at 4:50). Waleed Abdelmajeed, Director of Technology Strategy & Digital Transformation at Zain Sudan, describes how Totogi’s ontology solved a problem no monitoring system could: dormant cells. These cell sites show green lights and zero alarms, but carry zero traffic and silently bleed revenue. What used to take a technician 48 hours to diagnose now takes 30 minutes, and the system can now predict dormant cells before they happen. As Waleed put it, the Totogi Ontology is like LEGO bricks—once you build one capability, it becomes the foundation for the next use case. That’s the compounding value of an ontology: one foundation, infinite possibilities.
OpenAI and Anthropic released their most powerful coding models within 16 minutes of each other on February 5th—GPT-5.3-Codex and Claude Opus 4.6—and neither blinked. Codex clocks 25% faster execution and leads on terminal automation; Opus 4.6 leads on complex reasoning, real-world bug fixing, and multi-agent orchestration, stress-tested with 16 Claude instances building a C compiler from scratch. The underlying reality is remarkable: AI coding agents have gone from novelty to genuinely capable software engineers in under two years, and the gap between model generations is now measured in weeks. For telcos embarking on BSS modernization, this changes the build-vs-buy calculus permanently. The cost of custom software development is collapsing faster than any vendor pricing model has adjusted for.
Fierce Network’s latest Amdocs aOS coverage pitches “open by design” agents that work across any BSS/OSS stack, any vendor, any cloud. Read that again from the customer’s perspective: Amdocs just built the abstraction layer that makes Amdocs optional. For decades, switching off Amdocs meant a $200+ million transformation program and your career on the line; now it’s handing operators a multi-vendor AI framework that decouples intelligence from the underlying stack. Every operator using aOS to orchestrate across heterogeneous systems is one step closer to pulling the plug on the most expensive line item in its IT budget. Amdocs built aOS to stay relevant; its customers will use it as an exit ramp.
At a pre-MWC2026 briefing, Ericsson unveiled AI-ready radios with neural network accelerators baked into the silicon, betting that AI workloads belong inside the RAN. Think about that for a second. The entire compute industry is consolidating AI into hyperscale cloud infrastructure where models improve by the week, costs drop by the quarter, and you’re never stuck on last year’s hardware—and Ericsson wants to embed inference into radios bolted to towers with 7-10 year refresh cycles. The justification is edge AI use cases—smart glasses, AR, agent-to-agent traffic—that have been “18 months away” since 2016. Meanwhile, the vendor is also launching an “Agentic rApp as a Service” hosted on AWS, which quietly concedes the real point: the cloud is where AI workloads actually run. I guess when you’re a hardware company watching the AI market explode, you slap “AI-ready” on your radios and hope nobody notices all the workloads are running on AWS.
French shipping giant CMA CGM just signed a two-year deal to put Eutelsat OneWeb on 300+ of its 650 vessels as Europe’s sovereign alternative to Starlink. Noble goal, brutal math. Starlink has 9,500 satellites in orbit versus Eutelsat’s 600, delivers half the latency, and here’s the part that makes competition nearly impossible: Musk owns the rockets. SpaceX launches Starlink satellites essentially for free while Eutelsat pays commercial rates for every ride. Europe is spending €2.5 billion in state funding to build a constellation that will compete against a vertically integrated monopoly whose launch costs are effectively zero. Sovereignty is a worthy ambition, but you can’t subsidize your way past free rockets.