An Indian mobile network operator serving multiple telecom circles faced a familiar problem: a monolithic analytics stack that cost too much to run, took too long to scale, and frustrated both executives and field managers. The company had standardized on a traditional BI deployment model centered on a fixed cluster with long-lived nodes and a named-user licensing structure.
This model worked for early dashboarding needs, but it struggled once the carrier’s analytics footprint expanded to thousands of managers, retail partners, and network operations personnel who needed responsive, governed insights on any device.
This article breaks down how the carrier migrated from Sisense to InetSoft’s serverless BI microservice architecture, and the tangible impacts the IT team measured in licensing, resource consumption, operational overhead, support burden, and end-user satisfaction. The narrative focuses on practical, reproducible patterns an enterprise IT group can adapt without rewriting the entire data platform.
The carrier’s Sisense deployment evolved organically: a set of Elasticube servers, a handful of API nodes, and gateway services pinned to fixed VMs. Cost was predictable—but high—because the team had to size for peak: quarter-end finance closes, Black Friday-style promotions, and network incident surges. That meant running compute at near-peak 24×7 to avoid timeouts, even when utilization dipped below 20% overnight or on non-reporting days.
InetSoft’s serverless BI microservice allowed the team to treat visualization, semantic modeling, caching, and delivery as independently scalable components. Instead of pinning analytics to fixed hosts, the platform ran as on-demand workloads on Kubernetes (an AKS/EKS-style managed service in the carrier’s chosen cloud). Each query burst was handled by short-lived workers; idle time returned capacity to the cluster. The semantic layer, RLS/column policies, and caching were stateless or state-light, backed by managed storage and a message bus for orchestration.
Core components deployed as microservices:

- Gateway & auth (OIDC/SAML SSO)
- Semantic layer & catalog
- Query workers (autoscaled, ephemeral)
- Cache/accelerator store (hot measures, lookup dims)
- Renderers (chart/PDF/CSV)
- Scheduler & alerting
- Embed/API service for partner portals
Two architectural details mattered most for the carrier’s economics. First, query workers were ephemeral: compute was released the moment a burst ended, so the cluster no longer had to be sized for peak. Second, the services were stateless or state-light, so each component could scale and fail over independently instead of dragging the whole stack along.
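To make the first point concrete, here is a minimal Python sketch of the ephemeral-worker pattern. It illustrates the idea rather than InetSoft’s implementation; a real deployment would pull from the message bus and run each worker in its own pod, with a stdlib queue standing in here to keep the sketch runnable:

```python
import queue
import threading
import time

# In production this would be a message bus feeding pods that the
# autoscaler creates and reclaims; a stdlib queue keeps the sketch runnable.
work_queue: "queue.Queue[str]" = queue.Queue()

IDLE_TIMEOUT_S = 2  # kept short for the demo; think seconds-to-minutes in production


def query_worker(worker_id: int) -> None:
    """Short-lived worker: drain queries until idle, then exit."""
    while True:
        try:
            sql = work_queue.get(timeout=IDLE_TIMEOUT_S)
        except queue.Empty:
            print(f"worker {worker_id}: idle, releasing capacity")
            return  # the pod exits and the cluster reclaims its capacity
        print(f"worker {worker_id}: executing {sql!r}")
        time.sleep(0.1)  # stand-in for actual query execution
        work_queue.task_done()


# Simulate a burst: enqueue work, start workers only for the burst.
for i in range(5):
    work_queue.put(f"SELECT kpi FROM circle_{i}")

workers = [threading.Thread(target=query_worker, args=(n,)) for n in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```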
The migration included a pivot in licensing. Instead of paying primarily per named user, the carrier negotiated a usage-aligned model combining a modest base subscription with concurrency and compute-minute packs. The result was financially favorable for a profile with thousands of occasional viewers and a smaller group of heavy analysts.
Illustrative numbers for context (the carrier’s internal ledger used similar proportions):
| Cost Area | Before (Sisense) | After (InetSoft serverless BI) | Delta |
|---|---|---|---|
| Licensing (annual) | ₹7.8 crore (mixed named/viewer) | ₹4.9 crore (base + concurrency/compute packs) | −37% |
| Cloud compute & storage | ₹5.1 crore (always-on cluster) | ₹2.7 crore (ephemeral workers + tiered storage) | −47% |
| Professional services & support | ₹1.6 crore | ₹0.9 crore | −44% |
| Total annual TCO | ₹14.5 crore | ₹8.5 crore | −41% |
Figures vary by contract, but the pattern is durable: when most users are viewers and traffic is spiky, a serverless + concurrency model tends to reduce spend without rationing access.
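The arithmetic behind that pattern is easy to sketch. The rates below are hypothetical and do not come from the carrier’s contract; they simply show why cost tracks peak concurrency rather than headcount when most users are occasional viewers:

```python
# Hypothetical per-unit rates for illustration only; real contract terms vary.
NAMED_USER_RATE = 9_000          # ₹/user/year under named-user licensing
BASE_SUBSCRIPTION = 80_00_000    # ₹/year flat base (₹80 lakh)
CONCURRENCY_PACK_RATE = 150_000  # ₹/year per concurrent-session slot


def named_user_cost(total_users: int) -> int:
    # Every provisioned user pays, whether they log in daily or quarterly.
    return total_users * NAMED_USER_RATE


def concurrency_cost(peak_concurrent: int) -> int:
    # Cost tracks simultaneous sessions, not headcount.
    return BASE_SUBSCRIPTION + peak_concurrent * CONCURRENCY_PACK_RATE


# Profile from the article: thousands of occasional viewers,
# few of whom are ever online at the same moment.
users, peak = 6_000, 180
print(f"named-user : ₹{named_user_cost(users):,}")   # ₹54,000,000
print(f"concurrency: ₹{concurrency_cost(peak):,}")   # ₹35,000,000
```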
Telecom usage in India is bursty—festival promotions, cricket match nights, and last-day recharge rushes. The InetSoft stack used cluster autoscalers and Horizontal Pod Autoscalers to spin up query workers only when queue depth or CPU crossed a threshold. On top of the autoscaler, the team layered a simple scaling policy, sketched below.
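The exact thresholds are the carrier’s own; the values below are assumptions chosen only to show the general shape of a queue-depth policy:

```python
import math
from dataclasses import dataclass


@dataclass
class ScalingPolicy:
    """Illustrative queue-depth policy; the thresholds here are
    assumptions, not the carrier's production values."""
    queries_per_worker: int = 8   # target backlog each worker absorbs
    min_workers: int = 0          # scale to zero when fully idle
    max_workers: int = 40         # ceiling protects the warehouse

    def desired_replicas(self, queue_depth: int) -> int:
        # Size the pool to the backlog, clamped between floor and ceiling.
        target = math.ceil(queue_depth / self.queries_per_worker)
        return max(self.min_workers, min(self.max_workers, target))


policy = ScalingPolicy()
print(policy.desired_replicas(queue_depth=37))  # 37/8 -> 5 workers
print(policy.desired_replicas(queue_depth=0))   # idle  -> 0 workers
```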
Measured outcomes in the first 90 days:
Operationally, the win came from replacing long-lived, hand-configured VMs with declarative manifests and CI/CD pipelines. The BI team shifted from caretaking servers to maintaining templates and policies.
Staff time freed up quickly. The platform team measured a 45% reduction in routine maintenance hours per month. Releases moved to a weekly cadence, and model changes were versioned alongside application code. Circle-level requests—like adding a language-specific label set or a partner-only drill path—shipped as config, not projects.
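What “config, not projects” looked like in practice can be sketched with a hypothetical overlay; the schema below is invented for illustration and is not InetSoft’s actual packaging format:

```python
# Hypothetical circle-level overlay, invented for illustration.
# Central IT owns the base package; a circle ships a small,
# reviewable override in version control instead of opening a project.
overlay = {
    "base_package": "kpi-board-v12",        # canonical dashboard set
    "circle": "UP-East",
    "labels": {"locale": "hi-IN"},          # language-specific label set
    "drill_paths": {
        "partner_only": ["activations", "recharges", "sim_swaps"],
    },
    "row_security": {"claim": "circle", "equals": "UP-East"},
}
```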
Before the migration, the help desk fielded a steady stream of “report slow” and “export failed” tickets. Many correlated to predictable spikes that the fixed cluster absorbed poorly. After moving to InetSoft’s microservice design, the ticket categories changed.
Support also benefited from admin UX improvements. Circle admins could clone a dashboard package, apply their language and RLS rules, and publish to their audience without opening central IT tickets. That reduced the “last mile” friction that previously turned into support work.
The carrier tightened governance during the move. InetSoft’s semantic layer consolidated metric definitions—ARPU, MOU, recharge conversion, drop-call ratio—so numbers matched across circles and functions. Row-level security was expressed as policies on the semantic model, driven by SSO attributes (circle, partner tier, channel). Because services were stateless, failover tests became realistic: the team routinely killed pods during business hours to validate graceful degradation.
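The RLS pattern generalizes well. The minimal Python sketch below shows the generic idea of deriving a row filter from SSO attributes; it is not InetSoft’s policy syntax:

```python
from dataclasses import dataclass


@dataclass
class SsoClaims:
    """Attributes asserted by the SSO provider at login."""
    circle: str        # e.g., "KA" for Karnataka
    partner_tier: str  # e.g., "gold"
    channel: str       # e.g., "retail"


def row_filter(claims: SsoClaims) -> dict:
    """Derive the row-level filter from SSO attributes.

    Because the policy lives on the semantic model, every dashboard,
    export, and embedded view inherits the same slice automatically.
    """
    return {
        "circle": claims.circle,
        "partner_tier": claims.partner_tier,
        "channel": claims.channel,
    }


print(row_filter(SsoClaims(circle="KA", partner_tier="gold", channel="retail")))
```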
For regulated data, the team used network policies to fence services and a private egress for warehouse access. PII fields stayed masked in the model unless the viewer had a specific claim. Audit logs recorded query context, filter state, and export destinations for every run, satisfying internal audit requirements without custom tooling.
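A sketch of the kind of audit entry this produces follows; the field names are illustrative, not InetSoft’s actual log schema:

```python
import json
import time
import uuid
from typing import Optional


def audit_record(user: str, dashboard: str, filters: dict,
                 export_dest: Optional[str] = None) -> str:
    """Emit one audit entry per run (illustrative field names)."""
    return json.dumps({
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "user": user,
        "dashboard": dashboard,
        "filter_state": filters,            # the exact slice the viewer saw
        "export_destination": export_dest,  # None for on-screen views
    })


print(audit_record("anita@carrier.example", "arpu-board",
                   {"circle": "KA", "plan": "prepaid"},
                   export_dest="s3://bi-exports/arpu-board.csv"))
```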
Adoption surged when the product met users where they were. Executives favored a bilingual (English + Hindi) board with three levels of drill: national → circle → cluster. Field managers got a mobile-first view with just eight KPI tiles, a trend sparkline, and a tap-to-explain note for anomalies. Retail partners accessed an embedded portal that exposed only their counters’ activations, recharges, and SIM swaps with next-best-action nudges.
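The embed pattern behind that portal is worth sketching: sign a short-lived token that carries the partner’s scope, so the embedded view can only ever show that partner’s counters. This is a generic sketch assuming the PyJWT library, not InetSoft’s actual embed API:

```python
import time

import jwt  # PyJWT; generic signed-embed sketch, not InetSoft's API

EMBED_SECRET = "rotate-me-regularly"  # shared with the embed/API service


def partner_embed_token(partner_id: str, counters: list) -> str:
    """Short-lived, scope-limited token for the embedded portal."""
    return jwt.encode(
        {
            "sub": partner_id,
            "scope": {
                "counters": counters,  # only this partner's counters
                "measures": ["activations", "recharges", "sim_swaps"],
            },
            "exp": int(time.time()) + 300,  # 5-minute validity
        },
        EMBED_SECRET,
        algorithm="HS256",
    )


token = partner_embed_token("partner-042", ["CTR-1181", "CTR-1204"])
```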
Measured after rollout:
Two design choices mattered: first, the team removed low-value widgets and invested in in-place explanations for KPI movements; second, they standardized on a small set of canonical dashboards per audience with filter books (e.g., “Prepaid North,” “Postpaid Corporate,” “Retail Tier-2”). Less choice led to higher confidence.
The carrier did not big-bang the cutover. They ran four phases over ten weeks.
Because the new stack ran on Kubernetes with Infrastructure-as-Code, rollback was not dramatic: feature flags toggled the old gateway if needed. In practice, the toggle stayed off.
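The toggle itself is a small pattern worth showing. Here is a minimal sketch, assuming a flag sourced from the environment; the endpoint names are hypothetical:

```python
import os

# Hypothetical endpoints; the flag is set by the IaC pipeline, so a
# rollback is a config change rather than a redeploy.
LEGACY_GATEWAY = "https://bi-legacy.internal.example"
NEW_GATEWAY = "https://bi.internal.example"


def gateway_for(path: str) -> str:
    """Route to the old stack only while the rollback flag is set."""
    if os.environ.get("BI_ROLLBACK_TO_LEGACY") == "1":
        return LEGACY_GATEWAY + path
    return NEW_GATEWAY + path


print(gateway_for("/dashboards/arpu"))  # new stack unless the flag flips
```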