Why an Indian Mobile Carrier Switched from Sisense to InetSoft’s Serverless BI Microservice

An Indian mobile network operator running across multiple telecom circles faced a familiar problem: a monolithic analytics stack that cost too much to run, took too long to scale, and frustrated both executives and field managers. The company had standardized on a traditional BI deployment model centered on a fixed cluster with long-lived nodes and a named-user licensing structure.

While it worked for early dashboarding needs, it struggled once the carrier’s analytics footprint expanded to thousands of managers, retail partners, and network operations personnel who needed responsive, governed insights on any device.

This article breaks down how the carrier migrated from Sisense to InetSoft’s serverless BI microservice architecture, and the tangible impacts the IT team measured in licensing, resource consumption, operational overhead, support burden, and end-user satisfaction. The narrative focuses on practical, reproducible patterns an enterprise IT group can adapt without rewriting the entire data platform.

Starting Point: Challenges With the Previous Stack

The carrier’s Sisense deployment evolved organically: a set of ElastiCube servers, a handful of API nodes, and gateway services pinned to fixed VMs. Cost was predictable—but high—because the team had to size for peak: quarter-end finance closes, Black Friday-style promotions, and network incident surges. That meant running compute at near-peak 24×7 to avoid timeouts, even when utilization dipped below 20% overnight or on non-reporting days.

  • Licensing friction: A mix of named and viewer users created per-seat cost pressure. Contractors and regional partners needed temporary access, which translated into license juggling and audits.
  • Scaling limits: Horizontal scale required procurement tickets, new nodes, and scheduled downtime—misaligned with traffic spikes from flash promotions or mass-market prepaid offers.
  • Operational drag: The analytics team ran patching windows and health checks on a dozen-plus long-lived servers. Capacity planning was a quarterly project, not a policy.
  • User experience pain: Dashboard load time and drill-down latency varied by hour. Field managers often defaulted to static exports shared over email and WhatsApp, which defeated the promise of live analytics.

Target State: Serverless BI as a Set of Microservices

InetSoft’s serverless BI microservice allowed the team to treat visualization, semantic modeling, caching, and delivery as independently scalable components. Instead of pinning analytics to fixed hosts, the platform ran as on-demand workloads on Kubernetes (AKS/EKS-style equivalent in the chosen cloud). Each query burst was handled by short-lived workers; idle time returned capacity to the cluster. The semantic layer, RLS/column policies, and caching were stateless or state-light, backed by managed storage and a message bus for orchestration.

Core components deployed as microservices:

  • Gateway & auth (OIDC/SAML SSO)
  • Semantic layer & catalog
  • Query workers (autoscaled, ephemeral)
  • Cache/accelerator store (hot measures, lookup dims)
  • Renderers (chart/PDF/CSV)
  • Scheduler & alerting
  • Embed/API service for partner portals

Two architectural details mattered most for the carrier’s economics:

  1. Ephemeral query workers: Compute spun up for seconds or minutes per request, then disappeared. The team stopped paying for idle capacity.
  2. Adaptive caching: Frequently accessed KPI tiles—ARPU trends, recharge funnel conversion, drop-call heatmaps—were pre-materialized by policy during off-peak windows, shaving p95 response time without over-provisioning.
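The second point can be made concrete with a minimal sketch of a policy-driven warm-up pass: refresh the hottest KPI tiles during an off-peak window so interactive requests hit a warm cache. All names here (`warm_tiles`, `HOT_TILES`, the hours) are illustrative, not InetSoft APIs.

```python
import time

# Illustrative policy: pre-materialize the hottest KPI tiles off-peak.
HOT_TILES = ["arpu_trend", "recharge_funnel", "drop_call_heatmap"]
OFF_PEAK_HOURS = range(1, 5)  # 01:00-04:59 local time

def should_warm(now_hour: int) -> bool:
    """Only run the batch pre-render inside the off-peak window."""
    return now_hour in OFF_PEAK_HOURS

def warm_tiles(cache: dict, run_query, now_hour: int) -> int:
    """Refresh hot tiles in the cache; returns how many were materialized."""
    if not should_warm(now_hour):
        return 0
    warmed = 0
    for tile in HOT_TILES:
        cache[tile] = {"data": run_query(tile), "ts": time.time()}
        warmed += 1
    return warmed

cache: dict = {}
warmed = warm_tiles(cache, run_query=lambda t: f"result:{t}", now_hour=2)
```

Because the warm-up only runs in the stated window, a mid-day invocation is a no-op, which is exactly what keeps the ephemeral-worker bill down.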

Licensing: From Seats to Elastic Consumption

The migration included a pivot in licensing. Instead of paying primarily per named user, the carrier negotiated a usage-aligned model combining a modest base subscription with concurrency and compute-minute packs. The result was financially favorable for a profile with thousands of occasional viewers and a smaller group of heavy analysts.

  • Viewer explosion without penalty: Regional sales heads, circle managers, and retail partners accessed dashboards via SSO and embedded portals without the IT team minting new named licenses for each seasonal spike.
  • Analyst power users right-sized: A limited set of modelers and analysts retained creator rights; the rest consumed governed content.
  • Contractor access simplified: Short-term access was absorbed by the concurrency allowance instead of creating license churn.

Illustrative numbers for context (the carrier’s internal ledger used similar proportions):

| Cost Area | Before (Sisense) | After (InetSoft serverless BI) | Delta |
|---|---|---|---|
| Licensing (annual) | ₹7.8 crore (mixed named/viewer) | ₹4.9 crore (base + concurrency/compute packs) | −37% |
| Cloud compute & storage | ₹5.1 crore (always-on cluster) | ₹2.7 crore (ephemeral workers + tiered storage) | −47% |
| Professional services & support | ₹1.6 crore | ₹0.9 crore | −44% |
| Total annual TCO | ₹14.5 crore | ₹8.5 crore | −41% |

Figures vary by contract, but the pattern is durable: when most users are viewers and traffic is spiky, a serverless + concurrency model tends to reduce spend without rationing access.
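The deltas above are plain before/after arithmetic; a quick check of the illustrative figures (in ₹ crore) looks like this:

```python
# Recompute the table's percentage deltas from its before/after figures (₹ crore).
rows = {
    "licensing": (7.8, 4.9),
    "cloud": (5.1, 2.7),
    "services_support": (1.6, 0.9),
}

def delta_pct(before: float, after: float) -> int:
    """Rounded percentage change; negative values are savings."""
    return round((after - before) / before * 100)

deltas = {name: delta_pct(b, a) for name, (b, a) in rows.items()}
total_before = sum(b for b, _ in rows.values())  # ₹14.5 crore
total_after = sum(a for _, a in rows.values())   # ₹8.5 crore
total_delta = delta_pct(total_before, total_after)
```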

Resource Savings: Autoscaling That Matches India’s Usage Rhythms

Telecom usage in India is bursty—festival promotions, cricket match nights, and last-day recharge rushes. The InetSoft stack used cluster autoscalers and Horizontal Pod Autoscalers to spin up query workers only when the queue depth or CPU crossed a threshold. The team also implemented a simple policy:

  • p95 response SLA: Keep most dashboard tiles under 2.5 seconds at the edge; pre-render KPI blocks nightly for high-traffic boards.
  • Cold-start budget: Maintain a small warm pool during business hours; nights and Sundays back off to near zero outside scheduled jobs.
  • Workload separation: Batch precomputation ran on spot instances; interactive queries remained on on-demand, minimizing cost risk.
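The scale-out rule described above—add workers when queue depth or CPU crosses a threshold, fall back to zero when idle—can be sketched as a small function. This mirrors the HPA idea of scaling proportionally to how far a signal sits above its target; the thresholds and the function itself are illustrative, not the platform's actual controller.

```python
import math

def desired_workers(queue_depth: int, cpu_util: float, current: int,
                    queue_per_worker: int = 10, cpu_target: float = 0.7,
                    max_workers: int = 50) -> int:
    """Pick a worker count from queue pressure and CPU utilization.

    Takes whichever signal demands more capacity, clamped to
    [0, max_workers] so idle periods fall back to zero.
    """
    by_queue = math.ceil(queue_depth / queue_per_worker)
    by_cpu = math.ceil(current * cpu_util / cpu_target) if current else 0
    return min(max_workers, max(by_queue, by_cpu))
```

An empty queue at night yields zero workers; a promotion burst scales out immediately but never past the cluster cap.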

Measured outcomes in the first 90 days:

  • Compute hours: Down 58% quarter over quarter.
  • Hot cache hit rate: Up from 32% to 71% on top dashboards, halving median response time.
  • Storage IO: Reduced 29% by pushing cold artifacts to cheap object storage and tightening TTLs.

Operational Overhead: Fewer Pets, More Cattle

Operationally, the win came from replacing long-lived, hand-configured VMs with declarative manifests and CI/CD pipelines. The BI team shifted from caretaking servers to maintaining templates and policies.

  • Cluster maintenance: Blue-green rollouts cut patch windows from hours to minutes with automated health gates.
  • Secrets and config: Standardized on a managed secrets store; no more credentials in local scripts.
  • Observability: A basic trio—logs, metrics, traces—fed SRE dashboards. Query p95 and error rate became the product KPIs.

Staff time freed up quickly. The platform team measured a 45% reduction in routine maintenance hours per month. Releases moved to a weekly cadence, and model changes were versioned alongside application code. Circle-level requests—like adding a language-specific label set or a partner-only drill path—shipped as config, not projects.

Support Burden: Ticket Volume Falls as UX Becomes Predictable

Before the migration, the help desk fielded a steady stream of “report slow” and “export failed” tickets. Many correlated to predictable spikes that the fixed cluster absorbed poorly. After moving to InetSoft’s microservice design, the ticket categories changed:

  • Performance/timeout tickets: Down 63% due to autoscaling and cache policy.
  • Access/license tickets: Down 54% thanks to SSO groups and concurrency licensing that eliminated seat assignments for casual viewers.
  • Export issues: Down 41% after shifting heavy scheduled outputs to a separate rendering queue with retry and backoff.

Support also benefited from admin UX improvements. Circle admins could clone a dashboard package, apply their language and RLS rules, and publish to their audience without opening central IT tickets. That reduced the “last mile” friction that previously turned into support work.

Security, Governance, and Telco-Grade Isolation

The carrier tightened governance during the move. InetSoft’s semantic layer consolidated metric definitions—ARPU, MOU, recharge conversion, drop-call ratio—so numbers matched across circles and functions. Row-level security was expressed as policies on the semantic model, driven by SSO attributes (circle, partner tier, channel). Because services were stateless, failover tests became realistic: the team routinely killed pods during business hours to validate graceful degradation.

For regulated data, the team used network policies to fence services and a private egress for warehouse access. PII fields stayed masked in the model unless the viewer had a specific claim. Audit logs recorded query context, filter state, and export destinations for every run, satisfying internal audit requirements without custom tooling.
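The attribute-driven RLS and masking described above reduce, in essence, to a filter keyed on SSO claims. A simplified sketch—field names, the `entitlements` claim, and the masking rule are all hypothetical stand-ins for the policies expressed on the semantic model:

```python
def apply_rls(rows, claims):
    """Keep only rows in the viewer's circle and partner tier, and mask
    the MSISDN (a PII field) unless the token carries a 'pii' entitlement."""
    visible = [r for r in rows
               if r["circle"] == claims["circle"]
               and r["partner_tier"] in claims["partner_tiers"]]
    if "pii" not in claims.get("entitlements", []):
        visible = [{**r, "msisdn": "***"} for r in visible]
    return visible

rows = [
    {"circle": "North", "partner_tier": "T2", "msisdn": "9876543210", "recharges": 41},
    {"circle": "South", "partner_tier": "T2", "msisdn": "9123456780", "recharges": 17},
]
viewer = {"circle": "North", "partner_tiers": ["T1", "T2"], "entitlements": []}
scoped = apply_rls(rows, viewer)
```

The key design point is that the filter runs in the semantic layer, so every dashboard and export inherits it without per-report configuration.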

End-User Satisfaction: Executives and Field Managers Actually Use It

Adoption surged when the product met users where they were. Executives favored a bilingual (English + Hindi) board with three levels of drill: national → circle → cluster. Field managers got a mobile-first view with just eight KPI tiles, a trend sparkline, and a tap-to-explain note for anomalies. Retail partners accessed an embedded portal that exposed only their counters’ activations, recharges, and SIM swaps with next-best-action nudges.

Measured after rollout:

  • Daily active viewers: +74% versus the previous quarter.
  • Median tile load time: 1.8s (down from 4.2s).
  • Scheduled export usage: −36% as more users trusted live views.
  • Satisfaction score (management): 4.5/5 across executive and circle leadership surveys.

Two design choices mattered: first, the team removed low-value widgets and invested in in-place explanations for KPI movements; second, they standardized on a small set of canonical dashboards per audience with filter books (e.g., “Prepaid North,” “Postpaid Corporate,” “Retail Tier-2”). Less choice led to higher confidence.

Migrating Without Stopping the Business

The carrier did not attempt a big-bang cutover; the migration ran in four phases over ten weeks:

  1. Inventory & rationalize: Audit all existing dashboards; retire 25% on day one.
  2. Model the canon: Build the semantic layer for the top 30 KPIs and 12 dimensions used across sales, marketing, and network ops.
  3. Parallel run: Rebuild five flagship dashboards in InetSoft, validate numbers against the old stack, and open to a pilot audience.
  4. Redirect & decommission: Swap links in portals and emails; freeze old content; archive monthly snapshots to object storage.
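The parallel-run validation in step 3 is essentially a reconciliation check: compute each flagship KPI on both stacks and flag disagreements beyond a tolerance. A sketch with illustrative KPI names and a 0.5% relative tolerance (both assumptions, not figures from the project):

```python
def reconcile(old_kpis: dict, new_kpis: dict, rel_tol: float = 0.005):
    """Return the KPI names whose legacy and rebuilt values disagree
    by more than rel_tol (relative), or that are missing in the new stack."""
    mismatches = []
    for name, old_val in old_kpis.items():
        new_val = new_kpis.get(name)
        if new_val is None or abs(new_val - old_val) > rel_tol * abs(old_val):
            mismatches.append(name)
    return mismatches

old = {"arpu": 142.0, "recharge_conv": 0.310, "drop_call_ratio": 0.021}
new = {"arpu": 142.3, "recharge_conv": 0.310, "drop_call_ratio": 0.029}
diffs = reconcile(old, new)
```

Running this per dashboard per day during the pilot gives a concrete exit criterion for the parallel run: zero mismatches for a full reporting cycle.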

Because the new stack ran on Kubernetes with Infrastructure-as-Code, rollback was not dramatic: feature flags toggled the old gateway if needed. In practice, the toggle stayed off.

What IT Should Copy From This Playbook

  • Align licensing with usage shape: If 80–90% of your users are viewers with unpredictable traffic, concurrency + compute packs beat named seats.
  • Design for ephemerality: Assume services die; keep state in managed stores; make workers disposable.
  • Cache deliberately: Precompute the 20% of tiles that drive 80% of traffic; control TTLs ruthlessly.
  • Separate batch from interactive: Give scheduled renders their own queue so users do not compete with PDF jobs.
  • Measure p95 and error rate: Treat the BI platform like a product with SLAs, not a project with a finish line.
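Treating p95 as a product KPI only works if every team computes it the same way. One common convention is the nearest-rank percentile; a sketch (the choice of nearest-rank over interpolation is an assumption, not something the carrier's tooling is known to use):

```python
import math

def p95(latencies_ms):
    """Nearest-rank p95: the smallest sample at or above the 95th percentile."""
    if not latencies_ms:
        raise ValueError("no samples")
    ordered = sorted(latencies_ms)
    rank = math.ceil(95 * len(ordered) / 100)  # 1-based nearest rank
    return ordered[rank - 1]
```

Nearest-rank always returns a value that actually occurred, which makes SLA breaches easier to trace back to a concrete slow query.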