The videos on this page are provided to help you take full advantage of the features of StyleBI.
Part 1 of the Evaluation Guide, an introduction to StyleBI for users evaluating the software prior to purchase.
Contents: Creating a connection to a relational database.
Part 2 of the Evaluation Guide, an introduction to StyleBI for users evaluating the software prior to purchase.
Contents: Importing a text or Excel file into a Data Worksheet.
Part 3 of the Evaluation Guide, an introduction to StyleBI for users evaluating the software prior to purchase.
Contents: Creating a Data Worksheet and a data mashup.
Part 4 of the Evaluation Guide, an introduction to StyleBI for users evaluating the software prior to purchase.
Contents: Creating a dashboard.
Part 5 of the Evaluation Guide, an introduction to StyleBI for users evaluating the software prior to purchase.
Contents: Creating a production report.
This article discusses how the platform helps end users combine disparate data into analytic-ready "data blocks" through governed, self-service mashup. It emphasizes that users can visually join diverse structured and unstructured sources and apply transformations without building full ETL flows. The piece highlights that the underlying engine supports compression, in-memory caching, and clustering for performance at scale. It describes a case in which a highly technical manufacturing company shifted from a rigid integration tool to this more agile mashup platform. Finally, it underscores benefits such as faster insight cycles, lower maintenance overhead, and tighter data governance in self-service contexts.
This piece explains how the BI platform enables business users to create their own mashups of data sources without waiting for IT or for lengthy ETL flows. It highlights that the platform supports many source types (databases, flat files, XML, etc.) and allows mashup of fields that had never been mapped together. It underscores self-service analytics: users can combine data, build dashboards, and explore results in one unified environment. It also covers how traditional warehouse projects often impose delays and how mashup can provide a quicker path to insight. The article positions data mashup as a core capability within the product suite for accelerating adoption and usage.
This article defines what "data mashup" means in the BI industry and contrasts it with traditional "single truth" and heavy, ETL-centric data warehouse approaches. It describes how end users benefit from being able to combine data fields from different tables or sources on the fly, and how IT benefits from a reduced backlog and more agile delivery. It outlines key business benefits, such as higher adoption, faster decisions, easier sharing, and lower cost of BI deployment. The article warns that giving users this power still requires governance and trustworthy data practices. It recommends that organizations adopt mashup-enabled platforms to bridge the gap between business and IT while preserving the integrity of data usage.
This article surveys the data mashup engine capabilities built into the BI platform: combining disparate sources, modeling, visualization, and unified analytics. It describes how the engine pulls data from big data sources such as Hadoop, relational databases, and flat files, and then allows mashup with modeling and dashboard consumption. It emphasizes agility: users no longer wait months for new reports or changes; they can iterate faster. Embedded examples highlight how mashup enables discovery of new correlations (e.g., combining social media and sales data). The article encourages evaluating mashup readiness when selecting BI tools and explains how features like connectors, transformation, caching, and modeling support it.
This article contrasts data mashup with traditional data warehouses, highlighting how mashup delivers faster deployment, greater flexibility, lower cost, and easier use by non-technical users. It outlines how mashup supports ad hoc analytics, dynamic data sources, unstructured formats, and real-time integration needs that warehouses cannot easily handle. It stresses that mashup tools empower business users while maintaining governance frameworks for IT oversight. It argues that for many emerging analytics use cases, mashup is a better fit than rigid warehouse pipelines. Finally, it encourages organizations to adopt a hybrid approach, combining mashup and ETL/warehouse processes where each is appropriate.
This article describes the "enterprise mashup server" offering, which enables direct connection to operational data sources and allows BI architects and end users to build dashboards before heavy ETL or warehouse redesigns. It explains how this flips the traditional BI process by enabling early-stage mashup of unmapped sources for prototyping and fast insight. It covers how business users can drag and drop fields, import spreadsheets, and mash them up with operational databases to rapidly create dashboards. It highlights that this improves agility, reduces time to insight, and modernizes the BI delivery cycle. It also describes the supporting architecture: connectors, caching, in-memory blocks, and a self-service design built for enterprise scale.
This article focuses on how the BI platform provides extensive source connectivity and then uses a mashup engine to unify those sources into single views. It lists the wide range of supported sources (databases, Hadoop, SaaS, flat files, spreadsheets, etc.) and explains how the mashup engine merges them on common dimensions or keys. It describes how users can extract, join, and visualize the aggregated data directly in dashboards. The piece highlights that, without needing custom connectors or ETL pipelines, everyday users can create analytic datasets faster. Finally, it underscores that broad source support plus the mashup capability accelerates insight generation for departments across the enterprise.
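The merge-on-common-keys idea described above can be sketched in plain Python. This is a minimal illustration, not a StyleBI API: the two sources, the field names, and the `customer_id` key are all invented for the example.

```python
import csv
import io

# Hypothetical source A: rows as if pulled from a relational database.
db_rows = [
    {"customer_id": "C1", "region": "East"},
    {"customer_id": "C2", "region": "West"},
]

# Hypothetical source B: a spreadsheet export, parsed here from CSV text.
csv_text = "customer_id,revenue\nC1,1200\nC2,800\nC3,450\n"
sheet_rows = list(csv.DictReader(io.StringIO(csv_text)))

# Index one side by the shared key, then join the other side against it.
by_key = {row["customer_id"]: row for row in db_rows}
mashup = [
    {**by_key[row["customer_id"]], "revenue": float(row["revenue"])}
    for row in sheet_rows
    if row["customer_id"] in by_key  # inner join: keep only matched keys
]
print(mashup)  # C1 and C2 rows, each carrying both region and revenue
```

A mashup engine generalizes this pattern across many source types, but the core operation remains a join on a dimension or key the sources share (here, C3 drops out because it has no match in source A).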
This article presents real-world examples of data mashups: one in biotechnology, combining sales and external logistics data; another in telecommunications, combining navigation and partner data into dashboards. It explains how virtualization and mashup can reduce latency and allow near-real-time reporting of KPIs spanning multiple sources. It highlights how, once one department sees the benefit, others typically replicate mashup-driven dashboards across HR, finance, operations, and logistics. The article illustrates how mashup accelerates operational BI by enabling agile data reuse rather than static, warehouse-only models. Lastly, it emphasizes that creating a data services layer and reusing mashup assets provides enterprise-wide value and cost savings.
This article outlines a process for performing data mashup in higher education: defining objectives, selecting sources, preparing data, combining datasets, and presenting results to stakeholders. It explains how universities often mash up student performance, course enrollment, financial aid, and external labor market data to surface actionable insight. The piece emphasizes the importance of cleaning, deduplicating, and formatting data prior to mashup, and of understanding project scope relative to data size and complexity. It then discusses how combined visualizations and dashboards help academic institutions answer retention, graduation-rate, and job-placement questions faster. The article closes by recommending a stakeholder-centric presentation of mashup results, aligned to institutional goals.
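The prepare-then-combine steps above can be sketched as a short Python example. The records, field names, and the retention metric are illustrative assumptions, chosen only to show the clean/deduplicate/summarize sequence the article recommends.

```python
from collections import defaultdict

# Hypothetical raw export: whitespace noise and a duplicate student row,
# as often seen when combining student-system extracts.
raw = [
    {"student_id": " S1 ", "cohort": "2022", "retained": "yes"},
    {"student_id": "S1",   "cohort": "2022", "retained": "yes"},  # duplicate
    {"student_id": "S2",   "cohort": "2022", "retained": "no"},
    {"student_id": "S3",   "cohort": "2023", "retained": "yes"},
]

# Prepare: trim whitespace, normalize values, deduplicate on student_id.
cleaned = {}
for row in raw:
    sid = row["student_id"].strip()
    cleaned[sid] = {
        "cohort": row["cohort"].strip(),
        "retained": row["retained"].strip().lower() == "yes",
    }

# Combine and summarize: retention rate per cohort.
totals = defaultdict(lambda: [0, 0])  # cohort -> [retained_count, total]
for rec in cleaned.values():
    totals[rec["cohort"]][1] += 1
    if rec["retained"]:
        totals[rec["cohort"]][0] += 1

rates = {cohort: kept / n for cohort, (kept, n) in totals.items()}
print(rates)  # {'2022': 0.5, '2023': 1.0}
```

Deduplicating before aggregation matters here: without it, student S1 would be counted twice and the 2022 retention rate would be inflated.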
This article describes how metadata management is integral to effective mashup; the platform allows users to edit metadata and manage mashup capabilities without coding. It explains that metadata editing enables reuse of queries, enforces consistent naming conventions, and supports self-service mashup with governed models. It highlights how the underlying Data Block technology permits "Lego-style" assembly of fields and transformations into reusable mashup assets. The piece also details how organizations can reduce the IT workload and accelerate dashboard creation using controlled metadata and mashup tools. Finally, it invites readers to evaluate how a metadata-driven mashup capability can become a competitive advantage in analytics delivery.