This is the continuation of the transcript of a Webinar hosted by InetSoft on the topic of "Agile BI: How Data Virtualization and Data Mashup Help". The speaker is Mark Flaherty, CMO at InetSoft.
This kind of Web data automation has not only reduced back-office waiting time and improved operational efficiency, but is also giving the enterprise the ability to offer something new to its customers. The same idea extends to SaaS, Web, and cloud applications. Let's now answer the most common questions.
Performance is not a simple thing; it depends on the context. Are we talking about big data sets? Are we talking about lots of people accessing the same source at the same time, with different latencies? Now I am going to quickly address a few of these issues.
The other big area of questions is security, but let me just hit on performance really quick. There are a lot of strategies to address performance, and we have written a performance whitepaper that goes into this in greater detail, but they include optimizing queries, using a combination of caching and column-based technologies, and employing a scheduler to balance latencies. All of these are employed in what we call the data grid cache. It also does things like automatically choosing the right join method to best implement a query.
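To make that concrete, here is a minimal Python sketch of the general idea behind cost-based join-method selection. The function, the row-count threshold, and the strategy names are illustrative assumptions, not InetSoft's actual internals.

```python
# Hypothetical sketch of cost-based join-method selection; not the
# actual data grid cache implementation.

def choose_join(left_rows: int, right_rows: int, right_indexed: bool) -> str:
    """Pick a join strategy from rough table statistics."""
    SMALL = 10_000  # assumed threshold for building an in-memory hash table
    if min(left_rows, right_rows) <= SMALL:
        return "hash join"               # hash the smaller side, probe with the larger
    if right_indexed:
        return "index nested-loop join"  # probe the index once per outer row
    return "sort-merge join"             # sort both sides, then merge

print(choose_join(5_000_000, 2_000, right_indexed=False))  # -> hash join
```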
And you get a lot of information through traces, etc. that lets you manually override or tweak the automated query performance optimizations. I used the word query loosely here; it is not meant to be just a SQL query. It could be an XML query, or other ways of accessing the information. The important basic concept, though, is that we are not limited to virtual real-time access. You can also leverage caching of different types, along with schedulers, and combine them into different strategies, such as a trigger-based cache or a time-based cache. Of course, caching introduces some storage again, but definitely much less than what you would otherwise use.
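As an illustration of those two strategies, here is a small Python sketch of a cache that can be refreshed either on a time-to-live or when a source event invalidates it. The CachedQuery class and its fetch callback are hypothetical, not a real InetSoft API.

```python
import time

# Illustrative sketch of the two refresh strategies described above.

class CachedQuery:
    def __init__(self, fetch, ttl_seconds=None):
        self.fetch = fetch        # callable that runs the real query
        self.ttl = ttl_seconds    # None = refresh only when triggered
        self.value = None
        self.loaded_at = None

    def invalidate(self):
        """Trigger-based refresh: a source event marks the cache stale."""
        self.loaded_at = None

    def get(self):
        stale = (
            self.loaded_at is None
            or (self.ttl is not None and time.time() - self.loaded_at > self.ttl)
        )
        if stale:                 # time-based refresh via the TTL
            self.value = self.fetch()
            self.loaded_at = time.time()
        return self.value

# Usage: a time-based cache refreshed at most every five minutes.
sales = CachedQuery(lambda: "SELECT ... (result set)", ttl_seconds=300)
print(sales.get())
```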
The other issue is security and access rights. We implement data security at three levels: at the source level; at the data model level, where you have the virtual canonical models and can control who can change them, which is very important for data governance; and at the access level, where the users are actually accessing information. That needs to integrate with LDAP, single sign-on, etc.
So, to quickly paint a picture of what you will see: at the source level, there are ways to have single sign-on, and module-level communications can all be encrypted. At the view level, you can provide authentication and granular access control. You can do column- and row-based masking. You can say certain people can view the whole thing, certain people can drill down, and certain people can only see summary data but not the detail data, and all of that can be integrated with roles, groups, and users in LDAP or Active Directory.
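To show what row- and column-based masking might look like in principle, here is a minimal Python sketch. The policy table and role names are invented for illustration; in a real deployment the roles would come from LDAP or Active Directory as described above.

```python
# Hypothetical role-driven row- and column-level masking.

POLICIES = {
    "analyst":  {"columns": {"name", "region", "revenue"}, "row_filter": None},
    "regional": {"columns": {"name", "region", "revenue"},
                 "row_filter": lambda row: row["region"] == "EMEA"},
    "viewer":   {"columns": {"region", "revenue"}, "row_filter": None},
}

def apply_security(rows, role):
    """Drop columns the role cannot see and filter rows it cannot access."""
    policy = POLICIES[role]
    visible = policy["columns"]
    keep = policy["row_filter"] or (lambda row: True)
    return [{k: v for k, v in r.items() if k in visible}
            for r in rows if keep(r)]

data = [{"name": "Acme", "region": "EMEA", "revenue": 12.5},
        {"name": "Birch", "region": "APAC", "revenue": 8.1}]
print(apply_security(data, "regional"))  # EMEA rows only, all columns
print(apply_security(data, "viewer"))    # all rows, no name column
```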
Separately, on the governance side, the metadata and the views, including who can change and modify the views, can also be governed using security. So I generally refer to these as source-level security, model-level security, and access control with granular authentication. Again, this has been very much thought through and implemented in many places. Data quality, reliability, and metadata have all been considered.
So in summary, data virtualization and data mashup, as you have seen through the examples, are technologies that can work together to solve both informational and transactional application problems. The value they provide is better quality information to the business. They allow you to quickly integrate discrete data silos, access previously untapped information that has a lot of latent value, and then provide all of that combined information in a unified access mode that is close to real time.
Peat harvesting enterprises operate at the intersection of heavy operational logistics and growing environmental scrutiny, extracting, drying, storing, and transporting a bulky, slow-renewing resource across often remote boglands. Because peatlands are major carbon stores and extraction has well-documented ecological and regulatory consequences, harvesters must track not only yield and inventory but also site conditions, water table metrics, restoration progress, and compliance records. Recent reporting highlights both tightened regulation and active restoration programs in key peat-producing countries, making accurate, auditable data flows essential for operators.
StyleBI helps a peat harvester create a single pane of operational truth by mashing up data from heterogeneous systems, such as field telematics (harvester GPS and yield sensors), ERP and inventory, weather and hydrology feeds, satellite or drone imagery, and regulatory permit databases, without waiting for heavyweight ETL or warehouse projects. Its data-mashup engine and web-based designer allow technical users to build reusable “data blocks” and give business users immediate self-service access to blended datasets; that means an operations manager can combine last week's machine yields with current water table readings and scheduled restoration tasks in minutes, not months.
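As a rough illustration of that kind of blend, here is a small Python sketch (using pandas) that joins harvester yields with water table readings on a shared parcel key, the way a reusable data block would. All of the field names and values here are assumptions for illustration, not StyleBI's schema.

```python
import pandas as pd

# Hypothetical feeds standing in for two of the sources described above.
yields = pd.DataFrame({            # e.g. from harvester telematics
    "parcel_id": ["P-01", "P-02", "P-03"],
    "yield_t":   [42.0, 35.5, 48.2],
})
hydrology = pd.DataFrame({         # e.g. from a water-table sensor feed
    "parcel_id": ["P-01", "P-02", "P-03"],
    "water_table_cm": [-38, -22, -45],
})

# Blend the sources on the shared key.
blended = yields.merge(hydrology, on="parcel_id")

# Flag parcels where a high water table threatens drying quality
# (threshold is an assumed example).
blended["at_risk"] = blended["water_table_cm"] > -30
print(blended)
```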
With fused data available in interactive dashboards, harvesters can operationalize decisions: dynamically reassign cutters to parcels where moisture forecasts threaten drying quality, trigger preventive maintenance when telematics show vibration patterns correlated with lower throughput, or optimize routing to reduce haulage costs and emissions. StyleBI’s support for near-real-time connections and incremental caching enables these views to refresh frequently while still remaining performant—so daily production standups reflect live telemetry and the compliance team can pull time-stamped evidence for regulators without manual reconciliation. These capabilities shift the company from reactive firefighting to proactive, data-driven scheduling and asset management.
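For a sense of how incremental caching can keep such views fresh without full reloads, here is a minimal sketch, assuming a hypothetical fetch_since() source adapter; it is not StyleBI's actual refresh mechanism.

```python
from datetime import datetime, timezone

def fetch_since(cutoff):
    """Stand-in for a source query like:
    SELECT * FROM telemetry WHERE ts > cutoff."""
    return []  # newly arrived rows would be returned here

class IncrementalCache:
    """Append only the rows newer than the last refresh."""
    def __init__(self):
        self.rows = []
        self.high_water = datetime(1970, 1, 1, tzinfo=timezone.utc)

    def refresh(self):
        new_rows = fetch_since(self.high_water)
        self.rows.extend(new_rows)
        if new_rows:
            self.high_water = max(r["ts"] for r in new_rows)

cache = IncrementalCache()
cache.refresh()  # called on a schedule, e.g. every minute
```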
Beyond operations, StyleBI accelerates sustainability and stakeholder reporting—critical for peat producers who must document restoration efforts, emissions impacts, and land-use change for both regulators and buyers seeking low-risk supply chains. By combining remote-sensing change detection with on-the-ground restoration KPIs and carbon accounting tables, harvesters can produce repeatable, auditable reports and visual narratives that support permits, voluntary certification, or buyer requirements. Additionally, embedded ad-hoc analysis and multi-format publishing mean that the same mashup can produce an executive KPI dashboard, a detailed paginated report for auditors, and CSV extracts for third-party carbon verifiers with minimal rework.
In practice, the value proposition is simple but potent: StyleBI reduces cycle time from data to decision, lowers dependence on central IT for bespoke pipelines, and provides the transparent instrumentation peat harvesters need to balance production goals with environmental and regulatory obligations. For a sector where wins hinge on timely moisture windows, fuel-efficient logistics, and credible restoration claims, the agility afforded by data mashup and self-service analytics can translate directly into higher yield quality, lower costs, and stronger social license to operate.