That’s the first step in a data virtualization process, to create this normalized or base model of the disparate data. The second step is to transform it, improve its quality, and integrate it into new data models. The third step is to expose this information as a virtual relational database or as a Web service, such as an XML feed, or something that can be accessed as a Java Object, a Java Portlet, or a SharePoint Web Part. The reason you want to do that is so that these data models can be reused to serve multiple applications.
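The three steps can be sketched in code. This is a minimal, illustrative sketch, not any particular product's API; all function and field names here are hypothetical.

```python
# Step 1: normalize disparate sources into a common base model.
def to_base_model(raw_row, field_map):
    """Rename source-specific columns to canonical base-model names."""
    return {canonical: raw_row[source] for source, canonical in field_map.items()}

# Step 2: transform and integrate base rows into a new, cleaner data model.
def to_business_model(base_row):
    row = dict(base_row)
    row["name"] = row["name"].strip().title()   # simple data-quality improvement
    return row

# Step 3: expose the model through one reusable query interface that any
# consumer (report, web service, portlet) can share.
def query(rows, predicate):
    return [r for r in rows if predicate(r)]

# Usage: two sources with different column names feed one shared model.
crm = [{"cust_name": "  alice smith ", "region_cd": "EU"}]
erp = [{"CustomerName": "bob jones", "Region": "US"}]
base = [to_base_model(r, {"cust_name": "name", "region_cd": "region"}) for r in crm] \
     + [to_base_model(r, {"CustomerName": "name", "Region": "region"}) for r in erp]
model = [to_business_model(r) for r in base]
print(query(model, lambda r: r["region"] == "EU"))
# → [{'name': 'Alice Smith', 'region': 'EU'}]
```

The point of the third layer is reusability: the dashboard, the XML feed, and the portlet all call the same `query` interface instead of each re-implementing the source plumbing.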
You are basically providing virtualized access. Now at runtime, any external application would call the dashboard or report created in this data mashup platform. The platform would understand the query, optimize it, and decide, either automatically or at design time, whether to pull real-time or cached data, in which case a scheduler is invoked to pre-fetch the data.
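The cache-or-live decision described above can be sketched as follows. This is a hedged sketch under simple assumptions (a freshness window in seconds, a scheduler that calls `prefetch`); the class and method names are illustrative, not a real product API.

```python
import time

class VirtualDataLayer:
    """Serve cached data when it is fresh enough, else query the source live."""

    def __init__(self, fetch_live, ttl_seconds):
        self.fetch_live = fetch_live      # function that queries the source system
        self.ttl = ttl_seconds            # how long a cached result stays fresh
        self.cache = {}                   # query -> (timestamp, rows)

    def prefetch(self, q):
        """Called by a scheduler to refresh the cache ahead of demand."""
        self.cache[q] = (time.time(), self.fetch_live(q))

    def run(self, q):
        entry = self.cache.get(q)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]               # fresh cached result
        return self.fetch_live(q)         # fall through to real-time access

# Usage: the scheduler has pre-fetched "sales"; "orders" goes to the source.
layer = VirtualDataLayer(fetch_live=lambda q: [f"live:{q}"], ttl_seconds=60)
layer.prefetch("sales")
print(layer.run("sales"))    # served from cache
print(layer.run("orders"))   # no cache entry, real-time query
```

The design choice here mirrors the text: the cache complements virtualization rather than replacing it, so a cache miss degrades gracefully to a live query instead of failing.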
You are not doing a full-blown replication of a data store. You are only selectively using caches or a scheduler to complement virtualization. There are a lot of optimization techniques, like push-down delegation, asynchronous access, parallel access, and automatic selection of join types, which we will touch on briefly later, if we have time.
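Of the techniques just listed, push-down delegation is the easiest to illustrate: instead of pulling a whole table and filtering in the mashup layer, the virtualization layer rewrites the filter into the source's own SQL so the database does the work. A minimal sketch, with illustrative names:

```python
def build_query(table, filters):
    """Push filters down into the generated SQL instead of applying them
    locally after fetching every row from the source."""
    sql = f"SELECT * FROM {table}"
    if filters:
        clauses = " AND ".join(f"{col} = ?" for col in filters)
        sql += f" WHERE {clauses}"
    return sql, list(filters.values())

# Usage: the source database evaluates the WHERE clause, so only matching
# rows ever cross the wire to the virtualization layer.
sql, params = build_query("orders", {"region": "EU", "status": "open"})
print(sql)     # SELECT * FROM orders WHERE region = ? AND status = ?
print(params)  # ['EU', 'open']
```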
Finally, from the management and monitoring perspective, since you are now using this as your virtual data layer, you need to understand governance and metadata. How are my different data models coming together? Do I have canonical models that these applications are using? How are they going to fit with the physical models? What is the change impact analysis? You use it to propagate changes to the data models in terms of security and control.
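Change impact analysis over model metadata amounts to walking a dependency graph: given which logical models and reports depend on which physical tables, find everything affected when a physical model changes. A small sketch, with an illustrative graph:

```python
from collections import deque

# Hypothetical metadata: each asset maps to the assets that depend on it.
depends_on = {
    "customer_table": ["customer_model"],                     # physical -> logical
    "customer_model": ["sales_dashboard", "churn_report"],    # logical -> consumers
}

def impact(changed, graph):
    """Breadth-first walk of everything downstream of a changed asset."""
    affected, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return sorted(affected)

print(impact("customer_table", depends_on))
# → ['churn_report', 'customer_model', 'sales_dashboard']
```

This is the mechanism behind propagating a change: once you know every affected model and report, you can re-validate security and access control on each one.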
All of those things are also part of what data virtualization platforms have to do. With that understanding, the best in class in this category really need to realize value from all data types. They need to provide flexible integration options for virtualization that minimize data replication. But they should not be so rigid that they don’t allow for replication when it is called for, using either caching strategies or scheduled pre-fetch strategies.
The data mashup tool also needs to integrate with a lot of the common enterprise architecture infrastructure, such as LDAP for security, single sign-on, other modeling tools, etc. There have to be performance and scalability options combined with governance and flexibility features.
Now that we have a sense of what data virtualization is, at least at a high level, let's go back and understand how it fits into enterprise architecture, and we will look specifically at some customer examples.
We already touched on several of these points. One way to look at this is that it isn’t a matter of what type of application we are accessing, whether SOA applications, transactional applications, or the more persistent data stores. The data mashup platform is providing a unifying effect across all of these types of applications.
One thing we are going to look at is how this is just one representation of the unification of potential users of three big data blocks: operational, transactional, and informational. Within that I have given a couple of examples. It could be a BPM-oriented application that is highly machine-driven, like provisioning in a telco. Or it could be a human-driven business process like claims processing.
Customer service is a very common function that cuts across all industries. How do you talk to the customer, take an order, up-sell, cross-sell, fix a technical problem, etc.? But these activities all involve dealing with source systems that are transactional systems in one form or another. And they typically may use a BPM or EII platform and a messaging or transactional kind of system. Then over on the other side, you have more informational applications. Informational applications could be business intelligence types of applications that provide contextual information to the transactional application.