InetSoft Webinar: Evolving a Rationalized Set of Canonical Data Models

Below is the continuation of the transcript of a Webinar hosted by InetSoft on the topic of "Managing Data Complexity." The presenter is Mark Flaherty, CMO at InetSoft.

Mark Flaherty (MF): The effect here is that, because we didn't build our environment 30 years ago in anticipation of the many different ways the business was going to grow and change, we ended up with this complexity and variation. But once we understand that those variations can exist, we want to reduce the risk of doing the same things over and over again - replicated functionality, replicated work, rework - by understanding where those differences are.

Once we assess those variances, we can start to migrate towards a more standard environment. We can look at what types of data standards we can use for bubbling up that chain of definition, from the business term to the data element concepts, to the uses of the data elements, the conceptual domains, and the value domains.
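As a sketch of that chain of definition (the layering echoes the ISO/IEC 11179 metadata registry model; the class and field names here are illustrative, not taken from any particular tool), here is one way it might be modeled:

```python
from dataclasses import dataclass, field

@dataclass
class ValueDomain:
    """The set of permitted values for a data element (e.g., status codes)."""
    name: str
    permitted_values: list[str] = field(default_factory=list)

@dataclass
class ConceptualDomain:
    """The abstract meaning of the values, independent of representation."""
    name: str
    value_domains: list[ValueDomain] = field(default_factory=list)

@dataclass
class DataElementConcept:
    """A concept (e.g., 'customer status') not yet bound to a representation."""
    business_term: str          # the business-facing term at the top of the chain
    conceptual_domain: ConceptualDomain

@dataclass
class DataElement:
    """A concrete use: a data element concept bound to one value domain."""
    concept: DataElementConcept
    value_domain: ValueDomain

# Bubbling up the chain: a business term traces down to its representation.
status_values = ValueDomain("status_code", ["PROSPECT", "EVAL", "PAID"])
status_domain = ConceptualDomain("customer lifecycle stage", [status_values])
status_concept = DataElementConcept("customer status", status_domain)
crm_status = DataElement(status_concept, status_values)
```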

Essentially, we evolve a rationalized set of canonical data models, and when we see that our applications could, without a significant amount of effort or intrusion, migrate into these canonical models, we can merge our representations into the standardized format.
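A minimal sketch of that merge step, assuming two applications hold customer records in slightly different shapes and a canonical model we have already settled on (all record and field names here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class CanonicalCustomer:
    customer_id: str
    full_name: str
    country_code: str  # e.g., ISO 3166-1 alpha-2, per the agreed value domain

# Per-application mapping functions: each migrates one local
# representation into the rationalized canonical model.
def from_billing(rec: dict) -> CanonicalCustomer:
    return CanonicalCustomer(
        customer_id=str(rec["cust_no"]),
        full_name=f"{rec['first']} {rec['last']}",
        country_code=rec["country"].upper(),
    )

def from_crm(rec: dict) -> CanonicalCustomer:
    return CanonicalCustomer(
        customer_id=rec["id"],
        full_name=rec["name"],
        country_code=rec["iso_country"],
    )

# Both sources now land in one standardized representation.
merged = [
    from_billing({"cust_no": 1042, "first": "Ada", "last": "Li", "country": "us"}),
    from_crm({"id": "1043", "name": "Bo Chen", "iso_country": "US"}),
]
```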


And then on the other hand, if we see that we can't merge the data sets, what we can do is differentiate them. If it turns out that we are actually talking about the difference between prospective customers who have a 30-day customer support contract or agreement and customers who have actually given us money, those two groups are different sets of people, and therefore we might want to differentiate them even at the lowest level in that chain of definition.

We settle on the fact that we have different business terms that need to be put into place, such as "prospective customer" or "evaluation customer." We will have different ways of qualifying our terminology so that we can employ our different standard representations.
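Where the populations genuinely differ, the opposite move applies: give each qualified term its own representation rather than forcing a merge. A hypothetical sketch (the type and field names are illustrative):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EvaluationCustomer:
    """A prospect on a 30-day support agreement; has not paid."""
    customer_id: str
    eval_start: date  # the 30-day window runs from this date

@dataclass
class PayingCustomer:
    """A customer who has actually given us money."""
    customer_id: str
    first_payment: date

# Keeping the business terms distinct at the model level prevents
# the two populations from being silently conflated downstream.
evaluator = EvaluationCustomer("E-2001", date(2024, 3, 1))
payer = PayingCustomer("C-1042", date(2023, 11, 15))
```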

In order to do this, we need to have some processes, and we need some data management tools. What types of tools do we need? Well, we need data modeling tools, because we need some mechanism for capturing our rationalized and standardized models so that we can then use them as a way of communicating what we are working on, what things look like, and how data can be shared.


We need data profiling tools, statistical analysis tools, and model evaluation tools and techniques so that we can do our analysis as part of our rationalization activity. And we need a metadata repository that becomes the central platform for capturing that knowledge and communicating what we have learned, not just to the technical people, but to everybody in the organization.
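As a sketch of what a profiling pass contributes to the rationalization analysis (a minimal, hand-rolled example; real profiling tools report far more), the idea is to summarize each column so candidate data elements can be compared across systems:

```python
from collections import Counter

def profile_column(values: list) -> dict:
    """Summarize one column: the basic statistics a rationalization
    exercise uses to compare candidate data elements across systems."""
    non_null = [v for v in values if v is not None]
    return {
        "count": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
        "top_values": Counter(non_null).most_common(3),
    }

# Comparing the same notional column from two applications can reveal
# whether they share a value domain or merely share a name.
print(profile_column(["PROSPECT", "PAID", "PAID", None, "EVAL"]))
print(profile_column(["prospect", "paying", "paying", "trial", "trial"]))
```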

And if I had to make one parenthetical comment associated with the content that I am presenting today, I would boil it all down to this: the use of a data model is no longer limited to a DBA, or database administrator, or a data modeler. Rather, the data model is a framework for communicating the value of that information asset - what we were talking about before, using data as an asset or treating data as an asset.

The way to do that is to manage our assets in a structured manner. We keep track of our assets. We look at how they are being used to run the business and to help the business. We do that for any of our real assets, our hard assets, and we should do that for information assets if we are going to treat them that way.
