InetSoft Webinar: How Data Management Professionals Can Manage Data Complexity

Below is the transcript of a Webinar hosted by InetSoft on the topic of "Managing Data Complexity." The presenter is Mark Flaherty, CMO at InetSoft.

Mark Flaherty: The role of data management professionals has become increasingly challenging with multiple audiences, multiple business units and departments, multiple tools and applications and multiple databases. In this Webinar we will look at these challenges and explore how organizations can better manage their data complexity. You will also discover how a consistent, integrated view of critical data assets can turn data complexity into information advantage. When both the business and technical stakeholders have a common view of information, they can visualize the power of their data.

What I would like to talk about first is the fact that the growth of data, the growth of data volumes, the acceleration in growth of data volumes, has far outpaced our ability to effectively consume that information or that data and transform it into actionable information. In fact, these two concepts are essentially at loggerheads, and what I would like to do is look at how to get some organization out of this growing complexity.

These complexity trends can overwhelm us, but I think that by using good techniques, good processes, and the appropriate types of tools for organizing information, we can get some advantage out of the data that, as I said before, threatens to overwhelm us. So I want to start out with this concept of the information explosion.


Here are some factoids I’ve gathered. The first one: the sizes of the largest data warehouses triple approximately every two years. Even today that might seem tame. I have talked to some large organizations, and when I mention data warehouses in the multi-terabyte range, some people in the front row giggle at how puny those datasets seem.

We also have large growth in unstructured data. In fact, a recent article I read talked about how, at the current rate of growth, the amount of digital information floating around the world ten years from now is expected to be on the order of 35 trillion gigabytes. That is a huge, huge amount of information. So the question then becomes: what do we want to do with that data?

In fact, there is a growing need to repurpose our information. It used to be the case that we would build applications intended to meet some functional specification for operational transaction processing. The data could be processed in batch, and we would be satisfied with grouping it together by day. Now we are taking the same data that has been entered at one end of the organization and using it almost immediately for analysis at another point in the organization.

This has meant speeding up operational activities all the way across the organization, even on the other side of the world. Now we have sales data pouring in and being analyzed in real time, which is then fed into call center scripts at inbound call centers to give reps real-time, relevant customer information.