InetSoft Webinar: Cube of Data in Server Memory

This is the continuation of the transcript of the DM Radio show "Avoiding Bottlenecks and Hurdles in Data Delivery". InetSoft's Principal Technologist, Byron Igoe, joined industry analysts and other data management software vendors for a discussion about current issues and solutions for information management.


Eric Kavanagh: Yeah that’s interesting. Philip, have you come across that terminology, or that kind of thing recently?

Philip Russom: Well, you mentioned materialized views, and that technology has been with us for decades, right? It's just that in its early incarnations, say in the mid-90s, it was a fairly limited technology, and performance in particular was an issue. Refreshing the view, instantiating data into the view, was kind of slow.

Thank goodness those speed bottlenecks have been cleared as the database vendors themselves have worked out virtual tables, and you have standalone vendors working on this as well. Denodo and Composite have both made contributions there. So yeah, this technology is something a lot of us wanted in the 90s. It just didn't work very well then, and luckily today it works pretty well.
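As a rough illustration of that instantiate-and-refresh cycle, the Python sketch below simulates a materialized view using SQLite, where an ordinary table stands in for the view and is rebuilt from a hypothetical orders table on each refresh. The table and column names are illustrative only.

```python
# A minimal sketch of the materialized-view idea using sqlite3.
# SQLite has no native materialized views, so a plain table stands in
# for one and is rebuilt on each refresh -- the same "instantiate the
# data, then keep it refreshed" cycle discussed above. The "orders"
# schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("East", 120.0), ("West", 75.5), ("East", 40.0)])

def refresh_sales_by_region(conn):
    """Re-instantiate the precomputed view from the base table."""
    conn.execute("DROP TABLE IF EXISTS sales_by_region")
    conn.execute("""
        CREATE TABLE sales_by_region AS
        SELECT region, SUM(amount) AS total
        FROM orders
        GROUP BY region
    """)
    conn.commit()

refresh_sales_by_region(conn)
print(conn.execute("SELECT * FROM sales_by_region").fetchall())
# Queries now read the small precomputed table instead of re-aggregating
# the base data; the trade-off is keeping the refresh fast enough.
```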

Ian Pestel: Exactly, or even extending beyond these individual databases, where you could take a mashup across distributed resources, pulling in feeds and pulling various databases together, and then materialize that as well.

Jim Ericson: Yeah, let me provide a connection to that. We haven't had a chance today to talk about in-memory databases, and I am finding in-memory databases to be a way to get around certain performance bottlenecks, so I want to introduce that to our talk today. I am also seeing in-memory databases used for things like operational business intelligence and performance management.

It might involve OLAP; imagine a cube of data in server memory. It's refreshed very frequently, maybe some of it in real time, so that business managers can refresh their management dashboards based on that data and manage in a very granular, very short-timeframe fashion. And this is made possible by some of the advancements in hardware.
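As a rough sketch of what such a cube in server memory might look like, the Python example below keeps a small (region, product) rollup in a dictionary and rebuilds it from a placeholder operational feed. The fetch_recent_facts function and the dimensions are hypothetical, not any particular product's API.

```python
# A minimal sketch of a "cube of data in server memory": a small
# dimensional rollup kept in a dict and rebuilt on a refresh cycle.
from collections import defaultdict

def fetch_recent_facts():
    # Stand-in for pulling the latest operational rows from whatever
    # source feeds the dashboard (database, API, message feed, etc.).
    return [
        {"region": "East", "product": "A", "revenue": 100.0},
        {"region": "East", "product": "B", "revenue": 60.0},
        {"region": "West", "product": "A", "revenue": 80.0},
    ]

def build_cube(facts):
    """Aggregate facts by (region, product) entirely in memory."""
    cube = defaultdict(float)
    for row in facts:
        cube[(row["region"], row["product"])] += row["revenue"]
    return cube

cube = build_cube(fetch_recent_facts())
print(cube[("East", "A")])  # a dashboard query becomes a dict lookup

# In a real deployment this rebuild would run on a short schedule, or
# incrementally, so the management dashboards stay near real time.
```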

A lot of this is Moore’s Law in action. A lot of the technology advancements I was talking about earlier are actually at the hardware level, as we have gone from 32-bit to 64-bit systems. Companies are finally replacing the old 32-bit equipment and moving to 64-bit. They have the giant memory space of 64-bit, so they are able to do a lot in memory. And when I say federation, I mean basically this in-memory database is really federated, and it's virtualized, right? I am seeing that memory is a way to get around bottlenecks.

Philip Russom: Yeah, that kind of system is part of that virtualized layer. When I am doing that intermediate processing, the question is what do I store, and the problem is how do I manage it, right? So the goal here is to provide a layer of technology, a semantic place in the stack, so I can store the data. But how do I manage it? How do I create it easily? How do I flexibly change the schema? How do I get information incrementally in and out of it?
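Those management questions can be pictured with a small sketch, assuming a hypothetical in-memory layer that accepts records incrementally, lets the schema grow as new fields arrive, and bounds how much it keeps. None of the names below describe a specific product.

```python
# A minimal sketch of an intermediate in-memory layer: rows arrive
# incrementally, schema drift is tolerated (new columns just appear),
# and the store is bounded so memory stays manageable.
from collections import deque

class InMemoryLayer:
    def __init__(self, max_rows=100_000):
        self.rows = deque(maxlen=max_rows)   # bounded so memory stays managed
        self.columns = set()                 # schema grows as new fields arrive

    def append(self, record: dict):
        """Incrementally add a record; unknown fields extend the schema."""
        self.columns.update(record.keys())
        self.rows.append(record)

    def query(self, predicate):
        """Read incrementally without copying the whole store."""
        return (r for r in self.rows if predicate(r))

layer = InMemoryLayer()
layer.append({"sensor": "s1", "value": 3.2})
layer.append({"sensor": "s1", "value": 3.4, "unit": "bar"})  # schema change
print(layer.columns)
print(list(layer.query(lambda r: r["value"] > 3.3)))
```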

So it becomes an intermediate layer where you are going to be able to store data, and that's very fast, but you still have to manage it, and that's the critical element people sort of forget about when they are building these very fast in-memory databases. You still have to manage it somehow in terms of what's going in there and how you are going to keep it refreshed.

Jim Ericson: Yeah, speaking of data management, I do want to point to another piece of technology that has really improved the bottleneck problem as well as the big data problem, and that is storage. Storage just keeps getting bigger in capacity, and it gets smarter. We can do more processing down at the storage level without having to drag terabytes of data over the Ethernet cable. It also gets cheaper as it gets better.

It's amazing. So one of the ways we are finding bottlenecks alleviated is that storage itself is fast to read from and write to. I/O is not as bad as it used to be, processing is closer to the data, and so forth. So I don't know, are you guys seeing storage as a useful, helpful thing for speeding this up?
