Now I am moving to the second key issue, really drilling into this idea of data discovery. It's the biggest thing going on in my market. I have been working in this space for 10 to 11 years now, and for the last five or six of those years, the disruptive impact of data discovery has been the most significant trend in the market.
And it's been very interesting to watch the success that vendors like us, QlikView, and Tableau are having relative to the traditional vendors. This has really been quite a David and Goliath battle, and I think it's becoming very clear that David is winning, or even has won. I can almost speak in the past tense here; the market is moving on in this direction.
So let's examine why that is. Ten years ago the market was dominated by semantic-layer-based BI tools: Cognos, Business Objects, OBIEE (which was actually Siebel Analytics at the time), and MicroStrategy. These were the top BI vendors. At the heart of each was a semantic layer, a place where you defined the dimensions and measures that describe your business, and all the reports and dashboards you built were built from that semantic layer.
Again, this is great from a governance, organization, and information-consistency standpoint. Then along came new applications like ours. One obvious difference was the new interface; the other was the interactive visualization. It was just visually much more appealing than any other tool.
Some people thought, oh, this is why it's becoming so successful: it's really just the eye candy that people are after. Then they look at it a little further and say, oh, actually it's this data mashup engine they have. You look at this thing, and you have people putting hundreds of millions of rows of detail data into not a lot of memory.
I mean, just a few thousand dollars' worth of memory is a very modest investment, and now all of a sudden we are crunching hundreds of millions of rows of data. That was almost unheard of at the time. To do that, you had to spend huge amounts of money on a column-based database architecture.
But here you have a case where a smaller company, without spending as much money, could analyze hundreds of millions of rows of data by loading the data into memory. Maybe that's it; maybe it's the combination of visualization and memory. But the real reason I think most people were buying these tools was this notion of data discovery.
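To make the "lots of rows in modest memory" point concrete, here is a minimal sketch of why in-memory columnar analysis is so cheap. This is not InetSoft's engine; it is an illustrative example using NumPy, with a row count scaled down so it runs in seconds, and all column names invented for the demo.

```python
import numpy as np

N = 5_000_000  # scaled down from "hundreds of millions" for a quick demo
rng = np.random.default_rng(0)

# Columnar layout: each column is one contiguous typed array,
# so a numeric column costs just 8 bytes per row.
sales = rng.uniform(10.0, 500.0, size=N)    # float64 "sales amount" column
region = rng.integers(0, 4, size=N)         # small-integer "region" column

print(f"memory for the sales column: {sales.nbytes / 1e6:.0f} MB")

# Aggregation scans the column directly -- no row-by-row object overhead.
totals = np.bincount(region, weights=sales, minlength=4)
for r, t in enumerate(totals):
    print(f"region {r}: {t:,.0f}")
```

At 8 bytes per value, even 500 million rows of one numeric column is about 4 GB, which is why a few thousand dollars of RAM was enough to do what previously required an expensive column-store appliance.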
What I mean by data discovery is rapid prototyping, being able to blend different data sets together quickly. Remember, earlier on in this call I was talking about that notion of a person in the business saying, hey, we subscribed to this marketing campaign tool, and we wanted to bring in our sales data and just see what kind of impact we are having. An IT person would typically say it would take six months to generate this analysis.
We realized that a lot of companies were in that situation, so we would come in and just say, hey, give us some of that data. We were able to mash it up with the customer's own data very quickly, in some cases within the day, and their business person is looking at the IT person with an incredulous look, saying, how come this person was able to do this in a day, and you are telling me six months?
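The mashup described above boils down to joining two sources on a shared key and computing metrics across them. Here is a minimal sketch of that kind of blend using pandas; the data sets, column names, and figures are all hypothetical stand-ins for the campaign-tool extract and the customer's sales records.

```python
import pandas as pd

# Hypothetical extract from a subscribed marketing-campaign tool.
campaigns = pd.DataFrame({
    "campaign_id": ["C1", "C2", "C3"],
    "channel": ["email", "search", "social"],
    "spend": [5000, 12000, 3000],
})

# Hypothetical internal sales records, tagged with the sourcing campaign.
sales = pd.DataFrame({
    "order_id": [101, 102, 103, 104, 105],
    "campaign_id": ["C1", "C2", "C2", "C3", "C1"],
    "revenue": [900, 2500, 1800, 400, 1100],
})

# The "mashup": total revenue per campaign, joined to campaign spend.
blended = (
    sales.groupby("campaign_id", as_index=False)["revenue"].sum()
         .merge(campaigns, on="campaign_id")
)
blended["roi"] = blended["revenue"] / blended["spend"]
print(blended)
```

The point of the anecdote is that this join-and-aggregate step is a few lines of work once the data is loaded, which is why a prototype could come back in a day rather than six months.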
Well, I think the first answer is that there is a huge difference between a prototype and something that's production-ready. But I also think we need to look at these tools and why they are able to do rapid prototyping so quickly.