Nothing is more appropriate than starting the week following ‘Memorial Day’ in the US with some “In-Memory” concepts. Every week, we talk to thousands of companies looking for solutions to their “In-Memory” headaches. My CEO was in NYC last Wednesday and I was in SF on Thursday to discuss this – you will find our presentation and video at the bottom of this post. In the meantime, here is what we see:

Analytics and Big Data: The Perfect Storm?

The Business Analytics world hasn’t seen much innovation over the past two decades. Even though the costs of storage, networking and bandwidth have plummeted, the cost of analysis hasn’t moved. Most companies still rely on antiquated software or approaches to gain insights from the data they store. A poor analytics backbone carries a huge financial cost and, worse, it prevents companies from anticipating, let alone reacting to, market opportunities and threats. Competitive advantage is not determined by the size of the data stored, but rather by the speed at which this data is used. For more, check out our “Elephant and the Rider” analogy for Big Data here.

Don’t run “Out of Memory”

The most significant technology advance of the past decade has been the popularization of In-Memory technology, started by niche analytics vendors. The “In-Memory” movement saw a huge uptick in 2008 when industry giants – from Microsoft and IBM to SAP and Oracle – started to build upon this trend. Most of these vendors now offer in-memory options and together command 75% of the market’s revenue.

The In-Memory approach did provide speed advantages. But, because it was designed around RAM, In-Memory hit a technical ceiling: it couldn’t easily cope with growing amounts of data. To work around these limitations, vendors suggested direct access to live sources or the addition of massively scalable databases. But neither solution scaled economically. Bottom line: In-Memory was great in 2008. Now, it feels like it’s “2000 and late”.
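
To make the scaling problem concrete, here is a back-of-envelope sketch of why a purely RAM-resident model gets expensive fast. The row count, column count and compression ratio below are our own illustrative assumptions, not any vendor’s sizing guide:

```python
# Rough, illustrative sizing: a purely RAM-resident analytics engine
# must hold the entire working dataset in memory at once.

ROWS = 2_000_000_000      # two billion rows (hypothetical fact table)
COLUMNS = 20              # a modest number of columns
BYTES_PER_VALUE = 8       # 64-bit values, before compression

raw_bytes = ROWS * COLUMNS * BYTES_PER_VALUE
print(f"Raw footprint: {raw_bytes / 1024**3:.0f} GB")            # ~298 GB

# Even assuming a generous 5x compression ratio, the dataset still
# demands tens of gigabytes of RAM -- and RAM was the scarce, costly
# resource that 2008-era In-Memory tools were built around.
print(f"With 5x compression: {raw_bytes / 5 / 1024**3:.0f} GB")  # ~60 GB
```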

“Pretty Big Data”

Last week, we introduced Prism 10X, the first end-to-end business analytics solution built on “In-Chip” technology. Briefly explained, “In-Chip” leverages the latest advances in CPU technology to deliver performance and capacity benefits two generations ahead of anything on the market. Think of “In-Chip” Analytics as technology that gives you the same speed and flexibility the old In-Memory technologies could offer for gigabytes – but this time on billions of rows, terabytes upon terabytes.

How?

First, a CPU’s on-chip caches are hundreds of times faster to access than RAM. When your query kernel, like Sisense’s, runs from inside the CPU, that matters. Second, a CPU-centric architecture lets software run instructions in parallel (helping with speed) and process and move data in small chunks, or “vectors” (helping with size). Third, CPU innovation is leapfrogging that of RAM, and most machines benefit from it automatically. This means that to take advantage of the “In-Chip” revolution, you most likely won’t need to buy dedicated hardware. At least in the case of Sisense, our software runs on any commodity machine.
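
As a loose illustration of the “smaller chunks” idea – not Sisense’s actual kernel, which is proprietary – here is a minimal sketch in Python/NumPy of columnar, chunked processing. NumPy’s vectorized operations run as tight loops over contiguous memory that modern CPUs can execute with SIMD instructions; the function name and chunk size are our own invention:

```python
import numpy as np

# Process a large numeric column in cache-friendly chunks rather than
# row by row, so each pass works on data already near the CPU.

CHUNK = 1 << 20   # ~1M float64 values per chunk (about 8 MB)

def chunked_sum_where(values: np.ndarray, threshold: float) -> float:
    """Sum every value above `threshold`, one chunk at a time."""
    total = 0.0
    for start in range(0, len(values), CHUNK):
        chunk = values[start:start + CHUNK]
        # One vectorized pass per chunk: both the filter and the sum
        # sweep contiguous memory, keeping the caches and vector units
        # busy instead of stalling on RAM.
        total += chunk[chunk > threshold].sum()
    return total

if __name__ == "__main__":
    column = np.random.rand(10_000_000)   # a 10M-row numeric column
    print(chunked_sum_where(column, 0.5))
```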

The combination of the above, coupled with an open visualization, ETL and database stack, will not only turn your machine into a “Data Monster” but will also let it work with “Pretty Big” Data, as our CEO likes to refer to it. “Pretty Big” here refers to the “sweet spot” of Big Data analysis detailed here. You can also use the expression differently and say “Pretty Big Data” altogether, referring to Sisense’s full stack and our new set of visualizations.

Lots of folks – from ZDNet to Information Management – covered our launch last week, but I thought I’d highlight a few here:

– Sisense takes data processing to the next level: Is “In-Chip” Data Processing the Next Revolution?

– Sisense’s significant entry in Big Data Analytics market: Prism 10X In-Chip Analytics Data Monster

– Start your engine…and then keep going: Sisense Announces Prism 10x

To read up on In-Memory, see our “In-Memory” Technology Guide here.

To start playing with your data now, get Sisense Prism for free here.

Presentation: “A Future for Data, Big or Small” by Bruno Aziza