BI Software Evolved
The technological breakthroughs behind Prism represent a major evolution in BI software.
Prism provides all the functionality required to quickly and easily create and share cutting-edge business intelligence applications, with no prerequisite software or third-party data warehouse. Amazingly fast query performance against hundreds of millions of rows of data from multiple combined sources is possible even on ordinary PC hardware. The system easily scales to billions of rows – realistically handling terabytes of data – on off-the-shelf PC server hardware.
If this sounds incredible, that's because it is! What makes this "magic" possible? It is a combination of distinct technologies, tied together into a killer app for the BI market.
Columnar Data Storage
Utilizing columnar data storage is not new or unique, but it nevertheless provides tremendous benefits for analytical applications such as BI. Prism stores all the data it processes in ElasticCube data repositories, which are column-based data stores containing the unified data of all combined data sources.
The primary advantage of columnar data storage is very fast access to the specific fields required for a particular query. Whereas other systems present columnar data storage as the ultimate solution, in Prism it is only the first building block of a next-generation data processing architecture.
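To make the column-oriented idea concrete, here is a minimal, illustrative sketch (not Prism's actual ElasticCube format): each field is stored as its own contiguous array, so a query that touches two fields out of fifty reads only those two arrays.

```python
class ColumnStore:
    """Toy column-oriented store: one array per field, not one record per row."""

    def __init__(self, field_names):
        self.columns = {name: [] for name in field_names}

    def append_row(self, row):
        # a "row" is scattered across the per-field arrays
        for name, value in row.items():
            self.columns[name].append(value)

    def sum_field(self, name):
        # scans a single column; the other fields are never touched
        return sum(self.columns[name])


store = ColumnStore(["region", "units", "price"])
store.append_row({"region": "EU", "units": 10, "price": 2.5})
store.append_row({"region": "US", "units": 4, "price": 3.0})
print(store.sum_field("units"))  # 14
```

An aggregate such as `sum_field` reads only the `units` array; in a row-oriented store, the same query would have to step over every field of every record.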
Just-in-Time In-Memory Processing
Prism performs all query processing on data which is loaded into memory only when it is needed for a query. This approach delivers query processing that is orders of magnitude faster than that of typical systems, which read data stored on disk. This is because accessing data in RAM is dramatically faster than seeking and reading from disk.
This approach shares elements of in-memory database (IMDB) technology (in which the entire database is loaded into memory for faster processing), but provides substantial benefits over IMDB solutions, which come with significant constraints. IMDB scalability is severely restricted by the hard limits of available system memory, even when the data is compressed: loading a massive database into memory requires a correspondingly massive amount of RAM. Loading billions of rows into a typical IMDB would require a multi-million-dollar supercomputer with terabytes of RAM! A second major disadvantage of typical IMDB solutions is the rigidity of their data model. Because of how the data is compressed and loaded, new data (whether new rows in existing tables, or new/different tables) cannot simply be added to the existing data schema. When new data must be added, a complete rebuild and recompress must be executed. This process can take hours (or even days) on massive data sets, forcing users to wait before accessing their queries, reports and dashboards.
Prism implements a unique technological solution – just-in-time in-memory processing – which leverages all the benefits of in-memory databases while eliminating the inherent scalability limits and pre-processing requirements. Prism analyzes each query before execution and selectively loads into memory only the portions of the database required for the actual query. This approach delivers the fastest possible performance with large data sets, while offering unlimited scalability, both vertically (number of records) and horizontally (number of fields). Prism's proprietary data mapping and loading technology enables the just-in-time loading of data into memory with no significant performance penalty.
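The selective-loading idea described above can be sketched as follows. This is a hypothetical design for illustration only, not Prism's implementation: columns stay on disk until a query references them, then are cached in memory for subsequent queries.

```python
import os
import pickle
import tempfile


class JitColumnStore:
    """Toy just-in-time store: columns live on disk, loaded only on first use."""

    def __init__(self, directory):
        self.directory = directory
        self.cache = {}  # columns currently resident in memory

    def write_column(self, name, values):
        with open(os.path.join(self.directory, name + ".col"), "wb") as f:
            pickle.dump(values, f)

    def _load(self, name):
        # bring a column into memory only when a query first needs it
        if name not in self.cache:
            with open(os.path.join(self.directory, name + ".col"), "rb") as f:
                self.cache[name] = pickle.load(f)
        return self.cache[name]

    def query_sum(self, field):
        # only the referenced column is loaded; the rest stay on disk
        return sum(self._load(field))


with tempfile.TemporaryDirectory() as d:
    store = JitColumnStore(d)
    store.write_column("units", [10, 4, 7])
    store.write_column("price", [2.5, 3.0, 1.0])
    print(store.query_sum("units"))  # 21
    print(len(store.cache))          # 1 (only "units" is resident)
```

After the query, memory holds exactly one column; a row-at-a-time or whole-database-in-memory design would have paid for `price` as well.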
New data, including schema-changing additions, can be added on the fly without the need for any pre-processing (see Elastic Data Structure, below).
Even when dealing with data sets consisting of millions of records, Prism's just-in-time in-memory processing delivers blazing performance on standard desktop PC hardware. When scaling up to billions of records, Prism allows an off-the-shelf PC server to deliver the performance usually expected from an in-memory database running on a supercomputer!
This impressive performance/cost ratio is particularly relevant for companies wishing to run their BI in a cloud computing environment: Prism's relatively low hardware requirements enable cloud implementations to deliver very high performance at a very low price point. Prism's data stores do not require any memory resources or CPU cycles when they are not being actively queried.
Elastic Data Structure
The incredible ease with which disparate data sources can be combined in Prism's visual data management user interface is made possible by the automated, multi-dimensional data schema technology running behind the scenes.
Prism's data storage and handling is based on an "elastic data structure," which essentially means that the database schema is changed automatically and on the fly, whenever required. Virtual data merging and multi-source data abstraction are performed automatically by the software, allowing the user to create "data mash-ups" across data from multiple sources, effortlessly. The user can instantly add new data sources, new tables or new fields at any time.
Prism is instantly available to run ad hoc queries against the newly expanded data store without re-processing or rebuilding the entire data structure. In contrast to OLAP cubes (which require large amounts of time and disk space to rebuild after a change in schema), there is no need for pre-processing, pre-aggregations or pre-calculations of any kind.
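The elastic behavior described above can be sketched in a few lines. This is an assumed design, not Prism's actual mechanism: when a new field appears mid-stream, existing rows are back-filled with nulls instead of triggering a rebuild, so the schema grows on the fly.

```python
class ElasticStore:
    """Toy elastic column store: the schema stretches as new fields arrive."""

    def __init__(self):
        self.columns = {}
        self.row_count = 0

    def append_row(self, row):
        for name in row:
            if name not in self.columns:
                # a new field appears mid-stream: back-fill prior rows with None
                self.columns[name] = [None] * self.row_count
        for name, col in self.columns.items():
            col.append(row.get(name))  # None where this row lacks the field
        self.row_count += 1


store = ElasticStore()
store.append_row({"region": "EU", "units": 10})
store.append_row({"region": "US", "units": 4, "price": 3.0})  # new field "price"
print(store.columns["price"])  # [None, 3.0]
```

No existing data is rewritten when `price` arrives; the store is immediately queryable, which is the contrast with OLAP cubes drawn above.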
Unified Analytics Engine
Prism can execute queries against a wide variety of data sources as if they were all of the same type, essentially making the individual characteristics of each physical data source unimportant. This is made possible by Prism's Unified Analytics Engine.
When Prism imports data, the Unified Analytics Engine creates a metadata layer, or abstraction layer, which is then used to formulate queries across any number of tables from any number of data sources in any number of formats. It also supports the combined querying of resident and external (live) database sources without first loading data into the database!
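A metadata layer of this kind can be sketched as a catalog that maps field names to their sources. This is a hypothetical illustration, not Prism's engine: each source registers the fields it exposes along with a fetch function, and queries are routed by field name regardless of the underlying format.

```python
class UnifiedEngine:
    """Toy abstraction layer: queries are routed by field name, not by source type."""

    def __init__(self):
        self.catalog = {}  # field name -> (source name, fetch function)

    def register(self, source_name, fields, fetch):
        for field in fields:
            self.catalog[field] = (source_name, fetch)

    def query_sum(self, field):
        # the caller never sees which source (file, database, API) answers
        source_name, fetch = self.catalog[field]
        return sum(fetch(field))


# two "sources" in different formats, standing in for a CSV file and a SQL table
csv_data = {"units": [10, 4]}
sql_data = {"revenue": [25.0, 12.0]}

engine = UnifiedEngine()
engine.register("sales.csv", ["units"], lambda f: csv_data[f])
engine.register("erp_db", ["revenue"], lambda f: sql_data[f])
print(engine.query_sum("units"))    # 14
print(engine.query_sum("revenue"))  # 37.0
```

The fetch function for a live source could issue a query on demand rather than read pre-loaded data, which mirrors the combined resident/external querying described above.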
These capabilities provide the user with unparalleled flexibility and speed in creating, executing and sharing highly-complex reports, dashboards and analytic applications over any number and variety of data sources.