Let’s say you’re an executive at a medium to large enterprise that’s gathering data from a database or two, some cloud applications, and a few dozen local files. One day you notice that you’re not seeing any meaningful reports generated from this data – or, if they are being generated, it seems to happen only once in a blue moon – despite the fact that you have a team of dedicated developers spending countless hours each month on this task alone.
Being an informed individual, you realize that the solution to your reporting issue is probably one of those newfangled business intelligence platforms everyone seems to be talking about. But after a quick Google search you find yourself overwhelmed by the seemingly endless variety of business intelligence systems on display, all promising “self-service analytics for everyone in the organization,” and other such guarantees that seem to imply that by downloading this particular BI system, all your BI issues will instantly be resolved.
But before we dive any deeper, just what is a BI system? In the broadest terms, BI software is meant to help a business better understand and examine its own data, enabling data-driven decisions and a full picture of the organization’s strengths and weaknesses. BI systems should allow users – both internal and external via embedded analytics – to get the answers they need from their data, when they need them, from self-service dashboards that are easy to use and understand, no matter the user’s level of technical expertise.
A Simple Tool Can’t Do Complex Data
While this promise is indeed somewhat plausible when dealing with data of limited scope or size, such as a few fairly straightforward Excel spreadsheets, things become more… well, complex when you’re dealing with complex data (i.e., big data or data originating from multiple disparate sources), as was the case in our example above. Here, the notion that you can get a one-size-fits-all BI tool that will solve all your data problems is far from obvious; to understand why, we must first look at what it takes to handle complex data.
Tackling Difficult Data: What You Need To Know
Generally, large or scattered data cannot simply be “fed” into any kind of BI system and be expected to immediately produce actionable results. Typically, data needs to go through several preparatory stages before it’s ready to be visualized and presented in the form of a report or a BI dashboard. We can broadly define four stages, or requirements, of effective data analysis:
- Connecting to data sources. Data is stored in various databases, cloud apps and locally, and needs to be pulled in to your BI system in the first stage. If the BI tool in question can’t connect to your sources natively, you will need additional plugins or development work.
- Data integration and ETL. When working with messy, unorganized data (such as data generated by employees or customers with no consistent rules applied to the process), or with various data sources that do not necessarily follow the same internal logic, the data will need to be replicated, modified, or enriched to produce a single version that is consistent across all your data sources (a single source of truth). As we’ve written before, data preparation is often the most difficult part of data analysis.
- Querying the data repository. Once you have all your data in one place, you need to be able to quickly scan the various tables to produce answers to your questions in a reasonable timeframe, either via OLAP or in-memory relational processing.
- Data visualization. Finally, once your data is in the system, clean, and the relevant tables have been processed, you’re ready to display the results.
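To make the four stages concrete, here is a minimal sketch of the pipeline in Python, using only the standard library. The table name, region labels, and numbers are all hypothetical, and a real deployment would use production databases and a dashboarding layer rather than an in-memory SQLite database and a text bar chart – this only illustrates the shape of the work.

```python
# Illustrative sketch of the four stages with hypothetical data.
import csv
import io
import sqlite3

# 1. Connect to data sources: here, a database table and a "local file" (CSV).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (region TEXT, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("north", 120.0), ("south", 80.0), ("north", 50.0)])

csv_file = io.StringIO("region,amount\nSouth,40\nEast,70\n")

# 2. Integration / ETL: normalize inconsistent region labels from the
#    second source so both feeds land in one consistent table.
for row in csv.DictReader(csv_file):
    db.execute("INSERT INTO orders VALUES (?, ?)",
               (row["region"].lower(), float(row["amount"])))

# 3. Query the repository: aggregate sales by region.
totals = dict(db.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"))

# 4. Visualization: a tiny text bar chart standing in for a dashboard.
for region, total in sorted(totals.items()):
    print(f"{region:>6} | {'#' * int(total // 20)} {total:.0f}")
```

Even in this toy version, notice that most of the code is stages 1–2 – pulling data in and reconciling it – which mirrors the point above that preparation, not charting, is where the effort goes.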
Each of these distinct stages is crucial to producing meaningful analyses, but the sad fact of the matter is that most existing software tools are not actually end-to-end solutions. They will typically handle one or two of the tasks detailed above, but will have very limited capabilities (if any) when it comes to the others. Some tools focus on data integration but come with very limited front-end functionality, while others are merely data visualization tools masquerading as “analytics.”
As a side note, the source of all this confusion is that “business intelligence software” has become a broad enough term to encompass pretty much everything involving bar charts or pivot tables, when in fact data analysis is an incredibly complicated, multi-faceted process. But that’s a different issue – you’re still waiting for those reports! So how do you tackle your complex data issues?
Two Alternate Solutions For Complex Data Problems
The Assembly Line Approach
One way companies deal with the increasing amount of data preparation and other tasks related to complex data is simply to keep investing in more tools, modules, and dedicated professionals, piled one on top of another until the business finally has a stack of tools that gives it an end-to-end solution – the idea being that together, all of these BI systems will form one working unit.
This is the assembly line approach: data is passed along between EDW, ETL and visualization tools before finally being transformed from its original format into a useful visual representation.
The ‘assembly line’ does produce reasonable results for some organizations, but the disadvantages are obvious: this is an extremely clunky, inflexible way of doing BI. The need to keep several different systems up and running at all times, and to ensure they are properly ‘connected’ to each other, can quickly become a logistical and financial nightmare. Each of these tools comes with its own ‘experts,’ its own account managers, support representatives, and professional services – and that’s before mentioning the costs of purchasing and maintaining all these separate systems.
And so, instead of the one tool that was supposed to solve all your problems, you find yourself dealing with anywhere from three to a dozen tools to achieve what was supposed to be a simple task.
The Single-Stack™ Approach
The alternative to the assembly line approach stems from the notion that business analytics can be different – that it can be simple, agile, and easy, even when dealing with complex data, and that the need to deploy a bunch of different tools to ensure reasonable results is not an unbreakable law of nature.
In a Single-Stack architecture, the analytics software is planned from the outset to serve as a solution for complex data – so every component is optimized for rapid data ingestion, integration, and analysis. The technological foundation of the product is built in such a way as to enable this, and to ensure the tool delivers the required performance, including when it needs to process big data or perform complicated calculations on the fly.
Thus, users have only one system, with one point of contact and its own internal product owner, to cater to all their data analytics needs, from enterprise data integration to dashboarding and reporting. Note that this does not refer to assorted data preparation, analysis, and visualization tools jumbled together under the same label – often packaged as “full-stack business intelligence” – but to a single tool that works as a complete, optimized unit to provide an analytics solution for data of any complexity.
Analytics Sold Separately?
So, returning to the example we started with: don’t believe any provider who claims their product is the only thing you need. Check the complexity of your data and determine beforehand what you’ll need to do to get it ready for analysis, as well as which reports, dashboards, and analyses you’ll eventually want to produce. If the software you’re looking at can’t deliver on any of these elements, know that you’ll be looking at purchasing additional tools – or opt for a Single-Stack solution instead.