Imagine this scenario: You’re at an important meeting discussing next quarter’s upcoming sales strategy. One of the team members is presenting your company’s performance in the previous quarter, and tells you that telemarketing was on average 20% more effective than web sales in Germany, France and Italy. Intrigued, you want to learn more and ask for a breakdown by country. “Right”, says the coworker, “I’ll get back to you on that one.”
You schedule another meeting for tomorrow, and the same team member informs you that telemarketing was 40% more effective in France, 10% more effective in Italy, and 30% less effective in Germany. You still feel there’s not much insight to be gained from this – and ask to hear absolute values. “Sure”, says your data-savvy associate, “Let me get back to you.”
The next day you hear the numbers, and want to start looking at possible plans of action. So you ask to see demographic information about your customers in each one of those countries. The response? “No problem. I’ll get back to you tomorrow.”
If you’re thinking that this employee isn’t doing their job properly, you’re absolutely right. But oddly enough, most businesses are willing to accept the same kind of “I’ll get back to you” attitude when it comes to their Business Intelligence tools.
The Problem of Data Granularity
The amount of data modern businesses generate is staggering. Coupled with external data sources – either publicly available or purchasable – many companies that could previously have relied on a couple of Excel spreadsheets suddenly find themselves looking at dozens of gigabytes, or even terabytes, of data, with more being generated every day. And naturally, the more historical data a business wants to analyze, the larger the dataset becomes.
Since ‘big data is the new gold’, no one wants to exclude raw data from their analysis – and rightly so. A professional data analyst, or even a statistically-inclined business user can often find surprising connections and insights by analyzing data – connections and insights that might not have necessarily been obvious in advance, before crunching the data using their BI tool.
But have Business Intelligence solutions kept up with the times, in terms of the amounts of data they can handle? Not entirely. Most analytics software today runs into one or both of these bottlenecks:
- Hardware limitations: The increased size of the datasets requires increasingly powerful hardware, with increasingly larger amounts of RAM, to effectively process data.
- Data preparation: The time-consuming need to combine data coming from disparate and often disorganized sources, particularly if this has to be done by the company’s IT department, which often has other tasks on its hands.
So when the data starts to accumulate, performance inevitably drops. The company’s BI system – which was quite robust and flexible when it was dealing with 1 gigabyte of data and could answer new queries almost immediately – starts groaning and moaning when it’s fed 50 gigabytes.
The common solution to this is to reduce the granularity of the data. For a simple example – instead of working with sales information from the past 24 months, the end-user works with the last 12 months; instead of seeing sales-by-city, they see sales-by-country. The rest of the data is still in the database – but retrieving it takes time.
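To make the trade-off concrete, here is a minimal sketch of what reducing granularity looks like in practice. The table, column names, and figures are invented for illustration; the point is that aggregation throws away detail that can only be recovered by going back to the raw data.

```python
import pandas as pd

# Hypothetical sales records at the finest granularity: one row per sale.
sales = pd.DataFrame({
    "country": ["Germany", "Germany", "France", "France", "Italy"],
    "city":    ["Berlin", "Munich", "Paris", "Lyon", "Rome"],
    "revenue": [1000, 1500, 2000, 800, 1200],
})

# Full granularity: revenue by city.
by_city = sales.groupby(["country", "city"])["revenue"].sum()

# Reduced granularity: revenue by country. The city-level breakdown is
# gone from this view; retrieving it again means re-querying the raw data.
by_country = sales.groupby("country")["revenue"].sum()

print(by_country["Germany"])  # 2500
```

If the raw `sales` table no longer fits in memory, every drill-down from `by_country` back to `by_city` becomes one of those “I’ll get back to you” moments.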
Whether this time is spent waiting for IT to rebuild the database to accommodate new sources, or waiting for the software itself to rearrange its storage to reach the required information (since it can no longer hold the entire database in memory, and is forced to push unused parts of it to slower disk storage), is beside the point: the company can’t make immediate decisions, and instead postpones them while waiting for data.
In other words, the end-user finds out that whatever new question he or she is asking their Business Intelligence software, the answer is once again: “I’ll get back to you.”
Sound bleak? Luckily, more modern BI technologies do exist – ones that can expose the full scope of an organization’s data.
Immediate Answers for Actionable Insights
Business Intelligence that is truly agile is not limited to a set of predetermined queries or predefined datasets. To avoid the dreaded wait for answers, modern BI tools aim to solve the two bottlenecks mentioned above:
- Automating data preparation: Joining multiple sources can be done by the software itself, at query time and in an automated manner. Self-service business analytics software should be able to mash up data coming from diverse sources and create a single source of truth, without the need for extensive human (and specifically IT) intervention.
- Overcoming hardware limitations: As long as BI software continues to rely on processing every new query entirely in RAM, it will always start lagging as data size increases. However, using innovative caching, decompression and data storage algorithms, newer software solutions can now process terabytes of data without missing a beat, and without requiring major investments in hardware.
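The first bullet – an automated mash-up of disparate sources – can be sketched in a few lines. This is a hypothetical illustration, not any particular vendor’s implementation: the two sources, their column names, and the customer key are all invented.

```python
import pandas as pd

# Two hypothetical sources with inconsistent key names: a CRM export
# and a web-analytics export.
crm = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "country": ["Germany", "France", "Italy"],
})
web = pd.DataFrame({
    "cust": [1, 2, 4],
    "web_sales": [100, 250, 75],
})

# Join the sources on the customer key to form a single unified view.
# A left join keeps every CRM customer, even those with no web activity.
merged = crm.merge(web, left_on="customer_id", right_on="cust", how="left")
print(merged.loc[merged["country"] == "France", "web_sales"].iloc[0])  # 250.0
```

The value of doing this inside the BI tool is that the join happens when the question is asked, rather than in a weeks-long IT project that rebuilds the warehouse schema.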
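The second bullet rests on a simple idea: data that is kept compressed in memory takes up far less room, and can be decompressed only when a query actually touches it. The toy sketch below uses Python’s standard library to illustrate the principle only; real columnar engines use far more sophisticated encodings.

```python
import pickle
import zlib

# A hypothetical column of a million ordered integers, as a stand-in
# for a repetitive real-world column (dates, country codes, IDs...).
column = list(range(1_000_000))

# Keep the column compressed in memory instead of as live objects.
raw = pickle.dumps(column)
compressed = zlib.compress(raw)

# Repetitive data compresses well, so the in-memory footprint shrinks.
assert len(compressed) < len(raw)

# Decompress on demand, only when a query needs this column.
restored = pickle.loads(zlib.decompress(compressed))
print(restored[42])  # 42
```

Multiplied across hundreds of columns, this is how a dataset that would otherwise overflow RAM can still be queried without falling back to slow disk reads for every question.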
When questions are answered immediately, business processes can be improved in real time, and decision-making becomes far more data-driven and accurate. In highly competitive and fast-paced business environments this is by no means a “nice to have” – it is an absolute necessity.
What about you: Are you getting all the answers you want from your data, when you want to get them? Are you seeing all your data, or are you limited to the narrow scope through which your software tools allow you to see it? If not – it might be time to think about the factors that are holding you back.
Want to learn more about common Business Intelligence problems?
Read our free guide: 5 Most Common BI Problems & How To Resolve Them