So you’ve finally found a business intelligence tool that seems perfect for your needs: It’s agile, it can join multiple data sources, it’s simple and it runs reasonably fast. The vendor carries out a successful proof of concept and everything appears to be running smoothly. At this stage you’re convinced that you’ve found the perfect solution to your BI needs, and indeed it performs well – at first.
But after you’ve implemented your new BI tool across your entire business, and more and more users start adding their data and running queries, the software starts experiencing all sorts of minor and major hiccups: it gets sluggish, with new queries taking much longer than existing ones; your hardware heats up, huffs and puffs, and your entire system slows down; your computer starts crashing at times of ‘query rush-hour’.
What’s Going On? Why Is Your BI Software Underperforming?
The problem is heartbreakingly simple: your BI software wasn’t built to scale. It suffers from inherent technological limitations that will invariably cause it to lose speed and capabilities when your business starts generating more data and users start running more unique queries.
Understanding the Technological Glass Ceiling
Queries are handled by your computer’s Central Processing Unit (CPU). Traditional BI tools store (cache) query results in memory for quick access and retrieval: if two identical queries are run within a short span of time, the second answer can be served straight from the cache. However, cache memory is limited; only a finite number of results can be stored, and only for a finite amount of time.
But once queries begin to grow in number and complexity, cache memory runs out. And a new query that isn’t completely identical to its predecessor has to be recalculated from scratch. Since different users ask different questions, and each question tends to spawn further questions, the CPU is likely to become overworked and cave under the pressure – resulting in slow loading times, crashes, etc.
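The caching behaviour described above can be illustrated with a toy sketch (a hypothetical Python example, not any particular BI engine’s internals): a small fixed-size cache answers a repeated identical query without recomputing, but any query that differs even slightly is a miss that must be calculated from scratch.

```python
from functools import lru_cache

computations = {"count": 0}  # track how often we actually recompute

@lru_cache(maxsize=2)  # tiny cache: only 2 distinct query results fit
def run_query(sql: str) -> str:
    computations["count"] += 1   # a real engine would scan the data here
    return f"result of: {sql}"

run_query("SELECT SUM(sales) FROM orders")                # miss: computed
run_query("SELECT SUM(sales) FROM orders")                # identical: cache hit
run_query("SELECT SUM(sales) FROM orders WHERE y = 2024") # differs: miss again
print(computations["count"])  # → 2 computations for 3 queries
```

With many users issuing many near-identical variants, almost every query becomes a miss, and the recomputation load lands squarely on the CPU.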
If you’re already too deeply invested in your current BI solution and cannot realistically change it, you have no choice but to either:
- Accept your limitations: learn to live with fixed types of queries, wait longer for reports to generate, or downscale your data by ignoring certain fields, columns or rows.
- Upgrade your hardware. You can always get a stronger CPU and more cache memory – but these costs will continue to rise as your business and data grow. Soon, you’ll reach a point where your whole BI operation is causing you to lose out on opportunities and money rather than increasing your profitability – which is counterproductive.
The Full Solution: Shattering the Glass Ceiling
While the above-mentioned methods can work for smaller businesses, if you want a BI solution that can grow along with your business, you should look for a vendor whose software has the technological capability to handle a growing number of queries without sacrificing performance and speed.
Don’t believe a vendor who tells you it can’t be done. At Sisense we’ve proven that multiple queries can actually improve, rather than inhibit, your BI software’s performance using Crowd Accelerated Analytics. Our software learns from previous queries, translating them into granular, machine-level instructions. These ‘lessons’ are then applied to similar – not merely identical – queries. Rather than restarting the calculation from scratch, our software already has most of the answer at hand the minute a new query is made.
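Sisense doesn’t publish the internals of Crowd Accelerated Analytics, but the general idea of reusing work across similar (not merely identical) queries can be sketched in hypothetical Python: cache an intermediate result keyed by the sub-expression it represents, so a new query that shares that sub-expression only computes its missing piece.

```python
# Hypothetical sketch: reuse a cached intermediate aggregate across similar queries.
data = [("north", 10), ("south", 7), ("north", 5), ("east", 3)]

subresult_cache = {}  # sub-expression key -> cached intermediate result

def sums_by_region(rows):
    """Group-and-sum once; later similar queries reuse this intermediate."""
    key = "sum_by_region"
    if key not in subresult_cache:
        sums = {}
        for region, amount in rows:
            sums[region] = sums.get(region, 0) + amount
        subresult_cache[key] = sums
    return subresult_cache[key]

def total_sales(rows):          # query 1: overall total
    return sum(sums_by_region(rows).values())

def sales_for(rows, region):    # query 2: similar, but not identical
    return sums_by_region(rows).get(region, 0)

print(total_sales(data))          # 25 – computes the grouped sums once
print(sales_for(data, "north"))   # 15 – reuses the cached intermediate
```

The second query never rescans the raw rows; it starts from the shared intermediate, which is the spirit of having “most of the answer at hand” when a new, similar query arrives.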
In short: Business intelligence software should be able to handle multiple queries without losing performance. It just needs to be… a bit more intelligent.
Tip: Before purchasing any BI software solution, ask the vendor how their tool will handle the increased workload of running many simultaneous queries.

This concludes our series on the 5 Most Common BI Problems. Click here to download a summary edition in printable ebook format.