Data accumulation doesn’t necessarily deliver value to your organization — the real worth is derived from what a business does with that information. One of the most important applications of data is using it to forecast the future. However, this can seem a herculean task for many organizations, especially with the high number of unknowns that every industry and our chaotic world holds. This is where forecasting analytics can be a game-changer in the decision-making process.
In a recent webinar, I talked about how one of our customers, a performance theater owner, uses predictive analytics. Staging a performance is a tremendous investment — not only are there the operating costs of rent, utilities, and the actors’ and crew’s salaries, but there is also the opportunity cost of producing a commercial flop. There is a massive need for forecasting analytics in this use case, but how does a theater use data to guide decisions?
Data-driven forecasting decisions
For an established theater that has a historical dataset from previous shows, the production team can lean into that data when deciding on the next performance. What was the attendance at the previous shows? When do people attend performances throughout the week? Is there seasonality to the attendance? How many tickets were sold at a discount? You need all this information to understand prior behavior and then predict what the future holds.
However, sometimes the information is overwhelming for a human to interpret (maybe it’s an enormous dataset filled with a number of variables, maybe the data constantly changes, etc.). That makes the data a perfect candidate for predictive analytics driven by vast computational power. This is where using machine learning as part of a predictive analytics program can be very effective.
Improving forecasting analytics with competing models
There are a lot of forecasting models that can be used to make predictions. Each one of them behaves differently — in fact, any given forecasting model may behave differently with different types of data. So for any given prediction task, there is a best-fit model that will give the user the most accurate forecasts.
Understanding the nuances of forecasting models is highly technical work, so rather than making a user manually choose one of the models, a properly designed system can take all of them together and run what’s called an “ensemble.” This creates a machine learning prediction model where forecasting algorithms run on historical data and compare their results with what actually happened to see how good they were. Each forecasting model gets a score, and the system then selects the best performer as the basis for the forecast in question.
With machine learning driving the prediction model, we are teaching our algorithm how to predict better. And it happens incredibly fast: A machine can take four different models, see how they perform, and select the best performer in a millisecond. At the same time, the results of that process inform the machine about what worked and what didn’t, so that it can optimize future forecasts, constantly improving its reliability.
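To make the idea concrete, here is a minimal sketch of competing-models selection in Python. The three forecasters, the scoring metric (mean absolute error), and the attendance figures are all hypothetical stand-ins — a production system would use far richer algorithms and validation — but the shape of the process is the same: backtest each model on history, score it against what actually happened, and pick the winner.

```python
def naive(history):
    """Predict the most recent observation."""
    return history[-1]

def moving_average(history, window=3):
    """Predict the mean of the last `window` observations."""
    return sum(history[-window:]) / window

def linear_trend(history):
    """Extrapolate one step ahead using a least-squares line fit."""
    n = len(history)
    x_mean = (n - 1) / 2
    y_mean = sum(history) / n
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in enumerate(history)) / \
            sum((x - x_mean) ** 2 for x in range(n))
    return y_mean + slope * (n - x_mean)

def pick_best(models, series, holdout=4):
    """Backtest each model over the last `holdout` points, score it by
    mean absolute error, and return (best model name, all scores)."""
    scores = {}
    for name, model in models.items():
        errors = [abs(model(series[:i]) - series[i])
                  for i in range(len(series) - holdout, len(series))]
        scores[name] = sum(errors) / len(errors)
    return min(scores, key=scores.get), scores

models = {"naive": naive, "moving_average": moving_average,
          "linear_trend": linear_trend}
# Hypothetical weekly attendance figures with a steady upward trend
attendance = [120, 135, 150, 162, 178, 190, 204, 215]
best, scores = pick_best(models, attendance)
```

Because this toy series trends steadily upward, the backtest rewards the trend-following model; on noisier or seasonal data a different model would win, which is exactly why the system re-scores the candidates rather than committing to one.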
Forecasting with incomplete data
But what happens when you don’t have enough data to make an intelligent, data-driven forecast? As the old saying goes, “Two data points do not make a trend.” This is when you have to turn to qualitative methods.
Returning to our theater example, a lack of sufficient data to make a decision could be due to a new show or production being completely unlike any other previously put on at the venue — say the theater has normally produced contemporary musicals, but it’s looking to showcase some Shakespeare. In this instance, you may have to lean into the experiences, perspectives, and opinions of your team to make a decision.
But while each of these qualities can be helpful in making a prediction, attempting to limit confirmation bias and “groupthink” is paramount. One way to do this is by implementing the Delphi method, an iterative process of collecting expert opinion, making the results anonymous, and asking the panel to revise their thoughts based on the information from the previous round. The Delphi method hopes to draw on the wisdom of the crowd and converge on an optimal solution in a structured, iterative way.
Has predictive model machine learning eliminated listening to your gut?
In the not-too-distant past, many business decisions were made on gut feel. A decision-maker could choose the direction of the company, the products to release, or the services to offer based purely on intuition. The data revolution began to change that by providing actionable intelligence based on historical data.
A quick study of the transportation industry demonstrates this shift. Prior to the rise of ride-hailing apps, taxi drivers had to rely on their experience to decide in what part of town to seek their next fare. This was not a data-driven decision outside of the personal experience of the driver. The wave of apps changed everything as historical ride-hailing data proactively guided drivers to the right parts of a city in anticipation of demand.
So, should you solely trust data and completely disregard your gut feel? Or should your intuition matter? The best analysts, in my opinion, not only use data but also know their product/industry/customers inside out and can use that context in tandem. These analysts often develop a feeling of how different variables affect each other, and they can use the data to help confirm their assumptions.
However, it is important to recognize our limitations: Humans are biased and tend to dismiss or be blind to information that contradicts our deep-seated beliefs or what we have already decided. This is why the self-aware analyst must be able to seek out that contradictory information, stare at it, and admit that their gut feel was wrong.
Inbar Shaham is a senior product manager at Sisense. She has 11 years of experience in product management, having worked for Clarizen, Takadu, and ICQ, among others.