Automatic Concurrency Scaling

What is automatic concurrency scaling?

Automatic concurrency scaling is a feature of cloud-based data warehouses such as Snowflake and Amazon Redshift that automatically adds and removes computational capacity to handle ever-changing demand from thousands of concurrent users.

It gives users greater processing capacity and speed, and the ability to process more data and run larger analytics workloads without compromising performance. By optimizing data infrastructure and performance, automatic concurrency scaling delivers business insights faster and supports growth.


How does it work?

Multi-cluster data warehouses consist of one or more clusters of servers that execute queries. For a given warehouse, customers can set both the minimum and maximum number of compute clusters allocated to that warehouse.

This means that in automatic scaling mode, you can configure your data warehouse to automatically add cluster capacity as needed when the number of concurrent read queries increases. You get more computing power when you need it, which makes this mode well suited to handling bursts of read traffic. The extra processing power is then removed automatically when you no longer need it.
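As a concrete illustration, the Python sketch below (using the snowflake-connector-python package) creates a Snowflake multi-cluster warehouse that auto-scales between one and four clusters. The warehouse name, size, and credentials are assumptions for illustration only, not values from this article.

import snowflake.connector

# A minimal sketch, assuming a Snowflake account and the
# snowflake-connector-python package. All names and credentials below
# are hypothetical placeholders.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
)
cur = conn.cursor()

# Setting MIN_CLUSTER_COUNT lower than MAX_CLUSTER_COUNT puts the
# warehouse in auto-scale mode: extra clusters are started as
# concurrent queries begin to queue and shut down as the load drops.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS analytics_wh
      WAREHOUSE_SIZE = 'MEDIUM'
      MIN_CLUSTER_COUNT = 1        -- always keep at least one cluster
      MAX_CLUSTER_COUNT = 4        -- never scale beyond four clusters
      SCALING_POLICY = 'STANDARD'  -- favor adding clusters over queuing
      AUTO_SUSPEND = 300           -- suspend after 5 minutes of inactivity
      AUTO_RESUME = TRUE
""")

cur.close()
conn.close()

With a configuration like this, a burst of dashboard queries can fan out across up to four clusters without manual intervention, and the extra clusters stop accruing cost once they are shut down.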

When the number of queries routed to a concurrency scaling queue exceeds the queue’s configured concurrency, eligible queries are sent to the concurrency scaling cluster. When slots become available again, queries run on the main cluster. The number of queues is limited only by the number of queues permitted per cluster.
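On Amazon Redshift, concurrency scaling is enabled per WLM queue. The Python sketch below, using boto3, shows one plausible way to set a queue’s concurrency scaling mode to auto; the parameter group name, user group, and slot counts are assumptions for illustration.

import json
import boto3

redshift = boto3.client("redshift")

# Manual WLM configuration with two queues. Setting
# "concurrency_scaling": "auto" lets eligible queries spill over to a
# concurrency scaling cluster once the queue's slots are exhausted.
wlm_config = [
    {
        "user_group": ["bi_users"],     # hypothetical BI user group
        "query_concurrency": 5,         # slots on the main cluster
        "concurrency_scaling": "auto",  # burst to scaling clusters
    },
    {
        "query_concurrency": 5,         # default queue, no bursting
    },
]

# Apply the configuration to the cluster's parameter group
# ("bi-workloads" is a hypothetical name).
redshift.modify_cluster_parameter_group(
    ParameterGroupName="bi-workloads",
    Parameters=[
        {
            "ParameterName": "wlm_json_configuration",
            "ParameterValue": json.dumps(wlm_config),
        }
    ],
)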

What are the benefits?

In traditional data warehouses, clusters serve as both the compute resources and the data storage. Because your data already lives in the compute infrastructure, there’s no need for data transfer. Therefore, individual queries will typically execute more quickly than if the data were stored separately.

A limitation of traditional warehouses is that those resources are fixed, so the same resources are used whether you’re running one query or 100 queries.

Unlike traditional warehouses, cloud-based data warehouses let compute and storage scale independently, so you can instantly add and resize warehouses, either manually or automatically. Gone are the days of scheduling ETL jobs at night to avoid contention with BI workloads during the day. Now you can separate these workloads and run them in parallel using multiple compute clusters (virtual warehouses).
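For example, the sketch below (again Python with the Snowflake connector; all object names are hypothetical) gives ETL and BI traffic their own virtual warehouses so the two workloads can run in parallel without contending for the same compute.

import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()

# Separate compute for each workload; each warehouse can be sized,
# suspended, and scaled independently of the other.
cur.execute(
    "CREATE WAREHOUSE IF NOT EXISTS etl_wh "
    "WAREHOUSE_SIZE = 'LARGE' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE"
)
cur.execute(
    "CREATE WAREHOUSE IF NOT EXISTS bi_wh "
    "WAREHOUSE_SIZE = 'SMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE"
)

# Route each session's queries to the warehouse that matches its workload.
cur.execute("USE WAREHOUSE etl_wh")
cur.execute("COPY INTO raw_events FROM @events_stage")  # hypothetical load job

cur.execute("USE WAREHOUSE bi_wh")
cur.execute("SELECT COUNT(*) FROM raw_events")          # hypothetical dashboard query

cur.close()
conn.close()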

Automatic scaling mode makes this even easier by adding and removing compute clusters based on the query workload. Because this scaling up and down happens almost instantly, customers use the resources only when they need them and stop paying for them when query workloads drop.

Conclusion

At high query volumes, automatic concurrency scaling provides a significant performance boost. While a portion of that boost comes from lower execution times, most of it stems from dramatically lower queue times.

It also makes it easy to scale our platform to keep up with increasing query concurrency, not just as customers grow, but as the load changes throughout the day.

Learn more about Sisense technology

See how Sisense reinvents Business Intelligence through technological innovation here.
