Since the emergence of the first automated ETL tools, the data science community has debated the merits of using such tools as opposed to manual coding via SQL, Python, and other programming or scripting languages. I will try to present a case for automated ETL, as well as provide some examples from my own experience to demonstrate how data analysts, data scientists and developers benefit from using ETL tools – or better yet, BI tools with built-in ETL functionality.
What is ETL?
Before we dive into it, let’s take a moment to explain the concept of Extract-Transform-Load, or in its more common abbreviation – ETL. It is a crucial stage in the data preparation process as it ensures we are working with clean and accurate data and that there is a single source of truth for every piece of information that appears in our analytical database. It consists of three stages:
- Extracting the data stored in various source systems (flat files, cloud repositories, relational databases, etc.).
- Transforming said data according to a unified schema, which could include filtering out certain fields or columns, or modifying data from different tables to make sure all the data sources speak the same language.
- Loading the newly-transformed data into a centralized data repository or data warehouse.
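To make the three stages concrete, here is a minimal hand-coded ETL sketch in Python. The file layout, column names and SQLite target are hypothetical stand-ins, not a real pipeline:

```python
import csv
import sqlite3

# Extract: read raw records from a flat file (hypothetical orders.csv).
def extract(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Transform: unify the schema -- keep only the fields we need and
# normalize the amount column to a float; filter out empty amounts.
def transform(rows):
    return [
        {"order_id": r["id"], "amount_usd": float(r["amount"])}
        for r in rows
        if r.get("amount")
    ]

# Load: write the cleaned rows into a central analytical store
# (an SQLite database standing in for the data warehouse).
def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT, amount_usd REAL)"
    )
    conn.executemany(
        "INSERT INTO orders VALUES (:order_id, :amount_usd)", rows
    )
    conn.commit()
```

Even in this toy form, every schema decision lives in code; the argument below is about what happens when pipelines like this grow to production size.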
Is this the same as ELT?
Another term that might be used in the context of data preparation is ELT, i.e. Extract-Load-Transform. In this process, what changes is the chronological order: data is first loaded into the analytical repository and then transformed (on a side note, this is the process employed by Sisense for transforming data). However, the distinction is not crucial for the purposes of this article, so we will refer to both these processes as ETL.
So without further ado, let’s present the argument for using automated ETL tools instead of manual coding and scripting:
1. Performance and Stability
Commercial tools are built by teams of engineers who work constantly to improve performance and stability when handling large or disparate datasets. In some cases, this includes low-level optimizations that reach below the operating system, controlling how data is mapped and moved across the various layers of memory and the CPU cache, and exploiting computational resources such as parallel processing and SIMD instructions.
While technically it might be possible to achieve the same results via hand-coding, the amount of effort that would need to be invested is enormous. And even if you manage to come up with a working, high-performance system, you still have no real assurance it will be able to bear larger loads as your data and use cases grow in complexity. This is less of an issue when working with simple or smaller data, but this would not be a typical use case for an automated ETL tool in the first place.
From Sample Data to Agile Analysis
I once encountered a large e-commerce company that was struggling with several billion rows of data. They had a manual system in place, and it worked – however, if they wanted to see fast results, they had to “shrink” the raw data by analyzing a sample and extrapolating the results. It was a working system, but it wasn’t agile, and it wasn’t particularly accurate since it relied on sample data. After deploying a managed Business Intelligence and ETL solution, the company could take all of its data into account and get answers to a virtually unlimited range of questions, thanks to the stronger performance it now enjoyed.
2. Management and Scale
An automated ETL tool typically provides a visual environment that lets you “see” large parts of the ETL process – where the data is coming from and going to, which calculations are performed on it, etc. – in ways that code, however carefully crafted and documented, will never be able to replicate.
This type of visual representation might annoy hardcore coders, but it makes the ETL and data modeling process much more manageable in the long term. When working with difficult structured and unstructured data, hand-coded ETL will invariably translate into many, many lines of code. Again, it might work, but it could very well lead to trouble in more complex scenarios.
For example, consider schema changes: new data sources need to be added and the data model needs to be updated. On the manual route, you would need to go back to the drawing board, make the necessary changes, and then run a series of tests to make sure you haven’t broken anything; in larger data teams or deployments, multiple people need to be intimately familiar with the code whenever even a simple modification is needed.
A visual ETL layer, on the other hand, makes the process much more transparent and easier to modify, as well as to troubleshoot when things go wrong. More importantly, every change is clearly documented, immediately noticeable and traceable – without having to spend hours or days poring over code to locate a change made by a developer who has since left on their annual vacation.
Lost in OLAP
This scenario is common in organizations that rely on legacy OLAP implementations, since ETL is an evolutionary process that grows more and more complex over time. One customer had been working with such a system for years, and it had grown into an unmanageable monstrosity, with every single new report requiring multiple iterations with IT. And while it was difficult to change the habits of an entire organization, once they saw how simple it was to make and track schema changes in a visual environment, they never looked back.
3. Automation and Time-Saving
Automated ETL is a time-saver in that it lets you, well… automate many processes that would otherwise need to be done manually. A very simple example is scheduled updates – a menial, basic task, but a crucial one nonetheless. Having a tool that takes care of this, and notifies you if it runs into problems, means you have one less thing to worry about.
Furthermore, business intelligence tools with ETL functionality come with a rich library of natively supported data sources that they can connect to and extract data from, which means you won’t have to struggle with API calls or manual exports. Certain processes can be saved and easily reused for new projects or data models. Simple joins and unions are done via drag and drop. Together, all of these add up to a hefty amount of time that you can devote to actually finding insights in the data rather than struggling to make it usable.
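For comparison, this is what those drag-and-drop joins and unions look like when hand-coded, sketched here with pandas on hypothetical regional sales tables:

```python
import pandas as pd

# Hypothetical regional tables sharing a customer_id field.
north = pd.DataFrame({"customer_id": [1, 2], "sales": [100, 200]})
south = pd.DataFrame({"customer_id": [3], "sales": [50]})
customers = pd.DataFrame({"customer_id": [1, 2, 3],
                          "name": ["Ann", "Bo", "Cy"]})

# Union: stack the regional tables into one
# (the drag-and-drop "union" in a visual tool).
all_sales = pd.concat([north, south], ignore_index=True)

# Join: attach customer names on the common field
# (the drag-and-drop "join").
report = all_sales.merge(customers, on="customer_id", how="left")
```

Two lines is manageable; the maintenance burden appears once dozens of such joins, each with its own edge cases, have to be kept in sync by hand.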
This automation can go one step further: for the really simple use cases, automated ETL allows business users to do their own data modeling by simply connecting a common field between two tables. For developers and analysts, this means less manual labor: with business users free to perform their own independent mash-ups (again, assuming said mash-up is fairly straightforward), the power users can devote their time and energy to more complicated – and interesting – projects.
3 Days vs. 3 Minutes
One of our customers in the energy sector had been using Excel for manual data mash-ups. The company covered around 20 regions, and each region generated a massive Excel file. Joining all these tables took three people almost three days every week, and all this work had to be done before any analysis could even begin – just to prepare the data. The company realized it was time to implement a BI solution, and soon discovered that a single automated ETL tool could complete this union within minutes, with no additional work hours invested once the system and data model were set up.
…And the Counter-Argument
Of course, none of this is to say that automated ETL is the way to go 100% of the time, for every possible use case and organization. Companies with large data science teams might find it easier to keep using their tried-and-tested manual processes instead of implementing a new tool, which will inevitably have a learning curve and require some getting used to. Developers who are used to thinking in code might be frustrated by visual interfaces. And since every automated ETL tool has its own workflow and logic, it can feel restrictive compared to the endless possibilities of hand-coding.
The Bottom Line: Time to Automate
For most organizations working with large or growing datasets, the gains in performance, manageability and time saved make automated ETL the more sustainable choice over hand-coded pipelines.
Chris Meier is a Manager of Analytics Engineering for Sisense.