Building the Best Data Pipelines

Scalability, real-time analytics, team collaboration, and readiness for new technologies are key to building future-proof data pipelines.

Nathan Eddy, Freelance Writer

September 27, 2024


Data pipelines -- the process of curating data from multiple sources, preparing it for ingestion, and then moving it to its destination -- create data workflows between data science teams, IT, and business units.

Traditionally, data pipelines have been linear, making the extract, transform, load (ETL) process the norm. Businesses would extract data from sources, transform and clean up the data, and then load it into a data warehouse or data lake. 
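
To make that linear pattern concrete, here is a minimal sketch of an ETL step in Python; the CSV source, the cleanup rule, and the SQLite "warehouse" are hypothetical stand-ins rather than tools discussed in this article.

```python
import csv
import sqlite3

def extract(path):
    """Pull raw rows out of a source system (here, a CSV export)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Clean and reshape the data before loading."""
    cleaned = []
    for row in rows:
        if not row.get("customer_id"):  # drop incomplete records
            continue
        cleaned.append((row["customer_id"], float(row["amount"])))
    return cleaned

def load(rows, db_path="warehouse.db"):
    """Write the prepared rows into the warehouse table."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS orders (customer_id TEXT, amount REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("orders.csv")))
```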

But as AI technologies increasingly drive digital transformation, data pipelines must evolve to become non-linear and to minimize data movement, accommodating the sheer volume of unstructured data and the iterative nature of AI.

Non-Linear Data Pipelines 

Krishna Subramanian, COO and co-founder of Komprise, explains via email that data today is being generated everywhere -- edge, datacenter, and cloud. 

Processing, therefore, should also be distributed, so pipelines no longer need to move all the data to a central data lake before processing it. “This requires new data pipeline techniques focused on unstructured data and AI,” she says.

It also points to composable architectures: modular, API-first data services that let companies mix and match best-of-breed solutions for their data pipeline needs.
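
A rough illustration of the composable idea, with invented stage names: each stage is a small, interchangeable unit behind a uniform interface, so a best-of-breed replacement slots in without rewriting the rest of the pipeline.

```python
from typing import Callable, Iterable

# Each stage has one job and a uniform interface, so an individual stage can be
# swapped for a different vendor or service without touching the others.
Stage = Callable[[Iterable[dict]], Iterable[dict]]

def compose(*stages: Stage) -> Stage:
    """Chain independent stages into a single pipeline."""
    def pipeline(records: Iterable[dict]) -> Iterable[dict]:
        for stage in stages:
            records = stage(records)
        return records
    return pipeline

def validate(records):
    return (r for r in records if r.get("id") is not None)

def dedupe(records):
    return {r["id"]: r for r in records}.values()

def enrich(records):
    return ({**r, "region": "EU"} for r in records)

pipeline = compose(validate, dedupe, enrich)
print(list(pipeline([{"id": 1}, {"id": 1}, {"id": None}])))
# -> [{'id': 1, 'region': 'EU'}]
```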


Subramanian says the biggest evolution of data pipelines is the re-architecture to address unstructured data and AI. “We will see a massive transformation in data indexing, data management, data pre-processing and data workflow technologies to address this massive emerging need,” she says.  

Scaling Pipelines Across Multiple Sources 

Rahul Rastogi, CIO at SingleStore, explains via email that scaling data pipelines can add extra layers of complexity when it comes to ensuring data accuracy, consistency, privacy, governance, and security across various sources -- especially as more organizations become increasingly data-hungry.

He says when pipelines scale, it becomes more difficult to achieve low latency -- a critical component to keep up with today’s rapid pace -- as it takes longer to process massive datasets. “The underlying infrastructure such as the storage and compute must also scale effectively to meet the demands of increasing data pipelines,” he says. 

Another consideration is meeting data privacy and governance standards, which can be difficult to keep up with as regulations continue to evolve.  

Rastogi says to overcome these obstacles, organizations must adopt scalable data platforms that are designed to handle large-scale data processing and use techniques like partitioning and sharding (which distributes data across multiple servers) to improve processing efficiency and scalability. “They should also leverage cloud solutions that offer scalable infrastructure and storage, along with benefits like auto-scaling mechanisms to adjust resources based on workload demands,” he adds. 
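
As a hedged sketch of the sharding technique Rastogi describes, a stable hash of a partition key can decide which server a record lands on; the shard names and key choice below are illustrative only.

```python
import hashlib

SHARDS = ["db-server-0", "db-server-1", "db-server-2", "db-server-3"]  # hypothetical nodes

def shard_for(partition_key: str) -> str:
    """Map a record's partition key to one server via a stable hash."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Records with the same key always route to the same shard, so related rows
# stay together while overall storage and load spread across the nodes.
for customer_id in ("c-1001", "c-1002", "c-1003"):
    print(customer_id, "->", shard_for(customer_id))
```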


Real-Time Analytics, Improved Collaboration  

Rastogi notes real-time analytics play a critical part in optimizing data pipelines. “Detection and actioning enable data issues to be addressed in real-time, allowing businesses to make actionable insights and adaptive strategies,” he says.  

For organizations to experience these benefits, they’ll need to deploy a data platform that can process data in milliseconds rather than minutes, detecting data quality, anomaly, and completeness issues while the data is in motion.
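
One way to picture those in-flight checks, with an invented schema and threshold, is a record-level validation applied to each event as it streams through rather than after it lands.

```python
REQUIRED_FIELDS = {"event_id", "timestamp", "amount"}   # hypothetical schema
MAX_PLAUSIBLE_AMOUNT = 1_000_000                        # crude anomaly bound

def check_in_flight(event: dict) -> list[str]:
    """Return any quality issues found while the event is still in motion."""
    issues = []
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        issues.append(f"incomplete: missing {sorted(missing)}")
    amount = event.get("amount")
    if isinstance(amount, (int, float)) and abs(amount) > MAX_PLAUSIBLE_AMOUNT:
        issues.append("anomaly: amount outside plausible range")
    return issues

# Events that fail the checks can be routed to a quarantine stream
# instead of flowing on to the warehouse.
print(check_in_flight({"event_id": "e1", "amount": 5_000_000}))
```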

He adds that data collaboration between various data teams is critical to ensure everyone involved is speaking the same language. 

It’s important that data teams share a common understanding of data definitions for both measures and dimensions, as well as a common platform for processing the data -- for example, for building and extending pipelines.


Rastogi explains that while it is relatively easy to drive standardization of data processing technologies and platforms, companies should consider data cataloging solutions and glossaries to drive consistency in data definitions.

“Another consideration is implementing enterprise semantic layers by subject areas, and investing in processes through data champions to ensure everyone is interpreting the data in the same way,” he notes.  
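
A toy illustration of the glossary idea, with made-up entries: if every pipeline and dashboard reads metric definitions from one agreed-upon registry, teams are at least working from the same numbers.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    name: str
    definition: str
    unit: str
    owner: str  # the data champion accountable for this definition

# A single agreed-upon source of truth that every pipeline and dashboard reads.
GLOSSARY = {
    "net_revenue": MetricDefinition(
        name="net_revenue",
        definition="Gross bookings minus refunds and discounts",
        unit="USD",
        owner="finance-data-team",
    ),
    "active_user": MetricDefinition(
        name="active_user",
        definition="User with at least one session in the trailing 28 days",
        unit="count",
        owner="product-analytics",
    ),
}

print(GLOSSARY["active_user"].definition)
```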

Having all the data in one place also spurs collaboration and presents opportunities for data organizations to create common integrated data structures that can be used by data scientists to train models or by application developers to build intelligent applications. 

Subramanian points out data pipelines can create data workflows between data science teams, IT and business units. “Imagine if users who generate data can tag the data which is then leveraged by data scientists for analytics while IT manages the data lifecycle,” she says. “This is a three-way collaboration on the same data facilitated by smart data workflows leveraging data pipelines.”  

Data Pipelines of the Future  

Rastogi says the data pipeline architecture of the future will feature a growing emphasis on stream processing and low-latency data platforms for real-time insights, enabled by tools like Kafka, Flink, and Kinesis. 
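
For a flavor of the streaming pattern, here is a minimal consumer loop sketched with the kafka-python client; the broker address, topic, and alert rule are placeholders, not a reference implementation of any of the tools named above.

```python
import json
from kafka import KafkaConsumer  # kafka-python client; names below are placeholders

consumer = KafkaConsumer(
    "orders",                               # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="latest",
)

# React to each event within moments of it being produced,
# instead of waiting for a nightly batch load.
for message in consumer:
    order = message.value
    if order.get("amount", 0) > 10_000:
        print("large order detected in real time:", order.get("order_id"))
```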

However, not all data needs to be processed in real-time -- enterprises can adopt a hybrid approach to balance performance and cost.  

He predicts data operations (DataOps) and automation will gain traction as companies apply software engineering principles to data management -- a discipline where those principles have rarely been followed before. “Enterprises should also consider cloud-native architectures, utilizing serverless services and scalable cloud databases to handle large, bursty data volumes and scale,” he says.
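
Applying software engineering principles to data often starts with automated tests over pipeline output. The sketch below, with an invented warehouse table and meant to run under a test runner such as pytest in CI, is one hedged example of what that can look like.

```python
import sqlite3

def test_orders_table_has_no_null_customer_ids():
    """A CI-style data test: fail the pipeline run if quality regresses."""
    con = sqlite3.connect("warehouse.db")  # hypothetical warehouse
    nulls = con.execute(
        "SELECT COUNT(*) FROM orders WHERE customer_id IS NULL"
    ).fetchone()[0]
    con.close()
    assert nulls == 0, f"{nulls} orders are missing a customer_id"
```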

He points to AI-powered data engineering, enabling data analysts to assemble and create data pipelines without the knowledge and expertise in writing code, as another exciting trend. “However, data quality and pipeline accuracy will be critical,” Rastogi says. “Starting small, learning and scaling gradually is the best approach.”  

For Subramanian, the biggest evolution remains the re-architecture of data pipelines to address unstructured data and AI, with data indexing, data management, pre-processing, and workflow technologies all transforming to meet that need.

About the Author

Nathan Eddy

Freelance Writer

Nathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.
