Understanding the essential Data Processing libraries

Data processing and analysis have become increasingly important as data pipelines, Machine Learning, and AI workloads boom. Numerous libraries and frameworks have been developed to meet this demand, including NumPy, Apache Arrow, Polars, and Pandas. I will discuss these popular libraries, focusing on their advantages, disadvantages, and use cases, as well as the recent introduction of the Apache Arrow backend for Pandas data and how it is all related.

NumPy: Advantages, Disadvantages, and Limitations as a Backend for DataFrames

NumPy, short for Numerical Python, is a fundamental library for scientific computing in Python. It offers powerful n-dimensional array objects and a wide range of mathematical operations and functions. The main advantages of using NumPy are:

  • High performance: NumPy arrays provide faster and more memory-efficient operations compared to native Python lists.
  • Broad functionality: NumPy offers extensive mathematical functions, such as linear algebra, Fourier analysis, and statistical operations.
  • Convenient syntax: NumPy enables intuitive and easy-to-read array operations, simplifying complex mathematical tasks (a short sketch follows this list).
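
To illustrate the performance and syntax points above, here is a minimal sketch of vectorized arithmetic (the array values are arbitrary):

import numpy as np

# Elementwise arithmetic over the whole array -- no explicit Python loop.
prices = np.array([10.0, 20.0, 30.0])
discounted = prices * 0.9        # vectorized multiply runs in compiled code
print(discounted.mean())         # fast aggregate, also in compiled code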

Despite its advantages, NumPy also has some limitations:

  • Limited data types: NumPy mainly focuses on numerical data types, which can be limiting for non-numerical data processing tasks.
  • Less flexible: NumPy's array structure can be less flexible compared to Python's native lists or other data structures.

A key limitation is that NumPy was not designed to serve as a backend for DataFrame libraries. For instance, it has relatively poor support for strings and lacks native support for missing values.
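
A minimal sketch of both pain points, using only standard NumPy behavior:

import numpy as np

ints = np.array([1, 2, 3])
print(ints.dtype)                # int64

# NumPy has no native integer "missing" marker; np.nan is a float,
# so one missing entry silently upcasts the whole array to float64.
with_gap = np.array([1, 2, np.nan])
print(with_gap.dtype)            # float64

# Strings become fixed-width buffers, padded to the longest value.
names = np.array(["a", "a much longer string"])
print(names.dtype)               # e.g. <U20 -- every element reserves 20 chars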

These limitations have led to the development of alternative backends, such as Apache Arrow, that better cater to DataFrame libraries' requirements.

What is a DataFrame? A DataFrame is a data structure that organizes data into a 2-dimensional table of rows and columns, much like a spreadsheet. DataFrames are one of the most common data structures used in modern data analytics because they are a flexible and intuitive way of storing and working with data.
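
For instance, here is a minimal Pandas DataFrame (the column names and values are made up for illustration):

import pandas as pd

# Two named columns; rows are addressed by an index label.
df = pd.DataFrame({"name": ["Ada", "Grace"], "score": [95, 98]})
print(df["score"])     # a single column is a Series
print(df.loc[0])       # a single row is also a Series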

Apache Arrow: Overview and Usage

Apache Arrow is a cross-language development platform designed to enable efficient data interchange between various systems. Its primary purpose is to provide a standardized, language-agnostic columnar memory format. Key features of Apache Arrow include:

  1. High performance: Arrow's columnar format is optimized for modern hardware, allowing for faster data access and manipulation.
  2. Interoperability: Arrow supports a wide range of programming languages, facilitating seamless data exchange between different systems.
  3. Broader data type support: Compared to NumPy, Arrow offers more extensive data type support, including improved datetime types, decimals, binary data, and complex types like nested lists (illustrated in the sketch below).
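
A small sketch of that broader type support using the pyarrow package (the column names and values are illustrative):

from decimal import Decimal
import pyarrow as pa

# A columnar table with types NumPy cannot represent natively:
# nested lists of strings and fixed-precision decimals.
table = pa.table({
    "tags": pa.array([["a", "b"], ["c"], None], type=pa.list_(pa.string())),
    "price": pa.array([Decimal("9.99"), Decimal("15.50"), None],
                      type=pa.decimal128(10, 2)),
})
print(table.schema)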

Polars: Lightning-fast DataFrame library for Rust and Python

Polars is an emerging DataFrame library designed for lightning-fast performance. It aims to offer the following key features:

  • Parallel processing: Polars utilizes all available cores on a machine to maximize computational efficiency.
  • Query optimization: The library optimizes queries to reduce unnecessary work and memory allocations.
  • Handling large datasets: Polars is capable of processing datasets much larger than the available RAM.
  • Consistent API: Polars provides an API that is consistent and predictable, simplifying usage for developers.
  • Strict schema: Data types are known before a query is executed, ensuring consistency and reliability. (A short eager-API example follows this list.)
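
Here is a minimal sketch of the eager API (using the group_by spelling from recent Polars releases; older versions call it groupby, and the data is made up):

import polars as pl

# Eager usage, similar in spirit to Pandas, with Polars' expression syntax.
df = pl.DataFrame({"region": ["A", "A", "B"], "amount": [10, 20, 5]})
out = df.group_by("region").agg(pl.col("amount").sum().alias("total"))
print(out)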

Built using Rust, Polars delivers C/C++ level performance and complete control over performance-critical components in a query engine. To optimize its performance, Polars focuses on:

  • Reducing redundant copies of data.
  • Ensuring efficient memory cache traversal.
  • Minimizing contention during parallel processing.
  • Processing data in chunks.
  • Reusing memory allocations.

Moreover, Polars exercises control over Input/Output (IO) operations, preventing unnecessary data copies and pushing projections and predicates down to the scan level.

In contrast to tools like Dask, which attempt to parallelize single-threaded libraries such as NumPy and Pandas, Polars is designed from the ground up for parallel query execution on DataFrames. It operates with a lazy and semi-lazy approach, allowing users to perform most tasks eagerly (similar to Pandas) while providing a powerful expression syntax optimized and executed within the query engine.

Polars' lazy mode allows for query optimization across the entire query, enhancing performance and reducing memory pressure. The library maintains a logical plan for each query, which is optimized and reordered before execution. When a result is requested, Polars distributes the work among different executors, using the algorithms available in the eager API to produce the result. This approach enables the parallelization of processes dependent on separate data sources on the fly, given that the entire query context is known to the optimizer and executors.
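
A minimal sketch of lazy mode, assuming a hypothetical sales.csv with region and amount columns:

import polars as pl

# Nothing is read or computed until .collect(); the optimizer can push
# the filter and the needed columns down into the CSV scan itself.
plan = (
    pl.scan_csv("sales.csv")
    .filter(pl.col("amount") > 100)
    .group_by("region")
    .agg(pl.col("amount").sum().alias("total"))
)
print(plan.explain())    # inspect the optimized logical plan
df = plan.collect()      # execute the plan, in parallel across cores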

Pandas: DataFrame, Series, and NumPy Integration

Now that we have covered the basic libraries for data processing, let's review the most common framework that glues it all together.

Pandas is a widely-used library for data manipulation and analysis in Python. It provides two main data structures: DataFrame and Series. A DataFrame is a two-dimensional, size-mutable, and heterogeneous tabular data structure, while a Series is a one-dimensional array-like object that can hold various data types.

Pandas leverages the power of NumPy to provide fast and efficient data manipulation capabilities. Under the hood, Pandas DataFrames and Series are built upon NumPy arrays (a short example follows the list below). Pandas also offers several key features:

  1. Data handling: Pandas can read and write data from a variety of formats, such as CSV, Excel, and SQL databases.
  2. Data cleaning: Pandas simplifies data preprocessing tasks, including missing value handling, data transformation, and aggregation.
  3. Powerful analytics: Pandas provides a wide range of analytical functions, including grouping, pivot tables, and time series analysis.
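
The example below sketches both the NumPy array sitting under a numeric column and a typical group-by aggregation (the data is made up):

import pandas as pd

df = pd.DataFrame({"city": ["SG", "SG", "NYC"], "temp": [31, 29, 18]})

# A numeric column is backed by a NumPy array under the hood.
print(type(df["temp"].to_numpy()))     # <class 'numpy.ndarray'>

# Typical analytics: grouping and aggregation.
print(df.groupby("city")["temp"].mean())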

Apache Arrow Backend for Pandas Data

Recently, an essential yet subtle change was introduced to Pandas: the implementation of the Apache Arrow backend for Pandas data. To understand the significance of this change, we must first examine how Pandas operates.

Before using Pandas for any data manipulation or analysis, it is necessary to load the data of interest into memory (using methods like 'read_csv', 'read_sql', or 'read_parquet'). While loading data, decisions must be made about how the data will be stored in memory. This is relatively straightforward for simple data types like integers or floats. For more complex data types (e.g., strings, dates, times, and categories), however, the in-memory representation is handled by Python extensions, often implemented in languages like C, C++, or Rust.
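
A small sketch of those representation decisions with the classic NumPy backend (the CSV content is made up):

import io
import pandas as pd

csv = io.StringIO("id,name,joined\n1,Ada,2021-01-01\n2,Grace,2022-06-15")
df = pd.read_csv(csv)

# Integers map cleanly to a NumPy dtype, but strings and unparsed dates
# fall back to the generic 'object' dtype (boxed Python objects).
print(df.dtypes)    # id: int64, name: object, joined: object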

NumPy has been the primary extension for fast array representation and operations in Pandas for many years. However, the limitations of NumPy as a backend for DataFrame libraries have led to the integration of the Apache Arrow backend.


Benefits of the Apache Arrow Backend for Pandas

The new Apache Arrow backend for Pandas offers several advantages over NumPy:

  • Enhanced data type support: Apache Arrow provides a broader range of data types, such as improved datetime types, decimals, binary data, and complex types like nested lists.
  • Memory efficiency: Arrow's memory representation, particularly for categorical and string data, can result in significant memory savings when handling large columns (a quick comparison follows this list).
  • Cross-language interoperability: Arrow enables seamless data exchange between systems using different programming languages.
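
One quick way to observe the difference, sketched here for string data (this requires pyarrow to be installed, and the values are arbitrary):

import pandas as pd

s_numpy = pd.Series(["apple", "banana", None] * 100_000)   # object dtype
s_arrow = s_numpy.astype("string[pyarrow]")                # Arrow-backed

print(s_numpy.memory_usage(deep=True))   # boxed Python strings
print(s_arrow.memory_usage(deep=True))   # compact Arrow buffers, far smaller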

In pandas 2.0, I/O methods that support creating Arrow-backed data accept a dtype_backend parameter:


import pandas

# fname: path to your CSV file. The pyarrow engine parses the file, and
# dtype_backend='pyarrow' stores the result in Arrow-backed arrays.
df = pandas.read_csv(fname, engine='pyarrow', dtype_backend='pyarrow')

Now that Pandas supports Arrow-backed data, Polars' 'to_pandas' method has been updated to support the use_pyarrow_extension_array parameter, allowing Arrow data in Polars to be shared directly with Pandas without converting it to NumPy arrays.
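
A minimal sketch of that direct hand-off (the frame contents are made up):

import polars as pl

pl_df = pl.DataFrame({"a": [1, 2, 3]})
pd_df = pl_df.to_pandas(use_pyarrow_extension_array=True)
print(pd_df.dtypes)    # int64[pyarrow] rather than a NumPy int64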

A final note on parallelism and performance

Modin is a library that provides access to the Pandas API through the modin.pandas module, without inheriting the scalability limitations of the Pandas implementation. Pandas is inherently single-threaded, so it can utilize only a single CPU core at any given time.
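
Getting started is deliberately simple: Modin is a drop-in replacement, so usually only the import changes (this sketch assumes Ray or Dask is installed as the compute engine, and 'large_file.csv' is a hypothetical path):

import modin.pandas as pd    # drop-in replacement for the Pandas API

# The read and subsequent operations are partitioned across all cores.
df = pd.read_csv("large_file.csv")
print(df.describe())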

Modin offers in-place semantics, but unlike Pandas, its underlying implementation uses immutable data structures. This immutability allows Modin to chain operators and optimize memory management internally, since the data structures will not be altered; in many cases, common memory blocks can be shared among all dataframes, improving memory usage. Modin implements in-place semantics by keeping a mutable pointer to the immutable internal Modin dataframe, allowing the pointer to change while the underlying data remains constant.

Modin supports row, column, and cell-oriented partitioning and parallelism, enabling the dataframe to be divided into rows, columns, or both, for corresponding operations. Modin will automatically reshape the partitioning as needed for efficient operation execution, based on the row-parallel, column-parallel, or cell-parallel nature of the operation. This allows for broader support of the Pandas API and efficient execution of operations that are challenging to parallelize in row-oriented systems, such as transpose, median, and quantile. The flexible partitioning in Modin also allows for effective straggler mitigation and improved utilization of the entire cluster.

Modin's modular design is architected to run on various systems and support various APIs, providing seamless compatibility with existing infrastructure. Currently, Modin supports execution on both the Dask and Ray compute engines, and it is continually expanding to support popular data processing APIs, such as SQL and Pandas. The flexible architecture of Modin also enables quick adoption of new versions of the Pandas API as it evolves.

As a result of Modin's flexible partitioning and ability to support most Pandas operations, including join, median, and infer_types, it provides substantial speedups even for operators not supported by other systems.

In conclusion, Modin is a good option if you need to work with large datasets and want to take advantage of multi-core or distributed computing, but you should be aware of the potential compatibility issues with existing code.


NumPy, Apache Arrow, Pandas, and Polars are essential tools for data engineers and analysts. Each library offers unique features and advantages, covering a wide range of data analysis and processing tasks. The recent introduction of the Apache Arrow backend for Pandas data, along with the emergence of the high-performance Polars library, represents a significant advancement in the field. Users should understand the strengths, limitations, and recent changes in these libraries so that they can choose the right tool for their specific data processing needs.
