
Querying Apache Iceberg from Postgres

By Ming Ying on June 26, 2024

With the release of v0.9.0, pg_lakehouse has been merged into pg_analytics; the combined extension now goes by the pg_analytics name.

Today, a growing number of organizations are offloading large volumes of historical data from Postgres to S3 for its scalability and cost-effectiveness. Many of these organizations are also adopting Apache Iceberg, a table format that enables S3 data to be queried like SQL tables.

While query engines like Trino, Spark, or Flink integrate with Iceberg, Postgres does not. Since we launched pg_lakehouse, Iceberg integration has been our most widely requested feature.

We’re excited to announce that Iceberg tables are now supported for S3 and Google Cloud Storage. As part of this release, we’ve replaced pg_lakehouse’s DataFusion query engine with DuckDB. We initially experimented with DuckDB for its Iceberg support and have been pleasantly surprised by the ways that DuckDB has accelerated our engineering efforts. In the latter half of this blog post, we’ll share some of the benefits and tradeoffs we’ve seen between DataFusion and DuckDB.

What is Apache Iceberg?

Built for analytics over large datasets, Iceberg is an open source specification for organizing metadata around files like Parquet. Iceberg metadata allows these files to be treated as SQL tables, with features like ACID transactions, schema evolution, time travel, and partitioning. For this reason, Iceberg is commonly referred to as a table format.
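Concretely, an Iceberg table is a directory of ordinary data files plus a metadata layer that tracks snapshots, schemas, and partitions. A simplified sketch of the layout on S3 (the file names are illustrative):

s3://bucket/iceberg_folder/
  metadata/
    v1.metadata.json      -- current schema, partition spec, snapshot history
    snap-<id>.avro        -- manifest list: one entry per manifest in a snapshot
    <id>-m0.avro          -- manifest: data files plus column-level statistics
  data/
    00000-0-<id>.parquet  -- the actual rows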

Supporting Iceberg in Postgres

Today, interoperability with Iceberg is missing from the Postgres ecosystem. Instead, Postgres users who adopt Iceberg may operate a system like Trino, Spark, or Flink to query Iceberg tables.

pg_lakehouse adds support for Iceberg to Postgres with the foreign data wrapper API. The following code block demonstrates how to query Iceberg tables from Postgres.

CREATE EXTENSION pg_lakehouse;

CREATE FOREIGN DATA WRAPPER iceberg_wrapper
HANDLER iceberg_fdw_handler
VALIDATOR iceberg_fdw_validator;

CREATE SERVER iceberg_server
FOREIGN DATA WRAPPER iceberg_wrapper;

-- Replace the dummy schema with the actual schema
CREATE FOREIGN TABLE iceberg_table (x INT)
SERVER iceberg_server
OPTIONS (files 's3://bucket/iceberg_folder');

-- Success! You can now query the Iceberg table
SELECT COUNT(*) FROM iceberg_table;

Unlike a typical foreign data wrapper, pg_lakehouse can push most queries entirely down to DuckDB. This means that pg_lakehouse delivers the analytical performance of DuckDB inside Postgres. You can read more about how we achieved this in our earlier blog post.
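A quick way to sanity-check pushdown is Postgres’ standard EXPLAIN command. As a rough sketch (the exact plan output depends on your Postgres and extension versions):

EXPLAIN SELECT COUNT(*) FROM iceberg_table;
-- A plan topped by a single Foreign Scan node suggests the
-- aggregation was pushed down to DuckDB; an Aggregate node
-- sitting above the scan would mean Postgres is doing the
-- counting itself.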

Why We Initially Picked DataFusion

In January 2024, we evaluated both DuckDB and DataFusion as candidates for ParadeDB’s analytical query engine and chose DataFusion. At the time, several aspects of DataFusion appealed to us:

Extensibility

DuckDB and DataFusion are both in-process query systems. However, DataFusion targets database developers while DuckDB targets end users. This means that DataFusion’s API has many more extension points than DuckDB’s, which we hoped would simplify the integration path with Postgres.

Growing Adoption

DataFusion is quickly becoming the de facto query engine of modern analytical databases. InfluxDB, a popular time series database, relies on DataFusion and has contributed heavily to its development. Comet, a Spark accelerator built on DataFusion, was open sourced by Apple in February 2024. And in June 2024, DataFusion became a top-level Apache project.

Written in Rust

Whereas DuckDB is written in C/C++, DataFusion is written in Rust, the same language ParadeDB is built in. We felt more comfortable reading and modifying DataFusion’s code.

Why We Switched to DuckDB

After months of using DataFusion, we decided to go all in on DuckDB. This was not because we were dissatisfied with DataFusion — in fact, we found DataFusion to be an incredibly robust and ergonomic library — but because our use case changed.

Last month, we quietly paused development on our original analytics extension, pg_analytics (we’ll share why in a future blog post). pg_analytics used the Postgres table access method API, which meant it was our responsibility to implement many of Postgres’ most complex features: write-ahead logging, multi-version concurrency control, transaction isolation levels, buffer caching, and so on. DataFusion’s extensibility was very useful here.
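For context, a table access method swaps out the storage engine underneath an ordinary Postgres table, which is why all of that transactional machinery became our responsibility. A minimal sketch of how it surfaces to users (the access method name parquet is illustrative):

-- The USING clause selects a custom table access method; from here
-- on, storage, concurrency, and transactional behavior for this
-- table are the extension’s job rather than the built-in heap’s.
CREATE TABLE events (
  event_id BIGINT,
  occurred_at TIMESTAMP
) USING parquet;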

pg_lakehouse, on the other hand, uses the foreign data wrapper API, which is more straightforward to implement and does not require the query engine to be especially extensible. As a result, we began to appreciate the benefits of DuckDB:

Out-of-the-Box Integrations

DuckDB’s Iceberg extension has existed for over a year. By contrast, Iceberg support for DataFusion is still nascent, developed as part of the iceberg-rust project. DuckDB also integrates with Delta Lake (Iceberg’s rival table format) and object stores like Azure Data Lake Storage (ADLS). When we built these integrations with DataFusion, we had to glue several libraries together. We were happy to offload the responsibility of testing and maintaining those integrations to DuckDB.
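To illustrate what DuckDB gives us out of the box, here is how you would scan an Iceberg table in DuckDB directly (a sketch; the path is a placeholder, and S3 credential setup is omitted):

INSTALL iceberg;
LOAD iceberg;

-- Scan an Iceberg table straight from object storage
SELECT COUNT(*) FROM iceberg_scan('s3://bucket/iceberg_folder');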

More Familiar to End Users

When we launched the first version of pg_lakehouse, we received several bug reports from users who attempted to query files using the DuckDB glob pattern, which DataFusion did not recognize. We realized that, while DataFusion is well regarded among database developers, DuckDB is more familiar to the general developer community. Users who were told that pg_lakehouse was “DuckDB in Postgres” seemed to gain an immediate understanding of the use case and value proposition of the extension.
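For example, requests along these lines, which lean on DuckDB’s glob syntax, failed under DataFusion (the table and server names here are hypothetical, set up analogously to the Iceberg example above):

-- Replace the dummy schema with the actual schema
CREATE FOREIGN TABLE parquet_table (x INT)
SERVER parquet_server
OPTIONS (files 's3://bucket/folder/*.parquet');

SELECT COUNT(*) FROM parquet_table;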

Better Performance

In many benchmarks, DuckDB’s performance surpasses DataFusion’s. Performance isn’t everything, and DataFusion is closing the gap, but the extra speed resonates with users with sub-second query latency requirements.

Next Steps

The next big feature we’re working on for pg_lakehouse is write support, which will enable Postgres tables to be copied into external object stores as Parquet or CSV files.

Iceberg is supported on pg_lakehouse version 0.8.0 or later. You can either install the extension or run the Docker image. If you have any questions, the best place to start is our Slack community. And please don't hesitate to show your support by giving us a star!