What is Iceberg?
Iceberg is a high-performance format for huge analytic tables. Iceberg brings the reliability and simplicity of SQL tables to big data, while making it possible for engines like Spark, Trino, Flink, Presto, Hive and Impala to safely work with the same tables, at the same time.
Expressive SQL
Iceberg supports flexible SQL commands to merge new data, update existing rows, and perform targeted deletes. Iceberg can eagerly rewrite data files for read performance, or it can use delete deltas for faster updates.
```sql
MERGE INTO prod.nyc.taxis pt
USING (SELECT * FROM staging.nyc.taxis) st
ON pt.id = st.id
WHEN NOT MATCHED THEN INSERT *;
```
Full Schema Evolution
Schema evolution just works. Adding a column won't bring back "zombie" data. Columns can be renamed and reordered. Best of all, schema changes never require rewriting your table.
```sql
ALTER TABLE taxis ALTER COLUMN trip_distance TYPE double;
ALTER TABLE taxis ALTER COLUMN trip_distance AFTER fare;
ALTER TABLE taxis RENAME COLUMN trip_distance TO distance;
```
Hidden Partitioning
Iceberg handles the tedious and error-prone task of producing partition values for rows in a table, and it skips unnecessary partitions and files automatically. No extra filters are needed for fast queries, and the table layout can be updated as data or queries change.
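As a sketch of what this looks like in Spark SQL (the table and column names here are illustrative, not from the examples above): the partition is declared as a transform of a regular column, so queries filter on that column directly and Iceberg derives and prunes partitions on its own.

```sql
-- Partition by day, derived from the ts column; no separate date column needed.
CREATE TABLE prod.nyc.taxis (
  id   bigint,
  ts   timestamp,
  fare double
) USING iceberg
PARTITIONED BY (days(ts));

-- The filter is on ts itself; Iceberg maps it to day partitions and skips
-- files outside the range without any extra partition predicate.
SELECT count(*) FROM prod.nyc.taxis
WHERE ts >= TIMESTAMP '2024-01-01 00:00:00';
```

Because the partition scheme is metadata rather than a physical column, it can later be changed (for example from daily to hourly) without rewriting existing data.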
Time Travel and Rollback
Time travel enables reproducible queries that use exactly the same table snapshot, or lets users easily examine changes. Version rollback allows users to quickly correct problems by resetting tables to a good state.
```scala
spark.read.table("taxis").count()
// 2,853,020

val ONE_DAY_MS = 86400000L
val NOW = System.currentTimeMillis()
val YESTERDAY = NOW - ONE_DAY_MS

(spark
  .read
  .option("as-of-timestamp", YESTERDAY)
  .table("taxis")
  .count())
// 2,798,371
```
Data Compaction
Data compaction is supported out of the box, and you can choose from different rewrite strategies, such as bin-packing or sorting, to optimize file layout and size.
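In Spark SQL, compaction runs through the `rewrite_data_files` stored procedure. A minimal sketch, assuming a catalog named `prod` and an illustrative table name:

```sql
-- Bin-pack small files into fewer, larger ones (the default strategy).
CALL prod.system.rewrite_data_files(table => 'nyc.taxis');

-- Or rewrite with a sort to cluster related rows together on disk.
CALL prod.system.rewrite_data_files(
  table      => 'nyc.taxis',
  strategy   => 'sort',
  sort_order => 'id DESC NULLS LAST'
);
```

Compaction rewrites data files in a new snapshot, so readers continue to see a consistent table while it runs.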