Databricks VACUUM Command

Databricks is a unified cloud platform for processing and analyzing vast volumes of data. It is built on Apache Spark, an in-memory analytics engine for big data and machine learning. In this article, we will see how to use the Databricks VACUUM command to remove unused files from a Delta table.

What is VACUUM in the Delta table?

VACUUM removes all files from the table directory that are not managed by Delta, as well as data files that are no longer referenced in the latest state of the table's transaction log and are older than a retention threshold.
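Before deleting anything, it can help to preview what VACUUM would remove and to state the retention window explicitly. The helper below is a hypothetical illustration that just builds the SQL string (the `vacuum_sql` function and the table name are assumptions, not part of Delta's API); in Databricks you would pass the result to `spark.sql(...)`.

```python
def vacuum_sql(table, retain_hours=None, dry_run=False):
    """Build a Delta VACUUM statement (illustrative helper, not a Delta API)."""
    sql = f"VACUUM {table}"
    if retain_hours is not None:
        # RETAIN N HOURS overrides the table's retention threshold for this run
        sql += f" RETAIN {retain_hours} HOURS"
    if dry_run:
        # DRY RUN lists files that would be deleted without deleting them
        sql += " DRY RUN"
    return sql

print(vacuum_sql("20_silver_zendesk_eg.tickets", retain_hours=168, dry_run=True))
# → VACUUM 20_silver_zendesk_eg.tickets RETAIN 168 HOURS DRY RUN
```

Running the DRY RUN variant first is a cheap safety check before the destructive run.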

How to use Databricks VACUUM on Databricks Delta tables

# Find matching databases and run VACUUM on every table in each.
database_names_filter = "20_silver_zendesk_eg"
dbs = spark.sql(f"SHOW DATABASES LIKE '{database_names_filter}'").select("databaseName").collect()
dbs = [row.databaseName for row in dbs]

for database_name in dbs:
    print(f"Found database: {database_name}, performing actions on all its tables..")
    tables = spark.sql(f"SHOW TABLES FROM {database_name}").select("tableName").collect()
    tables = [row.tableName for row in tables]
    for table_name in tables:
        print(f"Performing VACUUM on {table_name}")
        # Shorten the log and deleted-file retention windows before vacuuming
        spark.sql(f"ALTER TABLE {database_name}.{table_name} SET TBLPROPERTIES ('delta.logRetentionDuration'='interval 2 days', 'delta.deletedFileRetentionDuration'='interval 1 days')")
        spark.sql(f"VACUUM {database_name}.{table_name}")
        # Refresh table statistics after files are removed
        spark.sql(f"ANALYZE TABLE {database_name}.{table_name} COMPUTE STATISTICS")

If you run VACUUM on a Delta table, you lose the ability to time travel back to versions older than the specified data retention period.
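As a guardrail against accidental time-travel loss, Delta refuses to VACUUM with a threshold shorter than its 7-day default unless a safety check is disabled. The sketch below assumes an existing `spark` session and an example table name (`my_db.my_table`); treat it as a configuration fragment rather than a recommended default.

```python
# Delta's default retention threshold is 7 days (168 hours).
# To vacuum with a shorter window, the safety check must be disabled first:
spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", "false")

# Now a shorter retention is accepted; versions older than 24 hours
# become unavailable for time travel after this runs.
spark.sql("VACUUM my_db.my_table RETAIN 24 HOURS")
```

Leaving the check enabled in production is usually the safer choice; disable it only for deliberate, short-retention cleanups.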


Kunal Rathi

With over 13 years of experience in data engineering and analytics, I've assisted countless clients in gaining valuable insights from their data. As a dedicated supporter of Data, Cloud and DevOps, I'm excited to connect with individuals who share my passion for this field. If my work resonates with you, we can talk and collaborate.
