
60x faster database clones in Snowflake


Niall Woodward · Saturday, October 22, 2022

Introduction

I had the pleasure of attending dbt’s Coalesce conference in London last week, and dropped into a really great talk by Felipe Leite and Stephen Pastan of Miro. They mentioned how they’d achieved a considerable speed improvement by switching database clones out for multiple table clones. I had to check it out.

Experiments

Results were collected using the following query, which measures the duration of each process by passing the earliest query start time and the last query end time to the DATEDIFF function:

select
    count(*) as query_count,
    datediff(seconds, min(start_time), max(end_time)) as duration,
    sum(credits_used_cloud_services) as credits_used_cloud_services
from snowflake.account_usage.query_history
where query_tag = X;
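
For the query_tag filter to match, each session's query tag needs to be set before an experiment's queries run. A minimal example via the connector (the tag value here is illustrative; the Experiment 1 snippet below instead sets it at connection time through session_parameters):

# Illustrative: tag the session so the measurement query can filter on it
con.cursor().execute("alter session set query_tag = 'test 1';")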

Setup

Create a database with 10 schemas, 100 tables in each:

import snowflake.connector

con = snowflake.connector.connect(
    ...
)

# 10 schemas, each containing 100 single-row tables
for i in range(1, 11):
    con.cursor().execute(f"create schema test.schema_{i};")
    for j in range(1, 101):
        con.cursor().execute(f"create table test.schema_{i}.table_{j} (i number) as (select 1);")

Control - Database clone

create database test_1 clone test;

This operation took 22m 34s to execute.

Results:

Query count: 1
Duration: 22m 34s
Cloud services credits: 0.179

Experiment 1 - Schema level clones

import snowflake.connector
from snowflake.connector import DictCursor

def clone_database_by_schema(con, source_database, target_database):
    con.cursor().execute(f"create database {target_database};")
    cursor = con.cursor(DictCursor)
    cursor.execute(f"show schemas in database {source_database};")
    for i in cursor.fetchall():
        if i["name"] not in ("INFORMATION_SCHEMA", "PUBLIC"):
            # execute_async dispatches the clone without waiting for it to complete
            con.cursor().execute_async(f"create schema {target_database}.{i['name']} clone {source_database}.{i['name']};")

con = snowflake.connector.connect(
    ...
    session_parameters={
        'QUERY_TAG': 'test 2',
    },
)

clone_database_by_schema(con, 'test', 'test_2')  # target database name assumed

Results:

Query count: 12
Duration: 1m 47s
Cloud services credits: 0.148

Using execute_async dispatches each SQL statement without waiting for it to complete, resulting in all 10 schemas being cloned concurrently. From start to finish, that's a whopping 10x faster than the regular database clone.
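
One caveat: execute_async returns before the clones have finished. If you need to block until they all complete (for example, before granting privileges on the new schemas), the connector can poll each query's status. A minimal sketch, assuming the query IDs were collected from each cursor's sfqid attribute after execute_async (wait_for_queries is a hypothetical helper, not part of the original code):

import time

def wait_for_queries(con, query_ids):
    # Poll each async query until Snowflake reports it is no longer running
    for query_id in query_ids:
        while con.is_still_running(con.get_query_status(query_id)):
            time.sleep(0.5)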

Experiment 2 - Table level clones

import snowflake.connector
from snowflake.connector import DictCursor

def clone_database_by_table(con, source_database, target_database):
    con.cursor().execute(f"create database {target_database};")
    cursor = con.cursor(DictCursor)
    cursor.execute(f"show tables in database {source_database};")
    results = cursor.fetchall()
    schemas_to_create = {r['schema_name'] for r in results}
    tables_to_clone = [f"{r['schema_name']}.{r['name']}" for r in results]

    for schema in schemas_to_create:
        con.cursor().execute(f"create schema {target_database}.{schema};")

    for table in tables_to_clone:
        # Dispatch each table clone asynchronously, as in Experiment 1
        con.cursor().execute_async(f"create table {target_database}.{table} clone {source_database}.{table};")

This took 1 minute 48s to complete, the limiting factor being the rate at which the queries could be dispatched by the client (likely due to network waiting times). To help mitigate that, I distributed the commands across 10 threads:

import snowflake.connector
from snowflake.connector import DictCursor
import threading

class ThreadedRunCommands():
    """Helper class for running queries across a configurable number of threads"""
    def __init__(self, con, threads):
        self.threads = threads
        self.register_command_thread = 0
        self.thread_commands = [[] for _ in range(self.threads)]
        self.con = con

    def register_command(self, command):
        # Assign commands to the threads round-robin
        self.thread_commands[self.register_command_thread].append(command)
        self.register_command_thread = (self.register_command_thread + 1) % self.threads

    def run_commands(self, commands):
        # Dispatch each command asynchronously; the threads parallelize the round trips
        for command in commands:
            self.con.cursor().execute_async(command)

    def run(self):
        # Start one thread per command list, then wait for all to finish
        threads = [threading.Thread(target=self.run_commands, args=(commands,)) for commands in self.thread_commands]
        for thread in threads:
            thread.start()
        for thread in threads:
            thread.join()
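
For reference, here's a minimal sketch of how the helper might be wired to the table clones, reusing the names from clone_database_by_table above (this exact wiring is assumed rather than shown in the original; the database and schemas must already exist before the clones are registered):

# Assumed wiring of the threaded helper to the table clones above
runner = ThreadedRunCommands(con, 10)
for table in tables_to_clone:
    runner.register_command(f"create table {target_database}.{table} clone {source_database}.{table};")
runner.run()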

Results:

Query count: 1012
Duration: 22s
Cloud services credits: 0.165

Using 10 threads, the time between the create database command starting and the final create table ... clone command completing was only 22 seconds. This is 60x faster than the create database ... clone command. The bottleneck is still the rate at which queries can be dispatched.

In Summary

The complete results:

Clone Strategy                        Query count   Duration   Cloud services credits
Control - Database clone              1             22m 34s    0.179
Experiment 1 - Schema level clones    12            1m 47s     0.148
Experiment 2 - Table level clones     1012          22s        0.165

All the queries run were cloud services only, and did not require a running warehouse or resume a suspended one.

I hope that Snowflake improves their schema and database clone functionality, but in the meantime, cloning tables seems to be the way to go.

Thanks again to Felipe Leite and Stephen Pastan of Miro for sharing this!

Author
Niall Woodward, Co-founder & CTO of SELECT

Niall is the Co-Founder & CTO of SELECT, a SaaS Snowflake cost management and optimization platform. Prior to starting SELECT, Niall was a data engineer at Brooklyn Data Company and several startups. As an open-source enthusiast, he's also a maintainer of SQLFluff, and creator of three dbt packages: dbt_artifacts, dbt_snowflake_monitoring and dbt_snowflake_query_tags.
