
By Jeff Skoldberg, Saturday, November 15, 2025
Snowflake's architecture separates into three distinct layers. The storage layer holds your data in cloud object storage (S3, Azure Blob, or GCS). The compute layer consists of virtual warehouses that execute your queries and process data. The Cloud Services layer orchestrates these components, handling authentication, metadata management, query compilation, optimization, access control, security, and infrastructure management. It runs on Snowflake-managed compute resources across multiple availability zones. Unlike virtual warehouses where you control the size and runtime, cloud services scales automatically based on workload demands.
Cloud Services billing is unusual: you only pay for it if your daily cloud services consumption exceeds 10% of your daily virtual warehouse usage. Because of this 10% adjustment, many customers never see cloud services charges on their bill. But when those charges do appear, they often signal usage patterns worth investigating.
This article explains how cloud services billing works, how to monitor it, and which patterns drive unexpected costs.
The calculation happens daily, in UTC. Snowflake totals your warehouse compute credits and cloud services credits for the day. The adjustment equals the lesser of 10% of warehouse credits or the cloud services credits used. Your billable amount is cloud services credits minus the adjustment.
Example 1: Under the threshold (no charge)
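The original figures for this example are not reproduced here, so the following uses hypothetical numbers. Suppose a day with 100 warehouse credits and 8 cloud services credits:

```sql
-- Hypothetical day: 100 warehouse credits, 8 cloud services credits
select least(0.10 * 100, 8)     as adjustment,  -- 8: the adjustment absorbs all cloud services usage
       8 - least(0.10 * 100, 8) as billed;      -- 0: nothing to pay
```

Because cloud services usage (8) is under 10% of warehouse usage (10), the adjustment wipes it out entirely.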
Example 2: Over the threshold (partial charge)
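Again with hypothetical numbers: the same 100 warehouse credits, but 15 cloud services credits:

```sql
-- Hypothetical day: 100 warehouse credits, 15 cloud services credits
select least(0.10 * 100, 15)      as adjustment,  -- 10: capped at 10% of warehouse credits
       15 - least(0.10 * 100, 15) as billed;      -- 5: the excess is billed
```

Only the 5 credits above the 10% threshold show up on the bill.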
Important exception: Serverless features like Snowpipe, Automatic Clustering, and Materialized Views do NOT count toward the 10% adjustment. They have separate line-item billing.
Cloud services credits are consumed by activities in Snowflake's services layer, including authentication, metadata operations (SHOW, DESCRIBE, and INFORMATION_SCHEMA queries), query compilation and optimization, access control enforcement, DDL operations, and file listing for COPY commands.
Snowflake's ACCOUNT_USAGE schema provides views for tracking cloud services with 2-hour latency and 365-day retention.
Check which days resulted in actual charges:
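The article's original query is not reproduced here; the sketch below uses the documented METERING_DAILY_HISTORY view. Note that credits_adjustment_cloud_services is stored as a negative number, so adding it to usage yields the billed amount:

```sql
-- Daily cloud services usage, adjustment, and billed amount (last 30 days)
select usage_date,
       sum(credits_used_cloud_services)       as cloud_services_credits,
       sum(credits_adjustment_cloud_services) as adjustment,
       sum(credits_used_cloud_services
           + credits_adjustment_cloud_services) as billed_cloud_services
from snowflake.account_usage.metering_daily_history
where usage_date >= dateadd(day, -30, current_date)
group by usage_date
order by usage_date desc;
```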
Any positive billed_cloud_services value means you were charged that day.
Snowflake's query_history view provides a column called credits_used_cloud_services, which is useful for identifying which query types consume the most cloud services:
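A sketch of that aggregation over the last 30 days:

```sql
-- Cloud services credits by query type, last 30 days
select query_type,
       count(*)                         as query_count,
       sum(credits_used_cloud_services) as cloud_services_credits
from snowflake.account_usage.query_history
where start_time >= dateadd(day, -30, current_timestamp)
group by query_type
order by cloud_services_credits desc;
```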
We can use the same approach to isolate individual queries with high Cloud Services spend.
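For example, sorting individual queries by the same column surfaces the worst offenders:

```sql
-- Top individual queries by cloud services credits, last 7 days
select query_id,
       query_text,
       warehouse_name,
       credits_used_cloud_services
from snowflake.account_usage.query_history
where start_time >= dateadd(day, -7, current_timestamp)
order by credits_used_cloud_services desc
limit 50;
```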
Through working with Snowflake customers, certain patterns consistently emerge as cloud services cost drivers.
Running simple queries like SELECT 1, SELECT CURRENT_SESSION(), or SHOW commands tens of thousands of times per day adds up. Also, queries against INFORMATION_SCHEMA views use cloud services only (no warehouse compute).
Fix: If your query history shows these health checks are running up the bill, reduce their frequency. You can switch to the ACCOUNT_USAGE schema instead of INFORMATION_SCHEMA if the latency is acceptable, but do so with caution: ACCOUNT_USAGE queries require a running warehouse, and it usually makes more sense not to turn one on at all.
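One way to spot high-frequency health checks is to group query_history by its documented query_parameterized_hash column, which collapses repeated queries that differ only in literals:

```sql
-- Queries repeated at high volume: candidates for frequency reduction
select query_parameterized_hash,
       any_value(query_text)            as sample_query,
       count(*)                         as executions,
       sum(credits_used_cloud_services) as cloud_services_credits
from snowflake.account_usage.query_history
where start_time >= dateadd(day, -7, current_timestamp)
group by query_parameterized_hash
order by executions desc
limit 25;
```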
DDL operations are entirely metadata operations. Frequently creating/dropping large schemas or cloning entire databases for backup drives cloud services consumption.
Fix: Clone at the most granular level needed. Clone individual tables instead of schemas, schemas instead of databases. Reduce cloning frequency where possible. Also, ensure only necessary DDL is running in your account.
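As a minimal illustration, with hypothetical database and table names, a table-level clone replaces a much heavier database-level one:

```sql
-- Instead of: create database backup_db clone prod_db;
-- clone only the table you actually need to back up:
create table backup_db.sales.orders clone prod_db.sales.orders;
```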
Snowflake isn't an OLTP system. Single-row inserts consume significant cloud services resources, as these operations often rewrite entire micro-partitions. Additionally, multi-tenant architectures with one schema per customer create excessive metadata.
Fix: Use batch or bulk loads. For multi-tenant apps, when possible, Snowflake recommends using shared schemas with tables clustered on customer_id and secure views for isolation.
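A sketch of that shared-schema pattern, with hypothetical database, schema, and table names:

```sql
-- One shared table for all tenants, clustered on the tenant key
create table app.shared.orders (
    customer_id number,
    order_id    number,
    amount      number(12, 2)
) cluster by (customer_id);

-- A secure view per tenant provides row-level isolation
create secure view app.tenant_42.orders as
select order_id, amount
from app.shared.orders
where customer_id = 42;
```

In practice the tenant filter would typically come from session context or a mapping table rather than a hard-coded literal.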
COPY commands list files from object storage, which uses cloud services compute. Listing thousands of files in order to copy only a few drives high consumption.
Fix: Structure object storage with date prefixes. Use precise file patterns in COPY commands.
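For example, with a hypothetical stage and table, a date prefix plus a precise pattern keeps the file listing small:

```sql
-- List only one day's prefix instead of the entire stage
copy into raw.events
from @events_stage/2025/11/15/
pattern = '.*\\.parquet'
file_format = (type = parquet);
```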
Queries with excessive joins, large set operations (IN, NOT IN, EXISTS), or Cartesian products consume significant cloud services during compilation.
Fix: Simplify query structure. Replace large IN lists with temp tables and JOINs. Break complex queries into CTEs.
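A sketch of the IN-list replacement, with hypothetical table names:

```sql
-- Instead of a huge literal list:
--   select * from orders where customer_id in (1, 2, 3, /* thousands more */ ...);
-- load the ids into a temporary table and join:
create temporary table id_filter (customer_id number);
-- ...bulk load the ids into id_filter, then:
select o.*
from orders o
join id_filter f on o.customer_id = f.customer_id;
```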
Monitor regularly. Set up weekly reviews of cloud services consumption. Establish your baseline so you can spot anomalies quickly.
Batch operations. Whether DDL, DML, or data loading, batching is almost always more efficient than individual operations.
Review third-party tool queries. BI tools, ETL platforms, and orchestration systems often generate metadata query patterns you don't directly control. Configuration changes can significantly reduce unnecessary queries.
Set up alerts. Wrap monitoring queries in scheduled tasks with notification integrations. Better yet, use SELECT monitors for turnkey alerting.
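Snowflake's CREATE ALERT can run a monitoring query on a schedule and notify you when billed cloud services appear. A sketch, assuming a hypothetical warehouse and email notification integration:

```sql
create alert cloud_services_charge_alert
  warehouse = monitor_wh                      -- hypothetical warehouse
  schedule  = 'USING CRON 0 8 * * * UTC'
  if (exists (
        select 1
        from snowflake.account_usage.metering_daily_history
        where usage_date = current_date - 1
        having sum(credits_used_cloud_services
                   + credits_adjustment_cloud_services) > 0
  ))
  then call system$send_email(
         'my_email_integration',              -- hypothetical notification integration
         'data-team@example.com',
         'Snowflake cloud services charge detected',
         'Billed cloud services credits were greater than zero yesterday.');
```

Remember that alerts are created suspended; run ALTER ALERT cloud_services_charge_alert RESUME to activate it.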
Cloud services billing is simple in concept: stay under 10% of warehouse usage and you don't pay for it. But the patterns that drive cloud services consumption are often invisible until they hit your bill.
Many customers never pay for cloud services. If you're seeing consistent charges, start by identifying which pattern applies using the monitoring queries provided. The fixes are usually straightforward once you pinpoint the source.
Let us know if you're having any challenges isolating or optimizing your Snowflake Cloud Services spend.

Jeff Skoldberg is a Sales Engineer at SELECT, helping customers get maximum value out of the SELECT app to reduce their Snowflake spend. Prior to joining SELECT, Jeff was a Data and Analytics Consultant with 15+ years of experience in automating insights and using data to control business processes. From a technology standpoint, he specializes in Snowflake + dbt + Tableau. From a business topic standpoint, he has experience in Public Utility, Clinical Trials, Publishing, CPG, and Manufacturing.
