Understanding Query Costing in BigQuery

On-Demand vs Flat-Rate Pricing

In on-demand mode, BigQuery bills by the bytes each query reads, so seemingly minor SQL edits can multiply the scanned data size and, with it, the cost. Flat-rate customers face different trade-offs but still risk wasting slot capacity on inefficient queries.

Impact of Table Partitioning and Clustering

Failure to leverage partitioning or clustering can lead to full table scans. For example, querying an unpartitioned table with billions of rows—even with a WHERE clause—may still read every row.
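
As a minimal sketch (the table and column names here are hypothetical), a date-partitioned, clustered table lets BigQuery prune data on both dimensions:

CREATE TABLE dataset.events (
  event_date  DATE,
  customer_id STRING,
  payload     STRING
)
PARTITION BY event_date    -- queries filtering on event_date read only matching partitions
CLUSTER BY customer_id     -- filters on customer_id skip non-matching blocks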

Common Root Causes

Unbounded SELECT *

SELECT * is one of the most common and most costly anti-patterns: it reads every column, regardless of how many are actually needed. In wide tables, this can multiply the data scanned by 10x or more.

Non-Selective Filters

Filters on columns that are neither partitioning nor clustering columns do not reduce scan cost (BigQuery has no conventional indexes). Users often assume WHERE clauses reduce cost; this is only true when they reduce the bytes read through partition or cluster pruning.
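
To make the distinction concrete, assume a hypothetical table dataset.events partitioned by event_date (the column names are illustrative): only the first query below benefits from pruning.

-- Prunes: only one day's partition is read
SELECT event_id FROM dataset.events
WHERE event_date = '2024-06-01'

-- Does not prune: status is neither a partition nor a clustering column,
-- so every partition is scanned even if few rows match
SELECT event_id FROM dataset.events
WHERE status = 'FAILED'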

JOINs Without Filters or Keys

Accidental Cartesian products (joins without an ON condition) multiply data volumes, silently inflating job costs. Even legitimate joins can balloon if one side is significantly larger than anticipated.
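
A sketch of the failure mode (table names hypothetical): the comma join below has no join condition, so it produces a full cross product.

-- Accidental CROSS JOIN: with a million rows on each side,
-- the intermediate result is 10^12 rows
SELECT a.id, b.id
FROM table_a a, table_b b

-- Keyed join: output is bounded by matching rows
SELECT a.id, b.id
FROM table_a a
JOIN table_b b ON a.id = b.id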

Diagnostics

Query Execution Details

BigQuery Standard SQL has no EXPLAIN statement; instead, always check the query plan in the Execution details tab of the console (or in the job statistics returned by the API). It breaks down each stage and shows how many bytes are read at each point. To estimate bytes before running a query, use a dry run:

bq query --dry_run --use_legacy_sql=false \
  'SELECT user_id, email FROM `project.dataset.users` WHERE is_active = TRUE'

Job History and Monitoring

Use the INFORMATION_SCHEMA job views to audit job metadata; they make it easy to spot patterns across expensive queries.

SELECT query, total_bytes_processed, start_time
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND total_bytes_processed > 1e12  -- jobs that scanned more than ~1 TB
ORDER BY start_time DESC

Fixing Costly Patterns Step-by-Step

1. Eliminate SELECT *

Explicitly select only the necessary columns. This reduces I/O and speeds up queries.

-- Bad
SELECT * FROM sales_data

-- Good
SELECT sale_id, amount FROM sales_data

2. Use Partition Filters

Always filter on partition fields when available. If querying without a partition filter, BigQuery reads the entire table.

-- Only the last seven days of partitions are scanned
SELECT log_id, message FROM logs
WHERE _PARTITIONTIME BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND CURRENT_TIMESTAMP()
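
To make the unfiltered case fail fast instead of scanning everything, the table owner can require a partition filter (a sketch, assuming the same logs table):

ALTER TABLE logs SET OPTIONS (require_partition_filter = TRUE)
-- Queries without a filter on the partition column are now rejected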

3. Optimize JOIN Strategies

Filter each side of a join as early as possible, for example in WITH clauses, so less data is shuffled; BigQuery broadcasts small tables automatically. Where possible, cluster or partition large tables on the columns you filter before joining.
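
A minimal sketch (table and column names are hypothetical): each side is reduced before the join, so far less data is shuffled.

WITH recent_orders AS (
  SELECT order_id, user_id, amount
  FROM orders
  WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
),
active_users AS (
  SELECT user_id, email
  FROM users
  WHERE is_active = TRUE
)
SELECT o.order_id, u.email, o.amount
FROM recent_orders o
JOIN active_users u ON o.user_id = u.user_id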

Best Practices

  • Enable BI Engine or result caching to reduce repeat scan costs.
  • Use clustering on frequently filtered columns (e.g., user_id, status).
  • Estimate data sizes before running a query with a dry run or TABLESAMPLE; note that LIMIT does not reduce the bytes billed.
  • Set query byte limits using the maximum_bytes_billed parameter in jobs (see the example after this list).
  • Automate anomaly detection using scheduled audits via Cloud Functions or Looker dashboards.
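
As a sketch using the bq CLI (the cap value here is arbitrary), a query estimated to scan more than the cap fails instead of running and billing you:

bq query --maximum_bytes_billed=1000000000 --use_legacy_sql=false \
  'SELECT sale_id, amount FROM dataset.sales_data'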

Conclusion

Unexpected query cost spikes in Google BigQuery are often rooted in subtle inefficiencies—from overusing SELECT * to ignoring partition filters. In large-scale production systems, these inefficiencies translate directly to budget overruns and unstable pipelines. By proactively analyzing job history, optimizing query design, and enforcing architectural best practices, technical leaders can ensure their BigQuery usage is both performant and predictable.

FAQs

1. How do I detect which queries are the most expensive?

Use the INFORMATION_SCHEMA.JOBS_BY_PROJECT view to list queries by total_bytes_processed or total_slot_ms. This helps isolate cost offenders.
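
For example, this sketch (region and lookback window are assumptions to adjust) surfaces the twenty most slot-hungry jobs of the past month:

SELECT query, total_slot_ms, total_bytes_processed
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
ORDER BY total_slot_ms DESC
LIMIT 20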

2. Are partitioned tables always cheaper?

Only if you query using the partition column. Without that filter, BigQuery scans the entire table, making partitioning useless in that case.

3. Can I cap the cost of a BigQuery query?

Yes. Use the maximum_bytes_billed parameter to prevent queries from running if they exceed your budgeted scan size.

4. Should I always use clustering with partitioned tables?

Clustering helps prune data during query execution when filters are applied to clustered columns. Unlike partitioning, it stays effective on high-cardinality fields such as user_id, so the two complement each other.

5. Why does SELECT * cost so much even with a WHERE clause?

Because BigQuery storage is columnar: a query is billed for the full data of every column it references, and SELECT * references all of them. A WHERE clause reduces bytes read only when it prunes partitions or clustered blocks; otherwise, selecting 2 of 10 equally sized columns cuts the scan by roughly 80%, while SELECT * with the same WHERE clause does not.