- Rescanning 4+ years of immutable data
- Costing 100+ credits per run
- Taking 20+ minutes to execute
- When only the last day’s data is actually new
For complex data transformation pipelines with testing, CI/CD, and version control, see the dbt Connector. Incremental Queries are ideal for simpler dashboard updates using pure SQL.
How It Works
When you run a query on Dune, your results are stored. With Incremental Queries, you can reference those stored results in your next run using a special table function.
The Table Function
The TABLE(previous.query.result(...)) function returns your previous query results:
- First run: Returns an empty table (0 rows) with your specified schema
- Subsequent runs: Returns all data from your previous execution
The function requires a DESCRIPTOR() that specifies the column names and types.
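For example, a query that outputs a day and a transaction count might reference its previous results like this. This is a minimal sketch: the column names and types are illustrative, and it assumes the DESCRIPTOR is passed directly as the function's argument as described above.

```sql
SELECT *
FROM TABLE(
    previous.query.result(
        -- Declare the schema of the stored results: the names and types
        -- must match the query's own output columns
        DESCRIPTOR(day date, tx_count bigint)
    )
)
```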
The Pattern
Here’s the standard incremental query pattern.
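A minimal sketch of the pattern, assuming a daily transaction count over ethereum.transactions with a one-day lookback; the columns, default date, and interval are illustrative and should be adapted to your query:

```sql
WITH prev AS (
    -- Previous results: empty (0 rows) on the first run
    SELECT *
    FROM TABLE(previous.query.result(DESCRIPTOR(day date, tx_count bigint)))
),
checkpoint AS (
    -- One-day lookback so the most recent, possibly incomplete, day is recomputed
    SELECT COALESCE(MAX(day), DATE '2015-07-30') - INTERVAL '1' DAY AS cutoff
    FROM prev
)
-- Keep historical rows that don't need recomputing
SELECT day, tx_count
FROM prev
WHERE day < (SELECT cutoff FROM checkpoint)

UNION ALL

-- Query only new/recent data from the source
SELECT
    CAST(date_trunc('day', block_time) AS date) AS day,
    COUNT(*) AS tx_count
FROM ethereum.transactions
WHERE block_time >= (SELECT cutoff FROM checkpoint)
GROUP BY 1
```

Each part of this query corresponds to a component in the table below.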
Pattern Components
| Component | Purpose |
|---|---|
| prev CTE | Loads your previous query results. Returns 0 rows on first run. |
| checkpoint CTE | Finds the max timestamp from previous results. Uses a default if no previous data exists. The lookback ensures incomplete periods get recomputed. |
| First SELECT | Keeps historical data that doesn’t need recomputing (before the cutoff). |
| Second SELECT | Queries only new/recent data from the source table (from cutoff onwards). |
| UNION ALL | Combines old + new into your complete result set. |
When to Use Incremental Queries
Great For
- Scheduled queries that run daily or hourly
- Dashboard refreshes that update visualizations with new data
- Time-series aggregations (daily stats, hourly metrics)
- Append-only data (transactions, events, transfers)
- Cumulative metrics that build over time
Not Ideal For
- Ad-hoc exploratory queries (one-time runs don’t benefit)
- Queries where historical data changes (non-append-only sources)
- Complex multi-way joins that can’t be easily partitioned by a checkpoint column
Key Patterns
Lookback Windows
The lookback window ensures incomplete time periods get recomputed. Choose it based on your aggregation granularity; typical windows are listed under Tips and Best Practices below.
Default Checkpoint Values
Use COALESCE to provide a sensible default for the first run when no previous results exist.
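A sketch showing how the default and the lookback combine in the checkpoint; the column name, default date, and one-day interval are illustrative:

```sql
WITH prev AS (
    SELECT *
    FROM TABLE(previous.query.result(DESCRIPTOR(day date, tx_count bigint)))
),
checkpoint AS (
    -- First run: prev is empty, MAX(day) is NULL, so the default date is used.
    -- Later runs: step back one day so the most recent, possibly incomplete,
    -- day is recomputed.
    SELECT COALESCE(MAX(day), DATE '2015-07-30') - INTERVAL '1' DAY AS cutoff
    FROM prev
)
SELECT cutoff FROM checkpoint
```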
Example Queries
Example 1: Daily Transaction Aggregations
Track daily Ethereum transaction statistics with automatic incremental updates.
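A hedged sketch of such a query; the gas-related columns, their types, and the wei-to-gwei conversion are illustrative and should be checked against the ethereum.transactions schema:

```sql
WITH prev AS (
    SELECT *
    FROM TABLE(previous.query.result(DESCRIPTOR(
        day date,
        tx_count bigint,
        total_gas_used double,
        avg_gas_price_gwei double
    )))
),
checkpoint AS (
    -- One-day lookback: the latest, possibly incomplete, day is recomputed
    SELECT COALESCE(MAX(day), DATE '2015-07-30') - INTERVAL '1' DAY AS cutoff
    FROM prev
)
SELECT day, tx_count, total_gas_used, avg_gas_price_gwei
FROM prev
WHERE day < (SELECT cutoff FROM checkpoint)

UNION ALL

SELECT
    CAST(date_trunc('day', block_time) AS date) AS day,
    COUNT(*) AS tx_count,
    SUM(CAST(gas_used AS double)) AS total_gas_used,
    AVG(CAST(gas_price AS double)) / 1e9 AS avg_gas_price_gwei
FROM ethereum.transactions
WHERE block_time >= (SELECT cutoff FROM checkpoint)
GROUP BY 1
```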
Example 2: Hourly Gas Price Statistics
Track gas price percentiles by hour.
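A sketch of the hourly variant using a one-hour lookback and approx_percentile; column names and the assumption that gas_price is denominated in wei are illustrative:

```sql
WITH prev AS (
    SELECT *
    FROM TABLE(previous.query.result(DESCRIPTOR(
        block_hour timestamp,
        p25_gwei double,
        p50_gwei double,
        p75_gwei double
    )))
),
checkpoint AS (
    -- One-hour lookback so the current, incomplete hour is recomputed
    SELECT COALESCE(MAX(block_hour), TIMESTAMP '2015-07-30 00:00:00') - INTERVAL '1' HOUR AS cutoff
    FROM prev
)
SELECT block_hour, p25_gwei, p50_gwei, p75_gwei
FROM prev
WHERE block_hour < (SELECT cutoff FROM checkpoint)

UNION ALL

SELECT
    date_trunc('hour', block_time) AS block_hour,
    approx_percentile(CAST(gas_price AS double) / 1e9, 0.25) AS p25_gwei,
    approx_percentile(CAST(gas_price AS double) / 1e9, 0.50) AS p50_gwei,
    approx_percentile(CAST(gas_price AS double) / 1e9, 0.75) AS p75_gwei
FROM ethereum.transactions
WHERE block_time >= (SELECT cutoff FROM checkpoint)
GROUP BY 1
```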
Example 3: Cumulative Sums
For cumulative metrics, the pattern is slightly different: select only the base values from previous results and recompute the running total over the combined data. In the sketch below, only day and eth_received are kept from previous results, not the old cumulative_received; the cumulative sum is then recomputed over the combined rows. This is simpler and more reliable than trying to “continue” from the last cumulative value.
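A sketch assuming we track daily ETH received by one address; the placeholder address, the wei-to-ETH conversion, and the default date are illustrative:

```sql
WITH prev AS (
    SELECT *
    FROM TABLE(previous.query.result(DESCRIPTOR(
        day date,
        eth_received double,
        cumulative_received double
    )))
),
checkpoint AS (
    SELECT COALESCE(MAX(day), DATE '2015-07-30') - INTERVAL '1' DAY AS cutoff
    FROM prev
),
combined AS (
    -- Keep only the base values from previous results; the old running total is dropped
    SELECT day, eth_received
    FROM prev
    WHERE day < (SELECT cutoff FROM checkpoint)

    UNION ALL

    SELECT
        CAST(date_trunc('day', block_time) AS date) AS day,
        SUM(CAST(value AS double)) / 1e18 AS eth_received
    FROM ethereum.transactions
    WHERE "to" = 0x0000000000000000000000000000000000000000  -- placeholder: the address you track
      AND block_time >= (SELECT cutoff FROM checkpoint)
    GROUP BY 1
)
-- Recompute the cumulative sum over the combined history
SELECT
    day,
    eth_received,
    SUM(eth_received) OVER (ORDER BY day) AS cumulative_received
FROM combined
```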
Incremental Queries vs dbt Connector
The dbt Connector is Dune’s full-featured solution for production data pipelines. It provides everything data teams need: multiple incremental strategies (merge, delete+insert, append), built-in testing, auto-generated documentation, Git version control, and CI/CD integration with tools like GitHub Actions and Airflow. Incremental Queries give you a taste of that power with zero setup. They provide the core incremental processing capability (referencing previous results and querying only new data) using pure SQL patterns you already know.
When to Use Each
Use Incremental Queries when:
- You want immediate cost savings with no setup
- You’re updating dashboards or running scheduled queries
- Your logic fits in a single query
- You don’t need formal testing or CI/CD
Use the dbt Connector when:
- You’re building production data pipelines
- You need data quality tests and validation
- Multiple team members collaborate on transformations
- You want Git-based version control and PR reviews
- You need to chain multiple dependent transformations
- You require audit trails and documentation
Feature Comparison
| Capability | Incremental Queries | dbt Connector |
|---|---|---|
| Incremental processing | Single pattern | 3 strategies (merge, delete+insert, append) |
| Setup required | None | dbt project configuration |
| Data quality testing | Manual | Built-in framework |
| Documentation | Manual | Auto-generated |
| Version control | Dune query versioning | Full Git integration |
| CI/CD pipelines | Not supported | GitHub Actions, Airflow, etc. |
| Model dependencies | Not supported | Automatic ordering |
| Availability | All users | Enterprise |
Tips and Best Practices
Choose Appropriate Lookback Windows
Match your lookback window to your aggregation granularity and data characteristics:
- Daily aggregations typically need a 1-day lookback
- Hourly metrics need a 1-hour lookback
- TVL calculations might need a 7-day lookback due to price changes and rebalancing
Align Checkpoints with Partitions
Dune tables are partitioned by block_date or block_time. Using time-based checkpoints enables efficient partition pruning.
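A small illustration; the stubbed cutoff is a placeholder, and in a real incremental query it would come from the checkpoint CTE over previous results:

```sql
WITH checkpoint AS (
    -- Stub: in the incremental pattern this is derived from previous results
    SELECT DATE '2024-01-01' AS cutoff
)
SELECT COUNT(*) AS tx_count
FROM ethereum.transactions
-- Filtering on the partition column lets the engine skip older partitions entirely
WHERE block_date >= (SELECT cutoff FROM checkpoint)
```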
First Run Costs
The first run of an incremental query processes all historical data and will be expensive. Subsequent runs will be dramatically cheaper. Plan accordingly:
- Run the initial query during off-peak hours if possible
- Consider the first run cost as a one-time investment
Schema Consistency
Ensure your DESCRIPTOR schema exactly matches the columns and types your query produces. Mismatches will cause errors.
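For instance, if the DESCRIPTOR declares day as a date and tx_count as a bigint, the final output must produce exactly those columns and types. The sketch below (with illustrative names, and the checkpoint/UNION plumbing omitted for brevity) shows the two sides lining up:

```sql
WITH prev AS (
    -- Declared schema: (day date, tx_count bigint)
    SELECT *
    FROM TABLE(previous.query.result(DESCRIPTOR(day date, tx_count bigint)))
)
-- The output below matches that schema exactly
SELECT
    CAST(date_trunc('day', block_time) AS date) AS day,  -- cast to date, not timestamp
    COUNT(*) AS tx_count                                 -- COUNT(*) returns bigint
FROM ethereum.transactions
GROUP BY 1
```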