As part of the op-geth deprecation, all existing op-geth services are migrating to op-reth. This guide covers how to set up an op-reth node with the proofs-history Execution Extension (ExEx) to efficiently compute historical proofs within a configurable retention window.
Problem
Reth's default eth_getProof implementation works by reverting in-memory state diffs backward from the current tip. For blocks older than ~7 days, this causes unbounded memory growth and out-of-memory (OOM) crashes — a critical issue for infrastructure serving rollup fault proofs and indexers that query historical state.
Solution
The proofs-history ExEx implements a Versioned State Store that tracks intermediate Merkle Patricia Trie nodes tagged by block number. This enables direct O(1) lookups of proofs at any historical block within a configurable retention window, without reverting state. The ExEx processes blocks asynchronously, so it adds zero overhead to sync speed and negligible tip latency.
Architecture
| Table | Contents |
|---|---|
| AccountTrieHistory | Branch nodes of the account trie, versioned by block |
| StorageTrieHistory | Branch nodes of per-account storage tries, versioned by block |
| HashedAccountHistory | Account leaf data (balance, nonce, etc.), versioned by block |
| HashedStorageHistory | Storage slot values, versioned by block |
A BlockChangeSet reverse index enables efficient pruning: given a block number, the pruner knows exactly which keys were modified at that block and can delete only those entries.
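As a conceptual sketch (in Python, purely illustrative — the ExEx itself is written in Rust), the versioned store can be modeled as a map from each trie key to a sorted list of (block, value) versions, plus a per-block changeset acting as the reverse index: a query at block N binary-searches for the newest version at or below N, and the pruner consults the changeset to touch only the keys modified by expired blocks.

```python
import bisect
from collections import defaultdict

class VersionedStateStore:
    """Toy model of the proofs-history versioned store (illustrative only)."""

    def __init__(self):
        self.blocks = defaultdict(list)    # key -> sorted block numbers
        self.values = defaultdict(list)    # key -> values, parallel to blocks
        self.changeset = defaultdict(set)  # block -> keys modified (BlockChangeSet)

    def put(self, key, block, value):
        # Blocks commit in order, so appending keeps the lists sorted.
        self.blocks[key].append(block)
        self.values[key].append(value)
        self.changeset[block].add(key)

    def get_at(self, key, block):
        # O(log n) lookup: newest version at or below `block`.
        i = bisect.bisect_right(self.blocks[key], block)
        return self.values[key][i - 1] if i else None

    def prune(self, block):
        # Drop versions written at `block` that have been superseded by a
        # newer version, so lookups inside the window are unaffected.
        for key in self.changeset.pop(block, ()):
            i = self.blocks[key].index(block)
            if i < len(self.blocks[key]) - 1:  # a newer version exists
                del self.blocks[key][i]
                del self.values[key][i]
```

The real store keys versions inside MDBX tables rather than in-memory lists, but the lookup shape — find the newest version at or below the requested block — is the same idea.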
Prerequisites
- op-reth v2.1.0 or above.
- An op-reth datadir, either initialized from genesis or restored from a snapshot (covered in the next two sections).
- Basic understanding of execution client configuration.
- Sufficient disk: estimate chain_size + 20% buffer for the proofs database (e.g., ~1 TB for 4 weeks on Base at 2s block time).
- NVMe SSD recommended.
Installation
First, clone and build the op-reth binary from the op-rs fork. The built binary will be located at ./target/release/op-reth.
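A build sketch — the repository URL is an assumption (substitute the actual op-rs fork location), and a recent Rust toolchain is assumed to be installed:

```shell
# Hypothetical repo URL — replace with the actual op-rs fork.
git clone https://github.com/op-rs/op-reth.git
cd op-reth

# Build the release binary.
cargo build --release --bin op-reth

# Confirm the binary runs.
./target/release/op-reth --version
```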
Initialization Steps
Running op-reth with historical proofs requires a two-step initialization process:
1. Initialize the standard op-reth database.
2. Initialize the specific proofs storage.
1. Initialize op-reth
Initialize the core database with the genesis file for your chosen chain (e.g., optimism).
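A sketch of the init step — the flags follow upstream reth conventions, and the datadir path is a placeholder:

```shell
# Initialize the core database from genesis (placeholder datadir).
op-reth init \
  --chain optimism \
  --datadir /data/op-reth
```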
Option: Start from a Snapshot
If you prefer to start from a pre-synchronized database snapshot instead of syncing from genesis:
- Download and extract an op-reth snapshot from datadirs.optimism.io into your datadir.
- Skip the op-reth init command (step 1 above).
- Proceed to Initialize Proofs Storage (step 2 below).
The proofs init command initializes the proofs database at the snapshot's chain tip — it does not retroactively populate proofs for blocks already in the snapshot.
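A download-and-extract sketch — the archive name is a placeholder, so pick the actual snapshot listed at datadirs.optimism.io for your chain, and adjust the datadir path:

```shell
# <snapshot-archive> is a placeholder — look it up at datadirs.optimism.io.
curl -L -o snapshot.tar.zst "https://datadirs.optimism.io/<snapshot-archive>"

# Decompress and extract directly into the op-reth datadir.
zstd -d --stdout snapshot.tar.zst | tar -xf - -C /data/op-reth
```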
2. Initialize Proofs Storage
Initialize the separate storage used by the ExEx to store historical proofs. This step is required before starting the node with --proofs-history, even when reusing an existing op-reth datadir or restoring from a snapshot.
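A sketch of the proofs-init step — the exact invocation may differ in your op-reth version, and the datadir path is a placeholder:

```shell
# Marks the current chain tip as the starting point of the proofs database.
op-reth proofs init --datadir /data/op-reth
```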
proofs init completes in seconds. It does not backfill historical proofs — it simply marks the current chain tip as the starting point of the proofs database. Once the node is running with --proofs-history, the proofs database fills forward as new blocks are committed. To serve proofs across the full retention window (e.g., 30 days for fault proofs at default settings), the node must run continuously for at least that long after initialization.
Running op-reth
Once initialized, you can start the execution client with the --proofs-history flag enabled.
Make sure to include all other standard flags required for your network (e.g., specific bootnodes). See the configuration reference for details.
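A minimal launch sketch — paths, ports, and the window value are placeholders (1,296,000 blocks ≈ 30 days at a 2s block time); only --proofs-history and --proofs-history.window come from this guide, the rest follow upstream reth conventions:

```shell
# Placeholder paths and values — add your network's bootnodes and other flags.
op-reth node \
  --chain optimism \
  --datadir /data/op-reth \
  --authrpc.jwtsecret /secrets/jwt.hex \
  --proofs-history \
  --proofs-history.window 1296000 \
  --metrics 0.0.0.0:9001
```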
Running Consensus Node
Start the consensus client to drive the execution client. You can use either op-node or kona-node.
- op-node
- kona-node
op-node is the reference implementation of the OP Stack consensus client, written in Go.
Verification
After starting both clients, query the sync status of the proofs store via the debug_proofsSyncStatus RPC method:
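A query sketch — this assumes the HTTP RPC server is enabled on localhost:8545 with the debug namespace exposed:

```shell
# Returns the earliest/latest bounds of the proofs store.
curl -s -X POST http://localhost:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"debug_proofsSyncStatus","params":[],"id":1}'
```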
Immediately after proofs init, both earliest and latest sit at the chain tip; the window then fills forward as new blocks are committed. eth_getProof calls for every block within [earliest, latest] will be served from the versioned store. Requests for blocks older than earliest will fail or fall back to the default reth implementation.
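To exercise the store directly, an eth_getProof sketch — the address is the WETH predeploy on OP Stack chains, the block number is a placeholder and must lie inside the [earliest, latest] window, and the HTTP RPC is assumed on localhost:8545:

```shell
# Proof for the WETH predeploy at a placeholder historical block (0x112a880).
curl -s -X POST http://localhost:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_getProof","params":["0x4200000000000000000000000000000000000006",[],"0x112a880"],"id":1}'
```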
You can also check the op-reth startup logs for messages confirming the proofs-history ExEx is wired up:
Once op-node/kona-node and op-reth are connected and syncing, op-reth should start importing blocks and the proofs storage should begin populating.
Monitoring
When op-reth is run with the --metrics=<addr>:<port> flag, the proofs-history ExEx exposes Prometheus metrics covering proofs-DB sync state (optimism_trie_block_*), the background pruner (optimism_trie_pruner_*), and eth_getProof RPC traffic (optimism_rpc_eth_api_ext_*). See the historical proof configuration reference for the full list.
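A quick scrape sketch — this assumes op-reth was started with --metrics 0.0.0.0:9001:

```shell
# List the proofs-history metric families exposed by the node.
curl -s http://localhost:9001/metrics \
  | grep -E 'optimism_(trie_block|trie_pruner|rpc_eth_api_ext)'
```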
Operational Commands
Manual prune
Pruning runs automatically in the background on the configured --proofs-history.prune-interval (default 15s) and removes data outside the retention window. You should not need to invoke op-reth proofs prune under normal operation.
A manual prune is only required in one situation: at startup, if the proofs database contains more than 1000 blocks of history beyond the configured --proofs-history.window, the node refuses to start rather than stalling on a large prune operation. This typically happens after the node has been offline long enough that the configured window has shifted significantly, or after reducing --proofs-history.window to a smaller value than was previously in use.
When this happens, op-reth exits with an error indicating the number of blocks to prune. Run the prune command once to bring the database back within the safety threshold, then restart the node:
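A one-off prune sketch — the datadir path is a placeholder:

```shell
# Bring the proofs DB back inside the configured window, then restart the node.
op-reth proofs prune --datadir /data/op-reth
```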
Unwind
Recover from corruption by reverting the proofs database to a specific block. You can only unwind to a block after the earliest block number in the database; unwinding to a block before the earliest will fail.
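An unwind sketch — the subcommand is assumed by analogy with proofs prune, and --to-block is a hypothetical flag name; check the historical proof configuration reference for the actual invocation:

```shell
# Hypothetical flag name and placeholder block number — verify against the reference.
op-reth proofs unwind --datadir /data/op-reth --to-block 12345678
```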
Performance
Benchmarked on Base Sepolia (~700k block window, WETH contract):
| Metric | Value |
|---|---|
| Avg latency | ~15 ms per eth_getProof |
| Throughput | ~5,000 req/s (10 concurrent workers) |
| Sync overhead | Zero (ExEx processes asynchronously) |
| Memory | Bounded by window size — no OOM risk |
Next steps
- See the op-reth historical proof configuration reference for the full set of --proofs-history.* flags, management commands, RPC endpoints, and Prometheus metrics.
- See the op-reth configuration reference for all standard op-reth flags inherited by this fork.