As part of the op-geth deprecation, all existing op-geth services are migrating to op-reth. This guide covers how to set up an op-reth node with the proofs-history Execution Extension (ExEx) to efficiently compute historical proofs within a configurable retention window.

Problem

Reth’s default eth_getProof implementation works by reverting in-memory state diffs backward from the current tip. For blocks older than ~7 days, this causes unbounded memory growth and out-of-memory (OOM) crashes — a critical issue for infrastructure serving rollup fault proofs and indexers that query historical state.

Solution

The proofs-history ExEx implements a Versioned State Store that tracks intermediate Merkle Patricia Trie nodes tagged by block number. This enables direct O(1) lookups of proofs at any historical block within a configurable retention window, without reverting state. The ExEx processes blocks asynchronously, so it adds zero overhead to sync speed and negligible tip latency.

Architecture

op-reth node
├── Standard reth pipeline (sync, EVM, state)
├── proofs-history ExEx (ingests committed blocks → versioned trie store)
├── Pruner task (background, removes data outside retention window)
└── RPC overrides (eth_getProof, debug_executePayload, debug_executionWitness)
The versioned store lives in a separate MDBX database and maintains four tables:
Table                  Contents
AccountTrieHistory     Branch nodes of the account trie, versioned by block
StorageTrieHistory     Branch nodes of per-account storage tries, versioned by block
HashedAccountHistory   Account leaf data (balance, nonce, etc.), versioned by block
HashedStorageHistory   Storage slot values, versioned by block
A BlockChangeSet reverse index enables efficient pruning: given a block number, the pruner knows exactly which keys were modified and can delete only those entries.
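The reverse-index idea can be sketched with a throwaway filesystem store. This is purely illustrative (the real store is MDBX, and `put`/`prune_block` are invented names): each versioned entry is a file named `key@block`, and each changeset file records which keys a block touched, so pruning a block deletes only those entries.

```shell
# Conceptual sketch of the BlockChangeSet reverse index (not the real MDBX schema).
DB=$(mktemp -d)

put() { # put <block> <key> <value>: write a versioned entry and record it in the changeset
  echo "$3" > "$DB/$2@$1"
  echo "$2" >> "$DB/changeset-$1"
}

prune_block() { # delete only the entries written at <block>
  while read -r key; do
    rm -f "$DB/$key@$1"
  done < "$DB/changeset-$1"
  rm -f "$DB/changeset-$1"
}

put 100 accountA 0x01
put 100 accountB 0x02
put 101 accountA 0x03
prune_block 100   # removes accountA@100, accountB@100, and changeset-100 only

REMAINING=$(ls "$DB" | wc -l | tr -d ' ')
echo "$REMAINING"   # 2: accountA@101 and changeset-101 survive
rm -rf "$DB"
```

The point of the reverse index is that pruning never scans the whole store: deleting a block is proportional to the number of keys that block modified.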

Prerequisites

  • op-reth v1.11.4-rc.2 or above, fully synced to chain tip.
  • Basic understanding of execution client configuration.
  • Sufficient disk: estimate chain_size + 20% buffer for the proofs database (e.g., ~1 TB for 4 weeks on Base at 2s block time).
  • NVMe SSD recommended.
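Since the retention window is configured in blocks, a quick conversion from days helps with disk sizing. Assuming a 2-second block time (Base / OP Mainnet; adjust BLOCK_TIME for other chains):

```shell
# Blocks per retention window: days * seconds-per-day / block time.
DAYS=30
BLOCK_TIME=2
BLOCKS=$(( DAYS * 86400 / BLOCK_TIME ))
echo "$BLOCKS blocks for a $DAYS-day window"   # 1296000 blocks
```

1,296,000 blocks is the 30-day value used in the manual prune example later in this guide.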

Installation

First, clone the Optimism monorepo and build the op-reth binary.
git clone --depth 1 --branch develop --single-branch https://github.com/ethereum-optimism/optimism.git
cd optimism/rust/op-reth
cargo build --release --bin op-reth
The binary will be located at ./target/release/op-reth.

Initialization Steps

Running op-reth with historical proofs requires a two-step initialization process:
  1. Initialize the standard op-reth database.
  2. Initialize the specific proofs storage.

1. Initialize op-reth

Initialize the core database with the genesis file for your chosen chain (e.g., optimism).
./target/release/op-reth init \
  --datadir="/path/to/datadir" \
  --chain="optimism"

Option: Start from a Snapshot

If you prefer to start from a pre-synchronized database snapshot instead of syncing from genesis:
  1. Download and extract the op-reth snapshot into your datadir.
  2. Skip the op-reth init command (step 1 above).
  3. Proceed to Initialize Proofs Storage (step 2 below). The proofs init command will initialize the proofs storage based on the state present in the snapshot.

2. Initialize Proofs Storage

Initialize the separate storage used by the ExEx to store historical proofs.
./target/release/op-reth proofs init \
  --datadir="/path/to/datadir" \
  --chain="optimism" \
  --proofs-history.storage-path="/path/to/proofs-db"

Running op-reth

Once initialized, you can start the execution client with the --proofs-history flag enabled.
./target/release/op-reth node \
  --datadir="/path/to/datadir" \
  --chain="optimism" \
  --proofs-history \
  --proofs-history.storage-path="/path/to/proofs-db" \
  --proofs-history.window=956200 \
  --proofs-history.prune-interval=10s \
  --http \
  --http.port=8545 \
  --http.addr=0.0.0.0 \
  --http.corsdomain="*" \
  --http.api=admin,net,eth,web3,debug,trace,txpool \
  --ws \
  --ws.addr=0.0.0.0 \
  --ws.port=8546 \
  --ws.api=net,eth,web3,debug,txpool \
  --ws.origins="*" \
  --authrpc.port=8551 \
  --authrpc.jwtsecret="/path/to/jwt.hex" \
  --authrpc.addr=0.0.0.0 \
  --discovery.port=30303 \
  --port=30303 \
  --metrics=0.0.0.0:9001 \
  --rollup.sequencerhttp="https://mainnet-sequencer.optimism.io"
Make sure to include all other standard flags required for your network (e.g., specific bootnodes). See the configuration reference for details.
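Note that --proofs-history.window is denominated in blocks, not time. Converting the example value above at a 2-second block time:

```shell
# Rough window length in days for the example flag value.
WINDOW=956200
BLOCK_TIME=2
DAYS=$(( WINDOW * BLOCK_TIME / 86400 ))
echo "$WINDOW blocks ~ $DAYS days"   # roughly 22 days
```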

Running Consensus Node

Start the consensus client to drive the execution client. You can use either op-node or kona-node.
op-node is the reference implementation of the OP Stack consensus client, written in Go.
op-node \
  --l2=http://127.0.0.1:8551 \
  --l2.jwt-secret="/path/to/jwt.hex" \
  --verifier.l1-confs=1 \
  --network="optimism" \
  --rpc.addr=0.0.0.0 \
  --rpc.port=8547 \
  --l1=<L1_RPC_URL> \
  --l1.beacon=<BEACON_RPC_URL> \
  --p2p.advertise.ip=<YOUR_PUBLIC_IP> \
  --p2p.advertise.tcp=9003 \
  --p2p.advertise.udp=9003 \
  --p2p.listen.ip=0.0.0.0 \
  --p2p.listen.tcp=9003 \
  --p2p.listen.udp=9003 \
  --safedb.path="/path/to/op-node-db"
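Both clients must share the same 32-byte JWT secret: op-reth reads it via --authrpc.jwtsecret and op-node via --l2.jwt-secret. One common way to generate it:

```shell
# Generate a 32-byte hex JWT secret shared by both clients.
JWT_PATH=$(mktemp)                 # use /path/to/jwt.hex in a real deployment
openssl rand -hex 32 > "$JWT_PATH"
SECRET=$(cat "$JWT_PATH")
echo "${#SECRET} hex characters"   # 64
```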

Verification

After starting both clients, query the sync status of the proofs store:
debug_proofsSyncStatus → { "earliest": <block>, "latest": <block> }
Once latest tracks the chain tip, eth_getProof calls for every block within [earliest, latest] will be served from the versioned store. You can also check the op-reth logs for initialization messages:
INFO [..] ExEx initialized ...
Ensure op-node/kona-node and op-reth are connected and syncing. op-reth should start importing blocks and the proofs storage should begin populating.
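The verification calls can be made over plain JSON-RPC against the HTTP port configured above. A sketch of the request bodies (the account below is the OP Stack WETH predeploy; the block number 0x7a1200 is a hypothetical value you should replace with one inside [earliest, latest]):

```shell
# JSON-RPC body for the proofs sync status check.
STATUS_PAYLOAD='{"jsonrpc":"2.0","id":1,"method":"debug_proofsSyncStatus","params":[]}'
echo "$STATUS_PAYLOAD"

# JSON-RPC body for a historical eth_getProof pinned to a specific block.
PROOF_PAYLOAD='{"jsonrpc":"2.0","id":2,"method":"eth_getProof","params":["0x4200000000000000000000000000000000000006",[],"0x7a1200"]}'
echo "$PROOF_PAYLOAD"

# With the node running (flags as configured above), send either body with:
# curl -s -X POST -H 'Content-Type: application/json' --data "$STATUS_PAYLOAD" http://127.0.0.1:8545
```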

Operational Commands

Manual prune (if the node fails startup due to a gap > 1000 blocks):
op-reth proofs prune \
  --datadir /path/to/reth-datadir \
  --proofs-history.storage-path /path/to/proofs-db \
  --proofs-history.window 1296000
Unwind (recover from corruption by reverting to a specific block):
op-reth proofs unwind \
  --datadir /path/to/reth-datadir \
  --proofs-history.storage-path /path/to/proofs-db \
  --target <BLOCK_NUMBER>
You can only unwind to a block after the earliest block number in the database; attempting to unwind to an earlier block will fail.
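A wrapper script can enforce this constraint before invoking the unwind command. A sketch with hypothetical values (in practice EARLIEST would come from debug_proofsSyncStatus):

```shell
# Refuse to unwind to a target that is not after the store's earliest block.
EARLIEST=1000000   # hypothetical value from debug_proofsSyncStatus
TARGET=1500000     # hypothetical unwind target

if [ "$TARGET" -gt "$EARLIEST" ]; then
  RESULT="ok"
  echo "unwinding to $TARGET"
  # op-reth proofs unwind --datadir /path/to/reth-datadir \
  #   --proofs-history.storage-path /path/to/proofs-db --target "$TARGET"
else
  RESULT="refused"
  echo "target $TARGET is not after earliest ($EARLIEST); unwind would fail"
fi
```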

Performance

Benchmarked on Base Sepolia (~700k block window, WETH contract):
Metric          Value
Avg latency     ~15 ms per eth_getProof
Throughput      ~5,000 req/s (10 concurrent workers)
Sync overhead   Zero (ExEx processes asynchronously)
Memory          Bounded by window size (no OOM risk)