

As part of the op-geth deprecation, all existing op-geth services are migrating to op-reth. This guide covers how to set up an op-reth node with the proofs-history Execution Extension (ExEx) to efficiently compute historical proofs within a configurable retention window.

Problem

Reth’s default eth_getProof implementation works by reverting in-memory state diffs backward from the current tip. For blocks older than ~7 days, this causes unbounded memory growth and out-of-memory (OOM) crashes — a critical issue for infrastructure serving rollup fault proofs and indexers that query historical state.

Solution

The proofs-history ExEx implements a Versioned State Store that tracks intermediate Merkle Patricia Trie nodes tagged by block number. This enables direct O(1) lookups of proofs at any historical block within a configurable retention window, without reverting state. The ExEx processes blocks asynchronously, so it adds zero overhead to sync speed and negligible tip latency.
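From a client's perspective, a historical proof is still an ordinary eth_getProof call, just pinned to an old block. A minimal sketch of constructing such a request: the WETH predeploy address is real on OP Stack chains, but the block number and endpoint are placeholders, and the payload is validated locally before being sent.

```shell
# Build an eth_getProof request pinned to a historical block.
# 0x4200000000000000000000000000000000000006 is the WETH predeploy;
# the block number (0x7a1200 = 8,000,000) is illustrative.
REQ='{"jsonrpc":"2.0","method":"eth_getProof","params":["0x4200000000000000000000000000000000000006",[],"0x7a1200"],"id":1}'

# Sanity-check that the payload is valid JSON before sending it.
echo "$REQ" | python3 -m json.tool > /dev/null && echo "request OK"

# With a node running with --proofs-history:
# curl -s -X POST -H "Content-Type: application/json" --data "$REQ" http://localhost:8545
```

As long as the requested block falls inside the retention window, the ExEx answers from the versioned store rather than replaying state diffs.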

Architecture

op-reth node
├── Standard reth pipeline (sync, EVM, state)
├── proofs-history ExEx (ingests committed blocks → versioned trie store)
├── Pruner task (background, removes data outside retention window)
└── RPC overrides (eth_getProof, debug_executePayload, debug_executionWitness)
The versioned store lives in a separate MDBX database and maintains four tables:

Table                   Contents
AccountTrieHistory      Branch nodes of the account trie, versioned by block
StorageTrieHistory      Branch nodes of per-account storage tries, versioned by block
HashedAccountHistory    Account leaf data (balance, nonce, etc.), versioned by block
HashedStorageHistory    Storage slot values, versioned by block

A BlockChangeSet reverse index enables efficient pruning: given a block number, the pruner knows exactly which keys were modified and can delete only those entries.

Prerequisites

  • op-reth v2.1.0 or above.
  • An op-reth datadir, either initialized from genesis or restored from a snapshot (covered in the next two sections).
  • Basic understanding of execution client configuration.
  • Sufficient disk: estimate chain_size + 20% buffer for the proofs database (e.g., ~1 TB for 4 weeks on Base at 2s block time).
  • NVMe SSD recommended.
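The disk guidance above is easy to sanity-check with quick arithmetic. A sketch, assuming a hypothetical 800 GB chain datadir:

```shell
# Rough disk estimate: chain size plus a 20% buffer for the proofs database.
# CHAIN_GB is a placeholder; substitute your chain's current datadir size.
CHAIN_GB=800
NEEDED_GB=$((CHAIN_GB * 120 / 100))
echo "plan for at least ${NEEDED_GB} GB"
```

Remember this covers only the proofs database headroom; the base chain data itself grows over time as well.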

Installation

First, clone and build the op-reth binary from the develop branch of the Optimism monorepo:

git clone --depth 1 --branch develop --single-branch https://github.com/ethereum-optimism/optimism.git
cd optimism/rust/op-reth
cargo build --release --bin op-reth

The binary will be located at ./target/release/op-reth.
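Before proceeding, it is worth confirming the build actually produced a runnable binary. A small sketch (the exact version output will vary by build):

```shell
# Check that the release binary exists and is executable.
BIN=./target/release/op-reth
if [ -x "$BIN" ]; then
  "$BIN" --version        # prints the build's version string
else
  echo "binary not found at $BIN (build may have failed)"
fi
```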

Initialization Steps

Running op-reth with historical proofs requires a two-step initialization process:
  1. Initialize the standard op-reth database.
  2. Initialize the specific proofs storage.

1. Initialize op-reth

Initialize the core database with the genesis file for your chosen chain (e.g., optimism).
./target/release/op-reth init \
  --datadir="/path/to/datadir" \
  --chain="optimism"

Option: Start from a Snapshot

If you prefer to start from a pre-synchronized database snapshot instead of syncing from genesis:
  1. Download and extract an op-reth snapshot from datadirs.optimism.io into your datadir.
  2. Skip the op-reth init command (step 1 above).
  3. Proceed to Initialize Proofs Storage (step 2 below). The proofs init command initializes the proofs database at the snapshot’s chain tip — it does not retroactively populate proofs for blocks already in the snapshot.

2. Initialize Proofs Storage

Initialize the separate storage used by the ExEx to store historical proofs. This step is required before starting the node with --proofs-history, even when reusing an existing op-reth datadir or restoring from a snapshot.
./target/release/op-reth proofs init \
  --datadir="/path/to/datadir" \
  --chain="optimism" \
  --proofs-history.storage-path="/path/to/proofs-db"
proofs init completes in seconds. It does not backfill historical proofs — it simply marks the current chain tip as the starting point of the proofs database. Once the node is running with --proofs-history, the proofs database fills forward as new blocks are committed. To serve proofs across the full retention window (e.g., 30 days for fault proofs at default settings), the node must run continuously for at least that long after initialization.

Running op-reth

Once initialized, you can start the execution client with the --proofs-history flag enabled.
./target/release/op-reth node \
  --datadir="/path/to/datadir" \
  --chain="optimism" \
  --proofs-history \
  --proofs-history.storage-path="/path/to/proofs-db" \
  --http \
  --http.port=8545 \
  --http.addr=0.0.0.0 \
  --http.corsdomain="*" \
  --http.api=admin,net,eth,web3,debug,trace,txpool \
  --ws \
  --ws.addr=0.0.0.0 \
  --ws.port=8546 \
  --ws.api=net,eth,web3,debug,txpool \
  --ws.origins="*" \
  --authrpc.port=8551 \
  --authrpc.jwtsecret="/path/to/jwt.hex" \
  --authrpc.addr=0.0.0.0 \
  --discovery.port=30303 \
  --port=30303 \
  --metrics=0.0.0.0:9001 \
  --rollup.sequencerhttp="https://mainnet-sequencer.optimism.io"
Make sure to include all other standard flags required for your network (e.g., specific bootnodes). See the configuration reference for details.

Running a Consensus Node

Start the consensus client to drive the execution client. You can use either op-node or kona-node.
op-node is the reference implementation of the OP Stack consensus client, written in Go.
op-node \
  --l2=http://127.0.0.1:8551 \
  --l2.jwt-secret="/path/to/jwt.hex" \
  --verifier.l1-confs=1 \
  --network="optimism" \
  --rpc.addr=0.0.0.0 \
  --rpc.port=8547 \
  --l1=<L1_RPC_URL> \
  --l1.beacon=<BEACON_RPC_URL> \
  --p2p.advertise.ip=<YOUR_PUBLIC_IP> \
  --p2p.advertise.tcp=9003 \
  --p2p.advertise.udp=9003 \
  --p2p.listen.ip=0.0.0.0 \
  --p2p.listen.tcp=9003 \
  --p2p.listen.udp=9003 \
  --safedb.path="/path/to/op-node-db"

Verification

After starting both clients, query the sync status of the proofs store via the debug_proofsSyncStatus RPC method:
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"debug_proofsSyncStatus","params":[],"id":1}' \
  http://localhost:8545
The response has the shape:
{"jsonrpc":"2.0","id":1,"result":{"earliest":<block>,"latest":<block>}}
Immediately after proofs init, both earliest and latest sit at the chain tip; the window then fills forward as new blocks are committed. eth_getProof calls for any block within [earliest, latest] are served from the versioned store, while requests for blocks older than earliest will fail or fall back to the default reth implementation.

You can also check the op-reth startup logs for messages confirming the proofs-history ExEx is wired up:
INFO reth::cli: Using on-disk storage for proofs history
INFO reth::cli: Installing proofs-history RPC overrides (eth_getProof, debug_executePayload)
INFO reth::cli eth_replaced=true debug_replaced=true: Proofs-history RPC overrides installed
Ensure op-node/kona-node and op-reth are connected and syncing. op-reth should start importing blocks and the proofs storage should begin populating.
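The window bounds can also be consumed programmatically, for example by an alerting script. A sketch parsing a sample debug_proofsSyncStatus response (the block numbers are placeholders; with a live node, pipe in the output of the curl command above instead):

```shell
# Sample response; replace with live output from the curl command above.
RESP='{"jsonrpc":"2.0","id":1,"result":{"earliest":12000000,"latest":12010000}}'

# Extract the window bounds with a small Python one-liner.
EARLIEST=$(echo "$RESP" | python3 -c 'import json,sys; print(json.load(sys.stdin)["result"]["earliest"])')
LATEST=$(echo "$RESP" | python3 -c 'import json,sys; print(json.load(sys.stdin)["result"]["latest"])')

echo "proofs window spans $((LATEST - EARLIEST)) blocks"
```

A window that stops growing while the chain tip advances is an early sign the ExEx has stalled.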

Monitoring

When op-reth is run with the --metrics=<addr>:<port> flag, the proofs-history ExEx exposes Prometheus metrics covering proofs-DB sync state (optimism_trie_block_*), the background pruner (optimism_trie_pruner_*), and eth_getProof RPC traffic (optimism_rpc_eth_api_ext_*). See the historical proof configuration reference for the full list.
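A quick way to confirm the ExEx metrics are being exported is to filter the Prometheus scrape for the documented prefixes. A sketch using a canned sample; the two metric names in the heredoc are illustrative (only the prefixes come from the list above), and with a live node you would replace the heredoc with `curl -s http://localhost:9001/metrics`:

```shell
# Count proofs-history series in a metrics scrape.
# Sample data below; with a live node, use:  curl -s http://localhost:9001/metrics
COUNT=$(grep -c '^optimism_trie' <<'EOF'
optimism_trie_block_latest 12345678
optimism_trie_pruner_runs_total 42
unrelated_metric 1
EOF
)
echo "proofs-history series found: $COUNT"
```

If the count is zero on a live node, the ExEx is likely not installed or the --metrics flag was omitted.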

Operational Commands

Manual prune

Pruning runs automatically in the background on the configured --proofs-history.prune-interval (default 15s) and removes data outside the retention window, so you should not need to invoke op-reth proofs prune under normal operation.

A manual prune is required in one situation: at startup, if the proofs database contains more than 1000 blocks of history beyond the configured --proofs-history.window, the node refuses to start rather than stalling on a large prune operation. This typically happens after the node has been offline long enough that the window has shifted significantly, or after reducing --proofs-history.window below its previous value. In that case, op-reth exits with an error indicating the number of blocks to prune. Run the prune command once to bring the database back within the safety threshold, then restart the node:
op-reth proofs prune \
  --datadir /path/to/reth-datadir \
  --proofs-history.storage-path /path/to/proofs-db \
  --proofs-history.window 1296000
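The window value in the command above is expressed in blocks. At the OP Stack's 2-second block time, a quick conversion shows that 1296000 blocks corresponds to a 30-day retention window:

```shell
# Convert a retention window in blocks to days at a 2-second block time.
WINDOW_BLOCKS=1296000   # value passed to --proofs-history.window
BLOCK_TIME_S=2          # OP Stack mainnet block time
echo "retention: $(( WINDOW_BLOCKS * BLOCK_TIME_S / 86400 )) days"
```

The same arithmetic, run in reverse, lets you pick a window value for a different target retention period.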

Unwind

Recover from corruption by reverting the proofs database to a specific block:
op-reth proofs unwind \
  --datadir /path/to/reth-datadir \
  --proofs-history.storage-path /path/to/proofs-db \
  --target <BLOCK_NUMBER>
You can only unwind to a block after the earliest block number in the database; attempting to unwind to an earlier block will fail.

Performance

Benchmarked on Base Sepolia (~700k block window, WETH contract):
Metric           Value
Avg latency      ~15 ms per eth_getProof
Throughput       ~5,000 req/s (10 concurrent workers)
Sync overhead    Zero (ExEx processes asynchronously)
Memory           Bounded by window size (no OOM risk)

Next steps