OP Stack Network Design

This document describes an example configuration for an OP Stack network deployment, focusing on the network architecture and node configuration required for production usage.

Sequencers

Sequencers receive transactions from the Tx Ingress Nodes via geth p2p gossip and work with the batcher and proposer to create new blocks.

Node Configuration

  • Sequencer op-geth can be either full or archive. Full nodes offer better performance but can’t recover from deep L1 reorgs, so run at least one archive sequencer as a backup.
  • Sequencer op-node should have p2p discovery disabled and only be statically peered with other internal nodes (or use peer-management-service to define the peering network).
  • The op-conductor RPC can act as a leader-aware RPC proxy for op-batcher (it proxies the necessary op-geth / op-node RPC methods if the node is the leader); see the op-batcher example at the end of this section.
  • Sequencers should have transaction journalling disabled.

op-node Configuration

OP_NODE_P2P_NO_DISCOVERY: "true"
OP_NODE_P2P_PEER_BANNING: "false"
OP_NODE_P2P_STATIC: "<static peer list>"

op-geth Configuration

GETH_ROLLUP_DISABLETXPOOLGOSSIP: "false"
GETH_TXPOOL_JOURNAL: ""
GETH_TXPOOL_JOURNALREMOTES: "false"
GETH_TXPOOL_LIFETIME: "1h"
GETH_TXPOOL_NOLOCALS: "true"
GETH_NETRESTRICT: "10.0.0.0/8" # ex: restrict p2p to internal ips
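
To illustrate the op-conductor proxying described above, op-batcher can be pointed at the conductor's leader-aware RPC endpoint instead of at a single sequencer. This is a minimal sketch: the hostname and port are placeholders, and it assumes the conductor exposes one endpoint that proxies both the op-geth and op-node methods the batcher needs.

# Hypothetical internal hostname for the op-conductor leader-aware proxy
OP_BATCHER_L2_ETH_RPC: "http://op-conductor.internal:8547"
OP_BATCHER_ROLLUP_RPC: "http://op-conductor.internal:8547"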

Tx Ingress Nodes

These nodes receive eth_sendRawTransaction calls from the public and then gossip the transactions to the internal geth network. This allows the Sequencer to focus on block creation while these nodes handle transaction ingress.
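
For illustration, submitting a transaction to these nodes is a standard JSON-RPC call. The URL below is a placeholder for whatever public hostname fronts the Tx Ingress Nodes, and the raw transaction payload is truncated.

curl -X POST https://rpc.example.com \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_sendRawTransaction","params":["0x02f8..."],"id":1}'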

Node Configuration

  • These can be either full or archive nodes.
  • They participate in the internal tx pool p2p network to forward transactions to sequencers.

Configuration

GETH_ROLLUP_DISABLETXPOOLGOSSIP: "false"
GETH_TXPOOL_JOURNALREMOTES: "false"
GETH_TXPOOL_LIFETIME: "1h"
GETH_TXPOOL_NOLOCALS: "true"
GETH_NETRESTRICT: "10.0.0.0/8" # ex: restrict p2p to internal ips

Archive RPC Nodes

We recommend setting up a few archive nodes for internal RPC usage; they are primarily used by the challenger, proposer, and security monitoring tools such as monitorism.

Node Configuration

  • Archive nodes are essential for accessing historical state data.
  • You can also use these nodes for taking disk snapshots for disaster recovery.

Configuration

GETH_GCMODE: "archive"
GETH_DB_ENGINE: "pebble"
GETH_STATE_SCHEME: "hash"
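
To check that a node is actually serving archival state, query state at a historical block; a non-archive node will typically return a "missing trie node" style error for blocks older than the state it retains. The URL below is a placeholder, and the address used here is the SequencerFeeVault predeploy.

curl -X POST http://archive-rpc.internal:8545 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x4200000000000000000000000000000000000011","0x1"],"id":1}'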

Full Snapsync Nodes

These nodes serve as snap sync peers for public nodes, using the snapsync bootnodes for peer discovery.

Configuration

GETH_GCMODE: "full"
GETH_DB_ENGINE: "pebble"
GETH_STATE_SCHEME: "path"
GETH_SYNCMODE: "snap"
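
Assuming the same env-to-flag mapping used in the blocks above, these nodes would point at the snapsync bootnodes for discovery; the enode URLs below are placeholders.

GETH_BOOTNODES: "enode://<pubkey>@snap-bootnode-0.example.com:30301,enode://<pubkey>@snap-bootnode-1.example.com:30301"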

Snapsync Bootnodes

These bootnodes facilitate peer discovery for public nodes using snapsync.

Node Configuration

  • The bootnode can be either a geth instance or the geth bootnode tool (see the sketch after this list).
  • If you use geth for your snapsync bootnodes, you may want the Full Snapsync Nodes to double as your bootnodes.
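
If you use the geth bootnode tool, a minimal sketch looks like the following; the key path and port are placeholders.

# Generate a persistent node key once, then run the bootnode on the discovery port.
bootnode -genkey /etc/bootnode/node.key
bootnode -nodekey /etc/bootnode/node.key -addr :30301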

P2P Bootnodes

These are the op-node p2p network bootnodes. We recommend using the geth bootnode tool with discovery v5 enabled.
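
Public op-node instances can then point at these bootnodes. The records below are placeholders; OP_NODE_P2P_BOOTNODES corresponds to op-node's --p2p.bootnodes flag.

OP_NODE_P2P_BOOTNODES: "enr:<bootnode-0-record>,enr:<bootnode-1-record>"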

Public RPC

Public RPC design is not shown in the diagram above but can be implemented very similarly to the Tx Ingress Nodes, with the following differences:

Configuration Differences

  • Public RPC should not participate in the internal tx pool p2p network.
    • While it is possible to serve Public RPC from the same nodes that handle Tx Ingress and participate in tx pool gossip, past geth bugs have leaked tx pool details through read RPCs, so this is a risk to weigh.
  • Public RPC proxyd should run in consensus_aware routing mode and whitelist the RPC methods you want to serve from op-geth (see the sketch after this list).
  • Public RPC nodes should likely be archive nodes.
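
A minimal proxyd sketch for this setup might look like the following. The hostnames are placeholders, only a few whitelisted methods are shown, and the option names should be checked against the proxyd documentation for the version you run.

[server]
rpc_host = "0.0.0.0"
rpc_port = 8080

[backends]
[backends.public-rpc-0]
rpc_url = "http://public-rpc-0.internal:8545"

[backends.public-rpc-1]
rpc_url = "http://public-rpc-1.internal:8545"

[backend_groups]
[backend_groups.main]
backends = ["public-rpc-0", "public-rpc-1"]
consensus_aware = true

[rpc_method_mappings]
eth_chainId = "main"
eth_getBlockByNumber = "main"
eth_call = "main"
eth_getTransactionReceipt = "main"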

About proxyd

Proxyd is an RPC request router and proxy that provides the following capabilities:
  1. Whitelists RPC methods.
  2. Routes RPC methods to groups of backend services.
  3. Automatically retries failed backend requests.
  4. Tracks backend consensus (latest, safe, finalized blocks), peer count and sync state.
  5. Re-writes requests and responses to enforce consensus.
  6. Load balances requests across backend services.
  7. Caches immutable responses from backends.
  8. Provides metrics to measure request latency, error rates, and the like.

Next steps

  • Learn more about using proxyd for your network.