Batcher configuration
This page lists all configuration options for the op-batcher. The op-batcher posts
L2 sequencer data to L1 to make it available for verifiers. The following
options are from the `--help` output in v1.10.0.
Batcher Policy
The batcher policy defines high-level constraints and responsibilities regarding how L2 data is posted to L1. Below are the standard guidelines for configuring the batcher within the OP Stack.
Parameter | Description | Administrator | Requirement | Notes |
---|---|---|---|---|
Data Availability Type | Specifies whether the batcher uses blobs, calldata, or auto to post transaction data to L1. | Batch submitter address | Ethereum (Blobs or Calldata) | - Alternative data availability (Alt-DA) is not yet supported in the standard configuration. - The sequencer can switch at will between blob transactions and calldata, with no restrictions, because both are fully secured by L1. |
Batch Submission Frequency | Determines how frequently the batcher submits aggregated transaction data to L1 (via the batcher transaction). | Batch submitter address | Must target 1,800 L1 blocks (6 hours on Ethereum, assuming 12s L1 block time) or lower | - Batches must be posted before the sequencing window closes (commonly 12 hours by default). - Leave a buffer for L1 network congestion and data size to ensure that each batch is fully committed in a timely manner. |
Output Frequency | Defines how frequently L2 output roots are submitted to L1 (via the output oracle). | L1 Proxy Admin | 43,200 L2 blocks (24 hours at 2s block times) or lower | - Once fault proofs are implemented, this value may become deprecated. - It cannot be set to 0 (there must be some cadence for outputs). |
Additional Guidance
- Data Availability Types:
  - Calldata is generally simpler but can be more expensive on mainnet Ethereum, depending on gas prices.
  - Blobs are typically lower cost when your chain has enough transaction volume to fill large chunks of data.
  - The op-batcher can toggle between these approaches by setting the `--data-availability-type=<blobs|calldata|auto>` flag or the `OP_BATCHER_DATA_AVAILABILITY_TYPE` env variable. Setting this flag to `auto` allows the batcher to automatically switch between `calldata` and `blobs` based on the current L1 gas price.
- Batch Submission Frequency (`OP_BATCHER_MAX_CHANNEL_DURATION` and related flags):
  - Standard OP Chains frequently target a maximum channel duration between 1 and 6 hours.
  - Your chain should never exceed your L2's sequencing window (commonly 12 hours).
  - If targeting a longer submission window (e.g., 5 or 6 hours), be aware that the safe head can stall for up to that duration.
- Output Frequency:
  - Used to post output roots to L1 for verification.
  - The recommended maximum is 24 hours (43,200 blocks at 2s each), though many chains choose smaller intervals.
  - Will eventually be replaced or significantly changed by the introduction of fault proofs.
Include these high-level "policy" requirements when you set up or modify your op-batcher configuration. See the Batcher Configuration reference, which explains each CLI flag and environment variable in depth.
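For illustration, the sketch below shows only the policy-relevant flags discussed above on an op-batcher invocation; the values are illustrative, and every other required flag (RPC endpoints, signer and key configuration) is omitted:

```sh
# Sketch: policy-relevant flags only; not a complete invocation.
op-batcher \
  --data-availability-type=auto \
  --max-channel-duration=1500
```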
Recommendations
Set your OP_BATCHER_MAX_CHANNEL_DURATION
The default value inside op-batcher, if not specified, is still 0, which means channel duration tracking is disabled. For very low throughput chains, this would mean filling channels until close to the sequencing window and posting the channel to L1 `SUB_SAFETY_MARGIN` L1 blocks before the sequencing window expires.
To minimize costs, we recommend setting your `OP_BATCHER_MAX_CHANNEL_DURATION` to target 5 hours, with a value of 1500 L1 blocks. When non-zero, this parameter is the maximum time (in L1 blocks, which are 12 seconds each) between batch submissions to L1. For example, if you set it to 5, your batcher will send a batch to L1 every 5*12 = 60 seconds. When using blobs, because ~130KB blobs need to be purchased in full, if your chain doesn't generate at least ~130KB of data in those 60 seconds, you'll be posting partially full blobs and wasting storage.
- We do not recommend targeting anything higher than 5 hours, as batches have to be submitted within the sequencing window, which defaults to 12 hours for OP Chains; otherwise your chain may experience a chain reorg of up to 12 hours. 5 hours is the longest target we recommend that still sits snugly within that 12 hour window without affecting stability.
- If your chain fills up full blobs of data before the `OP_BATCHER_MAX_CHANNEL_DURATION` elapses, a batch will be submitted anyway (e.g., even if the OP Mainnet batcher set an `OP_BATCHER_MAX_CHANNEL_DURATION` of 5 hours, it would still submit batches every few minutes).
While setting an `OP_BATCHER_MAX_CHANNEL_DURATION` of 1500 results in the cheapest fees, it also means that your safe head can stall for up to 5 hours.
- This will negatively impact apps on your chain that rely on the safe head for operation. While many apps can likely operate by following the unsafe head, Centralized Exchanges and third-party bridges often wait until transactions are marked safe before processing deposits and withdrawals.
- Thus a larger gap between posting batches can result in significant delays in the operation of certain types of high-security applications.
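A minimal sketch of this recommendation as an environment variable (1500 L1 blocks, the ~5 hour target described above):

```sh
# Target ~5h channels: 1500 L1 blocks at 12s each.
# Full batches are still submitted earlier if enough data accumulates.
export OP_BATCHER_MAX_CHANNEL_DURATION=1500
```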
Configure your batcher to use multiple blobs
When there's blob congestion, running with high blob counts can backfire: you will have a harder time getting blobs included, and each fee bump means a doubling of the priority fee.
The op-batcher can send multiple blobs per blob transaction. This is accomplished by the use of multi-frame channels; see the specs for more technical details on channels and frames.
A minimal batcher configuration (with env vars) to enable 6-blob batcher transactions is:
- OP_BATCHER_BATCH_TYPE=1 # span batches, optional
- OP_BATCHER_DATA_AVAILABILITY_TYPE=blobs
- OP_BATCHER_TARGET_NUM_FRAMES=6 # 6 blobs per tx
- OP_BATCHER_TXMGR_MIN_BASEFEE=2.0 # 2 gwei, might need to tweak, depending on gas market
- OP_BATCHER_TXMGR_MIN_TIP_CAP=2.0 # 2 gwei, might need to tweak, depending on gas market
- OP_BATCHER_RESUBMISSION_TIMEOUT=240s # wait 4 min before bumping fees
This enables blob transactions and sets the target number of frames to 6, which translates to 6 blobs per transaction. The minimum tip cap and base fee are also lifted to 2 gwei because it is uncertain how easy it will be to get 6-blob transactions included, and slightly higher priority fees should help. The resubmission timeout is increased to a few minutes to give more time for inclusion before bumping the fees, because current transaction pool implementations require a doubling of fees for blob transaction replacements.
Multi-blob transactions are particularly useful for medium to high-throughput chains, where enough transaction volume exists to fill up 6 blobs in a reasonable amount of time. You can use this calculator for your chain to determine what number of blobs is right for you and what gas scalar configuration to use. Please also refer to the guide on Using Blobs for chain operators.
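As a rough sizing sketch (illustrative numbers only, reusing the ~130KB-per-blob figure quoted above), you can estimate how much compressed batch data is needed to fill every 6-blob transaction:

```sh
# Rough sizing sketch: compressed batch data needed per 6-blob tx,
# assuming ~130,000 usable bytes per blob (figure quoted above).
echo $(( 6 * 130000 ))   # => 780000 bytes, i.e. ~780KB per transaction
# If the chain produces less than ~780KB of compressed batch data per
# channel duration, the final blobs of each tx will be partially empty.
```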
Set your --batch-type=1 to use span batches
Span batches, introduced in the Delta network upgrade, reduce the overhead of OP Stack chains. This is especially beneficial for sparse, low-throughput OP Stack chains.
The overhead is reduced by representing a span of consecutive L2 blocks in a more efficient manner, while preserving the same consistency checks as regular batch data.
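For example, a minimal sketch of enabling span batches, as a flag or the equivalent environment variable (all other required flags omitted):

```sh
# Enable span batches (batch type 1); requires the Delta upgrade.
op-batcher --batch-type=1        # other required flags omitted
# or, equivalently:
export OP_BATCHER_BATCH_TYPE=1
```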
Batcher Sequencer Throttling
This feature is a batcher-driven sequencer-throttling control loop. It prevents sudden spikes in L1 DA usage from consuming too much available gas and causing a backlog of batcher transactions. The batcher can throttle the sequencer's data throughput instantly when it sees too much batcher data built up.
There are two throttling knobs:
- Transaction L1 data throttling, which skips individual transactions whose estimated compressed L1 DA usage goes over a certain threshold, and
- Block L1 data throttling, which caps a block's estimated total L1 DA usage by excluding transactions during block building that would push the block's L1 DA usage past a certain threshold.
Feature requirements
- This new feature is enabled by default and requires running op-geth version `v1.101411.1` or later. It can be disabled by setting `--throttle-interval` to 0. The sequencer's op-geth node has to be updated first, before updating the batcher, so that the new required RPC is available at the time of the batcher restart.
- It is required to upgrade to `op-conductor/v0.2.0` if you are using the conductor's leader-aware RPC proxy feature. This conductor release includes support for proxying the `miner_setMaxDASize` op-geth RPC.
Configuration
Note that this feature requires the batcher to correctly follow the sequencer at all times; otherwise it would set throttling parameters on a non-sequencer EL client. That means active sequencer follow mode has to be enabled correctly by listing all the possible sequencers in the L2 rollup and EL endpoint flags.
The batcher can be configured with the following new flags and default parameters:
- Interval at which throttling operations happen (besides when loading an L2 block in the batcher) via `--throttle-interval` (env var `OP_BATCHER_THROTTLE_INTERVAL`): 2s
  - This can be set to zero to completely disable this feature. Since it's set to 2s by default, the feature is enabled by default.
- Backlog of pending block bytes beyond which the batcher will enable throttling on the sequencer via `--throttle-threshold` (env var `OP_BATCHER_THROTTLE_THRESHOLD`): 1_000_000 (a batcher backlog of 1MB of data to batch)
- Individual tx size throttling via `--throttle-tx-size` (env var `OP_BATCHER_THROTTLE_TX_SIZE`): 300 (estimated compressed bytes)
- Block size throttling via `--throttle-block-size` (env var `OP_BATCHER_THROTTLE_BLOCK_SIZE`): 21_000 (estimated total compressed bytes, i.e., at least 70 transactions per block of up to 300 compressed bytes each)
- Block size throttling that's always active via `--throttle-always-block-size` (env var `OP_BATCHER_THROTTLE_ALWAYS_BLOCK_SIZE`): 130_000
  - This block size limit is enforced on the sequencer at all times, even if there isn't any backlog in the batcher. Normal network usage shouldn't be impacted by this. It prevents a too-fast build-up of data to batch.
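Spelled out as environment variables, the defaults above look like the following sketch (plain integers replace the 1_000_000-style digit separators):

```sh
# Sketch: the default throttling parameters made explicit.
export OP_BATCHER_THROTTLE_INTERVAL=2s               # 0 disables the feature
export OP_BATCHER_THROTTLE_THRESHOLD=1000000         # 1MB batcher backlog
export OP_BATCHER_THROTTLE_TX_SIZE=300               # est. compressed bytes per tx
export OP_BATCHER_THROTTLE_BLOCK_SIZE=21000          # est. compressed bytes per block
export OP_BATCHER_THROTTLE_ALWAYS_BLOCK_SIZE=130000  # always-on block limit
```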
If the batcher at startup has throttling enabled and the sequencer's op-geth instance to which it's talking doesn't have the miner_setMaxDASize RPC enabled, it will fail with an error message like:
lvl=warn msg="Served miner_setMaxDASize" reqid=1 duration=11.22µs err="the method miner_setMaxDASize does not exist/is not available"
In this case, make sure the miner API namespace is enabled for the correct transport protocol (HTTP or WS), see next paragraph.
The new RPC miner_setMaxDASize is available in op-geth since v1.101411.1. It has to be enabled by adding the miner namespace to the correct API flags, like:
GETH_HTTP_API: web3,debug,eth,txpool,net,miner
GETH_WS_API: debug,eth,txpool,net,miner
It is recommended to add it to both HTTP and WS.
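To verify the namespace is actually reachable, a hand-rolled JSON-RPC call can help. The sketch below is assumption-laden: it assumes the sequencer's op-geth HTTP endpoint is at localhost:8545 and that miner_setMaxDASize takes two hex-encoded size arguments (max tx size, then max block size, where 0 means no limit); verify both against your deployment.

```sh
# Sketch: probe whether miner_setMaxDASize is exposed over HTTP.
# Endpoint URL and parameter encoding are assumptions; adjust as needed.
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","id":1,"method":"miner_setMaxDASize","params":["0x0","0x0"]}' \
  http://localhost:8545
# A "method does not exist/is not available" error means the miner
# namespace is not enabled for HTTP.
```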
Global options
active-sequencer-check-duration
The duration between checks to determine the active sequencer endpoint. The
default value is 2m0s.
--active-sequencer-check-duration=<value>
altda.da-server
HTTP address of a DA Server.
--altda.da-server=<value>
altda.da-service
Use DA service type where commitments are generated by Alt-DA server. The default
value is false.
--altda.da-service=<value>
altda.enabled
Enable Alt-DA mode. Alt-DA Mode is a Beta feature of the MIT licensed OP Stack.
While it has received initial review from core contributors, it is still
undergoing testing, and may have bugs or other issues.
The default value is false.
--altda.enabled=<value>
altda.get-timeout
Timeout for get requests. 0 means no timeout.
--altda.get-timeout=<value>
altda.max-concurrent-da-requests
Maximum number of concurrent requests to the DA server.
--altda.max-concurrent-da-requests=<value>
altda.put-timeout
Timeout for put requests. 0 means no timeout.
--altda.put-timeout=<value>
altda.verify-on-read
Verify input data matches the commitments from the DA storage service. The default value is true.
--altda.verify-on-read=<value>
approx-compr-ratio
The approximate compression ratio (<= 1.0). Only relevant for the ratio compressor. The default value is 0.6.
--approx-compr-ratio=<value>
batch-type
The batch type. 0 for SingularBatch and 1 for SpanBatch. The default value is 0 (SingularBatch).
--batch-type=<value>
check-recent-txs-depth
Indicates how many blocks back the batcher should look during startup for a
recent batch tx on L1. This can speed up waiting for node sync. It should be
set to the verifier confirmation depth of the sequencer (e.g. 4). The default
value is 0.
--check-recent-txs-depth=<value>
compression-algo
The compression algorithm to use. Valid options: zlib, brotli, brotli-9,
brotli-10, brotli-11. The default value is zlib.
--compression-algo=<value>
compressor
The type of compressor. Valid options: none, ratio, shadow. The default value
is shadow.
--compressor=<value>
data-availability-type
Setting this flag to auto will allow the batcher to automatically switch between calldata and blobs based on the current L1 gas price.
The data availability type to use for submitting batches to the L1. Valid options: calldata, blobs, and auto. The default value is calldata.
--data-availability-type=<value>
fee-limit-multiplier
The multiplier applied to fee suggestions to put a hard limit on fee increases.
The default value is 5.
--fee-limit-multiplier=<value>
hd-path
The HD path used to derive the sequencer wallet from the mnemonic. The mnemonic flag must also be set.
--hd-path=<value>
l1-eth-rpc
HTTP provider URL for L1.
--l1-eth-rpc=<value>
l2-eth-rpc
HTTP provider URL for L2 execution engine. A comma-separated list enables the active L2 endpoint provider. Such a list needs to match the number of rollup-rpcs provided.
--l2-eth-rpc=<value>
log.color
Color the log output if in terminal mode. The default value is false.
--log.color=<value>
log.format
Format the log output. Supported formats: 'text', 'terminal', 'logfmt', 'json',
'json-pretty'. The default value is text.
--log.format=<value>
log.level
The lowest log level that will be output. The default value is INFO.
--log.level=<value>
log.pid
Show PID in the log.
--log.pid=<value>
max-blocks-per-span-batch
Maximum number of blocks to add to a span batch. Default is 0 (no maximum).
--max-blocks-per-span-batch=<value>
max-channel-duration
The maximum duration of L1-blocks to keep a channel open. 0 to disable. The
default value is 0.
--max-channel-duration=<value>
max-l1-tx-size-bytes
The maximum size of a batch tx submitted to L1. Ignored for blobs, where max
blob size will be used. The default value is 120000.
--max-l1-tx-size-bytes=<value>
max-pending-tx
The maximum number of pending transactions. 0 for no limit. The default value
is 1.
--max-pending-tx=<value>
metrics.addr
Metrics listening address. The default value is 0.0.0.0.
--metrics.addr=<value>
metrics.enabled
Enable the metrics server. The default value is false.
--metrics.enabled=<value>
metrics.port
Metrics listening port. The default value is 7300.
--metrics.port=<value>
mnemonic
The mnemonic used to derive the wallets for the service.
--mnemonic=<value>
network-timeout
Timeout for all network operations. The default value is 10s.
--network-timeout=<value>
num-confirmations
Number of confirmations to wait after sending a transaction. The default value is 10.
--num-confirmations=<value>
poll-interval
How frequently to poll L2 for new blocks. The default value is 6s.
--poll-interval=<value>
pprof.addr
pprof listening address. The default value is 0.0.0.0.
--pprof.addr=<value>
pprof.enabled
Enable the pprof server. The default value is false.
--pprof.enabled=<value>
pprof.path
pprof file path. If it is a directory, the path is {dir}/{profileType}.prof.
--pprof.path=<value>
pprof.port
pprof listening port. The default value is 6060.
--pprof.port=<value>
pprof.type
pprof profile type. One of cpu, heap, goroutine, threadcreate, block, mutex, allocs.
--pprof.type=<value>
private-key
The private key to use with the service. Must not be used with mnemonic.
--private-key=<value>
resubmission-timeout
Duration to wait before resubmitting a transaction to L1. The default value is 48s.
--resubmission-timeout=<value>
rollup-rpc
HTTP provider URL for Rollup node. A comma-separated list enables the active L2 endpoint provider. Such a list needs to match the number of l2-eth-rpcs provided.
--rollup-rpc=<value>
rpc.addr
rpc listening address. The default value is 0.0.0.0.
--rpc.addr=<value>
rpc.enable-admin
Enable the admin API. The default value is false.
--rpc.enable-admin=<value>
rpc.port
rpc listening port. The default value is 8545.
--rpc.port=<value>
safe-abort-nonce-too-low-count
Number of ErrNonceTooLow observations required to give up on a tx at a
particular nonce without receiving confirmation. The default value is 3.
--safe-abort-nonce-too-low-count=<value>
sequencer-hd-path
DEPRECATED: The HD path used to derive the sequencer wallet from the mnemonic. The mnemonic flag must also be set.
--sequencer-hd-path=<value>
signer.address
Address the signer is signing transactions for.
--signer.address=<value>
signer.endpoint
Signer endpoint the client will connect to.
--signer.endpoint=<value>
signer.header
Headers to pass to the remote signer. Format: key=value.
Value can contain any character allowed in an HTTP header.
When using env vars, split multiple headers with commas.
When using flags, provide one key-value pair per flag.
--signer.header=<key=value>
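For illustration, a sketch of both styles; the header names and values are hypothetical, and the OP_BATCHER_SIGNER_HEADER env var name is inferred from the usual flag-to-env-var mapping rather than taken from the source:

```sh
# Hypothetical headers; one key=value pair per flag:
op-batcher --signer.header="X-Api-Key=abc123" --signer.header="X-Org=example"
# or comma-separated in the env var (name inferred, not verified):
export OP_BATCHER_SIGNER_HEADER="X-Api-Key=abc123,X-Org=example"
```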
signer.tls.ca
tls ca cert path. The default value is tls/ca.crt.
--signer.tls.ca=<value>
signer.tls.cert
tls cert path. The default value is tls/tls.crt.
--signer.tls.cert=<value>
signer.tls.key
tls key. The default value is tls/tls.key.
--signer.tls.key=<value>
stopped
Initialize the batcher in a stopped state. The batcher can be started using the
admin_startBatcher RPC. The default value is false.
--stopped=<value>
sub-safety-margin
The batcher tx submission safety margin (in #L1-blocks) to subtract from a
channel's timeout and sequencing window, to guarantee safe inclusion of a
channel on L1. The default value is 10.
--sub-safety-margin=<value>
target-num-frames
The target number of frames to create per channel. Controls number of blobs per
blob tx, if using Blob DA. The default value is 1.
--target-num-frames=<value>
throttle-always-block-size
The total DA limit to start imposing on block building at all times.
--throttle-always-block-size=<value>
throttle-block-size
The total DA limit to start imposing on block building when we are over the throttle threshold.
--throttle-block-size=<value>
throttle-interval
Interval between potential DA throttling actions. Zero disables throttling.
--throttle-interval=<value>
throttle-threshold
Threshold on pending-blocks-bytes-current beyond which the batcher instructs the block builder to start throttling transactions with larger DA demands.
--throttle-threshold=<value>
throttle-tx-size
The DA size of transactions at which throttling begins when we are over the throttle threshold.
--throttle-tx-size=<value>
txmgr.fee-limit-threshold
The minimum threshold (in GWei) at which fee bumping starts to be capped.
Allows arbitrary fee bumps below this threshold. The default value is 100.
--txmgr.fee-limit-threshold=<value>
txmgr.min-basefee
Enforces a minimum base fee (in GWei) to assume when determining tx fees. The default value is 1.
--txmgr.min-basefee=<value>
txmgr.min-tip-cap
Enforces a minimum tip cap (in GWei) to use when determining tx fees. The default value is 1.
--txmgr.min-tip-cap=<value>
txmgr.not-in-mempool-timeout
Timeout for aborting a tx send if the tx does not make it to the mempool. The
default value is 2m0s.
--txmgr.not-in-mempool-timeout=<value>
txmgr.receipt-query-interval
Frequency to poll for receipts. The default value is 12s.
--txmgr.receipt-query-interval=<value>
txmgr.send-timeout
Timeout for sending transactions. If 0 it is disabled. The default value is 0s.
--txmgr.send-timeout=<value>
wait-node-sync
Indicates if, during startup, the batcher should wait for a recent batcher tx
on L1 to finalize (via more block confirmations). This should help avoid
duplicate batcher txs. The default value is false.
--wait-node-sync=<value>
Miscellaneous
help
Show help. The default value is false.
--help=<value>
version
Print the version. The default value is false.
--version=<value>