Integrity
RPC nodes may return stale, empty, or inconsistent data. eRPC's integrity module ensures you always get the most recent and valid blockchain data through:
- Block tracking — Monitors the highest known block across all upstreams and ensures `eth_blockNumber` and `eth_getBlockByNumber` (latest/finalized) return the freshest data.
- Range enforcement — For `eth_getLogs`, ensures the requested block range is available on the chosen upstream.
- Empty/missing data handling — Detects empty results, missing-data errors, and block unavailability; retries automatically. See Empty or Missing Data Handling.
- Response validation — Validates response structure and consistency (bloom filters, receipts, logs). See Validations.
Config
```yaml
projects:
  - id: main
    networks:
      - architecture: evm
        evm:
          chainId: 1
          # Block tracking: ensure highest known block is returned
          integrity:
            enforceHighestBlock: true       # Replace stale block numbers with highest known
            enforceGetLogsBlockRange: true  # Skip upstreams that don't have requested range
    # State poller settings (how block tracking works)
    upstreamDefaults:
      evm:
        statePollerInterval: 30s   # How often to poll latest/finalized block
        statePollerDebounce: 5s    # Min interval between polls (ideally ≤ block time)
```
Block Tracking
eRPC runs a background state poller for each upstream to track the latest/finalized blocks. When `enforceHighestBlock: true`:
- `eth_blockNumber`: If the response is older than the highest known block, the highest known block is returned instead.
- `eth_getBlockByNumber` (latest/finalized): If the response is stale, eRPC retries on other upstreams.

The poller also updates proactively when `eth_blockNumber` or `eth_getBlockByNumber(latest)` returns a higher block than currently tracked.
Metrics: `erpc_upstream_stale_latest_block_total`, `erpc_upstream_stale_finalized_block_total`
Range Enforcement for eth_getLogs
When `enforceGetLogsBlockRange: true`, eRPC checks that the upstream has the requested block range before sending the request:
- If `toBlock` > the upstream's latest block → skip to the next upstream (after forcing a fresh poll if stale)
- If `fromBlock` < the upstream's available range (based on the `maxAvailableRecentBlocks` config) → skip to the next upstream

Large range handling: eRPC can auto-split large ranges based on `getLogsAutoSplittingRangeThreshold` or when an upstream returns "range too large" errors.
Metrics: `erpc_upstream_evm_get_logs_stale_upper_bound_total`, `erpc_upstream_evm_get_logs_stale_lower_bound_total`, `erpc_upstream_evm_get_logs_forced_splits_total`
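Both thresholds mentioned above are set per upstream. A minimal sketch, assuming the fields live under the upstream's `evm` block (the upstream id and endpoint are placeholders; verify field placement against your eRPC version):

```yaml
projects:
  - id: main
    upstreams:
      - id: pruned-node                      # hypothetical upstream
        endpoint: https://rpc.example.com    # placeholder endpoint
        evm:
          # This node only retains roughly the most recent 128 blocks, so
          # eth_getLogs requests with an older fromBlock skip to another upstream.
          maxAvailableRecentBlocks: 128
          # Auto-split eth_getLogs requests spanning more blocks than this.
          getLogsAutoSplittingRangeThreshold: 5000
```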
Empty or Missing Data Handling
RPC nodes often return empty or missing data — especially near the tip of the chain where not all nodes have indexed the latest block yet. eRPC automatically detects these situations and retries on other upstreams.
There are three situations eRPC handles:
| Situation | Example | What eRPC does |
|---|---|---|
| Block not ready | You request block N, but an upstream is still on block N-1 | Skips that upstream before even sending the request; retries after `blockUnavailableDelay` |
| Empty result | `eth_getBlockByNumber` returns `null` | Tries other upstreams; optionally retries after `emptyResultDelay` |
| Missing data error | Upstream returns "missing trie node" or "header not found" | Tries other upstreams; optionally retries with a delay |
How it works
- Before sending — eRPC checks whether the upstream has the requested block. If not, it skips to the next upstream without wasting a network call.
- After receiving — If the upstream returns an empty result for a point-lookup method (like `eth_getBlockByNumber`), eRPC treats it as missing data and tries the next upstream.
- After all upstreams tried — If every upstream failed, eRPC can retry the whole round with a short delay, giving nodes time to catch up.
The pre-send block availability check is the most important mechanism. It prevents tip-of-chain errors on fast chains (Polygon, Arbitrum, Base) where upstreams may be 1-2 blocks apart.
Config reference
| Field | Where | Default | Description |
|---|---|---|---|
| `retryEmpty` | `directiveDefaults` | `true` | Enable retrying when all upstreams return empty or missing data |
| `emptyResultAccept` | `failsafe[].retry` | `["eth_getLogs", "eth_call"]` | Methods where empty is a valid response — never retry these on empty |
| `emptyResultConfidence` | `failsafe[].retry` | `finalizedBlock` | When to trust an empty result: `blockHead` (block ≤ latest) or `finalizedBlock` (block ≤ finalized) |
| `emptyResultMaxAttempts` | `failsafe[].retry` | same as `maxAttempts` | Max retry attempts specifically for empty results |
| `emptyResultDelay` | `failsafe[].retry` | (none) | Fixed delay between empty-result retries (overrides normal delay) |
| `blockUnavailableDelay` | `failsafe[].retry` | (none) | Delay before retrying when upstreams don't have the block yet |
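The "Where" column maps to two config locations: `directiveDefaults` at the network level and `retry` inside a `failsafe` policy. A minimal sketch combining both (values are illustrative, not recommendations):

```yaml
projects:
  - id: main
    networks:
      - architecture: evm
        evm:
          chainId: 1
        directiveDefaults:
          retryEmpty: true    # retry when upstreams return empty/missing data
        failsafe:
          - matchMethod: "*"
            retry:
              maxAttempts: 4
              emptyResultMaxAttempts: 2     # cap retries specifically for empty results
              emptyResultDelay: 500ms       # fixed delay between empty-result retries
              blockUnavailableDelay: 1s     # wait before retrying when the block isn't available yet
              emptyResultConfidence: finalizedBlock
              emptyResultAccept: ["eth_getLogs", "eth_call"]
```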
You generally don't need to configure `markEmptyAsErrorMethods`. eRPC ships with sensible defaults that cover all common point-lookup methods (`eth_getBlockByNumber`, `eth_getBlockReceipts`, `eth_getTransactionByHash`, traces, etc.). The defaults automatically treat `null` results for these methods as missing data, triggering failover to other upstreams. Only override this if you have unusual methods or upstream behaviors.
`eth_getBlockByHash` and `eth_getTransactionReceipt` are intentionally excluded from the defaults: subgraph upstreams commonly return `null` for `eth_getBlockByHash`, and `eth_getTransactionReceipt` returns `null` for pending transactions.
If you need null-safety for `eth_getBlockByHash`, use an aggressive `emptyResultMaxAttempts` (see example below).
Example
This example uses Polygon (2.3s blocks). Adjust delays for your chain's block time.
`blockUnavailableDelay` and `emptyResultDelay` only fire when relevant: for finalized blocks, `emptyResultConfidence: blockHead` knows the block exists and treats empty as valid, so no delay triggers. This means you can use one retry policy for all finalities instead of splitting by finality.
The one exception is consensus for unfinalized logs/receipts: during reorgs, upstreams may disagree, so consensus picks the correct result.
```yaml
projects:
  - id: main
    networks:
      - architecture: evm
        evm:
          chainId: 137
        # This is enabled by default to instruct eRPC to retry
        # on other upstreams when one upstream gives an "empty-ish"
        # response, such as null, "0x", or an empty array [].
        directiveDefaults:
          retryEmpty: true
        failsafe:
          # ── Unfinalized logs/receipts: add consensus for reorg safety ──
          - matchMethod: "eth_getLogs|eth_getBlockReceipts"
            matchFinality: [unfinalized, unknown]
            retry: &retry-standard
              maxAttempts: 6
              delay: 0
              blockUnavailableDelay: 1s
              emptyResultDelay: 500ms
              emptyResultMaxAttempts: 1
              emptyResultConfidence: blockHead
              emptyResultAccept: ["eth_getLogs"]
            consensus:
              maxParticipants: 3
              agreementThreshold: 2
          # ── All other logs/receipts: same retry, no consensus ──
          - matchMethod: "eth_getLogs|eth_getBlockReceipts"
            retry: *retry-standard
          # ── eth_call: empty is a valid contract return ──
          - matchMethod: "eth_call"
            retry:
              <<: *retry-standard
              emptyResultAccept: ["eth_call"]
          # ── eth_getBlockByHash: null often means not indexed yet ──
          - matchMethod: "eth_getBlockByHash"
            retry:
              <<: *retry-standard
              emptyResultMaxAttempts: 6
          # ── Everything else ──
          - matchMethod: "*"
            retry:
              <<: *retry-standard
              emptyResultMaxAttempts: 2
              emptyResultAccept: ["eth_getLogs", "eth_call"]
```
Production guidelines
Choosing `emptyResultDelay` — set it so a few retries span roughly one block time:
| Chain | Block time | `emptyResultDelay` | `emptyResultMaxAttempts` |
|---|---|---|---|
| Ethereum | 12s | 2000ms | 3 |
| Polygon | 2.3s | 500ms | 5–6 |
| Base / Optimism | 2s | 500ms | 4–5 |
| Arbitrum | 250ms | 200ms | 5 |
| Monad | 500ms | 200ms | 3 |
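The Ethereum row of the table could be expressed as a retry fragment like the following sketch (values taken from the table, spliced into the failsafe shape shown in the example above):

```yaml
failsafe:
  - matchMethod: "*"
    retry:
      maxAttempts: 4
      emptyResultMaxAttempts: 3   # per the table: 3 attempts...
      emptyResultDelay: 2000ms    # ...at 2s each span ~6s of Ethereum's 12s block time
```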
`blockUnavailableDelay` — set to 500ms–2s if you see "block unavailable" errors.
`emptyResultAccept` rules of thumb:
- Always include `eth_getLogs` — empty logs is a valid response.
- Always include `eth_call` — empty/null is a valid contract return.
- Consider including `eth_getBlockByHash` if your upstreams reliably have blocks by hash.
Validations directives
Response validation directives are ideal for high-integrity use-cases (such as indexing) where you need guaranteed data accuracy. When a validation fails, eRPC treats it as an upstream error — the response is rejected and retry/consensus policies automatically try other upstreams until valid data is found.
How it works with failsafe policies:
```
Request → Upstream A returns receipts with missing logs (logsBloom doesn't match actual logs)
        → Validation fails → Response rejected (not cached, not returned)
        → Retry policy kicks in → Try Upstream B
        → Upstream B returns complete receipts with matching bloom → Success!
```
With retry: Each validation failure triggers the next retry attempt. Configure `maxAttempts` high enough to cover your upstream pool.
With consensus: Invalid responses are excluded from consensus voting. Only valid responses participate, so even if 2/3 upstreams return bad data, the 1 valid response wins.
With hedge + consensus + retry (recommended for indexers): Hedge spawns parallel requests, consensus compares valid responses, and retry handles cases where all initial attempts fail validation.
Set via config (applies to all requests), HTTP headers, or query parameters:
| Directive | Header | Query | Description |
|---|---|---|---|
| `validateLogsBloomEmptiness` | `X-ERPC-Validate-Logs-Bloom-Emptiness` | `validate-logs-bloom-emptiness` | Bloom/logs consistency: logs exist ↔ bloom non-zero |
| `validateLogsBloomMatch` | `X-ERPC-Validate-Logs-Bloom-Match` | `validate-logs-bloom-match` | Recalculate bloom from logs and verify match |
| `enforceLogIndexStrictIncrements` | `X-ERPC-Enforce-Log-Index-Strict-Increments` | `enforce-log-index-strict-increments` | Log indices must increment by 1 across receipts |
| `validateTxHashUniqueness` | `X-ERPC-Validate-Tx-Hash-Uniqueness` | `validate-tx-hash-uniqueness` | No duplicate transaction hashes in receipts |
| `validateTransactionIndex` | `X-ERPC-Validate-Transaction-Index` | `validate-transaction-index` | Receipt indices must be sequential (0, 1, 2...) |
| `validateHeaderFieldLengths` | `X-ERPC-Validate-Header-Field-Lengths` | `validate-header-field-lengths` | Block header field byte lengths |
| `validateTransactionFields` | `X-ERPC-Validate-Transaction-Fields` | `validate-transaction-fields` | Transaction field formats |
| `validateTransactionBlockInfo` | `X-ERPC-Validate-Transaction-Block-Info` | `validate-transaction-block-info` | Tx block hash/number matches block |
| `validateLogFields` | `X-ERPC-Validate-Log-Fields` | `validate-log-fields` | Log address/topic lengths |
| `receiptsCountExact` | `X-ERPC-Receipts-Count-Exact` | `receipts-count-exact` | Receipts array must have exactly N items |
| `receiptsCountAtLeast` | `X-ERPC-Receipts-Count-At-Least` | `receipts-count-at-least` | Receipts array must have at least N items |
| `validationExpectedBlockHash` | `X-ERPC-Validation-Expected-Block-Hash` | `validation-expected-block-hash` | All receipts must have this block hash |
| `validationExpectedBlockNumber` | `X-ERPC-Validation-Expected-Block-Number` | `validation-expected-block-number` | All receipts must have this block number |
```yaml
projects:
  - id: main
    networks:
      - architecture: evm
        evm:
          chainId: 1
        directiveDefaults:
          # Response validations are DISABLED by default (to avoid JSON parsing overhead).
          # Enable specific ones as needed for high-integrity use-cases:
          validateLogsBloomEmptiness: true
          validateLogsBloomMatch: true
          enforceLogIndexStrictIncrements: true
          # etc.
        # Recommended for indexers: hedge + consensus + retry.
        # Invalid responses are rejected, valid ones are compared, retries if all fail.
        failsafe:
          - matchMethod: "eth_getBlockReceipts"
            hedge:
              maxCount: 3              # Spawn up to 3 parallel requests
              delay: 100ms
            consensus:
              maxParticipants: 3
              agreementThreshold: 2    # Accept if 2+ agree (invalid ones excluded)
            retry:
              maxAttempts: 5           # Keep trying until valid data found
```