Integrity

RPC nodes may return stale, empty, or inconsistent data. eRPC's integrity module ensures you always get the most recent and valid blockchain data through:

  1. Block tracking — Monitors the highest known block across all upstreams and ensures eth_blockNumber and eth_getBlockByNumber(latest/finalized) return the freshest data.
  2. Range enforcement — For eth_getLogs, ensures the requested block range is available on the chosen upstream.
  3. Empty/missing data handling — Detects empty results, missing data errors, and block unavailability; retries automatically. See Empty or Missing Data Handling.
  4. Response validation — Validates response structure and consistency (bloom filters, receipts, logs). See Validations.

Combine with retry and consensus policies for automatic failover when integrity checks fail.

Config

projects:
  - id: main
    networks:
      - architecture: evm
        evm:
          chainId: 1
          # Block tracking: ensure highest known block is returned
          integrity:
            enforceHighestBlock: true       # Replace stale block numbers with highest known
            enforceGetLogsBlockRange: true  # Skip upstreams that don't have requested range
 
    # State poller settings (how block tracking works)
    upstreamDefaults:
      evm:
        statePollerInterval: 30s  # How often to poll latest/finalized block
        statePollerDebounce: 5s   # Min interval between polls (ideally ≤ block time)

Block Tracking

eRPC runs a background state poller for each upstream to track latest/finalized blocks. When enforceHighestBlock: true:

  • eth_blockNumber: If the response is older than the highest known block, eRPC returns the highest known block instead.
  • eth_getBlockByNumber(latest/finalized): If the response is stale, eRPC retries on other upstreams.

The poller also updates proactively when eth_blockNumber or eth_getBlockByNumber(latest) returns a higher block than currently tracked.

Metrics: erpc_upstream_stale_latest_block_total, erpc_upstream_stale_finalized_block_total

Range Enforcement for eth_getLogs

When enforceGetLogsBlockRange: true, eRPC checks that the upstream has the requested block range before sending the request:

  1. If toBlock is beyond the upstream's latest block → skip to next upstream (after forcing a fresh poll if stale)
  2. If fromBlock is older than the upstream's available range (based on the maxAvailableRecentBlocks config) → skip to next upstream

Large range handling: eRPC can auto-split large ranges based on getLogsAutoSplittingRangeThreshold or when upstream returns "range too large" errors.
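The auto-splitting behavior can be sketched in config. This is a hedged example: the placement of getLogsAutoSplittingRangeThreshold (here under upstreamDefaults.evm, next to the other evm-level settings shown earlier) is an assumption for illustration, not a verified schema:

```yaml
projects:
  - id: main
    networks:
      - architecture: evm
        evm:
          chainId: 1
          integrity:
            enforceGetLogsBlockRange: true  # Skip upstreams missing the requested range
    upstreamDefaults:
      evm:
        # Assumed key placement: auto-split eth_getLogs requests spanning
        # more than 10,000 blocks into smaller sub-requests.
        getLogsAutoSplittingRangeThreshold: 10000
```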

Metrics: erpc_upstream_evm_get_logs_stale_upper_bound_total, erpc_upstream_evm_get_logs_stale_lower_bound_total, erpc_upstream_evm_get_logs_forced_splits_total

Empty or Missing Data Handling

RPC nodes often return empty or missing data — especially near the tip of the chain where not all nodes have indexed the latest block yet. eRPC automatically detects these situations and retries on other upstreams.

There are three situations eRPC handles:

| Situation | Example | What eRPC does |
| --- | --- | --- |
| Block not ready | You request block N, but an upstream is still on block N-1 | Skips that upstream before even sending the request; retries after blockUnavailableDelay |
| Empty result | eth_getBlockByNumber returns null | Tries other upstreams; optionally retries after emptyResultDelay |
| Missing data error | Upstream returns "missing trie node" or "header not found" | Tries other upstreams; optionally retries with a delay |

How it works

  1. Before sending — eRPC checks if the upstream has the requested block. If not, it skips to the next upstream without wasting a network call.
  2. After receiving — If the upstream returns an empty result for a point-lookup method (like eth_getBlockByNumber), eRPC treats it as missing data and tries the next upstream.
  3. After all upstreams tried — If every upstream failed, eRPC can retry the whole round with a short delay, giving nodes time to catch up.

The pre-send block availability check is the most important mechanism. It prevents tip-of-chain errors on fast chains (Polygon, Arbitrum, Base) where upstreams may be 1-2 blocks apart.
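Both the pre-send check and the post-receive retries are tuned through retry policy fields. A minimal sketch for a fast L2; the delay values are illustrative, not tuned recommendations:

```yaml
failsafe:
  - matchMethod: "*"
    retry:
      maxAttempts: 4
      blockUnavailableDelay: 500ms  # Pause before retrying when no upstream has the block yet
      emptyResultDelay: 250ms       # Pause between retries after an empty result
      emptyResultMaxAttempts: 2     # Cap on retries specifically for empty results
```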

Config reference

| Field | Where | Default | Description |
| --- | --- | --- | --- |
| retryEmpty | directiveDefaults | true | Enable retrying when all upstreams return empty or missing data |
| emptyResultAccept | failsafe[].retry | ["eth_getLogs", "eth_call"] | Methods where empty is a valid response; never retry these on empty |
| emptyResultConfidence | failsafe[].retry | finalizedBlock | When to trust an empty result: blockHead (block ≤ latest) or finalizedBlock (block ≤ finalized) |
| emptyResultMaxAttempts | failsafe[].retry | same as maxAttempts | Max retry attempts specifically for empty results |
| emptyResultDelay | failsafe[].retry | (none) | Fixed delay between empty-result retries (overrides normal delay) |
| blockUnavailableDelay | failsafe[].retry | (none) | Delay before retrying when upstreams don't have the block yet |

You generally don't need to configure markEmptyAsErrorMethods. eRPC ships with sensible defaults that cover all common point-lookup methods (eth_getBlockByNumber, eth_getBlockReceipts, eth_getTransactionByHash, traces, etc.). The defaults automatically treat null results for these methods as missing data, triggering failover to other upstreams. Only override this if you have unusual methods or upstream behaviors.

⚠️

eth_getBlockByHash and eth_getTransactionReceipt are intentionally excluded from the defaults. Subgraph upstreams commonly return null for eth_getBlockByHash, and eth_getTransactionReceipt returns null for pending transactions. If you need null-safety for eth_getBlockByHash, use aggressive emptyResultMaxAttempts (see example below).

Example

This example uses Polygon (2.3s blocks). Adjust delays for your chain's block time.

blockUnavailableDelay and emptyResultDelay only fire when relevant: for finalized blocks, emptyResultConfidence: blockHead already knows the block exists and treats an empty result as valid, so no delay is triggered. This means you can use one retry policy for all finalities instead of splitting by finality.

The one exception is consensus for unfinalized logs/receipts: during reorgs, upstreams may disagree, so consensus picks the correct result.

projects:
  - id: main
    networks:
      - architecture: evm
        evm:
          chainId: 137
 
        # This is enabled by default to instruct erpc to retry
        # on other upstreams when one upstream gives an "empty-ish" response.
        # Such as null, or 0x, or [] empty array, etc.
        directiveDefaults:
          retryEmpty: true
 
        failsafe:
          # ── Unfinalized logs/receipts: add consensus for reorg safety ──
          - matchMethod: "eth_getLogs|eth_getBlockReceipts"
            matchFinality: [unfinalized, unknown]
            retry: &retry-standard
              maxAttempts: 6
              delay: 0
              blockUnavailableDelay: 1s
              emptyResultDelay: 500ms
              emptyResultMaxAttempts: 1
              emptyResultConfidence: blockHead
              emptyResultAccept: ["eth_getLogs"]
            consensus:
              maxParticipants: 3
              agreementThreshold: 2
 
          # ── All other logs/receipts: same retry, no consensus ──
          - matchMethod: "eth_getLogs|eth_getBlockReceipts"
            retry: *retry-standard
 
          # ── eth_call: empty is a valid contract return ──
          - matchMethod: "eth_call"
            retry:
              <<: *retry-standard
              emptyResultAccept: ["eth_call"]
 
          # ── eth_getBlockByHash: null often means not indexed yet ──
          - matchMethod: "eth_getBlockByHash"
            retry:
              <<: *retry-standard
              emptyResultMaxAttempts: 6
 
          # ── Everything else ──
          - matchMethod: "*"
            retry:
              <<: *retry-standard
              emptyResultMaxAttempts: 2
              emptyResultAccept: ["eth_getLogs", "eth_call"]

Production guidelines

Choosing emptyResultDelay — set it so a few retries span roughly one block time:

| Chain | Block time | emptyResultDelay | emptyResultMaxAttempts |
| --- | --- | --- | --- |
| Ethereum | 12s | 2000ms | 3 |
| Polygon | 2.3s | 500ms | 5–6 |
| Base / Optimism | 2s | 500ms | 4–5 |
| Arbitrum | 250ms | 200ms | 5 |
| Monad | 500ms | 200ms | 3 |
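For example, taking the Ethereum row, the retry fragment would look like this (a sketch only; merge it with the fuller Polygon example above):

```yaml
failsafe:
  - matchMethod: "*"
    retry:
      maxAttempts: 4
      emptyResultDelay: 2000ms   # Ethereum: 12s block time
      emptyResultMaxAttempts: 3  # A few 2s retries span a good fraction of one block
```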

blockUnavailableDelay — set to 500ms–2s if you see "block unavailable" errors.

emptyResultAccept rules of thumb:

  • Always include eth_getLogs — empty logs is a valid response.
  • Always include eth_call — empty/null is a valid contract return.
  • Consider including eth_getBlockByHash if your upstreams reliably have blocks by hash.
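Applied together, those rules of thumb yield a fragment like this (including eth_getBlockByHash is optional, per the caveat above):

```yaml
retry:
  maxAttempts: 5
  # Methods for which an empty result is legitimate and returned as-is:
  emptyResultAccept: ["eth_getLogs", "eth_call"]
  # Add "eth_getBlockByHash" only if your upstreams reliably serve blocks by hash.
```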

Validations directives

Response validation directives are ideal for high-integrity use-cases (such as indexing) where you need guaranteed data accuracy. When a validation fails, eRPC treats it as an upstream error — the response is rejected and retry/consensus policies automatically try other upstreams until valid data is found.

How it works with failsafe policies:

Request → Upstream A returns receipts with missing logs (logsBloom doesn't match actual logs)
        → Validation fails → Response rejected (not cached, not returned)
        → Retry policy kicks in → Try Upstream B
        → Upstream B returns complete receipts with matching bloom → Success!

With retry: Each validation failure triggers the next retry attempt. Configure maxAttempts high enough to cover your upstream pool.

With consensus: Invalid responses are excluded from consensus voting. Only valid responses participate, so even if 2/3 upstreams return bad data, the 1 valid response wins.

With hedge + consensus + retry (recommended for indexers): Hedge spawns parallel requests, consensus compares valid responses, retry handles cases where all initial attempts fail validation.

Set via config (applies to all requests), HTTP headers, or query parameters:

| Directive | Header | Query | Description |
| --- | --- | --- | --- |
| validateLogsBloomEmptiness | X-ERPC-Validate-Logs-Bloom-Emptiness | validate-logs-bloom-emptiness | Bloom/logs consistency: logs exist ↔ bloom non-zero |
| validateLogsBloomMatch | X-ERPC-Validate-Logs-Bloom-Match | validate-logs-bloom-match | Recalculate bloom from logs and verify match |
| enforceLogIndexStrictIncrements | X-ERPC-Enforce-Log-Index-Strict-Increments | enforce-log-index-strict-increments | Log indices must increment by 1 across receipts |
| validateTxHashUniqueness | X-ERPC-Validate-Tx-Hash-Uniqueness | validate-tx-hash-uniqueness | No duplicate transaction hashes in receipts |
| validateTransactionIndex | X-ERPC-Validate-Transaction-Index | validate-transaction-index | Receipt indices must be sequential (0, 1, 2...) |
| validateHeaderFieldLengths | X-ERPC-Validate-Header-Field-Lengths | validate-header-field-lengths | Block header field byte lengths |
| validateTransactionFields | X-ERPC-Validate-Transaction-Fields | validate-transaction-fields | Transaction field formats |
| validateTransactionBlockInfo | X-ERPC-Validate-Transaction-Block-Info | validate-transaction-block-info | Tx block hash/number matches block |
| validateLogFields | X-ERPC-Validate-Log-Fields | validate-log-fields | Log address/topic lengths |
| receiptsCountExact | X-ERPC-Receipts-Count-Exact | receipts-count-exact | Receipts array must have exactly N items |
| receiptsCountAtLeast | X-ERPC-Receipts-Count-At-Least | receipts-count-at-least | Receipts array must have at least N items |
| validationExpectedBlockHash | X-ERPC-Validation-Expected-Block-Hash | validation-expected-block-hash | All receipts must have this block hash |
| validationExpectedBlockNumber | X-ERPC-Validation-Expected-Block-Number | validation-expected-block-number | All receipts must have this block number |

projects:
  - id: main
    networks:
      - architecture: evm
        evm:
          chainId: 1
        directiveDefaults:
          # Response validations are DISABLED by default (to avoid JSON parsing overhead).
          # Enable specific ones as needed for high-integrity use-cases:
          validateLogsBloomEmptiness: true
          validateLogsBloomMatch: true
          enforceLogIndexStrictIncrements: true
          # etc.
        # Recommended for indexers: hedge + consensus + retry
        # Invalid responses are rejected, valid ones are compared, retries if all fail
        failsafe:
          - matchMethod: "eth_getBlockReceipts"
            hedge:
              maxCount: 3 # Spawn up to 3 parallel requests
              delay: 100ms
            consensus:
              maxParticipants: 3
              agreementThreshold: 2 # Accept if 2+ agree (invalid ones excluded)
            retry:
              maxAttempts: 5 # Keep trying until valid data found