Consensus

The consensus policy sends the same request to multiple upstreams and returns the result only when enough of them agree. This improves correctness, detects misbehaving nodes, and provides deterministic behavior during faults.

⚠️

Consensus can only be configured at the network level, since it requires multiple upstreams whose results can be compared.

Configuration

erpc.yaml
projects:
  - id: main
    networks:
      - architecture: evm
        evm:
          chainId: 42161
        failsafe:
          - matchMethod: "*" # Configure per-method thresholds if needed
            consensus:
              maxParticipants: 4
              agreementThreshold: 2
              disputeBehavior: returnError # acceptMostCommonValidResult | preferBlockHeadLeader | onlyBlockHeadLeader
              lowParticipantsBehavior: acceptMostCommonValidResult  # returnError | preferBlockHeadLeader | onlyBlockHeadLeader
              preferNonEmpty: true
              preferLargerResponses: true
              ignoreFields:
                eth_getBlockByNumber: ["timestamp"]
              misbehaviorsDestination:
                type: file        # 'file' | 's3'
                path: /var/log/erpc/misbehaviors   # absolute directory for file, or s3://bucket/prefix for S3
                filePattern: "{timestampMs}-{method}-{networkId}"  # default for file; S3 default adds -{instanceId}
                # s3:
                #   region: us-west-2
                #   maxRecords: 100
                #   maxSize: 1048576        # 1MB
                #   flushInterval: 60s
                #   contentType: application/jsonl
                #   credentials:
                #     mode: env            # 'env' | 'file' | 'secret'
                #     # for mode 'file':
                #     # credentialsFile: ~/.aws/credentials
                #     # profile: default
                #     # for mode 'secret':
                #     # accessKeyID: AKIA...
                #     # secretAccessKey: ...
              punishMisbehavior: # (optional) To exclude bad upstreams from consensus for a while
                disputeThreshold: 10 
                disputeWindow: 10m
                sitOutPenalty: 30m

A real-world example of ignoreFields

The following fields are often safe to ignore, but you should verify them for your specific use case:

ignoreFields:
  eth_getLogs:
    - "*.blockTimestamp"
  eth_getTransactionReceipt:
    - "blockTimestamp"
    - "logs.*.blockTimestamp"
    - "l1Fee"
    - "l1GasPrice"
    - "l1GasUsed"
    - "gasUsedForL1"
    - "timeboosted"
    - "l1BlockNumber"
  eth_getBlockByHash:
    - "requestsHash"
    - "transactions.*.gasPrice"
    - "transactions.*.accessList"
    - "transactions.*.chainId"
    - "transactions.*.l1Fee"
    - "transactions.*.yParity"
    - "transactions.*.isSystemTx"
    - "transactions.*.depositReceiptVersion"
  eth_getBlockByNumber:
    - "requestsHash"
    - "transactions.*.gasPrice"
    - "transactions.*.accessList"
    - "transactions.*.chainId"
    - "transactions.*.l1Fee"
    - "transactions.*.yParity"
    - "transactions.*.isSystemTx"
    - "transactions.*.depositReceiptVersion"
  eth_getBlockReceipts:
    - "*.blockTimestamp"
    - "*.l1Fee"
    - "*.l1GasPrice"
    - "*.l1GasUsed"
    - "*.logs.*.blockTimestamp"

Key options

maxParticipants

Number of upstreams to query in each consensus round. The policy selects the first N healthy upstreams based on their scores.

agreementThreshold

Minimum number of identical responses needed to reach consensus. For example, with maxParticipants: 3 and agreementThreshold: 2, at least 2 upstreams must return the same result.
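The example described above can be written as the following minimal sketch (surrounding project/network keys omitted for brevity):

```yaml
failsafe:
  - matchMethod: "*"
    consensus:
      maxParticipants: 3    # query the 3 best-scored healthy upstreams
      agreementThreshold: 2 # any 2 identical responses reach consensus
```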

disputeBehavior

When upstreams disagree (no group meets threshold):

  • acceptMostCommonValidResult: Apply preferences and select the best valid result among the groups that meet the threshold. If none meet it, return a dispute error.
  • returnError: Always return a dispute error when upstreams disagree.
  • preferBlockHeadLeader: If the block head leader has a non-error result, return it; otherwise fall back to the acceptMostCommonValidResult logic.
  • onlyBlockHeadLeader: Return the leader’s non-error result if available; otherwise return a dispute error.
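As a sketch, a policy that surfaces disagreements instead of guessing might pin disputeBehavior explicitly (the method matcher here is illustrative):

```yaml
failsafe:
  - matchMethod: "eth_getBalance"
    consensus:
      maxParticipants: 3
      agreementThreshold: 2
      disputeBehavior: returnError # return a dispute error rather than picking a side
```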

Behavior options

preferNonEmpty

Prioritize non-empty results over empty ones, and empty results over errors. Applies with acceptMostCommonValidResult:

  • Above threshold: If both a non-empty and a consensus-valid error group meet threshold, pick the best non-empty (by count, then size).
  • Below threshold: With exactly one non-empty and at least one empty, pick the non-empty.
  • Prevents short-circuiting to empty/consensus-error when a non-empty may still arrive.

preferLargerResponses

Prefer larger non-empty results:

  • Below threshold (acceptMostCommonValidResult): choose the largest non-empty result.
  • Above threshold with multiple valid groups: choose the largest non-empty result.
  • If a smaller non-empty group meets the threshold but a larger non-empty result exists:
    • acceptMostCommonValidResult: choose the largest
    • returnError: dispute (don’t accept the smaller)
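Both preferences are commonly enabled together for log queries, where providers occasionally return partial or empty results. A sketch under that assumption (adjust thresholds for your own upstream pool):

```yaml
failsafe:
  - matchMethod: "eth_getLogs"
    consensus:
      maxParticipants: 3
      agreementThreshold: 2
      preferNonEmpty: true        # don't settle on [] if another node returned logs
      preferLargerResponses: true # prefer the most complete log set
```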

ignoreFields

Per-method fields ignored when computing canonical hashes (useful for timestamps etc.).

lowParticipantsBehavior

When fewer than agreementThreshold valid responses are available:

  • acceptMostCommonValidResult: Apply preferences to pick a valid result; still respects threshold semantics.
  • returnError: Return a low-participants error.
  • preferBlockHeadLeader: If the block head leader has a non-error result, return it; otherwise fall back to acceptMostCommonValidResult.
  • onlyBlockHeadLeader: If the leader has a non-error result, return it; if the leader only has an error, return that error; otherwise return a low-participants error.
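If you run only two upstreams, a round can easily fall below the threshold whenever one of them fails; a lenient fallback keeps requests flowing. A sketch under that assumption:

```yaml
failsafe:
  - matchMethod: "*"
    consensus:
      maxParticipants: 2
      agreementThreshold: 2
      lowParticipantsBehavior: acceptMostCommonValidResult # serve the best valid result available
```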

Block Head Leader: The upstream reporting the highest block number. This is determined by each upstream's state poller and ensures you're getting data from the most synchronized node.

How it works

  1. Send the request to up to maxParticipants upstreams (if fewer are available, continue with those); group identical results/errors.
  2. If any valid group meets agreementThreshold, it wins.
  3. If no winner, apply behaviors:
    • Low participants → lowParticipantsBehavior
    • Otherwise → disputeBehavior
  4. Preferences (non-empty, larger responses) may override selection in specific contexts (see above).
  5. Ties without preferences → dispute.
  6. If all upstreams return an identical error, return that error; otherwise return a low-participants error.

Misbehavior tracking

punishMisbehavior

Temporarily removes upstreams that consistently disagree with the consensus:

  • disputeThreshold: Number of disputes before punishment (e.g., 3 strikes)
  • disputeWindow: Time window for counting disputes (e.g., 10m)
  • sitOutPenalty: How long the upstream is cordoned (e.g., 30m)

misbehaviorsDestination

Append full misbehavior events (JSONL) to a destination. Each line contains the full JSON-RPC request, all participant responses or errors, the analysis summary, the winner, and the policy snapshot. No truncation is applied.

  • type: file | s3
  • path:
    • For file: absolute directory path; files are created using filePattern.
    • For s3: s3://bucket/prefix where files are uploaded using filePattern.
  • filePattern placeholders:
    • {dateByHour}: UTC hour (YYYY-MM-DD-HH)
    • {dateByDay}: UTC day (YYYY-MM-DD)
    • {method}: JSON-RPC method
    • {networkId}: network id with : replaced by _
    • {instanceId}: unique instance ID (auto from env/pod/hostname or generated)
    • {timestampMs}: UTC timestamp in milliseconds (useful to avoid key collisions on S3)
    • Default: {timestampMs}-{method}-{networkId}.jsonl for file; the S3 default appends -{instanceId}
  • s3 (when type=s3):
    • region, maxRecords, maxSize (bytes), flushInterval, contentType
    • credentials.mode: env | file | secret (+ required fields per mode)

Notes:

  • File writes use atomic append; use external rotation for large files.
  • S3 uploads are buffered and flushed by size, count, or time.
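Putting the commented S3 options from the configuration above together, an uncommented destination might look like the following sketch (the bucket, prefix, and region are placeholders):

```yaml
misbehaviorsDestination:
  type: s3
  path: s3://my-bucket/erpc-misbehaviors # placeholder bucket/prefix
  filePattern: "{dateByDay}/{timestampMs}-{method}-{networkId}-{instanceId}"
  s3:
    region: us-west-2
    maxRecords: 100
    maxSize: 1048576 # flush when the buffer reaches 1MB
    flushInterval: 60s
    contentType: application/jsonl
    credentials:
      mode: env # read AWS credentials from environment variables
```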

Performance

Consensus increases costs and latency since it waits for multiple responses. Use it selectively for critical workloads and specific methods rather than all requests.

Transaction inclusion

For reliable transaction submission and inclusion, configure consensus policies for nonce, gas estimation, and broadcasting:

erpc.yaml
projects:
  - id: main
    networks:
      - architecture: evm
        evm:
          chainId: 1
        failsafe:
          # 1. Nonce: Always use the highest value to avoid "nonce too low" errors
          - matchMethod: eth_getTransactionCount
            consensus:
              maxParticipants: 3
              agreementThreshold: 1  # Accept any single response
              preferHighestValueFor:
                eth_getTransactionCount: ["result"]  # Pick the highest nonce
 
          # 2. Gas fees: Use highest values for better inclusion during congestion
          - matchMethod: eth_gasPrice|eth_maxPriorityFeePerGas
            consensus:
              maxParticipants: 3
              agreementThreshold: 1
              preferHighestValueFor:
                eth_gasPrice: ["result"]  # Legacy transactions
                eth_maxPriorityFeePerGas: ["result"]  # EIP-1559 transactions
 
          # 3. Send transaction: Broadcast to all nodes, return immediately
          - matchMethod: eth_sendRawTransaction
            consensus:
              maxParticipants: 5  # Broadcast widely (uses all available if fewer exist)
              agreementThreshold: 1  # Return on first success
              fireAndForget: true  # Don't cancel other requests - let them complete in background

Key settings explained

| Method | Goal | Settings |
| --- | --- | --- |
| eth_getTransactionCount | Avoid "nonce too low" | preferHighestValueFor picks the highest nonce |
| eth_gasPrice | Better inclusion (legacy) | preferHighestValueFor picks the highest gas price |
| eth_maxPriorityFeePerGas | Better inclusion (EIP-1559) | preferHighestValueFor picks the highest priority fee |
| eth_sendRawTransaction | Maximum broadcast | fireAndForget: true returns quickly but broadcasts to all nodes |

fireAndForget: When enabled, consensus returns immediately upon reaching agreement but allows remaining upstream requests to complete in the background. This is ideal for write operations like eth_sendRawTransaction where you want to broadcast to as many nodes as possible while still returning quickly to the client.