Upstreams
An upstream is defined to handle one or more networks (a.k.a. chains). There are currently these types of upstreams:
- evm: a generic EVM-compatible JSON-RPC endpoint. This is the default and most-used type.
eRPC supports any EVM-compatible JSON-RPC endpoint when using the evm type. Specialized types like "alchemy" are built for well-known providers to make it easier to import all supported EVM chains with just an API key.
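At its simplest, an upstream only needs an endpoint; everything else falls back to defaults. A minimal sketch (the endpoint URL is a placeholder):

```yaml
projects:
  - id: main
    upstreams:
      # A minimal upstream: only the endpoint is required.
      # chainId and other settings are auto-detected with sensible defaults.
      - endpoint: https://mainnet.infura.io/v3/YOUR_INFURA_KEY
```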
Config
# ...
projects:
- id: main
# ...
# Each upstream supports 1 or more networks (i.e. evm chains)
upstreams:
# (REQUIRED) Endpoint URL supports the http(s) scheme along with custom schemes like "alchemy://" described later in these docs.
- endpoint: https://arbitrum-one.blastapi.io/xxxxxxx-xxxxxx-xxxxxxx
# Each upstream can have an arbitrary group name which is used in metrics, as well as
# useful when writing an eval function in selectionPolicy below.
# Use "fallback" group to let eRPC automatically create a "default" selection policy on the network level
# and then fallback to this group if the default one doesn't have enough healthy upstreams.
group: fallback
# (OPTIONAL) Upstream ID is optional and can be used to identify the upstream in logs/metrics.
id: blastapi-chain-42161
# (OPTIONAL) Configurations for EVM-compatible upstreams.
evm:
# (OPTIONAL) chainId is optional and will be detected from the endpoint (eth_chainId),
# but it is recommended to set it explicitly, for faster initialization.
# DEFAULT: auto-detected.
chainId: 42161
# (OPTIONAL) statePollerInterval used to periodically fetch the latest/finalized/sync states.
# DEFAULT: 30s.
statePollerInterval: 30s
# (OPTIONAL) statePollerDebounce prevents too many polls for latest/finalized block numbers during integrity checks.
# Ideally this should be close to (or lower than) the block time of the chain, but not too low, to avoid a thundering herd (e.g. <1s is too low).
# DEFAULT: 5s (or equal to block time if the chainId is a known chain)
statePollerDebounce: 5s
# (OPTIONAL) nodeType is optional and you can manually set it to "full" or "archive".
# DEFAULT: archive
nodeType: full
# (OPTIONAL) maxAvailableRecentBlocks limits the maximum number of recent blocks to be served by this upstream.
# DEFAULT: 128 (for "full" nodes).
maxAvailableRecentBlocks: 128
# (OPTIONAL) getLogsAutoSplittingRangeThreshold is an upstream hint used by the network-level
# proactive splitter. The network computes the min positive threshold across selected upstreams
# and splits large ranges into contiguous sub-requests of at most that size.
# Set to 0 or a negative value to disable for this upstream.
getLogsAutoSplittingRangeThreshold: 10000
# (OPTIONAL) Defines which budget to use when handling requests for this upstream (e.g. to limit total RPS).
# Since budgets can be applied to multiple upstreams they all consume from the same budget.
# For example "global-blast" below can be applied to all chains supported by BlastAPI,
# to ensure you're not hitting them more than your account allows.
# DEFAULT: <none> - no budget applied.
rateLimitBudget: global-blast
# (OPTIONAL) Rate limit budget can be automatically adjusted based on the "rate-limited" error rate,
# received from upstream. Auto-tuning is enabled by default with values below.
# This is useful to automatically increase the budget if an upstream is capable of handling more requests,
# and decrease the budget if upstream is degraded.
# Every "adjustmentPeriod", the ratio of rate-limited errors to total requests is calculated;
# if that ratio (0 to 1) is above "errorRateThreshold", the budget is decreased by "decreaseFactor",
# and if it is below "errorRateThreshold", the budget is increased by "increaseFactor".
# Note that the new budget will be applied to any upstream using this budget (e.g. Quicknode budget decreases).
# DEFAULT: if any budget is defined, auto-tuning is enabled with these values:
rateLimitAutoTune:
enabled: true
adjustmentPeriod: 1m
errorRateThreshold: 0.1
increaseFactor: 1.05
decreaseFactor: 0.9
minBudget: 0
maxBudget: 10_000
jsonRpc:
# (OPTIONAL) Allow auto-batching of requests towards the upstream.
# Remember that even if "supportsBatch" is false, you can still send batch requests to eRPC,
# but they will be forwarded to the upstream as individual requests.
supportsBatch: true
batchMaxSize: 10
batchMaxWait: 50ms
# (OPTIONAL) Headers to send along with every outbound JSON-RPC request.
# This is especially useful for upstreams that require a static Bearer token for authentication.
headers:
Authorization: "Bearer 1234567890"
# (OPTIONAL) Which methods must never be sent to this upstream.
# For example this can be used to avoid archive calls (traces) to full nodes
ignoreMethods:
- "eth_traceTransaction"
- "alchemy_*"
# (OPTIONAL) Explicitly allowed methods take precedence over ignoreMethods.
# For example, if you only want eth_getLogs to be served, set ignoreMethods to "*" and allowMethods to "eth_getLogs".
allowMethods:
- "eth_getLogs"
# (OPTIONAL) By default a dynamic mechanism automatically adds "Unsupported" methods to ignoreMethods,
# based on errors returned by the upstream. Set this to false to disable this behavior.
# Default: true
autoIgnoreUnsupportedMethods: true
# (OPTIONAL) Refer to "Failsafe" docs section for more details.
# Here is "default" configuration if not explicitly set:
failsafe:
timeout:
duration: 15s
retry:
maxAttempts: 2
delay: 1000ms
backoffMaxDelay: 10s
backoffFactor: 0.3
jitter: 500ms
circuitBreaker:
# Open circuit after 80% of requests so far have failed (160 out of 200 last requests)
failureThresholdCount: 160
failureThresholdCapacity: 200
# Wait 5 minutes before trying again
halfOpenAfter: 5m
# Close circuit after 3 successful requests (3 out of 10)
successThresholdCount: 3
successThresholdCapacity: 10
Config defaults
The project.upstreamDefaults configuration allows you to set default values for all upstreams in a project. These defaults are applied before any upstream-specific configurations:
projects:
- id: main
upstreams:
# ... example above ^
upstreamDefaults:
# Default group for all upstreams
group: "default"
# Default JSON-RPC settings
jsonRpc:
supportsBatch: true
batchMaxSize: 10
batchMaxWait: "50ms"
# Default failsafe policies
failsafe:
timeout:
duration: "15s"
retry:
maxAttempts: 3
delay: "300ms"
jitter: "100ms"
backoffMaxDelay: "5s"
backoffFactor: 1.5
circuitBreaker:
failureThresholdCount: 160
failureThresholdCapacity: 200
halfOpenAfter: "5m"
successThresholdCount: 3
successThresholdCapacity: 3
# Default method filters
ignoreMethods:
- "eth_traceTransaction"
- "alchemy_*"
allowMethods:
- "eth_getLogs"
Default values are only applied if the upstream doesn't have those values explicitly set. This allows you to have consistent configuration across all upstreams while still maintaining the ability to override specific values when needed.
Defaults are merged at the first level only (not a deep merge).
i.e. if an upstream has its own failsafe: defined, it will not take any policies from upstreamDefaults.
e.g. if an upstream's failsafe only has a "timeout" policy, it will NOT get retry/circuitBreaker from upstreamDefaults (those will be disabled).
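A sketch of this shallow-merge behavior (upstream IDs and endpoints are hypothetical):

```yaml
upstreamDefaults:
  failsafe:
    timeout:
      duration: "15s"
    retry:
      maxAttempts: 3
upstreams:
  # Inherits the ENTIRE failsafe from upstreamDefaults (timeout + retry),
  # because it does not define its own failsafe.
  - id: node-a
    endpoint: https://rpc-a.example.com
  # Defines its own failsafe, so it gets ONLY this timeout;
  # retry from upstreamDefaults is NOT merged in (it is disabled).
  - id: node-b
    endpoint: https://rpc-b.example.com
    failsafe:
      timeout:
        duration: "5s"
```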
Priority & routing
eRPC automatically picks the best upstream for each request based on real-time performance. It tracks error rate, latency, throttling, and block lag for every upstream, computes a score (higher = better), and keeps the highest-scoring one as the "primary" while the others stay on standby.
How it works
- Every few seconds eRPC computes a score for each upstream based on its recent performance (error rate, latency, throttling, block lag).
- Upstreams are sorted by score — the highest-scoring upstream becomes the primary.
- The primary is sticky: it stays primary until another upstream scores significantly higher (hysteresis) and a cooldown period has passed. This prevents constant flip-flopping.
- If the primary fails or is too slow for a particular request, the next upstream is tried automatically.
Config
All routing settings live under the project level. The defaults work well for most setups — you only need to change them if you want to fine-tune switching behavior.
projects:
- id: main
# Routing algorithm: "score-based" (default) or "round-robin".
# score-based: best upstream is used first (recommended).
# round-robin: rotate evenly across upstreams.
routingStrategy: score-based
# Whether to compute one score per upstream ("upstream", default) or
# a separate score for each RPC method ("method").
# "upstream" is simpler and recommended for most setups.
scoreGranularity: upstream
# How often to recalculate upstream scores.
# DEFAULT: 30s
scoreRefreshInterval: 30s
# Time window for collecting performance metrics (error rate, latency, etc.)
# DEFAULT: 10m
scoreMetricsWindowSize: 10m
# How much of the previous score is carried over each refresh tick (0..1).
# Higher = smoother/more stable scores, lower = faster reaction to changes.
# Use a negative value (e.g. -1) to disable smoothing (only latest metrics matter).
# DEFAULT: 0.95
scorePenaltyDecayRate: 0.95
# A challenger must score this fraction higher than the current primary
# to trigger a switch (0..1). e.g. 0.10 = must be 10% better.
# Use a negative value (e.g. -1) to disable stickiness (always pick the highest score).
# DEFAULT: 0.10
scoreSwitchHysteresis: 0.10
# Minimum time between primary switches.
# Use a negative value (e.g. -1) to disable the cooldown.
# DEFAULT: 2m
scoreMinSwitchInterval: 2m
# Controls prometheus metrics cardinality: "compact" (default), "detailed", or "none".
scoreMetricsMode: compact
upstreams:
- endpoint: https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY
# ...
The scoring mechanism only affects the order in which upstreams are tried. To fully disable an unreliable upstream, use the Circuit Breaker failsafe policy at the upstream level.
Tuning tips
- Stable production setup (default): leave everything as-is. The 10% hysteresis and 2-minute cooldown prevent unnecessary switches.
- Fast failover: set scoreSwitchHysteresis: -1 and scoreMinSwitchInterval: -1 to always use the current best upstream immediately.
- Gradual adaptation: increase scorePenaltyDecayRate (e.g. 0.98) for smoother scores; performance changes take longer to take effect.
- Reactive adaptation: decrease scorePenaltyDecayRate (e.g. 0.80) to react faster to upstream degradation.
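As a sketch, a fast-failover setup that also reacts quickly to degradation would combine these settings at the project level:

```yaml
projects:
  - id: main
    # Always route to the current highest-scoring upstream:
    scoreSwitchHysteresis: -1   # disable stickiness (no "10% better" requirement)
    scoreMinSwitchInterval: -1  # disable the cooldown between primary switches
    # React faster to degradation by reducing score smoothing:
    scorePenaltyDecayRate: 0.80
```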
Customizing scores per upstream
You can boost or reduce the score of specific upstreams using score multipliers.
projects:
- id: main
upstreams:
- id: my-alchemy
endpoint: https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY
routing:
# Ignore the slowest 30% of requests when measuring latency.
# DEFAULT: 0.70
scoreLatencyQuantile: 0.70
scoreMultipliers:
- network: '*' # Apply to all networks (or e.g. "evm:1" for Ethereum only)
method: '*' # Apply to all methods (or e.g. "eth_getLogs|eth_call")
# "overall" boosts the score. Higher = higher score = more preferred.
# e.g. overall: 10 makes this upstream 10x more likely to stay primary.
# DEFAULT: 1.0
overall: 1.0
# Metric weights control how much each metric affects the score.
# Higher weight = that metric matters more = bigger score drop when it's bad.
errorRate: 4.0 # Penalize higher error rates by increasing this value.
respLatency: 8.0 # Penalize higher latency by increasing this value.
throttledRate: 3.0 # Penalize higher throttled requests by increasing this value.
blockHeadLag: 2.0 # Penalize block head lag by increasing this value.
finalizationLag: 1.0 # Penalize finalization lag by increasing this value.
# Prefer this cheaper upstream: overall: 10 boosts its score 10x
- id: my-cheap-node
endpoint: https://cheap-rpc.example.com
routing:
scoreMultipliers:
- overall: 10
# Fallback only: overall: 1 keeps the default score (no boost)
- id: my-expensive-node
endpoint: https://premium-rpc.example.com
routing:
scoreMultipliers:
- overall: 1
overall boosts the score: a higher value means a higher score, so the upstream is preferred.
Metric weights (errorRate, respLatency, etc.) penalize bad performance; increase a weight to penalize that metric more.
Upstream types
evm
These are generic well-known EVM-compatible JSON-RPC endpoints. This is the default and most-used type. They can be your own self-hosted nodes, or remote 3rd-party provider nodes.
# ...
projects:
- id: main
# ...
upstreams:
- id: my-infura
type: evm
endpoint: https://mainnet.infura.io/v3/YOUR_INFURA_KEY
# (OPTIONAL) Configurations for EVM-compatible upstreams.
evm:
# (OPTIONAL) chainId is optional and will be detected from the endpoint (eth_chainId),
# but it is recommended to set it explicitly, for faster initialization.
# DEFAULT: auto-detected.
chainId: 42161
# (OPTIONAL) statePollerInterval used to periodically fetch the latest/finalized/sync states.
# To disable state polling set this value to 0, which means no regular calls to RPC for latest/finalized/sync states.
# The consequence is that all data will be considered "unfinalized" or "unknown" regardless of block numbers (and whether they're actually finalized or not).
# DEFAULT: 30s.
statePollerInterval: 30s
# (OPTIONAL) statePollerDebounce prevents too many polls for latest/finalized block numbers during integrity checks.
# Ideally this should be close to (or lower than) the block time of the chain, but not too low, to avoid a thundering herd (e.g. <1s is too low).
# DEFAULT: 5s (or equal to block time if the chainId is a known chain)
statePollerDebounce: 5s
# (OPTIONAL) nodeType is optional and you can manually set it to "full" or "archive".
# DEFAULT: archive
nodeType: full
# (OPTIONAL) maxAvailableRecentBlocks limits the maximum number of recent blocks to be served by this upstream.
# DEFAULT: 128 (for "full" nodes).
maxAvailableRecentBlocks: 128
# (OPTIONAL) getLogsAutoSplittingRangeThreshold is an upstream hint used by the network-level
# proactive splitter. The network computes the min positive threshold across selected upstreams
# and splits large ranges into contiguous sub-requests of at most that size.
# Set to 0 or a negative value to disable for this upstream.
getLogsAutoSplittingRangeThreshold: 10000
# ...
getLogs limits, splitting on error, and enforcement are now configured at the network level. See EVM Networks → eth_getLogs.
Block availability
Define the block window each EVM upstream can serve. You can bound by the chain's earliest or latest block and use different probes to detect real availability on that upstream.
This feature is optional and primarily helps reduce redundant calls and latency. eRPC already maintains
correctness by automatically failing over to other healthy upstreams when one node lacks the data. Block
availability simply helps skip over such nodes faster without trying them first.
For many setups, method filters are a cheaper and simpler way to control upstream data availability; consider
using ignoreMethods/allowMethods first. See Config → method filters.
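For example, instead of probing block availability, a full node could simply be kept away from archive-style calls via method filters (the upstream ID and endpoint are placeholders):

```yaml
upstreams:
  - id: my-full-node
    endpoint: https://full-node.example.com
    evm:
      nodeType: full
    # Exclude archive-only methods from this full node entirely,
    # instead of relying on availability probes:
    ignoreMethods:
      - "trace_*"
      - "debug_*"
```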
projects:
- id: main
upstreams:
- id: my-evm
endpoint: https://mainnet.example
evm:
# Limit the highest block this upstream serves to latest-64 (helps avoid reorgs)
blockAvailability:
upper:
latestBlockMinus: 64 # latest - 64 is the upper bound
probe: blockHeader # default probe; can be omitted
# updateRate is ignored for latestBlockMinus (bound computed on-demand from evmStatePoller's latest block)
# Auto-detect the earliest block where logs exist on this upstream,
# and start serving from there (refresh hourly to follow pruning).
lower:
earliestBlockPlus: 0 # earliestDetected(eventLogs) + 0
probe: eventLogs # require >=1 log in the block
updateRate: 1h # re-evaluate earliest periodically
# Example 2: fixed window (serve blocks 17,000,000..latest-128)
- id: fixed-window
endpoint: https://another
evm:
blockAvailability:
lower:
exactBlock: 17000000 # hard lower bound
probe: blockHeader
updateRate: 0s
upper:
latestBlockMinus: 128 # rolling upper bound (always uses current latest)
probe: blockHeader
# updateRate is ignored for latestBlockMinus (bound computed on-demand from evmStatePoller's latest block)
# Example 3: traces-aware lower bound (only serve blocks that have traces)
- id: traces
endpoint: https://traces.example
evm:
blockAvailability:
lower:
earliestBlockPlus: 0 # earliestDetected(traceData)
probe: traceData # tries multiple trace/debug methods
updateRate: 24h # re-check daily in case of pruning
- probe values: blockHeader (default), eventLogs, callState, traceData
- lower/upper bounds: choose one of exactBlock, earliestBlockPlus, latestBlockMinus
- updateRate: only applies to earliestBlockPlus bounds. 0 freezes the computed bound; >0 periodically re-evaluates it. For latestBlockMinus, updateRate is ignored since bounds are computed on-demand using the evmStatePoller's latest block.
Notes on probes:
- eventLogs: considered available only if querying the block by blockHash returns at least 1 log.
- callState: checks historical state via eth_getBalance; any non-null result counts as available.
- traceData: tries multiple engines in order: trace_block, debug_traceBlockByHash, trace_replayBlockTransactions; available if any returns a non-empty result.
When is block availability enforced?
Block availability bounds are only enforced when eRPC can extract a block number from the request. If the block number cannot be determined (e.g., certain method calls without explicit block parameters), the request will be forwarded to the upstream regardless of the configured bounds. This ensures availability checks don't block requests where block context is unavailable.
When a probe is unsupported on an upstream (e.g. method ignored/unsupported), eRPC skips that probe for availability decisions. Prefer blockHeader or choose a probe the upstream supports.
About updateRate: The updateRate field only applies to earliestBlockPlus bounds. For latestBlockMinus, it is ignored because bounds are computed on-demand using the continuously-updated latest block value maintained by eRPC's state poller. This ensures latestBlockMinus bounds always reflect the current latest block without needing a separate update schedule.
Compression
eRPC supports gzip compression at multiple points in the request/response cycle:
- Client → eRPC: Clients can send gzipped requests by setting the Content-Encoding: gzip header
# Example of sending gzipped request to eRPC
curl -X POST \
-H "Content-Encoding: gzip" \
-H "Content-Type: application/json" \
--data-binary @<(echo '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[]}' | gzip) \
http://localhost:4000/main/evm/42161
- eRPC → Upstream: Configurable per upstream to send gzipped requests (disabled by default)
upstreams:
- id: my-infura
jsonRpc:
enableGzip: false # gzip when sending requests to this upstream (disabled by default)
- Upstream → eRPC: Automatically handles gzipped responses from upstreams when they send Content-Encoding: gzip
- eRPC → Client: Automatically enabled when clients send the Accept-Encoding: gzip header (can be disabled in server config)
server:
enableGzip: true # gzip compression for responses to clients (enabled by default)
Using gzip can reduce ingress/egress bandwidth costs, and in certain cases (e.g. large RPC requests) it can improve performance.
Custom HTTP Headers
You can send additional headers (e.g. Authorization) along with every outbound JSON-RPC request to an upstream by specifying jsonRpc.headers in the config. This is especially useful for upstreams that require a static Bearer token for authentication.
upstreams:
- id: my-private-upstream
endpoint: https://private-provider.io/v1
jsonRpc:
# (OPTIONAL) Send additional headers to this upstream on every request
# e.g. Authorization bearer token, custom X-Header, etc.
headers:
Authorization: "Bearer SECRET_VALUE_123"
X-Custom-Header: "HelloWorld"
Client proxy pools
Proxy pools define proxies for outgoing traffic from eRPC to upstreams, enabling centralized management of http(s)/socks5 proxies with round-robin load balancing across multiple upstreams. This is particularly useful for routing requests through different proxy servers based on geographic location or specific requirements (e.g., public vs private RPC endpoints).
# Define proxy pools at the root level
proxyPools:
- id: eu-dc1-pool
urls:
- http://proxy111.myorg.local:3128
- https://proxy222.myorg.local:3129
- id: us-dc1-pool
urls:
- http://proxy333.myorg.local:3128
- socks5://proxy444.myorg.local:3129
projects:
- id: main
# Option 1: Apply proxy pool to all upstreams
upstreamDefaults:
jsonRpc:
proxyPool: eu-dc1-pool
# Option 2: Apply proxy pools selectively to specific upstreams
upstreams:
- id: public-rpc-1
endpoint: https://public-rpc-1.example.com
jsonRpc:
proxyPool: eu-dc1-pool
- id: public-rpc-2
endpoint: https://public-rpc-2.example.com
jsonRpc:
proxyPool: us-dc1-pool
# This upstream won't use a proxy since it has no proxyPool specified
- id: private-rpc-1
endpoint: https://private-rpc-1.example.com
You can use upstreamDefaults to apply a proxy pool to all upstreams, or configure them individually. Individual upstream configurations will override the defaults.