Docker deployment
eRPC ships multi-arch Docker images (linux/amd64, linux/arm64) to ghcr.io/erpc/erpc. The typical deployment is a single container with a mounted config file and two exposed ports.
What this page covers:

- Quick start — a single `docker run` command
- docker-compose example
- Custom NPM modules for TypeScript configs
- Port mapping, volume mounts, and environment variables
- Production tuning and healthcheck integration
Quick start
Create a minimal `erpc.yaml`:

```yaml
logLevel: debug

projects:
  - id: main
    upstreams:
      - endpoint: alchemy://${ALCHEMY_API_KEY}
      - endpoint: blastapi://${BLASTAPI_API_KEY}
```

See the complete config example for all available options.
Run the container:

```bash
docker run -v $(pwd)/erpc.yaml:/erpc.yaml \
  -p 4000:4000 -p 4001:4001 \
  ghcr.io/erpc/erpc:latest
```

Send a test request:
```bash
curl --location 'http://localhost:4000/main/evm/1' \
  --header 'Content-Type: application/json' \
  --data '{
    "method": "eth_getBlockByNumber",
    "params": ["0x1203319", false],
    "id": 1,
    "jsonrpc": "2.0"
  }'
```

docker-compose
Basic production-ready compose file with Prometheus and Grafana:
```bash
# Clone the repo if you haven't
git clone https://github.com/erpc/erpc.git
cd erpc

# Start eRPC + monitoring stack
docker-compose up -d
```

Minimal standalone compose:
```yaml
version: '3.8'
services:
  erpc:
    image: ghcr.io/erpc/erpc:latest
    ports:
      - "4000:4000"
      - "4001:4001"
    volumes:
      - ./erpc.yaml:/erpc.yaml
    restart: unless-stopped
```

See the monitoring guide for metrics and Grafana dashboards.
Custom NPM modules
When your TypeScript config imports packages beyond @erpc-cloud/config, make them available inside the container using one of these two approaches.
Option 1: Custom image
```dockerfile
# Stage 1: install dependencies. A Node image is used here because plain
# debian:12 ships neither pnpm nor npm.
FROM node:20
RUN corepack enable   # makes pnpm and yarn available
COPY package.json pnpm-lock.yaml /
# COPY package.json package-lock.yaml /  # For npm
# COPY package.json yarn.lock /          # For yarn
RUN pnpm install
# RUN npm install   # For npm
# RUN yarn install  # For yarn

# Stage 2: copy the installed modules into the eRPC image
FROM ghcr.io/erpc/erpc:latest
COPY --from=0 /node_modules /node_modules
```

```bash
docker build -t erpc-custom -f Dockerfile.custom .
docker run -v $(pwd)/erpc.ts:/erpc.ts \
  -p 4000:4000 -p 4001:4001 \
  erpc-custom
```

Option 2: Mount host dependencies
```bash
docker run \
  -v $(pwd)/package.json:/package.json \
  -v $(pwd)/node_modules:/node_modules \
  -v $(pwd)/erpc.ts:/erpc.ts \
  -p 4000:4000 -p 4001:4001 \
  ghcr.io/erpc/erpc:latest
```

docker-compose equivalent:
```yaml
version: '3.8'
services:
  erpc:
    image: ghcr.io/erpc/erpc:latest
    ports:
      - "4000:4000"
      - "4001:4001"
    volumes:
      - ./erpc.ts:/erpc.ts
      - ./package.json:/package.json
      - ./node_modules:/node_modules
    restart: unless-stopped
```

If you only use `@erpc-cloud/config`, no extra steps are needed; the base image already includes it.
Image reference
```bash
ghcr.io/erpc/erpc:latest     # latest stable
ghcr.io/erpc/erpc:<version>  # e.g. ghcr.io/erpc/erpc:0.0.46
```

Pin to a specific version tag in production. The `latest` tag is overwritten on every release. Both linux/amd64 and linux/arm64 are published in the same multi-arch manifest.
Port mapping
| Port | Protocol | Purpose |
|---|---|---|
| 4000 | HTTP | Main proxy + admin endpoint |
| 4001 | HTTP | Prometheus `/metrics` scrape endpoint |
| 4100 | gRPC | gRPC listener (only if `server.grpc` is enabled in config) |

Expose only the ports your deployment actually uses. Port 4001 (metrics) should be reachable by your Prometheus scraper but not exposed publicly.
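If Prometheus runs on the same host, one way to follow this advice is to bind the metrics port to the loopback interface only. A sketch, assuming the scraper reaches eRPC via the Docker host:

```yaml
services:
  erpc:
    image: ghcr.io/erpc/erpc:latest
    ports:
      - "4000:4000"            # proxy port, exposed as usual
      - "127.0.0.1:4001:4001"  # metrics reachable only from the host itself
```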
Volume mounts
| Mount | Required | Notes |
|---|---|---|
| `/erpc.yaml` or `/erpc.ts` | yes (unless using `--endpoint`) | Config file. The path must match `--config` (default auto-discovery looks for `erpc.yaml` in `/`). |
| `/data` or similar | optional | Persistent volume for cache connectors (Redis-less local cache). Not required for the stateless proxy mode. |

Default auto-discovery in the container checks `/erpc.yaml` then `/erpc.ts`. To use a different path, pass `--config /path/to/config.yaml` as the CMD override.
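As an illustration of the CMD override, a compose service mounting the config under a non-default path might look like this (the `./config/erpc.prod.yaml` filename is a hypothetical example):

```yaml
services:
  erpc:
    image: ghcr.io/erpc/erpc:latest
    volumes:
      - ./config/erpc.prod.yaml:/config/erpc.yaml
    # Override the default CMD so eRPC reads the non-default path
    command: ["--config", "/config/erpc.yaml"]
```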
Environment variables
| Variable | Effect |
|---|---|
| `LOG_LEVEL` | `trace`, `debug`, `info`, `warn`, `error`. Defaults to `info`. |
| `LOG_WRITER` | Set to `console` for human-readable output; default is JSON. |
| `HOSTNAME` | Container hostname, used as fallback instance ID and interpolated into `server.responseHeaders`. Set by Docker automatically. |
| `INSTANCE_ID` | Overrides the instance identifier (takes precedence over `HOSTNAME`). Useful for stable names in non-k8s environments, e.g. `INSTANCE_ID=erpc-eu-01`. |
| `GOGC` | Go GC target (percent heap growth). Default 100. Set to 30 for tighter memory control. |
| `GOMEMLIMIT` | Go soft memory limit, e.g. `GOMEMLIMIT=2GiB`. Set to ~80% of the container memory limit to prevent OOM kills. |
See CLI & env vars for the full reference including POD_NAME, AWS credentials, and OTEL variables.
Production docker-compose with monitoring
```yaml
version: '3.8'
services:
  erpc:
    image: ghcr.io/erpc/erpc:latest
    ports:
      - "4000:4000"
      - "4001:4001"
    volumes:
      - ./erpc.yaml:/erpc.yaml
    environment:
      LOG_LEVEL: info
      GOGC: "30"
      GOMEMLIMIT: 2GiB
      INSTANCE_ID: erpc-prod-01
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:4000/healthcheck"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 15s

  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
```

Healthcheck integration
eRPC exposes GET /healthcheck on port 4000. It returns 200 OK once the proxy is listening and at least one upstream is reachable.
```bash
# Manual check
curl -fsS http://localhost:4000/healthcheck
```

For a Docker healthcheck:
```yaml
healthcheck:
  test: ["CMD", "wget", "-qO-", "http://localhost:4000/healthcheck"]
  interval: 10s
  timeout: 5s
  retries: 3
  start_period: 15s
```

Kubernetes liveness/readiness probes use the same endpoint:
```yaml
livenessProbe:
  httpGet:
    path: /healthcheck
    port: 4000
  initialDelaySeconds: 15
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /healthcheck
    port: 4000
  initialDelaySeconds: 5
  periodSeconds: 5
```

Production memory tuning
The Go runtime defaults work for most workloads. Tune GOGC and GOMEMLIMIT together:
```bash
# Tight memory ceiling (2 GiB container limit)
docker run \
  -e GOGC=30 \
  -e GOMEMLIMIT=1600MiB \
  -v $(pwd)/erpc.yaml:/erpc.yaml \
  -p 4000:4000 -p 4001:4001 \
  --memory=2g \
  ghcr.io/erpc/erpc:latest
```

Rule of thumb: set `GOMEMLIMIT` to ~80% of `--memory` so the GC triggers before the OOM-killer. Lower `GOGC` (25-50) keeps heaps smaller at the cost of slightly higher CPU. See CLI & env vars (`GOGC` and `GOMEMLIMIT`) for details.
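The 80% rule of thumb can be computed up front in the shell before launching the container. A small sketch (the 2048 MiB limit is an example value, not a recommendation):

```shell
# Container memory limit in MiB (example value)
LIMIT_MIB=2048

# GOMEMLIMIT at roughly 80% of the container limit (integer arithmetic)
GOMEMLIMIT_MIB=$(( LIMIT_MIB * 80 / 100 ))

echo "GOMEMLIMIT=${GOMEMLIMIT_MIB}MiB"   # prints GOMEMLIMIT=1638MiB
```

The computed value can then be passed straight to `docker run -e GOMEMLIMIT=...` alongside the matching `--memory` flag.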
Restart policy
Always set `restart: unless-stopped` (compose) or `--restart=unless-stopped` (docker run) in production. eRPC exits with a non-zero code on config errors or failed upstream initialization; the restart policy handles transient failures (e.g. a dependent service not yet up).
Prefer `unless-stopped` over `always` so that a deliberate `docker stop` doesn't trigger an immediate restart.
Common pitfalls
- Config volume not mounted: without `-v ./erpc.yaml:/erpc.yaml`, auto-discovery finds nothing in `/` and the process exits. Pass `--config` explicitly or ensure the mount path matches.
- Port 4000 already in use: eRPC fails to bind and exits with code 2. Check with `lsof -i :4000` and pick a free host port via `-p 4100:4000`.
- Stale `latest` image: re-run `docker pull ghcr.io/erpc/erpc:latest` explicitly to fetch the updated manifest, or pin to a version tag or digest.
- `GOMEMLIMIT` without `GOGC`: the runtime relies entirely on the soft limit, which can cause large heap swings just under the limit. Always pair them.
- TypeScript config auto-discovery: the container's cwd is `/`. Drop your `erpc.ts` at `/erpc.ts` or pass `--config /path/to/erpc.ts` as the CMD.
- gRPC port not exposed: port 4100 is only active when `server.grpc` is configured. Exposing it on an image without gRPC config is harmless but confusing.