Docker deployment


eRPC ships multi-arch Docker images (linux/amd64, linux/arm64) to ghcr.io/erpc/erpc. The typical deployment is a single container with a mounted config file and two exposed ports.

What this page covers:

  • Quick start — single docker run command
  • docker-compose example
  • Custom NPM modules for TypeScript configs
  • Port mapping, volume mounts, and environment variables
  • Production tuning and healthcheck integration

Quick start

Create a minimal erpc.yaml:

logLevel: debug

projects:
  - id: main
    upstreams:
      - endpoint: alchemy://${ALCHEMY_API_KEY}
      - endpoint: blastapi://${BLASTAPI_API_KEY}

See the complete config example for all available options.

Run the container:

docker run -v $(pwd)/erpc.yaml:/erpc.yaml \
  -p 4000:4000 -p 4001:4001 \
  ghcr.io/erpc/erpc:latest
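
The sample config interpolates ${ALCHEMY_API_KEY} and ${BLASTAPI_API_KEY} from the environment, so pass those variables to the container as well (the key values below are placeholders):

docker run -v $(pwd)/erpc.yaml:/erpc.yaml \
  -e ALCHEMY_API_KEY=<your-alchemy-key> \
  -e BLASTAPI_API_KEY=<your-blastapi-key> \
  -p 4000:4000 -p 4001:4001 \
  ghcr.io/erpc/erpc:latest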

Send a test request:

curl --location 'http://localhost:4000/main/evm/1' \
  --header 'Content-Type: application/json' \
  --data '{
    "method": "eth_getBlockByNumber",
    "params": ["0x1203319", false],
    "id": 1,
    "jsonrpc": "2.0"
  }'
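
A successful call returns a standard JSON-RPC 2.0 envelope with the block object in result (heavily truncated sketch):

{"jsonrpc":"2.0","id":1,"result":{"number":"0x1203319","hash":"0x...","transactions":[...]}}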

docker-compose

The repository ships a production-ready compose file with Prometheus and Grafana:

# Clone the repo if you haven't
git clone https://github.com/erpc/erpc.git
cd erpc
 
# Start eRPC + monitoring stack
docker-compose up -d

Minimal standalone compose:

version: '3.8'
services:
  erpc:
    image: ghcr.io/erpc/erpc:latest
    ports:
      - "4000:4000"
      - "4001:4001"
    volumes:
      - ./erpc.yaml:/erpc.yaml
    restart: unless-stopped

See the monitoring guide for metrics and Grafana dashboards.

Custom NPM modules

When your TypeScript config imports packages beyond @erpc-cloud/config, make them available inside the container using one of these two approaches.
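
For illustration, a hypothetical erpc.ts that imports an extra package (zod stands in for any third-party dependency, and the createConfig shape is assumed to mirror the YAML quick start above):

// erpc.ts (hypothetical sketch; zod is just an example extra dependency)
import { createConfig } from "@erpc-cloud/config";
import { z } from "zod";

// Validate required environment variables before building the config
const env = z
  .object({
    ALCHEMY_API_KEY: z.string().min(1),
    BLASTAPI_API_KEY: z.string().min(1),
  })
  .parse(process.env);

export default createConfig({
  logLevel: "debug",
  projects: [
    {
      id: "main",
      upstreams: [
        { endpoint: `alchemy://${env.ALCHEMY_API_KEY}` },
        { endpoint: `blastapi://${env.BLASTAPI_API_KEY}` },
      ],
    },
  ],
});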

Option 1: Custom image

# Stage 0: install dependencies (node:20 ships npm and Corepack for pnpm/yarn)
FROM node:20

COPY package.json pnpm-lock.yaml /
# COPY package.json package-lock.yaml /  # For npm
# COPY package.json yarn.lock /          # For yarn

RUN corepack enable && pnpm install
# RUN npm install   # For npm
# RUN yarn install  # For yarn

FROM ghcr.io/erpc/erpc:latest

COPY --from=0 /node_modules /node_modules
Build and run:

docker build -t erpc-custom -f Dockerfile.custom .
docker run -v $(pwd)/erpc.ts:/erpc.ts \
  -p 4000:4000 -p 4001:4001 \
  erpc-custom

Option 2: Mount host dependencies

docker run \
  -v $(pwd)/package.json:/package.json \
  -v $(pwd)/node_modules:/node_modules \
  -v $(pwd)/erpc.ts:/erpc.ts \
  -p 4000:4000 -p 4001:4001 \
  ghcr.io/erpc/erpc:latest

docker-compose equivalent:

version: '3.8'
services:
  erpc:
    image: ghcr.io/erpc/erpc:latest
    ports:
      - "4000:4000"
      - "4001:4001"
    volumes:
      - ./erpc.ts:/erpc.ts
      - ./package.json:/package.json
      - ./node_modules:/node_modules
    restart: unless-stopped

If you only use @erpc-cloud/config, no extra steps are needed — the base image already includes it.

Full Docker deployment reference

Image reference

ghcr.io/erpc/erpc:latest          # latest stable
ghcr.io/erpc/erpc:<version>       # e.g. ghcr.io/erpc/erpc:0.0.46

Pin to a specific version tag in production. The latest tag is overwritten on every release. Both linux/amd64 and linux/arm64 are published in the same multi-arch manifest.
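
Docker selects the right architecture from the manifest automatically; pass --platform to force one. For a fully immutable pin, reference the manifest digest instead of a tag (the digest below is a placeholder):

# Force a specific architecture (normally auto-selected)
docker pull --platform linux/arm64 ghcr.io/erpc/erpc:0.0.46

# Resolve the immutable digest of a pulled tag
docker image inspect --format '{{index .RepoDigests 0}}' ghcr.io/erpc/erpc:0.0.46
# then reference the image as ghcr.io/erpc/erpc@sha256:<digest>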

Port mapping

Port   Protocol   Purpose
4000   HTTP       Main proxy + admin endpoint
4001   HTTP       Prometheus /metrics scrape endpoint
4100   gRPC       gRPC listener (only if server.grpc is enabled in config)

Expose only the ports your deployment actually uses. 4001 metrics should be reachable by your Prometheus scraper but not exposed publicly.
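
One way to satisfy that, assuming Prometheus scrapes from the same host, is to bind the metrics port to loopback only:

docker run -v $(pwd)/erpc.yaml:/erpc.yaml \
  -p 4000:4000 \
  -p 127.0.0.1:4001:4001 \
  ghcr.io/erpc/erpc:latest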

Volume mounts

Mount                    Required                        Notes
/erpc.yaml or /erpc.ts   yes (unless using --endpoint)   Config file. The path must match --config (default auto-discovery looks for erpc.yaml in /).
/data or similar         optional                        Persistent volume for cache connectors (Redis-less local cache). Not required for the stateless proxy mode.

Default auto-discovery in the container checks /erpc.yaml then /erpc.ts. To use a different path, pass --config /path/to/config.yaml as the CMD override.
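
A sketch with a non-default config path (the path itself is illustrative):

docker run -v $(pwd)/configs/prod.yaml:/configs/prod.yaml \
  -p 4000:4000 -p 4001:4001 \
  ghcr.io/erpc/erpc:latest \
  --config /configs/prod.yaml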

Environment variables

Variable      Effect
LOG_LEVEL     trace, debug, info, warn, error. Defaults to info.
LOG_WRITER    Set to console for human-readable output; default is JSON.
HOSTNAME      Container hostname — used as fallback instance ID and interpolated into server.responseHeaders. Set by Docker automatically.
INSTANCE_ID   Override the instance identifier (takes precedence over HOSTNAME). Useful for stable names in non-k8s environments, e.g. INSTANCE_ID=erpc-eu-01.
GOGC          Go GC target (percent heap growth). Default 100. Set to 30 for tighter memory control.
GOMEMLIMIT    Go soft memory limit, e.g. GOMEMLIMIT=2GiB. Set to ~80% of the container memory limit to prevent OOM kills.

See CLI & env vars for the full reference including POD_NAME, AWS credentials, and OTEL variables.

Production docker-compose with monitoring

version: '3.8'
services:
  erpc:
    image: ghcr.io/erpc/erpc:latest
    ports:
      - "4000:4000"
      - "4001:4001"
    volumes:
      - ./erpc.yaml:/erpc.yaml
    environment:
      LOG_LEVEL: info
      GOGC: "30"
      GOMEMLIMIT: 2GiB
      INSTANCE_ID: erpc-prod-01
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:4000/healthcheck"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 15s
 
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
 
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
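
The compose file above mounts a prometheus.yml that is not shown. A minimal sketch that scrapes eRPC's metrics port, assuming the compose service name erpc and a 15s interval:

# prometheus.yml (minimal sketch)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: erpc
    static_configs:
      - targets: ["erpc:4001"]  # compose service name and metrics port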

Healthcheck integration

eRPC exposes GET /healthcheck on port 4000. It returns 200 OK once the proxy is listening and at least one upstream is reachable.

# Manual check
curl -fsS http://localhost:4000/healthcheck

For Docker healthcheck:

healthcheck:
  test: ["CMD", "wget", "-qO-", "http://localhost:4000/healthcheck"]
  interval: 10s
  timeout: 5s
  retries: 3
  start_period: 15s

Kubernetes liveness/readiness probes use the same endpoint:

livenessProbe:
  httpGet:
    path: /healthcheck
    port: 4000
  initialDelaySeconds: 15
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /healthcheck
    port: 4000
  initialDelaySeconds: 5
  periodSeconds: 5

Production memory tuning

The Go runtime defaults work for most workloads. Tune GOGC and GOMEMLIMIT together:

# Tight memory ceiling (2 GiB container limit)
docker run \
  -e GOGC=30 \
  -e GOMEMLIMIT=1600MiB \
  -v $(pwd)/erpc.yaml:/erpc.yaml \
  -p 4000:4000 -p 4001:4001 \
  --memory=2g \
  ghcr.io/erpc/erpc:latest

Rule of thumb: set GOMEMLIMIT to ~80% of --memory so the GC triggers before the OOM-killer. Lower GOGC (25-50) keeps heaps smaller at the cost of slightly higher CPU. See CLI & env vars — GOGC and GOMEMLIMIT for details.

Restart policy

Always set restart: unless-stopped (compose) or --restart=unless-stopped (docker run) in production. eRPC exits with a non-zero code on config errors or failed upstream initialization — the restart policy will handle transient failures (e.g. a dependent service not yet up).

For always vs unless-stopped: prefer unless-stopped. With always, a container you stopped deliberately is started again when the Docker daemon restarts, while unless-stopped respects the manual stop.
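
The docker run equivalent, for completeness:

docker run -d --restart=unless-stopped \
  -v $(pwd)/erpc.yaml:/erpc.yaml \
  -p 4000:4000 -p 4001:4001 \
  ghcr.io/erpc/erpc:latest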

Common pitfalls

  • Config volume not mounted — without -v ./erpc.yaml:/erpc.yaml, auto-discovery finds nothing in / and the process exits. Pass --config explicitly or ensure the mount path matches.
  • Port 4000 already in use — eRPC fails to bind and exits with code 2. Check with lsof -i :4000 and pick a free host port via -p 4100:4000.
  • Stale latest image — docker run reuses a locally cached :latest image and does not re-pull automatically. Run docker pull ghcr.io/erpc/erpc:latest before restarting, or pin to a version tag or digest.
  • GOMEMLIMIT without GOGC — the runtime relies entirely on the soft limit, which can cause large heap swings just under the limit. Always pair them.
  • TypeScript config auto-discovery — the container's cwd is /. Drop your erpc.ts at /erpc.ts or pass --config /path/to/erpc.ts as the CMD.
  • gRPC port not exposed — port 4100 is only active when server.grpc is configured. Exposing it on an image without gRPC config is harmless but confusing.
