Deploy a TON Rust node using the Helm chart. For chart-specific values, configuration examples, and operator documentation such as networking, Vault, and monitoring, refer to the chart README.

Image configuration

The Helm chart defines the container image through Helm values:
  • image.repository
  • image.tag
Avoid relying on a hard-coded tag in the documentation, because default values can change between chart releases. For the current defaults, check the chart's values.yaml.
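A hedged example of pinning the image in a release values file; the registry, image, and tag below are placeholders, not chart defaults:

```yaml
# Placeholder values -- substitute the repository and tag you actually deploy.
image:
  repository: <REGISTRY>/<IMAGE>
  tag: "<TAG>"   # pin an explicit tag instead of relying on the chart default
```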

Node roles

The chart deploys the same TON Rust node binary in two operational roles: validator and full node.
Role: Validator
Purpose: Participates in consensus and validator elections.
Ports to expose: Keep liteserver and jsonRpc disabled; expose only the required node and ops ports (adnl, and control if needed).

Role: Full node
Purpose: Syncs the chain and serves external clients (APIs, explorers, bots).
Ports to expose: Enable liteserver, jsonRpc, or both when external access is required.
  • Run validators and full nodes as separate Helm releases so resources, security policy, and lifecycle stay isolated.
  • If full chain history is needed, enable archival mode as described in Archival node settings.

Quick start

Prerequisites

  • Kubernetes cluster access configured for Helm (a working kubectl context).
  • Helm 3 installed.
  • Access to the chart at ./helm/ton-rust-node (clone the ton-rust-node repository).
  • A values file for the release, for example values.yaml.
Deploy the TON Rust node with Helm using a minimal configuration, then optionally enable the liteserver and JSON-RPC (JSON Remote Procedure Call) ports. To deploy a validator, follow this page for the base deployment and keep the liteserver and JSON-RPC ports disabled; for the validator election and operations workflow, use the validator guide (nodectl).

1. Prepare a values file

values.yaml

replicas: 2

services:
  adnl:
    perReplica:
      - annotations:
          metallb.universe.tf/loadBalancerIPs: "1.2.3.4"
      - annotations:
          metallb.universe.tf/loadBalancerIPs: "5.6.7.8"

nodeConfigs:
  node-0.json: |
    { "log_config_name": "/main/logs.config.yml", ... }
  node-1.json: |
    { "log_config_name": "/main/logs.config.yml", ... }
The chart includes a mainnet globalConfig and a default logsConfig. This minimal setup requires only nodeConfigs. Other networking modes are described in the Networking section, including NodePort, hostPort, hostNetwork, and ingress controllers such as ingress-nginx.

2. Install the release

All helm commands below require Helm to be installed and available in PATH. Use the local chart from ton-rust-node/helm/ton-rust-node:
helm install <RELEASE_NAME> ./helm/ton-rust-node -f <VALUES_FILE>
Or install from an OCI (Open Container Initiative) registry:
helm install <RELEASE_NAME> oci://ghcr.io/rsquad/ton-rust-node/helm/node -f <VALUES_FILE>

Verify deployment

Check pod status for the release:
kubectl get pods -l app.kubernetes.io/name=node,app.kubernetes.io/instance=<RELEASE_NAME>
Check service status for the release:
kubectl get svc -l app.kubernetes.io/name=node,app.kubernetes.io/instance=<RELEASE_NAME>

Enable liteserver and JSON-RPC ports

Use this only for full node deployments. Do not expose these ports on validators.
replicas: 2

ports:
  liteserver: 40000
  jsonRpc: 8081

services:
  adnl:
    perReplica:
      - annotations:
          metallb.universe.tf/loadBalancerIPs: "10.0.0.1"
      - annotations:
          metallb.universe.tf/loadBalancerIPs: "10.0.0.2"

nodeConfigs:
  node-0.json: |
    { "log_config_name": "/main/logs.config.yml", ... }
  node-1.json: |
    { "log_config_name": "/main/logs.config.yml", ... }

Run multiple releases in the same namespace

Use different release names:
helm install validator ./helm/ton-rust-node -f validator-values.yaml
helm install fullnode ./helm/ton-rust-node -f fullnode-values.yaml
This creates separate StatefulSets (validator, fullnode), services (validator-0, fullnode-0), and configs.

Operational notes

Helm hooks

This chart does not rely on Helm hooks for bootstrap. Instead, an init container seeds /main from ConfigMaps and Secrets before the main container starts. If pre- or post-deployment actions are required, such as backups before upgrades or data-integrity checks, implement them outside the chart: in a CI/CD pipeline, in dedicated Jobs, or as Helm hooks in a wrapper chart.
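If a wrapper chart is used, a standard Helm hook Job can provide the pre-upgrade step; this is a sketch only, and the Job name, image, and command are placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-upgrade-backup            # placeholder name
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: backup
          image: <BACKUP_IMAGE>        # placeholder: backup tooling image
          command: ["/bin/sh", "-c", "echo replace with a backup of the db and keys volumes"]
```

The helm.sh/hook annotations are standard Helm; only the wrapper chart, not this chart, would carry this manifest.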

Ingress and TLS

  • The chart does not create Kubernetes Ingress resources. For UDP and TCP stream ports (ADNL UDP, liteserver TCP, control TCP), a standard HTTP Ingress is not sufficient. If ingress-nginx is already in use, TCP and UDP ports can be exposed through its tcp-services and udp-services ConfigMaps, which enable stream proxying.
  • TLS termination depends on the protocol used by the exposed port.
    • For HTTP-based ports such as some JSON-RPC setups, terminate TLS at an L7 proxy or an Ingress controller that supports HTTP routing.
    • For pure TCP stream proxying, terminate TLS at an external load balancer or TCP proxy, or use a TCP proxy that supports TLS passthrough or termination.
    • ADNL uses UDP and is typically exposed directly through LoadBalancer, hostPort, or hostNetwork. TLS termination does not apply to it in the same way as for HTTP.
  • Chart networking reference
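As a sketch of the ingress-nginx approach, stream ports are mapped in its tcp-services and udp-services ConfigMaps using entries of the form "<external-port>": "<namespace>/<service>:<service-port>"; all names and ports below are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external TCP port -> liteserver port of the per-replica service (placeholders)
  "40000": "<NAMESPACE>/<RELEASE_NAME>-0:40000"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  # external UDP port -> ADNL port of the per-replica service (placeholders)
  "<ADNL_PORT>": "<NAMESPACE>/<RELEASE_NAME>-0:<ADNL_PORT>"
```

ingress-nginx must also be started with --tcp-services-configmap and --udp-services-configmap pointing at these ConfigMaps, and the corresponding ports opened on its Service.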

PVC resizing and retention

The chart defines volumeClaimTemplates in the StatefulSet for main, db, keys, and optionally logs. PVC resizing (expansion) depends on the StorageClass: if it sets allowVolumeExpansion: true, a PVC can be grown by increasing its size request. Shrinking PVCs is not supported. Chart retention configuration:
  • The chart supports helm.sh/resource-policy: keep for selected PVCs through storage.<vol>.resourcePolicy; by default, keep is applied to main and keys.
  • Chart values reference
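A minimal retention sketch, assuming the storage.<vol>.resourcePolicy values described above (db is illustrative here; main and keys already default to keep):

```yaml
storage:
  db:
    resourcePolicy: keep   # keep the db PVC when the release is uninstalled
```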

Safe upgrades, backups, and rolling restarts

  1. Use helm upgrade with an explicit image tag bump.
  2. Treat the db and keys PVCs as critical state. Plan backups according to the storage backend.
  3. Configuration changes:
    • Inline configuration changes trigger pod restarts through a checksum annotation.
    • External existing* ConfigMaps and Secrets are managed outside the chart. Changing them does not trigger an automatic rollout; restart the pods explicitly (for example, kubectl rollout restart statefulset/<RELEASE_NAME>) or upgrade the release.

Exposure mode examples

Vault integration using VAULT_URL

The chart supports an operator workflow where private keys are stored in an encrypted vault file, and the vault URL is passed through the VAULT_URL environment variable. Recommended configuration (Secret-based):
vault:
  secretName: ton-node-vault
  secretKey: VAULT_URL
The Secret should contain VAULT_URL. Example format: file:///keys/vault.json&master_key=<64-hex-chars>.
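A sketch of the corresponding Secret manifest, reusing the secretName and secretKey from the values above; the master key is a placeholder:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ton-node-vault
type: Opaque
stringData:
  VAULT_URL: "file:///keys/vault.json&master_key=<64-hex-chars>"
```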

Useful commands

# Check pod status (replace "my-node" with the release name)
kubectl get pods -l app.kubernetes.io/name=node,app.kubernetes.io/instance=my-node

# Get external service IPs
kubectl get svc -l app.kubernetes.io/name=node,app.kubernetes.io/instance=my-node

# View logs
kubectl logs my-node-0 -c ton-node

# Exec into pod
kubectl exec -it my-node-0 -c ton-node -- /bin/sh