## Image configuration
The Helm chart defines the container image through the Helm values `image.repository` and `image.tag`.
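For example, a values fragment pinning the image might look like this (the repository and tag shown are placeholders, not the chart's actual defaults):

```yaml
# Placeholder repository/tag; substitute the image for your registry.
image:
  repository: registry.example.com/ton-rust-node
  tag: "v1.2.3"
```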
## Node roles
The chart deploys the same TON Rust node binary in two operational roles: validator and full node.

| Role | Purpose | Ports to expose |
|---|---|---|
| Validator | Participates in consensus and validator elections. | Keep liteserver and jsonRpc disabled; expose only required node and ops ports (adnl, and control if needed). |
| Full node | Syncs chain and serves external clients (APIs, explorers, bots). | Enable liteserver, jsonRpc, or both when external access is required. |
- Run validators and full nodes as separate Helm releases so resources, security policy, and lifecycle stay isolated.
- If full chain history is needed, enable archival mode as described in Archival node settings.
## Quick start
### Prerequisites
- Kubernetes cluster access configured for `helm`.
- Helm 3 installed.
- Access to the chart at `./helm/ton-rust-node` by cloning the `ton-rust-node` repository.
- A values file for the release, for example `values.yaml`.
1. Prepare a values file
`values.yaml`:
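A minimal sketch of such a values file, assuming only `nodeConfigs` needs to be set (any keys beyond `nodeConfigs` in the chart's actual schema may differ):

```yaml
# Minimal sketch: the chart ships globalConfig and logsConfig defaults,
# so only nodeConfigs must be provided. Entries here are placeholders.
nodeConfigs: {}
  # per-node configuration entries go here
```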
The chart ships a built-in `globalConfig` and a default `logsConfig`. This minimal setup requires only `nodeConfigs`.
Other networking modes are described in the Networking section, including NodePort, hostPort, hostNetwork, and ingress controllers such as ingress-nginx.
2. Install the release
All `helm` commands below require Helm to be installed and available in `PATH`.
Use the local chart from `ton-rust-node/helm/ton-rust-node`:
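For example (the release name and values file name are placeholders):

```shell
# Install from the local chart checkout; adjust release name and values file.
helm install fullnode ./helm/ton-rust-node -f values.yaml
```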
### Verify deployment
Check pod status for the release, for example with `kubectl get pods`.

## Enable liteserver and JSON-RPC ports

Use this only for full node deployments. Do not expose these ports on validators.

## Run multiple releases in the same namespace

Use different release names (for example, `validator` and `fullnode`); each release then gets its own pods (`validator-0`, `fullnode-0`), services, and configs.
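A sketch of two isolated releases, one per role (values file names are placeholders):

```shell
# Separate releases keep validator and full-node state and policy isolated.
helm install validator ./helm/ton-rust-node -f validator-values.yaml
helm install fullnode  ./helm/ton-rust-node -f fullnode-values.yaml
```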
## Operational notes
### Helm hooks
This chart does not rely on Helm hooks for bootstrap. Instead, an init container seeds `/main` from ConfigMaps and Secrets before the main container starts.
If pre- or post-deployment actions are required, such as backups before upgrades or data integrity checks, implement them outside the chart. This can be done in a CI/CD pipeline, dedicated Jobs, or Helm hooks in a wrapper chart.
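As an illustration, a wrapper chart could run a pre-upgrade backup through a standard Helm hook annotation on a Job (the Job name, image, and command are hypothetical):

```yaml
# Hypothetical pre-upgrade backup Job for a wrapper chart.
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-upgrade-backup
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: backup
          image: backup-tool:latest   # placeholder image
          command: ["/bin/sh", "-c", "echo run backup here"]
```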
Chart implementation references:
### Ingress and TLS
- The chart does not create Kubernetes `Ingress` resources. For UDP and TCP stream ports (ADNL UDP, liteserver TCP, control TCP), a standard HTTP Ingress is not sufficient. If `ingress-nginx` is already in use, TCP and UDP ports can be exposed through its `tcp-services` and `udp-services` ConfigMaps, which enable stream proxying.
- TLS termination depends on the protocol used by the exposed port.
  - For HTTP-based ports, such as some JSON-RPC setups, terminate TLS at an L7 proxy or an Ingress controller that supports HTTP routing.
  - For pure TCP stream proxying, terminate TLS at an external load balancer or TCP proxy, or use a TCP proxy that supports TLS passthrough or termination.
  - ADNL uses UDP and is typically exposed directly through `LoadBalancer`, `hostPort`, or `hostNetwork`; TLS termination does not apply to it in the same way as for HTTP.
- Chart networking reference
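For illustration, ingress-nginx stream proxying is driven by ConfigMaps of the form below; the namespace, service names, and ports are placeholders, and the controller must also be started with `--tcp-services-configmap`/`--udp-services-configmap` pointing at them:

```yaml
# Entry format is "<namespace>/<service>:<port>"; names and ports are examples.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "3031": "ton/fullnode-0:3031"    # liteserver TCP (example port)
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "30303": "ton/fullnode-0:30303"  # ADNL UDP (example port)
```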
### PVC resizing and retention
The chart defines `volumeClaimTemplates` in the StatefulSet for `main`, `db`, `keys`, and optionally `logs`.

PVC resizing (expansion) depends on the StorageClass configuration. If the StorageClass sets `allowVolumeExpansion: true`, the PVC size can be increased by editing the PVC. Shrinking PVCs is not supported.
Related Kubernetes documentation:
- PVC expansion overview
- If data must be preserved, consider using the PersistentVolume reclaim policy `Retain` instead of the default `Delete` for dynamically provisioned volumes.
- The chart supports `helm.sh/resource-policy: keep` for selected PVCs through `storage.<vol>.resourcePolicy`; defaults keep `main` and `keys`.
- Values chart
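For example, expansion requires a StorageClass like the following; with it in place, a PVC can be grown by editing its `spec.resources.requests.storage` (the class name and provisioner below are placeholders):

```yaml
# allowVolumeExpansion must be true for PVC resizing to work.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-ssd          # placeholder name
provisioner: ebs.csi.aws.com    # placeholder provisioner
allowVolumeExpansion: true
```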
### Safe upgrades, backups, and rolling restarts
- Use `helm upgrade` with an explicit image tag bump.
- Treat the `db` and `keys` PVCs as critical state. Plan backups according to the storage backend.
- Configuration changes:
  - Inline configuration changes trigger pod restarts through a checksum annotation.
  - External `existing*` ConfigMaps and Secrets are managed outside the chart. Changing them does not trigger an automatic rollout. Restart the pods or upgrade the release explicitly.
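Sketches of these operations (release and StatefulSet names are placeholders; `image.tag` is the chart value noted above):

```shell
# Upgrade with an explicit image tag bump.
helm upgrade fullnode ./helm/ton-rust-node -f values.yaml --set image.tag=v1.2.4

# Force a rolling restart after changing an external ConfigMap or Secret.
kubectl rollout restart statefulset/fullnode
```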
## Exposure mode examples
- Use one exposure mode per deployment. Combining modes is possible but uncommon.
- Chart implementation reference for per-port Services
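As an illustration only (the chart's real value keys may differ; consult its values reference), selecting a single exposure mode might look like:

```yaml
# Hypothetical key names; pick exactly one exposure mode per deployment.
service:
  type: LoadBalancer   # alternatives: NodePort, hostPort, hostNetwork
```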
## Vault integration using VAULT_URL
The chart supports an operator workflow where private keys are stored in an encrypted vault file, and the vault URL is passed through the `VAULT_URL` environment variable.
Recommended configuration (Secret-based):
- Store the vault URL in a Secret and expose it to the node as `VAULT_URL`. Example format: `file:///keys/vault.json&master_key=<64-hex-chars>`.
- Chart Vault reference
- Chart implementation references:
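A sketch of the Secret-based wiring using standard Kubernetes env injection (the Secret name, key, and URL contents are placeholders):

```yaml
# Store the vault URL in a Secret...
apiVersion: v1
kind: Secret
metadata:
  name: ton-vault
type: Opaque
stringData:
  vault-url: "file:///keys/vault.json&master_key=<64-hex-chars>"
---
# ...and inject it as VAULT_URL (container spec fragment).
env:
  - name: VAULT_URL
    valueFrom:
      secretKeyRef:
        name: ton-vault
        key: vault-url
```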