Create a DV Alone
It is possible for a single operator to manage all of the nodes of a DV cluster. The nodes can run on a single machine, which is only suitable for testing, or across multiple machines, which is expected for a production setup.
The private key shares can be created centrally and distributed securely to each node. Alternatively, the private key shares can be created in a lower-trust manner with a Distributed Key Generation process, which avoids the validator private key being stored in full anywhere, at any point in its lifecycle. Follow the group quickstart instead for this latter case.
Pre-requisites
- A basic knowledge of Ethereum nodes and validators.
- Ensure you have git installed.
- Ensure you have docker installed.
- Make sure docker is running before executing the commands below.
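A quick way to confirm the docker daemon is running is to query it directly; the command below should print server information rather than a connection error:

# Verify the docker daemon is reachable (prints server info if it is running)
docker info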
Step 1: Create the key shares locally
- Launchpad
- CLI
Go to the DV Launchpad and select Create a distributed validator alone. Follow the steps to configure your DV cluster. The Launchpad will give you a docker command to create your cluster.
Before you run the command, clone the CDVC repo and cd into the directory.
# Clone the repo
git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
# Change directory
cd charon-distributed-validator-cluster/
# Run the command provided in the DV Launchpad "Create a cluster alone" flow
docker run -u $(id -u):$(id -g) --rm -v "$(pwd)/:/opt/charon" obolnetwork/charon:v1.2.0 create cluster --definition-file=...
- Clone the CDVC repo and cd into the directory.
# Clone the repo
git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
# Change directory
cd charon-distributed-validator-cluster/
- Run the cluster creation command, setting required flag values.
Run the command below to create the validator private key shares and cluster artifacts locally, replacing the example values for nodes, network, num-validators, fee-recipient-addresses, and withdrawal-addresses.
Check the Charon CLI reference for additional optional flags to set.
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v1.2.0 create cluster \
--nodes=4 \
--network=holesky \
--num-validators=1 \
--name="Quickstart Guide Cluster" \
--cluster-dir="cluster" \
--fee-recipient-addresses=0x000000000000000000000000000000000000dead \
--withdrawal-addresses=0x000000000000000000000000000000000000dead
If you would like your cluster to appear on the DV Launchpad, add the --publish flag to the command.
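For example, the same command as above with the flag appended (all other flag values unchanged):

docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v1.2.0 create cluster \
  --nodes=4 \
  --network=holesky \
  --num-validators=1 \
  --name="Quickstart Guide Cluster" \
  --cluster-dir="cluster" \
  --fee-recipient-addresses=0x000000000000000000000000000000000000dead \
  --withdrawal-addresses=0x000000000000000000000000000000000000dead \
  --publish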
After the create cluster command is run, you should have multiple subfolders within the newly created ./cluster/ folder, one for each node created.
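As a quick sanity check, list the generated folders; with the example flags above (--nodes=4) you should see one subfolder per node:

# List the generated node folders (node0 through node3 with --nodes=4)
ls cluster/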
Backup the ./cluster/ folder, then move on to deploying the cluster.
Make sure your backup is secure and private; someone with access to these files could get the validators slashed.
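One minimal way to take such a backup is to archive the folder and keep the archive on encrypted, offline storage; the exact tooling and filename below are only an illustration:

# Create a compressed archive of the cluster artifacts
tar -czf cluster-backup.tar.gz cluster/
# Store cluster-backup.tar.gz privately (e.g. encrypted offline storage);
# never commit it to a public repository.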
Step 2: Deploy and start the nodes
- Run the nodes on a single machine
- Run the nodes on multiple machines
This part of the guide runs only one Execution Client, one Consensus Client, and six Distributed Validator Charon Client + Validator Client pairs on a single docker host, and is not suitable for a mainnet deployment. If this machine fails, there is no fault tolerance: the cluster fails with it.
For a production deployment with fault tolerance, follow the part of the guide instructing you how to distribute the nodes across multiple machines.
Run this command to start your cluster containers if you deployed using the CDVC repo.
# Start the distributed validator cluster
docker compose up --build -d
Check the monitoring dashboard and see if things look all right.
# Open Grafana
open http://localhost:3000/d/laEp8vupp
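Besides Grafana, you can confirm the containers are up and inspect their output with standard docker compose commands (service names depend on the repo's docker-compose.yml):

# List the running containers and their status
docker compose ps
# Follow logs for the whole cluster (Ctrl+C to stop)
docker compose logs -f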
To distribute your cluster across multiple machines, each node in the cluster needs one of the folders called node*/ to be copied to it. Each folder should be copied to a CDVN repo and renamed from node* to .charon.
Right now, the charon create cluster command used earlier to create the private keys outputs a folder structure like cluster/node*/. Make sure to grab the ./node*/ folders, rename them to .charon, and then move them to one of the single node repos below. Once all nodes are online, synced, and connected, you will be ready to activate your validator.
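As a sketch of that copy-and-rename step for one node, assuming each target machine is reachable over SSH and already has the CDVN repo cloned at ~/charon-distributed-validator-node (the user and hostname below are placeholders):

# Copy node0's artifacts to the machine that will run node 0
scp -r cluster/node0 user@node0-machine:~/charon-distributed-validator-node/
# On that machine, rename the folder to .charon so the default charon run command finds it
ssh user@node0-machine 'cd ~/charon-distributed-validator-node && mv node0 .charon'
# Repeat for node1, node2, and node3 on their respective machines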
This is necessary for the folder to be found by the default charon run command. Optionally, it is possible to override charon run's default file locations by using charon run --private-key-file="node0/charon-enr-private-key" --lock-file="node0/cluster-lock.json" for each instance of Charon you start (substituting node0 for each node number in your cluster as needed).
👉 Use the single node docker compose, the kubernetes manifests, or the helm chart example repos to get your nodes up and connected after loading the .charon folder artifacts into them appropriately.
cluster
├── node0
│   ├── charon-enr-private-key
│   ├── cluster-lock.json
│   ├── deposit-data.json
│   └── validator_keys
│       ├── keystore-0.json
│       ├── keystore-0.txt
│       ├── ...
│       ├── keystore-N.json
│       └── keystore-N.txt
├── node1
│   ├── charon-enr-private-key
│   ├── cluster-lock.json
│   ├── deposit-data.json
│   └── validator_keys
│       ├── keystore-0.json
│       ├── keystore-0.txt
│       ├── ...
│       ├── keystore-N.json
│       └── keystore-N.txt
├── node2
│   ├── charon-enr-private-key
│   ├── cluster-lock.json
│   ├── deposit-data.json
│   └── validator_keys
│       ├── keystore-0.json
│       ├── keystore-0.txt
│       ├── ...
│       ├── keystore-N.json
│       └── keystore-N.txt
└── node3
    ├── charon-enr-private-key
    ├── cluster-lock.json
    ├── deposit-data.json
    └── validator_keys
        ├── keystore-0.json
        ├── keystore-0.txt
        ├── ...
        ├── keystore-N.json
        └── keystore-N.txt
After copying and renaming, each node machine's CDVN repo should contain a .charon folder structured like this:
.charon
├── charon-enr-private-key
├── cluster-lock.json
├── deposit-data.json
└── validator_keys
    ├── keystore-0.json
    ├── keystore-0.txt
    ├── ...
    ├── keystore-N.json
    └── keystore-N.txt
Currently, the quickstart repo installs a node on the Holesky testnet. It is possible to choose a different network (another testnet, or mainnet) by overriding the .env file.
.env.sample is a sample environment file that allows overriding default configuration defined in docker-compose.yml. Uncomment and set any variable to override its value.
Set up the desired inputs for the DV, including the network you wish to operate on. Check the Charon CLI reference for additional optional flags to set. Once you have set the values you wish to use, make a copy of this file called .env.
# Copy ".env.sample.holesky", renaming it ".env"
cp .env.sample.holesky .env
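For instance, to point the stack at a different network you would uncomment and edit the relevant variable in your new .env file. The variable name below is illustrative only; check .env.sample for the exact names used by the repo's docker-compose.yml:

# In .env: uncomment and set the network variable (name may differ; see .env.sample)
NETWORK=holesky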