
Create a DV Alone



A single operator can manage all of the nodes of a DV cluster. The nodes can be run on a single machine, which is suitable only for testing, or on multiple machines, which is expected for a production setup.

The private key shares can be created centrally and distributed securely to each node. Alternatively, they can be created in a lower-trust manner with a Distributed Key Generation process, which avoids the validator private key ever being stored in full anywhere, at any point in its lifecycle. Follow the group quickstart instead for this latter case.

Pre-requisites

  • A basic knowledge of Ethereum nodes and validators.

  • Ensure you have git installed.

  • Ensure you have docker installed.

  • Make sure docker is running before executing the commands below.

Step 1: Create the key shares locally

Go to the DV Launchpad and select Create a distributed validator alone. Follow the steps to configure your DV cluster. The Launchpad will give you a docker command to create your cluster. Before you run the command, clone the CDVC repo and cd into the directory.

# Clone the repo
git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git

# Change directory
cd charon-distributed-validator-cluster/

# Run the command provided in the DV Launchpad "Create a cluster alone" flow
docker run -u $(id -u):$(id -g) --rm -v "$(pwd)/:/opt/charon" obolnetwork/charon:v1.4.0 create cluster --definition-file=...

After the create cluster command is run, you should have multiple subfolders within the newly created ./cluster/ folder, one for each node created.

Backup the ./cluster/ folder, then move on to deploying the cluster.

Make sure your backup is secure and private; anyone with access to these files could get the validators slashed.

  1. Clone the CDVC repo and cd into the directory.

# Clone the repo
git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git

# Change directory
cd charon-distributed-validator-cluster/
  1. Run the cluster creation command, setting required flag values.

Run the command below to create the validator private key shares and cluster artifacts locally, replacing the example values for nodes, network, num-validators, fee-recipient-addresses, and withdrawal-addresses. Check the Charon CLI reference for additional, optional flags to set.

  docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v1.4.0 create cluster \
    --nodes=6 \
    --network=holesky \
    --num-validators=1 \
    --name="Quickstart Guide Cluster" \
    --cluster-dir="cluster" \
    --fee-recipient-addresses=0x000000000000000000000000000000000000dead \
    --withdrawal-addresses=0x000000000000000000000000000000000000dead

If you would like your cluster to appear on the DV Launchpad, add the --publish flag to the command.

After the create cluster command is run, you should have multiple subfolders within the newly created ./cluster/ folder, one for each node created.

Backup the ./cluster/ folder, then move on to deploying the cluster.

Make sure your backup is secure and private; anyone with access to these files could get the validators slashed.

Step 2: Deploy and start the nodes

This part of the guide runs only one Execution Client, one Consensus Client, and six Charon client + Validator Client pairs on a single docker instance, and is not suitable for a mainnet deployment. (If this machine fails, there is no fault tolerance: the cluster fails with it.)

For a production deployment with fault tolerance, follow the part of the guide instructing you how to distribute the nodes across multiple machines.

# Start the distributed validator cluster
docker compose up --build -d

Check the monitoring dashboard and see if things look all right.

# Open Grafana
open http://localhost:3000/d/laEp8vupp

Renaming each node folder to .charon is necessary for it to be found by the default charon run command. Optionally, it is possible to override charon run's default file locations by using charon run --private-key-file="node0/charon-enr-private-key" --lock-file="node0/cluster-lock.json" for each instance of Charon you start (substituting node0 with each node number in your cluster as needed).
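The per-node override flags can be generated with a small loop. This is only a sketch for a hypothetical four-node cluster: it prints the commands rather than running them.

```shell
# Print the charon run override flags for each node of a 4-node cluster.
# Adjust the range to match your --nodes value; drop the echo to execute.
for i in 0 1 2 3; do
  echo charon run \
    --private-key-file="node${i}/charon-enr-private-key" \
    --lock-file="node${i}/cluster-lock.json"
done
```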

cluster
├── node0
│   ├── charon-enr-private-key
│   ├── cluster-lock.json
│   ├── deposit-data.json
│   └── validator_keys
│       ├── keystore-0.json
│       ├── keystore-0.txt
│       ├── ...
│       ├── keystore-N.json
│       └── keystore-N.txt
├── node1
│   ├── charon-enr-private-key
│   ├── cluster-lock.json
│   ├── deposit-data.json
│   └── validator_keys
│       ├── keystore-0.json
│       ├── keystore-0.txt
│       ├── ...
│       ├── keystore-N.json
│       └── keystore-N.txt
├── node2
│   ├── charon-enr-private-key
│   ├── cluster-lock.json
│   ├── deposit-data.json
│   └── validator_keys
│       ├── keystore-0.json
│       ├── keystore-0.txt
│       ├── ...
│       ├── keystore-N.json
│       └── keystore-N.txt
└── node3
    ├── charon-enr-private-key
    ├── cluster-lock.json
    ├── deposit-data.json
    └── validator_keys
        ├── keystore-0.json
        ├── keystore-0.txt
        ├── ...
        ├── keystore-N.json
        └── keystore-N.txt

Folder structure to be placed on each DV node:

└── .charon
    ├── charon-enr-private-key
    ├── cluster-lock.json
    ├── deposit-data.json
    └── validator_keys
        ├── keystore-0.json
        ├── keystore-0.txt
        ├── ...
        ├── keystore-N.json
        └── keystore-N.txt

Currently, the quickstart repo installs a node on the Holesky testnet. It is possible to choose a different network (another testnet, or mainnet) by overriding the .env file.

.env.sample is a sample environment file that allows overriding the default configuration defined in docker-compose.yml. Uncomment and set any variable to override its value.

# Copy ".env.sample.holesky", renaming it ".env"
cp .env.sample.holesky .env

Run this command to start your cluster containers if you deployed using the CDVC repo used earlier to create the private keys.

To distribute your cluster across multiple machines, each node in the cluster needs one of the folders called node*/ copied to it. Each folder should be copied to a separate machine and renamed from node* to .charon.

Right now, the charon create cluster command outputs a folder structure like cluster/node*/. Make sure to grab the ./node*/ folders, rename them to .charon, and then move them to one of the single node repos below. Once all nodes are online, synced, and connected, you will be ready to activate your validator.
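The copy-and-rename step might be scripted like this dry-run sketch, where the operator-* hostnames and destination path are placeholders and the leading echo prevents anything from actually being copied:

```shell
# Dry run: print the copy command for each node folder. Replace the placeholder
# hostnames, then remove the leading echo to perform the copies with scp.
for i in 0 1 2 3; do
  echo scp -r "cluster/node${i}" "operator-${i}:~/charon-distributed-validator-node/.charon"
done
```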

👉 Use the single node docker compose (the CDVN repo), the kubernetes manifests, or the helm chart example repos to get your nodes up and connected after loading the .charon folder artifacts into them appropriately.

Set up the desired inputs for the DV, including the network you wish to operate on. Check the Charon CLI reference for additional optional flags to set. Once you have set the values you wish to use, make a copy of this file called .env.
