
Create a DV With a Group

This quickstart guide will walk you through creating a Distributed Validator Cluster with a number of other node operators.


Pre-requisites

  • A basic knowledge of Ethereum nodes and validators.

  • A machine that meets the minimum requirements for the network you intend to validate.

  • If you are taking part using a DappNode:

    • A computer with an up-to-date version of DappNode's software and an internet connection.

  • If you are taking part using Sedge, or Charon's Distributed Validator Node (CDVN) starter repo:

    • Ensure you have git installed.

    • Ensure you have docker installed.

    • Make sure docker is running before executing the commands below.
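If you'd like to double-check these prerequisites from a terminal, the following commands (a quick sketch; version output will vary by system) confirm that git and docker are installed and that the Docker daemon is reachable:

# Confirm git is installed
git --version
# Confirm docker is installed and the daemon is running
docker --version
docker info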

Step 1: Get your ENR

In order to prepare for a distributed key generation ceremony, you need to create an ENR for your Charon client. This ENR is a public/private key pair that allows the other Charon clients in the DKG to identify and connect to your node. If you are creating a cluster but not taking part as a node operator in it, you can skip this step.

# Clone the repo
git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
# Change directory
cd charon-distributed-validator-node/
# Use docker to create an ENR. Backup the file `.charon/charon-enr-private-key`.
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v1.4.0 create enr

You should expect to see a console output like this:

Created ENR private key: .charon/charon-enr-private-key
enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u

Please make sure to create a backup of the private key at .charon/charon-enr-private-key. Be careful not to commit it to git! If you lose this file you won't be able to take part in the DKG ceremony nor start the DV cluster successfully.
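As an illustrative sketch, a backup can be as simple as copying the key file to a secure location of your choosing (the destination path below is a placeholder):

# Copy the ENR private key to a secure backup location (placeholder path)
cp .charon/charon-enr-private-key /path/to/secure/backup/charon-enr-private-key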

If, instead of being shown your ENR, you see an error saying permission denied, you may need to update your docker permissions to allow the command to run successfully.
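On a typical Linux host, one common way to grant your user access to the Docker daemon is adding it to the docker group (shown as a sketch; consult your distribution's documentation):

# Add the current user to the docker group
sudo usermod -aG docker $USER
# Log out and back in (or run `newgrp docker`) for the change to take effect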

For Step 2 of the quickstart:

  • Select the Creator tab if you are coordinating the creation of the cluster (this role holds no position of privilege in the cluster; it only sets the initial terms of the cluster that the other operators agree to).

  • Select the Operator tab if you are accepting an invitation to operate a node in a cluster, proposed by the cluster creator.

Prepare an Execution and Consensus client

Before preparing the DappNode to take part in a Distributed Validator Cluster, you must ensure you have selected an execution client & consensus client on your DappNode under the 'Stakers' tab for the network you intend to validate.

  1. Login to the DappNode interface.

  2. Click on the 'Stakers' tab on the left side, select an execution client (e.g. Geth) & consensus client (e.g. Lodestar) & click 'Apply changes'. This will start the syncing process which can take a number of hours.

  3. Once the clients are finished syncing, this will be reflected on your 'Dashboard'.

Install the Obol DappNode package

With a fully synced Ethereum node now running on the DappNode, the steps below walk through installing the Obol package via an IPFS hash and preparing for a Distributed Key Generation ceremony. Future versions of this guide will download the package from the official DappNode DappStore once a stable 1.0 release is made.

  1. Before installing the package, make sure you are installing the correct one; this depends on which network your creator configures the cluster on, Holesky or Mainnet. You can find the link to both packages below:

    • Holesky Repo

    • Mainnet Repo

  2. Copy the latest IPFS hash from the release details dropdown.

  3. Go back to DappNode Dashboard > Dappstore, select the 'Public' tab, and accept the terms & conditions before proceeding.

  4. Paste the IPFS hash you copied from GitHub and click 'Search'. (It may take a minute for the package to be found.) You will then be presented with the package installation page. Under the blue 'Install' button, click on 'Advanced Options' and toggle 'Bypass only signed safe restriction'.

  5. Click 'Install', and on the config mode page, select 'New Cluster' and submit. (If you already have the config URL, you can select the 'URL' option.)

  6. Accept the terms & conditions and the install process will begin.

  7. You should now be able to see the Holesky Obol package under the 'Packages' tab. Click on the package to see important details.

  8. Under the 'Info' tab, you will see pre-generated ENRs, along with information such as the status of all five distributed validator clusters, their docker volumes, and other menu options.

  9. Select any of the ENRs listed that are not already in use. This ENR will be used in the next step.

For Step 2 of the quickstart:

  • Select the Creator tab if you are coordinating the creation of the cluster (this role holds no position of privilege in the cluster; it only sets the initial terms of the cluster that the other operators agree to).

  • Select the Operator tab if you are accepting an invitation to operate a node in a cluster, proposed by the cluster creator.

Installing Sedge

First you must install Sedge; please refer to the official Sedge installation guide to do so.

Check the install was successful

Run the command below to check whether you have successfully installed Sedge on your computer.

sedge

Expected output:

A tool to allow deploying validators with ease.
  Usage:
    sedge [command]
  Available Commands:
    cli             Generate a node setup interactively
    clients         List supported clients
    deps            Manage dependencies
    down            Shutdown sedge running containers
    generate        Generate new setups according to selected options
    help            Help about any command
    import-key      Import validator keys
    keys            Generate keystore folder
    logs            Get running container logs
    networks        List supported networks
    run             Run services
    show            Show useful information about sedge running containers
    slashing-export Export slashing protection data
    slashing-import Import slashing protection data
    version         Print sedge version
  Flags:
    -h, --help               help for sedge
        --log-level string   Set Log Level, e.g panic, fatal, error, warn, warning, info, debug, trace (default "info")
  Use "sedge [command] --help" for more information about a command.

Create an ENR using charon:

# Use docker to create an ENR. Backup the file `.charon/charon-enr-private-key`.
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v1.4.0 create enr

For Step 2 of the quickstart:

  • Select the Creator tab if you are coordinating the creation of the cluster (this role holds no position of privilege in the cluster; it only sets the initial terms of the cluster that the other operators agree to).

  • Select the Operator tab if you are accepting an invitation to operate a node in a cluster, proposed by the cluster creator.

Step 2: Create a cluster or accept an invitation to a cluster

Collect addresses, configure the cluster, share the invitation

Before starting the cluster creation process, you will need to collect an Ethereum address for each operator in the cluster. They will need to be able to sign messages through MetaMask with this address. (Broader wallet support will be added in future.) With these addresses in hand, go through the cluster creation flow.

You will use the DV Launchpad to create an invitation and share it with the operators. The following are the steps for creating a cluster.

  1. Go to the DV Launchpad and connect your wallet.

  2. Select Create a Cluster with a group then Get Started.

  3. Follow the flow and accept the advisories.

  4. Configure the Cluster

    1. Input the Cluster Name & Cluster Size (i.e. number of operators in the cluster). The threshold will update automatically; it shows the number of nodes that need to be functioning for the validator(s) to stay active.

  5. Input the Ethereum addresses for each operator that you collected previously. If you will be taking part as an operator, click the "Use My Address" button for Operator 1 and enter the ENR you generated in step 1 in the "What is your charon client's ENR?" field.

    1. Select the desired number of validators (32 ETH each) the cluster will run. (Note that the mainnet launchpad is restricted to one validator for now.)

    2. Enter the Principal address which should receive the principal 32 ETH and the accrued consensus layer rewards when the validator is exited. This can optionally be set to the contract address of a multisig / splitter contract.

    3. Enter the Fee Recipient address to which the execution layer rewards will go. This can be the same as the principal address, or it can be a different address. This can optionally be set to the contract address of a multisig / splitter contract.

  6. Click Create Cluster Configuration. Review that all the details are correct, and press Confirm and Sign. You will be prompted to sign two or three transactions with your MetaMask wallet. These are:

    1. The config_hash. This is a hashed representation of the details of this cluster, to ensure everyone is agreeing to an identical setup.

    2. The operator_config_hash. This is your acceptance of the terms and conditions of participating as a node operator.

    3. Your ENR. Signing your ENR authorises the corresponding private key to act on your behalf in the cluster.

  7. Share your cluster invite link with the operators. Following the link will show you a screen waiting for other operators to accept the configuration you created.

  8. You can use the link to monitor how many of the operators have already signed their approval of the cluster configuration and submitted their ENR.

Once every participating operator is ready, the next step is the distributed key generation amongst the operators.

  • If you are not planning on operating a node, and were only configuring the cluster for the operators, your journey ends here. Well done!

  • If you are one of the cluster operators, continue to the next step.

You will use the CLI to create the cluster definition file, which you will distribute to the operators manually.

  1. The leader or creator of the cluster will prepare the cluster-definition.json file for the Distributed Key Generation ceremony using the charon create dkg command.

  2. Populate the charon create dkg command with the appropriate flags including the name, the num-validators, the fee-recipient-addresses, the withdrawal-addresses, and the operator-enrs of all the operators participating in the cluster.

  3. Run the charon create dkg command to generate the DKG cluster-definition.json file.

    docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v1.4.0 create dkg \
      --name="Quickstart" \
      --num-validators=1 \
      --fee-recipient-addresses="0x0000000000000000000000000000000000000000" \
      --withdrawal-addresses="0x0000000000000000000000000000000000000000" \
      --operator-enrs="enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u"

    This command should output a file at .charon/cluster-definition.json. This file needs to be shared with the other operators in the cluster.

    • The .charon folder is hidden by default. To view it, run ls -al .charon in your terminal. Alternatively, if you are on macOS, press Cmd + Shift + . to view all hidden files in the Finder application.

Once every participating operator is ready, the next step is the distributed key generation amongst the operators.

  • If you are not planning on operating a node, and were only configuring the cluster for the operators, your journey ends here. Well done!

  • If you are one of the cluster operators, continue to the next step.

Join the cluster prepared by the creator

Use the Launchpad or CLI to join the cluster configuration generated by the creator:

Your cluster creator needs to configure the cluster and send you an invite URL to join it on the Launchpad. Once you've received the Launchpad invite link, you can begin the cluster acceptance process.

  1. Click on the DV launchpad link provided by the leader or creator. Make sure you recognise the domain and the person sending you the link, to ensure you are not being phished.

  2. Connect your wallet using the Ethereum address provided to the leader.

  3. Review the operator addresses submitted and the cluster configuration set by the creator, add the ENR you generated in step 1, and click Get Started to continue.

  4. Review and accept the DV Launchpad terms & conditions and advisories.

  5. Sign the two transactions with your wallet; these are:

    • The config hash. This is a hashed representation of all of the details for this cluster.

    • Your own ENR. This signature authorises the key represented by this ENR to act on your behalf in the cluster.

  6. Wait for all the other operators in your cluster to also finish these steps.

Once every participating operator is ready, the next step is the distributed key generation amongst the operators.

  • If you are not planning on operating a node, and were only configuring the cluster for the operators, your journey ends here. Well done!

  • If you are one of the cluster operators, continue to the next step.

You'll receive the cluster-definition.json file created by the leader/creator. You should save it in the .charon/ folder that was created initially. (Alternatively, you can use the --definition-file flag to override the default expected location for this file.)
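For example (assuming you received the file into your current working directory):

# Place the received file where charon expects it by default
mkdir -p .charon
cp cluster-definition.json .charon/cluster-definition.json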

Once every participating operator is ready, the next step is the distributed key generation amongst the operators.

  • If you are not planning on operating a node, and were only configuring the cluster for the operators, your journey ends here. Well done!

  • If you are one of the cluster operators, continue to the next step.

Step 3: Run the Distributed Key Generation (DKG) ceremony

  1. Once all operators have successfully signed, your screen will automatically advance to the next step. Click Continue. (If you closed the tab, you can always go back to the invite link shared by the leader and connect your wallet.)

  2. Copy the docker command shown on screen and run it in your terminal. It will retrieve the remote cluster details and begin the DKG process.

  3. Assuming the DKG is successful, a number of artefacts will be created in the .charon folder of the node. These include:

    • A deposit-data.json file. This contains the information needed to activate the validator on the Ethereum network.

    • A cluster-lock.json file. This contains the information needed by Charon to operate the distributed validator cluster with its peers.

    • A validator_keys/ folder. This folder contains the private key shares and passwords for the created distributed validators.
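You can confirm these artefacts exist by listing the hidden .charon directory:

# List the DKG outputs; expect deposit-data.json, cluster-lock.json, and validator_keys/
ls -al .charon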

Once the creator gives you the cluster-definition.json file and you place it in a .charon subdirectory, run:

docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v1.4.0 dkg --publish

and the DKG process should begin. For the DKG to complete, all operators need to be running the command simultaneously; it helps if operators can agree on a certain time, or schedule a video call, so they can all run the command together.

Follow this step if you are signing through the DV Launchpad, importing the cluster definition URL into the DappNode package's config, and then running the DKG inside the DappNode, followed by cluster run.

  1. After all operators have signed with their wallets and provided an ENR from the DappNode 'Info' tab, the Launchpad will instruct operators to begin the DKG ceremony. Click Continue and navigate to the 'Dappnode/Avado' tab, where the cluster definition URL is presented.

  2. To run the Distributed Key Generation ceremony using a DappNode, you must paste the cluster definition URL into the Obol Package interface. Go to the 'Config' tab, select 'URL' from the dropdown menu, and paste the cluster definition URL you retrieved from the Launchpad into the validator cluster-* field that matches the cluster you took the ENR from. Example: if you picked ENR1 for signing, paste the URL into Cluster-1. Finally, click the 'Update' button at the bottom of the page.

  3. After DappNode records the cluster definition URL, go back to the 'Info' tab and restart the Charon container.

  4. The node is now ready and will attempt to complete the DKG. You can monitor the DKG progress via the 'Logs' tab of the package. Once all clients in the cluster can establish a connection with one another and each completes a handshake (confirming everyone has a matching cluster_definition_hash), the key generation ceremony begins.

  5. Example of a completed DKG ceremony log.

Create a DV Node Backup

It is important to back up all artefacts generated by the DKG ceremony, as well as your node's ENR private key. The steps below show how to download your keys and node artefacts.

  1. Navigate to the backup tab inside the Obol package.

  2. Click on the 'Backup now' button; it will open a new browser window with a file-save dialog. Select the path where you want to save the backup tar file.

  3. Double-click to extract the tar file. There will be a folder for each charon node (max 5). Navigate to each node folder; all artefacts related to that node will be present.
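If you prefer a terminal over double-clicking, the archive can also be extracted with tar (the filename below is a placeholder for whatever you saved):

# Extract the backup archive into the current directory
tar -xvf obol-backup.tar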

Sedge does not currently support taking part in a DKG. Follow the instructions for Launchpad to take part in the DKG with Charon, and in Step 4 you will import these keys into Sedge.

Please make sure to create a backup of your .charon/ folder. If you lose your private keys you won't be able to start the DV cluster successfully and may risk your validator deposit becoming unrecoverable. Ensure every operator has their .charon folder securely and privately backed up before activating any validators.
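A minimal sketch of such a backup, assuming you archive the folder and move it somewhere secure and offline:

# Archive the .charon folder (contains private key shares - keep it safe!)
tar -czf charon-backup.tar.gz .charon/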

The cluster-lock and deposit-data files are identical for each operator; if lost, they can be copied from one operator to another.
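For instance, an operator could retrieve them from a peer over SSH (hostname and paths below are placeholders):

# Copy the shared, non-secret artefacts from another operator's node
scp operator@peer-node.example.com:~/charon-distributed-validator-node/.charon/cluster-lock.json .charon/
scp operator@peer-node.example.com:~/charon-distributed-validator-node/.charon/deposit-data.json .charon/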

Now that the DKG has been completed, all operators can start their nodes.

Step 4: Start your Distributed Validator Node

With the DKG ceremony over, the last phase before activation is to prepare your node for validating over the long term. The CDVN repo is configured to sync an execution layer client (Nethermind) and a consensus layer client (Lighthouse) using Docker Compose; further client combinations can be prepared using Sedge. You can also leverage alternative ways to run a node, such as Ansible, Helm, or Kubernetes manifests.

Start by copying the appropriate .env.sample.<NETWORK> file to .env and modifying values as needed. Currently, the CDVN repo has defaults for the Holesky testnet and for mainnet.

# To prepare the node for the Holesky test network
# Copy ".env.sample.holesky", renaming it ".env"
cp .env.sample.holesky .env


# To prepare the node for the main Ethereum network
# Copy ".env.sample.mainnet", renaming it ".env"
cp .env.sample.mainnet .env


In the same folder where you created your ENR in Step 1, and ran the DKG in Step 3, start your node in the DV cluster with docker compose.

# To be run from the ./charon-distributed-validator-node folder
# Spin up a Distributed Validator Node with a Validator Client
docker compose up -d

Do not start this node until the DKG is complete, as the charon container will interfere with the charon instance attempting to take part in the DKG ceremony.

If at any point you need to turn off your node, you can run:

# Shut down the currently running Distributed Validator Node
docker compose down

You should use the Grafana dashboard that accompanies the quickstart repo to see whether your cluster is healthy.

# Open Grafana dashboard
open http://localhost:3000/d/d6qujIJVk/

In particular you should check:

  • That your Charon client can connect to the configured beacon client.

  • That your Charon client can connect to all peers directly.

  • That your validator client is connected to Charon, and has the private keys it needs loaded and accessible.

Most components in the dashboard have help text to assist you in understanding your cluster's performance. You might notice logs indicating that a validator cannot be found and that APIs are returning 404 errors. This is to be expected at this point, as the validator public keys listed in the lock file have not yet been deposited and acknowledged on the consensus layer (this usually takes ~16 hours after the deposit is made).
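Alongside the dashboard, you can tail the charon container's logs to watch these connections come up (assuming the service is named charon, as in the CDVN repo's compose file):

# Follow charon's logs to watch beacon node and peer connectivity
docker compose logs -f charon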

To prepare a Distributed Validator node using sedge, we will use the sedge generate command to prepare a docker-compose file of our preferred clients, sedge import-key to import the artifacts created during the DKG ceremony, and sedge run to begin running the node.

With Sedge installed and the DKG complete, it's time to deploy a Distributed Validator. Using the sedge generate command and its subcommands, Sedge will create the Docker Compose file needed to run the validator node.

Sedge generate

The following command generates the artifacts required to deploy a distributed validator on the Holesky network, using Teku as the validator client, Prysm as the consensus client, and Geth as the execution client. For additional supported client combinations, refer to the Sedge documentation.

  1. sedge generate full-node --validator=teku --consensus=prysm --execution=geth --network=holesky --distributed

    You should be shown a long list of configuration outputs ending with the following:

    2024-09-20 12:56:15 -- [INFO] Generation of files successfully, happy staking! You can use now 'sedge run' to start the setup.
  2. Explore the config files.

    You should now see a sedge-data directory created in the folder where you ran the sedge generate command. To view the directory contents, use the ls command.

    ls sedge-data
    > docker-compose.yml jwtsecret

Sedge import-key

Use the following command to import keys from the directory where the .charon dir is located.

sedge import-key --from ./ holesky teku

Sedge run

After confirming the configurations and ensuring all files are in place, use the sedge run command to deploy the DV docker containers. Sedge will then begin pulling all the required Docker images.

> sedge run
2024-09-20 13:11:49 -- [INFO] [Logger Init] Log level: info
2024-09-20 13:11:49 -- [WARN] A new Version of sedge is available. Please update to the latest Version. See https://github.com/NethermindEth/sedge/releases for more information. Latest detected tag: fatal: not a git repository (or any of the parent directories): .git
2024-09-20 13:11:50 -- [INFO] Setting up containers
2024-09-20 13:11:50 -- [INFO] Running command: docker compose -f /sedge/sedge-data/docker-compose.yml build
2024-09-20 13:11:50 -- [INFO] Running command: docker compose -f /sedge-data/docker-compose.yml pull
[+] Pulling 16/44
 ⠇ consensus [⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀] Pulling                                                                                                                                                     20.8s
   ⠙ b003b463d750 Downloading [===============>                                   ]   32.9kB/103.7kB  14.2s
   ⠙ fe5ca62666f0 Waiting  14.2s
   ⠙ b02a7525f878 Waiting  14.2s
   ⠙ fcb6f6d2c998 Waiting  14.2s
   ⠙ e8c73c638ae9 Waiting  14.2s
   ⠙ 1e3d9b7d1452 Waiting  14.2s
   ⠙ 4aa0ea1413d3 Waiting  14.2s
   ⠙ 7c881f9ab25e Waiting  14.2s
   ⠙ 5627a970d25e Waiting  14.2s
   ⠙ 5cf83054c259 Waiting  14.2s
   ⠙ fec68abcb14d Waiting  14.2s
   ⠙ 4d5ad547ce94 Waiting  14.2s
   ⠙ e1ea80853e89 Waiting  14.2s
   ⠙ 17b1d7e8d99a Waiting  14.2s
   ⠙ 841a2fc14521 Waiting  14.2s
   ⠙ 55b44d28dd62 Waiting  14.2s
   ⠙ f3e3115c6547 Pulling fs layer  14.2s
   ⠙ 3cec53649029 Waiting  14.2s
   ⠙ 01739568079a Waiting  14.2s
   ⠙ c6bd24b188db Waiting  14.2s
   ⠙ fe8d2e9c9467 Waiting  14.2s
   ⠙ c151008cbec0 Waiting  14.2s
   ⠙ de1ef6c90686 Waiting  14.2s
   ⠙ 03d09d97b125 Waiting  14.2s
 ✔ execution Pulled                                                                                                                                                                                  9.3s
   ✔ a258b2a6b59a Pull complete                                                                                                                                                                      1.5s
   ✔ a2d6cf6afda3 Pull complete                                                                                                                                                                      1.7s
   ✔ a3dd8256fc41 Pull complete                                                                                                                                                                      6.9s

Once all docker images are pulled, sedge will create & start the containers to run all the required clients. See below for example output of the progress.

✔ 8db8b5d461a7 Pull complete                                                                                                                                                                     24.1s
   ✔ 2288b86b1d5f Pull complete                                                                                                                                                                     24.3s
   ✔ 4becb7b9a44b Pull complete                                                                                                                                                                     24.3s
   ✔ 4f4fb700ef54 Pull complete                                                                                                                                                                     24.3s
   ✔ 5c35e3728c84 Pull complete                                                                                                                                                                     35.1s
2024-09-20 13:12:45 -- [INFO] Running command: docker compose -f /sedge-data/docker-compose.yml create
[+] Creating 7/7
 ✔ Network sedge-network              Created                                                                                                                                                        0.1s
 ✔ Container sedge-dv-client          Created                                                                                                                                                        0.4s
 ✔ Container sedge-consensus-client   Created                                                                                                                                                        0.4s
 ✔ Container sedge-execution-client   Created                                                                                                                                                        0.4s
 ✔ Container sedge-mev-boost          Created                                                                                                                                                        0.4s
 ✔ Container sedge-validator-blocker  Created                                                                                                                                                        0.4s
 ✔ Container sedge-validator-client   Created                                                                                                                                                        0.1s
2024-09-20 13:12:45 -- [INFO] Running command: docker compose -f /sedge-data/docker-compose.yml up -d
[+] Running 4/5
 ✔ Container sedge-consensus-client   Started                                                                                                                                                        1.0s
 ⠧ Container sedge-validator-blocker  Waiting  130.8s
 ✔ Container sedge-dv-client          Started                                                                                                                                                        1.0s
 ✔ Container sedge-execution-client   Started                                                                                                                                                        1.3s
 ✔ Container sedge-mev-boost          Started      

Given time, the execution and consensus clients should complete syncing, and if a Distributed Validator has already been activated, the node should begin to validate. If you encounter issues using Sedge as part of a DV cluster, consider consulting the Sedge docs directly, or opening an issue or pull request if appropriate.
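To keep an eye on the deployment, the Sedge subcommands listed in the help output earlier can be used (exact flags may vary; see sedge --help):

# Show useful information about sedge-managed containers
sedge show
# Tail logs from the running services
sedge logs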

Using a remote beacon node will impact the performance of your Distributed Validator and should be used sparingly.

If you already have a beacon node running somewhere and you want to use that instead of running an EL (nethermind) & CL (lighthouse) as part of the example repo, you can disable these images. To do so, follow these steps:

  1. Copy the docker-compose.override.yml.sample file

cp -n docker-compose.override.yml.sample docker-compose.override.yml
  2. Uncomment the profiles: [disable] section for both nethermind and lighthouse. The override file should now look like this:

services:
  nethermind:
    # Disable nethermind
    profiles: [disable]
    # Bind nethermind internal ports to host ports
    #ports:
      #- 8545:8545 # JSON-RPC
      #- 8551:8551 # AUTH-RPC
      #- 6060:6060 # Metrics
  lighthouse:
    # Disable lighthouse
    profiles: [disable]
    # Bind lighthouse internal ports to host ports
    #ports:
      #- 5052:5052 # HTTP
      #- 5054:5054 # Metrics
...
  3. Then, uncomment and set the CHARON_BEACON_NODE_ENDPOINTS variable in the .env file to your beacon node's URL:

...
# Connect to one or more external beacon nodes. Use a comma separated list excluding spaces.
CHARON_BEACON_NODE_ENDPOINTS=<YOUR_REMOTE_BEACON_NODE_URL>
...
  4. Restart your docker compose:

docker compose down
docker compose up -d
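After the restart, you can sanity-check that charon picked up the remote endpoint by searching its logs (a simple grep sketch, assuming the service is named charon):

# Look for beacon node connection messages in charon's logs
docker compose logs charon | grep -i beacon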


Use an Ansible playbook to start your node. See the repo here for further instructions.

Use a Helm chart to start your node. See the repo here for further instructions.

Use Kubernetes manifests to start your Charon client and validator client. These manifests expect an existing beacon node endpoint to connect to. See the repo here for further instructions.

In a Distributed Validator Cluster, it is important to have a low-latency connection to your peers. Charon clients will attempt NAT traversal to establish direct connections to one another automatically. If this doesn't happen, you should port forward Charon's p2p port to the public internet to facilitate direct connections. The default port to expose is :3610. Read more about Charon's networking here.
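How you expose the port depends on your router and host firewall; on a Linux host running ufw, for example, allowing inbound TCP on the default p2p port would look like this (a sketch, not a full networking guide):

# Allow inbound connections on charon's default p2p port
sudo ufw allow 3610/tcp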

If you have gotten to this stage, every node is up, synced, and connected. Congratulations! You can now move forward to activating your validator to begin staking.
