Multi-Node & Delegate Setup
Run multiple nodes with UPnP, Proxmox Fractus setups, and the P2SH delegate system.
Running multiple Flux nodes is a common strategy for maximizing rewards and diversifying risk. This guide covers multi-node configuration, the P2SH delegate system for shared node operation, and Proxmox-based setups for running multiple nodes on a single physical server.
Multi-Node Configuration
Each Flux node requires its own IP address (or a unique API port when sharing an IP via UPnP, see below) and its own collateral UTXO. There are several approaches:
| Approach | Pros | Cons |
|---|---|---|
| Separate physical servers | Simple, isolated, most reliable | Highest cost |
| VMs on one server (Proxmox/VMware) | Efficient hardware use, lower cost per node | Shared failure domain, requires hypervisor knowledge |
| VPS from hosting providers | No hardware management, scalable | Monthly recurring cost, provider dependency |
| Home hosted with UPnP | Lowest cost, full control | Requires capable router, single internet connection risk |
UPnP Multi-Node (Home Hosting)
If hosting multiple nodes behind a single router, each node needs a unique API port. The default port is 16127, and the allowed alternative ports are 16137, 16147, 16157, 16167, 16177, 16187, and 16197, giving a maximum of eight nodes per IP address.
1. Enable UPnP on your router. Access your router settings and enable UPnP (Universal Plug and Play); most modern routers support this.
2. Run Multitoolbox Option 14. On each node, run Multitoolbox and select Option 14 to configure UPnP port mapping, assigning a unique port to each node.
3. Verify port accessibility. From an external network, confirm each port is reachable, for example: `curl http://YOUR_PUBLIC_IP:16137/flux/version`. A sketch that probes every allowed port follows this list.
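The one-off curl check above can be extended into a quick external probe of every allowed API port. The sketch below is a minimal bash loop, assuming it runs from outside your home network and that YOUR_PUBLIC_IP is replaced with your actual address; the port list is the one given above.

```bash
#!/usr/bin/env bash
# Probe every allowed FluxOS API port on one public IP from an external
# network. Replace YOUR_PUBLIC_IP; the ports match the allowed list above.
PUBLIC_IP="YOUR_PUBLIC_IP"
PORTS=(16127 16137 16147 16157 16167 16177 16187 16197)

for port in "${PORTS[@]}"; do
  # /flux/version responds when the FluxOS API is reachable on that port
  if curl -fsS --max-time 5 "http://${PUBLIC_IP}:${port}/flux/version" > /dev/null; then
    echo "port ${port}: reachable"
  else
    echo "port ${port}: NOT reachable"
  fi
done
```

A port that reports NOT reachable usually points at a missing UPnP mapping on the router or a node that is not running on that port.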
Proxmox Fractus Setup
Proxmox VE is a popular open-source hypervisor for running multiple Flux nodes on one physical server. The "Fractus" approach refers to splitting a single high-spec server into multiple VMs:
- Hardware: A single Stratus-class server (16 cores, 64 GB RAM, 1 TB SSD) can host up to 4 Cumulus nodes or 2 Nimbus nodes.
- Storage: Use ZFS or LVM-thin for efficient storage allocation. Thunder storage pools provide the best performance.
- Networking: Each VM needs a unique IP address (use multiple IPs from your hosting provider or a VLAN configuration).
- Resources: Allocate slightly more than the minimum per VM, leaving headroom for system overhead (a qm create sketch follows this list).
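As a concrete illustration of the VM split, here is a minimal sketch using Proxmox's qm CLI to create one Cumulus-sized VM. The VM ID, name, storage pool, bridge, and resource sizes are illustrative placeholders, not official Cumulus requirements; size each VM to the current tier spec plus the headroom mentioned above.

```bash
# Create one Cumulus-sized VM on a Proxmox host (illustrative values only).
# Adjust the VM ID, name, storage pool (local-lvm), bridge (vmbr0), and
# sizes to your environment and the current per-tier requirements.
qm create 101 \
  --name flux-cumulus-01 \
  --ostype l26 \
  --cores 4 \
  --memory 16384 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:240 \
  --net0 virtio,bridge=vmbr0 \
  --boot order=scsi0
```

Repeat with distinct VM IDs, names, and network settings for each node, keeping the total allocation within what the host can genuinely provide.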
The Flux network actively checks for hardware specification compliance. Running too many nodes on hardware that doesn't genuinely meet the per-node requirements will result in benchmark failures and potential network penalties.
P2SH Delegate System
The Flux delegate system allows multiple parties to collaborate on running a node using P2SH (Pay-to-Script-Hash) transactions. This is useful for shared ownership scenarios or managed hosting where the host doesn't hold the collateral:
1. Create a delegate key pair. Use the `createdelegatekeypair` RPC command to generate a delegate keypair. The delegate key proves operational control without requiring collateral custody.
2. Create a P2SH start transaction. The collateral holder creates a P2SH start transaction that includes the delegate's public key, authorizing the delegate to operate the node.
3. Sign and broadcast. Both parties sign the transaction: the collateral holder signs with their key, and the delegate signs with their delegate key.
4. Start the node. The delegate can now start and manage the node using the `startfluxnodeasdelegate` or `startp2shasdelegate` commands. A CLI sketch of this flow follows the list.
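The same flow can be driven from the daemon's command line. The sketch below assumes the RPC commands named above are exposed through a flux-cli binary; exact argument lists are not reproduced here, so check each command's built-in help on your node first.

```bash
# On the delegate's machine: generate the delegate keypair and hand the
# public key to the collateral holder (step 1).
flux-cli createdelegatekeypair

# Steps 2-3 (building and co-signing the P2SH start transaction) happen
# wallet-side with the collateral holder, as described above.

# On the node, once the start transaction is confirmed: inspect the
# delegate start commands and run the one that matches your setup (step 4).
flux-cli help startfluxnodeasdelegate
flux-cli help startp2shasdelegate
```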
Fleet Management Tips
- Stagger updates: Never update all nodes simultaneously; stagger them by 10–15 minutes to maintain availability.
- Diversify hosting: Spread nodes across different hosting providers and geographic regions to avoid correlated failures.
- Centralized monitoring: Use Fluxme.io's wallet-based view to monitor all nodes tied to a single payment address.
- Automate with scripts: For large fleets, write SSH-based scripts to batch-check status and trigger updates across all nodes (a minimal sketch follows this list).
- Separate wallets: Consider using different payment addresses for different groups of nodes for organizational clarity.
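For the scripting tip above, here is a minimal sketch of an SSH-based batch check. The fleet-hosts.txt layout (user@host:apiport, one per line) and the single version check are illustrative assumptions; extend the loop with whatever per-node checks you rely on.

```bash
#!/usr/bin/env bash
# Batch status check over SSH. Each line of fleet-hosts.txt is assumed to
# be "user@host:apiport" (an illustrative format, not a standard file).
while IFS=: read -r target apiport; do
  echo "== ${target} (API port ${apiport}) =="
  # Ask the node itself for its FluxOS version via the local API.
  # -n keeps ssh from consuming the remaining lines of fleet-hosts.txt.
  ssh -n -o ConnectTimeout=10 "${target}" \
    "curl -fsS http://localhost:${apiport}/flux/version" || echo "unreachable"
  echo
done < fleet-hosts.txt
```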