# RPC VMID Migration Plan - Complete Guide

**Last Updated:** 2026-01-31
**Document Version:** 1.0
**Status:** Active Documentation

---

**Date:** 2025-01-20
**Status:** 📋 **READY FOR EXECUTION**

---

## Overview

This document provides a complete step-by-step plan for migrating RPC nodes from old VMIDs to new VMIDs for consistent Proxmox naming and IP allocation.

**Migration Type:** Creating new VMs by cloning existing VMs to new VMIDs

---

## Migration Mappings

### Core/Public/Private RPC Nodes

| Old VMID | Old IP | Old Name | New VMID | New IP | New Name | Host |
|----------|--------|----------|----------|--------|----------|------|
| 2500 | 192.168.11.250 | besu-rpc-1 | 2101 | 192.168.11.211 | besu-rpc-core-1 | 192.168.11.10 |
| 2501 | 192.168.11.251 | besu-rpc-2 | 2201 | 192.168.11.221 | besu-rpc-public-1 | 192.168.11.10 |
| 2502 | 192.168.11.252 | besu-rpc-3 | 2301 | 192.168.11.232 | besu-rpc-private-1 | 192.168.11.10 |

### Thirdweb RPC Nodes

| Old VMID | Old IP | Old Name | New VMID | New IP | New Name | Host |
|----------|--------|----------|----------|--------|----------|------|
| 2400 | 192.168.11.240 | thirdweb-rpc-1 | 2401 | 192.168.11.241 | besu-rpc-thirdweb-0x8a-1 | 192.168.11.10 |
| 2401 | 192.168.11.241 | thirdweb-rpc-2 | 2402 | 192.168.11.242 | besu-rpc-thirdweb-0x8a-2 | 192.168.11.10 |
| 2402 | 192.168.11.242 | thirdweb-rpc-3 | 2403 | 192.168.11.243 | besu-rpc-thirdweb-0x8a-3 | 192.168.11.10 |

### Tenant RPC Nodes

| Old VMID | Old IP | Old Name | New VMID | New IP | New Name | Host |
|----------|--------|----------|----------|--------|----------|------|
| 2503 | 192.168.11.253 | besu-rpc-ali-0x8a | 2303 | 192.168.11.233 | besu-rpc-ali-0x8a | 192.168.11.10 |
| 2504 | 192.168.11.254 | besu-rpc-ali-0x1 | 2304 | 192.168.11.234 | besu-rpc-ali-0x1 | 192.168.11.10 |
| 2505 | 192.168.11.201 | besu-rpc-luis-0x8a | 2305 | 192.168.11.235 | besu-rpc-luis-0x8a | 192.168.11.10 |
| 2506 | 192.168.11.202 | besu-rpc-luis-0x1 | 2306 | 192.168.11.236 | besu-rpc-luis-0x1 | 192.168.11.10 |
| 2507 | 192.168.11.203 | besu-rpc-putu-0x8a | 2307 | 192.168.11.237 | besu-rpc-putu-0x8a | 192.168.11.10 |
| 2508 | 192.168.11.204 | besu-rpc-putu-0x1 | 2308 | 192.168.11.238 | besu-rpc-putu-0x1 | 192.168.11.10 |

**Total Migrations:** 12 RPC nodes

---

## Pre-Migration Checklist

### ✅ Completed

1. ✅ Updated Besu node configuration files (`permissions-nodes.toml`, `permissioned-nodes.json`)
2. ✅ Updated all scripts with new VMID mappings
3. ✅ Deployed Besu node files to existing running nodes
4. ✅ Created migration scripts
5. ✅ UDM Pro port forwarding updated (user completed)

### ⏳ Pending

1. ⏳ Back up old VMs (recommended before migration)
2. ⏳ Verify storage availability for new VMs
3. ⏳ Schedule maintenance window (if needed)

---

## Migration Steps

### Step 1: Dry Run (Recommended)

Test the migration script without making changes:

```bash
bash scripts/migrate-rpc-vmids.sh 192.168.11.10 true
```

This shows what would be done without actually cloning any VMs.

### Step 2: Execute Migration

Run the migration script:

```bash
bash scripts/migrate-rpc-vmids.sh 192.168.11.10 false
```

**What this does:**

1. Stops old VMs (if running)
2. Clones each old VMID to its new VMID
3. Updates network configuration with new IP addresses
4. Sets the new hostname/name

**Expected Duration:** 30-60 minutes (depending on VM sizes and storage speed)

### Step 3: Start New VMs

After migration, start the new VMs:

```bash
# For containers
for vmid in 2101 2201 2301 2401 2402 2403 2303 2304 2305 2306 2307 2308; do
  ssh root@192.168.11.10 "pct start $vmid"
done

# For VMs (if any)
for vmid in 2101 2201 2301 2401 2402 2403 2303 2304 2305 2306 2307 2308; do
  ssh root@192.168.11.10 "qm start $vmid"
done
```

### Step 4: Verify Migration

Run the verification script:

```bash
bash scripts/verify-migrated-rpc-nodes.sh 192.168.11.10
```

This checks:

- VMID exists
- VM is running
- Hostname is correct
- IP address is correct
- RPC endpoints are responding

### Step 5: Deploy Besu Node Files

Deploy the updated Besu node configuration files to the new VMs:

```bash
bash scripts/deploy-besu-node-files.sh
```

### Step 6: Restart Besu Services

Restart Besu services on all nodes (including the new ones):

```bash
# On host 192.168.11.10
for vmid in 1000 1001 1002 1003 1004 1500 1501 1502 1503 2101 2201 2301; do
  ssh root@192.168.11.10 "pct exec $vmid -- systemctl restart besu.service"
done
```

### Step 7: Verify Connectivity

Test connectivity from NPMplus:

```bash
bash scripts/test-npmplus-full-connectivity.sh
bash scripts/diagnose-npmplus-backend-services.sh
```

### Step 8: Update NPMplus Proxy Rules

Update the NPMplus proxy rules to point to the new IP addresses (if needed):

- Check the NPMplus configuration
- Update any hardcoded IP addresses
- Verify proxy rules are working

### Step 9: Decommission Old VMs

**⚠️ IMPORTANT: Only after verifying that the new VMs are working correctly.** Note that VMIDs 2401 and 2402 appear in both the old and new thirdweb mappings; exclude them from the blanket loops below, or you will stop/destroy the new VMs.

1. Stop old VMs:

```bash
for vmid in 2500 2501 2502 2400 2401 2402 2503 2504 2505 2506 2507 2508; do
  ssh root@192.168.11.10 "pct stop $vmid" || ssh root@192.168.11.10 "qm stop $vmid"
done
```

2. Verify the new VMs are still working
3. Delete old VMs (optional - you may want to keep them as a backup):

```bash
for vmid in 2500 2501 2502 2400 2401 2402 2503 2504 2505 2506 2507 2508; do
  ssh root@192.168.11.10 "pct destroy $vmid" || ssh root@192.168.11.10 "qm destroy $vmid"
done
```

---

## Rollback Plan

If the migration fails or issues are discovered:

1. **Stop new VMs:**

```bash
for vmid in 2101 2201 2301 2401 2402 2403 2303 2304 2305 2306 2307 2308; do
  ssh root@192.168.11.10 "pct stop $vmid" || ssh root@192.168.11.10 "qm stop $vmid"
done
```

2. **Start old VMs:**

```bash
for vmid in 2500 2501 2502 2400 2401 2402 2503 2504 2505 2506 2507 2508; do
  ssh root@192.168.11.10 "pct start $vmid" || ssh root@192.168.11.10 "qm start $vmid"
done
```

3. **Revert Besu node files** (if deployed):
   - Restore from backup
   - Redeploy the old configuration
4. **Update scripts** to revert to old VMIDs (if needed)

---

## Verification Checklist

After migration, verify:

- [ ] All new VMIDs exist and are running
- [ ] IP addresses are correct
- [ ] Hostnames are correct
- [ ] RPC endpoints are responding
- [ ] Besu node files are deployed
- [ ] Besu services are running
- [ ] Peer connections are working
- [ ] NPMplus can reach the new RPC nodes
- [ ] External access works (via NPMplus)
- [ ] Old VMs are stopped (or decommissioned)

---

## Troubleshooting

### Issue: Clone fails with storage error

**Solution:** Specify a different storage:

```bash
# Edit migrate-rpc-vmids.sh and change the storage parameter
STORAGE="local-lvm"  # or "thin1", "local", etc.
```

### Issue: Network configuration not updated

**Solution:** Manually update the network config, substituting the VMID and IP from the mapping tables:

```bash
# For containers
ssh root@192.168.11.10 "pct set <vmid> --net0 name=eth0,bridge=vmbr0,firewall=1,ip=<new-ip>/24,gw=192.168.11.1"

# For VMs, update the network configuration inside the guest OS
```

### Issue: IP address conflict

**Solution:** Ensure the old VM is stopped before starting the new VM:

```bash
ssh root@192.168.11.10 "pct stop <old-vmid>"
```

### Issue: Besu service won't start

**Solution:** Check the logs and verify the Besu node files:

```bash
ssh root@192.168.11.10 "pct exec <vmid> -- journalctl -u besu.service -n 50"
ssh root@192.168.11.10 "pct exec <vmid> -- cat /etc/besu/permissions-nodes.toml"
```

---

## Scripts Reference

### Migration Script

```bash
bash scripts/migrate-rpc-vmids.sh [host] [dry_run]
# Example:
bash scripts/migrate-rpc-vmids.sh 192.168.11.10 false
```

### Verification Script

```bash
bash scripts/verify-migrated-rpc-nodes.sh [host]
# Example:
bash scripts/verify-migrated-rpc-nodes.sh 192.168.11.10
```

### Deploy Besu Node Files

```bash
bash scripts/deploy-besu-node-files.sh
```

### Test Connectivity

```bash
bash scripts/test-npmplus-full-connectivity.sh
bash scripts/diagnose-npmplus-backend-services.sh
```

---

## Post-Migration Tasks

1. ✅ Update documentation with final VMID mappings
2. ✅ Update monitoring/alerting systems
3. ✅ Update backup scripts
4. ✅ Update any external documentation
5. ✅ Clean up old VM backups (if desired)

---

**Last Updated:** 2025-01-20
**Ready for Execution:** ✅ Yes
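
---

### Appendix: Migration Loop Sketch

For reference, the clone loop that `scripts/migrate-rpc-vmids.sh` performs can be sketched as below. This is a hypothetical simplification, not the actual script: the mapping list is truncated to the first two nodes from the tables above, and the `pct clone` flags assume full clones of LXC containers.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of scripts/migrate-rpc-vmids.sh (not the real script).
# Usage: migrate-sketch.sh [host] [dry_run]
HOST="${1:-192.168.11.10}"
DRY_RUN="${2:-true}"

# old_vmid:new_vmid:new_ip:new_name - first two mappings from the tables above
MAPPINGS=(
  "2500:2101:192.168.11.211:besu-rpc-core-1"
  "2501:2201:192.168.11.221:besu-rpc-public-1"
)

CMDS=()  # record of issued commands, useful when reviewing a dry run
run() {
  CMDS+=("$*")
  if [ "$DRY_RUN" = "true" ]; then
    echo "DRY RUN: $*"
  else
    ssh "root@$HOST" "$*"
  fi
}

for m in "${MAPPINGS[@]}"; do
  IFS=: read -r old new ip name <<<"$m"
  run "pct stop $old"                                # stop the source container
  run "pct clone $old $new --full --hostname $name"  # full clone to the new VMID
  run "pct set $new --net0 name=eth0,bridge=vmbr0,firewall=1,ip=$ip/24,gw=192.168.11.1"
done
```

With the default arguments this only echoes the commands, mirroring the script's dry-run mode in Step 1.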
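
---

### Appendix: RPC Endpoint Check Sketch

The "RPC endpoints are responding" check boils down to querying `eth_blockNumber` on each node. A minimal standalone sketch, assuming Besu's JSON-RPC listens on port 8545 (the port is not stated above):

```bash
#!/usr/bin/env bash
# Hypothetical RPC health check (the 8545 port is an assumption).

check_rpc() {
  # Print the raw JSON-RPC response for eth_blockNumber, or nothing on timeout.
  curl -s -m 5 -X POST "http://$1:8545" \
    -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
}

hex_to_dec() {
  # Convert a 0x-prefixed hex block number to decimal for readability.
  printf '%d' "$((16#${1#0x}))"
}
```

For example, `check_rpc 192.168.11.211` should return a JSON body whose `result` field is a hex block number, and `hex_to_dec` turns that into a plain integer.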
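
---

### Appendix: VMID Overlap Check

One pitfall worth flagging: the thirdweb migration reuses VMIDs 2401 and 2402 (old thirdweb-rpc-2/3 become targets for the new besu-rpc-thirdweb nodes), so the old and new VMID sets overlap and any blanket stop/destroy loop over the old list will also hit new VMs. A quick sanity check over the two lists:

```bash
#!/usr/bin/env bash
# Detect VMIDs that appear in both the old and new mappings.
OLD_VMIDS=(2500 2501 2502 2400 2401 2402 2503 2504 2505 2506 2507 2508)
NEW_VMIDS=(2101 2201 2301 2401 2402 2403 2303 2304 2305 2306 2307 2308)

OVERLAP=()
for o in "${OLD_VMIDS[@]}"; do
  for n in "${NEW_VMIDS[@]}"; do
    if [ "$o" = "$n" ]; then
      OVERLAP+=("$o")
    fi
  done
done
echo "VMIDs present in both old and new sets: ${OVERLAP[*]}"
```

For the overlapping pair, a workable order is to clone 2402 → 2403 and remove old 2402 before cloning 2401 → 2402 (Proxmox will refuse to clone onto a VMID that is still in use), and to exclude 2401/2402 from the decommission loops.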