RPC VMID Migration Plan - Complete Guide

Document Version: 1.0
Created: 2025-01-20
Last Updated: 2026-01-31
Status: 📋 READY FOR EXECUTION


Overview

This document provides a complete step-by-step plan for migrating RPC nodes from their old VMIDs to new VMIDs, bringing Proxmox naming and IP allocation into a consistent scheme.

Migration Type: Creating new VMs by cloning existing VMs to new VMIDs


Migration Mappings

Core/Public/Private RPC Nodes

| Old VMID | Old IP | Old Name | New VMID | New IP | New Name | Host |
|---|---|---|---|---|---|---|
| 2500 | 192.168.11.250 | besu-rpc-1 | 2101 | 192.168.11.211 | besu-rpc-core-1 | 192.168.11.10 |
| 2501 | 192.168.11.251 | besu-rpc-2 | 2201 | 192.168.11.221 | besu-rpc-public-1 | 192.168.11.10 |
| 2502 | 192.168.11.252 | besu-rpc-3 | 2301 | 192.168.11.232 | besu-rpc-private-1 | 192.168.11.10 |

Thirdweb RPC Nodes

| Old VMID | Old IP | Old Name | New VMID | New IP | New Name | Host |
|---|---|---|---|---|---|---|
| 2400 | 192.168.11.240 | thirdweb-rpc-1 | 2401 | 192.168.11.241 | besu-rpc-thirdweb-0x8a-1 | 192.168.11.10 |
| 2401 | 192.168.11.241 | thirdweb-rpc-2 | 2402 | 192.168.11.242 | besu-rpc-thirdweb-0x8a-2 | 192.168.11.10 |
| 2402 | 192.168.11.242 | thirdweb-rpc-3 | 2403 | 192.168.11.243 | besu-rpc-thirdweb-0x8a-3 | 192.168.11.10 |

Tenant RPC Nodes

| Old VMID | Old IP | Old Name | New VMID | New IP | New Name | Host |
|---|---|---|---|---|---|---|
| 2503 | 192.168.11.253 | besu-rpc-ali-0x8a | 2303 | 192.168.11.233 | besu-rpc-ali-0x8a | 192.168.11.10 |
| 2504 | 192.168.11.254 | besu-rpc-ali-0x1 | 2304 | 192.168.11.234 | besu-rpc-ali-0x1 | 192.168.11.10 |
| 2505 | 192.168.11.201 | besu-rpc-luis-0x8a | 2305 | 192.168.11.235 | besu-rpc-luis-0x8a | 192.168.11.10 |
| 2506 | 192.168.11.202 | besu-rpc-luis-0x1 | 2306 | 192.168.11.236 | besu-rpc-luis-0x1 | 192.168.11.10 |
| 2507 | 192.168.11.203 | besu-rpc-putu-0x8a | 2307 | 192.168.11.237 | besu-rpc-putu-0x8a | 192.168.11.10 |
| 2508 | 192.168.11.204 | besu-rpc-putu-0x1 | 2308 | 192.168.11.238 | besu-rpc-putu-0x1 | 192.168.11.10 |

Total Migrations: 12 RPC nodes
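
The mapping tables above can be encoded for scripting; the variable and field layout below is an illustrative assumption, not necessarily how migrate-rpc-vmids.sh stores it:

```bash
#!/usr/bin/env bash
# Old VMID -> "NEW_VMID:NEW_IP:NEW_NAME", taken from the mapping tables above.
declare -A VMID_MAP=(
  [2500]="2101:192.168.11.211:besu-rpc-core-1"
  [2501]="2201:192.168.11.221:besu-rpc-public-1"
  [2502]="2301:192.168.11.232:besu-rpc-private-1"
  [2400]="2401:192.168.11.241:besu-rpc-thirdweb-0x8a-1"
  [2401]="2402:192.168.11.242:besu-rpc-thirdweb-0x8a-2"
  [2402]="2403:192.168.11.243:besu-rpc-thirdweb-0x8a-3"
  [2503]="2303:192.168.11.233:besu-rpc-ali-0x8a"
  [2504]="2304:192.168.11.234:besu-rpc-ali-0x1"
  [2505]="2305:192.168.11.235:besu-rpc-luis-0x8a"
  [2506]="2306:192.168.11.236:besu-rpc-luis-0x1"
  [2507]="2307:192.168.11.237:besu-rpc-putu-0x8a"
  [2508]="2308:192.168.11.238:besu-rpc-putu-0x1"
)

# Split one entry into its fields.
IFS=: read -r new_vmid new_ip new_name <<< "${VMID_MAP[2500]}"
echo "2500 -> $new_vmid ($new_ip, $new_name)"
```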


Pre-Migration Checklist

Completed

  1. Updated Besu node configuration files (permissions-nodes.toml, permissioned-nodes.json)
  2. Updated all scripts with new VMID mappings
  3. Deployed Besu node files to existing running nodes
  4. Created migration scripts
  5. UDM Pro port forwarding updated (user completed)

Pending

  1. Backup old VMs (recommended before migration)
  2. Verify storage availability for new VMs
  3. Schedule maintenance window (if needed)

Migration Steps

Step 1: Dry Run (Recommended)

Test the migration script without making changes:

```bash
bash scripts/migrate-rpc-vmids.sh 192.168.11.10 true
```

This shows what would be done without actually cloning any VMs.

Step 2: Execute Migration

Run the migration script:

```bash
bash scripts/migrate-rpc-vmids.sh 192.168.11.10 false
```

What this does:

  1. Stops old VMs (if running)
  2. Clones each old VMID to new VMID
  3. Updates network configuration with new IP addresses
  4. Sets new hostname/name

Expected Duration: 30-60 minutes (depending on VM sizes and storage speed)
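
The per-node sequence can be sketched as follows. This prints the commands instead of running them so the plan can be reviewed first; it is an assumption about what the script does, not its actual implementation (the net0 string mirrors the troubleshooting example later in this document):

```bash
#!/usr/bin/env bash
# Print the stop/clone/reconfigure commands for one node.
# Replace the echo in run() with actual execution to apply them.
HOST="192.168.11.10"

run() { echo "ssh root@$HOST \"$*\""; }

migrate_one() {
    local old=$1 new=$2 ip=$3 name=$4
    run "pct stop $old"                                # stop source container
    run "pct clone $old $new --hostname $name --full"  # full clone to new VMID
    run "pct set $new --net0 name=eth0,bridge=vmbr0,firewall=1,ip=$ip/24,gw=192.168.11.1"
}

migrate_one 2500 2101 192.168.11.211 besu-rpc-core-1
```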

Step 3: Start New VMs

After migration, start the new VMs:

```bash
# Try pct first (containers); fall back to qm for any that are full VMs
for vmid in 2101 2201 2301 2401 2402 2403 2303 2304 2305 2306 2307 2308; do
    ssh root@192.168.11.10 "pct start $vmid" || ssh root@192.168.11.10 "qm start $vmid"
done
```

Step 4: Verify Migration

Run the verification script:

```bash
bash scripts/verify-migrated-rpc-nodes.sh 192.168.11.10
```

This checks:

  • VMID exists
  • VM is running
  • Hostname is correct
  • IP address is correct
  • RPC endpoints are responding
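
A single endpoint can also be spot-checked by hand with an eth_blockNumber request. The helper below just builds the curl command; port 8545 (Besu's default JSON-RPC port) is an assumption here, so adjust if your nodes expose a different one:

```bash
#!/usr/bin/env bash
# Build (and print) a JSON-RPC health-check command for one node.
rpc_check_cmd() {
    local ip=$1 port=${2:-8545}
    echo "curl -s -X POST -H 'Content-Type: application/json'" \
         "--data '{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}'" \
         "http://$ip:$port"
}

# A live node answers with a hex block number in the "result" field.
rpc_check_cmd 192.168.11.211
```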

Step 5: Deploy Besu Node Files

Deploy updated Besu node configuration files to new VMs:

```bash
bash scripts/deploy-besu-node-files.sh
```

Step 6: Restart Besu Services

Restart Besu services on all nodes (including new ones):

```bash
# On host 192.168.11.10
for vmid in 1000 1001 1002 1003 1004 1500 1501 1502 1503 2101 2201 2301; do
    ssh root@192.168.11.10 "pct exec $vmid -- systemctl restart besu.service"
done
```
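
After the restarts, a quick status sweep confirms every service came back up. The sketch below prints one check command per node (drop the echo wrapper to execute them); besu.service should report "active":

```bash
#!/usr/bin/env bash
# Print a systemctl status-check command for each container.
status_cmds() {
    local host=$1; shift
    local vmid
    for vmid in "$@"; do
        echo "ssh root@$host \"pct exec $vmid -- systemctl is-active besu.service\""
    done
}

status_cmds 192.168.11.10 1000 1001 1002 1003 1004 1500 1501 1502 1503 2101 2201 2301
```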

Step 7: Verify Connectivity

Test connectivity from NPMplus:

```bash
bash scripts/test-npmplus-full-connectivity.sh
bash scripts/diagnose-npmplus-backend-services.sh
```

Step 8: Update NPMplus Proxy Rules

Update NPMplus proxy rules to point to new IP addresses (if needed):

  • Check NPMplus configuration
  • Update any hardcoded IP addresses
  • Verify proxy rules are working

Step 9: Decommission Old VMs

⚠️ IMPORTANT: Only after verifying new VMs are working correctly

  1. Stop old VMs:

```bash
for vmid in 2500 2501 2502 2400 2401 2402 2503 2504 2505 2506 2507 2508; do
    ssh root@192.168.11.10 "pct stop $vmid" || ssh root@192.168.11.10 "qm stop $vmid"
done
```

  2. Verify new VMs are still working

  3. Delete old VMs (optional - you may want to keep them as backup):

```bash
for vmid in 2500 2501 2502 2400 2401 2402 2503 2504 2505 2506 2507 2508; do
    ssh root@192.168.11.10 "pct destroy $vmid" || ssh root@192.168.11.10 "qm destroy $vmid"
done
```
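
If you would rather archive the old CTs than keep them running, vzdump can back them up before deletion. The sketch prints the commands for review (drop the echo wrapper to run them); BACKUP_STORAGE is an assumption and must be a storage that accepts backup content:

```bash
#!/usr/bin/env bash
# Print a vzdump backup command for each old CT before it is destroyed.
HOST="192.168.11.10"
BACKUP_STORAGE="local"   # assumption: adjust to your backup-capable storage

backup_cmds() {
    local vmid
    for vmid in "$@"; do
        echo "ssh root@$HOST \"vzdump $vmid --storage $BACKUP_STORAGE --mode stop --compress zstd\""
    done
}

backup_cmds 2500 2501 2502 2400 2401 2402 2503 2504 2505 2506 2507 2508
```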

Rollback Plan

If migration fails or issues are discovered:

  1. Stop new VMs:

```bash
for vmid in 2101 2201 2301 2401 2402 2403 2303 2304 2305 2306 2307 2308; do
    ssh root@192.168.11.10 "pct stop $vmid" || ssh root@192.168.11.10 "qm stop $vmid"
done
```

  2. Start old VMs:

```bash
for vmid in 2500 2501 2502 2400 2401 2402 2503 2504 2505 2506 2507 2508; do
    ssh root@192.168.11.10 "pct start $vmid" || ssh root@192.168.11.10 "qm start $vmid"
done
```

  3. Revert Besu node files (if deployed):

    • Restore from backup
    • Redeploy old configuration

  4. Update scripts to revert to old VMIDs (if needed)


Verification Checklist

After migration, verify:

  • All new VMIDs exist and are running
  • IP addresses are correct
  • Hostnames are correct
  • RPC endpoints are responding
  • Besu node files are deployed
  • Besu services are running
  • Peer connections are working
  • NPMplus can reach new RPC nodes
  • External access works (via NPMplus)
  • Old VMs are stopped (or decommissioned)

Troubleshooting

Issue: Clone fails with storage error

Solution: Specify different storage:

```bash
# Edit migrate-rpc-vmids.sh and change the storage parameter
STORAGE="local-lvm"  # or "thin1", "local", etc.
```

Issue: Network configuration not updated

Solution: Manually update network config:

```bash
# For containers
ssh root@192.168.11.10 "pct set <vmid> --net0 name=eth0,bridge=vmbr0,firewall=1,ip=<new_ip>/24,gw=192.168.11.1"

# For VMs, update the network config inside the guest OS
```

Issue: IP address conflict

Solution: Ensure old VM is stopped before starting new VM:

```bash
ssh root@192.168.11.10 "pct stop <old_vmid>"
```

Issue: Besu service won't start

Solution: Check logs and verify Besu node files:

```bash
ssh root@192.168.11.10 "pct exec <vmid> -- journalctl -u besu.service -n 50"
ssh root@192.168.11.10 "pct exec <vmid> -- cat /etc/besu/permissions-nodes.toml"
```

Scripts Reference

Migration Script

```bash
bash scripts/migrate-rpc-vmids.sh [host] [dry_run]
# Example: bash scripts/migrate-rpc-vmids.sh 192.168.11.10 false
```

Verification Script

```bash
bash scripts/verify-migrated-rpc-nodes.sh [host]
# Example: bash scripts/verify-migrated-rpc-nodes.sh 192.168.11.10
```

Deploy Besu Node Files

```bash
bash scripts/deploy-besu-node-files.sh
```

Test Connectivity

```bash
bash scripts/test-npmplus-full-connectivity.sh
bash scripts/diagnose-npmplus-backend-services.sh
```

Post-Migration Tasks

  1. Update documentation with final VMID mappings
  2. Update monitoring/alerting systems
  3. Update backup scripts
  4. Update any external documentation
  5. Clean up old VM backups (if desired)

Last Updated: 2025-01-20
Ready for Execution: Yes