Proxmox VM/Container Creation Runbook — Capacity and Availability

Last Updated: 2026-02-13
Purpose: Create all planned VMs/containers on the Proxmox cluster using capacity and availability best practices. Use this when SSH'd to the Proxmox host or from a host that can SSH to Proxmox with access to env/config.

Related: VMID_ALLOCATION_FINAL | PROXMOX_CLUSTER_ARCHITECTURE | PROXMOX_COMPLETE_RECOMMENDATIONS | STEPS_FROM_PROXMOX_OR_LAN_WITH_SECRETS


1. Capacity and availability principles

  • Node spread: Prefer r630-01 (192.168.11.11) and r630-02 (192.168.11.12) for new workloads; ml110 (192.168.11.10) already runs 34 containers and is CPU/memory constrained. Distribute to improve redundancy and performance.
  • VMID ranges: Use VMID_ALLOCATION_FINAL — e.g. DBIS Core 10000-10199, Sankofa 7800-8999, Besu 1000-4999. Do not reuse VMIDs.
  • Storage: Use local-lvm or node-specific thin pools (e.g. r630-01 thin1, r630-02 thin2-thin6). Set PROXMOX_STORAGE per node if creating on a specific host.
  • High availability: For critical services (DB, API), create primary and secondary on different nodes where possible (e.g. DB primary on r630-01, replica on r630-02).
  • Templates: Use a single OS template (e.g. local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst). Ensure template exists on the target node or use shared storage.
  • Network: Use vmbr0 (VLAN-aware). Assign static IPs from the allocated ranges; document in ALL_VMIDS_ENDPOINTS and RPC_ENDPOINTS_MASTER.
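As a quick pre-flight check, the VMID ranges above can be validated before provisioning. The helper below is a sketch: `vmid_range_ok` is a hypothetical name (not part of the existing scripts), and the ranges are taken from this runbook's summary of VMID_ALLOCATION_FINAL.

```bash
#!/bin/sh
# Map a candidate VMID to its allocated range (ranges per this runbook).
# Prints the range name, or "unallocated" with exit status 1 if the VMID
# falls outside every allocated range.
vmid_range_ok() {
  id="$1"
  if [ "$id" -ge 10000 ] && [ "$id" -le 10199 ]; then echo "dbis-core"; return 0; fi
  if [ "$id" -ge 7800 ]  && [ "$id" -le 8999 ];  then echo "sankofa";   return 0; fi
  if [ "$id" -ge 1000 ]  && [ "$id" -le 4999 ];  then echo "besu";      return 0; fi
  echo "unallocated"; return 1
}

vmid_range_ok 10100   # dbis-core
```

Gating `pct create` on this check (and on a `pct status "$id"` lookup for collisions) enforces the "do not reuse VMIDs" rule mechanically rather than by convention.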

2. Cluster node summary

| Node | IP | CPU / RAM | Storage | Current load | Use for |
|---|---|---|---|---|---|
| ml110 | 192.168.11.10 | 6c / 125GB | 907GB (26% used) | 34 containers | Lightweight / management only |
| r630-01 | 192.168.11.11 | 32c / 503GB | thin1 200GB+ | 0 | New VMs (DBIS, Sankofa, RPC) |
| r630-02 | 192.168.11.12 | 56c / 251GB | thin2-thin6 | Some VMs | New VMs (HA pair, heavy workload) |
| r630-03 | 192.168.11.13 | 32c / 512GB | Available | 0 | Future expansion |
| r630-04 | 192.168.11.14 | 32c / 512GB | Available | 0 | Future expansion |
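The placement preference in the table can be encoded so scripts default away from ml110. This is a sketch: `pick_node` and the workload-class names are assumptions for illustration, not an existing script API.

```bash
#!/bin/sh
# Return the preferred node IP for a workload class, per the table above:
# primaries on r630-01, HA secondaries on r630-02, and ml110 reserved for
# lightweight/management tasks only.
pick_node() {
  case "$1" in
    primary)     echo 192.168.11.11 ;;  # r630-01
    secondary)   echo 192.168.11.12 ;;  # r630-02 (HA partner node)
    lightweight) echo 192.168.11.10 ;;  # ml110 (already at 34 containers)
    *)           echo 192.168.11.11 ;;  # default to a big, lightly loaded node
  esac
}

pick_node secondary   # 192.168.11.12
```

Keeping the mapping in one place means a future rebalance (e.g. bringing r630-03/r630-04 online) is a one-line change.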

3. Create DBIS Core containers (6 LXC)

VMIDs: 10100 (PostgreSQL primary), 10101 (replica), 10120 (Redis), 10150 (API primary), 10151 (API secondary), 10130 (Frontend).
Specs: See dbis_core/config/dbis-core-proxmox.conf (memory, cores, disk).

From repo root (with SSH to Proxmox):

```bash
# Default PROXMOX_HOST=192.168.11.10 (ml110). For HA, consider creating on
# r630-01/r630-02 by setting PROXMOX_NODE and storage.
export PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
export STORAGE="${PROXMOX_STORAGE:-local-lvm}"
export TEMPLATE="${CONTAINER_OS_TEMPLATE:-local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst}"

NON_INTERACTIVE=1 ./dbis_core/scripts/deployment/create-dbis-core-containers.sh
```

Best practice: To spread across nodes, run creation once per node with different VMID sets, or extend the script to accept PROXMOX_NODE and create the API secondary and DB replica on r630-02.
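One way to realize that per-node split is a small SSH wrapper that only prints the commands by default. This is a sketch under stated assumptions: the hostnames, the `run` wrapper, and the minimal flag set are illustrative; real specs (memory, cores, disk, network) should come from dbis_core/config/dbis-core-proxmox.conf.

```bash
#!/bin/sh
# Dry-run sketch: place DBIS primaries on r630-01 and their HA partners on
# r630-02. With DRY_RUN=1 (the default) commands are printed, not executed.
DRY_RUN="${DRY_RUN:-1}"
TEMPLATE="${CONTAINER_OS_TEMPLATE:-local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst}"

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "ssh root@$NODE $*"; else ssh "root@$NODE" "$@"; fi
}

NODE=192.168.11.11  # r630-01: primaries (thin1 pool)
run pct create 10100 "$TEMPLATE" --hostname dbis-pg-primary  --storage thin1
run pct create 10150 "$TEMPLATE" --hostname dbis-api-primary --storage thin1

NODE=192.168.11.12  # r630-02: replica / secondary (thin2 pool)
run pct create 10101 "$TEMPLATE" --hostname dbis-pg-replica    --storage thin2
run pct create 10151 "$TEMPLATE" --hostname dbis-api-secondary --storage thin2
```

Review the printed commands, then re-run with DRY_RUN=0 once the flags match the per-container specs in the config file.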

After creation: Run DBIS Core deploy and configure (see dbis_core/scripts/deployment/deploy-all.sh, configure-database.sh).


4. Create missing Besu/RPC containers (if needed)

If VMIDs 2506-2508 or other RPC/validator slots are needed:

```bash
./scripts/create-missing-containers-2506-2508.sh
```

Check VMID_ALLOCATION_FINAL and create-chain138-containers.sh for Chain 138-specific containers.


5. Sankofa / Phoenix (7800-8999)

  • 7800: Phoenix (initial). 7801: Sankofa.
  • Create via Proxmox UI or pct create / qm create with VMIDs and IPs from ALL_VMIDS_ENDPOINTS.
  • Prefer r630-01 or r630-02 for new VMs to avoid overloading ml110.
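For example, a minimal `pct create` invocation for these two might look like the sketch below. It only echoes the commands for review; the hostnames, storage pool, and bridge settings are assumptions, and `ip=dhcp` is a placeholder — substitute the static IPs allocated in ALL_VMIDS_ENDPOINTS.

```bash
#!/bin/sh
# Illustrative only: generate create commands for Phoenix (7800) and
# Sankofa (7801) on r630-01, for manual review before execution.
gen_create_cmds() {
  TEMPLATE="${CONTAINER_OS_TEMPLATE:-local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst}"
  for spec in "7800 phoenix" "7801 sankofa"; do
    set -- $spec   # unquoted on purpose: split "VMID hostname" into $1 $2
    echo "ssh root@192.168.11.11 pct create $1 $TEMPLATE --hostname $2 --storage thin1 --net0 name=eth0,bridge=vmbr0,ip=dhcp"
  done
}

gen_create_cmds
```

Piping the output through a review step (or into `sh` once verified) keeps VM creation auditable.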

6. One-shot “create all” from LAN

From a host on LAN with SSH to Proxmox and repo cloned:

```bash
./scripts/run-all-operator-tasks-from-lan.sh --create-vms
```

This runs backup, Blockscout verification, and (optionally) contract deployment, then the create-vms step, which invokes dbis_core/scripts/deployment/create-dbis-core-containers.sh. For additional VM sets (e.g. Sankofa, Chain 138), run the specific scripts in §4-§5 afterwards.


7. Checklist after creating VMs

  • Update ALL_VMIDS_ENDPOINTS with VMID, hostname, IP, node.
  • Update RPC_ENDPOINTS_MASTER if new RPC/explorer.
  • Install and start services (DBIS: deploy-all.sh, configure-database.sh; others per service runbook).
  • Enable monitoring (Cacti, Prometheus, or existing stack) for new nodes.
  • Schedule backups if applicable (e.g. NPMplus backup cron; DB dumps for DBIS).
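The first checklist item can be made less error-prone with a formatter that emits a consistent record line. This is a sketch: `record_vm`, the column layout, and the example IP are assumptions about how ALL_VMIDS_ENDPOINTS is structured, not its actual format.

```bash
#!/bin/sh
# Emit one markdown table row per created VM/CT: VMID, hostname, IP, node.
# Append the output to ALL_VMIDS_ENDPOINTS manually after review.
record_vm() {
  printf '| %s | %s | %s | %s |\n' "$1" "$2" "$3" "$4"
}

# Example values only — the IP below is a placeholder, not an allocation.
record_vm 10100 dbis-pg-primary 192.168.11.50 r630-01
```

Generating the rows from the same variables used at creation time avoids the common drift between what was provisioned and what the endpoint docs claim.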