Adding a third and/or fourth R630 before migration — decision guide

Context: You are about to balance load by migrating containers from r630-01 to r630-02 (and optionally ml110). You asked whether it makes sense to add a third and/or fourth R630 to Proxmox before starting that migration.


1. You may already have a third and fourth R630

The repo documents r630-03 (192.168.11.13) and r630-04 (192.168.11.14):

  • Status: Powered off; not currently in the Proxmox cluster (only ml110, r630-01, r630-02 are active).
  • Hardware (per report): Dell R630, 512 GB RAM each, 2×600 GB boot, 6×250 GB SSD.
  • Issues when last used: Not in cluster, SSL/certificate issues, and others — all with documented fixes.

If these servers are still available and you are willing to power them on and fix them:

  • Add them to the cluster first (power on → fix SSL/join cluster per reports/R630_03_04_POWER_ON_ISSUES_AND_FIXES.md).
  • Then you have four Proxmox nodes (ml110 + r630-01, -02, -03, -04) or three R630s + ml110. Migration can then spread workload to r630-03 and r630-04 as well, instead of only to r630-02 and ml110.
  • That gives more headroom and better HA (see below) without buying new hardware.
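If you take this path, the join sequence can be sketched roughly as follows. This is an assumption-laden sketch, not the documented fix sequence (which lives in reports/R630_03_04_POWER_ON_ISSUES_AND_FIXES.md): the commands are standard Proxmox CLI, but the IP used for the existing cluster node is a placeholder.

```shell
# On r630-03 (192.168.11.13) after powering it on — sketch only.

# Refresh node certificates if the join fails with SSL errors
# (a plausible fix for the documented SSL/certificate issues):
pvecm updatecerts --force

# Join the existing cluster by pointing at an active node, e.g. r630-01.
# 192.168.11.11 is a placeholder — substitute your actual cluster node's IP:
pvecm add 192.168.11.11

# Verify the node appears and quorum is healthy:
pvecm status

# Repeat on r630-04 (192.168.11.14).
```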

If r630-03/04 are decommissioned or unavailable: Treat this as “add new R630(s)” below.


2. Does it make sense to add a third/fourth R630 (or bring r630-03/04 online) before migration?

Yes, it can make sense, depending on goals.

| Goal | Add 3rd/4th R630 before migration? | Notes |
| --- | --- | --- |
| Reduce load on r630-01 quickly | Optional | Migration to existing r630-02 (and ml110) already helps. You can migrate first and add nodes later. |
| More headroom long term | Yes | With 3–4 R630s (+ ml110), workload is spread across more nodes; no single node is as hot as r630-01 today. |
| Proxmox HA + Ceph | Yes (3 min, 4 better) | Per PROXMOX_HA_CLUSTER_ROADMAP.md: 3 R630s minimum for HA + Ceph; 4 R630s better for Ceph recovery. You currently have 2 R630s + ml110; adding a 3rd (and 4th) R630 aligns with that. |
| Avoid “just moving the problem” | Yes | If you only move workload to r630-02, it may become the new bottleneck. Adding nodes provides enough capacity that migration actually balances the load. |
| Cost / complexity | Your call | New hardware = cost and setup. Bringing r630-03/04 back = no new purchase, but time to power on, fix, and join the cluster. |

Practical recommendation:

  1. If r630-03 and/or r630-04 exist and are usable:
    Power them on and add them to the cluster first, then run migration. You get a 4- (or 5-) node cluster and can move workload to r630-03 and r630-04 as well as r630-02. Use reports/R630_03_04_POWER_ON_ISSUES_AND_FIXES.md for the fix sequence.

  2. If you do not have extra R630s (or they're gone):
    Migration first is still valid: move candidates from r630-01 to r630-02 (and optionally ml110) per PROXMOX_LOAD_BALANCING_RUNBOOK.md. That reduces r630-01 load with no new hardware. If after that you still want more capacity or HA, then add a 3rd (and 4th) R630.

  3. If you are buying new R630s:
    For HA + Ceph, the docs recommend at least 3 R630s (4 is better). So adding a third R630 is the minimum for that path; a fourth improves Ceph and spread. You can add them before or after the current migration; adding before gives more migration targets.
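Whichever path you choose, the migration itself uses the standard Proxmox CLI. A sketch, with placeholder VMIDs; the actual candidate list comes from PROXMOX_LOAD_BALANCING_RUNBOOK.md:

```shell
# On r630-01: list guests to pick migration candidates.
pct list        # containers
qm list         # VMs

# Migrate a container to r630-02 (restart mode: stops, moves, restarts it):
pct migrate 101 r630-02 --restart    # 101 is a placeholder VMID

# Migrate a VM live, without downtime:
qm migrate 202 r630-02 --online      # 202 is a placeholder VMID
```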


3. Order of operations (suggested)

| Scenario | Order |
| --- | --- |
| r630-03 / r630-04 exist and you will use them | 1) Power on r630-03 (and -04). 2) Fix and join cluster. 3) Run load-balance migration (including to r630-03 / -04 if desired). |
| No extra R630s yet; migration only | 1) Run migration r630-01 → r630-02 (and optionally ml110). 2) Re-check load. 3) If needed, plan a 3rd/4th R630. |
| Buying a new 3rd/4th R630 | 1) Install Proxmox and join cluster. 2) Run migration so the new nodes take part of the workload. |
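After each step you can re-check the balance from any cluster node. A sketch using standard Proxmox commands:

```shell
# Cluster membership and quorum:
pvecm status

# Per-node CPU/memory usage across the cluster (JSON via the local API):
pvesh get /cluster/resources --type node --output-format json
```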

4. References