# Physical Drives and Current Configurations — All Three Proxmox Hosts

Last updated: 2026-02-28


## ml110 (192.168.11.10)

| Device | Size | Model | Serial | Configuration |
| --- | --- | --- | --- | --- |
| sda | 931.5G | ST1000DM003-1ER162 (HDD) | Z4YE0TMR | Partitioned: sda1 (1M), sda2 (1G vfat /boot/efi), sda3 (930.5G LVM2). VG pve: swap 8G, root 96G ext4 /, data thin pool 794G (CTs 1003, 1004, 1503-1508, 2102, 2301, 2304-2308, 2400, 2402, 2403). |
| sdb | 931.5G | ST1000DM003-1ER162 (HDD) | Z4YDLPZ3 | In VG pve: extends the data thin pool (data_tdata). Pool now ~1.7 TB total. |

RAID: None.

Summary: 2× 1TB HDDs. Both in use: sda (OS + original data pool); sdb added to pve and used to extend the data thin pool (~930G added). Data/local-lvm pool now ~1.7 TB.
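The quoted pool size can be sanity-checked with quick arithmetic (illustrative only, not output from the host): the original 794G pool plus the ~930.5G of extents contributed by sdb comes to roughly 1.7 TiB.

```shell
# Illustrative arithmetic, not live output: original data pool (794G)
# plus the extents contributed by sdb (~930.5G), expressed in TiB.
awk 'BEGIN { printf "%.2f TiB\n", (794 + 930.5) / 1024 }'
```

This matches the ~1.7 TB figure above.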


## r630-01 (192.168.11.11)

| Device | Size | Model | Serial | Configuration |
| --- | --- | --- | --- | --- |
| sda | 558.9G | HUC109060CSS600 (SSD) | KSKUZEZF | Partitioned: sda1 (1M), sda2 (1G vfat), sda3 (557G zfs_member). ZFS used for Proxmox root (rpool). |
| sdb | 558.9G | HUC109060CSS600 (SSD) | KSKM1B4F | Same layout as sda; ZFS mirror partner for root. |
| sdc | 232.9G | CT250MX500SSD1 (SSD) | 2203E5FE090E | Member of md0 (RAID10). |
| sdd | 232.9G | CT250MX500SSD1 (SSD) | 2203E5FE08F8 | Member of md0 (RAID10). |
| sde | 232.9G | CT250MX500SSD1 (SSD) | 2203E5FE08FA | Member of md0 (RAID10). |
| sdf | 232.9G | CT250MX500SSD1 (SSD) | 2203E5FE08F1 | Member of md0 (RAID10). |
| sdg | 232.9G | CT250MX500SSD1 (SSD) | 2203E5FE095E | Member of md0 (RAID10). |
| sdh | 232.9G | CT250MX500SSD1 (SSD) | 2203E5FE0901 | Member of md0 (RAID10). |

RAID: md0 = RAID10, 6× 233G SSDs → ~698G usable. State: active, 6/6 devices [UUUUUU].
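The ~698G figure follows from RAID10's two-copy layout: with mdadm's default near-2 layout every block is stored twice, so usable space is half the summed member capacity. Again, arithmetic only, not host output:

```shell
# RAID10 (default near-2 layout) keeps two copies of every block,
# so usable space is half of 6 x 232.9G.
awk 'BEGIN { printf "%.1fG\n", 6 * 232.9 / 2 }'
```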

LVM on md0: VG pve (single PV /dev/md0). Thin pools: pve-thin1 208G, pve-data 280G. Hosts CTs for validators, RPC 2101, 2500-2505, 1000-1002, 1500-1502, 7800-7804, 10130, 10150-10151, 10200-10236, 3000-3501, 100-105, 130, etc.

Summary: 2× 559G SSDs (ZFS root) + 6× 233G SSDs (RAID10 → LVM data/thin1). All drives in use.


## r630-02 (192.168.11.12)

| Device | Size | Model | Serial | Configuration |
| --- | --- | --- | --- | --- |
| sda | 232.9G | CT250MX500SSD1 (SSD) | 2202E5FB4CB9 | Partitioned: sda1 (1M), sda2 (1G vfat), sda3 (231G zfs_member). ZFS for Proxmox root. |
| sdb | 232.9G | CT250MX500SSD1 (SSD) | 2203E5FE090D | Same layout; ZFS mirror for root. |
| sdc | 232.9G | CT250MX500SSD1 (SSD) | 2203E5FE07E1 | sdc3 → LVM VG thin2 (thin pool → VMIDs 5000, 6000, 6001, 6002). |
| sdd | 232.9G | CT250MX500SSD1 (SSD) | 2202E5FB186E | sdd3 → LVM VG thin3 (VMIDs 5800, 10237, 8641, 5801). |
| sde | 232.9G | CT250MX500SSD1 (SSD) | 2203E5FE0905 | sde3 → LVM VG thin4 (VMIDs 7810, 7811). |
| sdf | 232.9G | CT250MX500SSD1 (SSD) | 2203E5FE0964 | sdf3 → LVM VG thin5 (empty pool after 5000 migrated to thin2). |
| sdg | 232.9G | CT250MX500SSD1 (SSD) | 2203E5FE0928 | sdg3 → LVM VG thin6 (VMIDs 5700, 6400, 6401, 6402). |
| sdh | 232.9G | CT250MX500SSD1 (SSD) | 2203E5FE0903 | sdh3 → LVM VG thin1 (thin1-r630-02: 2201, 2303, 2401, 5200-5202, 6200, 10234). |

RAID: None (each data disk is a separate LVM PV).

Summary: 2× 233G SSDs (ZFS root) + 6× 233G SSDs (each its own VG: thin1-thin6). All 8 drives in use.


## Quick reference

| Host | Physical drives | Layout | Unused / notes |
| --- | --- | --- | --- |
| ml110 | 2× 1TB HDD | sda: OS + LVM data; sdb: second LVM PV extending the data pool | All in use |
| r630-01 | 2× 559G + 6× 233G SSD | ZFS root + RAID10 md0 → LVM | All in use |
| r630-02 | 2× 233G + 6× 233G SSD | ZFS root + 6× single-disk LVM (thin1-thin6) | All in use |

To re-check:

```shell
ssh root@<host> 'lsblk -o NAME,SIZE,TYPE,FSTYPE,MODEL,SERIAL; echo; pvs; vgs'
```
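To sweep all three hosts in one pass, the same command can be wrapped in a loop (a convenience sketch: it prints the per-host command lines, which can be piped to `sh` once passwordless root SSH to each IP is in place).

```shell
#!/bin/sh
# Emit the inventory command for each host documented above.
# Pipe the output to `sh` to actually run the checks
# (assumes passwordless root SSH to each IP).
for host in 192.168.11.10 192.168.11.11 192.168.11.12; do
    printf "ssh root@%s 'lsblk -o NAME,SIZE,TYPE,FSTYPE,MODEL,SERIAL; echo; pvs; vgs'\n" "$host"
done
```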