proxmox/scripts/audit-proxmox-rpc-storage.sh
#!/usr/bin/env bash
# Audit storage node restrictions vs RPC VMID placement.
#
# Requires SSH access to a Proxmox node that can run pct/pvesm and see /etc/pve/storage.cfg.
#
# Usage:
#   PROXMOX_HOST=192.168.11.10 ./scripts/audit-proxmox-rpc-storage.sh
set -euo pipefail
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
VMIDS=(2400 2401 2402 2500 2501 2502 2503 2504 2505 2506 2507 2508)
ssh_pve() {
  ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=5 "root@${PROXMOX_HOST}" "$@"
}
echo "=== Proxmox RPC Storage Audit ==="
echo "Host: ${PROXMOX_HOST}"
echo
NODE="$(ssh_pve "hostname")"
echo "Node name: ${NODE}"
echo
echo "=== Storages active on this node (pvesm) ==="
ssh_pve "pvesm status" | sed 's/^/ /'
echo
echo "=== storage.cfg: storages with node restrictions ==="
# Only "dir" and "lvmthin" storage types are inspected; extend the patterns
# if other storage types in storage.cfg carry node restrictions.
ssh_pve "awk '
/^dir: /{s=\$2; t=\"dir\"; nodes=\"\"}
/^lvmthin: /{s=\$2; t=\"lvmthin\"; nodes=\"\"}
/^[[:space:]]*nodes /{nodes=\$2; print t \":\" s \" nodes=\" nodes}
' /etc/pve/storage.cfg" | sed 's/^/ /'
echo
echo "=== RPC VMID -> rootfs storage mapping ==="
for v in "${VMIDS[@]}"; do
  echo "--- VMID ${v} ---"
  ssh_pve "pct status ${v} 2>&1; pct config ${v} 2>&1 | grep -Ei 'hostname:|rootfs:|memory:|swap:|cores:|net0:'" | sed 's/^/ /'
  echo
done
cat <<'NOTE'
NOTE:
- If a VMID uses rootfs "local-lvm:*" on a node where storage.cfg restricts "local-lvm" to other nodes,
  the container may fail to start after a shutdown/reboot.
- Fix is to update /etc/pve/storage.cfg nodes=... for that storage to include the node hosting the VMID,
  or migrate the VMID to an allowed node/storage.
NOTE
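
To illustrate the fix described in the NOTE, here is a hypothetical /etc/pve/storage.cfg fragment (node names `pve1`/`pve2` and the pool details are assumptions, not from a real cluster) after the node list for `local-lvm` has been extended so a container hosted on `pve2` can start:

```
lvmthin: local-lvm
	thinpool data
	vgname pve
	content rootdir,images
	nodes pve1,pve2
```

The same change should also be achievable without hand-editing the file, via something like `pvesm set local-lvm --nodes pve1,pve2`, which updates the cluster-wide storage configuration.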
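
The node-restriction filter in the script can be exercised locally against a sample storage.cfg, without SSH access to a Proxmox node. A minimal sketch — the storage entries and the node name `pve1` are invented for the test, not taken from a real cluster:

```shell
#!/usr/bin/env bash
# Build a sample storage.cfg; contents are hypothetical, for testing only.
cat > /tmp/sample-storage.cfg <<'CFG'
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

lvmthin: local-lvm
    thinpool data
    vgname pve
    nodes pve1
CFG

# Same idea as the audit script's filter: remember the current storage's
# type and name, then print only storages that carry a "nodes ..." line.
awk '
/^dir: /{s=$2; t="dir"; nodes=""}
/^lvmthin: /{s=$2; t="lvmthin"; nodes=""}
/^[[:space:]]*nodes /{nodes=$2; print t ":" s " nodes=" nodes}
' /tmp/sample-storage.cfg
```

Here only `local-lvm` has a restriction, so the filter prints `lvmthin:local-lvm nodes=pve1` and stays silent for the unrestricted `local` storage.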