Apply Composer changes: comprehensive API updates, migrations, middleware, and infrastructure improvements

- Add comprehensive database migrations (001-024) for schema evolution
- Enhance API schema with expanded type definitions and resolvers
- Add new middleware: audit logging, rate limiting, MFA enforcement, security, tenant auth
- Implement new services: AI optimization, billing, blockchain, compliance, marketplace
- Add adapter layer for cloud integrations (Cloudflare, Kubernetes, Proxmox, storage)
- Update Crossplane provider with enhanced VM management capabilities
- Add comprehensive test suite for API endpoints and services
- Update frontend components with improved GraphQL subscriptions and real-time updates
- Enhance security configurations and headers (CSP, CORS, etc.)
- Update documentation and configuration files
- Add new CI/CD workflows and validation scripts
- Implement design system improvements and UI enhancements
This commit is contained in:
defiQUG
2025-12-12 18:01:35 -08:00
parent e01131efaf
commit 9daf1fd378
968 changed files with 160890 additions and 1092 deletions

# Sankofa Phoenix: Proxmox VE Hardware Bill of Materials (BOM)
## Date
2025-01-XX
## Overview
This document provides a comprehensive inventory of all Proxmox VE hardware in the Sankofa Phoenix infrastructure, including current hardware specifications, available hardware, Proxmox VE compatibility notes, and deployment recommendations.
---
## Current Hardware Inventory
### Summary Statistics
| Category | Count | Total RAM | Total CPU Cores (Validated) | GPU Systems |
|----------|-------|-----------|----------------------------|-------------|
| **Total Systems** | 16 | 2,752 GB | 34+ cores (validated) | 2 |
| **HPE ML110 Gen9** | 1 | 256 GB | 6 cores | 0 |
| **Dell R630 (High Memory)** | 1 | 768 GB | 28 cores (56 threads) | 0 |
| **Dell R630 (Standard)** | 12 | 1,536 GB | ~240-432 cores (est.) | 0 |
| **Dell Precision 7920** | 2 | 192 GB | ~32-112 cores (est.) | 2 |
**Note**:
- ✅ HPE ML110 Gen9: Validated (6 cores)
- ✅ Dell R630 (High Memory): Validated (28 cores, 56 threads)
- ⏳ Dell R630 (Standard): Estimates based on typical configurations
- ⏳ Dell Precision 7920: Estimates based on typical configurations
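The summary totals follow directly from the per-model rows; a quick shell check (counts taken from the detail sections below):

```shell
# Re-derive the summary totals from the per-model rows:
# 1x ML110 (256 GB) + 1x high-memory R630 (768 GB) + 12x R630 (128 GB each)
# + 2x Precision 7920 (128 GB + 64 GB)
total_ram=$(( 256 + 768 + 12*128 + 128 + 64 ))
validated_cores=$(( 6 + 28 ))   # only the ML110 and high-memory R630 are validated
echo "Total RAM: ${total_ram} GB; validated cores: ${validated_cores}"
# → Total RAM: 2752 GB; validated cores: 34
```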
---
## Detailed Hardware Specifications
### 1. HPE ML110 Gen9
**System ID**: PVE-HOST-001
**Hostname**: ml110-01
**IP Address**: 192.168.11.10
**Status**: Active
**Proxmox VE Version**: 9.1.1 (pve-manager/9.1.1/42db4a6cf33dac83)
**Kernel**: 6.17.2-1-pve
**Cluster**: [To be determined]
#### Hardware Specifications
**Chassis**:
- **Manufacturer**: Hewlett Packard Enterprise (HPE)
- **Model**: ProLiant ML110 Gen9
- **Form Factor**: Tower Server
- **Rack Mountable**: Optional (with rack kit)
- **Serial Number**: [Not available via DMI]
**Processor**:
- **CPU Model**: Intel Xeon E5-2603 v3 @ 1.60GHz
- **CPU Count**: 1 processor (single socket)
- **CPU Cores**: 6 cores
- **CPU Threads**: 6 threads (no hyperthreading)
- **CPU Architecture**: x86_64
- **CPU Speed**: 1.60 GHz (Base), 1.20-4.00 GHz (Range)
- **CPU Family**: Xeon (Haswell-EP)
- **CPU Stepping**: 2
- **Virtualization**: Intel VT-x (VMX) supported
- **Cache**:
- L1d: 192 KiB (6 instances)
- L1i: 192 KiB (6 instances)
- L2: 1.5 MiB (6 instances)
- L3: 15 MiB (1 instance)
**Memory**:
- **Total RAM**: 256 GB (251 GiB usable)
- **RAM Type**: DDR4 ECC LRDIMM (Load-Reduced DIMM)
- **Memory Modules**: 8x 32 GB modules
- **Memory Speed**: 2133 MT/s (configured at 1600 MT/s)
- **Memory Configuration**: Multi-bit ECC
- **Memory Slots**: [To be determined - likely 8-16 slots]
- **Available Memory**: ~244 GB (for VMs)
**Storage**:
- **Storage Controller**: Intel C610/X99 series chipset 6-Port SATA Controller (AHCI mode)
- **Storage Disks**:
- 2x Seagate ST1000DM003-1ER162 (1TB SATA HDD)
- sda: 931.5 GB (primary, with Proxmox installation)
- sdb: 931.5 GB (secondary, used for Ceph OSD)
- **Storage Configuration**:
- Primary disk (sda): LVM with Proxmox VE installation
- pve-swap: 8 GB
- pve-root: 96 GB
- pve-data: 794.3 GB (for VMs)
- Secondary disk (sdb): Ceph OSD block device
- **RAID Configuration**: Software-based (LVM, Ceph)
- **Storage Options**: SATA AHCI (no hardware RAID controller detected)
**Network**:
- **Network Adapters**: 2x Broadcom NetXtreme BCM5717 Gigabit Ethernet PCIe
- **Network Ports**: 2x 1GbE ports
- nic0 (enp2s0f0): Active, connected to vmbr0 bridge
- nic1 (enp2s0f1): Available (not configured)
- **Network Bridges**: vmbr0 (192.168.11.10/24)
- **MAC Addresses**:
- nic0: 1c:98:ec:52:43:c8
- nic1: 1c:98:ec:52:43:c9
- **Additional NICs**: Supports PCIe expansion cards
**Power**:
- **Power Supply**: [To be determined]
- **Power Rating**: [To be determined]
- **Power Consumption**: [To be determined]
**Proxmox VE Compatibility**:
- ✅ **Fully Compatible** - HPE ML110 Gen9 is fully supported by Proxmox VE
- **Installed Proxmox VE Version**: 9.1.1 (latest stable)
- **Virtualization Support**: Intel VT-x (VMX) enabled and working
- **Storage**:
- Local LVM storage configured
- Ceph OSD configured on secondary disk
- CephFS mounted at /mnt/pve/ceph-fs (384 GB available)
- **Network**: Standard network bridges configured (vmbr0)
- **Boot Mode**: UEFI (EFI boot mode)
**Current Configuration**:
- **Proxmox VE**: Installed and operational
- **Storage Pools**:
- local-lvm: 794.3 GB available (LVM thin pool)
- ceph-fs: 384 GB available (Ceph filesystem)
- **Network**: vmbr0 bridge configured with static IP (192.168.11.10/24)
- **VMs Running**: Multiple VMs configured (VMIDs: 136, 139, 141, 142, 145, 146, 150, 151)
- **Ceph**: Ceph OSD configured on secondary disk
**Deployment Notes**:
- ✅ **Currently Active** - System is operational and hosting VMs
- Suitable for development/testing workloads
- Can serve as Proxmox VE cluster node
- Recommended for low-to-medium workload VMs
- Consider for backup/storage node
- **CPU Limitation**: 6 cores may limit concurrent VM performance
- **Storage**: Using software-based storage (LVM, Ceph) - no hardware RAID
- **Network**: 1GbE network - consider upgrade to 10GbE for better performance
---
### 2. Dell PowerEdge R630 (High Memory)
**System ID**: PVE-HOST-002
**Hostname**: r630-01
**IP Address**: 192.168.11.11
**Status**: Active
**Proxmox VE Version**: 9.1.1 (pve-manager/9.1.1/42db4a6cf33dac83)
**Kernel**: 6.17.2-1-pve
**Serial Number**: HNQ3FB2
**UUID**: 4c4c4544-004e-5110-8033-c8c04f464232
**Cluster**: [To be determined]
#### Hardware Specifications
**Chassis**:
- **Manufacturer**: Dell Inc.
- **Model**: PowerEdge R630
- **Form Factor**: 1U Rack Server
- **Rack Mountable**: Yes
- **Serial Number**: HNQ3FB2
**Processor**:
- **CPU Model**: Intel Xeon E5-2660 v4 @ 2.00GHz
- **CPU Count**: 2 processors (dual socket)
- **CPU Cores**: 14 cores per processor (28 total cores)
- **CPU Threads**: 28 threads per processor (56 total threads with hyperthreading)
- **CPU Architecture**: x86_64 (Broadwell-EP)
- **CPU Speed**: 2.00 GHz (Base), 1.20-3.20 GHz (Range, Turbo up to 3.20 GHz)
- **CPU Stepping**: 1
- **Virtualization**: Intel VT-x (VMX), VT-d supported
- **NUMA**: 2 NUMA nodes (one per CPU socket)
- **Cache**:
- L1d: 896 KiB (28 instances)
- L1i: 896 KiB (28 instances)
- L2: 7 MiB (28 instances)
- L3: 70 MiB (2 instances, 35 MiB per socket)
**Memory**:
- **Total RAM**: 768 GB (755 GiB usable)
- **RAM Type**: DDR4 ECC LRDIMM (Load-Reduced DIMM)
- **Memory Modules**: 12x 64 GB modules
- Part Number: M386A8K40BM1-CRC (Samsung)
- Speed: 2400 MT/s
- Type: Synchronous Registered (Buffered) LRDIMM
- Error Correction: Multi-bit ECC
- **Memory Slots**: 24 DIMM slots (12 per CPU socket)
- **Memory Configuration**: 6 modules per CPU socket (A1-A6 populated, A7-A12 empty)
- **Available Memory**: ~744 GB (for VMs)
- **Maximum Memory**: Up to 1.5 TB (with additional LRDIMMs)
**Storage**:
- **Storage Controller**: Dell PERC H730 Mini (LSI MegaRAID SAS-3 3108 [Invader])
- **Storage Disks**:
- 2x 300GB drives
- sda: Seagate ST9300653SS (279.4 GB, primary with Proxmox installation)
- sdb: HUC106030CSS600 (279.4 GB, secondary used for Ceph OSD)
- **Storage Configuration**:
- Primary disk (sda): LVM with Proxmox VE installation
- pve-swap: 8 GB
- pve-root: 79.6 GB
- pve-data: 171.3 GB (for VMs)
- Secondary disk (sdb): Ceph OSD block device
- **RAID Configuration**: Hardware RAID controller (PERC H730 Mini)
- **Storage Bays**: 10x 2.5" hot-swappable drive bays
- **Storage Options**: SATA, SAS, NVMe (with riser card)
**Network**:
- **Network Adapters**: 4x Broadcom NetXtreme II BCM57800 1/10 Gigabit Ethernet
- **Network Ports**: 4x 10GbE ports (1/10 Gigabit capable)
- nic0 (enp1s0f0): Available (not configured)
- nic1 (enp1s0f1): Available (not configured)
- nic2 (enp1s0f2): Active, connected to vmbr0 bridge
- nic3 (enp1s0f3): Available (not configured)
- **Network Bridges**: vmbr0 (192.168.11.11/24)
- **MAC Addresses**:
- nic0: c8:1f:66:d2:c5:97
- nic1: c8:1f:66:d2:c5:99
- nic2: c8:1f:66:d2:c5:9b (active)
- nic3: c8:1f:66:d2:c5:9d
- **Network Capabilities**: 10GbE capable (currently configured for 1GbE)
- **Additional NICs**: Supports PCIe expansion cards for 25GbE/100GbE
**Power**:
- **Power Supply**: Dual redundant power supplies (typical for R630)
- **Power Rating**: [To be determined - typically 495W, 750W, or 1100W]
- **Power Consumption**: [To be determined]
**Proxmox VE Compatibility**:
- ✅ **Fully Compatible** - Dell R630 is fully supported by Proxmox VE
- **Installed Proxmox VE Version**: 9.1.1 (latest stable)
- **Virtualization Support**: Intel VT-x (VMX), VT-d enabled and working
- **Storage**:
- Local LVM storage configured
- Ceph OSD configured on secondary disk
- Hardware RAID controller (PERC H730) available
- **Network**: Standard network bridges configured (vmbr0), 10GbE capable
- **Boot Mode**: UEFI (EFI boot mode)
- **High Memory**: Excellent for memory-intensive workloads
**Current Configuration**:
- **Proxmox VE**: Installed and operational
- **Storage Pools**:
- local-lvm: 171.3 GB available (LVM thin pool)
- Ceph OSD: Configured on secondary disk
- **Network**: vmbr0 bridge configured with static IP (192.168.11.11/24)
- **VMs Running**: Multiple VMs configured (VMIDs: 101, 104, 134, 137, 138, 144, 148)
- **Ceph**: Ceph OSD configured on secondary disk
- **CPU Utilization**: 56 logical CPUs available (14 cores × 2 sockets × 2 threads with hyperthreading)
- **Memory Utilization**: ~744 GB available for VMs
**Deployment Notes**:
- ✅ **Currently Active** - System is operational and hosting VMs
- Ideal for high-memory workloads (databases, in-memory caches)
- Excellent for Proxmox VE cluster node
- Can host many VMs with high memory requirements
- Consider for primary compute node in cluster
- **CPU**: 28 cores (56 threads) provides excellent compute capacity
- **Memory**: 768 GB provides excellent capacity for memory-intensive workloads
- **Storage**: Hardware RAID controller available (PERC H730 Mini)
- **Network**: 10GbE capable - consider configuring additional ports for better performance
---
### 3-14. Dell PowerEdge R630 (Standard Configuration)
**System IDs**: PVE-HOST-003 through PVE-HOST-014
**Quantity**: 12 systems
**Status**: Active
**Proxmox VE Version**: [To be determined]
**Cluster**: [To be determined]
#### Hardware Specifications
**Chassis**:
- **Manufacturer**: Dell
- **Model**: PowerEdge R630
- **Form Factor**: 1U Rack Server
- **Rack Mountable**: Yes
**Processor**:
- **CPU Model**: [To be determined - typically Intel Xeon E5-2600 v3/v4 series]
- **CPU Count**: 2 processors (dual socket)
- **CPU Cores**: 10-18 cores per processor (20-36 total cores per system)
- **CPU Architecture**: x86_64
- **CPU Speed**: [To be determined]
- **Total CPU Cores (12 systems)**: 240-432 cores
**Memory**:
- **Total RAM per System**: 128 GB
- **Total RAM (12 systems)**: 1,536 GB
- **RAM Type**: DDR4 ECC RDIMM
- **Memory Slots**: 24 DIMM slots (12 per CPU)
- **Memory Configuration**: [To be determined]
- **Maximum Memory**: Up to 1.5 TB (with LRDIMMs)
**Storage**:
- **Storage Controller**: [To be determined - typically PERC H730/H730P]
- **Storage Bays**: 10x 2.5" hot-swappable drive bays
- **Current Storage**: [To be determined]
- **RAID Configuration**: [To be determined]
- **Storage Options**: SATA, SAS, NVMe (with riser card)
**Network**:
- **Network Adapters**: [To be determined - typically 2x 1GbE onboard]
- **Network Ports**: 2x 1GbE (onboard)
- **Additional NICs**: [To be determined - supports PCIe NICs]
- **Network Options**: 10GbE, 25GbE via PCIe cards
**Power**:
- **Power Supply**: Dual redundant power supplies
- **Power Rating**: 495W, 750W, or 1100W options
- **Power Consumption**: [To be determined]
**Proxmox VE Compatibility**:
- ✅ **Fully Compatible** - Dell R630 is fully supported by Proxmox VE
- **Recommended Proxmox VE Version**: 9.x (latest stable, matching the active hosts)
- **Virtualization Support**: Intel VT-x, VT-d
- **Storage**: Supports local storage, Ceph, ZFS
- **Network**: Supports standard network bridges, SR-IOV (with compatible NICs)
**Deployment Notes**:
- Standard configuration suitable for general-purpose workloads
- Excellent for Proxmox VE cluster nodes
- Can be used for compute-intensive workloads
- Ideal for distributed workloads across cluster
- Consider for Ceph storage nodes (with additional storage)
- Can be used for Kubernetes worker nodes
**Cluster Recommendations**:
- These 12 systems are ideal for forming a Proxmox VE cluster
- Recommended cluster size: 3-5 nodes for quorum
- Can form multiple clusters or one large cluster
- Consider Ceph storage cluster across these nodes
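As a sketch, forming such a cluster uses Proxmox's `pvecm` tool (the cluster name and IP below are placeholders, not values from this inventory):

```shell
# On the first R630 node: create the cluster
pvecm create r630-cluster

# On each additional node: join via the first node's IP
pvecm add 192.168.11.12

# Verify membership and quorum from any node
pvecm status
```

Once a majority of nodes has joined, `pvecm status` should report `Quorate: Yes`.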
---
### 15. Dell Precision 7920 (High Memory + GPU)
**System ID**: PVE-HOST-015
**Status**: Active
**Proxmox VE Version**: [To be determined]
**Cluster**: [To be determined]
#### Hardware Specifications
**Chassis**:
- **Manufacturer**: Dell
- **Model**: Precision 7920 Tower
- **Form Factor**: Tower Workstation/Server
- **Rack Mountable**: Optional (with rack kit)
**Processor**:
- **CPU Model**: [To be determined - typically Intel Xeon Scalable processors]
- **CPU Count**: 2 processors (dual socket)
- **CPU Cores**: 8-28 cores per processor (16-56 total cores)
- **CPU Architecture**: x86_64
- **CPU Speed**: [To be determined]
**Memory**:
- **Total RAM**: 128 GB
- **RAM Type**: DDR4 ECC
- **Memory Slots**: [To be determined]
- **Memory Configuration**: [To be determined]
- **Maximum Memory**: Up to 3 TB (depending on configuration)
**Graphics Processing Unit (GPU)**:
- **GPU Model**: NVIDIA Quadro P5000
- **GPU Memory**: 16 GB GDDR5X
- **GPU CUDA Cores**: 2,560
- **GPU Architecture**: Pascal (GP104)
- **GPU PCIe Slot**: PCIe 3.0 x16
- **GPU Power**: 180W TDP
- **GPU Features**:
- CUDA Compute Capability: 6.1
- Supports GPU passthrough in Proxmox VE
- Supports vGPU (with NVIDIA vGPU software)
- Supports NVIDIA GRID virtualization
**Storage**:
- **Storage Controller**: [To be determined]
- **Storage Bays**: [To be determined]
- **Current Storage**: [To be determined]
- **RAID Configuration**: [To be determined]
**Network**:
- **Network Adapters**: [To be determined]
- **Network Ports**: [To be determined]
- **Additional NICs**: [To be determined]
**Power**:
- **Power Supply**: [To be determined]
- **Power Rating**: [To be determined]
- **Power Consumption**: [To be determined]
**Proxmox VE Compatibility**:
- ✅ **Fully Compatible** - Dell Precision 7920 is fully supported by Proxmox VE
- **Recommended Proxmox VE Version**: 9.x (latest stable, matching the active hosts)
- **Virtualization Support**: Intel VT-x, VT-d (required for GPU passthrough)
- **GPU Passthrough**: ✅ Supported (requires VT-d/IOMMU)
- **Storage**: Supports local storage, Ceph, ZFS
- **Network**: Supports standard network bridges
**GPU Passthrough Configuration**:
- Requires IOMMU/VT-d enabled in BIOS
- Requires proper PCIe passthrough configuration
- Supports single GPU passthrough to one VM
- Can use NVIDIA vGPU for multiple VMs (requires NVIDIA vGPU license)
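The checklist above maps to the standard Proxmox passthrough steps; a minimal sketch (the VMID and PCI address are placeholders to be replaced with values from your inventory and `lspci` output):

```shell
# 1. Enable IOMMU: add "intel_iommu=on iommu=pt" to GRUB_CMDLINE_LINUX_DEFAULT
#    in /etc/default/grub, then:
update-grub && reboot

# 2. Load the VFIO modules at boot
printf 'vfio\nvfio_iommu_type1\nvfio_pci\n' >> /etc/modules

# 3. Confirm IOMMU is active and locate the P5000
dmesg | grep -e DMAR -e IOMMU
lspci -nn | grep -i nvidia

# 4. Attach the GPU to a VM (q35 machine type recommended for PCIe passthrough)
qm set 200 -hostpci0 0000:01:00.0,pcie=1,x-vga=1
```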
**Deployment Notes**:
- Ideal for GPU-accelerated workloads (AI/ML, rendering, compute)
- Suitable for virtualized GPU workloads
- Can host VMs requiring GPU acceleration
- Consider for specialized workloads (rendering farms, AI training)
- Excellent for development/testing GPU applications
---
### 16. Dell Precision 7920 (Standard Memory + GPU)
**System ID**: PVE-HOST-016
**Status**: Active
**Proxmox VE Version**: [To be determined]
**Cluster**: [To be determined]
#### Hardware Specifications
**Chassis**:
- **Manufacturer**: Dell
- **Model**: Precision 7920 Tower
- **Form Factor**: Tower Workstation/Server
- **Rack Mountable**: Optional (with rack kit)
**Processor**:
- **CPU Model**: [To be determined - typically Intel Xeon Scalable processors]
- **CPU Count**: 2 processors (dual socket)
- **CPU Cores**: 8-28 cores per processor (16-56 total cores)
- **CPU Architecture**: x86_64
- **CPU Speed**: [To be determined]
**Memory**:
- **Total RAM**: 64 GB
- **RAM Type**: DDR4 ECC
- **Memory Slots**: [To be determined]
- **Memory Configuration**: [To be determined]
- **Maximum Memory**: Up to 3 TB (depending on configuration)
**Graphics Processing Unit (GPU)**:
- **GPU Model**: NVIDIA Quadro P5000
- **GPU Memory**: 16 GB GDDR5X
- **GPU CUDA Cores**: 2,560
- **GPU Architecture**: Pascal (GP104)
- **GPU PCIe Slot**: PCIe 3.0 x16
- **GPU Power**: 180W TDP
- **GPU Features**:
- CUDA Compute Capability: 6.1
- Supports GPU passthrough in Proxmox VE
- Supports vGPU (with NVIDIA vGPU software)
- Supports NVIDIA GRID virtualization
**Storage**:
- **Storage Controller**: [To be determined]
- **Storage Bays**: [To be determined]
- **Current Storage**: [To be determined]
- **RAID Configuration**: [To be determined]
**Network**:
- **Network Adapters**: [To be determined]
- **Network Ports**: [To be determined]
- **Additional NICs**: [To be determined]
**Power**:
- **Power Supply**: [To be determined]
- **Power Rating**: [To be determined]
- **Power Consumption**: [To be determined]
**Proxmox VE Compatibility**:
- ✅ **Fully Compatible** - Dell Precision 7920 is fully supported by Proxmox VE
- **Recommended Proxmox VE Version**: 9.x (latest stable, matching the active hosts)
- **Virtualization Support**: Intel VT-x, VT-d (required for GPU passthrough)
- **GPU Passthrough**: ✅ Supported (requires VT-d/IOMMU)
- **Storage**: Supports local storage, Ceph, ZFS
- **Network**: Supports standard network bridges
**GPU Passthrough Configuration**:
- Requires IOMMU/VT-d enabled in BIOS
- Requires proper PCIe passthrough configuration
- Supports single GPU passthrough to one VM
- Can use NVIDIA vGPU for multiple VMs (requires NVIDIA vGPU license)
**Deployment Notes**:
- Ideal for GPU-accelerated workloads (AI/ML, rendering, compute)
- Suitable for virtualized GPU workloads
- Can host VMs requiring GPU acceleration
- Consider for specialized workloads (rendering farms, AI training)
- Excellent for development/testing GPU applications
- Lower memory configuration suitable for lighter GPU workloads
---
## Available Hardware
### Current Status
All 16 systems are currently in use. No additional hardware is available at this time.
### Future Hardware Recommendations
#### For Cluster Expansion
- **Additional Dell R630 systems**: For expanding compute cluster
- **Storage-optimized systems**: For Ceph storage cluster expansion
- **Network switches**: For improved network connectivity and redundancy
#### For GPU Workload Expansion
- **Additional GPU systems**: For expanding GPU compute capacity
- **NVIDIA A100/H100 systems**: For advanced AI/ML workloads
- **GPU servers**: Dedicated GPU server systems
#### For Network Infrastructure
- **10GbE/25GbE switches**: For improved inter-node connectivity
- **Network adapters**: 10GbE/25GbE PCIe cards for existing systems
- **Redundant network infrastructure**: For high availability
---
## Proxmox VE Cluster Configuration
### Recommended Cluster Topology
#### Option 1: Single Large Cluster
- **Cluster Name**: sankofa-pve-cluster-01
- **Nodes**: All 16 systems
- **Quorum**: strict majority of votes required (9 of 16 nodes must remain online)
- **Storage**: Ceph distributed storage across nodes
- **Network**: Shared network infrastructure
**Advantages**:
- Single management interface
- Easy VM migration across all nodes
- Centralized storage management
- Simplified backup and disaster recovery
**Considerations**:
- Requires reliable network connectivity
- Quorum management with 16 nodes
- Network bandwidth requirements
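The quorum consideration is plain majority arithmetic: corosync keeps a cluster quorate only while a strict majority of votes is present, so a 16-node cluster tolerates at most 7 node failures:

```shell
nodes=16
quorum=$(( nodes / 2 + 1 ))       # strict majority of votes
tolerated=$(( nodes - quorum ))   # node failures before quorum is lost
echo "${nodes} nodes: quorum=${quorum}, tolerates ${tolerated} failures"
# → 16 nodes: quorum=9, tolerates 7 failures
```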
#### Option 2: Multiple Specialized Clusters
**Cluster 1: Compute Cluster**
- **Nodes**: 1x HPE ML110 Gen9, 1x Dell R630 (768GB), 12x Dell R630 (128GB)
- **Total Nodes**: 14
- **Purpose**: General compute workloads
- **Storage**: Ceph distributed storage
**Cluster 2: GPU Cluster**
- **Nodes**: 2x Dell Precision 7920 (with NVIDIA P5000)
- **Total Nodes**: 2
- **Purpose**: GPU-accelerated workloads
- **Storage**: Shared storage or local storage
**Advantages**:
- Specialized clusters for different workloads
- GPU cluster isolated for specialized workloads
- Easier management of GPU resources
**Considerations**:
- Multiple clusters to manage
- Storage sharing between clusters
- Network segmentation
#### Option 3: Hybrid Approach
**Primary Cluster**: 13x Dell R630 systems (compute cluster)
- **Nodes**: 13 systems
- **Purpose**: Primary compute infrastructure
- **Storage**: Ceph distributed storage
**Secondary Cluster**: 1x HPE ML110 Gen9 + 2x Dell Precision 7920
- **Nodes**: 3 systems
- **Purpose**: Development/testing and GPU workloads
- **Storage**: Local storage or shared storage
**Advantages**:
- Separation of production and development
- GPU resources in separate cluster
- Flexible resource allocation
---
## Storage Configuration
### Recommended Storage Setup
#### Ceph Distributed Storage
- **Recommended Nodes**: 6-12 Dell R630 systems
- **Storage Disks**: [To be determined - requires inventory]
- **Network**: Dedicated storage network (10GbE recommended)
- **Replication**: 3x replication (recommended)
- **Pool Configuration**:
- RBD pool for VM disks
- CephFS for shared filesystems
- RGW for object storage (optional)
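A minimal `pveceph` sketch for this layout (the storage subnet and pool names are placeholders; run the install and OSD steps on every storage node):

```shell
pveceph install --repository no-subscription   # on every Ceph node
pveceph init --network 10.10.10.0/24           # dedicated storage subnet (placeholder)
pveceph mon create                             # on three nodes for monitor quorum
pveceph osd create /dev/sdb                    # one OSD per data disk
pveceph pool create vm-rbd --add_storages      # RBD pool for VM disks
pveceph fs create --name cephfs --add-storage  # CephFS for shared filesystems
```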
#### Local Storage
- **Use Cases**:
- OS disks for VMs
- High-performance local storage
- Backup storage
- **Recommended**: ZFS on local storage for snapshots and compression
#### Shared Storage
- **Use Cases**:
- VM templates
- ISO images
- Backup storage
- **Options**: NFS, CIFS, or CephFS
---
## Network Configuration
### Network Requirements
#### Management Network
- **Purpose**: Proxmox VE management, cluster communication
- **Bandwidth**: 1GbE minimum, 10GbE recommended
- **Redundancy**: Bonded interfaces recommended
#### VM Network
- **Purpose**: VM traffic, external connectivity
- **Bandwidth**: 1GbE minimum, 10GbE recommended
- **VLANs**: Recommended for network segmentation
#### Storage Network (Ceph)
- **Purpose**: Ceph cluster communication, data replication
- **Bandwidth**: 10GbE minimum, 25GbE recommended
- **Redundancy**: Bonded interfaces required
- **Isolation**: Dedicated network recommended
#### Migration Network
- **Purpose**: Live migration traffic
- **Bandwidth**: 10GbE recommended
- **Can share**: With storage network or management network
### Network Hardware Recommendations
#### Switches
- **Management/VM Network**: 1GbE or 10GbE switches
- **Storage Network**: 10GbE or 25GbE switches (dedicated)
- **Redundancy**: Redundant switches for high availability
#### Network Adapters
- **Onboard NICs**: Use for management network
- **PCIe NICs**: 10GbE/25GbE cards for storage and VM networks
- **Bonding**: Configure LACP bonds for redundancy
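A minimal `/etc/network/interfaces` sketch for an LACP bond feeding the VM bridge (interface names and the gateway are placeholders; the switch ports must be configured for 802.3ad):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.11.11/24
    gateway 192.168.11.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```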
---
## Proxmox VE Compatibility Matrix
| Hardware Component | Proxmox VE Support | Notes |
|-------------------|-------------------|-------|
| **HPE ML110 Gen9** | ✅ Full Support | Standard x86_64 server |
| **Dell R630** | ✅ Full Support | Enterprise server, excellent support |
| **Dell Precision 7920** | ✅ Full Support | Workstation/server hybrid |
| **Intel Xeon Processors** | ✅ Full Support | All modern Xeon processors supported |
| **DDR4 ECC Memory** | ✅ Full Support | Recommended for Proxmox VE |
| **NVIDIA P5000 GPU** | ✅ Full Support | GPU passthrough supported |
| **PERC RAID Controllers** | ✅ Full Support | Use in HBA mode for ZFS/Ceph |
| **Standard Network Adapters** | ✅ Full Support | Intel, Broadcom, etc. |
---
## BIOS/UEFI Configuration Requirements
### Required Settings for All Systems
#### Virtualization
- **Intel VT-x**: Enable
- **Intel VT-d / IOMMU**: Enable (required for GPU passthrough, PCIe passthrough)
- **SR-IOV**: Enable (if supported and using SR-IOV)
#### CPU Settings
- **Hyperthreading**: Enable (recommended)
- **CPU Power Management**: Performance mode (recommended for servers)
#### Memory Settings
- **Memory ECC**: Enable (if available)
- **NUMA**: Enable (for multi-socket systems)
#### Storage Settings
- **AHCI Mode**: For ZFS/Ceph (if not using hardware RAID)
- **RAID Mode**: For hardware RAID (if using hardware RAID)
#### Boot Settings
- **UEFI Boot**: Enable (recommended)
- **Secure Boot**: Disable (for Proxmox VE compatibility)
- **Legacy Boot**: Disable (if using UEFI)
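These settings can be verified from a running host before finalizing installation; a quick check (output varies per system):

```shell
grep -cE 'vmx|svm' /proc/cpuinfo   # > 0: VT-x/AMD-V exposed to the OS
dmesg | grep -i -e DMAR -e IOMMU   # DMAR/IOMMU lines: VT-d active
lscpu | grep -i virtualization     # should report VT-x on these Xeons
```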
---
## Performance Recommendations
### CPU Allocation
- **Host CPU Reservation**: Reserve 1-2 cores per host for Proxmox VE
- **VM CPU Allocation**: Use CPU pinning for performance-critical VMs
- **NUMA Awareness**: Configure NUMA for multi-socket systems
### Memory Allocation
- **Host Memory Reservation**: Reserve 4-8 GB per host for Proxmox VE
- **Balloon Driver**: Enable for memory overcommitment
- **Memory Hotplug**: Enable for dynamic memory allocation
### Storage Performance
- **Use NVMe/SSD**: For VM disks requiring high IOPS
- **Use Ceph**: For distributed storage and high availability
- **Use ZFS**: For local storage with snapshots and compression
- **RAID Configuration**: RAID 10 for performance, RAID 5/6 for capacity
### Network Performance
- **Use 10GbE/25GbE**: For storage and migration networks
- **Enable Jumbo Frames**: For storage network (MTU 9000)
- **Use SR-IOV**: For high-performance network requirements
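A sketch of applying these recommendations to one host and VM (the VMID, interface name, and sizes are placeholders):

```shell
# Jumbo frames on the storage interface (persist via 'mtu 9000' in
# /etc/network/interfaces; the switch must also allow MTU 9000)
ip link set bond0 mtu 9000

# NUMA-aware VM with host CPU type and a ballooning floor
qm set 101 --cpu host --cores 8 --numa 1 --memory 32768 --balloon 16384
```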
---
## Monitoring and Management
### Recommended Tools
- **Proxmox VE Web Interface**: Primary management interface
- **Proxmox VE CLI**: Command-line management
- **Prometheus + Grafana**: Monitoring and alerting
- **Zabbix**: Alternative monitoring solution
- **Proxmox Backup Server**: Backup and disaster recovery
### Key Metrics to Monitor
- **CPU Usage**: Per host and per VM
- **Memory Usage**: Per host and per VM
- **Storage Usage**: Per storage pool and per VM
- **Network Usage**: Per interface and per VM
- **Cluster Health**: Quorum status, node status
- **Ceph Health**: Cluster status, OSD status, pool usage
---
## Backup and Disaster Recovery
### Backup Strategy
- **Proxmox Backup Server**: Recommended for centralized backups
- **VM Backups**: Full backups and incremental backups
- **Backup Frequency**: Daily backups recommended
- **Retention Policy**: 30-90 days recommended
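With Proxmox Backup Server registered as a storage (named `pbs` here as a placeholder), the daily job reduces to a single `vzdump` invocation, typically scheduled from the GUI or cron:

```shell
vzdump --all --mode snapshot --storage pbs --compress zstd --mailnotification failure
```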
### Disaster Recovery
- **Cluster Configuration**: Backup cluster configuration
- **VM Templates**: Backup VM templates and ISOs
- **Storage Configuration**: Document storage setup
- **Network Configuration**: Document network setup
- **Recovery Procedures**: Document recovery procedures
---
## Security Considerations
### Proxmox VE Security
- **Firewall**: Enable Proxmox VE firewall
- **SSH Access**: Restrict SSH access, use key-based authentication
- **Web Interface**: Use HTTPS, restrict access
- **API Access**: Use API tokens, restrict permissions
- **Updates**: Regular security updates
### VM Security
- **Guest Agent**: Install QEMU guest agent in VMs
- **Firewall**: Configure firewall in VMs
- **Updates**: Regular security updates in VMs
- **Access Control**: Use Proxmox VE user management
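For API access, a privilege-separated token scoped to a narrow role is a reasonable sketch (the user, token ID, and ACL path are placeholders):

```shell
pveum user add automation@pve
pveum user token add automation@pve deploy --privsep 1
pveum acl modify /vms --tokens 'automation@pve!deploy' --roles PVEVMAdmin
```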
---
## Next Steps
### Immediate Actions
1. **Hardware Inventory**: Complete detailed hardware inventory (CPU models, storage, network)
2. **BIOS Configuration**: Configure BIOS/UEFI settings on all systems
3. **Proxmox VE Installation**: Install Proxmox VE on all systems
4. **Cluster Formation**: Form Proxmox VE cluster(s)
5. **Network Configuration**: Configure network interfaces and bonds
6. **Storage Configuration**: Configure storage (Ceph, local, shared)
7. **Testing**: Test cluster functionality, VM creation, migration
### Future Enhancements
1. **Storage Expansion**: Add additional storage to Ceph cluster
2. **Network Upgrades**: Upgrade to 10GbE/25GbE for storage network
3. **GPU Passthrough**: Configure GPU passthrough on Precision 7920 systems
4. **Monitoring Setup**: Deploy monitoring and alerting
5. **Backup Setup**: Deploy Proxmox Backup Server
6. **Documentation**: Complete detailed documentation
---
## Related Documentation
- [Hardware BOM](./hardware_bom.md) - Overall hardware specifications
- [Entity Registry](./ENTITY_REGISTRY.md) - Entity and network information
- [System Architecture](../system_architecture.md) - Overall system architecture
- [Datacenter Architecture](../datacenter_architecture.md) - Datacenter specifications
---
**Last Updated**: [Date]
**Status**: In Progress
**Maintainer**: Infrastructure Team
**Version**: 1.0