- Organized 252 files across the project
- Root directory: 187 → 2 files (98.9% reduction)
- Moved configuration guides to docs/04-configuration/
- Moved troubleshooting guides to docs/09-troubleshooting/
- Moved quick start guides to docs/01-getting-started/
- Moved reports to the reports/ directory
- Archived temporary files
- Generated comprehensive reports and documentation
- Created maintenance scripts and guides

All files organized according to established standards.
LXC Template and Node Verification Summary
Created Scripts and Documentation
✅ Verification Script: verify-node-ready.sh
- Checks node status, storage, templates, network, resources, and VMID availability
- Usage:
./verify-node-ready.sh r630-01
✅ Deployment Script: deploy-supporting-services.sh
- Automates creation of Redis, Web3Signer, and Vault containers
- Usage:
./deploy-supporting-services.sh r630-01
✅ Documentation: LXC_DEPLOYMENT.md
- Complete guide for LXC container deployment
- Manual and automated deployment options
- Troubleshooting guide
✅ Updated: DEPLOYMENT.md
- Added automated deployment option
- Updated manual deployment steps with LXC container creation commands
Next Steps to Verify r630-01
1. Run Verification Script (on Proxmox host)
# SSH to r630-01 or a Proxmox host with API access
ssh root@192.168.11.11 # r630-01
# Navigate to project directory
cd /path/to/proxmox/rpc-translator-138
# Run verification
./verify-node-ready.sh r630-01
2. Manual Verification (if scripts not available)
Check Templates:
# List available templates
pveam list local | grep ubuntu-22.04
# If not available, download:
pveam download local ubuntu-22.04-standard_22.04-1_amd64.tar.zst
Check Node Status:
# Check node is online
pvesh get /nodes/r630-01/status
# Check resources
pvesh get /nodes/r630-01/status | grep -E "mem|disk"
Check Storage:
# List storage
pvesh get /nodes/r630-01/storage
# Verify local-lvm exists and has space
Check Network:
# List network interfaces
pvesh get /nodes/r630-01/network
# Verify vmbr0 exists
Check VMID Availability:
# List existing containers/VMs
pvesh get /nodes/r630-01/lxc
pvesh get /nodes/r630-01/qemu
# Verify VMIDs 106, 107, 108 are not in use
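To spot conflicts at a glance, the first column of the two listings above can be fed through a small helper. The `check_vmids` function and the sample VMID listing below are illustrative sketches, not part of the project's scripts.

```shell
#!/usr/bin/env bash
# Report which of the wanted VMIDs already appear in a listing of used VMIDs.
# On the node, feed it the first column of `pct list` / `qm list`, e.g.:
#   { pct list; qm list; } | awk 'NR>1 {print $1}' | check_vmids 106 107 108
check_vmids() {
  local used conflicts=()
  used=$(cat)                       # used VMIDs arrive on stdin, one per line
  local vmid
  for vmid in "$@"; do
    if grep -qx "$vmid" <<<"$used"; then
      conflicts+=("$vmid")
    fi
  done
  if ((${#conflicts[@]})); then
    echo "CONFLICT: ${conflicts[*]} already in use"
    return 1                        # nonzero exit signals a conflict
  fi
  echo "OK: $* are free"
}

# Illustrative run against a fixed listing in which VMID 107 is taken:
printf '100\n101\n107\n' | check_vmids 106 107 108 || true
# prints: CONFLICT: 107 already in use
```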
3. Quick Template Download (if needed)
Via Web UI:
- Datacenter > Storage > local
- Click "Templates" tab
- Click "Download Templates"
- Select "ubuntu-22.04-standard"
- Click "Download"
- Wait for completion
Via CLI:
# List available templates
pveam available | grep ubuntu-22.04
# Download template
pveam download local ubuntu-22.04-standard_22.04-1_amd64.tar.zst
# Verify download
pveam list local | grep ubuntu-22.04
Required Resources
- Memory: 6GB+ free (512MB + 2GB + 2GB allocated across the 3 containers, plus headroom)
- Disk: 50GB+ free (10GB + 20GB + 20GB)
- Storage: local-lvm must be available
- Network: vmbr0 bridge must exist
- Template: ubuntu-22.04-standard (or similar)
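As a sketch of how these thresholds could be checked programmatically — the `check_resources` helper and the suggested `free`/`df` invocations are assumptions, not part of the shipped scripts:

```shell
#!/usr/bin/env bash
# Check free memory/disk (in MB) against the container requirements above.
# On the node, the inputs could come from, for example:
#   free -m | awk '/^Mem:/ {print $7}'            # available RAM in MB
#   df -m --output=avail /var/lib/vz | tail -1    # free disk in MB (path is an assumption)
check_resources() {
  local mem_mb=$1 disk_mb=$2
  local need_mem=6144 need_disk=51200   # 6GB RAM, 50GB disk
  local ok=0
  ((mem_mb  >= need_mem))  || { echo "memory: ${mem_mb}MB free, need ${need_mem}MB"; ok=1; }
  ((disk_mb >= need_disk)) || { echo "disk: ${disk_mb}MB free, need ${need_disk}MB"; ok=1; }
  ((ok == 0)) && echo "resources OK"
  return $ok
}

check_resources 8192 102400   # prints: resources OK
```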
Expected Template Path
Template path format: local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst
Or check what's available:
pveam list local | grep -E "ubuntu|debian"
Container Specifications
| Service | VMID | IP | RAM | Disk | Template |
|---|---|---|---|---|---|
| Redis | 106 | 192.168.11.110 | 512MB | 10GB | Ubuntu 22.04 |
| Web3Signer | 107 | 192.168.11.111 | 2048MB | 20GB | Ubuntu 22.04 |
| Vault | 108 | 192.168.11.112 | 2048MB | 20GB | Ubuntu 22.04 |
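The table above can be turned into `pct create` commands mechanically. The sketch below prints the commands rather than executing them (drop the `echo`, or pipe the output to `bash` on the node, to apply); the template filename, `local-lvm` rootfs, gateway, and /24 netmask are assumptions to adjust for your environment.

```shell
#!/usr/bin/env bash
# Generate pct create commands from the container specification table.
gen_pct_commands() {
  local template="local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst"
  local gateway="192.168.11.1"    # assumed gateway for the 192.168.11.0/24 network
  # name:vmid:ip:ram_mb:disk_gb
  local specs="redis:106:192.168.11.110:512:10
web3signer:107:192.168.11.111:2048:20
vault:108:192.168.11.112:2048:20"
  local name vmid ip ram disk
  while IFS=: read -r name vmid ip ram disk; do
    echo pct create "$vmid" "$template" \
      --hostname "$name" --memory "$ram" \
      --rootfs "local-lvm:${disk}" \
      --net0 "name=eth0,bridge=vmbr0,ip=${ip}/24,gw=${gateway}" \
      --start 1
  done <<<"$specs"
}

gen_pct_commands
```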
Verification Checklist
- r630-01 is online and accessible
- local-lvm storage is available
- Ubuntu 22.04 template is downloaded
- vmbr0 network bridge exists
- Sufficient memory available (6GB+)
- Sufficient disk space available (50GB+)
- VMIDs 106, 107, 108 are not in use
- Scripts are executable (chmod +x *.sh)
Troubleshooting
If template not found:
- Download via the Web UI, or via CLI:
pveam download local ubuntu-22.04-standard_22.04-1_amd64.tar.zst
If storage not available:
- Check storage configuration in Proxmox
- Ensure local-lvm is enabled on r630-01
If VMID conflicts:
- Check existing containers:
pct list | grep -E "106|107|108"
- Remove the conflicting containers or use different VMIDs
If network issues:
- Verify vmbr0 exists:
ip link show vmbr0
- Check bridge configuration in the Proxmox Web UI
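If vmbr0 is missing entirely, a typical Proxmox bridge stanza in /etc/network/interfaces looks like the following; the physical port (eno1) is an assumption, and the addresses follow this network's conventions (r630-01 is 192.168.11.11 per the SSH step above):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.11.11/24
    gateway 192.168.11.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

After editing, apply the change with ifreload -a (ifupdown2) or reboot the node.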
References
- Deployment Guide: DEPLOYMENT.md
- LXC Deployment Guide: LXC_DEPLOYMENT.md
- VMID Allocation: VMID_ALLOCATION.md