# Migration Storage Fix Guide
**Date:** $(date)
**Issue:** Container migrations failing with "migration aborted" errors
**Containers:** 1504, 2503, 2504, 6201
**Target Node:** pve
## Problem Summary

Containers are failing to migrate to pve with the following errors:

```
WARN: Migrating CT 2504 failed: migration aborted
WARN: Migrating CT 1504 failed: migration aborted
WARN: Migrating CT 2503 failed: migration aborted
WARN: Migrating CT 6201 failed: migration aborted
TASK ERROR: Some guests failed to migrate 2504, 1504, 2503, 6201
```
## Root Causes

- **Storage configuration mismatch:** source containers use `local-lvm` storage, but target nodes may not have compatible storage configured
- **Storage inactive:** target nodes may have the storage defined but not active
- **Volume group mismatch:** the source uses volume group `pve`, while target nodes may use different volume groups
- **Storage type incompatibility:** LVM-thin storage on the source may not match the target storage type
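The volume-group mismatch above can be checked directly by listing volume groups on each node. A minimal sketch; the `vg_names` helper is hypothetical, and the node IPs are the ones given later in this guide:

```shell
# Normalize `vgs --noheadings -o vg_name` output to one name per line,
# so the lists from different nodes can be compared side by side.
vg_names() {
    printf '%s\n' "$1" | awk 'NF { print $1 }'
}

# Usage (from a host with SSH access to the nodes):
#   for ip in 192.168.11.10 192.168.11.11 192.168.11.12; do
#       echo "$ip: $(vg_names "$(ssh root@$ip vgs --noheadings -o vg_name)" | tr '\n' ' ')"
#   done
```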
## Solution

### Step 1: Diagnose Storage Issues

Run the comprehensive diagnostic script to check all nodes:

```shell
./scripts/diagnose-and-fix-migration-storage.sh
```
This script will:
- Check storage status on all nodes (ml110, pve, pve2)
- Find current container locations
- Check what storage each container is using
- Attempt to fix storage issues
- Migrate containers to target node
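The storage-status check in the list above can be sketched as a small helper. This is a hypothetical illustration, not the script itself; it relies on the `pvesm status` column layout (storage name in column 1, status in column 3), and the node list and storage name come from this guide:

```shell
NODES="ml110 pve pve2"

# Return 0 if the named storage shows as "active" in `pvesm status` output.
storage_active() {
    local status_output="$1" storage="$2"
    printf '%s\n' "$status_output" \
        | awk -v s="$storage" '$1 == s && $3 == "active" { found = 1 } END { exit !found }'
}

# Usage (from a host with SSH access to the cluster):
#   for node in $NODES; do
#       out="$(ssh "root@$node" pvesm status 2>/dev/null)" || continue
#       storage_active "$out" thin1 && echo "$node: thin1 active" \
#           || echo "$node: thin1 missing or inactive"
#   done
```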
### Step 2: Fix Storage Configuration (if needed)

If storage issues are detected, run the storage fix script:

```shell
./scripts/fix-migration-storage.sh
```
This script will:
- Ensure `thin1` storage is available and active on target nodes
- Create thin pools if needed
- Configure storage properly for Proxmox
### Step 3: Manual Migration (if scripts fail)

If the automated scripts fail, you can migrate manually.

#### Option A: Migrate with Storage Conversion

```shell
# Connect to the source node (ml110)
ssh root@192.168.11.10

# For each container, stop and migrate
for vmid in 1504 2503 2504 6201; do
    # Stop the container
    pct stop $vmid
    # Migrate to pve (Proxmox will handle storage conversion)
    pct migrate $vmid pve --restart
done
```
#### Option B: Migrate to Specific Storage

```shell
# Connect to the source node
ssh root@192.168.11.10

# Migrate with an explicit target storage
for vmid in 1504 2503 2504 6201; do
    pct stop $vmid
    # --target-storage maps the rootfs onto thin1 on the destination
    pvesh create /nodes/ml110/lxc/$vmid/migrate --target pve --target-storage thin1
done
```
#### Option C: Backup and Restore

If direct migration fails, use backup/restore:

```shell
# On the source node (ml110)
for vmid in 1504 2503 2504 6201; do
    # Create a backup
    vzdump $vmid --storage local --compress gzip
    # Restore on the target node (pve)
    # Note: this requires manually transferring the backup files
done
```
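The manual transfer step that Option C leaves open might look like the following sketch. The `latest_dump` helper is hypothetical; `/var/lib/vz/dump` is vzdump's default dump directory, and the target IP comes from this guide:

```shell
# Return the newest vzdump archive for a container id in a dump directory.
latest_dump() {
    local dir="$1" vmid="$2"
    ls -t "$dir"/vzdump-lxc-"$vmid"-*.tar.gz 2>/dev/null | head -n1
}

# Usage on the source node (ml110):
#   for vmid in 1504 2503 2504 6201; do
#       dump="$(latest_dump /var/lib/vz/dump "$vmid")"
#       [ -n "$dump" ] || { echo "no backup found for $vmid" >&2; continue; }
#       # Copy the archive to the target node, then restore onto thin1
#       scp "$dump" root@192.168.11.11:/var/lib/vz/dump/
#       ssh root@192.168.11.11 \
#           "pct restore $vmid /var/lib/vz/dump/$(basename "$dump") --storage thin1"
#   done
```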
## Storage Configuration Details

### Source Node (ml110)
- IP: 192.168.11.10
- Storage: `local-lvm` (LVM thin)
- Volume group: `pve`
- Status: active

### Target Node (pve)
- IP: 192.168.11.11
- Preferred storage: `thin1` (LVM thin)
- Alternative: `local` (directory storage)

### Target Node (pve2)
- IP: 192.168.11.12
- Preferred storage: `thin1` (LVM thin)
- Alternative: `local` (directory storage)
## Verification Commands

### Check Storage Status

```shell
# On each node
ssh root@192.168.11.10 "pvesm status"  # ml110
ssh root@192.168.11.11 "pvesm status"  # pve
ssh root@192.168.11.12 "pvesm status"  # pve2
```
### Check Container Locations

```shell
# pvesh is cluster-aware, so all nodes can be queried from one host.
# (jq exits 0 even on empty input, so test the captured output rather
# than the pipeline status.)
ssh root@192.168.11.10 '
for vmid in 1504 2503 2504 6201; do
  for node in ml110 pve pve2; do
    status="$(pvesh get /nodes/$node/lxc/$vmid/status/current --output-format json 2>/dev/null | jq -r .status)"
    if [ -n "$status" ]; then echo "$vmid: $status ($node)"; continue 2; fi
  done
  echo "$vmid: not found"
done'
```
### Check Container Storage

```shell
# On the node where the containers are located
ssh root@192.168.11.10 "for vmid in 1504 2503 2504 6201; do echo -n \"\$vmid: \"; pct config \$vmid | grep '^rootfs:' | awk '{print \$2}'; done"
```
## Troubleshooting

### Issue: Storage not found on target node

Solution: run the storage fix script, or manually create the storage:

```shell
# On the target node (pve)
ssh root@192.168.11.11

# Check available volume groups
vgs

# Create a thin pool if needed
lvcreate -L 500G -n thin1 pve
lvconvert --type thin-pool pve/thin1

# Add it to Proxmox
pvesm add lvmthin thin1 --thinpool thin1 --vgname pve --content images,rootdir --nodes pve
```
### Issue: Migration fails with "storage migration failed"

Solution: try migrating to local storage instead (slower but more compatible):

```shell
pct migrate <VMID> pve --target-storage local --restart
```
### Issue: Container already on target node but wrong storage

Solution: move the root filesystem after migration:

```shell
# Stop the container
pct stop <VMID>
# Move the rootfs volume to thin1 (copies the data and updates the config;
# editing rootfs with `pct set` alone would not move the data)
pct move-volume <VMID> rootfs thin1
# Start the container
pct start <VMID>
```
## Expected Results

After successful migration:
- All containers (1504, 2503, 2504, 6201) should be on the `pve` node
- Containers should be using `thin1` (or compatible) storage
- Containers should be in "stopped" or "running" status
- No migration errors in the logs
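These expectations can be spot-checked by pulling the storage name out of each container's `rootfs:` config line. A sketch; `rootfs_storage` is a hypothetical helper, and the `rootfs: <storage>:<volume>,...` line format is the one `pct config` prints:

```shell
# Extract the storage name from a `pct config` rootfs line, e.g.
# "rootfs: thin1:vm-1504-disk-0,size=100G" -> "thin1"
rootfs_storage() {
    printf '%s\n' "$1" | sed -n 's/^rootfs: *\([^:]*\):.*/\1/p'
}

# Usage on the pve node:
#   for vmid in 1504 2503 2504 6201; do
#       cfg="$(pct config "$vmid" 2>/dev/null | grep '^rootfs:')" \
#           || { echo "$vmid: not on this node"; continue; }
#       echo "$vmid: storage=$(rootfs_storage "$cfg") $(pct status "$vmid")"
#   done
```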
## Next Steps

After successful migration:
- Verify all containers are on the correct node
- Verify container storage configuration
- Start containers if needed: `pct start <VMID>`
- Verify container functionality
- Update documentation with the new locations
## Scripts Available

- `diagnose-and-fix-migration-storage.sh`: comprehensive diagnostic and migration tool
- `fix-migration-storage.sh`: storage configuration fix tool
- `repair-thin-storage.sh`: repairs thin storage pools (thin1-thin3)
## Related Documentation

- `docs/STORAGE_MIGRATION_ISSUE.md`: original storage migration issue documentation
- `docs/MIGRATION_STATUS_UPDATE.md`: migration status updates
- `docs/COMPLETE_NEXT_STEPS_STATUS.md`: next steps after migration