# Storage Migration Issue - pve2 Configuration

**Last Updated:** 2026-01-31
**Document Version:** 1.0
**Status:** Active Documentation

**Issue:** Container migrations failing due to storage configuration mismatch.


## Problem

Container migrations from ml110 to pve2 are failing with the error:

```
Volume group "pve" not found
ERROR: storage migration for 'local-lvm:vm-XXXX-disk-0' to storage 'local-lvm' failed
```

## Root Cause

**ml110 (source):**

- Has local-lvm storage active
- Uses a volume group named "pve" (the standard Proxmox setup)
- Containers stored on local-lvm:vm-XXXX-disk-0

**pve2 (target):**

- Has local-lvm storage defined, but it is INACTIVE
- Has volume groups named lvm1, lvm2, lvm3, lvm4, lvm5, lvm6 instead of "pve"
- Storage is not properly configured for Proxmox
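The mismatch above can be confirmed mechanically. A minimal sketch, using pve2's reported VG names as a hard-coded sample; on the real node you would feed in the output of `vgs --noheadings -o vg_name` over SSH instead:

```shell
#!/bin/sh
# Check whether a "pve" volume group exists among the node's VG names.
# Sample list mirrors pve2's reported VGs; on the real node use:
#   ssh root@192.168.11.12 "vgs --noheadings -o vg_name"
vg_names='lvm1 lvm2 lvm3 lvm4 lvm5 lvm6'
if echo "$vg_names" | tr ' ' '\n' | grep -qx 'pve'; then
  echo "pve volume group found"
else
  echo "pve volume group missing - migrations to local-lvm will fail"
fi
```

`grep -qx` matches the whole line, so a VG named, say, `pve2data` would not count as a false positive.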

## Storage Status

| Node  | local-lvm status  | Capacity                | Volume group(s)                                      |
|-------|-------------------|-------------------------|------------------------------------------------------|
| ml110 | lvmthin, active   | 832GB total, 108GB used | pve (standard)                                       |
| pve2  | lvmthin, INACTIVE | 0GB available           | lvm1, lvm2, lvm3, lvm4, lvm5, lvm6 (non-standard)    |

## Solutions

### Option 1: Fix pve2's LVM Configuration

1. Rename or create a "pve" volume group on pve2:

   ```shell
   # On pve2, check the current LVM setup
   ssh root@192.168.11.12 "vgs; lvs"

   # Rename one of the volume groups to "pve" (if possible),
   # OR create a new "pve" volume group from available space
   ```

2. Activate the local-lvm storage on pve2:

   ```shell
   # Check the storage configuration
   ssh root@192.168.11.12 "cat /etc/pve/storage.cfg"

   # local-lvm may need to be reconfigured to use the correct volume group
   ```
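For reference, on a default Proxmox node the local-lvm entry in `/etc/pve/storage.cfg` is defined against the "pve" volume group and its `data` thin pool. Once pve2's VG situation is fixed, its entry should end up shaped roughly like this (thin pool name assumed to be the default `data`):

```
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
```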

### Option 2: Migrate to Different Storage on pve2

Use local (directory storage) instead of local-lvm:

```shell
# Migrate with an explicit target storage
pct migrate <VMID> pve2 --target-storage local --restart
```

**Pros:** Works immediately; no storage reconfiguration needed
**Cons:** Directory storage is slower than LVM thin provisioning
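To move several containers in one pass, the single-container migration can be wrapped in a loop. A dry-run sketch: the VMIDs are hypothetical placeholders, the loop only echoes each command so it can be reviewed first, and the target-storage flag name assumes current `pct` versions (`--target-storage`):

```shell
#!/bin/sh
# Dry run of a batch container migration to pve2's directory storage.
# VMIDs are placeholders - replace with real container IDs.
# Remove the echo to actually execute each migration.
for vmid in 101 102 103; do
  echo pct migrate "$vmid" pve2 --target-storage local --restart
done
```

Keeping the echo in place until the printed commands look right avoids kicking off a half-configured migration against live containers.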

### Option 3: Use Shared Storage

Configure shared storage (NFS, Ceph, etc.) accessible from both nodes:

```shell
# Add shared storage to the cluster
# Then migrate containers to the shared storage
```
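If NFS were chosen, the resulting cluster-wide entry in `/etc/pve/storage.cfg` would look roughly like the sketch below. The storage ID, server address, and export path are all hypothetical placeholders, not values from this environment:

```
nfs: shared-nfs
        path /mnt/pve/shared-nfs
        server 192.168.11.20
        export /export/proxmox
        content images,rootdir
```

Because shared storage is visible from every node, migrations between ml110 and pve2 would no longer need to copy disk volumes at all.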

## Immediate Workaround

Until pve2's local-lvm is properly configured:

1. Skip migrations for now
2. Configure pve2 storage first
3. Then proceed with migrations

## Next Steps

1. Investigate pve2's LVM configuration
2. Configure local-lvm storage on pve2 with a "pve" volume group
3. Verify the storage is active and working
4. Retry the container migrations

## Verification Commands

```shell
# Check pve2 storage status
ssh root@192.168.11.12 "pvesm status"

# Check volume groups
ssh root@192.168.11.12 "vgs"

# Check the local-lvm configuration
ssh root@192.168.11.12 "grep -A 5 local-lvm /etc/pve/storage.cfg"
```
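Before retrying migrations, the `pvesm status` output can be checked mechanically. One subtlety: a plain `grep active` would also match "inactive", so the status column should be compared exactly. A sketch using a hard-coded sample line shaped like pve2's current state; on the real node, pipe `ssh root@192.168.11.12 "pvesm status"` through the same awk filter instead:

```shell
#!/bin/sh
# Verify a storage is usable before retrying migrations.
# Column 3 of `pvesm status` is the status field; compare it exactly,
# since "inactive" contains the substring "active".
status_line='local-lvm     lvmthin   inactive        0        0        0    0.00%'
state=$(echo "$status_line" | awk '{print $3}')
if [ "$state" = "active" ]; then
  echo "local-lvm is active"
else
  echo "local-lvm is NOT active ($state)"
fi
```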

**Status:** ⚠️ Migrations paused pending storage configuration fix