- ADD_CHAIN138_TO_LEDGER_LIVE: Ledger form done; public code review repo bis-innovations/LedgerLive; init/push commands - CONTRACT_DEPLOYMENT_RUNBOOK: Chain 138 gas price 1 gwei, 36-addr check, TransactionMirror workaround - CONTRACT_*: AddressMapper, MirrorManager deployed 2026-02-12; 36-address on-chain check - NEXT_STEPS_FOR_YOU: Ledger done; steps completable now (no LAN); run-completable-tasks-from-anywhere - MASTER_INDEX, OPERATOR_OPTIONAL, SMART_CONTRACTS_INVENTORY_SIMPLE: updates - LEDGER_BLOCKCHAIN_INTEGRATION_COMPLETE: bis-innovations/LedgerLive reference Co-authored-by: Cursor <cursoragent@cursor.com>
7.6 KiB
Sankofa Phoenix Vault Cluster - Deployment Complete ✅
Last Updated: 2026-01-31
Document Version: 1.0
Status: Active Documentation
Date: 2026-01-19
Status: ✅ DEPLOYMENT COMPLETE
Cluster: 3-node High Availability Vault cluster
Executive Summary
The Sankofa Phoenix Vault cluster has been successfully deployed with full redundancy across multiple Proxmox hosts. All three nodes are operational, unsealed, and participating in the Raft consensus cluster.
Deployment Status
✅ Completed Phases
- Container Creation - All 3 containers created and started
- Vault Installation - Vault 1.21.2 installed on all nodes
- Configuration - Raft storage backend configured with HA
- Cluster Initialization - Cluster initialized with 5 unseal keys (threshold 3)
- Node Unsealing - All nodes unsealed and joined to cluster
Cluster Configuration
Node Details
| Node | VMID | Hostname | IP Address | Proxmox Host | Status | Role |
|---|---|---|---|---|---|---|
| Node 1 | 8640 | vault-phoenix-1 | 192.168.11.200 | r630-01 | ✅ Active | Leader |
| Node 2 | 8641 | vault-phoenix-2 | 192.168.11.201 | r630-02 | ✅ Active | Follower |
| Node 3 | 8642 | vault-phoenix-3 | 192.168.11.202 | r630-01 | ✅ Active | Follower |
Cluster Status
Node Address State Voter
---- ------- ----- -----
vault-phoenix-1 10.160.0.40:8201 leader true
vault-phoenix-2 10.160.0.41:8201 follower true
vault-phoenix-3 10.160.0.42:8201 follower true
Cluster Name: vault-cluster-b3158b03
Cluster ID: 135ceb09-fabd-acc5-4949-ed52500907c5
Storage Type: Raft
HA Enabled: ✅ Yes
Seal Type: Shamir (5 keys, threshold 3)
Network Configuration
Network: 192.168.11.0/24 (Main network, no VLAN)
Gateway: 192.168.11.1
API Endpoints:
Cluster Endpoints:
Resource Allocation
Per Node
- CPU: 2 cores
- Memory: 4GB
- Storage: 50GB (Raft storage + logs)
- Network: VLAN 160
Total Cluster
- CPU: 6 cores
- Memory: 12GB
- Storage: 150GB
Security Configuration
Current Settings
- TLS: Disabled (development/testing)
- Mlock: Disabled (required for Raft storage)
- Seal Type: Shamir (5 keys, threshold 3)
- Authentication: Root token (initial setup)
Production Recommendations
- Enable TLS: Configure TLS certificates for all API endpoints
- HSM Integration: Consider HSM for auto-unseal
- Authentication: Set up AppRole, LDAP, or OIDC authentication
- Policies: Create least-privilege policies for Phoenix services
- Audit Logging: Enable audit logging to secure location
Credentials
⚠️ CRITICAL: Credentials are stored in /tmp/vault-phoenix-credentials.txt
DO NOT:
- Commit credentials to Git
- Share credentials via insecure channels
- Store credentials in plain text long-term
DO:
- Move credentials to secure password manager
- Store unseal keys in separate secure locations
- Rotate root token after initial setup
- Use AppRole or other authentication methods for services
Unseal Keys (5 keys, need 3 to unseal)
foidP9q1gnN+Bm/9u1axdnSU1XSBc4ZTtCk8hsyheLahpWy6ect1WYwQNV1kzJvKqsCEER6+xHCvBN6zTMeYIELYeUu9GYrdJKuvqfnqVShPjY+EQKu15Nqju4TkhZngghKPNB/2aYlhcUy4L5jDvUHTvbUT+xHSUINnnP2iynLldUcKZAfN1U0/Bn4GGQH/5okWshZ05YFuAmXjlL5ZOCjZloY3
Root Token
hvs.PMJcL6HkZnz0unUYZAdfttZY
High Availability Features
Automatic Failover
- ✅ Leader election via Raft consensus
- ✅ Automatic failover if leader fails
- ✅ No manual intervention required
Data Redundancy
- ✅ All data replicated across all 3 nodes
- ✅ Consensus requires majority (2 of 3) for writes
- ✅ Data persisted on all nodes
Network Redundancy
- ✅ Nodes distributed across multiple Proxmox hosts
- ✅ VLAN isolation for security
- ✅ Multiple API endpoints for load distribution
Verification Commands
Check Cluster Health
# On any node
export VAULT_ADDR=http://10.160.0.40:8200
vault status
# List cluster peers (requires root token)
export VAULT_TOKEN=<root-token>
vault operator raft list-peers
Check Individual Node Status
# Node 1
ssh root@192.168.11.11 "pct exec 8640 -- vault status"
# Node 2
ssh root@192.168.11.12 "pct exec 8641 -- vault status"
# Node 3
ssh root@192.168.11.11 "pct exec 8642 -- vault status"
Test Failover
# Stop leader (Node 1)
ssh root@192.168.11.11 "pct stop 8640"
# Check new leader election
vault operator raft list-peers
# Restart Node 1
ssh root@192.168.11.11 "pct start 8640"
# Verify rejoin
vault operator raft list-peers
Next Steps
Immediate Actions
- ✅ Move Credentials: Transfer
/tmp/vault-phoenix-credentials.txtto secure location - ✅ Delete Temporary File: Remove credentials from
/tmpdirectory - ⏳ Configure Authentication: Set up AppRole or other auth methods
- ⏳ Create Policies: Define policies for Phoenix services
- ⏳ Set Up Secret Paths: Organize secrets according to Phoenix structure
Short-term (1-2 weeks)
- Enable TLS: Configure TLS certificates for production
- Set Up Monitoring: Configure monitoring and alerting
- Create Backup Procedures: Set up automated Raft snapshots
- Document Access Procedures: Document how Phoenix services will access Vault
- Test Integration: Test Phoenix API and Portal integration
Long-term (1-3 months)
- HSM Integration: Evaluate and implement HSM for auto-unseal
- Disaster Recovery: Test and document disaster recovery procedures
- Performance Tuning: Optimize cluster performance based on usage
- Security Hardening: Implement additional security measures
- Migration: Migrate secrets from existing Vault (VMID 108) if needed
Troubleshooting
Node Not Joining Cluster
# Check node status
vault status
# Check network connectivity
ping 10.160.0.40
ping 10.160.0.41
ping 10.160.0.42
# Check Vault logs
journalctl -u vault.service -f
Node Sealed
# Unseal with 3 keys
vault operator unseal <key-1>
vault operator unseal <key-2>
vault operator unseal <key-3>
Service Not Starting
# Check service status
systemctl status vault.service
# Check logs
journalctl -u vault.service -n 50
# Verify configuration
vault server -config=/etc/vault.d/vault.hcl -verify-only
Related Documentation
- Phoenix Vault Cluster Deployment Plan
- Master Secrets Inventory
- HSM Status Report
- HashiCorp Vault Documentation
Deployment Log
Deployment Date: 2026-01-19
Deployment Time: ~30 minutes
Deployment Method: Automated script + manual cluster initialization
Deployment Script: scripts/deploy-phoenix-vault-cluster.sh
Issues Encountered:
- Storage pool mismatch on r630-02 - Resolved by using
thin3pool - Systemd service template variable expansion - Resolved by escaping
$MAINPID - Vault configuration missing
disable_mlock- Resolved by adding explicit setting - Recovery shares not applicable to Shamir seal - Resolved by removing recovery parameters
All issues resolved successfully.
Status: ✅ DEPLOYMENT COMPLETE
Last Updated: 2026-01-19