
Sankofa Phoenix Vault Cluster - Deployment Complete

Last Updated: 2026-01-31
Document Version: 1.0
Status: Active Documentation


Date: 2026-01-19
Status: DEPLOYMENT COMPLETE
Cluster: 3-node High Availability Vault cluster


Executive Summary

The Sankofa Phoenix Vault cluster has been successfully deployed with redundancy across two Proxmox hosts (r630-01 and r630-02). All three nodes are operational, unsealed, and participating in the Raft consensus cluster.


Deployment Status

Completed Phases

  1. Container Creation - All 3 containers created and started
  2. Vault Installation - Vault 1.21.2 installed on all nodes
  3. Configuration - Raft storage backend configured with HA
  4. Cluster Initialization - Cluster initialized with 5 unseal keys (threshold 3)
  5. Node Unsealing - All nodes unsealed and joined to cluster
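
The Raft/HA configuration from phase 3 is not shown above. A minimal sketch of what each node's `/etc/vault.d/vault.hcl` plausibly looks like, using addresses and settings stated elsewhere in this document (ports 8200/8201, TLS disabled, `disable_mlock`); the storage `path` is illustrative, and `node_id`, `api_addr`, and `cluster_addr` must be set per node:

```shell
# Generate a vault.hcl sketch (values shown for vault-phoenix-1).
# Written to the current directory here; install as /etc/vault.d/vault.hcl on the node.
cat > vault.hcl <<'EOF'
storage "raft" {
  path    = "/opt/vault/data"        # illustrative data path
  node_id = "vault-phoenix-1"        # unique per node
}

listener "tcp" {
  address         = "0.0.0.0:8200"   # API port
  cluster_address = "0.0.0.0:8201"   # Raft/cluster port
  tls_disable     = true             # matches the current non-TLS setup
}

api_addr      = "http://10.160.0.40:8200"  # this node's API address
cluster_addr  = "http://10.160.0.40:8201"  # this node's cluster address
disable_mlock = true                       # see "Issues Encountered" below
ui            = true
EOF
```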

Cluster Configuration

Node Details

| Node   | VMID | Hostname        | IP Address     | Proxmox Host | Status | Role     |
|--------|------|-----------------|----------------|--------------|--------|----------|
| Node 1 | 8640 | vault-phoenix-1 | 192.168.11.200 | r630-01      | Active | Leader   |
| Node 2 | 8641 | vault-phoenix-2 | 192.168.11.201 | r630-02      | Active | Follower |
| Node 3 | 8642 | vault-phoenix-3 | 192.168.11.202 | r630-01      | Active | Follower |

Cluster Status

Node               Address             State       Voter
----               -------             -----       -----
vault-phoenix-1    10.160.0.40:8201    leader      true
vault-phoenix-2    10.160.0.41:8201    follower    true
vault-phoenix-3    10.160.0.42:8201    follower    true

Cluster Name: vault-cluster-b3158b03
Cluster ID: 135ceb09-fabd-acc5-4949-ed52500907c5
Storage Type: Raft
HA Enabled: Yes
Seal Type: Shamir (5 keys, threshold 3)


Network Configuration

Network: 192.168.11.0/24 (Main network, no VLAN)
Gateway: 192.168.11.1

API Endpoints:

  • vault-phoenix-1: http://10.160.0.40:8200
  • vault-phoenix-2: http://10.160.0.41:8200
  • vault-phoenix-3: http://10.160.0.42:8200

Cluster Endpoints:

  • vault-phoenix-1: 10.160.0.40:8201
  • vault-phoenix-2: 10.160.0.41:8201
  • vault-phoenix-3: 10.160.0.42:8201


Resource Allocation

Per Node

  • CPU: 2 cores
  • Memory: 4GB
  • Storage: 50GB (Raft storage + logs)
  • Network: VLAN 160

Total Cluster

  • CPU: 6 cores
  • Memory: 12GB
  • Storage: 150GB

Security Configuration

Current Settings

  • TLS: Disabled (development/testing)
  • Mlock: Disabled (disable_mlock is required when using the integrated Raft storage backend)
  • Seal Type: Shamir (5 keys, threshold 3)
  • Authentication: Root token (initial setup)

Production Recommendations

  1. Enable TLS: Configure TLS certificates for all API endpoints
  2. HSM Integration: Consider HSM for auto-unseal
  3. Authentication: Set up AppRole, LDAP, or OIDC authentication
  4. Policies: Create least-privilege policies for Phoenix services
  5. Audit Logging: Enable audit logging to secure location
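
For recommendation 1, the TLS listener stanza might look like the following. This is a sketch only; the certificate paths are illustrative, not taken from this deployment:

```shell
# TLS listener stanza (sketch) - replaces the tls_disable listener in vault.hcl.
# Written locally here for inspection; certificate paths are illustrative.
cat > listener-tls.hcl <<'EOF'
listener "tcp" {
  address         = "0.0.0.0:8200"
  cluster_address = "0.0.0.0:8201"
  tls_cert_file   = "/etc/vault.d/tls/vault.crt"
  tls_key_file    = "/etc/vault.d/tls/vault.key"
}
EOF
```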

Credentials

⚠️ CRITICAL: Credentials are stored in /tmp/vault-phoenix-credentials.txt

DO NOT:

  • Commit credentials to Git
  • Share credentials via insecure channels
  • Store credentials in plain text long-term

DO:

  • Move credentials to secure password manager
  • Store unseal keys in separate secure locations
  • Rotate root token after initial setup
  • Use AppRole or other authentication methods for services

Unseal Keys (5 keys, need 3 to unseal)

  1. foidP9q1gnN+Bm/9u1axdnSU1XSBc4ZTtCk8hsyheLah
  2. pWy6ect1WYwQNV1kzJvKqsCEER6+xHCvBN6zTMeYIELY
  3. eUu9GYrdJKuvqfnqVShPjY+EQKu15Nqju4TkhZngghKP
  4. NB/2aYlhcUy4L5jDvUHTvbUT+xHSUINnnP2iynLldUcK
  5. ZAfN1U0/Bn4GGQH/5okWshZ05YFuAmXjlL5ZOCjZloY3

Root Token

hvs.PMJcL6HkZnz0unUYZAdfttZY


High Availability Features

Automatic Failover

  • Leader election via Raft consensus
  • Automatic failover if leader fails
  • No manual intervention required

Data Redundancy

  • All data replicated across all 3 nodes
  • Consensus requires majority (2 of 3) for writes
  • Data persisted on all nodes
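
The majority requirement above is simple arithmetic; a quick sketch of Raft quorum sizing for this cluster:

```shell
# Raft quorum: a majority of voters is needed for writes and leader election.
quorum()          { echo $(( $1 / 2 + 1 )); }
fault_tolerance() { echo $(( $1 - ($1 / 2 + 1) )); }

echo "3-node cluster: quorum $(quorum 3), tolerates $(fault_tolerance 3) failure"
# -> 3-node cluster: quorum 2, tolerates 1 failure
```

Note that a 4th node would not improve fault tolerance (quorum 3, still 1 tolerated failure); Raft clusters are normally sized at odd counts.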

Network Redundancy

  • Nodes distributed across multiple Proxmox hosts
  • VLAN isolation for security
  • Multiple API endpoints for load distribution

Verification Commands

Check Cluster Health

# On any node
export VAULT_ADDR=http://10.160.0.40:8200
vault status

# List cluster peers (requires root token)
export VAULT_TOKEN=<root-token>
vault operator raft list-peers
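
Vault's /v1/sys/health endpoint also encodes node state in the HTTP status code, which allows unauthenticated scripted checks. A sketch, using the code-to-state mapping from Vault's sys/health API semantics:

```shell
# Map /v1/sys/health HTTP status codes to node states.
health_state() {
  case "$1" in
    200) echo "active" ;;
    429) echo "standby" ;;
    472) echo "DR secondary" ;;
    501) echo "not initialized" ;;
    503) echo "sealed" ;;
    *)   echo "unknown ($1)" ;;
  esac
}

# Probe a node without a token, e.g.: vault_health http://10.160.0.40:8200
vault_health() {
  health_state "$(curl -s -o /dev/null -w '%{http_code}' "$1/v1/sys/health")"
}
```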

Check Individual Node Status

# Node 1
ssh root@192.168.11.11 "pct exec 8640 -- vault status"

# Node 2
ssh root@192.168.11.12 "pct exec 8641 -- vault status"

# Node 3
ssh root@192.168.11.11 "pct exec 8642 -- vault status"

Test Failover

# Stop leader (Node 1)
ssh root@192.168.11.11 "pct stop 8640"

# Check new leader election
vault operator raft list-peers

# Restart Node 1
ssh root@192.168.11.11 "pct start 8640"

# Verify rejoin
vault operator raft list-peers
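
When scripting this failover test, the current leader can be extracted from the list-peers table; a small awk sketch (it assumes the four-column Node/Address/State/Voter layout shown earlier):

```shell
# Print the node whose State column reads "leader" from
# `vault operator raft list-peers` output piped on stdin.
leader_of() { awk '$3 == "leader" { print $1 }'; }

# Usage: vault operator raft list-peers | leader_of
```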

Next Steps

Immediate Actions

  1. Move Credentials: Transfer /tmp/vault-phoenix-credentials.txt to secure location
  2. Delete Temporary File: Remove credentials from /tmp directory
  3. Configure Authentication: Set up AppRole or other auth methods
  4. Create Policies: Define policies for Phoenix services
  5. Set Up Secret Paths: Organize secrets according to Phoenix structure

Short-term (1-2 weeks)

  1. Enable TLS: Configure TLS certificates for production
  2. Set Up Monitoring: Configure monitoring and alerting
  3. Create Backup Procedures: Set up automated Raft snapshots
  4. Document Access Procedures: Document how Phoenix services will access Vault
  5. Test Integration: Test Phoenix API and Portal integration
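
For the backup item, `vault operator raft snapshot save` is Vault's built-in snapshot mechanism. A sketch of a timestamped naming convention for automated snapshots (the /backups destination is illustrative; scheduling and retention are left out):

```shell
# Timestamped snapshot filename for automated Raft backups.
snapshot_name() { echo "vault-raft-$(date -u +%Y%m%d-%H%M%S).snap"; }

# From cron, with VAULT_ADDR and VAULT_TOKEN set:
#   vault operator raft snapshot save "/backups/$(snapshot_name)"
```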

Long-term (1-3 months)

  1. HSM Integration: Evaluate and implement HSM for auto-unseal
  2. Disaster Recovery: Test and document disaster recovery procedures
  3. Performance Tuning: Optimize cluster performance based on usage
  4. Security Hardening: Implement additional security measures
  5. Migration: Migrate secrets from existing Vault (VMID 108) if needed

Troubleshooting

Node Not Joining Cluster

# Check node status
vault status

# Check network connectivity
ping 10.160.0.40
ping 10.160.0.41
ping 10.160.0.42

# Check Vault logs
journalctl -u vault.service -f

# Re-join the cluster via the current leader (run on the affected node)
vault operator raft join http://10.160.0.40:8200

Node Sealed

# Unseal with 3 keys
vault operator unseal <key-1>
vault operator unseal <key-2>
vault operator unseal <key-3>
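
The three unseal commands can be wrapped in one pass over the keys. A sketch: the threshold check mirrors the 3-of-5 Shamir configuration, and the VAULT override exists only to allow dry runs (keys on the command line will land in shell history, so prefer interactive entry in practice):

```shell
# Unseal a node by submitting keys until the Shamir threshold (3) is reached.
: "${VAULT:=vault}"   # override with VAULT=echo for a dry run

unseal_node() {
  addr=$1; shift
  if [ "$#" -lt 3 ]; then
    echo "need at least 3 unseal keys (threshold)" >&2
    return 1
  fi
  for key in "$@"; do
    VAULT_ADDR="$addr" "$VAULT" operator unseal "$key"
  done
}

# Usage: unseal_node http://10.160.0.40:8200 <key-1> <key-2> <key-3>
```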

Service Not Starting

# Check service status
systemctl status vault.service

# Check logs
journalctl -u vault.service -n 50

# Verify configuration
vault operator diagnose -config=/etc/vault.d/vault.hcl


Deployment Log

Deployment Date: 2026-01-19
Deployment Duration: ~30 minutes
Deployment Method: Automated script + manual cluster initialization
Deployment Script: scripts/deploy-phoenix-vault-cluster.sh

Issues Encountered:

  1. Storage pool mismatch on r630-02 - Resolved by using thin3 pool
  2. Systemd service template variable expansion - Resolved by escaping $MAINPID
  3. Vault configuration missing disable_mlock - Resolved by adding explicit setting
  4. Recovery shares not applicable to Shamir seal - Resolved by removing recovery parameters

All issues resolved successfully.


Status: DEPLOYMENT COMPLETE
Last Updated: 2026-01-19