Organize docs directory: move 25 files to appropriate locations
- Created docs/00-meta/ for documentation meta files (11 files)
- Created docs/archive/reports/ for reports (5 files)
- Created docs/archive/issues/ for issue tracking (2 files)
- Created docs/bridge/contracts/ for Solidity contracts (3 files)
- Created docs/04-configuration/metamask/ for Metamask configs (3 files)
- Created docs/scripts/ for documentation scripts (2 files)
- Root directory now contains only 3 essential files (89.3% reduction)

All recommended actions from docs directory review complete.
345
scripts/DEPLOYMENT_README_R630-01.md
Normal file
@@ -0,0 +1,345 @@
# Sankofa & Phoenix Deployment Guide for r630-01

**Target Server:** r630-01 (192.168.11.11)
**Deployment Date:** $(date +%Y-%m-%d)
**Status:** Ready for Deployment

---

## Overview

This guide provides step-by-step instructions for deploying the Sankofa and Phoenix control plane services to the r630-01 Proxmox node.

### Architecture

```
r630-01 (192.168.11.11)
├── VMID 7803: PostgreSQL (10.160.0.13)
├── VMID 7802: Keycloak (10.160.0.12)
├── VMID 7800: Sankofa API (10.160.0.10)
└── VMID 7801: Sankofa Portal (10.160.0.11)
```

### Network Configuration

- **VLAN:** 160
- **Subnet:** 10.160.0.0/22
- **Gateway:** 10.160.0.1
- **Storage:** thin1 (208GB available)

---

## Prerequisites

1. **SSH Access to r630-01**
   ```bash
   ssh root@192.168.11.11
   ```

2. **Sankofa Project Available**
   - Location: `/home/intlc/projects/Sankofa`
   - Must contain `api/` and `portal/` directories

3. **Proxmox Storage**
   - Verify `thin1` storage is available
   - Check available space: `pvesm status`

4. **Network Configuration**
   - Verify VLAN 160 is configured
   - Verify the gateway (10.160.0.1) is reachable

---

## Deployment Steps

### Step 1: Prepare Configuration

1. Copy the environment template:
   ```bash
   cd /home/intlc/projects/proxmox/scripts
   cp env.r630-01.example .env.r630-01
   ```

2. Edit `.env.r630-01` and update:
   - Database passwords
   - Keycloak admin password
   - Client secrets
   - JWT secrets
   - Any other production values
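Strong values for the secret fields can be generated with `openssl`; a minimal sketch (the variable names below are illustrative — match them to the keys actually used in `env.r630-01.example`):

```bash
# Generate random secrets (names are illustrative; match your template's keys)
DB_PASSWORD="$(openssl rand -base64 24)"
KEYCLOAK_ADMIN_PASSWORD="$(openssl rand -base64 24)"
JWT_SECRET="$(openssl rand -base64 32)"

# Append them to the environment file
cat >> .env.r630-01 <<EOF
DB_PASSWORD=$DB_PASSWORD
KEYCLOAK_ADMIN_PASSWORD=$KEYCLOAK_ADMIN_PASSWORD
JWT_SECRET=$JWT_SECRET
EOF
```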
### Step 2: Deploy Containers

Deploy all LXC containers:

```bash
cd /home/intlc/projects/proxmox/scripts
./deploy-sankofa-r630-01.sh
```

This will create:
- PostgreSQL container (VMID 7803)
- Keycloak container (VMID 7802)
- API container (VMID 7800)
- Portal container (VMID 7801)
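For reference, the kind of `pct create` invocation the script issues per container looks roughly like this (a dry-run sketch; the helper name, Debian 12 template path, and `vmbr0` bridge are assumptions — check the script for the real values):

```bash
# Print (dry run) a pct create command for one container; drop the 'echo'
# to actually create it on the Proxmox host.
create_ct() {
    vmid="$1"; name="$2"; ip="$3"; mem="$4"; cores="$5"; disk="$6"
    echo pct create "$vmid" local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
        --hostname "$name" --memory "$mem" --cores "$cores" \
        --rootfs "thin1:$disk" \
        --net0 "name=eth0,bridge=vmbr0,tag=160,ip=$ip/22,gw=10.160.0.1" \
        --unprivileged 1 --start 1
}

create_ct 7803 sankofa-postgres 10.160.0.13 2048 2 50   # PostgreSQL container
```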
### Step 3: Setup PostgreSQL

Configure the PostgreSQL database:

```bash
./setup-postgresql-r630-01.sh
```

This will:
- Install PostgreSQL 16
- Create `sankofa` database
- Create `sankofa` user
- Configure network access
- Enable required extensions
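The database objects created amount to roughly the following SQL (a sketch; the placeholder password and the `uuid-ossp` extension are assumptions — the script generates a real password and defines the actual extension list):

```bash
# SQL applied inside the PostgreSQL container (sketch)
SETUP_SQL=$(cat <<'SQL'
CREATE USER sankofa WITH PASSWORD 'change-me';   -- placeholder password
CREATE DATABASE sankofa OWNER sankofa;
\connect sankofa
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";      -- assumed extension
SQL
)

# Pipe it through psql inside the container, e.g.:
#   ssh root@192.168.11.11 "pct exec 7803 -- su - postgres -c psql" <<<"$SETUP_SQL"
printf '%s\n' "$SETUP_SQL"
```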
**Note:** The script will generate a random password. Update `.env.r630-01` with the actual password.

### Step 4: Setup Keycloak

Configure the Keycloak identity service:

```bash
./setup-keycloak-r630-01.sh
```

This will:
- Install Java 21
- Download and install Keycloak 24.0.0
- Create Keycloak database
- Configure PostgreSQL connection
- Create admin user
- Create API and Portal clients
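Client creation can be done with Keycloak's bundled admin CLI (`kcadm.sh`); a dry-run sketch — the install path and the placeholder secret are assumptions, and `kcadm.sh config credentials` must be run first to log in:

```bash
KC=/opt/keycloak/bin/kcadm.sh   # assumed install path

# Command to create the API client; echoed as a dry run. Log in first with
# '$KC config credentials --server http://localhost:8080 --realm master ...'
CREATE_API_CLIENT="$KC create clients -r master -s clientId=sankofa-api -s enabled=true -s serviceAccountsEnabled=true -s secret=change-me"
echo "$CREATE_API_CLIENT"
```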
**Note:** The script will generate random passwords and secrets. Update `.env.r630-01` with the actual values.

### Step 5: Deploy API

Deploy the Sankofa API service:

```bash
./deploy-api-r630-01.sh
```

This will:
- Install Node.js 18
- Install pnpm
- Copy API project files
- Install dependencies
- Configure environment
- Run database migrations
- Build API
- Create systemd service
- Start API service

### Step 6: Run Database Migrations

If migrations weren't run during API deployment:

```bash
./run-migrations-r630-01.sh
```

### Step 7: Deploy Portal

Deploy the Sankofa Portal:

```bash
./deploy-portal-r630-01.sh
```

This will:
- Install Node.js 18
- Install pnpm
- Copy Portal project files
- Install dependencies
- Configure environment
- Build Portal (Next.js)
- Create systemd service
- Start Portal service

---

## Verification

### Check Container Status

```bash
ssh root@192.168.11.11 "pct list | grep -E '780[0-3]'"
```

### Check Service Status

**PostgreSQL:**
```bash
ssh root@192.168.11.11 "pct exec 7803 -- systemctl status postgresql"
```

**Keycloak:**
```bash
ssh root@192.168.11.11 "pct exec 7802 -- systemctl status keycloak"
curl http://10.160.0.12:8080/health/ready
```

**API:**
```bash
ssh root@192.168.11.11 "pct exec 7800 -- systemctl status sankofa-api"
curl http://10.160.0.10:4000/health
```

**Portal:**
```bash
ssh root@192.168.11.11 "pct exec 7801 -- systemctl status sankofa-portal"
curl http://10.160.0.11:3000
```

### Test GraphQL Endpoint

```bash
curl -X POST http://10.160.0.10:4000/graphql \
  -H "Content-Type: application/json" \
  -d '{"query": "{ __typename }"}'
```
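The individual health checks can also be rolled into one loop that prints an HTTP status per endpoint (run from a host that can reach VLAN 160; unreachable services show `000`):

```bash
for url in http://10.160.0.12:8080/health/ready \
           http://10.160.0.10:4000/health \
           http://10.160.0.11:3000; do
    # -w prints the status code even when the request fails
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url") || true
    echo "${code:-000} $url"
done
```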
---

## Service URLs

| Service | URL | Description |
|---------|-----|-------------|
| PostgreSQL | `10.160.0.13:5432` | Database |
| Keycloak | `http://10.160.0.12:8080` | Identity Provider |
| Keycloak Admin | `http://10.160.0.12:8080/admin` | Admin Console |
| API | `http://10.160.0.10:4000` | GraphQL API |
| API GraphQL | `http://10.160.0.10:4000/graphql` | GraphQL Endpoint |
| API Health | `http://10.160.0.10:4000/health` | Health Check |
| Portal | `http://10.160.0.11:3000` | Web Portal |

---

## Troubleshooting

### Container Won't Start

```bash
# Check container status
ssh root@192.168.11.11 "pct status 7800"

# Check container logs
ssh root@192.168.11.11 "pct exec 7800 -- journalctl -n 50"
```

### Database Connection Issues

```bash
# Test database connection from API container
ssh root@192.168.11.11 "pct exec 7800 -- bash -c 'PGPASSWORD=your_password psql -h 10.160.0.13 -U sankofa -d sankofa -c \"SELECT 1;\"'"
```

### Keycloak Not Starting

```bash
# Check Keycloak logs
ssh root@192.168.11.11 "pct exec 7802 -- journalctl -u keycloak -n 100"

# Check Keycloak process
ssh root@192.168.11.11 "pct exec 7802 -- ps aux | grep keycloak"
```

### API Service Issues

```bash
# Check API logs
ssh root@192.168.11.11 "pct exec 7800 -- journalctl -u sankofa-api -n 100"

# Restart API service
ssh root@192.168.11.11 "pct exec 7800 -- systemctl restart sankofa-api"
```

### Portal Build Failures

```bash
# Check build logs
ssh root@192.168.11.11 "pct exec 7801 -- journalctl -u sankofa-portal -n 100"

# Rebuild Portal
ssh root@192.168.11.11 "pct exec 7801 -- bash -c 'cd /opt/sankofa-portal && pnpm build'"
```

---

## Post-Deployment Tasks

1. **Update Environment Variables**
   - Update `.env.r630-01` with actual passwords and secrets
   - Update service configurations if needed

2. **Configure Firewall Rules**
   - Allow access to service ports
   - Configure VLAN 160 routing if needed

3. **Set Up Cloudflare Tunnels**
   - Configure tunnels for external access
   - Set up DNS records

4. **Configure Monitoring**
   - Set up Prometheus exporters
   - Configure Grafana dashboards
   - Set up alerting

5. **Backup Configuration**
   - Document all passwords and secrets
   - Create backup procedures
   - Test restore procedures

---

## Maintenance

### Update Services

**Update API:**
```bash
ssh root@192.168.11.11 "pct exec 7800 -- bash -c 'cd /opt/sankofa-api && git pull && pnpm install && pnpm build && systemctl restart sankofa-api'"
```

**Update Portal:**
```bash
ssh root@192.168.11.11 "pct exec 7801 -- bash -c 'cd /opt/sankofa-portal && git pull && pnpm install && pnpm build && systemctl restart sankofa-portal'"
```

### Backup Database

```bash
ssh root@192.168.11.11 "pct exec 7803 -- bash -c 'PGPASSWORD=your_password pg_dump -h localhost -U sankofa sankofa > /tmp/sankofa_backup_$(date +%Y%m%d).sql'"
```
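A matching restore procedure, sketched as a dry run (the commented commands show one way to run it; stop the API around the restore so writes don't race it):

```bash
# Backup file produced by the pg_dump command above
BACKUP="/tmp/sankofa_backup_$(date +%Y%m%d).sql"
RESTORE_CMD="PGPASSWORD=your_password psql -h localhost -U sankofa -d sankofa < $BACKUP"
echo "$RESTORE_CMD"
# Run it inside the container, with the API stopped around it:
#   ssh root@192.168.11.11 "pct exec 7800 -- systemctl stop sankofa-api"
#   ssh root@192.168.11.11 "pct exec 7803 -- bash -c '$RESTORE_CMD'"
#   ssh root@192.168.11.11 "pct exec 7800 -- systemctl start sankofa-api"
```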
### View Logs

**API Logs:**
```bash
ssh root@192.168.11.11 "pct exec 7800 -- journalctl -u sankofa-api -f"
```

**Portal Logs:**
```bash
ssh root@192.168.11.11 "pct exec 7801 -- journalctl -u sankofa-portal -f"
```

---

## Support

For issues or questions:
1. Check logs using the troubleshooting commands above
2. Review the deployment scripts for configuration
3. Verify network connectivity between containers
4. Check Proxmox storage and resource availability

---

**Last Updated:** $(date +%Y-%m-%d)
214
scripts/DEPLOYMENT_SUMMARY_R630-01.md
Normal file
@@ -0,0 +1,214 @@
# Sankofa & Phoenix Deployment Summary - r630-01

**Date:** $(date +%Y-%m-%d)
**Status:** ✅ All Deployment Scripts Created
**Target:** r630-01 (192.168.11.11)

---

## ✅ Completed Tasks

### 1. Deployment Scripts Created

- ✅ `deploy-sankofa-r630-01.sh` - Main container deployment script
- ✅ `setup-postgresql-r630-01.sh` - PostgreSQL database setup
- ✅ `setup-keycloak-r630-01.sh` - Keycloak identity service setup
- ✅ `deploy-api-r630-01.sh` - Sankofa API deployment
- ✅ `deploy-portal-r630-01.sh` - Sankofa Portal deployment
- ✅ `run-migrations-r630-01.sh` - Database migration runner

### 2. Configuration Files Created

- ✅ `env.r630-01.example` - Environment configuration template
- ✅ `DEPLOYMENT_README_R630-01.md` - Complete deployment guide

### 3. Fixed Blockers

- ✅ **Fixed:** Deployment script now targets r630-01 instead of pve2
- ✅ **Fixed:** Storage configuration uses `thin1` (available on r630-01)
- ✅ **Fixed:** Network configuration uses static IPs for VLAN 160
- ✅ **Fixed:** Added PostgreSQL container deployment
- ✅ **Fixed:** Added Keycloak configuration scripts
- ✅ **Fixed:** Added service deployment scripts
- ✅ **Fixed:** Added database migration scripts

---

## 📋 Deployment Architecture

### Container Allocation

| VMID | Service | IP Address | Resources |
|------|---------|------------|-----------|
| 7803 | PostgreSQL | 10.160.0.13 | 2GB RAM, 2 cores, 50GB disk |
| 7802 | Keycloak | 10.160.0.12 | 2GB RAM, 2 cores, 30GB disk |
| 7800 | Sankofa API | 10.160.0.10 | 4GB RAM, 4 cores, 50GB disk |
| 7801 | Sankofa Portal | 10.160.0.11 | 4GB RAM, 4 cores, 50GB disk |

### Network Configuration

- **VLAN:** 160
- **Subnet:** 10.160.0.0/22
- **Gateway:** 10.160.0.1
- **Storage:** thin1 (208GB available)

---

## 🚀 Quick Start

### 1. Prepare Configuration

```bash
cd /home/intlc/projects/proxmox/scripts
cp env.r630-01.example .env.r630-01
# Edit .env.r630-01 with your passwords and secrets
```

### 2. Deploy Containers

```bash
./deploy-sankofa-r630-01.sh
```

### 3. Setup Services (in order)

```bash
# 1. Setup PostgreSQL
./setup-postgresql-r630-01.sh

# 2. Setup Keycloak
./setup-keycloak-r630-01.sh

# 3. Deploy API
./deploy-api-r630-01.sh

# 4. Deploy Portal
./deploy-portal-r630-01.sh
```

---

## 📝 Key Changes from Original Script

### Fixed Issues

1. **Target Node:** Changed from `pve2` (192.168.11.12) to `r630-01` (192.168.11.11)
2. **Storage:** Changed from `thin4` to `thin1` (available on r630-01)
3. **Network:** Changed from DHCP to static IP configuration
4. **PostgreSQL:** Added dedicated PostgreSQL container (VMID 7803)
5. **Service Order:** Proper deployment order (PostgreSQL → Keycloak → API → Portal)
6. **Configuration:** Added comprehensive environment configuration
7. **Scripts:** Added individual service setup scripts

### New Features

- ✅ PostgreSQL database setup script
- ✅ Keycloak installation and configuration script
- ✅ API deployment with migrations
- ✅ Portal deployment with Next.js build
- ✅ Database migration runner
- ✅ Comprehensive deployment documentation

---

## 🔧 Configuration Requirements

### Before Deployment

1. **SSH Access:** Ensure SSH access to r630-01
   ```bash
   ssh root@192.168.11.11
   ```

2. **Storage:** Verify thin1 storage is available
   ```bash
   ssh root@192.168.11.11 "pvesm status | grep thin1"
   ```

3. **Network:** Verify VLAN 160 configuration
   ```bash
   ssh root@192.168.11.11 "ip addr show | grep 160"
   ```

4. **Sankofa Project:** Ensure the project is available
   ```bash
   ls -la /home/intlc/projects/Sankofa/api
   ls -la /home/intlc/projects/Sankofa/portal
   ```

### Environment Variables

Update `env.r630-01.example` (or create `.env.r630-01`) with:

- Database passwords
- Keycloak admin password
- Keycloak client secrets
- JWT secrets
- NextAuth secret
- Any other production values

---

## 📊 Deployment Checklist

- [ ] SSH access to r630-01 verified
- [ ] Storage (thin1) verified
- [ ] Network (VLAN 160) configured
- [ ] Sankofa project available
- [ ] Environment configuration prepared
- [ ] Containers deployed (`deploy-sankofa-r630-01.sh`)
- [ ] PostgreSQL setup completed (`setup-postgresql-r630-01.sh`)
- [ ] Keycloak setup completed (`setup-keycloak-r630-01.sh`)
- [ ] API deployed (`deploy-api-r630-01.sh`)
- [ ] Portal deployed (`deploy-portal-r630-01.sh`)
- [ ] All services verified and running
- [ ] Firewall rules configured
- [ ] Cloudflare tunnels configured (if needed)
- [ ] Monitoring configured (if needed)

---

## 🎯 Next Steps

After successful deployment:

1. **Verify Services:**
   - Test API health endpoint
   - Test Portal accessibility
   - Test Keycloak admin console
   - Test database connectivity

2. **Configure External Access:**
   - Set up Cloudflare tunnels
   - Configure DNS records
   - Set up SSL/TLS certificates

3. **Set Up Monitoring:**
   - Configure Prometheus exporters
   - Set up Grafana dashboards
   - Configure alerting rules

4. **Documentation:**
   - Document all passwords and secrets securely
   - Create operational runbooks
   - Document backup and restore procedures

---

## 📚 Documentation

- **Deployment Guide:** `DEPLOYMENT_README_R630-01.md`
- **Environment Template:** `env.r630-01.example`
- **Scripts Location:** `/home/intlc/projects/proxmox/scripts/`

---

## ✅ Status

**All blockers fixed and deployment scripts created!**

The deployment is ready to proceed. Follow the steps in `DEPLOYMENT_README_R630-01.md` to deploy Sankofa and Phoenix to r630-01.

---
**Generated:** $(date '+%Y-%m-%d %H:%M:%S')
44
scripts/INSTALL_TUNNEL.sh
Executable file
@@ -0,0 +1,44 @@
#!/bin/bash
# Quick script to install Cloudflare Tunnel service
# Usage: ./INSTALL_TUNNEL.sh <TUNNEL_TOKEN>

if [ -z "$1" ]; then
    echo "Error: Tunnel token required!"
    echo ""
    echo "Usage: $0 <TUNNEL_TOKEN>"
    echo ""
    echo "Get your token from Cloudflare Dashboard:"
    echo "  Zero Trust → Networks → Tunnels → Create tunnel → Copy token"
    exit 1
fi

TUNNEL_TOKEN="$1"
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
CLOUDFLARED_VMID="${CLOUDFLARED_VMID:-102}"

echo "Installing Cloudflare Tunnel service..."
echo "Container: VMID $CLOUDFLARED_VMID"

# Stop existing DoH service if running
ssh root@${PROXMOX_HOST} "pct exec $CLOUDFLARED_VMID -- systemctl stop cloudflared 2>/dev/null || true"

# Install tunnel service
ssh root@${PROXMOX_HOST} "pct exec $CLOUDFLARED_VMID -- cloudflared service install $TUNNEL_TOKEN"

# Enable and start
ssh root@${PROXMOX_HOST} "pct exec $CLOUDFLARED_VMID -- systemctl enable cloudflared"
ssh root@${PROXMOX_HOST} "pct exec $CLOUDFLARED_VMID -- systemctl start cloudflared"

# Check status
echo ""
echo "Checking tunnel status..."
ssh root@${PROXMOX_HOST} "pct exec $CLOUDFLARED_VMID -- systemctl status cloudflared --no-pager | head -10"

echo ""
echo "✅ Tunnel service installed!"
echo ""
echo "Next steps:"
echo "1. Configure routes in Cloudflare Dashboard"
echo "2. Update DNS records to CNAME pointing to tunnel"
echo "3. See: docs/04-configuration/CLOUDFLARE_TUNNEL_QUICK_SETUP.md"
26
scripts/QUICK_SSH_SETUP.sh
Executable file
@@ -0,0 +1,26 @@
#!/bin/bash
# Quick SSH key setup for Proxmox deployment

PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"

echo "Setting up SSH key for Proxmox host..."

# Check if key exists
if [ ! -f ~/.ssh/id_ed25519 ]; then
    echo "Generating SSH key..."
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N "" -C "proxmox-deployment"
fi

echo "Copying SSH key to Proxmox host..."
echo "You will be prompted for the root password:"
ssh-copy-id -i ~/.ssh/id_ed25519.pub root@"${PROXMOX_HOST}"

echo ""
echo "Testing SSH connection..."
if ssh -o BatchMode=yes -o ConnectTimeout=5 root@"${PROXMOX_HOST}" "echo 'SSH key working'" 2>/dev/null; then
    echo "✅ SSH key setup successful!"
    echo "You can now run deployment without password prompts:"
    echo "  ./scripts/deploy-to-proxmox-host.sh"
else
    echo "⚠️  SSH key may not be working. You'll need to enter the password during deployment."
fi
127
scripts/analyze-all-domains.sh
Executable file
@@ -0,0 +1,127 @@
#!/bin/bash
# Analyze all Cloudflare domains for tunnel configurations and issues

set -e

echo "═══════════════════════════════════════════════════════════"
echo "  Cloudflare Domains Analysis"
echo "═══════════════════════════════════════════════════════════"
echo ""

DOMAINS=(
    "commcourts.org"
    "d-bis.org"
    "defi-oracle.io"
    "ibods.org"
    "mim4u.org"
    "sankofa.nexus"
)

echo "Domains to analyze:"
for domain in "${DOMAINS[@]}"; do
    echo "  - $domain"
done
echo ""

# Check if Cloudflare API credentials are available.
# Either an API token, or the email + global key pair, is sufficient;
# the braces fix the && / || precedence of the original check.
if [ -z "$CLOUDFLARE_API_TOKEN" ] && { [ -z "$CLOUDFLARE_EMAIL" ] || [ -z "$CLOUDFLARE_API_KEY" ]; }; then
    echo "⚠️  Cloudflare API credentials not found in environment"
    echo ""
    echo "To use API analysis, set:"
    echo "  export CLOUDFLARE_API_TOKEN=your-token"
    echo "  # OR"
    echo "  export CLOUDFLARE_EMAIL=your-email"
    echo "  export CLOUDFLARE_API_KEY=your-key"
    echo ""
    echo "Continuing with DNS-based analysis..."
    echo ""
fi

# Analyze each domain
for domain in "${DOMAINS[@]}"; do
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo "Analyzing: $domain"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo ""

    # Check DNS records
    echo "DNS Records:"
    if command -v dig &> /dev/null; then
        # Get NS records
        NS_RECORDS=$(dig +short NS "$domain" 2>/dev/null | head -2)
        if [ -n "$NS_RECORDS" ]; then
            echo "  Name Servers:"
            echo "$NS_RECORDS" | while read -r ns; do
                echo "    - $ns"
            done
        fi

        # Get A records
        A_RECORDS=$(dig +short A "$domain" 2>/dev/null)
        if [ -n "$A_RECORDS" ]; then
            echo "  A Records:"
            echo "$A_RECORDS" | while read -r ip; do
                echo "    - $ip"
            done
        fi

        # Get CNAME records ('dig +short' strips record types, so query
        # the full answer section before grepping for CNAME)
        CNAME_COUNT=$(dig +noall +answer "$domain" ANY 2>/dev/null | grep -c "CNAME" || true)
        if [ "${CNAME_COUNT:-0}" -gt 0 ]; then
            echo "  CNAME Records: $CNAME_COUNT found"
        fi
    else
        echo "  ⚠️  'dig' not available - install bind-utils or dnsutils"
    fi

    echo ""

    # Check for tunnel references
    echo "Tunnel Analysis:"
    case "$domain" in
        "d-bis.org")
            echo "  ✅ Analyzed - See DNS_ANALYSIS.md"
            echo "  ⚠️  Issues: Shared tunnel down, low TTL"
            ;;
        "mim4u.org")
            echo "  ⚠️  CONFLICT: Also exists as subdomain mim4u.org.d-bis.org"
            echo "  Action: Resolve naming conflict"
            ;;
        "sankofa.nexus")
            echo "  ℹ️  Matches infrastructure naming"
            echo "  Potential: Infrastructure management domain"
            ;;
        *)
            echo "  ❓ Not yet analyzed"
            ;;
    esac

    echo ""

    # Check if domain is accessible ('-w' prints 000 on connection failure,
    # so capture it directly instead of appending a second 000 via echo)
    echo "Connectivity:"
    if command -v curl &> /dev/null; then
        HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 "https://$domain" 2>/dev/null) || true
        HTTP_CODE="${HTTP_CODE:-000}"
        if [ "$HTTP_CODE" != "000" ]; then
            echo "  ✅ HTTPS accessible (HTTP $HTTP_CODE)"
        else
            echo "  ⚠️  HTTPS not accessible or timeout"
        fi
    else
        echo "  ⚠️  'curl' not available"
    fi

    echo ""
    echo ""
done

echo "═══════════════════════════════════════════════════════════"
echo "  Analysis Complete"
echo "═══════════════════════════════════════════════════════════"
echo ""
echo "Next Steps:"
echo "  1. Review ALL_DOMAINS_ANALYSIS.md for detailed findings"
echo "  2. Fix d-bis.org issues: ./fix-shared-tunnel.sh"
echo "  3. Resolve mim4u.org conflict"
echo "  4. Analyze remaining domains in Cloudflare Dashboard"
echo ""
29
scripts/check-r630-04-commands.sh
Executable file
@@ -0,0 +1,29 @@
#!/bin/bash
# Commands to run on R630-04 (192.168.11.14) to check Proxmox status
# Run these commands while logged into R630-04

echo "=== Hostname ==="
hostname
cat /etc/hostname

echo -e "\n=== Proxmox Version ==="
pveversion 2>&1 || echo "Proxmox not installed"

echo -e "\n=== Proxmox Web Service (pveproxy) Status ==="
systemctl status pveproxy --no-pager -l 2>&1 | head -20

echo -e "\n=== Port 8006 Listening ==="
ss -tlnp 2>/dev/null | grep 8006 || netstat -tlnp 2>/dev/null | grep 8006 || echo "Port 8006 not listening"

echo -e "\n=== All Proxmox Services Status ==="
systemctl list-units --type=service --all 2>/dev/null | grep -E 'pveproxy|pvedaemon|pve-cluster|pvestatd'

echo -e "\n=== Proxmox Services Enabled ==="
systemctl list-unit-files 2>/dev/null | grep -i proxmox

echo -e "\n=== Network Interfaces ==="
ip addr show | grep -E 'inet.*192.168.11'

echo -e "\n=== Firewall Status ==="
systemctl status pve-firewall 2>&1 | head -10 || echo "pve-firewall service not found"
23
scripts/connect-to-r630-04-from-r630-03.sh
Executable file
@@ -0,0 +1,23 @@
#!/bin/bash
# Connect to R630-04 from R630-03 (which we know works)
# This helps rule out network/SSH client issues

echo "Connecting to R630-03 first..."
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@192.168.11.13 << 'EOF'
echo "=== Connected to R630-03 ($(hostname)) ==="
echo ""
echo "Now attempting to connect to R630-04..."
echo ""

# Try verbose SSH to see what's happening
ssh -v root@192.168.11.14 << 'R63004'
echo "=== Successfully connected to R630-04 ==="
hostname
pveversion
systemctl status pveproxy --no-pager | head -20
R63004

echo ""
echo "=== Connection attempt complete ==="
EOF
227
scripts/deploy-api-r630-01.sh
Executable file
@@ -0,0 +1,227 @@
#!/usr/bin/env bash
# Deploy Sankofa API to r630-01
# VMID: 7800, IP: 10.160.0.10

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
SANKOFA_PROJECT="${SANKOFA_PROJECT:-/home/intlc/projects/Sankofa}"
# Prefer the real environment file; fall back to the template defaults
source "$SCRIPT_DIR/.env.r630-01" 2>/dev/null || source "$SCRIPT_DIR/env.r630-01.example" 2>/dev/null || true

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }

# Configuration
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.11}"
VMID="${VMID_SANKOFA_API:-7800}"
CONTAINER_IP="${SANKOFA_API_IP:-10.160.0.10}"
DB_HOST="${DB_HOST:-10.160.0.13}"
DB_PORT="${DB_PORT:-5432}"
DB_NAME="${DB_NAME:-sankofa}"
DB_USER="${DB_USER:-sankofa}"
DB_PASSWORD="${DB_PASSWORD:-}"
KEYCLOAK_URL="${KEYCLOAK_URL:-http://10.160.0.12:8080}"
KEYCLOAK_REALM="${KEYCLOAK_REALM:-master}"
KEYCLOAK_CLIENT_ID="${KEYCLOAK_CLIENT_ID_API:-sankofa-api}"
KEYCLOAK_CLIENT_SECRET="${KEYCLOAK_CLIENT_SECRET_API:-}"
JWT_SECRET="${JWT_SECRET:-$(openssl rand -base64 32)}"
NODE_VERSION="18"

# SSH function
ssh_r630_01() {
    ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$PROXMOX_HOST" "$@"
}

# Run a shell command inside the container. printf %q re-quotes the command
# so it survives both the ssh layer and pct exec (a bare "$*" would drop the
# inner quoting and pass only the first word to bash -c).
exec_container() {
    ssh_r630_01 "pct exec $VMID -- bash -c $(printf '%q' "$*")"
}

main() {
    echo ""
    log_info "========================================="
    log_info "Sankofa API Deployment"
    log_info "========================================="
    echo ""
    log_info "Container VMID: $VMID"
    log_info "Container IP: $CONTAINER_IP"
    echo ""

    # Check if container exists and is running
    log_info "Checking container status..."
    if ! ssh_r630_01 "pct status $VMID >/dev/null 2>&1"; then
        log_error "Container $VMID does not exist. Please run deploy-sankofa-r630-01.sh first."
        exit 1
    fi

    # Declare and assign separately so a failed substitution isn't masked
    local status
    status=$(ssh_r630_01 "pct status $VMID" 2>/dev/null | awk '{print $2}' || echo "stopped")
    if [[ "$status" != "running" ]]; then
        log_info "Starting container $VMID..."
        ssh_r630_01 "pct start $VMID"
        sleep 5
    fi
    log_success "Container is running"
    echo ""

    # Install Node.js
    log_info "Installing Node.js $NODE_VERSION..."
    exec_container "export DEBIAN_FRONTEND=noninteractive && \
        curl -fsSL https://deb.nodesource.com/setup_${NODE_VERSION}.x | bash - && \
        apt-get install -y -qq nodejs"

    # Install pnpm
    log_info "Installing pnpm..."
    exec_container "npm install -g pnpm"

    log_success "Node.js and pnpm installed"
    echo ""

    # Copy project files
    log_info "Copying Sankofa API project files..."
    if [[ ! -d "$SANKOFA_PROJECT/api" ]]; then
        log_error "Sankofa project not found at $SANKOFA_PROJECT"
        log_info "Please ensure the Sankofa project is available"
        exit 1
    fi

    # Create app directory
    exec_container "mkdir -p /opt/sankofa-api"

    # Copy API directory (pct push handles single files only, so stream
    # the directory through tar instead)
    log_info "Copying files to container..."
    tar -C "$SANKOFA_PROJECT/api" -cf - . | \
        ssh_r630_01 "pct exec $VMID -- tar -xf - -C /opt/sankofa-api"

    log_success "Project files copied"
    echo ""

    # Install dependencies
    log_info "Installing dependencies..."
    exec_container "cd /opt/sankofa-api && pnpm install --frozen-lockfile"

    log_success "Dependencies installed"
    echo ""

    # Create environment file
    log_info "Creating environment configuration..."
    exec_container "cat > /opt/sankofa-api/.env << EOF
# Database
DB_HOST=$DB_HOST
DB_PORT=$DB_PORT
DB_NAME=$DB_NAME
DB_USER=$DB_USER
DB_PASSWORD=$DB_PASSWORD

# Keycloak
KEYCLOAK_URL=$KEYCLOAK_URL
KEYCLOAK_REALM=$KEYCLOAK_REALM
KEYCLOAK_CLIENT_ID=$KEYCLOAK_CLIENT_ID
KEYCLOAK_CLIENT_SECRET=$KEYCLOAK_CLIENT_SECRET
KEYCLOAK_MULTI_REALM=false

# API
API_PORT=4000
JWT_SECRET=$JWT_SECRET
NODE_ENV=production

# Multi-Tenancy
ENABLE_MULTI_TENANT=true

# Billing
BILLING_GRANULARITY=SECOND
BLOCKCHAIN_BILLING_ENABLED=false
BLOCKCHAIN_IDENTITY_ENABLED=false
EOF"

    log_success "Environment configured"
    echo ""

    # Run database migrations
    log_info "Running database migrations..."
    exec_container "cd /opt/sankofa-api && pnpm db:migrate" || log_warn "Migrations may have failed - check database connection"
    echo ""

    # Build API
    log_info "Building API..."
    exec_container "cd /opt/sankofa-api && pnpm build"

    log_success "API built"
    echo ""

    # Create systemd service
    log_info "Creating systemd service..."
    exec_container "cat > /etc/systemd/system/sankofa-api.service << 'EOF'
[Unit]
Description=Sankofa API Server
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/sankofa-api
Environment=\"NODE_ENV=production\"
EnvironmentFile=/opt/sankofa-api/.env
ExecStart=/usr/bin/node /opt/sankofa-api/dist/server.js
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
EOF"

    # Start service
    log_info "Starting API service..."
    exec_container "systemctl daemon-reload && \
        systemctl enable sankofa-api && \
        systemctl start sankofa-api"

    sleep 5

    # Check service status
    if exec_container "systemctl is-active --quiet sankofa-api"; then
        log_success "API service is running"
    else
        log_error "API service failed to start"
        exec_container "journalctl -u sankofa-api -n 50 --no-pager"
        exit 1
    fi
    echo ""

    # Test API health
    log_info "Testing API health endpoint..."
    sleep 5
    if exec_container "curl -s -f http://localhost:4000/health >/dev/null 2>&1"; then
        log_success "API health check passed"
    else
        log_warn "API health check failed - service may still be starting"
    fi
    echo ""

    # Summary
    log_success "========================================="
    log_success "Sankofa API Deployment Complete"
    log_success "========================================="
    echo ""
    log_info "API Configuration:"
|
||||
echo " URL: http://$CONTAINER_IP:4000"
|
||||
echo " GraphQL: http://$CONTAINER_IP:4000/graphql"
|
||||
echo " Health: http://$CONTAINER_IP:4000/health"
|
||||
echo ""
|
||||
log_info "Next steps:"
|
||||
echo " 1. Verify API is accessible: curl http://$CONTAINER_IP:4000/health"
|
||||
echo " 2. Run: ./scripts/deploy-portal-r630-01.sh"
|
||||
echo ""
|
||||
}
|
||||
|
||||
main "$@"
|
||||
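The deployment above relies on fixed `sleep 5` pauses before probing the `/health` endpoint, which races against slow starts. A small retry helper makes the probe deterministic; this is a sketch, and the `wait_for` name and arguments are illustrative rather than part of these scripts:

```shell
#!/usr/bin/env bash
# wait_for: retry a command until it succeeds or the attempt budget runs out
wait_for() {
  local tries=$1; shift
  local i
  for ((i = 1; i <= tries; i++)); do
    "$@" && return 0   # success: stop retrying
    sleep 1            # back off before the next attempt
  done
  return 1             # all attempts failed
}

# `true` succeeds on the first attempt, so this prints the success line
wait_for 3 true && echo "service is up"
```

In the deploy script this could replace the `sleep 5` + single `curl`, e.g. `wait_for 12 exec_container bash -c "curl -sf http://localhost:4000/health"`.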
217
scripts/deploy-portal-r630-01.sh
Executable file
@@ -0,0 +1,217 @@
#!/usr/bin/env bash
# Deploy Sankofa Portal to r630-01
# VMID: 7801, IP: 10.160.0.11

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
SANKOFA_PROJECT="${SANKOFA_PROJECT:-/home/intlc/projects/Sankofa}"
# Prefer the local override created from env.r630-01.example (see that file's header)
source "$SCRIPT_DIR/.env.r630-01" 2>/dev/null || source "$SCRIPT_DIR/env.r630-01.example" 2>/dev/null || true

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }

# Configuration
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.11}"
VMID="${VMID_SANKOFA_PORTAL:-7801}"
CONTAINER_IP="${SANKOFA_PORTAL_IP:-10.160.0.11}"
API_URL="${NEXT_PUBLIC_GRAPHQL_ENDPOINT:-http://10.160.0.10:4000/graphql}"
API_WS_URL="${NEXT_PUBLIC_GRAPHQL_WS_ENDPOINT:-ws://10.160.0.10:4000/graphql-ws}"
KEYCLOAK_URL="${KEYCLOAK_URL:-http://10.160.0.12:8080}"
KEYCLOAK_REALM="${KEYCLOAK_REALM:-master}"
KEYCLOAK_CLIENT_ID="${KEYCLOAK_CLIENT_ID_PORTAL:-portal-client}"
KEYCLOAK_CLIENT_SECRET="${KEYCLOAK_CLIENT_SECRET_PORTAL:-}"
NEXTAUTH_SECRET="${NEXTAUTH_SECRET:-$(openssl rand -base64 32)}"
NODE_VERSION="18"

# SSH function
ssh_r630_01() {
    ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$PROXMOX_HOST" "$@"
}

# Execute command in container
# Quote each argument with printf %q so multi-line commands (heredocs) survive
# the extra shell layer introduced by ssh
exec_container() {
    ssh_r630_01 "pct exec $VMID -- $(printf '%q ' "$@")"
}

main() {
    echo ""
    log_info "========================================="
    log_info "Sankofa Portal Deployment"
    log_info "========================================="
    echo ""
    log_info "Container VMID: $VMID"
    log_info "Container IP: $CONTAINER_IP"
    echo ""

    # Check if container exists and is running
    log_info "Checking container status..."
    if ! ssh_r630_01 "pct status $VMID >/dev/null 2>&1"; then
        log_error "Container $VMID does not exist. Please run deploy-sankofa-r630-01.sh first."
        exit 1
    fi

    local status
    status=$(ssh_r630_01 "pct status $VMID" 2>/dev/null | awk '{print $2}' || echo "stopped")
    if [[ "$status" != "running" ]]; then
        log_info "Starting container $VMID..."
        ssh_r630_01 "pct start $VMID"
        sleep 5
    fi
    log_success "Container is running"
    echo ""

    # Install Node.js
    log_info "Installing Node.js $NODE_VERSION..."
    exec_container bash -c "export DEBIAN_FRONTEND=noninteractive && \
        curl -fsSL https://deb.nodesource.com/setup_${NODE_VERSION}.x | bash - && \
        apt-get install -y -qq nodejs"

    # Install pnpm
    log_info "Installing pnpm..."
    exec_container bash -c "npm install -g pnpm"

    log_success "Node.js and pnpm installed"
    echo ""

    # Copy project files
    log_info "Copying Sankofa Portal project files..."
    if [[ ! -d "$SANKOFA_PROJECT/portal" ]]; then
        log_error "Sankofa portal project not found at $SANKOFA_PROJECT/portal"
        log_info "Please ensure the Sankofa project is available"
        exit 1
    fi

    # Create app directory
    exec_container bash -c "mkdir -p /opt/sankofa-portal"

    # Copy portal directory
    log_info "Copying files to container..."
    # NOTE: `pct push` copies single files only (no recursive option), so stream the tree with tar
    tar -C "$SANKOFA_PROJECT" -cf - portal | ssh_r630_01 "pct exec $VMID -- tar -xf - -C /opt/sankofa-portal --strip-components=1"

    log_success "Project files copied"
    echo ""

    # Install dependencies
    log_info "Installing dependencies..."
    exec_container bash -c "cd /opt/sankofa-portal && pnpm install --frozen-lockfile"

    log_success "Dependencies installed"
    echo ""

    # Create environment file
    log_info "Creating environment configuration..."
    exec_container bash -c "cat > /opt/sankofa-portal/.env.local << EOF
# Keycloak
KEYCLOAK_URL=$KEYCLOAK_URL
KEYCLOAK_REALM=$KEYCLOAK_REALM
KEYCLOAK_CLIENT_ID=$KEYCLOAK_CLIENT_ID
KEYCLOAK_CLIENT_SECRET=$KEYCLOAK_CLIENT_SECRET

# API
NEXT_PUBLIC_GRAPHQL_ENDPOINT=$API_URL
NEXT_PUBLIC_GRAPHQL_WS_ENDPOINT=$API_WS_URL

# NextAuth
NEXTAUTH_URL=http://$CONTAINER_IP:3000
NEXTAUTH_SECRET=$NEXTAUTH_SECRET

# Crossplane (if available)
NEXT_PUBLIC_CROSSPLANE_API=${NEXT_PUBLIC_CROSSPLANE_API:-http://crossplane.sankofa.nexus}
NEXT_PUBLIC_ARGOCD_URL=${NEXT_PUBLIC_ARGOCD_URL:-http://argocd.sankofa.nexus}
NEXT_PUBLIC_GRAFANA_URL=${NEXT_PUBLIC_GRAFANA_URL:-http://grafana.sankofa.nexus}
NEXT_PUBLIC_LOKI_URL=${NEXT_PUBLIC_LOKI_URL:-http://loki.sankofa.nexus:3100}

# App
NEXT_PUBLIC_APP_URL=http://$CONTAINER_IP:3000
NODE_ENV=production
EOF"

    log_success "Environment configured"
    echo ""

    # Build Portal
    log_info "Building Portal (this may take several minutes)..."
    exec_container bash -c "cd /opt/sankofa-portal && pnpm build"

    log_success "Portal built"
    echo ""

    # Create systemd service
    log_info "Creating systemd service..."
    exec_container bash -c "cat > /etc/systemd/system/sankofa-portal.service << 'EOF'
[Unit]
Description=Sankofa Portal
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/sankofa-portal
Environment=\"NODE_ENV=production\"
EnvironmentFile=/opt/sankofa-portal/.env.local
ExecStart=/usr/bin/node /opt/sankofa-portal/node_modules/.bin/next start
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
EOF"

    # Start service
    log_info "Starting Portal service..."
    exec_container bash -c "systemctl daemon-reload && \
        systemctl enable sankofa-portal && \
        systemctl start sankofa-portal"

    sleep 10

    # Check service status
    if exec_container bash -c "systemctl is-active --quiet sankofa-portal"; then
        log_success "Portal service is running"
    else
        log_error "Portal service failed to start"
        exec_container bash -c "journalctl -u sankofa-portal -n 50 --no-pager"
        exit 1
    fi
    echo ""

    # Test Portal
    log_info "Testing Portal..."
    sleep 5
    if exec_container bash -c "curl -s -f http://localhost:3000 >/dev/null 2>&1"; then
        log_success "Portal is accessible"
    else
        log_warn "Portal may still be starting - check logs if issues persist"
    fi
    echo ""

    # Summary
    log_success "========================================="
    log_success "Sankofa Portal Deployment Complete"
    log_success "========================================="
    echo ""
    log_info "Portal Configuration:"
    echo "  URL:      http://$CONTAINER_IP:3000"
    echo "  API:      $API_URL"
    echo "  Keycloak: $KEYCLOAK_URL"
    echo ""
    log_info "Next steps:"
    echo "  1. Verify Portal is accessible: curl http://$CONTAINER_IP:3000"
    echo "  2. Configure firewall rules if needed"
    echo "  3. Set up Cloudflare tunnels for external access"
    echo ""
}

main "$@"
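The portal script derives container state by taking the second whitespace-separated field of `pct status` output, which Proxmox prints as `status: running`. That parsing step can be exercised in isolation; the sample line below mimics real `pct status` output:

```shell
#!/usr/bin/env bash
# `pct status <vmid>` prints a line like "status: running";
# the deploy scripts extract the state word with awk.
status_line='status: running'
status=$(awk '{print $2}' <<<"$status_line")
echo "$status"   # prints: running
```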
268
scripts/deploy-sankofa-r630-01.sh
Executable file
@@ -0,0 +1,268 @@
#!/usr/bin/env bash
# Deploy Sankofa Services to r630-01
# Sankofa/Phoenix/PanTel service layer on VLAN 160 (10.160.0.0/22)
# VMID Range: 7800-8999

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
SANKOFA_PROJECT="/home/intlc/projects/Sankofa"

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }

# Configuration
PROXMOX_NODE="r630-01"
PROXMOX_HOST="192.168.11.11"
# r630-01 has: local, local-lvm, thin1 available
PROXMOX_STORAGE="${PROXMOX_STORAGE:-thin1}"
CONTAINER_OS_TEMPLATE="local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst"

# Sankofa Configuration
SANKOFA_VLAN="160"
SANKOFA_SUBNET="10.160.0.0/22"
SANKOFA_GATEWAY="10.160.0.1"

# VMID Allocation (Sankofa range: 7800-8999)
VMID_SANKOFA_POSTGRES=7803
VMID_SANKOFA_API=7800
VMID_SANKOFA_PORTAL=7801
VMID_SANKOFA_KEYCLOAK=7802

# Service IPs (VLAN 160)
SANKOFA_POSTGRES_IP="10.160.0.13"
SANKOFA_API_IP="10.160.0.10"
SANKOFA_PORTAL_IP="10.160.0.11"
SANKOFA_KEYCLOAK_IP="10.160.0.12"

# Resource allocation
SANKOFA_POSTGRES_MEMORY="2048"  # 2GB
SANKOFA_POSTGRES_CORES="2"
SANKOFA_POSTGRES_DISK="50"      # 50GB

SANKOFA_API_MEMORY="4096"       # 4GB
SANKOFA_API_CORES="4"
SANKOFA_API_DISK="50"           # 50GB

SANKOFA_PORTAL_MEMORY="4096"    # 4GB
SANKOFA_PORTAL_CORES="4"
SANKOFA_PORTAL_DISK="50"        # 50GB

SANKOFA_KEYCLOAK_MEMORY="2048"  # 2GB
SANKOFA_KEYCLOAK_CORES="2"
SANKOFA_KEYCLOAK_DISK="30"      # 30GB

# SSH function
ssh_r630_01() {
    ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$PROXMOX_HOST" "$@"
}

# Check if container exists
container_exists() {
    local vmid=$1
    ssh_r630_01 "pct list 2>/dev/null | grep -q '^\s*$vmid\s'" 2>/dev/null
}

# Get container IP address
get_container_ip() {
    local vmid=$1
    ssh_r630_01 "pct exec $vmid -- ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}'" 2>/dev/null || echo ""
}

# Create Sankofa container
create_sankofa_container() {
    local vmid=$1
    local hostname=$2
    local ip_address=$3
    local memory=$4
    local cores=$5
    local disk=$6
    local service_type=$7

    log_info "Creating Sankofa $service_type: $hostname (VMID: $vmid, IP: $ip_address)"

    if container_exists "$vmid"; then
        log_warn "Container $vmid ($hostname) already exists, skipping creation"
        return 0
    fi

    # Network configuration - use static IP for VLAN 160
    # Note: For unprivileged containers, VLAN tagging may need bridge configuration
    local network_config="bridge=vmbr0,name=eth0,ip=${ip_address}/22,gw=${SANKOFA_GATEWAY},type=veth"

    log_info "Creating container $vmid on $PROXMOX_NODE..."
    ssh_r630_01 "pct create $vmid \
        $CONTAINER_OS_TEMPLATE \
        --storage $PROXMOX_STORAGE \
        --hostname $hostname \
        --memory $memory \
        --cores $cores \
        --rootfs $PROXMOX_STORAGE:$disk \
        --net0 '$network_config' \
        --unprivileged 1 \
        --swap 512 \
        --onboot 1 \
        --timezone America/Los_Angeles \
        --features nesting=1,keyctl=1" 2>&1

    if container_exists "$vmid"; then
        log_success "Container $vmid created successfully"

        # Start container
        log_info "Starting container $vmid..."
        ssh_r630_01 "pct start $vmid" 2>&1 || true

        # Wait for container to be ready
        log_info "Waiting for container to be ready..."
        sleep 10

        # Basic setup
        log_info "Configuring container $vmid..."
        ssh_r630_01 "pct exec $vmid -- bash -c 'export DEBIAN_FRONTEND=noninteractive; apt-get update -qq && apt-get install -y -qq curl wget git build-essential sudo'" 2>&1 | grep -vE "(perl: warning|locale:)" || true

        log_success "Sankofa $service_type container $vmid ($hostname) deployed successfully"
        return 0
    else
        log_error "Failed to create container $vmid"
        return 1
    fi
}

# Main deployment
main() {
    echo ""
    log_info "========================================="
    log_info "Sankofa Deployment to r630-01"
    log_info "========================================="
    echo ""
    log_info "Target Node: $PROXMOX_NODE ($PROXMOX_HOST)"
    log_info "Storage: $PROXMOX_STORAGE"
    log_info "VLAN: $SANKOFA_VLAN ($SANKOFA_SUBNET)"
    log_info "VMID Range: 7800-8999"
    echo ""

    # Check connectivity to r630-01
    log_info "Checking connectivity to $PROXMOX_NODE..."
    if ! ssh_r630_01 "pvecm status >/dev/null 2>&1"; then
        log_error "Cannot connect to $PROXMOX_NODE. Please check SSH access."
        exit 1
    fi
    log_success "Connected to $PROXMOX_NODE"
    echo ""

    # Check if containers already exist
    log_info "Checking existing Sankofa containers..."
    existing_containers=()
    if container_exists "$VMID_SANKOFA_POSTGRES"; then
        existing_containers+=("$VMID_SANKOFA_POSTGRES:sankofa-postgres-1")
        log_warn "Container $VMID_SANKOFA_POSTGRES (sankofa-postgres-1) already exists"
    fi
    if container_exists "$VMID_SANKOFA_API"; then
        existing_containers+=("$VMID_SANKOFA_API:sankofa-api-1")
        log_warn "Container $VMID_SANKOFA_API (sankofa-api-1) already exists"
    fi
    if container_exists "$VMID_SANKOFA_PORTAL"; then
        existing_containers+=("$VMID_SANKOFA_PORTAL:sankofa-portal-1")
        log_warn "Container $VMID_SANKOFA_PORTAL (sankofa-portal-1) already exists"
    fi
    if container_exists "$VMID_SANKOFA_KEYCLOAK"; then
        existing_containers+=("$VMID_SANKOFA_KEYCLOAK:sankofa-keycloak-1")
        log_warn "Container $VMID_SANKOFA_KEYCLOAK (sankofa-keycloak-1) already exists"
    fi

    if [[ ${#existing_containers[@]} -gt 0 ]]; then
        log_warn "Some Sankofa containers already exist:"
        for container in "${existing_containers[@]}"; do
            echo "  - $container"
        done
        echo ""
        read -p "Continue with deployment? (y/N): " -n 1 -r
        echo
        if [[ ! $REPLY =~ ^[Yy]$ ]]; then
            log_info "Deployment cancelled"
            exit 0
        fi
    fi
    echo ""

    # Deploy PostgreSQL first (required by other services)
    log_info "Deploying PostgreSQL database..."
    create_sankofa_container \
        "$VMID_SANKOFA_POSTGRES" \
        "sankofa-postgres-1" \
        "$SANKOFA_POSTGRES_IP" \
        "$SANKOFA_POSTGRES_MEMORY" \
        "$SANKOFA_POSTGRES_CORES" \
        "$SANKOFA_POSTGRES_DISK" \
        "PostgreSQL"
    echo ""

    # Deploy Keycloak (required by API and Portal)
    log_info "Deploying Keycloak identity service..."
    create_sankofa_container \
        "$VMID_SANKOFA_KEYCLOAK" \
        "sankofa-keycloak-1" \
        "$SANKOFA_KEYCLOAK_IP" \
        "$SANKOFA_KEYCLOAK_MEMORY" \
        "$SANKOFA_KEYCLOAK_CORES" \
        "$SANKOFA_KEYCLOAK_DISK" \
        "Keycloak"
    echo ""

    # Deploy Sankofa API
    log_info "Deploying Sankofa API service..."
    create_sankofa_container \
        "$VMID_SANKOFA_API" \
        "sankofa-api-1" \
        "$SANKOFA_API_IP" \
        "$SANKOFA_API_MEMORY" \
        "$SANKOFA_API_CORES" \
        "$SANKOFA_API_DISK" \
        "API"
    echo ""

    # Deploy Sankofa Portal
    log_info "Deploying Sankofa Portal service..."
    create_sankofa_container \
        "$VMID_SANKOFA_PORTAL" \
        "sankofa-portal-1" \
        "$SANKOFA_PORTAL_IP" \
        "$SANKOFA_PORTAL_MEMORY" \
        "$SANKOFA_PORTAL_CORES" \
        "$SANKOFA_PORTAL_DISK" \
        "Portal"
    echo ""

    # Summary
    log_success "========================================="
    log_success "Sankofa Container Deployment Complete"
    log_success "========================================="
    echo ""
    log_info "Deployed containers on $PROXMOX_NODE:"
    echo "  - VMID $VMID_SANKOFA_POSTGRES: sankofa-postgres-1 ($SANKOFA_POSTGRES_IP)"
    echo "  - VMID $VMID_SANKOFA_KEYCLOAK: sankofa-keycloak-1 ($SANKOFA_KEYCLOAK_IP)"
    echo "  - VMID $VMID_SANKOFA_API: sankofa-api-1 ($SANKOFA_API_IP)"
    echo "  - VMID $VMID_SANKOFA_PORTAL: sankofa-portal-1 ($SANKOFA_PORTAL_IP)"
    echo ""
    log_info "Next steps:"
    echo "  1. Run: ./scripts/setup-postgresql-r630-01.sh"
    echo "  2. Run: ./scripts/setup-keycloak-r630-01.sh"
    echo "  3. Run: ./scripts/deploy-api-r630-01.sh"
    echo "  4. Run: ./scripts/deploy-portal-r630-01.sh"
    echo "  5. Configure networking and firewall rules"
    echo "  6. Set up Cloudflare tunnels for external access"
    echo ""
}

# Run main function
main "$@"
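`container_exists` greps `pct list` output for a line beginning with the VMID; anchoring the pattern as `^\s*$vmid\s` matters because a bare `grep 7800` would also match a VMID like 17800 or a name containing those digits. A self-contained sketch of that matching logic against sample `pct list` output:

```shell
#!/usr/bin/env bash
# Sample `pct list` output: a header row plus one container line
pct_list_output='VMID       Status     Lock         Name
7800       running                 sankofa-api-1'

# Anchored pattern: optional leading whitespace, the exact VMID, then whitespace
if grep -q '^\s*7800\s' <<<"$pct_list_output"; then
  echo "container 7800 exists"
fi

# A VMID prefix must not match: "780" is followed by "0", not whitespace
grep -q '^\s*780\s' <<<"$pct_list_output" || echo "no container 780"
```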
146
scripts/diagnose-tunnels.sh
Executable file
@@ -0,0 +1,146 @@
#!/bin/bash
# Diagnose all Cloudflare tunnels - identify why they're DOWN

set -e

PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.12}"
VMID="${VMID:-102}"

echo "═══════════════════════════════════════════════════════════"
echo "  Cloudflare Tunnels Diagnostic"
echo "═══════════════════════════════════════════════════════════"
echo ""
echo "Target: VMID ${VMID} on ${PROXMOX_HOST}"
echo ""

# Test connection
if ! ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} "pct exec ${VMID} -- echo 'Connected'" 2>/dev/null; then
    echo "❌ Cannot connect to VMID ${VMID} on ${PROXMOX_HOST}"
    echo ""
    echo "Network segmentation detected. Use SSH tunnel:"
    echo "  ./setup_ssh_tunnel.sh"
    echo "  PROXMOX_HOST=localhost ./diagnose-tunnels.sh"
    exit 1
fi

echo "✅ Connected to container"
echo ""

# 1. Check container status
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "1. Container Status"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
CONTAINER_STATUS=$(ssh root@${PROXMOX_HOST} "pct status ${VMID}" 2>/dev/null || echo "unknown")
echo "Status: $CONTAINER_STATUS"
if [[ "$CONTAINER_STATUS" != *"running"* ]]; then
    echo "⚠️  Container is not running!"
    echo "   Fix: ssh root@${PROXMOX_HOST} 'pct start ${VMID}'"
fi
echo ""

# 2. Check cloudflared installation
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "2. cloudflared Installation"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
CLOUDFLARED_PATH=$(ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- which cloudflared" 2>/dev/null || echo "")
if [ -z "$CLOUDFLARED_PATH" ]; then
    echo "❌ cloudflared not found!"
    echo "   Fix: ssh root@${PROXMOX_HOST} 'pct exec ${VMID} -- apt install -y cloudflared'"
else
    echo "✅ cloudflared found: $CLOUDFLARED_PATH"
    VERSION=$(ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- cloudflared --version" 2>/dev/null || echo "unknown")
    echo "   Version: $VERSION"
fi
echo ""

# 3. Check service status
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "3. Tunnel Services Status"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
SERVICES=$(ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- systemctl list-units --type=service --state=running,failed | grep cloudflared" 2>/dev/null || echo "")
if [ -z "$SERVICES" ]; then
    echo "❌ No cloudflared services running!"
    echo ""
    echo "Checking for installed services..."
    INSTALLED=$(ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- systemctl list-units --type=service --all | grep cloudflared" 2>/dev/null || echo "")
    if [ -z "$INSTALLED" ]; then
        echo "❌ No cloudflared services found!"
        echo "   Services need to be created"
    else
        echo "Found services (not running):"
        echo "$INSTALLED" | while read line; do
            echo "  - $line"
        done
        echo ""
        echo "Fix: ssh root@${PROXMOX_HOST} 'pct exec ${VMID} -- systemctl start cloudflared-*'"
    fi
else
    echo "✅ Running services:"
    echo "$SERVICES" | while read line; do
        echo "  ✅ $line"
    done
fi
echo ""

# 4. Check credentials
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "4. Tunnel Credentials"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
CREDENTIALS=$(ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- ls -1 /etc/cloudflared/credentials-*.json 2>/dev/null" || echo "")
if [ -z "$CREDENTIALS" ]; then
    echo "❌ No credential files found!"
    echo "   Credentials need to be downloaded from Cloudflare Dashboard"
    echo "   Location: Zero Trust → Networks → Tunnels → Download credentials"
else
    echo "✅ Found credential files:"
    echo "$CREDENTIALS" | while read cred; do
        PERMS=$(ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- stat -c '%a' $cred" 2>/dev/null || echo "unknown")
        if [ "$PERMS" != "600" ]; then
            echo "  ⚠️  $cred (permissions: $PERMS - should be 600)"
        else
            echo "  ✅ $cred (permissions: $PERMS)"
        fi
    done
fi
echo ""

# 5. Check network connectivity
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "5. Network Connectivity"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
if ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- ping -c 2 -W 2 8.8.8.8" >/dev/null 2>&1; then
    echo "✅ Internet connectivity: OK"
else
    echo "❌ Internet connectivity: FAILED"
    echo "   Container cannot reach internet"
fi

if ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- curl -s -o /dev/null -w '%{http_code}' --max-time 5 https://cloudflare.com" | grep -q "200\|301\|302"; then
    echo "✅ HTTPS connectivity: OK"
else
    echo "❌ HTTPS connectivity: FAILED"
fi
echo ""

# 6. Check recent logs
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "6. Recent Tunnel Logs (last 20 lines)"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
LOGS=$(ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- journalctl -u cloudflared-* -n 20 --no-pager 2>/dev/null" || echo "No logs found")
if [ "$LOGS" != "No logs found" ] && [ -n "$LOGS" ]; then
    echo "$LOGS"
else
    echo "⚠️  No recent logs found (services may not be running)"
fi
echo ""

# Summary
echo "═══════════════════════════════════════════════════════════"
echo "  Diagnostic Summary"
echo "═══════════════════════════════════════════════════════════"
echo ""
echo "Next steps:"
echo "  1. Review findings above"
echo "  2. Run fix script: ./fix-all-tunnels.sh"
echo "  3. Or manually fix issues identified"
echo ""
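Check 4 above compares `stat -c '%a'` output against the expected mode `600` for each credential file. Factored into a function (the `check_perms` name is illustrative, not part of the script), the decision logic can be exercised without touching real credential files:

```shell
#!/usr/bin/env bash
# check_perms: warn when a credential file is not mode 600
check_perms() {
  local perms=$1 cred=$2
  if [ "$perms" != "600" ]; then
    echo "WARN $cred (permissions: $perms - should be 600)"
  else
    echo "OK $cred (permissions: $perms)"
  fi
}

check_perms 644 /etc/cloudflared/credentials-a.json   # too permissive -> WARN
check_perms 600 /etc/cloudflared/credentials-b.json   # expected mode -> OK
```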
84
scripts/env.r630-01.example
Normal file
@@ -0,0 +1,84 @@
# Sankofa/Phoenix Deployment Configuration for r630-01
# Copy this file to .env.r630-01 and update with your values

# Proxmox Configuration
PROXMOX_HOST=192.168.11.11
PROXMOX_NODE=r630-01
PROXMOX_STORAGE=thin1
PROXMOX_USER=root@pam

# Network Configuration
SANKOFA_VLAN=160
SANKOFA_SUBNET=10.160.0.0/22
SANKOFA_GATEWAY=10.160.0.1

# Service IPs (VLAN 160)
SANKOFA_POSTGRES_IP=10.160.0.13
SANKOFA_API_IP=10.160.0.10
SANKOFA_PORTAL_IP=10.160.0.11
SANKOFA_KEYCLOAK_IP=10.160.0.12

# VMIDs
VMID_SANKOFA_POSTGRES=7803
VMID_SANKOFA_API=7800
VMID_SANKOFA_PORTAL=7801
VMID_SANKOFA_KEYCLOAK=7802

# Database Configuration
DB_HOST=10.160.0.13
DB_PORT=5432
DB_NAME=sankofa
DB_USER=sankofa
DB_PASSWORD=CHANGE_THIS_PASSWORD_IN_PRODUCTION
POSTGRES_SUPERUSER_PASSWORD=CHANGE_THIS_PASSWORD_IN_PRODUCTION

# Keycloak Configuration
KEYCLOAK_URL=http://10.160.0.12:8080
KEYCLOAK_ADMIN_URL=http://10.160.0.12:8080/admin
KEYCLOAK_REALM=master
KEYCLOAK_ADMIN_USERNAME=admin
KEYCLOAK_ADMIN_PASSWORD=CHANGE_THIS_PASSWORD_IN_PRODUCTION
KEYCLOAK_CLIENT_ID_API=sankofa-api
KEYCLOAK_CLIENT_ID_PORTAL=portal-client
KEYCLOAK_CLIENT_SECRET_API=CHANGE_THIS_SECRET_IN_PRODUCTION
KEYCLOAK_CLIENT_SECRET_PORTAL=CHANGE_THIS_SECRET_IN_PRODUCTION
KEYCLOAK_MULTI_REALM=false

# API Configuration
API_HOST=10.160.0.10
API_PORT=4000
NEXT_PUBLIC_GRAPHQL_ENDPOINT=http://10.160.0.10:4000/graphql
NEXT_PUBLIC_GRAPHQL_WS_ENDPOINT=ws://10.160.0.10:4000/graphql-ws
JWT_SECRET=CHANGE_THIS_JWT_SECRET_IN_PRODUCTION
NODE_ENV=production

# Portal Configuration
PORTAL_HOST=10.160.0.11
PORTAL_PORT=3000
NEXT_PUBLIC_APP_URL=http://10.160.0.11:3000
NEXT_PUBLIC_CROSSPLANE_API=http://crossplane.sankofa.nexus
NEXT_PUBLIC_ARGOCD_URL=http://argocd.sankofa.nexus
NEXT_PUBLIC_GRAFANA_URL=http://grafana.sankofa.nexus
NEXT_PUBLIC_LOKI_URL=http://loki.sankofa.nexus:3100
NEXTAUTH_URL=http://10.160.0.11:3000
NEXTAUTH_SECRET=CHANGE_THIS_NEXTAUTH_SECRET_IN_PRODUCTION

# Multi-Tenancy
ENABLE_MULTI_TENANT=true
DEFAULT_TENANT_ID=

# Billing Configuration
BILLING_GRANULARITY=SECOND
BLOCKCHAIN_BILLING_ENABLED=false
BLOCKCHAIN_IDENTITY_ENABLED=false

# Blockchain (Optional)
BLOCKCHAIN_RPC_URL=
RESOURCE_PROVISIONING_CONTRACT_ADDRESS=

# Monitoring (Optional)
NEXT_PUBLIC_SENTRY_DSN=
SENTRY_AUTH_TOKEN=

# Analytics (Optional)
NEXT_PUBLIC_ANALYTICS_ID=
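The deploy scripts consume a copy of this file with a plain `source`, which sets the variables in the current shell only. Wrapping the `source` in `set -a` … `set +a` additionally exports every assignment to child processes, which is useful when a sourced variable must be visible to tools the script invokes. A sketch with a throwaway file (the values mirror the database entries above):

```shell
#!/usr/bin/env bash
# Write a minimal env file, then load and export its variables
envfile=$(mktemp)
cat > "$envfile" <<'EOF'
DB_HOST=10.160.0.13
DB_PORT=5432
EOF

set -a            # auto-export everything assigned while sourcing
source "$envfile"
set +a

echo "$DB_HOST:$DB_PORT"   # prints: 10.160.0.13:5432
rm -f "$envfile"
```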
104
scripts/fix-all-tunnels.sh
Executable file
@@ -0,0 +1,104 @@
#!/bin/bash
|
||||
# Fix all Cloudflare tunnels - restart services and verify
|
||||
|
||||
set -e
|
||||
|
||||
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.12}"
|
||||
VMID="${VMID:-102}"
|
||||
|
||||
echo "═══════════════════════════════════════════════════════════"
|
||||
echo " Fix All Cloudflare Tunnels"
|
||||
echo "═══════════════════════════════════════════════════════════"
|
||||
echo ""
|
||||
echo "Target: VMID ${VMID} on ${PROXMOX_HOST}"
|
||||
echo ""
|
||||
|
||||
# Test connection
|
||||
if ! ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} "pct exec ${VMID} -- echo 'Connected'" 2>/dev/null; then
|
||||
    echo "❌ Cannot connect to VMID ${VMID} on ${PROXMOX_HOST}"
    echo ""
    echo "Use SSH tunnel:"
    echo "  ./setup_ssh_tunnel.sh"
    echo "  PROXMOX_HOST=localhost ./fix-all-tunnels.sh"
    exit 1
fi

echo "✅ Connected to container"
echo ""

# Step 1: Ensure container is running
echo "Step 1: Checking container status..."
CONTAINER_STATUS=$(ssh root@${PROXMOX_HOST} "pct status ${VMID}" 2>/dev/null || echo "stopped")
if [[ "$CONTAINER_STATUS" != *"running"* ]]; then
    echo "⚠️ Container is not running. Starting..."
    ssh root@${PROXMOX_HOST} "pct start ${VMID}"
    sleep 5
    echo "✅ Container started"
else
    echo "✅ Container is running"
fi
echo ""

# Step 2: Check cloudflared installation
echo "Step 2: Checking cloudflared installation..."
if ! ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- which cloudflared" >/dev/null 2>&1; then
    echo "⚠️ cloudflared not installed. Installing..."
    ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- bash -c 'apt update && apt install -y cloudflared'" || {
        echo "❌ Failed to install cloudflared"
        exit 1
    }
    echo "✅ cloudflared installed"
else
    echo "✅ cloudflared is installed"
fi
echo ""

# Step 3: Fix credential permissions
echo "Step 3: Fixing credential file permissions..."
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- bash -c 'chmod 600 /etc/cloudflared/credentials-*.json 2>/dev/null || true'"
echo "✅ Permissions fixed"
echo ""

# Step 4: Enable all services
echo "Step 4: Enabling tunnel services..."
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- systemctl daemon-reload"
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- bash -c 'systemctl enable cloudflared-* 2>/dev/null || true'"
echo "✅ Services enabled"
echo ""

# Step 5: Restart all services
echo "Step 5: Restarting all tunnel services..."
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- bash -c 'systemctl restart cloudflared-* 2>/dev/null || systemctl start cloudflared-*'"
sleep 5
echo "✅ Services restarted"
echo ""

# Step 6: Check status
echo "Step 6: Checking service status..."
echo ""
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- systemctl status cloudflared-* --no-pager -l" || true
echo ""

# Step 7: Show recent logs
echo "Step 7: Recent logs (last 10 lines per service)..."
echo ""
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- journalctl -u cloudflared-* -n 10 --no-pager" || true
echo ""

echo "═══════════════════════════════════════════════════════════"
echo " Fix Complete"
echo "═══════════════════════════════════════════════════════════"
echo ""
echo "Next steps:"
echo "  1. Wait 1-2 minutes for tunnels to connect"
echo "  2. Check Cloudflare Dashboard for tunnel status"
echo "  3. Verify services are accessible:"
echo "     curl -I https://ml110-01.d-bis.org"
echo "     curl -I https://r630-01.d-bis.org"
echo "     curl -I https://explorer.d-bis.org"
echo ""
echo "If tunnels are still DOWN:"
echo "  1. Check logs: ssh root@${PROXMOX_HOST} 'pct exec ${VMID} -- journalctl -u cloudflared-* -f'"
echo "  2. Verify credentials are valid"
echo "  3. Check network connectivity from container"
echo ""
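The "wait 1-2 minutes for tunnels to connect" step above can be automated instead of eyeballed. A minimal sketch, assuming `curl` is available on the machine running the check; `wait_for_tunnel` is a hypothetical helper, not part of this commit's scripts:

```shell
#!/bin/sh
# Poll a URL until it answers with any HTTP status code, or give up.
# Args: URL [tries] [delay-seconds]. curl reports "000" when no answer arrived.
wait_for_tunnel() {
  url=$1; tries=${2:-12}; delay=${3:-10}
  i=0
  while [ "$i" -lt "$tries" ]; do
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url" || true)
    if [ -n "$code" ] && [ "$code" != "000" ]; then
      echo "$url is up (HTTP $code)"
      return 0
    fi
    i=$((i + 1)); sleep "$delay"
  done
  echo "$url did not come up after $tries tries" >&2
  return 1
}

# Example: wait_for_tunnel "https://ml110-01.d-bis.org" 12 10
```

Any real HTTP status (even 401/502 from the edge) counts as "up", since the goal is only to confirm the tunnel answers at all.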
50
scripts/fix-r630-04-pveproxy.sh
Executable file
@@ -0,0 +1,50 @@
#!/bin/bash
# Fix pveproxy worker exit issues on R630-04
# Run this script on R630-04

set -e

echo "=== Diagnosing pveproxy worker exit issues ==="
echo ""

echo "1. Current pveproxy status:"
systemctl status pveproxy --no-pager -l | head -30
echo ""

echo "2. Recent pveproxy logs (last 50 lines):"
journalctl -u pveproxy --no-pager -n 50
echo ""

echo "3. Checking pveproxy configuration:"
ls -la /etc/pveproxy/ 2>/dev/null || echo "No /etc/pveproxy/ directory"
echo ""

echo "4. Checking for port conflicts:"
ss -tlnp | grep 8006 || echo "Port 8006 not in use"
echo ""

echo "5. Checking Proxmox cluster status:"
pvecm status 2>&1 || echo "Not in cluster or cluster check failed"
echo ""

echo "=== Attempting fixes ==="
echo ""

echo "6. Restarting pveproxy service:"
systemctl restart pveproxy
sleep 3

echo "7. Checking status after restart:"
systemctl status pveproxy --no-pager -l | head -30
echo ""

echo "8. Checking if port 8006 is now listening:"
ss -tlnp | grep 8006 || echo "Port 8006 still not listening"
echo ""

echo "=== If still failing, check these ==="
echo "- /var/log/pveproxy/access.log"
echo "- /var/log/pveproxy/error.log"
echo "- journalctl -u pveproxy -f (for real-time logs)"
echo "- Check if Proxmox VE packages are fully installed: dpkg -l | grep proxmox"
46
scripts/fix-shared-tunnel-remote.sh
Executable file
@@ -0,0 +1,46 @@
#!/bin/bash
# Alternative fix script - to be run from within Proxmox network
# Or via SSH tunnel

set -e

PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.12}"
VMID="${VMID:-102}"
TUNNEL_ID="10ab22da-8ea3-4e2e-a896-27ece2211a05"
NGINX_TARGET="192.168.11.21:80"

echo "═══════════════════════════════════════════════════════════"
echo " Fix Shared Cloudflare Tunnel (Remote Method)"
echo "═══════════════════════════════════════════════════════════"
echo ""
echo "This script can be run:"
echo "  1. From a machine on the 192.168.11.0/24 network"
echo "  2. Via SSH tunnel (after running setup_ssh_tunnel.sh)"
echo "  3. Directly on the Proxmox host"
echo ""
echo "Tunnel ID: ${TUNNEL_ID}"
echo "Target: http://${NGINX_TARGET}"
echo "Container: VMID ${VMID} on ${PROXMOX_HOST}"
echo ""

# Check connection
echo "Testing connection..."
if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} "pct exec ${VMID} -- echo 'Connected'" 2>/dev/null; then
    echo "✅ Connection successful"
    echo ""

    # Run the original fix script logic
    exec ./fix-shared-tunnel.sh
else
    echo "❌ Cannot connect"
    echo ""
    echo "If running via SSH tunnel, ensure:"
    echo "  1. SSH tunnel is active: ./setup_ssh_tunnel.sh"
    echo "  2. Use: PROXMOX_HOST=localhost ./fix-shared-tunnel-remote.sh"
    echo ""
    echo "If running from Proxmox network, ensure:"
    echo "  1. You're on the 192.168.11.0/24 network"
    echo "  2. SSH access to ${PROXMOX_HOST} is configured"
    echo ""
    exit 1
fi
350
scripts/fix-shared-tunnel.sh
Executable file
@@ -0,0 +1,350 @@
#!/bin/bash
# Fix shared Cloudflare tunnel configuration
# Resolves DNS conflicts for tunnel 10ab22da-8ea3-4e2e-a896-27ece2211a05

set -e

PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.12}"
VMID="${VMID:-102}"
TUNNEL_ID="10ab22da-8ea3-4e2e-a896-27ece2211a05"
NGINX_TARGET="192.168.11.21:80"

echo "═══════════════════════════════════════════════════════════"
echo " Fix Shared Cloudflare Tunnel Configuration"
echo "═══════════════════════════════════════════════════════════"
echo ""
echo "Tunnel ID: ${TUNNEL_ID}"
echo "Target: http://${NGINX_TARGET}"
echo "Container: VMID ${VMID} on ${PROXMOX_HOST}"
echo ""

# Check if we can connect
if ! ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} "pct exec ${VMID} -- echo 'Connected'" 2>/dev/null; then
    echo "❌ Cannot connect to VMID ${VMID} on ${PROXMOX_HOST}"
    echo ""
    echo "═══════════════════════════════════════════════════════════"
    echo " Connection Failed - Alternative Methods"
    echo "═══════════════════════════════════════════════════════════"
    echo ""
    echo "Your machine is on a different network segment."
    echo "Use one of these methods:"
    echo ""
    echo "Method 1: Use SSH Tunnel First"
    echo "  ./setup_ssh_tunnel.sh"
    echo "  # Then in another terminal:"
    echo "  PROXMOX_HOST=localhost ./fix-shared-tunnel.sh"
    echo ""
    echo "Method 2: Run from Proxmox Network"
    echo "  Copy this script to a machine on the 192.168.11.0/24 network"
    echo "  Then run: ./fix-shared-tunnel.sh"
    echo ""
    echo "Method 3: Manual Configuration"
    echo "  See: DNS_CONFLICT_RESOLUTION.md for manual steps"
    echo ""
    echo "Method 4: Use Cloudflare Dashboard"
    echo "  Configure tunnel via: https://one.dash.cloudflare.com/"
    echo "  Zero Trust → Networks → Tunnels → Configure"
    echo ""

    # Generate configuration files for manual deployment
    echo "Generating configuration files for manual deployment..."
    mkdir -p /tmp/tunnel-fix-${TUNNEL_ID}

    cat > /tmp/tunnel-fix-${TUNNEL_ID}/tunnel-services.yml << 'CONFIG_EOF'
tunnel: 10ab22da-8ea3-4e2e-a896-27ece2211a05
credentials-file: /etc/cloudflared/credentials-services.json

ingress:
  - hostname: dbis-admin.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: dbis-admin.d-bis.org
  - hostname: dbis-api.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: dbis-api.d-bis.org
  - hostname: dbis-api-2.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: dbis-api-2.d-bis.org
  - hostname: mim4u.org.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: mim4u.org.d-bis.org
  - hostname: www.mim4u.org.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: www.mim4u.org.d-bis.org
  - hostname: rpc-http-prv.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: rpc-http-prv.d-bis.org
  - hostname: rpc-http-pub.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: rpc-http-pub.d-bis.org
  - hostname: rpc-ws-prv.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: rpc-ws-prv.d-bis.org
  - hostname: rpc-ws-pub.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: rpc-ws-pub.d-bis.org
  - service: http_status:404

metrics: 127.0.0.1:9090
loglevel: info
gracePeriod: 30s
CONFIG_EOF

    cat > /tmp/tunnel-fix-${TUNNEL_ID}/cloudflared-services.service << 'SERVICE_EOF'
[Unit]
Description=Cloudflare Tunnel for Services (RPC, API, Admin, MIM4U)
After=network.target

[Service]
TimeoutStartSec=0
Type=notify
ExecStart=/usr/local/bin/cloudflared --config /etc/cloudflared/tunnel-services.yml tunnel run
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
SERVICE_EOF

    cat > /tmp/tunnel-fix-${TUNNEL_ID}/DEPLOY_INSTRUCTIONS.md << 'INST_EOF'
# Manual Deployment Instructions

## Files Generated

- `tunnel-services.yml` - Tunnel configuration
- `cloudflared-services.service` - Systemd service file
- `DEPLOY_INSTRUCTIONS.md` - This file

## Deployment Steps

### Option A: From Proxmox Host (192.168.11.12)

```bash
# 1. Copy files to Proxmox host
scp tunnel-services.yml root@192.168.11.12:/tmp/
scp cloudflared-services.service root@192.168.11.12:/tmp/

# 2. SSH to Proxmox host
ssh root@192.168.11.12

# 3. Copy files into container
pct push 102 /tmp/tunnel-services.yml /etc/cloudflared/tunnel-services.yml
pct push 102 /tmp/cloudflared-services.service /etc/systemd/system/cloudflared-services.service

# 4. Set permissions
pct exec 102 -- chmod 600 /etc/cloudflared/tunnel-services.yml

# 5. Reload systemd and start
pct exec 102 -- systemctl daemon-reload
pct exec 102 -- systemctl enable cloudflared-services.service
pct exec 102 -- systemctl start cloudflared-services.service

# 6. Check status
pct exec 102 -- systemctl status cloudflared-services.service
```

### Option B: Direct Container Access

If you have direct access to the container:

```bash
# 1. Copy files into container
# (Use pct push or copy manually)

# 2. Inside container:
chmod 600 /etc/cloudflared/tunnel-services.yml
systemctl daemon-reload
systemctl enable cloudflared-services.service
systemctl start cloudflared-services.service
systemctl status cloudflared-services.service
```

### Option C: Via Cloudflare Dashboard

1. Go to: https://one.dash.cloudflare.com/
2. Zero Trust → Networks → Tunnels
3. Find tunnel: `10ab22da-8ea3-4e2e-a896-27ece2211a05`
4. Click Configure
5. Add all hostnames as shown in tunnel-services.yml
6. Save configuration

## Verification

After deployment:

```bash
# Check service status
pct exec 102 -- systemctl status cloudflared-services.service

# Check logs
pct exec 102 -- journalctl -u cloudflared-services -f

# Test endpoints
curl -I https://dbis-admin.d-bis.org
curl -I https://rpc-http-pub.d-bis.org
```

## Important Notes

- Ensure the credentials file exists: `/etc/cloudflared/credentials-services.json`
- Verify Nginx is accessible at `192.168.11.21:80`
- Check tunnel status in the Cloudflare dashboard
INST_EOF
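The "Important Notes" in the generated instructions can be checked mechanically before starting the service. A small sketch, assuming GNU `stat`; `check_creds` is a hypothetical helper, not something the commit's scripts generate:

```shell
#!/bin/sh
# Print OK only when the given credentials file exists with mode 0600,
# matching the chmod 600 the deployment steps apply.
check_creds() {
  if [ -f "$1" ] && [ "$(stat -c '%a' "$1")" = "600" ]; then
    echo "credentials OK: $1"
  else
    echo "credentials missing or too permissive: $1" >&2
    return 1
  fi
}

# Example: check_creds /etc/cloudflared/credentials-services.json
```

Running it inside the container (via `pct exec 102 -- sh -c '…'`) before `systemctl start` turns a silent tunnel failure into an explicit error.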

    echo "✅ Configuration files generated in: /tmp/tunnel-fix-${TUNNEL_ID}/"
    echo ""
    echo "Files created:"
    echo "  - tunnel-services.yml (tunnel configuration)"
    echo "  - cloudflared-services.service (systemd service)"
    echo "  - DEPLOY_INSTRUCTIONS.md (deployment guide)"
    echo ""
    echo "Next steps:"
    echo "  1. Review files in /tmp/tunnel-fix-${TUNNEL_ID}/"
    echo "  2. Follow DEPLOY_INSTRUCTIONS.md"
    echo "  3. Or use the Cloudflare Dashboard method"
    echo ""

    exit 1
fi

echo "✅ Connected to container"
echo ""

# Create tunnel configuration
echo "Creating tunnel configuration..."
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- bash" << 'TUNNEL_CONFIG'
cat > /etc/cloudflared/tunnel-services.yml << 'EOF'
tunnel: 10ab22da-8ea3-4e2e-a896-27ece2211a05
credentials-file: /etc/cloudflared/credentials-services.json

ingress:
  # Admin Interface
  - hostname: dbis-admin.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: dbis-admin.d-bis.org

  # API Endpoints
  - hostname: dbis-api.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: dbis-api.d-bis.org

  - hostname: dbis-api-2.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: dbis-api-2.d-bis.org

  # MIM4U Services
  - hostname: mim4u.org.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: mim4u.org.d-bis.org

  - hostname: www.mim4u.org.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: www.mim4u.org.d-bis.org

  # RPC Endpoints - HTTP
  - hostname: rpc-http-prv.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: rpc-http-prv.d-bis.org

  - hostname: rpc-http-pub.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: rpc-http-pub.d-bis.org

  # RPC Endpoints - WebSocket
  - hostname: rpc-ws-prv.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: rpc-ws-prv.d-bis.org

  - hostname: rpc-ws-pub.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: rpc-ws-pub.d-bis.org

  # Catch-all (MUST be last)
  - service: http_status:404

# Metrics
metrics: 127.0.0.1:9090

# Logging
loglevel: info

# Grace period
gracePeriod: 30s
EOF

chmod 600 /etc/cloudflared/tunnel-services.yml
echo "✅ Configuration file created"
TUNNEL_CONFIG
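cloudflared rejects an ingress list whose catch-all rule is missing or not last, so a generated config is worth sanity-checking before the service is (re)started. A sketch of a pure text check; it is illustrative only and no substitute for `cloudflared tunnel ingress validate` run against the real config:

```shell
#!/bin/sh
# Print OK if the last top-level ingress rule in the given config file is
# the bare http_status:404 catch-all; otherwise complain and fail.
check_catch_all() {
  last=$(grep -E '^  - (hostname|service):' "$1" | tail -n 1)
  case "$last" in
    *http_status:404*) echo "catch-all is last: OK" ;;
    *) echo "catch-all missing or not last" >&2; return 1 ;;
  esac
}

# Demo against a minimal config written to a temp file.
cfg=$(mktemp)
cat > "$cfg" << 'YML'
ingress:
  - hostname: dbis-admin.d-bis.org
    service: http://192.168.11.21:80
  - service: http_status:404
YML
check_catch_all "$cfg"   # → catch-all is last: OK
rm -f "$cfg"
```

The grep matches only two-space-indented rule openers, so the four-space-indented `service:` lines under each hostname are ignored.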

# Create systemd service
echo "Creating systemd service..."
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- bash" << 'SERVICE_CONFIG'
cat > /etc/systemd/system/cloudflared-services.service << 'EOF'
[Unit]
Description=Cloudflare Tunnel for Services (RPC, API, Admin, MIM4U)
After=network.target

[Service]
TimeoutStartSec=0
Type=notify
ExecStart=/usr/local/bin/cloudflared --config /etc/cloudflared/tunnel-services.yml tunnel run
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
EOF

echo "✅ Service file created"
SERVICE_CONFIG

# Reload systemd and enable service
echo "Enabling and starting service..."
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- systemctl daemon-reload"
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- systemctl enable cloudflared-services.service" || echo "⚠️ Service may already be enabled"
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- systemctl restart cloudflared-services.service" || ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- systemctl start cloudflared-services.service"

# Wait a moment
sleep 3

# Check status
echo ""
echo "Checking service status..."
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- systemctl status cloudflared-services.service --no-pager -l" || true

echo ""
echo "═══════════════════════════════════════════════════════════"
echo " Configuration Complete"
echo "═══════════════════════════════════════════════════════════"
echo ""
echo "Next steps:"
echo "  1. Verify credentials file exists:"
echo "     ssh root@${PROXMOX_HOST} 'pct exec ${VMID} -- ls -la /etc/cloudflared/credentials-services.json'"
echo ""
echo "  2. Check tunnel logs:"
echo "     ssh root@${PROXMOX_HOST} 'pct exec ${VMID} -- journalctl -u cloudflared-services -f'"
echo ""
echo "  3. Test hostnames:"
echo "     curl -I https://dbis-admin.d-bis.org"
echo "     curl -I https://rpc-http-pub.d-bis.org"
echo ""
echo "  4. Update TTL values in Cloudflare Dashboard:"
echo "     DNS → Records → Change TTL from 1 to 300 (or Auto)"
echo ""
323
scripts/fix-tunnels-no-ssh.sh
Executable file
@@ -0,0 +1,323 @@
#!/bin/bash
# Fix tunnels without SSH access - generates instructions and configs

set -e

echo "═══════════════════════════════════════════════════════════"
echo " Fix Tunnels Without SSH Access"
echo "═══════════════════════════════════════════════════════════"
echo ""
echo "This script generates instructions and configuration files"
echo "that can be deployed without SSH access to Proxmox."
echo ""

OUTPUT_DIR="/tmp/tunnel-fix-manual-$(date +%s)"
mkdir -p "$OUTPUT_DIR"

echo "📁 Creating files in: $OUTPUT_DIR"
echo ""

# Create comprehensive fix guide
cat > "$OUTPUT_DIR/COMPLETE_FIX_GUIDE.md" << 'EOF'
# Complete Tunnel Fix Guide (No SSH Required)

## Situation

All 6 Cloudflare tunnels are DOWN. You cannot access the Proxmox network via SSH.

## Solution: Cloudflare Dashboard Configuration

The easiest way to fix this is via the Cloudflare Dashboard - no SSH needed!

### Step 1: Access Cloudflare Dashboard

1. Go to: https://one.dash.cloudflare.com/
2. Sign in to your account
3. Navigate to: **Zero Trust** → **Networks** → **Tunnels**

### Step 2: Fix Each Tunnel

For each tunnel, click **Configure** and set up the routing:

#### Tunnel 1: explorer.d-bis.org
- **Tunnel ID**: `b02fe1fe-cb7d-484e-909b-7cc41298ebe8`
- **Public Hostname**: `explorer.d-bis.org`
- **Service**: HTTP
- **URL**: `http://192.168.11.21:80` (or appropriate internal IP)

#### Tunnel 2: mim4u-tunnel
- **Tunnel ID**: `f8d06879-04f8-44ef-aeda-ce84564a1792`
- **Public Hostname**: `mim4u.org.d-bis.org` (or `mim4u.org`)
- **Service**: HTTP
- **URL**: `http://192.168.11.21:80`

#### Tunnel 3: rpc-http-pub.d-bis.org (SHARED - 9 hostnames)
- **Tunnel ID**: `10ab22da-8ea3-4e2e-a896-27ece2211a05`
- **Add ALL these hostnames**:
  - `dbis-admin.d-bis.org` → `http://192.168.11.21:80`
  - `dbis-api.d-bis.org` → `http://192.168.11.21:80`
  - `dbis-api-2.d-bis.org` → `http://192.168.11.21:80`
  - `mim4u.org.d-bis.org` → `http://192.168.11.21:80`
  - `www.mim4u.org.d-bis.org` → `http://192.168.11.21:80`
  - `rpc-http-prv.d-bis.org` → `http://192.168.11.21:80`
  - `rpc-http-pub.d-bis.org` → `http://192.168.11.21:80`
  - `rpc-ws-prv.d-bis.org` → `http://192.168.11.21:80`
  - `rpc-ws-pub.d-bis.org` → `http://192.168.11.21:80`
- **Catch-all**: HTTP 404 (must be last)

#### Tunnel 4: tunnel-ml110
- **Tunnel ID**: `ccd7150a-9881-4b8c-a105-9b4ead6e69a2`
- **Public Hostname**: `ml110-01.d-bis.org`
- **Service**: HTTPS
- **URL**: `https://192.168.11.10:8006`
- **Options**: Allow self-signed certificate

#### Tunnel 5: tunnel-r630-01
- **Tunnel ID**: `4481af8f-b24c-4cd3-bdd5-f562f4c97df4`
- **Public Hostname**: `r630-01.d-bis.org`
- **Service**: HTTPS
- **URL**: `https://192.168.11.11:8006`
- **Options**: Allow self-signed certificate

#### Tunnel 6: tunnel-r630-02
- **Tunnel ID**: `0876f12b-64d7-4927-9ab3-94cb6cf48af9`
- **Public Hostname**: `r630-02.d-bis.org`
- **Service**: HTTPS
- **URL**: `https://192.168.11.12:8006`
- **Options**: Allow self-signed certificate

### Step 3: Verify Tunnel Status

After configuring each tunnel:
1. Wait 1-2 minutes
2. Check tunnel status in the dashboard
3. It should change from **DOWN** to **HEALTHY**

### Step 4: Test Services

```bash
# Test Proxmox tunnels
curl -I https://ml110-01.d-bis.org
curl -I https://r630-01.d-bis.org
curl -I https://r630-02.d-bis.org

# Test shared tunnel services
curl -I https://dbis-admin.d-bis.org
curl -I https://rpc-http-pub.d-bis.org
curl -I https://explorer.d-bis.org
```

## Alternative: If Dashboard Doesn't Work

If the tunnel connector (cloudflared) in VMID 102 is not running, you need physical/network access to:

1. **Start the container** (if stopped):
   ```bash
   ssh root@192.168.11.12 "pct start 102"
   ```

2. **Start cloudflared services**:
   ```bash
   ssh root@192.168.11.12 "pct exec 102 -- systemctl start cloudflared-*"
   ```

3. **Check status**:
   ```bash
   ssh root@192.168.11.12 "pct exec 102 -- systemctl status cloudflared-*"
   ```

## Why Tunnels Are Down

Most likely causes:
1. Container VMID 102 is stopped
2. cloudflared services not running
3. Network connectivity issues from the container
4. Invalid or missing credentials

## Next Steps

1. Try the Cloudflare Dashboard method first (easiest)
2. If that doesn't work, you need physical/network access to Proxmox
3. Check container and service status
4. Restart services as needed

EOF

# Create tunnel configuration reference
cat > "$OUTPUT_DIR/tunnel-configs-reference.yml" << 'EOF'
# Tunnel Configuration Reference
# These are the configurations that should be in VMID 102
# Use Cloudflare Dashboard to configure, or deploy these manually if you have access

# ============================================
# Tunnel 1: explorer.d-bis.org
# ============================================
# tunnel: b02fe1fe-cb7d-484e-909b-7cc41298ebe8
# credentials-file: /etc/cloudflared/credentials-explorer.json
#
# ingress:
#   - hostname: explorer.d-bis.org
#     service: http://192.168.11.21:80
#   - service: http_status:404

# ============================================
# Tunnel 2: mim4u-tunnel
# ============================================
# tunnel: f8d06879-04f8-44ef-aeda-ce84564a1792
# credentials-file: /etc/cloudflared/credentials-mim4u.json
#
# ingress:
#   - hostname: mim4u.org.d-bis.org
#     service: http://192.168.11.21:80
#   - service: http_status:404

# ============================================
# Tunnel 3: rpc-http-pub.d-bis.org (SHARED)
# ============================================
# tunnel: 10ab22da-8ea3-4e2e-a896-27ece2211a05
# credentials-file: /etc/cloudflared/credentials-services.json
#
# ingress:
#   - hostname: dbis-admin.d-bis.org
#     service: http://192.168.11.21:80
#     originRequest:
#       httpHostHeader: dbis-admin.d-bis.org
#   - hostname: dbis-api.d-bis.org
#     service: http://192.168.11.21:80
#     originRequest:
#       httpHostHeader: dbis-api.d-bis.org
#   - hostname: dbis-api-2.d-bis.org
#     service: http://192.168.11.21:80
#     originRequest:
#       httpHostHeader: dbis-api-2.d-bis.org
#   - hostname: mim4u.org.d-bis.org
#     service: http://192.168.11.21:80
#     originRequest:
#       httpHostHeader: mim4u.org.d-bis.org
#   - hostname: www.mim4u.org.d-bis.org
#     service: http://192.168.11.21:80
#     originRequest:
#       httpHostHeader: www.mim4u.org.d-bis.org
#   - hostname: rpc-http-prv.d-bis.org
#     service: http://192.168.11.21:80
#     originRequest:
#       httpHostHeader: rpc-http-prv.d-bis.org
#   - hostname: rpc-http-pub.d-bis.org
#     service: http://192.168.11.21:80
#     originRequest:
#       httpHostHeader: rpc-http-pub.d-bis.org
#   - hostname: rpc-ws-prv.d-bis.org
#     service: http://192.168.11.21:80
#     originRequest:
#       httpHostHeader: rpc-ws-prv.d-bis.org
#   - hostname: rpc-ws-pub.d-bis.org
#     service: http://192.168.11.21:80
#     originRequest:
#       httpHostHeader: rpc-ws-pub.d-bis.org
#   - service: http_status:404

# ============================================
# Tunnel 4: tunnel-ml110
# ============================================
# tunnel: ccd7150a-9881-4b8c-a105-9b4ead6e69a2
# credentials-file: /etc/cloudflared/credentials-ml110.json
#
# ingress:
#   - hostname: ml110-01.d-bis.org
#     service: https://192.168.11.10:8006
#     originRequest:
#       noTLSVerify: true
#   - service: http_status:404

# ============================================
# Tunnel 5: tunnel-r630-01
# ============================================
# tunnel: 4481af8f-b24c-4cd3-bdd5-f562f4c97df4
# credentials-file: /etc/cloudflared/credentials-r630-01.json
#
# ingress:
#   - hostname: r630-01.d-bis.org
#     service: https://192.168.11.11:8006
#     originRequest:
#       noTLSVerify: true
#   - service: http_status:404

# ============================================
# Tunnel 6: tunnel-r630-02
# ============================================
# tunnel: 0876f12b-64d7-4927-9ab3-94cb6cf48af9
# credentials-file: /etc/cloudflared/credentials-r630-02.json
#
# ingress:
#   - hostname: r630-02.d-bis.org
#     service: https://192.168.11.12:8006
#     originRequest:
#       noTLSVerify: true
#   - service: http_status:404

EOF

# Create quick reference card
cat > "$OUTPUT_DIR/QUICK_REFERENCE.md" << 'EOF'
# Quick Reference - Fix Tunnels

## Fastest Method: Cloudflare Dashboard

1. Go to: https://one.dash.cloudflare.com/
2. Zero Trust → Networks → Tunnels
3. For each tunnel, click **Configure**
4. Add hostname → Service → URL
5. Save and wait 1-2 minutes

## Tunnel IDs Quick Reference

| Tunnel Name | ID | Target |
|-------------|----|--------|
| explorer | b02fe1fe-cb7d-484e-909b-7cc41298ebe8 | http://192.168.11.21:80 |
| mim4u-tunnel | f8d06879-04f8-44ef-aeda-ce84564a1792 | http://192.168.11.21:80 |
| rpc-http-pub | 10ab22da-8ea3-4e2e-a896-27ece2211a05 | http://192.168.11.21:80 (9 hostnames) |
| tunnel-ml110 | ccd7150a-9881-4b8c-a105-9b4ead6e69a2 | https://192.168.11.10:8006 |
| tunnel-r630-01 | 4481af8f-b24c-4cd3-bdd5-f562f4c97df4 | https://192.168.11.11:8006 |
| tunnel-r630-02 | 0876f12b-64d7-4927-9ab3-94cb6cf48af9 | https://192.168.11.12:8006 |

## If Dashboard Doesn't Work

You need physical/network access to the Proxmox host (192.168.11.12):

```bash
# Start container
ssh root@192.168.11.12 "pct start 102"

# Start services
ssh root@192.168.11.12 "pct exec 102 -- systemctl start cloudflared-*"

# Check status
ssh root@192.168.11.12 "pct exec 102 -- systemctl status cloudflared-*"
```

EOF

echo "✅ Files created:"
echo ""
echo "  📄 COMPLETE_FIX_GUIDE.md - Step-by-step instructions"
echo "  📄 tunnel-configs-reference.yml - Configuration reference"
echo "  📄 QUICK_REFERENCE.md - Quick lookup"
echo ""
echo "═══════════════════════════════════════════════════════════"
echo " Next Steps"
echo "═══════════════════════════════════════════════════════════"
echo ""
echo "1. Review: $OUTPUT_DIR/COMPLETE_FIX_GUIDE.md"
echo ""
echo "2. Easiest Fix: Use Cloudflare Dashboard"
echo "   - Go to: https://one.dash.cloudflare.com/"
echo "   - Zero Trust → Networks → Tunnels"
echo "   - Configure each tunnel as shown in the guide"
echo ""
echo "3. If the Dashboard doesn't work:"
echo "   - You need physical/network access to Proxmox"
echo "   - Start the container and services manually"
echo "   - See the guide for commands"
echo ""
echo "📁 All files saved to: $OUTPUT_DIR"
echo ""
267
scripts/install-shared-tunnel-token.sh
Executable file
@@ -0,0 +1,267 @@
#!/bin/bash
# Install Cloudflare tunnel using token
# Token is for tunnel: 10ab22da-8ea3-4e2e-a896-27ece2211a05 (shared tunnel)

set -e

TUNNEL_TOKEN="eyJhIjoiNTJhZDU3YTcxNjcxYzVmYzAwOWVkZjA3NDQ2NTgxOTYiLCJ0IjoiMTBhYjIyZGEtOGVhMy00ZTJlLWE4OTYtMjdlY2UyMjExYTA1IiwicyI6IlptRXlOMkkyTVRrdE1EZzFNeTAwTkRBNExXSXhaalF0Wm1KaE5XVmpaVEEzTVdGbCJ9"
TUNNEL_ID="10ab22da-8ea3-4e2e-a896-27ece2211a05"
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.12}"
VMID="${VMID:-102}"

echo "═══════════════════════════════════════════════════════════"
echo " Install Shared Tunnel with Token"
echo "═══════════════════════════════════════════════════════════"
echo ""
echo "Tunnel ID: ${TUNNEL_ID}"
echo "Target Container: VMID ${VMID} on ${PROXMOX_HOST}"
echo ""

# Check if we can connect
if ! ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} "pct exec ${VMID} -- echo 'Connected'" 2>/dev/null; then
    echo "❌ Cannot connect to VMID ${VMID} on ${PROXMOX_HOST}"
    echo ""
    echo "This script needs to be run:"
    echo "  1. From a machine on the 192.168.11.0/24 network, OR"
    echo "  2. Via SSH tunnel (after running setup_ssh_tunnel.sh), OR"
    echo "  3. Directly on the Proxmox host"
    echo ""
    echo "Alternative: Install directly in the container"
    echo "  ssh root@${PROXMOX_HOST}"
    echo "  pct exec ${VMID} -- bash"
    echo "  # Then run the installation commands manually"
    echo ""

    # Generate manual installation instructions
    cat > /tmp/tunnel-install-manual.md << 'MANUAL_EOF'
# Manual Tunnel Installation

## Step 1: Access Container

```bash
ssh root@192.168.11.12
pct exec 102 -- bash
```

## Step 2: Install cloudflared (if not installed)

```bash
apt update
apt install -y cloudflared
```

## Step 3: Install Tunnel Service with Token

```bash
cloudflared service install eyJhIjoiNTJhZDU3YTcxNjcxYzVmYzAwOWVkZjA3NDQ2NTgxOTYiLCJ0IjoiMTBhYjIyZGEtOGVhMy00ZTJlLWE4OTYtMjdlY2UyMjExYTA1IiwicyI6IlptRXlOMkkyTVRrdE1EZzFNeTAwTkRBNExXSXhaalF0Wm1KaE5XVmpaVEEzTVdGbCJ9
```

## Step 4: Configure Ingress Rules
|
||||
|
||||
The token installation creates a basic service. You need to configure ingress rules for all 9 hostnames.
|
||||
|
||||
### Option A: Via Cloudflare Dashboard (Recommended)
|
||||
|
||||
1. Go to: https://one.dash.cloudflare.com/
|
||||
2. Zero Trust → Networks → Tunnels
|
||||
3. Find tunnel: `10ab22da-8ea3-4e2e-a896-27ece2211a05`
|
||||
4. Click Configure
|
||||
5. Add all 9 hostnames (see below)
|
||||
|
||||
### Option B: Manual Config File
|
||||
|
||||
Create `/etc/cloudflared/config.yml`:
|
||||
|
||||
```yaml
|
||||
tunnel: 10ab22da-8ea3-4e2e-a896-27ece2211a05
|
||||
credentials-file: /root/.cloudflared/<tunnel-id>.json
|
||||
|
||||
ingress:
|
||||
- hostname: dbis-admin.d-bis.org
|
||||
service: http://192.168.11.21:80
|
||||
originRequest:
|
||||
httpHostHeader: dbis-admin.d-bis.org
|
||||
- hostname: dbis-api.d-bis.org
|
||||
service: http://192.168.11.21:80
|
||||
originRequest:
|
||||
httpHostHeader: dbis-api.d-bis.org
|
||||
- hostname: dbis-api-2.d-bis.org
|
||||
service: http://192.168.11.21:80
|
||||
originRequest:
|
||||
httpHostHeader: dbis-api-2.d-bis.org
|
||||
- hostname: mim4u.org.d-bis.org
|
||||
service: http://192.168.11.21:80
|
||||
originRequest:
|
||||
httpHostHeader: mim4u.org.d-bis.org
|
||||
- hostname: www.mim4u.org.d-bis.org
|
||||
service: http://192.168.11.21:80
|
||||
originRequest:
|
||||
httpHostHeader: www.mim4u.org.d-bis.org
|
||||
- hostname: rpc-http-prv.d-bis.org
|
||||
service: http://192.168.11.21:80
|
||||
originRequest:
|
||||
httpHostHeader: rpc-http-prv.d-bis.org
|
||||
- hostname: rpc-http-pub.d-bis.org
|
||||
service: http://192.168.11.21:80
|
||||
originRequest:
|
||||
httpHostHeader: rpc-http-pub.d-bis.org
|
||||
- hostname: rpc-ws-prv.d-bis.org
|
||||
service: http://192.168.11.21:80
|
||||
originRequest:
|
||||
httpHostHeader: rpc-ws-prv.d-bis.org
|
||||
- hostname: rpc-ws-pub.d-bis.org
|
||||
service: http://192.168.11.21:80
|
||||
originRequest:
|
||||
httpHostHeader: rpc-ws-pub.d-bis.org
|
||||
- service: http_status:404
|
||||
```
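
Before restarting the service, a quick sanity check can catch a missing rule. `check_ingress` below is a hypothetical helper (not part of this repo) that just counts hostnames and catch-all rules in the file; the authoritative validation, where your cloudflared build supports it, is `cloudflared tunnel ingress validate`.

```bash
# Hypothetical helper: count ingress hostnames and catch-all rules in a
# cloudflared config before restarting the service.
check_ingress() {
    local config="${1:-/etc/cloudflared/config.yml}"
    echo "hostnames=$(grep -c 'hostname:' "$config") catchall=$(grep -c 'http_status:404' "$config")"
}

check_ingress /etc/cloudflared/config.yml   # for the config above, expect: hostnames=9 catchall=1
```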

## Step 5: Restart Service

```bash
systemctl restart cloudflared
systemctl status cloudflared
```

## Step 6: Verify

```bash
# Check service status
systemctl status cloudflared

# Check logs
journalctl -u cloudflared -f

# Test endpoints
curl -I https://dbis-admin.d-bis.org
curl -I https://rpc-http-pub.d-bis.org
```
MANUAL_EOF

    echo "📄 Manual instructions saved to: /tmp/tunnel-install-manual.md"
    exit 1
fi

echo "✅ Connected to container"
echo ""

# Step 1: Check cloudflared installation
echo "Step 1: Checking cloudflared installation..."
if ! ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- which cloudflared" >/dev/null 2>&1; then
    echo "⚠️  cloudflared not installed. Installing..."
    ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- bash -c 'apt update && apt install -y cloudflared'" || {
        echo "❌ Failed to install cloudflared"
        exit 1
    }
    echo "✅ cloudflared installed"
else
    echo "✅ cloudflared is installed"
fi
echo ""

# Step 2: Install tunnel service with token
echo "Step 2: Installing tunnel service with token..."
echo "This will create a systemd service for the tunnel."
echo ""

ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- bash -c 'cloudflared service install ${TUNNEL_TOKEN}'" || {
    echo "⚠️  Service install may have failed or the service already exists"
    echo "   Continuing with configuration..."
}
echo ""

# Step 3: Create configuration file
echo "Step 3: Creating tunnel configuration..."
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- bash" << 'CONFIG_EOF'
cat > /etc/cloudflared/config.yml << 'YAML_EOF'
tunnel: 10ab22da-8ea3-4e2e-a896-27ece2211a05
credentials-file: /root/.cloudflared/10ab22da-8ea3-4e2e-a896-27ece2211a05.json

ingress:
  - hostname: dbis-admin.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: dbis-admin.d-bis.org
  - hostname: dbis-api.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: dbis-api.d-bis.org
  - hostname: dbis-api-2.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: dbis-api-2.d-bis.org
  - hostname: mim4u.org.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: mim4u.org.d-bis.org
  - hostname: www.mim4u.org.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: www.mim4u.org.d-bis.org
  - hostname: rpc-http-prv.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: rpc-http-prv.d-bis.org
  - hostname: rpc-http-pub.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: rpc-http-pub.d-bis.org
  - hostname: rpc-ws-prv.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: rpc-ws-prv.d-bis.org
  - hostname: rpc-ws-pub.d-bis.org
    service: http://192.168.11.21:80
    originRequest:
      httpHostHeader: rpc-ws-pub.d-bis.org
  - service: http_status:404

metrics: 127.0.0.1:9090
loglevel: info
gracePeriod: 30s
YAML_EOF

chmod 600 /etc/cloudflared/config.yml
echo "✅ Configuration file created"
CONFIG_EOF

echo ""

# Step 4: Restart service
echo "Step 4: Restarting tunnel service..."
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- systemctl daemon-reload"
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- systemctl restart cloudflared" || \
    ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- systemctl start cloudflared"
sleep 3
echo "✅ Service restarted"
echo ""

# Step 5: Check status
echo "Step 5: Checking service status..."
echo ""
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- systemctl status cloudflared --no-pager -l" || true
echo ""

# Step 6: Show logs
echo "Step 6: Recent logs (last 20 lines)..."
echo ""
ssh root@${PROXMOX_HOST} "pct exec ${VMID} -- journalctl -u cloudflared -n 20 --no-pager" || true
echo ""

echo "═══════════════════════════════════════════════════════════"
echo " Installation Complete"
echo "═══════════════════════════════════════════════════════════"
echo ""
echo "Next steps:"
echo "  1. Wait 1-2 minutes for the tunnel to connect"
echo "  2. Check the Cloudflare Dashboard - the tunnel should show HEALTHY"
echo "  3. Test endpoints:"
echo "     curl -I https://dbis-admin.d-bis.org"
echo "     curl -I https://rpc-http-pub.d-bis.org"
echo ""
echo "If the tunnel is still DOWN:"
echo "  - Check logs: ssh root@${PROXMOX_HOST} 'pct exec ${VMID} -- journalctl -u cloudflared -f'"
echo "  - Verify the credentials file exists: /root/.cloudflared/10ab22da-8ea3-4e2e-a896-27ece2211a05.json"
echo "  - Verify Nginx is accessible at 192.168.11.21:80"
echo ""
307
scripts/list_vms.py
Executable file
@@ -0,0 +1,307 @@
#!/usr/bin/env python3
"""
List all Proxmox VMs with VMID, Name, IP Address, FQDN, and Description.

This script connects to a Proxmox cluster and retrieves comprehensive
information about all virtual machines (both QEMU VMs and LXC containers).
"""

import os
import sys
import json
from typing import Dict, List, Optional, Any
from proxmoxer import ProxmoxAPI


def load_env_file(env_path: str = None) -> dict:
    """Load environment variables from a .env file."""
    if env_path is None:
        env_path = os.path.expanduser('~/.env')

    env_vars = {}
    if os.path.exists(env_path):
        try:
            with open(env_path, 'r') as f:
                for line in f:
                    line = line.strip()
                    # Skip comments and empty lines
                    if not line or line.startswith('#'):
                        continue
                    # Parse KEY=VALUE format
                    if '=' in line:
                        key, value = line.split('=', 1)
                        key = key.strip()
                        value = value.strip().strip('"').strip("'")
                        env_vars[key] = value
        except Exception as e:
            print(f"Warning: Could not load {env_path}: {e}", file=sys.stderr)
    return env_vars


def get_proxmox_connection() -> ProxmoxAPI:
    """Initialize Proxmox API connection from environment variables or config file."""
    # Load from ~/.env file first
    env_vars = load_env_file()

    # Environment variables override the .env file
    host = os.getenv('PROXMOX_HOST', env_vars.get('PROXMOX_HOST', '192.168.6.247'))
    port = int(os.getenv('PROXMOX_PORT', env_vars.get('PROXMOX_PORT', '8006')))
    user = os.getenv('PROXMOX_USER', env_vars.get('PROXMOX_USER', 'root@pam'))
    token_name = os.getenv('PROXMOX_TOKEN_NAME', env_vars.get('PROXMOX_TOKEN_NAME', 'mcpserver'))
    token_value = os.getenv('PROXMOX_TOKEN_VALUE', env_vars.get('PROXMOX_TOKEN_VALUE'))
    password = os.getenv('PROXMOX_PASSWORD', env_vars.get('PROXMOX_PASSWORD'))
    verify_ssl = os.getenv('PROXMOX_VERIFY_SSL', env_vars.get('PROXMOX_VERIFY_SSL', 'false')).lower() == 'true'

    # If no token value, try to load from a JSON config file
    if not token_value and not password:
        config_path = os.getenv('PROXMOX_MCP_CONFIG')
        if config_path and os.path.exists(config_path):
            with open(config_path) as f:
                config = json.load(f)
            host = config.get('proxmox', {}).get('host', host)
            port = config.get('proxmox', {}).get('port', port)
            verify_ssl = config.get('proxmox', {}).get('verify_ssl', verify_ssl)
            user = config.get('auth', {}).get('user', user)
            token_name = config.get('auth', {}).get('token_name', token_name)
            token_value = config.get('auth', {}).get('token_value')
            password = config.get('auth', {}).get('password')

    if not token_value and not password:
        print("Error: PROXMOX_TOKEN_VALUE or PROXMOX_PASSWORD required", file=sys.stderr)
        print("\nCredentials can be provided via:", file=sys.stderr)
        print("  1. Environment variables", file=sys.stderr)
        print("  2. ~/.env file (automatically loaded)", file=sys.stderr)
        print("  3. JSON config file (set PROXMOX_MCP_CONFIG)", file=sys.stderr)
        print("\nExample ~/.env file:", file=sys.stderr)
        print("  PROXMOX_HOST=your-proxmox-host", file=sys.stderr)
        print("  PROXMOX_USER=root@pam", file=sys.stderr)
        print("  PROXMOX_TOKEN_NAME=your-token-name", file=sys.stderr)
        print("  PROXMOX_TOKEN_VALUE=your-token-value", file=sys.stderr)
        sys.exit(1)

    try:
        if token_value:
            proxmox = ProxmoxAPI(
                host=host,
                port=port,
                user=user,
                token_name=token_name,
                token_value=token_value,
                verify_ssl=verify_ssl,
                service='PVE'
            )
        else:
            # Use password authentication
            proxmox = ProxmoxAPI(
                host=host,
                port=port,
                user=user,
                password=password,
                verify_ssl=verify_ssl,
                service='PVE'
            )
        # Test connection
        proxmox.version.get()
        return proxmox
    except Exception as e:
        error_msg = str(e)
        if "ConnectTimeoutError" in error_msg or "timed out" in error_msg:
            print(f"Error: Connection to Proxmox host '{host}:{port}' timed out", file=sys.stderr)
            print(f"  - Verify the host is reachable: ping {host}", file=sys.stderr)
            print(f"  - Check that the firewall allows port {port}", file=sys.stderr)
            print("  - Verify PROXMOX_HOST in ~/.env is correct", file=sys.stderr)
        elif "401" in error_msg or "authentication" in error_msg.lower():
            print("Error: Authentication failed", file=sys.stderr)
            print("  - Verify PROXMOX_TOKEN_VALUE or PROXMOX_PASSWORD in ~/.env", file=sys.stderr)
            print("  - Check user permissions in Proxmox", file=sys.stderr)
        else:
            print(f"Error connecting to Proxmox: {e}", file=sys.stderr)
        sys.exit(1)


def get_vm_ip_address(proxmox: ProxmoxAPI, node: str, vmid: str, vm_type: str) -> str:
    """Get IP address of a VM."""
    ip_addresses = []

    try:
        # Try to get IP from network interfaces via guest agent (for QEMU) or config
        if vm_type == 'qemu':
            # Try QEMU guest agent first
            try:
                interfaces = proxmox.nodes(node).qemu(vmid).agent('network-get-interfaces').get()
                if interfaces and 'result' in interfaces:
                    for iface in interfaces['result']:
                        if 'ip-addresses' in iface:
                            for ip_info in iface['ip-addresses']:
                                if ip_info.get('ip-address-type') == 'ipv4' and not ip_info.get('ip-address', '').startswith('127.'):
                                    ip_addresses.append(ip_info['ip-address'])
            except Exception:
                pass  # Guest agent may not be installed or running

        # Try to get from config (for static IPs or if the guest agent is not available)
        try:
            config = proxmox.nodes(node).qemu(vmid).config.get() if vm_type == 'qemu' else proxmox.nodes(node).lxc(vmid).config.get()
            # Check for IP in network config
            for key, value in config.items():
                if key.startswith('net') and isinstance(value, str):
                    # Parse network config like "virtio=00:11:22:33:44:55,bridge=vmbr0,ip=192.168.1.100/24"
                    if 'ip=' in value:
                        ip_part = value.split('ip=')[1].split(',')[0].split('/')[0]
                        if ip_part and ip_part not in ip_addresses:
                            ip_addresses.append(ip_part)
        except Exception:
            pass

        # For LXC, try to execute the hostname -I command
        if vm_type == 'lxc' and not ip_addresses:
            try:
                result = proxmox.nodes(node).lxc(vmid).exec.post(command='hostname -I')
                if result and 'out' in result:
                    ips = result['out'].strip().split()
                    ip_addresses.extend([ip for ip in ips if not ip.startswith('127.')])
            except Exception:
                pass

    except Exception:
        pass

    return ', '.join(ip_addresses) if ip_addresses else 'N/A'


def get_vm_fqdn(proxmox: ProxmoxAPI, node: str, vmid: str, vm_type: str) -> str:
    """Get FQDN of a VM."""
    fqdn = None

    try:
        # Get hostname from config
        config = proxmox.nodes(node).qemu(vmid).config.get() if vm_type == 'qemu' else proxmox.nodes(node).lxc(vmid).config.get()
        hostname = config.get('hostname') or config.get('name', '').split('.')[0]

        if hostname:
            # Try to get the full FQDN by executing hostname -f
            try:
                if vm_type == 'qemu':
                    result = proxmox.nodes(node).qemu(vmid).agent('exec').post(command={'command': 'hostname -f'})
                else:
                    result = proxmox.nodes(node).lxc(vmid).exec.post(command='hostname -f')

                if result:
                    if vm_type == 'qemu' and 'result' in result and 'out-data' in result['result']:
                        fqdn = result['result']['out-data'].strip()
                    elif vm_type == 'lxc' and 'out' in result:
                        fqdn = result['out'].strip()
            except Exception:
                # Fall back to the hostname from config
                fqdn = hostname
    except Exception:
        pass

    return fqdn if fqdn else 'N/A'


def get_vm_description(proxmox: ProxmoxAPI, node: str, vmid: str, vm_type: str) -> str:
    """Get description of a VM."""
    try:
        config = proxmox.nodes(node).qemu(vmid).config.get() if vm_type == 'qemu' else proxmox.nodes(node).lxc(vmid).config.get()
        return config.get('description', 'N/A')
    except Exception:
        return 'N/A'


def list_all_vms(proxmox: ProxmoxAPI) -> List[Dict[str, Any]]:
    """List all VMs across all nodes."""
    all_vms = []

    try:
        nodes = proxmox.nodes.get()

        for node in nodes:
            node_name = node['node']

            # Get QEMU VMs
            try:
                qemu_vms = proxmox.nodes(node_name).qemu.get()
                for vm in qemu_vms:
                    vmid = str(vm['vmid'])
                    name = vm.get('name', f'VM-{vmid}')

                    # Get additional info
                    description = get_vm_description(proxmox, node_name, vmid, 'qemu')
                    ip_address = get_vm_ip_address(proxmox, node_name, vmid, 'qemu')
                    fqdn = get_vm_fqdn(proxmox, node_name, vmid, 'qemu')

                    all_vms.append({
                        'vmid': vmid,
                        'name': name,
                        'type': 'QEMU',
                        'node': node_name,
                        'status': vm.get('status', 'unknown'),
                        'ip_address': ip_address,
                        'fqdn': fqdn,
                        'description': description
                    })
            except Exception as e:
                print(f"Warning: Could not get QEMU VMs from node {node_name}: {e}", file=sys.stderr)

            # Get LXC containers
            try:
                lxc_vms = proxmox.nodes(node_name).lxc.get()
                for vm in lxc_vms:
                    vmid = str(vm['vmid'])
                    name = vm.get('name', f'CT-{vmid}')

                    # Get additional info
                    description = get_vm_description(proxmox, node_name, vmid, 'lxc')
                    ip_address = get_vm_ip_address(proxmox, node_name, vmid, 'lxc')
                    fqdn = get_vm_fqdn(proxmox, node_name, vmid, 'lxc')

                    all_vms.append({
                        'vmid': vmid,
                        'name': name,
                        'type': 'LXC',
                        'node': node_name,
                        'status': vm.get('status', 'unknown'),
                        'ip_address': ip_address,
                        'fqdn': fqdn,
                        'description': description
                    })
            except Exception as e:
                print(f"Warning: Could not get LXC containers from node {node_name}: {e}", file=sys.stderr)

    except Exception as e:
        print(f"Error listing VMs: {e}", file=sys.stderr)
        sys.exit(1)

    # Sort by VMID
    all_vms.sort(key=lambda x: int(x['vmid']))
    return all_vms


def print_vm_table(vms: List[Dict[str, Any]]):
    """Print VMs in a formatted table."""
    if not vms:
        print("No VMs found.")
        return

    # Calculate column widths (the 'N/A' filter could leave an empty sequence,
    # so seed the description width with the header length)
    col_widths = {
        'vmid': max(len('VMID'), max(len(vm['vmid']) for vm in vms)),
        'name': max(len('Name'), max(len(vm['name']) for vm in vms)),
        'type': max(len('Type'), max(len(vm['type']) for vm in vms)),
        'ip_address': max(len('IP Address'), max(len(vm['ip_address']) for vm in vms)),
        'fqdn': max(len('FQDN'), max(len(vm['fqdn']) for vm in vms)),
        'description': max([len('Description')] + [len(vm['description']) for vm in vms if vm['description'] != 'N/A'])
    }

    # Print header
    header = f"{'VMID':<{col_widths['vmid']}} | {'Name':<{col_widths['name']}} | {'Type':<{col_widths['type']}} | {'IP Address':<{col_widths['ip_address']}} | {'FQDN':<{col_widths['fqdn']}} | {'Description':<{col_widths['description']}}"
    print(header)
    print('-' * len(header))

    # Print VMs
    for vm in vms:
        row = f"{vm['vmid']:<{col_widths['vmid']}} | {vm['name']:<{col_widths['name']}} | {vm['type']:<{col_widths['type']}} | {vm['ip_address']:<{col_widths['ip_address']}} | {vm['fqdn']:<{col_widths['fqdn']}} | {vm['description']:<{col_widths['description']}}"
        print(row)


def main():
    """Main function."""
    proxmox = get_proxmox_connection()
    vms = list_all_vms(proxmox)
    print_vm_table(vms)


if __name__ == '__main__':
    main()
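
The `netX` config parsing inside `get_vm_ip_address` above can be illustrated in isolation. A minimal sketch follows (the helper name is ours, not part of the script); it assumes Proxmox network config strings of the form `virtio=<mac>,bridge=vmbr0,ip=<addr>/<prefix>`:

```python
from typing import Optional

def parse_ip_from_net_config(value: str) -> Optional[str]:
    """Extract the static IP from a Proxmox netX config string, if present."""
    if 'ip=' not in value:
        return None
    # Take the text after 'ip=', drop trailing options and the CIDR prefix
    return value.split('ip=')[1].split(',')[0].split('/')[0]

print(parse_ip_from_net_config("virtio=00:11:22:33:44:55,bridge=vmbr0,ip=192.168.1.100/24"))
# → 192.168.1.100
```

Note that DHCP-configured interfaces (`ip=dhcp`) yield the literal string `dhcp`, which is why the script also consults the guest agent.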
114
scripts/list_vms.sh
Executable file
@@ -0,0 +1,114 @@
#!/bin/bash
# List all Proxmox VMs with VMID, Name, IP Address, FQDN, and Description
# This script uses the pvesh command-line tool and requires SSH access to a Proxmox node

# Configuration
PROXMOX_HOST="${PROXMOX_HOST:-192.168.6.247}"
PROXMOX_USER="${PROXMOX_USER:-root}"
SSH_OPTS="-o StrictHostKeyChecking=no -o ConnectTimeout=5"

# Function to get a VM config value
get_config_value() {
    local node=$1
    local vmid=$2
    local vm_type=$3
    local key=$4
    ssh $SSH_OPTS ${PROXMOX_USER}@${PROXMOX_HOST} \
        "pvesh get /nodes/${node}/${vm_type}/${vmid}/config --output-format json" 2>/dev/null | \
        grep -o "\"${key}\":\"[^\"]*\"" | cut -d'"' -f4
}

# Function to get the IP address of a QEMU VM
get_qemu_ip() {
    local node=$1
    local vmid=$2
    ssh $SSH_OPTS ${PROXMOX_USER}@${PROXMOX_HOST} \
        "pvesh get /nodes/${node}/qemu/${vmid}/agent/network-get-interfaces --output-format json" 2>/dev/null | \
        python3 -c "
import sys, json
try:
    data = json.load(sys.stdin)
    if 'result' in data:
        for iface in data['result']:
            if 'ip-addresses' in iface:
                for ip_info in iface['ip-addresses']:
                    if ip_info.get('ip-address-type') == 'ipv4' and not ip_info.get('ip-address', '').startswith('127.'):
                        print(ip_info['ip-address'])
                        sys.exit(0)
except:
    pass
" 2>/dev/null | head -1
}

# Function to get the IP address of an LXC container
get_lxc_ip() {
    local vmid=$1
    ssh $SSH_OPTS ${PROXMOX_USER}@${PROXMOX_HOST} \
        "pct exec ${vmid} -- hostname -I 2>/dev/null" 2>/dev/null | \
        awk '{for(i=1;i<=NF;i++) if($i !~ /^127\./) {print $i; exit}}'
}

# Print header
printf "%-6s | %-23s | %-4s | %-19s | %-23s | %s\n" "VMID" "Name" "Type" "IP Address" "FQDN" "Description"
echo "-------|-------------------------|------|-------------------|-------------------------|----------------"

# Get all nodes
NODES=$(ssh $SSH_OPTS ${PROXMOX_USER}@${PROXMOX_HOST} "pvesh get /nodes --output-format json" 2>/dev/null | \
    python3 -c "import sys, json; [print(n['node']) for n in json.load(sys.stdin)]" 2>/dev/null)

if [ -z "$NODES" ]; then
    echo "Error: Could not connect to Proxmox host ${PROXMOX_HOST}" >&2
    echo "Usage: PROXMOX_HOST=your-host PROXMOX_USER=root ./list_vms.sh" >&2
    exit 1
fi

# Process each node
for NODE in $NODES; do
    # Get QEMU VMs (note: ${NODE} is expanded by the shell before python runs)
    ssh $SSH_OPTS ${PROXMOX_USER}@${PROXMOX_HOST} \
        "pvesh get /nodes/${NODE}/qemu --output-format json" 2>/dev/null | \
        python3 -c "
import sys, json
try:
    vms = json.load(sys.stdin)
    for vm in sorted(vms, key=lambda x: int(x.get('vmid', 0))):
        print(f\"{vm.get('vmid')}|{vm.get('name', 'N/A')}|qemu|${NODE}\")
except:
    pass
" 2>/dev/null | while IFS='|' read -r vmid name vm_type node; do
        description=$(get_config_value "$node" "$vmid" "$vm_type" "description")
        hostname=$(get_config_value "$node" "$vmid" "$vm_type" "hostname")
        ip=$(get_qemu_ip "$node" "$vmid")

        description=${description:-N/A}
        fqdn=${hostname:-N/A}
        ip=${ip:-N/A}

        printf "%-6s | %-23s | %-4s | %-19s | %-23s | %s\n" \
            "$vmid" "$name" "QEMU" "$ip" "$fqdn" "$description"
    done

    # Get LXC containers
    ssh $SSH_OPTS ${PROXMOX_USER}@${PROXMOX_HOST} \
        "pvesh get /nodes/${NODE}/lxc --output-format json" 2>/dev/null | \
        python3 -c "
import sys, json
try:
    vms = json.load(sys.stdin)
    for vm in sorted(vms, key=lambda x: int(x.get('vmid', 0))):
        print(f\"{vm.get('vmid')}|{vm.get('name', 'N/A')}|lxc|${NODE}\")
except:
    pass
" 2>/dev/null | while IFS='|' read -r vmid name vm_type node; do
        description=$(get_config_value "$node" "$vmid" "$vm_type" "description")
        hostname=$(get_config_value "$node" "$vmid" "$vm_type" "hostname")
        ip=$(get_lxc_ip "$vmid")

        description=${description:-N/A}
        fqdn=${hostname:-N/A}
        ip=${ip:-N/A}

        printf "%-6s | %-23s | %-4s | %-19s | %-23s | %s\n" \
            "$vmid" "$name" "LXC" "$ip" "$fqdn" "$description"
    done
done
159
scripts/list_vms_with_tunnels.py
Executable file
@@ -0,0 +1,159 @@
#!/usr/bin/env python3
"""
List Proxmox VMs using tunnel information from .env and tunnel configs.

This script attempts multiple connection methods:
1. Direct connection (if on the same network)
2. Via Cloudflare tunnel URLs (for web access)
3. Via SSH tunnel (if SSH access is available)
"""

import os
import sys
import json
import subprocess
from typing import Dict, List, Optional, Any
from proxmoxer import ProxmoxAPI

# Tunnel URL mappings from tunnel configs
TUNNEL_URLS = {
    '192.168.11.10': 'ml110-01.d-bis.org',
    '192.168.11.11': 'r630-01.d-bis.org',
    '192.168.11.12': 'r630-02.d-bis.org',
}


def load_env_file(env_path: str = None) -> dict:
    """Load environment variables from a .env file."""
    if env_path is None:
        env_path = os.path.expanduser('~/.env')

    env_vars = {}
    if os.path.exists(env_path):
        try:
            with open(env_path, 'r') as f:
                for line in f:
                    line = line.strip()
                    if not line or line.startswith('#'):
                        continue
                    if '=' in line:
                        key, value = line.split('=', 1)
                        key = key.strip()
                        value = value.strip().strip('"').strip("'")
                        env_vars[key] = value
        except Exception as e:
            print(f"Warning: Could not load {env_path}: {e}", file=sys.stderr)
    return env_vars


def test_connection(host: str, port: int = 8006, timeout: int = 5) -> bool:
    """Test if host:port is reachable."""
    try:
        result = subprocess.run(
            ['timeout', str(timeout), 'bash', '-c', f'echo > /dev/tcp/{host}/{port}'],
            capture_output=True,
            timeout=timeout + 1
        )
        return result.returncode == 0
    except Exception:
        return False


def get_proxmox_connection() -> Optional[ProxmoxAPI]:
    """Initialize Proxmox API connection with tunnel awareness."""
    env_vars = load_env_file()

    host = os.getenv('PROXMOX_HOST', env_vars.get('PROXMOX_HOST', '192.168.11.10'))
    port = int(os.getenv('PROXMOX_PORT', env_vars.get('PROXMOX_PORT', '8006')))
    user = os.getenv('PROXMOX_USER', env_vars.get('PROXMOX_USER', 'root@pam'))
    token_name = os.getenv('PROXMOX_TOKEN_NAME', env_vars.get('PROXMOX_TOKEN_NAME', 'mcpserver'))
    token_value = os.getenv('PROXMOX_TOKEN_VALUE', env_vars.get('PROXMOX_TOKEN_VALUE'))
    password = os.getenv('PROXMOX_PASSWORD', env_vars.get('PROXMOX_PASSWORD'))
    verify_ssl = os.getenv('PROXMOX_VERIFY_SSL', env_vars.get('PROXMOX_VERIFY_SSL', 'false')).lower() == 'true'

    # Check tunnel URL for this host
    tunnel_url = TUNNEL_URLS.get(host)

    print("🔍 Connection Analysis", file=sys.stderr)
    print(f"   Target Host: {host}:{port}", file=sys.stderr)
    if tunnel_url:
        print(f"   Tunnel URL: https://{tunnel_url}", file=sys.stderr)
    print(f"   User: {user}", file=sys.stderr)
    print("", file=sys.stderr)

    # Test direct connection
    print(f"Testing direct connection to {host}:{port}...", file=sys.stderr)
    if test_connection(host, port):
        print("✅ Direct connection available", file=sys.stderr)
    else:
        print("❌ Direct connection failed", file=sys.stderr)
        if tunnel_url:
            print(f"💡 Tunnel available: https://{tunnel_url}", file=sys.stderr)
            print("   Note: Tunnel URLs work for the web UI, not the API", file=sys.stderr)
        print("", file=sys.stderr)
        print("Solutions:", file=sys.stderr)
        print(f"  1. Use an SSH tunnel: ssh -L 8006:{host}:8006 user@{host}", file=sys.stderr)
        print("  2. Run the script from the Proxmox network (192.168.11.0/24)", file=sys.stderr)
        if tunnel_url:
            print(f"  3. Access the web UI via tunnel: https://{tunnel_url}", file=sys.stderr)
        return None

    if not token_value and not password:
        print("Error: PROXMOX_TOKEN_VALUE or PROXMOX_PASSWORD required", file=sys.stderr)
        return None

    try:
        if token_value:
            proxmox = ProxmoxAPI(
                host=host,
                port=port,
                user=user,
                token_name=token_name,
                token_value=token_value,
                verify_ssl=verify_ssl,
                service='PVE'
            )
        else:
            proxmox = ProxmoxAPI(
                host=host,
                port=port,
                user=user,
                password=password,
                verify_ssl=verify_ssl,
                service='PVE'
            )
        # Test connection
        proxmox.version.get()
        print("✅ Connected successfully!", file=sys.stderr)
        return proxmox
    except Exception as e:
        error_msg = str(e)
        if "ConnectTimeoutError" in error_msg or "timed out" in error_msg:
            print("❌ Connection timeout", file=sys.stderr)
        elif "401" in error_msg or "authentication" in error_msg.lower():
            print("❌ Authentication failed", file=sys.stderr)
        else:
            print(f"❌ Connection error: {e}", file=sys.stderr)
        return None


def main():
    """Main function."""
    proxmox = get_proxmox_connection()

    if not proxmox:
        print("\n" + "=" * 60, file=sys.stderr)
        print("Cannot establish API connection.", file=sys.stderr)
        print("=" * 60, file=sys.stderr)
        print("\nAlternative: Use the original list_vms.py script", file=sys.stderr)
        print("from a machine on the Proxmox network (192.168.11.0/24)", file=sys.stderr)
        sys.exit(1)

    # If the connection succeeded, reuse the listing functions from list_vms.py
    try:
        from list_vms import list_all_vms, print_vm_table
        vms = list_all_vms(proxmox)
        print_vm_table(vms)
    except ImportError:
        print("Error: Could not import list_vms functions", file=sys.stderr)
        sys.exit(1)


if __name__ == '__main__':
    main()
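
`test_connection` above shells out to bash's `/dev/tcp` redirection trick. A pure-Python equivalent using only the standard library is sketched below (our suggestion, not part of the script); it avoids spawning a subprocess and works where `/dev/tcp` is unavailable:

```python
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Like the bash version, this only proves TCP reachability; it says nothing about whether the Proxmox API will accept the credentials.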
254
scripts/migrate-thin1-r630-02.sh
Executable file
@@ -0,0 +1,254 @@
#!/usr/bin/env bash
# Migrate containers from r630-02 thin1-r630-02 to other thin pools
# This addresses the critical storage issue where thin1-r630-02 is at 97.78% capacity

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
LOG_DIR="${PROJECT_ROOT}/logs/migrations"
LOG_FILE="${LOG_DIR}/migrate-thin1-r630-02_$(date +%Y%m%d_%H%M%S).log"

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'

log_info()    { echo -e "${BLUE}[INFO]${NC} $1" | tee -a "$LOG_FILE"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1" | tee -a "$LOG_FILE"; }
log_warn()    { echo -e "${YELLOW}[⚠]${NC} $1" | tee -a "$LOG_FILE"; }
log_error()   { echo -e "${RED}[✗]${NC} $1" | tee -a "$LOG_FILE"; }
log_header()  { echo -e "${CYAN}=== $1 ===${NC}" | tee -a "$LOG_FILE"; }

# Create log directory
mkdir -p "$LOG_DIR"

# Node configuration
NODE="r630-02"
NODE_IP="192.168.11.12"
NODE_PASSWORD="password"
SOURCE_STORAGE="thin1-r630-02"

# Target storage pools (all empty and available)
TARGET_POOLS=("thin2" "thin3" "thin5" "thin6")
CURRENT_POOL_INDEX=0

# SSH helper
ssh_node() {
    sshpass -p "$NODE_PASSWORD" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=10 root@"$NODE_IP" "$@"
}

# Get next target storage pool (round-robin)
get_next_target_pool() {
    local pool="${TARGET_POOLS[$CURRENT_POOL_INDEX]}"
    CURRENT_POOL_INDEX=$(( (CURRENT_POOL_INDEX + 1) % ${#TARGET_POOLS[@]} ))
    echo "$pool"
}
|
||||
|
||||
# Check if container is running
|
||||
is_container_running() {
|
||||
local vmid="$1"
|
||||
ssh_node "pct status $vmid 2>/dev/null | grep -q 'running'" && return 0 || return 1
|
||||
}
|
||||
|
||||
# Get container storage info
|
||||
get_container_storage() {
|
||||
local vmid="$1"
|
||||
ssh_node "pct config $vmid 2>/dev/null | grep '^rootfs:' | awk -F: '{print \$2}' | awk '{print \$1}'" | cut -d: -f1
|
||||
}
|
||||
|
||||
# Migrate a container
|
||||
migrate_container() {
|
||||
local vmid="$1"
|
||||
local target_storage="$2"
|
||||
local container_name="$3"
|
||||
|
||||
log_info "Migrating container $vmid ($container_name) to $target_storage..."
|
||||
|
||||
# Check if container is running
|
||||
local was_running=false
|
||||
if is_container_running "$vmid"; then
|
||||
was_running=true
|
||||
log_info "Container $vmid is running, will stop before migration..."
|
||||
fi
|
||||
|
||||
# Stop container if running
|
||||
if [ "$was_running" = true ]; then
|
||||
log_info "Stopping container $vmid..."
|
||||
ssh_node "pct stop $vmid" || {
|
||||
log_error "Failed to stop container $vmid"
|
||||
return 1
|
||||
}
|
||||
sleep 2
|
||||
fi
|
||||
|
||||
# Perform migration using move-volume (for same-node storage migration)
|
||||
log_info "Moving container $vmid disk from $SOURCE_STORAGE to $target_storage..."
|
||||
|
||||
# Use pct move-volume for same-node storage migration
|
||||
# Syntax: pct move-volume <vmid> <volume> [<storage>]
|
||||
# volume is "rootfs" for the root filesystem
|
||||
if ssh_node "pct move-volume $vmid rootfs $target_storage" >> "$LOG_FILE" 2>&1; then
|
||||
log_success "Container $vmid disk moved successfully to $target_storage"
|
||||
|
||||
# Start container if it was running before
|
||||
if [ "$was_running" = true ]; then
|
||||
log_info "Starting container $vmid..."
|
||||
sleep 2
|
||||
ssh_node "pct start $vmid" || log_warn "Failed to start container $vmid (may need manual start)"
|
||||
fi
|
||||
|
||||
return 0
|
||||
else
|
||||
log_error "Failed to move container $vmid disk"
|
||||
# Try to start container if it was stopped
|
||||
if [ "$was_running" = true ]; then
|
||||
log_info "Attempting to restart container $vmid..."
|
||||
ssh_node "pct start $vmid" || true
|
||||
fi
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Verify migration
|
||||
verify_migration() {
|
||||
local vmid="$1"
|
||||
local target_storage="$2"
|
||||
|
||||
local current_storage=$(get_container_storage "$vmid")
|
||||
if [ "$current_storage" = "$target_storage" ]; then
|
||||
log_success "Verified: Container $vmid is now on $target_storage"
|
||||
return 0
|
||||
else
|
||||
log_error "Verification failed: Container $vmid storage is $current_storage (expected $target_storage)"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Main migration function
|
||||
main() {
|
||||
log_header "Migration: thin1-r630-02 to Other Thin Pools"
|
||||
echo ""
|
||||
|
||||
log_info "Source Node: $NODE ($NODE_IP)"
|
||||
log_info "Source Storage: $SOURCE_STORAGE"
|
||||
log_info "Target Storage Pools: ${TARGET_POOLS[*]}"
|
||||
echo ""
|
||||
|
||||
# Get list of containers on source storage
|
||||
log_info "Identifying containers on $SOURCE_STORAGE..."
|
||||
local containers=$(ssh_node "pvesm list $SOURCE_STORAGE 2>/dev/null | tail -n +2 | awk '{print \$NF}' | sort -u")
|
||||
|
||||
if [ -z "$containers" ]; then
|
||||
log_warn "No containers found on $SOURCE_STORAGE"
|
||||
return 0
|
||||
fi
|
||||
|
||||
local container_list=($containers)
|
||||
|
||||
# Filter out containers that are already on different storage
|
||||
local containers_to_migrate=()
|
||||
for vmid in "${container_list[@]}"; do
|
||||
local current_storage=$(ssh_node "pct config $vmid 2>/dev/null | grep '^rootfs:' | awk '{print \$2}' | cut -d: -f1")
|
||||
if [ "$current_storage" = "$SOURCE_STORAGE" ]; then
|
||||
containers_to_migrate+=("$vmid")
|
||||
else
|
||||
log_info "Container $vmid is already on $current_storage, skipping..."
|
||||
fi
|
||||
done
|
||||
|
||||
if [ ${#containers_to_migrate[@]} -eq 0 ]; then
|
||||
log_success "All containers have been migrated from $SOURCE_STORAGE"
|
||||
return 0
|
||||
fi
|
||||
|
||||
log_info "Found ${#containers_to_migrate[@]} containers to migrate: ${containers_to_migrate[*]}"
|
||||
echo ""
|
||||
|
||||
# Update container_list to only include containers that need migration
|
||||
container_list=("${containers_to_migrate[@]}")
|
||||
|
||||
# Get container names
|
||||
declare -A container_names
|
||||
for vmid in "${container_list[@]}"; do
|
||||
local name=$(ssh_node "pct config $vmid 2>/dev/null | grep '^hostname:' | awk '{print \$2}'" || echo "unknown")
|
||||
container_names[$vmid]="$name"
|
||||
done
|
||||
|
||||
# Check storage availability
|
||||
log_info "Checking target storage availability..."
|
||||
for pool in "${TARGET_POOLS[@]}"; do
|
||||
local available=$(ssh_node "pvesm status $pool 2>/dev/null | awk 'NR==2 {print \$6}'" || echo "0")
|
||||
log_info " $pool: Available"
|
||||
done
|
||||
echo ""
|
||||
|
||||
# Confirm migration
|
||||
log_warn "This will migrate ${#container_list[@]} containers from $SOURCE_STORAGE"
|
||||
log_info "Containers to migrate:"
|
||||
for vmid in "${container_list[@]}"; do
|
||||
log_info " - VMID $vmid: ${container_names[$vmid]}"
|
||||
done
|
||||
echo ""
|
||||
|
||||
# Check for --yes flag for non-interactive mode
|
||||
local auto_confirm=false
|
||||
if [[ "${1:-}" == "--yes" ]] || [[ "${1:-}" == "-y" ]]; then
|
||||
auto_confirm=true
|
||||
log_info "Auto-confirm mode enabled"
|
||||
fi
|
||||
|
||||
if [ "$auto_confirm" = false ]; then
|
||||
read -p "Continue with migration? (yes/no): " confirm
|
||||
if [ "$confirm" != "yes" ]; then
|
||||
log_info "Migration cancelled by user"
|
||||
return 0
|
||||
fi
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Perform migrations
|
||||
local success_count=0
|
||||
local fail_count=0
|
||||
|
||||
for vmid in "${container_list[@]}"; do
|
||||
local target_pool=$(get_next_target_pool)
|
||||
local container_name="${container_names[$vmid]}"
|
||||
|
||||
log_header "Migrating Container $vmid ($container_name)"
|
||||
|
||||
if migrate_container "$vmid" "$target_pool" "$container_name"; then
|
||||
if verify_migration "$vmid" "$target_pool"; then
|
||||
((success_count++))
|
||||
else
|
||||
((fail_count++))
|
||||
fi
|
||||
else
|
||||
((fail_count++))
|
||||
fi
|
||||
|
||||
echo ""
|
||||
done
|
||||
|
||||
# Final summary
|
||||
log_header "Migration Summary"
|
||||
log_info "Total containers: ${#container_list[@]}"
|
||||
log_success "Successfully migrated: $success_count"
|
||||
if [ $fail_count -gt 0 ]; then
|
||||
log_error "Failed migrations: $fail_count"
|
||||
fi
|
||||
|
||||
# Check final storage status
|
||||
echo ""
|
||||
log_info "Final storage status:"
|
||||
ssh_node "pvesm status | grep -E '(thin1-r630-02|thin2|thin3|thin5|thin6)'" | tee -a "$LOG_FILE"
|
||||
|
||||
echo ""
|
||||
log_success "Migration complete! Log saved to: $LOG_FILE"
|
||||
}
|
||||
|
||||
# Run main function
|
||||
main "$@"
|
||||
204
scripts/organize-docs-directory.sh
Executable file
@@ -0,0 +1,204 @@
#!/bin/bash
# Organize Documentation Directory Files
# Moves files from docs/ root to appropriate directories

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
cd "$PROJECT_ROOT"

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m'

# Dry-run mode (default: true)
DRY_RUN="${1:---dry-run}"

# Log file
LOG_FILE="docs/DOCS_ORGANIZATION_$(date +%Y%m%d_%H%M%S).log"
MOVED_COUNT=0
SKIPPED_COUNT=0
ERROR_COUNT=0

log() {
    echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1" | tee -a "$LOG_FILE"
}

success() {
    echo -e "${GREEN}[OK]${NC} $1" | tee -a "$LOG_FILE"
}

warn() {
    echo -e "${YELLOW}[WARN]${NC} $1" | tee -a "$LOG_FILE"
}

error() {
    echo -e "${RED}[ERROR]${NC} $1" | tee -a "$LOG_FILE"
    ERROR_COUNT=$((ERROR_COUNT + 1))
}

move_file() {
    local source="$1"
    local dest="$2"
    local description="${3:-}"

    if [ ! -f "$source" ]; then
        warn "File not found: $source"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        return
    fi

    # Create destination directory if it doesn't exist
    local dest_dir=$(dirname "$dest")
    if [ "$DRY_RUN" != "--dry-run" ]; then
        mkdir -p "$dest_dir"
    fi

    # Check if destination already exists
    if [ -f "$dest" ]; then
        warn "Destination already exists: $dest (skipping $source)"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        return
    fi

    if [ "$DRY_RUN" == "--dry-run" ]; then
        log "Would move: $source → $dest $description"
    else
        if mv "$source" "$dest" 2>>"$LOG_FILE"; then
            success "Moved: $source → $dest $description"
            MOVED_COUNT=$((MOVED_COUNT + 1))
        else
            error "Failed to move: $source → $dest"
        fi
    fi
}

# Create necessary directories
create_directories() {
    if [ "$DRY_RUN" != "--dry-run" ]; then
        mkdir -p docs/00-meta
        mkdir -p docs/archive/reports
        mkdir -p docs/archive/issues
        mkdir -p docs/bridge/contracts
        mkdir -p docs/04-configuration/metamask
        mkdir -p docs/scripts
    fi
}

log "╔══════════════════════════════════════════════════════════╗"
log "║        Documentation Directory Files Organization        ║"
log "╚══════════════════════════════════════════════════════════╝"
log ""
log "Mode: $DRY_RUN"
log "Project Root: $PROJECT_ROOT"
log "Log File: $LOG_FILE"
log ""

create_directories

log "=== Moving Documentation Meta Files to docs/00-meta/ ==="
for file in \
    CONTRIBUTOR_GUIDELINES.md \
    DOCUMENTATION_ENHANCEMENTS_RECOMMENDATIONS.md \
    DOCUMENTATION_FIXES_COMPLETE.md \
    DOCUMENTATION_QUALITY_REVIEW.md \
    DOCUMENTATION_RELATIONSHIP_MAP.md \
    DOCUMENTATION_REORGANIZATION_COMPLETE.md \
    DOCUMENTATION_REVIEW.md \
    DOCUMENTATION_STYLE_GUIDE.md \
    DOCUMENTATION_UPGRADE_SUMMARY.md \
    MARKDOWN_FILE_MAINTENANCE_GUIDE.md; do
    [ -f "docs/$file" ] || continue
    move_file "docs/$file" "docs/00-meta/$file" "(documentation meta)"
done

log ""
log "=== Moving Reports to docs/archive/reports/ ==="
for file in \
    PROXMOX_CLUSTER_STORAGE_STATUS_REPORT.md \
    PROXMOX_SSL_CERTIFICATE_FIX.md \
    PROXMOX_SSL_FIX_VERIFIED.md \
    SSL_CERTIFICATE_ERROR_596_FIX.md \
    SSL_FIX_FOR_EACH_HOST.md; do
    [ -f "docs/$file" ] || continue
    move_file "docs/$file" "docs/archive/reports/$file" "(report)"
done

log ""
log "=== Moving Issue Tracking to docs/archive/issues/ ==="
for file in \
    OUTSTANDING_ISSUES_RESOLUTION_GUIDE.md \
    OUTSTANDING_ISSUES_SUMMARY.md; do
    [ -f "docs/$file" ] || continue
    move_file "docs/$file" "docs/archive/issues/$file" "(issue tracking)"
done

log ""
log "=== Moving Solidity Files to docs/bridge/contracts/ ==="
for file in \
    CCIPWETH9Bridge_flattened.sol \
    CCIPWETH9Bridge_standard_json.json \
    CCIPWETH9Bridge_standard_json_generated.json; do
    [ -f "docs/$file" ] || continue
    move_file "docs/$file" "docs/bridge/contracts/$file" "(Solidity contract)"
done

log ""
log "=== Moving Metamask Config Files to docs/04-configuration/metamask/ ==="
for file in \
    METAMASK_NETWORK_CONFIG.json \
    METAMASK_TOKEN_LIST.json \
    METAMASK_TOKEN_LIST.tokenlist.json; do
    [ -f "docs/$file" ] || continue
    move_file "docs/$file" "docs/04-configuration/metamask/$file" "(Metamask config)"
done

log ""
log "=== Moving Scripts to docs/scripts/ ==="
for file in \
    organize-standalone-files.sh \
    organize_files.py; do
    [ -f "docs/$file" ] || continue
    move_file "docs/$file" "docs/scripts/$file" "(script)"
done

log ""
log "=== Files Staying in Root ==="
log "✅ README.md - Main documentation index"
log "✅ MASTER_INDEX.md - Master index"
log "✅ SEARCH_GUIDE.md - Search guide (useful in root)"

log ""
log "╔══════════════════════════════════════════════════════════╗"
log "║                  Organization Complete                   ║"
log "╚══════════════════════════════════════════════════════════╝"
log ""
log "Summary:"
log "  Files Moved:   $MOVED_COUNT"
log "  Files Skipped: $SKIPPED_COUNT"
log "  Errors:        $ERROR_COUNT"
log ""

if [ "$DRY_RUN" == "--dry-run" ]; then
    log "⚠️  DRY RUN MODE - No files were actually moved"
    log ""
    log "To execute the moves, run:"
    log "  $0 --execute"
else
    log "✅ Files have been moved successfully"
    log "Log file: $LOG_FILE"

    # Move log file to logs directory
    if [ -f "$LOG_FILE" ]; then
        mkdir -p logs
        mv "$LOG_FILE" "logs/"
        log "Log file moved to: logs/$(basename "$LOG_FILE")"
    fi
fi

log ""
exit 0
202
scripts/organize-root-files.sh
Executable file
@@ -0,0 +1,202 @@
#!/bin/bash
# Organize Root Directory Files
# Moves files from root to appropriate directories based on type

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
cd "$PROJECT_ROOT"

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m'

# Dry-run mode (default: true)
DRY_RUN="${1:---dry-run}"

# Log file
LOG_FILE="ROOT_FILES_ORGANIZATION_$(date +%Y%m%d_%H%M%S).log"
MOVED_COUNT=0
SKIPPED_COUNT=0
ERROR_COUNT=0

log() {
    echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1" | tee -a "$LOG_FILE"
}

success() {
    echo -e "${GREEN}[OK]${NC} $1" | tee -a "$LOG_FILE"
}

warn() {
    echo -e "${YELLOW}[WARN]${NC} $1" | tee -a "$LOG_FILE"
}

error() {
    echo -e "${RED}[ERROR]${NC} $1" | tee -a "$LOG_FILE"
    ERROR_COUNT=$((ERROR_COUNT + 1))
}

move_file() {
    local source="$1"
    local dest="$2"
    local description="${3:-}"

    if [ ! -f "$source" ]; then
        warn "File not found: $source"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        return
    fi

    # Create destination directory if it doesn't exist
    local dest_dir=$(dirname "$dest")
    if [ "$DRY_RUN" != "--dry-run" ]; then
        mkdir -p "$dest_dir"
    fi

    # Check if destination already exists
    if [ -f "$dest" ]; then
        warn "Destination already exists: $dest (skipping $source)"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        return
    fi

    if [ "$DRY_RUN" == "--dry-run" ]; then
        log "Would move: $source → $dest $description"
    else
        if mv "$source" "$dest" 2>>"$LOG_FILE"; then
            success "Moved: $source → $dest $description"
            MOVED_COUNT=$((MOVED_COUNT + 1))
        else
            error "Failed to move: $source → $dest"
        fi
    fi
}

# Create necessary directories
create_directories() {
    if [ "$DRY_RUN" != "--dry-run" ]; then
        mkdir -p logs
        mkdir -p reports/inventory
        mkdir -p examples
    fi
}

log "╔══════════════════════════════════════════════════════════╗"
log "║            Root Directory Files Organization             ║"
log "╚══════════════════════════════════════════════════════════╝"
log ""
log "Mode: $DRY_RUN"
log "Project Root: $PROJECT_ROOT"
log "Log File: $LOG_FILE"
log ""

create_directories

log "=== Moving Log Files to logs/ ==="
for file in *.log; do
    [ -f "$file" ] || continue
    # Skip the log file we're currently writing to
    [ "$file" == "$LOG_FILE" ] && continue
    move_file "$file" "logs/$file" "(log file)"
done

log ""
log "=== Moving CSV Inventory Files to reports/inventory/ ==="
for file in container_inventory_*.csv; do
    [ -f "$file" ] || continue
    move_file "$file" "reports/inventory/$file" "(inventory CSV)"
done

log ""
log "=== Moving Shell Scripts to scripts/ ==="
# List of shell scripts to move (excluding scripts already in scripts/)
for file in *.sh; do
    [ -f "$file" ] || continue
    # Skip if already in scripts/ directory
    if [ -f "scripts/$file" ]; then
        warn "Script already exists in scripts/: $file (skipping)"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        continue
    fi
    move_file "$file" "scripts/$file" "(shell script)"
done

log ""
log "=== Moving Python Scripts to scripts/ ==="
for file in *.py; do
    [ -f "$file" ] || continue
    # Skip if already in scripts/ directory
    if [ -f "scripts/$file" ]; then
        warn "Script already exists in scripts/: $file (skipping)"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        continue
    fi
    move_file "$file" "scripts/$file" "(Python script)"
done

log ""
log "=== Moving JavaScript Files to scripts/ ==="
for file in *.js; do
    [ -f "$file" ] || continue
    # Skip if already in scripts/ directory
    if [ -f "scripts/$file" ]; then
        warn "Script already exists in scripts/: $file (skipping)"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        continue
    fi
    # Skip package.json and lock files (these should stay in root)
    if [[ "$file" == "package.json" ]] || [[ "$file" == "pnpm-lock.yaml" ]] || [[ "$file" == "token-list.json" ]]; then
        continue
    fi
    move_file "$file" "scripts/$file" "(JavaScript script)"
done

log ""
log "=== Moving HTML Files to examples/ ==="
for file in *.html; do
    [ -f "$file" ] || continue
    move_file "$file" "examples/$file" "(HTML example)"
done

log ""
log "=== Moving JSON Reports to reports/ ==="
for file in CONTENT_INCONSISTENCIES.json MARKDOWN_ANALYSIS.json REFERENCE_FIXES_REPORT.json; do
    [ -f "$file" ] || continue
    move_file "$file" "reports/$file" "(JSON report)"
done

log ""
log "=== Moving Text Reports to reports/ ==="
for file in CONVERSION_SUMMARY.txt; do
    [ -f "$file" ] || continue
    move_file "$file" "reports/$file" "(text report)"
done

log ""
log "╔══════════════════════════════════════════════════════════╗"
log "║                  Organization Complete                   ║"
log "╚══════════════════════════════════════════════════════════╝"
log ""
log "Summary:"
log "  Files Moved:   $MOVED_COUNT"
log "  Files Skipped: $SKIPPED_COUNT"
log "  Errors:        $ERROR_COUNT"
log ""

if [ "$DRY_RUN" == "--dry-run" ]; then
    log "⚠️  DRY RUN MODE - No files were actually moved"
    log ""
    log "To execute the moves, run:"
    log "  $0 --execute"
else
    log "✅ Files have been moved successfully"
    log "Log file: $LOG_FILE"
fi

log ""
exit 0
205
scripts/query-omada-devices.js
Executable file
@@ -0,0 +1,205 @@
#!/usr/bin/env node

/**
 * Query Omada Controller devices and configuration
 * Uses admin credentials to connect and list hardware
 */

import { OmadaClient, DevicesService, NetworksService, SitesService } from './omada-api/dist/index.js';
import { readFileSync } from 'fs';
import { join } from 'path';
import { homedir } from 'os';

// Load environment variables
const envPath = join(homedir(), '.env');
let envVars = {};

try {
  const envFile = readFileSync(envPath, 'utf8');
  envFile.split('\n').forEach(line => {
    if (line.includes('=') && !line.trim().startsWith('#')) {
      const [key, ...values] = line.split('=');
      if (key && /^[A-Z_][A-Z0-9_]*$/.test(key.trim())) {
        let value = values.join('=').trim();
        if ((value.startsWith('"') && value.endsWith('"')) ||
            (value.startsWith("'") && value.endsWith("'"))) {
          value = value.slice(1, -1);
        }
        envVars[key.trim()] = value;
      }
    }
  });
} catch (error) {
  console.error('Error loading .env file:', error.message);
  process.exit(1);
}

// Get configuration - use admin credentials if available, otherwise fall back to API_KEY/SECRET
const baseUrl = envVars.OMADA_CONTROLLER_URL || 'https://192.168.11.8:8043';
const username = envVars.OMADA_ADMIN_USERNAME || envVars.OMADA_API_KEY;
const password = envVars.OMADA_ADMIN_PASSWORD || envVars.OMADA_API_SECRET;
const siteId = envVars.OMADA_SITE_ID;
const verifySSL = envVars.OMADA_VERIFY_SSL !== 'false';

if (!username || !password) {
  console.error('Error: Missing credentials');
  console.error('Required: OMADA_ADMIN_USERNAME and OMADA_ADMIN_PASSWORD (or OMADA_API_KEY/OMADA_API_SECRET)');
  process.exit(1);
}

console.log('=== Omada Controller Device Query ===\n');
console.log(`Controller URL: ${baseUrl}`);
console.log(`Site ID: ${siteId || 'auto-detect'}`);
console.log(`SSL Verification: ${verifySSL}\n`);

// Create client with admin credentials (using clientId/clientSecret fields for username/password)
const client = new OmadaClient({
  baseUrl,
  clientId: username,
  clientSecret: password,
  siteId,
  verifySSL,
});

async function queryDevices() {
  try {
    console.log('1. Authenticating...');
    const auth = client.getAuth();
    const token = await auth.getAccessToken();
    console.log(`   ✓ Authentication successful\n`);

    console.log('2. Getting site information...');
    const sitesService = new SitesService(client);
    const sites = await sitesService.listSites();
    console.log(`   ✓ Found ${sites.length} site(s)`);
    sites.forEach((site, index) => {
      console.log(`   Site ${index + 1}: ${site.name} (${site.id})`);
      if (site.id === siteId) {
        console.log(`     ^ Using this site`);
      }
    });
    console.log('');

    const effectiveSiteId = siteId || (sites[0]?.id);
    if (!effectiveSiteId) {
      throw new Error('No site ID available');
    }
    console.log(`3. Using site ID: ${effectiveSiteId}\n`);

    console.log('4. Listing all devices...');
    const devicesService = new DevicesService(client);
    const devices = await devicesService.listDevices({ siteId: effectiveSiteId });
    console.log(`   ✓ Found ${devices.length} device(s)\n`);

    if (devices.length > 0) {
      console.log('   DEVICE INVENTORY:');
      console.log('   ' + '='.repeat(80));

      const routers = devices.filter(d => d.type === 'router' || d.type === 'Gateway');
      const switches = devices.filter(d => d.type === 'switch' || d.type === 'Switch');
      const aps = devices.filter(d => d.type === 'ap' || d.type === 'AccessPoint');
      const others = devices.filter(d =>
        d.type !== 'router' && d.type !== 'Gateway' &&
        d.type !== 'switch' && d.type !== 'Switch' &&
        d.type !== 'ap' && d.type !== 'AccessPoint'
      );

      if (routers.length > 0) {
        console.log(`\n   ROUTERS (${routers.length}):`);
        routers.forEach((device, index) => {
          const status = device.status === 1 ? '🟢 Online' : device.status === 0 ? '🔴 Offline' : '🟡 Unknown';
          console.log(`   ${index + 1}. ${device.name || 'Unnamed'}`);
          console.log(`      Model: ${device.model || 'N/A'}`);
          console.log(`      Status: ${status}`);
          console.log(`      IP: ${device.ip || 'N/A'}`);
          console.log(`      MAC: ${device.mac || 'N/A'}`);
          console.log(`      Device ID: ${device.id || 'N/A'}`);
          console.log(`      Firmware: ${device.firmwareVersion || 'N/A'}`);
          console.log('');
        });
      }

      if (switches.length > 0) {
        console.log(`\n   SWITCHES (${switches.length}):`);
        switches.forEach((device, index) => {
          const status = device.status === 1 ? '🟢 Online' : device.status === 0 ? '🔴 Offline' : '🟡 Unknown';
          console.log(`   ${index + 1}. ${device.name || 'Unnamed'}`);
          console.log(`      Model: ${device.model || 'N/A'}`);
          console.log(`      Status: ${status}`);
          console.log(`      IP: ${device.ip || 'N/A'}`);
          console.log(`      MAC: ${device.mac || 'N/A'}`);
          console.log(`      Device ID: ${device.id || 'N/A'}`);
          console.log(`      Firmware: ${device.firmwareVersion || 'N/A'}`);
          console.log('');
        });
      }

      if (aps.length > 0) {
        console.log(`\n   ACCESS POINTS (${aps.length}):`);
        aps.forEach((device, index) => {
          const status = device.status === 1 ? '🟢 Online' : device.status === 0 ? '🔴 Offline' : '🟡 Unknown';
          console.log(`   ${index + 1}. ${device.name || 'Unnamed'}`);
          console.log(`      Model: ${device.model || 'N/A'}`);
          console.log(`      Status: ${status}`);
          console.log(`      IP: ${device.ip || 'N/A'}`);
          console.log(`      MAC: ${device.mac || 'N/A'}`);
          console.log('');
        });
      }

      if (others.length > 0) {
        console.log(`\n   OTHER DEVICES (${others.length}):`);
        others.forEach((device, index) => {
          const status = device.status === 1 ? '🟢 Online' : device.status === 0 ? '🔴 Offline' : '🟡 Unknown';
          console.log(`   ${index + 1}. ${device.name || 'Unnamed'} (${device.type || 'Unknown'})`);
          console.log(`      Model: ${device.model || 'N/A'}`);
          console.log(`      Status: ${status}`);
          console.log(`      IP: ${device.ip || 'N/A'}`);
          console.log('');
        });
      }
    } else {
      console.log('   No devices found');
    }

    console.log('\n5. Listing VLANs...');
    const networksService = new NetworksService(client);
    const vlans = await networksService.listVLANs({ siteId: effectiveSiteId });
    console.log(`   ✓ Found ${vlans.length} VLAN(s)\n`);

    if (vlans.length > 0) {
      console.log('   VLAN CONFIGURATION:');
      console.log('   ' + '='.repeat(80));
      vlans.forEach((vlan, index) => {
        console.log(`\n   ${index + 1}. VLAN ${vlan.vlanId || 'N/A'}: ${vlan.name || 'Unnamed'}`);
        if (vlan.subnet) console.log(`      Subnet: ${vlan.subnet}`);
        if (vlan.gateway) console.log(`      Gateway: ${vlan.gateway}`);
        if (vlan.dhcpEnable !== undefined) {
          console.log(`      DHCP: ${vlan.dhcpEnable ? 'Enabled' : 'Disabled'}`);
          if (vlan.dhcpEnable && vlan.dhcpRangeStart && vlan.dhcpRangeEnd) {
            console.log(`      DHCP Range: ${vlan.dhcpRangeStart} - ${vlan.dhcpRangeEnd}`);
          }
        }
        if (vlan.dns1) console.log(`      DNS1: ${vlan.dns1}`);
        if (vlan.dns2) console.log(`      DNS2: ${vlan.dns2}`);
      });
    } else {
      console.log('   No VLANs configured');
    }

    console.log('\n' + '='.repeat(80));
    console.log('=== Query completed successfully! ===\n');

  } catch (error) {
    console.error('\n=== Query failed ===');
    console.error('Error:', error.message);
    if (error.stack) {
      console.error('\nStack trace:');
      console.error(error.stack);
    }
    process.exit(1);
  }
}

queryDevices();
501
scripts/review-all-storage.sh
Executable file
@@ -0,0 +1,501 @@
|
||||
#!/usr/bin/env bash
|
||||
# Comprehensive Proxmox Storage Review and Recommendations
|
||||
# Reviews all storage across all Proxmox nodes and provides recommendations
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
|
||||
REPORT_DIR="${PROJECT_ROOT}/reports/storage"
|
||||
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
|
||||
REPORT_FILE="${REPORT_DIR}/storage_review_${TIMESTAMP}.md"
|
||||
|
||||
# Colors
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
CYAN='\033[0;36m'
|
||||
NC='\033[0m'
|
||||
|
||||
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
|
||||
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
|
||||
log_warn() { echo -e "${YELLOW}[⚠]${NC} $1"; }
|
||||
log_error() { echo -e "${RED}[✗]${NC} $1"; }
|
||||
log_header() { echo -e "${CYAN}=== $1 ===${NC}"; }
|
||||
|
||||
# Create report directory
|
||||
mkdir -p "$REPORT_DIR"
|
||||
|
||||
# Proxmox nodes configuration
|
||||
declare -A NODES
|
||||
NODES[ml110]="192.168.11.10:L@kers2010"
|
||||
NODES[r630-01]="192.168.11.11:password"
|
||||
NODES[r630-02]="192.168.11.12:password"
|
||||
NODES[r630-03]="192.168.11.13:L@kers2010"
|
||||
NODES[r630-04]="192.168.11.14:L@kers2010"
|
||||
|
||||
# Storage data collection
|
||||
declare -A STORAGE_DATA
|
||||
|
||||
# SSH helper function
|
||||
ssh_node() {
|
||||
local hostname="$1"
|
||||
shift
|
||||
local ip="${NODES[$hostname]%%:*}"
|
||||
local password="${NODES[$hostname]#*:}"
|
||||
|
||||
if command -v sshpass >/dev/null 2>&1; then
|
||||
sshpass -p "$password" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$ip" "$@"
|
||||
else
|
||||
ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$ip" "$@"
|
||||
fi
|
||||
}
|
||||

# Check node connectivity
check_node() {
    local hostname="$1"
    local ip="${NODES[$hostname]%%:*}"

    if ping -c 1 -W 2 "$ip" >/dev/null 2>&1; then
        return 0
    else
        return 1
    fi
}

# Collect storage information from a node
collect_storage_info() {
    local hostname="$1"
    local ip="${NODES[$hostname]%%:*}"

    log_info "Collecting storage information from $hostname ($ip)..."

    if ! check_node "$hostname"; then
        log_warn "$hostname is not reachable"
        return 1
    fi

    # Collect storage status
    local storage_status=$(ssh_node "$hostname" 'pvesm status 2>/dev/null' || echo "")

    # Collect LVM information
    local vgs_info=$(ssh_node "$hostname" 'vgs --units g --noheadings -o vg_name,vg_size,vg_free 2>/dev/null' || echo "")
    local lvs_info=$(ssh_node "$hostname" 'lvs --units g --noheadings -o lv_name,vg_name,lv_size,data_percent,metadata_percent,pool_lv 2>/dev/null | grep -E "(thin|data)"' || echo "")

    # Collect disk information
    local disk_info=$(ssh_node "$hostname" 'lsblk -d -o NAME,SIZE,TYPE,MOUNTPOINT 2>/dev/null' || echo "")

    # Collect VM/container count
    local vm_count=$(ssh_node "$hostname" 'qm list 2>/dev/null | tail -n +2 | wc -l' || echo "0")
    local ct_count=$(ssh_node "$hostname" 'pct list 2>/dev/null | tail -n +2 | wc -l' || echo "0")

    # Collect system resources
    local mem_info=$(ssh_node "$hostname" 'free -h | grep Mem | awk "{print \$2,\$3,\$7}"' || echo "")
    local cpu_info=$(ssh_node "$hostname" 'nproc' || echo "0")

    # Store data
    STORAGE_DATA["${hostname}_storage"]="$storage_status"
    STORAGE_DATA["${hostname}_vgs"]="$vgs_info"
    STORAGE_DATA["${hostname}_lvs"]="$lvs_info"
    STORAGE_DATA["${hostname}_disks"]="$disk_info"
    STORAGE_DATA["${hostname}_vms"]="$vm_count"
    STORAGE_DATA["${hostname}_cts"]="$ct_count"
    STORAGE_DATA["${hostname}_mem"]="$mem_info"
    STORAGE_DATA["${hostname}_cpu"]="$cpu_info"

    log_success "Collected data from $hostname"
}

# Generate storage report
generate_report() {
    log_header "Generating Storage Review Report"

    cat > "$REPORT_FILE" <<EOF
# Proxmox Storage Comprehensive Review

**Date:** $(date)
**Report Generated:** $(date -u +"%Y-%m-%d %H:%M:%S UTC")
**Review Scope:** All Proxmox nodes and storage configurations

---

## Executive Summary

This report provides a comprehensive review of all storage configurations across all Proxmox nodes, including:
- Current storage status and usage
- Storage type analysis
- Performance recommendations
- Capacity planning
- Optimization suggestions

---

## Node Overview

EOF

    # Process each node
    for hostname in "${!NODES[@]}"; do
        local ip="${NODES[$hostname]%%:*}"

        cat >> "$REPORT_FILE" <<EOF

### $hostname ($ip)

**Status:** $(if check_node "$hostname"; then echo "✅ Reachable"; else echo "❌ Not Reachable"; fi)

**System Resources:**
- CPU Cores: ${STORAGE_DATA["${hostname}_cpu"]:-Unknown}
- Memory: ${STORAGE_DATA["${hostname}_mem"]:-Unknown}
- VMs: ${STORAGE_DATA["${hostname}_vms"]:-0}
- Containers: ${STORAGE_DATA["${hostname}_cts"]:-0}

**Storage Status:**
\`\`\`
${STORAGE_DATA["${hostname}_storage"]:-No storage data available}
\`\`\`

**Volume Groups:**
\`\`\`
${STORAGE_DATA["${hostname}_vgs"]:-No volume groups found}
\`\`\`

**Thin Pools:**
\`\`\`
${STORAGE_DATA["${hostname}_lvs"]:-No thin pools found}
\`\`\`

**Physical Disks:**
\`\`\`
${STORAGE_DATA["${hostname}_disks"]:-No disk information available}
\`\`\`

---

EOF
    done

    # Add recommendations section
    cat >> "$REPORT_FILE" <<EOF

## Storage Analysis and Recommendations

### 1. Storage Type Analysis

#### Local Storage (Directory-based)
- **Purpose:** ISO images, container templates, backups
- **Performance:** Good for read-heavy workloads
- **Recommendation:** Use for templates and ISOs, not for VM disks

#### LVM Thin Storage
- **Purpose:** VM/container disk images
- **Performance:** Excellent with thin provisioning
- **Benefits:** Space efficiency, snapshots, cloning
- **Recommendation:** ✅ **Preferred for VM/container disks**

#### ZFS Storage
- **Purpose:** High-performance VM storage
- **Performance:** Excellent with compression and deduplication
- **Benefits:** Data integrity, snapshots, clones
- **Recommendation:** Consider for high-performance workloads

### 2. Critical Issues and Fixes

EOF

    # Analyze each node and add recommendations
    for hostname in "${!NODES[@]}"; do
        local storage_status="${STORAGE_DATA["${hostname}_storage"]:-}"

        if [ -z "$storage_status" ]; then
            continue
        fi

        cat >> "$REPORT_FILE" <<EOF

#### $hostname Storage Issues

EOF

        # Check for disabled storage
        if echo "$storage_status" | grep -q "disabled\|inactive"; then
            cat >> "$REPORT_FILE" <<EOF
⚠️ **Issue:** Some storage pools are disabled or inactive

**Action Required:**
\`\`\`bash
ssh root@${NODES[$hostname]%%:*}
pvesm status
# Enable disabled storage:
pvesm set <storage-name> --disable 0
\`\`\`

EOF
        fi

        # Check for high usage (80-100%)
        if echo "$storage_status" | grep -qE "(8[0-9]|9[0-9]|100)(\.[0-9]+)?%"; then
            cat >> "$REPORT_FILE" <<EOF
⚠️ **Issue:** Storage usage is high (>80%)

**Recommendation:**
- Monitor storage usage closely
- Plan for expansion or cleanup
- Consider migrating VMs to other nodes

EOF
        fi

        # Check for missing LVM thin storage
        if ! echo "$storage_status" | grep -qE "lvmthin|thin"; then
            cat >> "$REPORT_FILE" <<EOF
⚠️ **Issue:** No LVM thin storage configured

**Recommendation:**
- Configure LVM thin storage for better performance
- Use thin provisioning for space efficiency
- Enable snapshots and cloning capabilities

EOF
        fi
    done

    # Add general recommendations
    cat >> "$REPORT_FILE" <<EOF

### 3. Performance Optimization Recommendations

#### Storage Performance Best Practices

1. **Use LVM Thin for VM Disks**
   - Better performance than directory storage
   - Thin provisioning saves space
   - Enables snapshots and cloning

2. **Monitor Thin Pool Metadata Usage**
   - Thin pools require metadata space
   - Monitor metadata_percent in lvs output
   - Expand metadata if >80% used
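
   As a minimal sketch (the pool name \`pve/data\` is illustrative — substitute your own VG/thin pool):
   \`\`\`bash
   lvs -o lv_name,data_percent,metadata_percent
   # Extend the pool's metadata LV by 1 GiB when it runs low:
   lvextend --poolmetadatasize +1G pve/data
   \`\`\`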

3. **Storage Distribution**
   - Distribute VMs across multiple nodes
   - Balance storage usage across nodes
   - Avoid overloading single node

4. **Backup Storage Strategy**
   - Use separate storage for backups
   - Consider NFS or Ceph for shared backups
   - Implement backup rotation policies

### 4. Capacity Planning

#### Current Storage Distribution

EOF

    # Calculate total storage
    local total_storage=0
    local total_used=0

    for hostname in "${!NODES[@]}"; do
        local storage_status="${STORAGE_DATA["${hostname}_storage"]:-}"
        if [ -n "$storage_status" ]; then
            # Extract storage sizes (simplified placeholder - would need proper
            # parsing; the pipeline runs in a subshell, so no totals are
            # accumulated here yet)
            echo "$storage_status" | while IFS= read -r line; do
                if [[ $line =~ ([0-9]+)T ]] || [[ $line =~ ([0-9]+)G ]]; then
                    # Storage found
                    :
                fi
            done
        fi
    done

    cat >> "$REPORT_FILE" <<EOF

**Recommendations:**
- Monitor storage growth trends
- Plan for 20-30% headroom
- Set alerts at 80% usage
- Consider storage expansion before reaching capacity

### 5. Storage Type Recommendations by Use Case

| Use Case | Recommended Storage Type | Reason |
|----------|--------------------------|--------|
| VM/Container Disks | LVM Thin (lvmthin) | Best performance, thin provisioning |
| ISO Images | Directory (dir) | Read-only, no performance impact |
| Container Templates | Directory (dir) | Templates are read-only |
| Backups | Directory or NFS | Separate from production storage |
| High-Performance VMs | ZFS or LVM Thin | Best I/O performance |
| Development/Test | LVM Thin | Space efficient with cloning |

### 6. Security Recommendations

1. **Storage Access Control**
   - Review storage.cfg node restrictions
   - Ensure proper node assignments
   - Verify storage permissions

2. **Backup Security**
   - Encrypt backups if containing sensitive data
   - Store backups off-site
   - Test backup restoration regularly

### 7. Monitoring Recommendations

1. **Set Up Storage Monitoring**
   - Monitor storage usage (>80% alert)
   - Monitor thin pool metadata usage
   - Track storage growth trends

2. **Performance Monitoring**
   - Monitor I/O latency
   - Track storage throughput
   - Identify bottlenecks

3. **Automated Alerts**
   - Storage usage >80%
   - Thin pool metadata >80%
   - Storage errors or failures
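
   A minimal check can be scripted from \`pvesm status\` output (the column index and 80% threshold are assumptions — verify against your pvesm version):
   \`\`\`bash
   # Warn on any storage pool above 80% usage
   pvesm status | awk 'NR>1 && \$7+0 > 80 {print \$1, "is at", \$7}'
   \`\`\`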

### 8. Migration Recommendations

#### Workload Distribution

**Current State:**
- ml110: Hosting all VMs (overloaded)
- r630-01/r630-02: Underutilized

**Recommended Distribution:**
- **ml110:** Keep management/lightweight VMs (10-15 VMs)
- **r630-01:** Migrate medium-workload VMs (10-15 VMs)
- **r630-02:** Migrate heavy-workload VMs (10-15 VMs)

**Benefits:**
- Better performance (ml110 CPU is slower)
- Better resource utilization
- Improved redundancy
- Better storage distribution
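
Migrations can be driven with \`qm migrate\` (the VMID and target node below are examples):
\`\`\`bash
# Live-migrate VM 7800 to r630-01
qm migrate 7800 r630-01 --online
\`\`\`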

### 9. Immediate Action Items

#### Critical (Do First)
1. ✅ Review storage status on all nodes
2. ⚠️ Enable disabled storage pools
3. ⚠️ Verify storage node restrictions in storage.cfg
4. ⚠️ Check for storage errors or warnings

#### High Priority
1. ⚠️ Configure LVM thin storage where missing
2. ⚠️ Set up storage monitoring and alerts
3. ⚠️ Plan VM migration for better distribution
4. ⚠️ Review and optimize storage.cfg

#### Recommended
1. ⚠️ Implement backup storage strategy
2. ⚠️ Consider shared storage (NFS/Ceph) for HA
3. ⚠️ Optimize storage performance settings
4. ⚠️ Document storage procedures

---

## Detailed Storage Commands Reference

### Check Storage Status
\`\`\`bash
# On any Proxmox node
pvesm status
pvesm list <storage-name>
\`\`\`

### Enable Disabled Storage
\`\`\`bash
pvesm set <storage-name> --disable 0
\`\`\`

### Check LVM Configuration
\`\`\`bash
vgs    # List volume groups
lvs    # List logical volumes
lvs -o +data_percent,metadata_percent    # Check thin pool usage
\`\`\`

### Check Disk Usage
\`\`\`bash
df -h    # Filesystem usage
lsblk    # Block devices
\`\`\`

### Storage Performance Testing
\`\`\`bash
# Test storage I/O
fio --name=test --ioengine=libaio --iodepth=16 --rw=randwrite --bs=4k --size=1G --runtime=60
\`\`\`

---

## Conclusion

This comprehensive storage review provides:
- ✅ Current storage status across all nodes
- ✅ Detailed analysis of storage configurations
- ✅ Performance optimization recommendations
- ✅ Capacity planning guidance
- ✅ Security and monitoring recommendations
- ✅ Migration and distribution strategies

**Next Steps:**
1. Review this report
2. Address critical issues first
3. Implement high-priority recommendations
4. Plan for long-term optimizations

---

**Report Generated:** $(date)
**Report File:** $REPORT_FILE

EOF

    log_success "Report generated: $REPORT_FILE"
}

# Main execution
main() {
    log_header "Proxmox Storage Comprehensive Review"
    echo ""

    # Collect data from all nodes
    for hostname in "${!NODES[@]}"; do
        collect_storage_info "$hostname" || log_warn "Failed to collect data from $hostname"
        echo ""
    done

    # Generate report
    generate_report

    # Display summary
    log_header "Review Summary"
    echo ""
    log_info "Report saved to: $REPORT_FILE"
    echo ""
    log_info "Quick Summary:"

    for hostname in "${!NODES[@]}"; do
        if check_node "$hostname"; then
            local vms="${STORAGE_DATA["${hostname}_vms"]:-0}"
            local cts="${STORAGE_DATA["${hostname}_cts"]:-0}"
            echo "  $hostname: $vms VMs, $cts Containers"
        else
            echo "  $hostname: Not reachable"
        fi
    done

    echo ""
    log_success "Storage review complete!"
    log_info "View full report: cat $REPORT_FILE"
}

# Run main function
main "$@"
63
scripts/run-migrations-r630-01.sh
Executable file
@@ -0,0 +1,63 @@
#!/usr/bin/env bash
# Run database migrations for Sankofa on r630-01

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SANKOFA_PROJECT="${SANKOFA_PROJECT:-/home/intlc/projects/Sankofa}"
source "$SCRIPT_DIR/env.r630-01.example" 2>/dev/null || true

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }

# Configuration
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.11}"
VMID_API="${VMID_SANKOFA_API:-7800}"
DB_HOST="${DB_HOST:-10.160.0.13}"
DB_PORT="${DB_PORT:-5432}"
DB_NAME="${DB_NAME:-sankofa}"
DB_USER="${DB_USER:-sankofa}"
DB_PASSWORD="${DB_PASSWORD:-}"

# SSH function
ssh_r630_01() {
    ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$PROXMOX_HOST" "$@"
}

main() {
    echo ""
    log_info "========================================="
    log_info "Running Database Migrations"
    log_info "========================================="
    echo ""

    if [[ -z "$DB_PASSWORD" ]]; then
        log_error "DB_PASSWORD not set. Please update env.r630-01.example or set the DB_PASSWORD environment variable."
        exit 1
    fi

    log_info "Connecting to API container (VMID: $VMID_API)..."

    # Run migrations (the DB_* values are expanded locally before being sent over SSH)
    ssh_r630_01 "pct exec $VMID_API -- bash -c 'cd /opt/sankofa-api && \
        DB_HOST=$DB_HOST \
        DB_PORT=$DB_PORT \
        DB_NAME=$DB_NAME \
        DB_USER=$DB_USER \
        DB_PASSWORD=\"$DB_PASSWORD\" \
        pnpm db:migrate'"

    log_success "Migrations completed"
    echo ""
}

main "$@"
266
scripts/setup-keycloak-r630-01.sh
Executable file
@@ -0,0 +1,266 @@
#!/usr/bin/env bash
# Setup Keycloak for Sankofa on r630-01
# VMID: 7802, IP: 10.160.0.12

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/.env.r630-01" 2>/dev/null || true

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }

# Configuration (secrets are generated if not provided via .env.r630-01)
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.11}"
VMID="${VMID_SANKOFA_KEYCLOAK:-7802}"
CONTAINER_IP="${SANKOFA_KEYCLOAK_IP:-10.160.0.12}"
KEYCLOAK_ADMIN_USERNAME="${KEYCLOAK_ADMIN_USERNAME:-admin}"
KEYCLOAK_ADMIN_PASSWORD="${KEYCLOAK_ADMIN_PASSWORD:-$(openssl rand -base64 32 | tr -dc 'a-zA-Z0-9' | cut -c1-24)}"
KEYCLOAK_REALM="${KEYCLOAK_REALM:-master}"
KEYCLOAK_CLIENT_ID_API="${KEYCLOAK_CLIENT_ID_API:-sankofa-api}"
KEYCLOAK_CLIENT_ID_PORTAL="${KEYCLOAK_CLIENT_ID_PORTAL:-portal-client}"
KEYCLOAK_CLIENT_SECRET_API="${KEYCLOAK_CLIENT_SECRET_API:-$(openssl rand -base64 32 | tr -dc 'a-zA-Z0-9' | cut -c1-32)}"
KEYCLOAK_CLIENT_SECRET_PORTAL="${KEYCLOAK_CLIENT_SECRET_PORTAL:-$(openssl rand -base64 32 | tr -dc 'a-zA-Z0-9' | cut -c1-32)}"
DB_HOST="${SANKOFA_POSTGRES_IP:-10.160.0.13}"
DB_NAME="${DB_NAME:-keycloak}"
DB_USER="${DB_USER:-keycloak}"
DB_PASSWORD="${DB_PASSWORD:-$(openssl rand -base64 32 | tr -dc 'a-zA-Z0-9' | cut -c1-24)}"

# SSH function
ssh_r630_01() {
    ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$PROXMOX_HOST" "$@"
}

# Execute command in container
exec_container() {
    ssh_r630_01 "pct exec $VMID -- $*"
}

main() {
    echo ""
    log_info "========================================="
    log_info "Keycloak Setup for Sankofa"
    log_info "========================================="
    echo ""
    log_info "Container VMID: $VMID"
    log_info "Container IP: $CONTAINER_IP"
    log_info "Realm: $KEYCLOAK_REALM"
    echo ""

    # Check if container exists
    log_info "Checking container status..."
    if ! ssh_r630_01 "pct status $VMID >/dev/null 2>&1"; then
        log_error "Container $VMID does not exist"
        exit 1
    fi

    local status=$(ssh_r630_01 "pct status $VMID" 2>/dev/null | awk '{print $2}' || echo "stopped")
    if [[ "$status" != "running" ]]; then
        log_info "Starting container $VMID..."
        ssh_r630_01 "pct start $VMID"
        sleep 5
    fi
    log_success "Container is running"
    echo ""

    # Install Java, jq (required by the client-creation script below), and helpers
    log_info "Installing Java and dependencies..."
    exec_container bash -c "export DEBIAN_FRONTEND=noninteractive && \
        apt-get update -qq && \
        apt-get install -y -qq openjdk-21-jdk jq wget curl unzip"

    # Set JAVA_HOME
    exec_container bash -c "echo 'export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64' >> /etc/profile"

    log_success "Java installed"
    echo ""

    # Create Keycloak database on PostgreSQL
    log_info "Creating Keycloak database on PostgreSQL..."
    ssh_r630_01 "pct exec ${VMID_SANKOFA_POSTGRES:-7803} -- bash -c \"sudo -u postgres psql << 'EOF'
CREATE USER $DB_USER WITH PASSWORD '$DB_PASSWORD';
CREATE DATABASE $DB_NAME OWNER $DB_USER ENCODING 'UTF8';
GRANT ALL PRIVILEGES ON DATABASE $DB_NAME TO $DB_USER;
EOF\""

    log_success "Keycloak database created"
    echo ""

    # Download and install Keycloak
    log_info "Downloading Keycloak..."
    exec_container bash -c "cd /opt && \
        wget -q https://github.com/keycloak/keycloak/releases/download/24.0.0/keycloak-24.0.0.tar.gz && \
        tar -xzf keycloak-24.0.0.tar.gz && \
        mv keycloak-24.0.0 keycloak && \
        rm keycloak-24.0.0.tar.gz && \
        chmod +x keycloak/bin/kc.sh"

    log_success "Keycloak downloaded"
    echo ""

    # Build Keycloak
    log_info "Building Keycloak (this may take a few minutes)..."
    exec_container bash -c "cd /opt/keycloak && \
        export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 && \
        ./bin/kc.sh build --db postgres"

    log_success "Keycloak built"
    echo ""

    # Create systemd service (bootstrap admin credentials are passed via environment)
    log_info "Creating Keycloak systemd service..."
    exec_container bash -c "cat > /etc/systemd/system/keycloak.service << 'EOF'
[Unit]
Description=Keycloak Authorization Server
After=network.target

[Service]
Type=idle
User=root
WorkingDirectory=/opt/keycloak
Environment=\"JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64\"
Environment=\"KEYCLOAK_ADMIN=$KEYCLOAK_ADMIN_USERNAME\"
Environment=\"KEYCLOAK_ADMIN_PASSWORD=$KEYCLOAK_ADMIN_PASSWORD\"
Environment=\"KC_DB=postgres\"
Environment=\"KC_DB_URL_HOST=$DB_HOST\"
Environment=\"KC_DB_URL_DATABASE=$DB_NAME\"
Environment=\"KC_DB_USERNAME=$DB_USER\"
Environment=\"KC_DB_PASSWORD=$DB_PASSWORD\"
Environment=\"KC_HTTP_ENABLED=true\"
Environment=\"KC_HOSTNAME_STRICT=false\"
Environment=\"KC_HOSTNAME_PORT=8080\"
ExecStart=/opt/keycloak/bin/kc.sh start --optimized
ExecStop=/bin/kill -TERM \$MAINPID
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF"

    # Start Keycloak
    log_info "Starting Keycloak service..."
    exec_container bash -c "systemctl daemon-reload && \
        systemctl enable keycloak && \
        systemctl start keycloak"

    log_info "Waiting for Keycloak to start (this may take 1-2 minutes)..."
    sleep 30

    # Wait for Keycloak to be ready
    local max_attempts=30
    local attempt=0
    while [ $attempt -lt $max_attempts ]; do
        if exec_container bash -c "curl -s -f http://localhost:8080/health/ready >/dev/null 2>&1"; then
            log_success "Keycloak is ready"
            break
        fi
        attempt=$((attempt + 1))
        log_info "Waiting for Keycloak... ($attempt/$max_attempts)"
        sleep 5
    done

    if [ $attempt -eq $max_attempts ]; then
        log_error "Keycloak failed to start"
        exit 1
    fi
    echo ""

    # Create admin user
    log_info "Creating admin user..."
    exec_container bash -c "cd /opt/keycloak && \
        export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 && \
        ./bin/kc.sh config credentials --server http://localhost:8080 --realm master --user admin --password admin 2>/dev/null || \
        ./bin/kc.sh config credentials --server http://localhost:8080 --realm master --user $KEYCLOAK_ADMIN_USERNAME --password $KEYCLOAK_ADMIN_PASSWORD"

    # Get admin token and create clients
    log_info "Creating API and Portal clients..."

    # Note: This requires the Keycloak Admin REST API.
    # We write a helper script that can be re-run after Keycloak is fully started.
    exec_container bash -c "cat > /root/create-clients.sh << 'SCRIPT'
#!/bin/bash
export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
cd /opt/keycloak

# Get admin token
TOKEN=\$(curl -s -X POST \"http://localhost:8080/realms/master/protocol/openid-connect/token\" \\
  -H \"Content-Type: application/x-www-form-urlencoded\" \\
  -d \"username=$KEYCLOAK_ADMIN_USERNAME\" \\
  -d \"password=$KEYCLOAK_ADMIN_PASSWORD\" \\
  -d \"grant_type=password\" \\
  -d \"client_id=admin-cli\" | jq -r '.access_token')

# Create sankofa-api client
curl -s -X POST \"http://localhost:8080/admin/realms/master/clients\" \\
  -H \"Authorization: Bearer \$TOKEN\" \\
  -H \"Content-Type: application/json\" \\
  -d '{
    \"clientId\": \"$KEYCLOAK_CLIENT_ID_API\",
    \"enabled\": true,
    \"clientAuthenticatorType\": \"client-secret\",
    \"secret\": \"$KEYCLOAK_CLIENT_SECRET_API\",
    \"protocol\": \"openid-connect\",
    \"publicClient\": false,
    \"standardFlowEnabled\": true,
    \"directAccessGrantsEnabled\": true,
    \"serviceAccountsEnabled\": true
  }' > /dev/null

# Create portal-client
curl -s -X POST \"http://localhost:8080/admin/realms/master/clients\" \\
  -H \"Authorization: Bearer \$TOKEN\" \\
  -H \"Content-Type: application/json\" \\
  -d '{
    \"clientId\": \"$KEYCLOAK_CLIENT_ID_PORTAL\",
    \"enabled\": true,
    \"clientAuthenticatorType\": \"client-secret\",
    \"secret\": \"$KEYCLOAK_CLIENT_SECRET_PORTAL\",
    \"protocol\": \"openid-connect\",
    \"publicClient\": false,
    \"standardFlowEnabled\": true,
    \"directAccessGrantsEnabled\": true
  }' > /dev/null

echo \"Clients created successfully\"
SCRIPT
chmod +x /root/create-clients.sh"

    # Wait a bit more and create clients
    sleep 10
    exec_container bash -c "/root/create-clients.sh" || log_warn "Client creation may need to be done manually via web UI"

    log_success "Keycloak setup complete"
    echo ""

    # Summary
    log_success "========================================="
    log_success "Keycloak Setup Complete"
    log_success "========================================="
    echo ""
    log_info "Keycloak Configuration:"
    echo "  URL: http://$CONTAINER_IP:8080"
    echo "  Admin URL: http://$CONTAINER_IP:8080/admin"
    echo "  Admin Username: $KEYCLOAK_ADMIN_USERNAME"
    echo "  Admin Password: $KEYCLOAK_ADMIN_PASSWORD"
    echo "  Realm: $KEYCLOAK_REALM"
    echo ""
    log_info "Client Secrets:"
    echo "  API Client Secret: $KEYCLOAK_CLIENT_SECRET_API"
    echo "  Portal Client Secret: $KEYCLOAK_CLIENT_SECRET_PORTAL"
    echo ""
    log_info "Next steps:"
    echo "  1. Update .env.r630-01 with Keycloak credentials"
    echo "  2. Verify clients in Keycloak admin console"
    echo "  3. Run: ./scripts/deploy-api-r630-01.sh"
    echo ""
}

main "$@"
160
scripts/setup-postgresql-r630-01.sh
Executable file
@@ -0,0 +1,160 @@
#!/usr/bin/env bash
# Setup PostgreSQL for Sankofa on r630-01
# VMID: 7803, IP: 10.160.0.13

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/.env.r630-01" 2>/dev/null || true

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }

# Configuration
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.11}"
VMID="${VMID_SANKOFA_POSTGRES:-7803}"
CONTAINER_IP="${SANKOFA_POSTGRES_IP:-10.160.0.13}"
DB_NAME="${DB_NAME:-sankofa}"
DB_USER="${DB_USER:-sankofa}"
DB_PASSWORD="${DB_PASSWORD:-$(openssl rand -base64 32 | tr -dc 'a-zA-Z0-9' | cut -c1-24)}"
POSTGRES_VERSION="${POSTGRES_VERSION:-16}"

# SSH function
ssh_r630_01() {
    ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$PROXMOX_HOST" "$@"
}

# Execute command in container
exec_container() {
    ssh_r630_01 "pct exec $VMID -- $*"
}

main() {
    echo ""
    log_info "========================================="
    log_info "PostgreSQL Setup for Sankofa"
    log_info "========================================="
    echo ""
    log_info "Container VMID: $VMID"
    log_info "Container IP: $CONTAINER_IP"
    log_info "Database: $DB_NAME"
    log_info "User: $DB_USER"
    echo ""

    # Check if container exists
    log_info "Checking container status..."
    if ! ssh_r630_01 "pct status $VMID >/dev/null 2>&1"; then
        log_error "Container $VMID does not exist"
        exit 1
    fi

    # Check if container is running
    local status=$(ssh_r630_01 "pct status $VMID" 2>/dev/null | awk '{print $2}' || echo "stopped")
    if [[ "$status" != "running" ]]; then
        log_info "Starting container $VMID..."
        ssh_r630_01 "pct start $VMID"
        sleep 5
    fi
    log_success "Container is running"
    echo ""

    # Install PostgreSQL
    log_info "Installing PostgreSQL $POSTGRES_VERSION..."
    exec_container bash -c "export DEBIAN_FRONTEND=noninteractive && \
        apt-get update -qq && \
        apt-get install -y -qq wget ca-certificates gnupg lsb-release"

    # Add the PGDG repository (double quotes so the codename expands inside the container)
    exec_container bash -c "wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - && \
        echo \"deb http://apt.postgresql.org/pub/repos/apt \$(lsb_release -cs)-pgdg main\" > /etc/apt/sources.list.d/pgdg.list && \
        apt-get update -qq && \
        apt-get install -y -qq postgresql-$POSTGRES_VERSION postgresql-contrib-$POSTGRES_VERSION"

    log_success "PostgreSQL installed"
    echo ""

# Configure PostgreSQL
|
||||
log_info "Configuring PostgreSQL..."
|
||||
exec_container bash -c "systemctl enable postgresql"
|
||||
exec_container bash -c "systemctl start postgresql"
|
||||
|
||||
# Wait for PostgreSQL to be ready
|
||||
log_info "Waiting for PostgreSQL to start..."
|
||||
sleep 5
|
||||
|
||||
# Create database and user
|
||||
log_info "Creating database and user..."
|
||||
exec_container bash -c "sudo -u postgres psql << 'EOF'
|
||||
-- Create user
|
||||
CREATE USER $DB_USER WITH PASSWORD '$DB_PASSWORD';

-- Create database
CREATE DATABASE $DB_NAME OWNER $DB_USER ENCODING 'UTF8';

-- Grant privileges
GRANT ALL PRIVILEGES ON DATABASE $DB_NAME TO $DB_USER;

-- Connect to database and grant schema privileges
\c $DB_NAME
GRANT ALL ON SCHEMA public TO $DB_USER;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO $DB_USER;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO $DB_USER;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON FUNCTIONS TO $DB_USER;

-- Enable extensions
CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\";
CREATE EXTENSION IF NOT EXISTS \"pg_stat_statements\";
EOF"

log_success "Database and user created"
echo ""

# Configure PostgreSQL for remote access (if needed)
log_info "Configuring PostgreSQL for network access..."
exec_container bash -c "echo \"host all all 10.160.0.0/22 md5\" >> /etc/postgresql/$POSTGRES_VERSION/main/pg_hba.conf"
exec_container bash -c "sed -i \"s/#listen_addresses = 'localhost'/listen_addresses = '*'/\" /etc/postgresql/$POSTGRES_VERSION/main/postgresql.conf"

# Restart PostgreSQL
exec_container bash -c "systemctl restart postgresql"
sleep 3

log_success "PostgreSQL configured for network access"
echo ""

# Test connection
log_info "Testing database connection..."
if exec_container bash -c "PGPASSWORD='$DB_PASSWORD' psql -h localhost -U $DB_USER -d $DB_NAME -c 'SELECT version();' >/dev/null 2>&1"; then
    log_success "Database connection successful"
else
    log_error "Database connection failed"
    exit 1
fi
echo ""

# Summary
log_success "========================================="
log_success "PostgreSQL Setup Complete"
log_success "========================================="
echo ""
log_info "Database Configuration:"
echo "  Host: $CONTAINER_IP"
echo "  Port: 5432"
echo "  Database: $DB_NAME"
echo "  User: $DB_USER"
echo "  Password: $DB_PASSWORD"
echo ""
log_info "Next steps:"
echo "  1. Update .env.r630-01 with the database password"
echo "  2. Run database migrations: ./scripts/run-migrations-r630-01.sh"
echo ""
}

main "$@"
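Once the container accepts connections from VLAN 160, the API service connects with an ordinary PostgreSQL DSN. A minimal sketch of what that DSN looks like for this deployment; the database name and user `sankofa` are assumptions (use whatever `$DB_NAME`/`$DB_USER` were set to), while the host IP matches the architecture table above:

```shell
# Build the DSN the Sankofa API (10.160.0.10) would use to reach the
# PostgreSQL container (10.160.0.13). DB_NAME/DB_USER are placeholders.
DB_HOST=10.160.0.13
DB_PORT=5432
DB_NAME=sankofa
DB_USER=sankofa
echo "postgresql://${DB_USER}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
```

Append the password (or better, supply it via `PGPASSWORD` or `.env.r630-01`) rather than embedding it in the URL where it can leak into logs.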
68
scripts/setup-storage-monitoring-cron.sh
Executable file
@@ -0,0 +1,68 @@
#!/usr/bin/env bash
# Setup cron job for automated storage monitoring

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
MONITOR_SCRIPT="${PROJECT_ROOT}/scripts/storage-monitor.sh"

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[⚠]${NC} $1"; }

# Check if script exists
if [ ! -f "$MONITOR_SCRIPT" ]; then
    log_warn "Monitoring script not found: $MONITOR_SCRIPT"
    exit 1
fi

# Make sure script is executable
chmod +x "$MONITOR_SCRIPT"

# Ensure the log directory exists before cron tries to append to it
mkdir -p "${PROJECT_ROOT}/logs/storage-monitoring"

# Cron schedule (every hour)
CRON_SCHEDULE="0 * * * *"
CRON_COMMAND="$MONITOR_SCRIPT check >> ${PROJECT_ROOT}/logs/storage-monitoring/cron.log 2>&1"

# Check if cron job already exists
if crontab -l 2>/dev/null | grep -q "$MONITOR_SCRIPT"; then
    log_warn "Cron job already exists for storage monitoring"
    echo ""
    echo "Current cron jobs:"
    crontab -l | grep "$MONITOR_SCRIPT"
    echo ""

    # Check for --yes flag for non-interactive mode
    # ('local' is only valid inside a function, so use a plain variable here)
    auto_replace=false
    if [[ "${1:-}" == "--yes" ]] || [[ "${1:-}" == "-y" ]]; then
        auto_replace=true
        log_info "Auto-replace mode enabled"
    fi

    if [ "$auto_replace" = false ]; then
        read -r -p "Replace existing cron job? (yes/no): " replace
        if [ "$replace" != "yes" ]; then
            log_info "Keeping existing cron job"
            exit 0
        fi
    fi
    # Remove existing cron job
    crontab -l 2>/dev/null | grep -v "$MONITOR_SCRIPT" | crontab -
fi

# Add cron job
(crontab -l 2>/dev/null; echo "$CRON_SCHEDULE $CRON_COMMAND") | crontab -

log_success "Storage monitoring cron job added"
log_info "Schedule: Every hour"
log_info "Command: $MONITOR_SCRIPT check"
log_info "Logs: ${PROJECT_ROOT}/logs/storage-monitoring/"
echo ""
echo "To view cron jobs:  crontab -l"
echo "To remove cron job: crontab -e (then delete the line)"
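The crontab entry the script installs is just the schedule plus the monitor command with its log redirect. A sketch of the exact line, assuming the project is checked out at `/opt/sankofa` (the path is a placeholder; the script derives it from its own location):

```shell
# Reconstruct the crontab line set up by setup-storage-monitoring-cron.sh.
# PROJECT_ROOT here is an assumed checkout location.
PROJECT_ROOT=/opt/sankofa
MONITOR_SCRIPT="$PROJECT_ROOT/scripts/storage-monitor.sh"
echo "0 * * * * $MONITOR_SCRIPT check >> $PROJECT_ROOT/logs/storage-monitoring/cron.log 2>&1"
```

After running the setup script, `crontab -l | grep storage-monitor.sh` should show exactly one such line.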
117
scripts/setup_ssh_tunnel.sh
Executable file
@@ -0,0 +1,117 @@
#!/bin/bash
# Setup SSH tunnel for Proxmox API access
# This allows list_vms.py to work from different network segments

PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
PROXMOX_PORT="${PROXMOX_PORT:-8006}"
SSH_USER="${SSH_USER:-root}"
LOCAL_PORT="${LOCAL_PORT:-8006}"

# Load from .env if available
if [ -f ~/.env ]; then
    export $(grep -E "^PROXMOX_" ~/.env | grep -v "^#" | xargs)
    PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
    PROXMOX_PORT="${PROXMOX_PORT:-8006}"
fi

# Compute the PID file after .env overrides so stop_ssh_tunnel.sh derives
# the same path
TUNNEL_PID_FILE="/tmp/proxmox-tunnel-${PROXMOX_HOST}-${PROXMOX_PORT}.pid"

echo "═══════════════════════════════════════════════════════════"
echo "  Proxmox SSH Tunnel Setup"
echo "═══════════════════════════════════════════════════════════"
echo ""
echo "Configuration:"
echo "  Proxmox Host: $PROXMOX_HOST"
echo "  Proxmox Port: $PROXMOX_PORT"
echo "  SSH User:     $SSH_USER"
echo "  Local Port:   $LOCAL_PORT"
echo ""

# Check if tunnel already exists
if [ -f "$TUNNEL_PID_FILE" ]; then
    OLD_PID=$(cat "$TUNNEL_PID_FILE")
    if ps -p "$OLD_PID" > /dev/null 2>&1; then
        echo "⚠️  Tunnel already running (PID: $OLD_PID)"
        echo "   Use: ./stop_ssh_tunnel.sh to stop it"
        exit 1
    else
        rm -f "$TUNNEL_PID_FILE"
    fi
fi

# Test SSH connection
echo "Testing SSH connection to $SSH_USER@$PROXMOX_HOST..."
if ! ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no "$SSH_USER@$PROXMOX_HOST" "echo 'SSH OK'" 2>/dev/null; then
    echo "❌ SSH connection failed"
    echo ""
    echo "Troubleshooting:"
    echo "  1. Check if host is reachable: ping $PROXMOX_HOST"
    echo "  2. Verify SSH access is configured"
    echo "  3. Check if you're on the correct network/VPN"
    echo ""
    echo "Alternative: Use Cloudflare tunnel for web access:"
    case "$PROXMOX_HOST" in
        192.168.11.10)
            echo "  https://ml110-01.d-bis.org"
            ;;
        192.168.11.11)
            echo "  https://r630-01.d-bis.org"
            ;;
        192.168.11.12)
            echo "  https://r630-02.d-bis.org"
            ;;
    esac
    exit 1
fi

echo "✅ SSH connection successful"
echo ""

# Create tunnel
echo "Creating SSH tunnel..."
echo "  Local:  localhost:$LOCAL_PORT"
echo "  Remote: $PROXMOX_HOST:$PROXMOX_PORT"
echo ""

ssh -N -L ${LOCAL_PORT}:${PROXMOX_HOST}:${PROXMOX_PORT} \
    -o StrictHostKeyChecking=no \
    -o ServerAliveInterval=60 \
    -o ServerAliveCountMax=3 \
    "$SSH_USER@$PROXMOX_HOST" &

TUNNEL_PID=$!
echo $TUNNEL_PID > "$TUNNEL_PID_FILE"

# Wait a moment for tunnel to establish
sleep 2

# Verify tunnel is running
if ps -p "$TUNNEL_PID" > /dev/null 2>&1; then
    echo "✅ Tunnel established (PID: $TUNNEL_PID)"
    echo ""
    echo "═══════════════════════════════════════════════════════════"
    echo "  Tunnel Active"
    echo "═══════════════════════════════════════════════════════════"
    echo ""
    echo "You can now use:"
    echo "  PROXMOX_HOST=localhost python3 list_vms.py"
    echo ""
    echo "Or set in environment:"
    echo "  export PROXMOX_HOST=localhost"
    echo "  python3 list_vms.py"
    echo ""
    echo "To stop the tunnel:"
    echo "  ./stop_ssh_tunnel.sh"
    echo "  # or"
    echo "  kill $TUNNEL_PID"
    echo ""
    echo "Tunnel will run in background. Press Ctrl+C to stop monitoring."
    echo ""

    # Keep script running to maintain tunnel
    trap "kill $TUNNEL_PID 2>/dev/null; rm -f $TUNNEL_PID_FILE; exit" INT TERM
    wait $TUNNEL_PID
else
    echo "❌ Failed to establish tunnel"
    rm -f "$TUNNEL_PID_FILE"
    exit 1
fi
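The heart of the script is the `-L` forwarding spec: local port on the left, the destination host and port (as seen from the SSH server) on the right. A sketch of the spec for r630-01, using the defaults above:

```shell
# The ssh -L forwarding spec built by setup_ssh_tunnel.sh for r630-01:
# local 8006 forwards to 192.168.11.11:8006 (Proxmox web/API port).
LOCAL_PORT=8006
PROXMOX_HOST=192.168.11.11
PROXMOX_PORT=8006
printf '%s\n' "-L ${LOCAL_PORT}:${PROXMOX_HOST}:${PROXMOX_PORT}"
```

With the tunnel up, the Proxmox API answers on `https://localhost:8006` (self-signed certificate, so clients need `-k`/verify-off).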
49
scripts/stop_ssh_tunnel.sh
Executable file
@@ -0,0 +1,49 @@
#!/bin/bash
# Stop SSH tunnel for Proxmox API access

PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
PROXMOX_PORT="${PROXMOX_PORT:-8006}"

# Load from .env if available
if [ -f ~/.env ]; then
    export $(grep -E "^PROXMOX_" ~/.env | grep -v "^#" | xargs)
    PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
    PROXMOX_PORT="${PROXMOX_PORT:-8006}"
fi

# Compute the PID file after .env overrides so it matches setup_ssh_tunnel.sh
TUNNEL_PID_FILE="/tmp/proxmox-tunnel-${PROXMOX_HOST}-${PROXMOX_PORT}.pid"

if [ -f "$TUNNEL_PID_FILE" ]; then
    TUNNEL_PID=$(cat "$TUNNEL_PID_FILE")
    if ps -p "$TUNNEL_PID" > /dev/null 2>&1; then
        echo "Stopping tunnel (PID: $TUNNEL_PID)..."
        kill "$TUNNEL_PID" 2>/dev/null
        sleep 1
        if ps -p "$TUNNEL_PID" > /dev/null 2>&1; then
            echo "Force killing..."
            kill -9 "$TUNNEL_PID" 2>/dev/null
        fi
        rm -f "$TUNNEL_PID_FILE"
        echo "✅ Tunnel stopped"
    else
        echo "Tunnel not running (stale PID file)"
        rm -f "$TUNNEL_PID_FILE"
    fi
else
    echo "No tunnel PID file found"
    echo "Checking for running tunnels..."

    # Try to find tunnel by local port (same default as setup_ssh_tunnel.sh)
    LOCAL_PORT="${LOCAL_PORT:-8006}"
    PIDS=$(lsof -ti:${LOCAL_PORT} 2>/dev/null)
    if [ -n "$PIDS" ]; then
        echo "Found processes on port $LOCAL_PORT: $PIDS"
        read -p "Kill these processes? (y/N) " -n 1 -r
        echo
        if [[ $REPLY =~ ^[Yy]$ ]]; then
            echo "$PIDS" | xargs kill 2>/dev/null
            echo "✅ Processes killed"
        fi
    else
        echo "No tunnel processes found"
    fi
fi
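Both tunnel scripts find each other through a shared PID file whose name is derived from the host and port, so they must agree on those values (hence loading `.env` before computing the path). A sketch of the path for r630-01:

```shell
# PID-file path shared by setup_ssh_tunnel.sh and stop_ssh_tunnel.sh.
# Host/port values are examples for r630-01.
PROXMOX_HOST=192.168.11.11
PROXMOX_PORT=8006
printf '%s\n' "/tmp/proxmox-tunnel-${PROXMOX_HOST}-${PROXMOX_PORT}.pid"
```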
309
scripts/storage-monitor.sh
Executable file
@@ -0,0 +1,309 @@
#!/usr/bin/env bash
# Proxmox Storage Monitoring Script with Alerts
# Monitors storage usage across all Proxmox nodes and sends alerts

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
LOG_DIR="${PROJECT_ROOT}/logs/storage-monitoring"
ALERT_LOG="${LOG_DIR}/storage_alerts_$(date +%Y%m%d).log"
STATUS_LOG="${LOG_DIR}/storage_status_$(date +%Y%m%d).log"

# Alert thresholds
WARNING_THRESHOLD=80
CRITICAL_THRESHOLD=90
VG_FREE_WARNING=10   # GB
VG_FREE_CRITICAL=5   # GB

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[⚠]${NC} $1"; }
log_error() { echo -e "${RED}[✗]${NC} $1"; }
log_alert() { echo -e "${RED}[ALERT]${NC} $1"; }

# Create log directory
mkdir -p "$LOG_DIR"

# Proxmox nodes configuration
declare -A NODES
NODES[ml110]="192.168.11.10:L@kers2010"
NODES[r630-01]="192.168.11.11:password"
NODES[r630-02]="192.168.11.12:password"
NODES[r630-03]="192.168.11.13:L@kers2010"
NODES[r630-04]="192.168.11.14:L@kers2010"

# Alert tracking (initialize so ${#ALERTS[@]} is safe under set -u)
declare -a ALERTS=()

# SSH helper function
ssh_node() {
    local hostname="$1"
    shift
    local ip="${NODES[$hostname]%%:*}"
    local password="${NODES[$hostname]#*:}"

    if command -v sshpass >/dev/null 2>&1; then
        sshpass -p "$password" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$ip" "$@" 2>/dev/null || echo ""
    else
        ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$ip" "$@" 2>/dev/null || echo ""
    fi
}

# Check node connectivity
check_node() {
    local hostname="$1"
    local ip="${NODES[$hostname]%%:*}"

    ping -c 1 -W 2 "$ip" >/dev/null 2>&1
}

# Parse storage usage percentage
parse_storage_percent() {
    local percent_str="$1"
    # Remove % sign and convert to integer
    echo "$percent_str" | sed 's/%//' | awk '{print int($1)}'
}

# Check storage usage
check_storage_usage() {
    local hostname="$1"
    local storage_line="$2"

    local storage_name=$(echo "$storage_line" | awk '{print $1}')
    local storage_type=$(echo "$storage_line" | awk '{print $2}')
    local status=$(echo "$storage_line" | awk '{print $3}')
    local total=$(echo "$storage_line" | awk '{print $4}')
    local used=$(echo "$storage_line" | awk '{print $5}')
    local available=$(echo "$storage_line" | awk '{print $6}')
    local percent_str=$(echo "$storage_line" | awk '{print $7}')

    # Skip if disabled or inactive
    if [ "$status" = "disabled" ] || [ "$status" = "inactive" ] || [ "$percent_str" = "N/A" ]; then
        return 0
    fi

    local percent=$(parse_storage_percent "$percent_str")

    if [ -z "$percent" ] || [ "$percent" -eq 0 ]; then
        return 0
    fi

    # Check thresholds
    if [ "$percent" -ge "$CRITICAL_THRESHOLD" ]; then
        ALERTS+=("CRITICAL: $hostname:$storage_name is at ${percent}% capacity (${available} available)")
        log_alert "CRITICAL: $hostname:$storage_name is at ${percent}% capacity"
        return 2
    elif [ "$percent" -ge "$WARNING_THRESHOLD" ]; then
        ALERTS+=("WARNING: $hostname:$storage_name is at ${percent}% capacity (${available} available)")
        log_warn "WARNING: $hostname:$storage_name is at ${percent}% capacity"
        return 1
    fi

    return 0
}

# Check volume group free space
check_vg_free_space() {
    local hostname="$1"
    local vg_line="$2"

    local vg_name=$(echo "$vg_line" | awk '{print $1}')
    local vg_size=$(echo "$vg_line" | awk '{print $2}')
    local vg_free=$(echo "$vg_line" | awk '{print $3}')

    # Extract numeric value (remove 'g' suffix)
    local free_gb=$(echo "$vg_free" | sed 's/g//' | awk '{print int($1)}')

    if [ -z "$free_gb" ] || [ "$free_gb" -eq 0 ]; then
        return 0
    fi

    if [ "$free_gb" -le "$VG_FREE_CRITICAL" ]; then
        ALERTS+=("CRITICAL: $hostname:$vg_name volume group has only ${free_gb}GB free space")
        log_alert "CRITICAL: $hostname:$vg_name VG has only ${free_gb}GB free"
        return 2
    elif [ "$free_gb" -le "$VG_FREE_WARNING" ]; then
        ALERTS+=("WARNING: $hostname:$vg_name volume group has only ${free_gb}GB free space")
        log_warn "WARNING: $hostname:$vg_name VG has only ${free_gb}GB free"
        return 1
    fi

    return 0
}

# Monitor a single node
monitor_node() {
    local hostname="$1"

    if ! check_node "$hostname"; then
        log_warn "$hostname is not reachable"
        return 1
    fi

    log_info "Monitoring $hostname..."

    # Get storage status
    local storage_status=$(ssh_node "$hostname" 'pvesm status 2>/dev/null' || echo "")

    if [ -z "$storage_status" ]; then
        log_warn "Could not get storage status from $hostname"
        return 1
    fi

    # Process each storage line (skip header). Use process substitution, not a
    # pipeline: a piped while-loop runs in a subshell and ALERTS updates would
    # be lost. '|| true' keeps set -e from aborting on warning/critical returns.
    while IFS= read -r line; do
        if [ -n "$line" ]; then
            check_storage_usage "$hostname" "$line" || true
        fi
    done < <(echo "$storage_status" | tail -n +2)

    # Check volume groups
    local vgs_info=$(ssh_node "$hostname" 'vgs --units g --noheadings -o vg_name,vg_size,vg_free 2>/dev/null' || echo "")

    if [ -n "$vgs_info" ]; then
        while IFS= read -r line; do
            if [ -n "$line" ]; then
                check_vg_free_space "$hostname" "$line" || true
            fi
        done < <(echo "$vgs_info")
    fi

    # Log storage status
    {
        echo "=== $hostname Storage Status $(date) ==="
        echo "$storage_status"
        echo ""
        echo "=== Volume Groups ==="
        echo "$vgs_info"
        echo ""
    } >> "$STATUS_LOG"

    return 0
}

# Send alerts (can be extended to email, Slack, etc.)
send_alerts() {
    if [ ${#ALERTS[@]} -eq 0 ]; then
        log_success "No storage alerts"
        return 0
    fi

    log_warn "Found ${#ALERTS[@]} storage alert(s)"

    {
        echo "=== Storage Alerts $(date) ==="
        for alert in "${ALERTS[@]}"; do
            echo "$alert"
        done
        echo ""
    } >> "$ALERT_LOG"

    # Print alerts
    for alert in "${ALERTS[@]}"; do
        echo "$alert"
    done

    # TODO: Add email/Slack/webhook notifications here
    # Example:
    # send_email "Storage Alerts" "$(printf '%s\n' "${ALERTS[@]}")"
    # send_slack_webhook "${ALERTS[@]}"
}

# Generate summary report
generate_summary() {
    local summary_file="${LOG_DIR}/storage_summary_$(date +%Y%m%d).txt"

    {
        echo "=== Proxmox Storage Summary $(date) ==="
        echo ""
        echo "Nodes Monitored:"
        for hostname in "${!NODES[@]}"; do
            if check_node "$hostname"; then
                echo "  ✅ $hostname"
            else
                echo "  ❌ $hostname (not reachable)"
            fi
        done
        echo ""
        echo "Alerts: ${#ALERTS[@]}"
        if [ ${#ALERTS[@]} -gt 0 ]; then
            echo ""
            for alert in "${ALERTS[@]}"; do
                echo "  - $alert"
            done
        fi
        echo ""
        echo "Thresholds:"
        echo "  Storage Usage Warning:  ${WARNING_THRESHOLD}%"
        echo "  Storage Usage Critical: ${CRITICAL_THRESHOLD}%"
        echo "  Volume Group Free Warning:  ${VG_FREE_WARNING}GB"
        echo "  Volume Group Free Critical: ${VG_FREE_CRITICAL}GB"
    } > "$summary_file"

    log_info "Summary saved to: $summary_file"
}

# Main monitoring function
main() {
    local mode="${1:-check}"

    case "$mode" in
        check)
            echo "=== Proxmox Storage Monitoring ==="
            echo "Date: $(date)"
            echo ""

            # Monitor all nodes; don't let one unreachable node abort
            # the run under set -e
            for hostname in "${!NODES[@]}"; do
                monitor_node "$hostname" || true
            done

            # Send alerts
            send_alerts

            # Generate summary
            generate_summary

            echo ""
            log_info "Monitoring complete. Check logs in: $LOG_DIR"
            ;;
        status)
            # Show current status
            echo "=== Current Storage Status ==="
            for hostname in "${!NODES[@]}"; do
                if check_node "$hostname"; then
                    echo ""
                    echo "--- $hostname ---"
                    ssh_node "$hostname" 'pvesm status 2>/dev/null' || echo "Could not get status"
                fi
            done
            ;;
        alerts)
            # Show recent alerts
            if [ -f "$ALERT_LOG" ]; then
                tail -50 "$ALERT_LOG"
            else
                echo "No alerts found"
            fi
            ;;
        *)
            echo "Usage: $0 [check|status|alerts]"
            echo "  check  - Run full monitoring check (default)"
            echo "  status - Show current storage status"
            echo "  alerts - Show recent alerts"
            exit 1
            ;;
    esac
}

# Run main function
main "$@"
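The notification TODO in `send_alerts` can be filled in with a Slack-style incoming webhook, which just takes a JSON body with a `text` field. A minimal sketch of building that payload from one alert line (the webhook URL and the alert text are placeholders; real alert strings would need JSON escaping if they can contain quotes):

```shell
# Sketch of a webhook payload for one alert. In the real script this would
# be POSTed with: curl -s -X POST -H 'Content-Type: application/json' \
#   -d "$payload" "$SLACK_WEBHOOK_URL"   (URL is an assumption)
alert="CRITICAL: r630-01:thin1 is at 92% capacity"
payload=$(printf '{"text":"%s"}' "$alert")
echo "$payload"
```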
114
scripts/test-omada-connection.js
Executable file
@@ -0,0 +1,114 @@
#!/usr/bin/env node

/**
 * Test script for Omada API connection
 * Tests authentication and basic API calls
 */

import { OmadaClient, SitesService, DevicesService } from './omada-api/dist/index.js';
import { readFileSync } from 'fs';
import { join } from 'path';
import { homedir } from 'os';

// Load environment variables
const envPath = join(homedir(), '.env');
let envVars = {};

try {
  const envFile = readFileSync(envPath, 'utf8');
  envFile.split('\n').forEach(line => {
    if (line.includes('=') && !line.trim().startsWith('#')) {
      const [key, ...values] = line.split('=');
      if (key && /^[A-Z_][A-Z0-9_]*$/.test(key.trim())) {
        let value = values.join('=').trim();
        if ((value.startsWith('"') && value.endsWith('"')) ||
            (value.startsWith("'") && value.endsWith("'"))) {
          value = value.slice(1, -1);
        }
        envVars[key.trim()] = value;
      }
    }
  });
} catch (error) {
  console.error('Error loading .env file:', error.message);
  process.exit(1);
}

// Get configuration
const baseUrl = envVars.OMADA_CONTROLLER_URL;
const clientId = envVars.OMADA_API_KEY;
const clientSecret = envVars.OMADA_API_SECRET;
const siteId = envVars.OMADA_SITE_ID;
const verifySSL = envVars.OMADA_VERIFY_SSL !== 'false';

if (!baseUrl || !clientId || !clientSecret) {
  console.error('Error: Missing required environment variables');
  console.error('Required: OMADA_CONTROLLER_URL, OMADA_API_KEY, OMADA_API_SECRET');
  process.exit(1);
}

console.log('=== Omada API Connection Test ===\n');
console.log(`Controller URL: ${baseUrl}`);
console.log(`Site ID: ${siteId || 'auto-detect'}`);
console.log(`SSL Verification: ${verifySSL}\n`);

// Create client
const client = new OmadaClient({
  baseUrl,
  clientId,
  clientSecret,
  siteId,
  verifySSL,
});

async function test() {
  try {
    console.log('1. Testing authentication...');
    const auth = client.getAuth();
    const token = await auth.getAccessToken();
    console.log('   ✓ Authentication successful');
    console.log(`   Token: ${token.substring(0, 20)}...\n`);

    console.log('2. Getting site information...');
    const sitesService = new SitesService(client);
    const sites = await sitesService.listSites();
    console.log(`   ✓ Found ${sites.length} site(s)`);
    sites.forEach((site, index) => {
      console.log(`   Site ${index + 1}: ${site.name} (${site.id})`);
    });
    console.log('');

    const effectiveSiteId = siteId || (await client.getSiteId());
    console.log(`3. Using site ID: ${effectiveSiteId}\n`);

    console.log('4. Listing devices...');
    const devicesService = new DevicesService(client);
    const devices = await devicesService.listDevices({ siteId: effectiveSiteId });
    console.log(`   ✓ Found ${devices.length} device(s)`);

    if (devices.length > 0) {
      console.log('\n   Devices:');
      devices.forEach((device, index) => {
        console.log(`   ${index + 1}. ${device.name} (${device.type}) - ${device.model}`);
        console.log(`      Status: ${device.status === 1 ? 'Online' : 'Offline'}`);
        console.log(`      IP: ${device.ip || 'N/A'}`);
      });
    } else {
      console.log('   No devices found');
    }

    console.log('\n=== Test completed successfully! ===');

  } catch (error) {
    console.error('\n=== Test failed ===');
    console.error('Error:', error.message);
    if (error.stack) {
      console.error('\nStack trace:');
      console.error(error.stack);
    }
    process.exit(1);
  }
}

test();
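Both Omada test scripts share the same minimal `~/.env` parser: keep only `KEY=value` lines with an uppercase key, join any extra `=` signs back into the value, and strip one surrounding pair of quotes. The same logic sketched in shell:

```shell
# Shell equivalent of the .env parsing rule in the Omada test scripts:
# split on the first '=', then strip one layer of surrounding quotes.
line='OMADA_SITE_ID="abc123"'
key=${line%%=*}
value=${line#*=}
value=${value%\"}; value=${value#\"}
echo "$key=$value"
```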
299
scripts/test-omada-direct.js
Executable file
@@ -0,0 +1,299 @@
#!/usr/bin/env node

/**
 * Direct connection test to Omada Controller
 * Uses Node.js https module directly for better SSL control
 */

import https from 'https';
import { readFileSync } from 'fs';
import { join } from 'path';
import { homedir } from 'os';

// Load environment variables
const envPath = join(homedir(), '.env');
let envVars = {};

try {
  const envFile = readFileSync(envPath, 'utf8');
  envFile.split('\n').forEach(line => {
    if (line.includes('=') && !line.trim().startsWith('#')) {
      const [key, ...values] = line.split('=');
      if (key && /^[A-Z_][A-Z0-9_]*$/.test(key.trim())) {
        let value = values.join('=').trim();
        if ((value.startsWith('"') && value.endsWith('"')) ||
            (value.startsWith("'") && value.endsWith("'"))) {
          value = value.slice(1, -1);
        }
        envVars[key.trim()] = value;
      }
    }
  });
} catch (error) {
  console.error('Error loading .env file:', error.message);
  process.exit(1);
}

const baseUrl = envVars.OMADA_CONTROLLER_URL || 'https://192.168.11.8:8043';
const username = envVars.OMADA_ADMIN_USERNAME || envVars.OMADA_API_KEY;
const password = envVars.OMADA_ADMIN_PASSWORD || envVars.OMADA_API_SECRET;
const siteId = envVars.OMADA_SITE_ID;
const verifySSL = envVars.OMADA_VERIFY_SSL !== 'false';

if (!username || !password) {
  console.error('Error: Missing credentials');
  console.error('Required: OMADA_ADMIN_USERNAME/OMADA_API_KEY and OMADA_ADMIN_PASSWORD/OMADA_API_SECRET');
  process.exit(1);
}

// Parse base URL
const urlObj = new URL(baseUrl);
const hostname = urlObj.hostname;
const port = urlObj.port || (urlObj.protocol === 'https:' ? 443 : 80);

console.log('=== Omada Controller Direct Connection Test ===\n');
console.log(`Controller URL: ${baseUrl}`);
console.log(`Hostname: ${hostname}`);
console.log(`Port: ${port}`);
console.log(`Site ID: ${siteId || 'auto-detect'}`);
console.log(`SSL Verification: ${verifySSL}\n`);

// Create HTTPS agent
const agent = new https.Agent({
  rejectUnauthorized: verifySSL,
});

// Function to make API request
function apiRequest(method, path, data = null, token = null) {
  return new Promise((resolve, reject) => {
    const options = {
      hostname,
      port,
      path,
      method,
      agent,
      headers: {
        'Content-Type': 'application/json',
      },
    };

    if (token) {
      options.headers['Authorization'] = `Bearer ${token}`;
    }

    const req = https.request(options, (res) => {
      let body = '';
      res.on('data', (chunk) => {
        body += chunk;
      });
      res.on('end', () => {
        try {
          const json = JSON.parse(body);
          if (res.statusCode >= 200 && res.statusCode < 300) {
            resolve(json);
          } else {
            reject(new Error(`HTTP ${res.statusCode}: ${body}`));
          }
        } catch (e) {
          resolve({ raw: body, statusCode: res.statusCode });
        }
      });
    });

    req.on('error', (error) => {
      reject(error);
    });

    if (data) {
      req.write(JSON.stringify(data));
    }
    req.end();
  });
}

async function test() {
  try {
    console.log('1. Testing login...');
    const loginResponse = await apiRequest('POST', '/api/v2/login', {
      username,
      password,
    });

    if (loginResponse.errorCode !== 0) {
      console.error(`   ✗ Login failed: ${loginResponse.msg || 'Unknown error'}`);
      console.error(`   Error Code: ${loginResponse.errorCode}`);
      process.exit(1);
    }

    const token = loginResponse.result?.token || loginResponse.token;
    if (!token) {
      console.error('   ✗ No token received');
      process.exit(1);
    }

    console.log('   ✓ Login successful');
    console.log(`   Token: ${token.substring(0, 20)}...\n`);

    console.log('2. Getting site information...');
    // Try different endpoint formats
    let sitesResponse;
    try {
      sitesResponse = await apiRequest('GET', '/api/v2/sites', null, token);
    } catch (e) {
      // If redirect, try with omadacId parameter
      try {
        sitesResponse = await apiRequest('GET', `/api/v2/sites?omadacId=${siteId}`, null, token);
      } catch (e2) {
        console.log('   Note: Sites endpoint may require different format');
        sitesResponse = { errorCode: -1, msg: 'Could not query sites endpoint' };
      }
    }

    if (sitesResponse.errorCode === 0 && sitesResponse.result) {
      const sites = Array.isArray(sitesResponse.result) ? sitesResponse.result : [];
      console.log(`   ✓ Found ${sites.length} site(s)`);
      sites.forEach((site, index) => {
        console.log(`   Site ${index + 1}: ${site.name} (${site.id})`);
        if (site.id === siteId) {
          console.log(`     ^ Using this site`);
        }
      });
    } else if (sitesResponse.statusCode === 302 || sitesResponse.raw === '') {
      console.log('   Note: Sites endpoint returned redirect (may require web interface)');
      console.log(`   Using provided site ID: ${siteId}`);
    } else {
      console.log('   Response:', JSON.stringify(sitesResponse, null, 2));
    }
    console.log('');

    const effectiveSiteId = siteId || (sitesResponse.result?.[0]?.id);
    if (effectiveSiteId) {
      console.log(`3. Using site ID: ${effectiveSiteId}\n`);

      console.log('4. Listing devices...');
      let devicesResponse;
      try {
        devicesResponse = await apiRequest('GET', `/api/v2/sites/${effectiveSiteId}/devices`, null, token);
      } catch (e) {
        try {
          devicesResponse = await apiRequest('GET', `/api/v2/sites/${effectiveSiteId}/devices?omadacId=${effectiveSiteId}`, null, token);
        } catch (e2) {
          devicesResponse = { errorCode: -1, msg: 'Could not query devices endpoint' };
        }
      }

      if (devicesResponse.errorCode === 0 && devicesResponse.result) {
        const devices = Array.isArray(devicesResponse.result) ? devicesResponse.result : [];
        console.log(`   ✓ Found ${devices.length} device(s)`);

        if (devices.length > 0) {
          console.log('\n   Devices:');

          const routers = devices.filter(d => d.type === 'router' || d.type === 'Gateway');
          const switches = devices.filter(d => d.type === 'switch' || d.type === 'Switch');
          const aps = devices.filter(d => d.type === 'ap' || d.type === 'AccessPoint');

          if (routers.length > 0) {
            console.log('\n   ROUTERS:');
            routers.forEach((device, index) => {
              const status = device.status === 1 ? '🟢 Online' : device.status === 0 ? '🔴 Offline' : '🟡 Unknown';
              console.log(`   ${index + 1}. ${device.name || device.mac || 'Unknown'}`);
              console.log(`      Model: ${device.model || 'N/A'}`);
              console.log(`      Status: ${status}`);
              console.log(`      IP: ${device.ip || 'N/A'}`);
              console.log(`      MAC: ${device.mac || 'N/A'}`);
              console.log(`      Device ID: ${device.id || 'N/A'}`);
              console.log('');
            });
          }

          if (switches.length > 0) {
            console.log('\n   SWITCHES:');
            switches.forEach((device, index) => {
              const status = device.status === 1 ? '🟢 Online' : device.status === 0 ? '🔴 Offline' : '🟡 Unknown';
              console.log(`   ${index + 1}. ${device.name || device.mac || 'Unknown'}`);
              console.log(`      Model: ${device.model || 'N/A'}`);
              console.log(`      Status: ${status}`);
              console.log(`      IP: ${device.ip || 'N/A'}`);
              console.log(`      MAC: ${device.mac || 'N/A'}`);
              console.log(`      Device ID: ${device.id || 'N/A'}`);
              console.log('');
            });
          }

          if (aps.length > 0) {
            console.log('\n   ACCESS POINTS:');
            aps.forEach((device, index) => {
              const status = device.status === 1 ? '🟢 Online' : device.status === 0 ? '🔴 Offline' : '🟡 Unknown';
              console.log(`   ${index + 1}. ${device.name || device.mac || 'Unknown'}`);
              console.log(`      Model: ${device.model || 'N/A'}`);
              console.log(`      Status: ${status}`);
              console.log(`      IP: ${device.ip || 'N/A'}`);
              console.log('');
            });
          }

          const others = devices.filter(d =>
            d.type !== 'router' && d.type !== 'Gateway' &&
            d.type !== 'switch' && d.type !== 'Switch' &&
            d.type !== 'ap' && d.type !== 'AccessPoint'
          );

          if (others.length > 0) {
            console.log('\n   OTHER DEVICES:');
            others.forEach((device, index) => {
              const status = device.status === 1 ? '🟢 Online' : device.status === 0 ? '🔴 Offline' : '🟡 Unknown';
              console.log(`   ${index + 1}. ${device.name || device.mac || 'Unknown'} (${device.type || 'Unknown'})`);
              console.log(`      Status: ${status}`);
              console.log(`      IP: ${device.ip || 'N/A'}`);
              console.log('');
            });
          }
        } else {
          console.log('   No devices found');
        }
      } else if (devicesResponse.statusCode === 302 || devicesResponse.raw === '') {
        console.log('   Note: Devices endpoint returned redirect (API may require different format)');
        console.log('   Try accessing web interface at https://192.168.11.8:8043 to view devices');
      } else {
        console.log('   Response:', JSON.stringify(devicesResponse, null, 2));
      }

      console.log('5. Listing VLANs...');
      const vlansResponse = await apiRequest('GET', `/api/v2/sites/${effectiveSiteId}/vlans`, null, token);

      if (vlansResponse.errorCode === 0 && vlansResponse.result) {
        const vlans = Array.isArray(vlansResponse.result) ? vlansResponse.result : [];
        console.log(`   ✓ Found ${vlans.length} VLAN(s)`);

        if (vlans.length > 0) {
          console.log('\n   VLANs:');
          vlans.forEach((vlan, index) => {
            console.log(`   ${index + 1}. VLAN ${vlan.vlanId || 'N/A'}: ${vlan.name || 'Unnamed'}`);
            if (vlan.subnet) console.log(`      Subnet: ${vlan.subnet}`);
            if (vlan.gateway) console.log(`      Gateway: ${vlan.gateway}`);
            console.log('');
          });
        } else {
          console.log('   No VLANs configured');
        }
      } else {
        console.log('   Response:', JSON.stringify(vlansResponse, null, 2));
      }
    }

    console.log('\n=== Connection test completed successfully! ===');

  } catch (error) {
    console.error('\n=== Test failed ===');
    console.error('Error:', error.message);
    if (error.stack) {
      console.error('\nStack trace:');
      console.error(error.stack);
    }
    process.exit(1);
  }
}

test();
**scripts/test_connection.sh** (new executable file, 46 lines):
```bash
#!/bin/bash
# Test Proxmox connection methods

PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
PROXMOX_USER="${PROXMOX_USER:-root}"

echo "Testing Proxmox Connection Methods"
echo "==================================="
echo ""

echo "1. Testing Ping..."
if ping -c 2 -W 2 $PROXMOX_HOST > /dev/null 2>&1; then
    echo " ✓ Ping successful"
else
    echo " ✗ Ping failed (host unreachable)"
fi

echo ""
echo "2. Testing Port 8006..."
if timeout 3 bash -c "echo > /dev/tcp/$PROXMOX_HOST/8006" 2>/dev/null; then
    echo " ✓ Port 8006 is accessible"
else
    echo " ✗ Port 8006 is not accessible"
fi

echo ""
echo "3. Testing SSH..."
if timeout 5 ssh -o StrictHostKeyChecking=no -o ConnectTimeout=3 $PROXMOX_USER@$PROXMOX_HOST "echo 'SSH works'" 2>/dev/null; then
    echo " ✓ SSH connection successful"
    echo ""
    echo "4. Testing pvesh via SSH..."
    if timeout 5 ssh -o StrictHostKeyChecking=no -o ConnectTimeout=3 $PROXMOX_USER@$PROXMOX_HOST "pvesh get /nodes --output-format json" 2>/dev/null | grep -q "node"; then
        echo " ✓ pvesh works via SSH"
        echo ""
        echo "→ Recommendation: Use list_vms.sh (shell script)"
    else
        echo " ✗ pvesh not available or failed"
    fi
else
    echo " ✗ SSH connection failed"
fi

echo ""
echo "Current Network Info:"
echo " Your IP: $(ip addr show | grep 'inet.*192.168' | awk '{print $2}' | head -1)"
echo " Target Host: $PROXMOX_HOST"
```
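The port check in step 2 uses bash's built-in `/dev/tcp` pseudo-device instead of an external tool like `nc`. As a minimal sketch, the same technique can be wrapped in a reusable function; `port_open` is a hypothetical helper name introduced here for illustration, not part of the script above:

```bash
#!/bin/bash
# Sketch of the /dev/tcp probe from step 2 of test_connection.sh.
# port_open is a hypothetical helper name, not part of the script.
port_open() {
    local host=$1 port=$2
    # The redirection attempts a TCP connect; it exits non-zero if
    # nothing accepts the connection before the timeout expires.
    timeout 3 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null
}

if port_open 192.168.11.10 8006; then
    echo "Proxmox web UI port is open"
else
    echo "Port 8006 is not reachable"
fi
```

Note that `/dev/tcp` is a bash feature, not a real device file, so the script must run under `bash` rather than a POSIX `sh`.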
**scripts/verify-tunnel-config.sh** (new executable file, 105 lines):
```bash
#!/bin/bash
# Verify tunnel configuration and test all endpoints

echo "═══════════════════════════════════════════════════════════"
echo " Verify Tunnel Configuration"
echo "═══════════════════════════════════════════════════════════"
echo ""

# Expected hostnames
EXPECTED_HOSTNAMES=(
    "explorer.d-bis.org"
    "rpc-http-pub.d-bis.org"
    "rpc-http-prv.d-bis.org"
    "dbis-admin.d-bis.org"
    "dbis-api.d-bis.org"
    "dbis-api-2.d-bis.org"
    "mim4u.org"
    "www.mim4u.org"
    "rpc-ws-pub.d-bis.org"
    "rpc-ws-prv.d-bis.org"
)

echo "✅ Configuration Verified in Cloudflare Dashboard:"
echo ""
echo "All 10 routes configured:"
for i in "${!EXPECTED_HOSTNAMES[@]}"; do
    num=$((i+1))
    echo " $num. ${EXPECTED_HOSTNAMES[$i]} → http://192.168.11.21:80"
done
echo ""

# Check for differences
echo "⚠️ Note: Domain Configuration Differences"
echo ""
echo " In DNS Zone File:"
echo " - mim4u.org.d-bis.org (subdomain)"
echo " - www.mim4u.org.d-bis.org (subdomain)"
echo ""
echo " In Tunnel Config:"
echo " - mim4u.org (root domain)"
echo " - www.mim4u.org (root domain)"
echo ""
echo " This is correct if mim4u.org is a separate domain!"
echo ""

# Test endpoints
echo "═══════════════════════════════════════════════════════════"
echo " Testing Endpoints"
echo "═══════════════════════════════════════════════════════════"
echo ""

SUCCESS=0
FAILED=0

for hostname in "${EXPECTED_HOSTNAMES[@]}"; do
    # Skip WebSocket endpoints for HTTP test
    if [[ "$hostname" == *"ws-"* ]]; then
        echo " ⏭️ $hostname (WebSocket - skipping HTTP test)"
        continue
    fi

    echo -n " Testing $hostname... "

    HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 "https://$hostname" 2>/dev/null || echo "000")

    if [ "$HTTP_CODE" = "000" ]; then
        echo "❌ FAILED (timeout/no connection)"
        FAILED=$((FAILED+1))
    elif [ "$HTTP_CODE" -ge 200 ] && [ "$HTTP_CODE" -lt 400 ]; then
        echo "✅ OK (HTTP $HTTP_CODE)"
        SUCCESS=$((SUCCESS+1))
    elif [ "$HTTP_CODE" -ge 400 ] && [ "$HTTP_CODE" -lt 500 ]; then
        echo "⚠️ HTTP $HTTP_CODE (service may be down)"
        FAILED=$((FAILED+1))
    else
        echo "❌ HTTP $HTTP_CODE"
        FAILED=$((FAILED+1))
    fi
done

echo ""
echo "═══════════════════════════════════════════════════════════"
echo " Test Results"
echo "═══════════════════════════════════════════════════════════"
echo ""
echo " ✅ Successful: $SUCCESS"
echo " ❌ Failed: $FAILED"
echo ""

if [ $FAILED -eq 0 ]; then
    echo "🎉 All endpoints are accessible!"
else
    echo "⚠️ Some endpoints failed. Possible reasons:"
    echo " 1. Tunnel connector not running (VMID 102)"
    echo " 2. Nginx not accessible at 192.168.11.21:80"
    echo " 3. Services not configured in Nginx"
    echo " 4. DNS propagation delay"
    echo ""
    echo "Next steps:"
    echo " 1. Check tunnel status in Cloudflare Dashboard"
    echo " 2. Verify container is running: ssh root@192.168.11.12 'pct status 102'"
    echo " 3. Check tunnel service: ssh root@192.168.11.12 'pct exec 102 -- systemctl status cloudflared'"
fi

echo ""
```
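The curl loop in verify-tunnel-config.sh folds its status handling into one if/elif chain. As a sketch, the same classification can be factored into a standalone function, which makes the thresholds easy to test in isolation; `classify_status` is a hypothetical name, and the OK/WARN/FAILED labels simplify the script's emoji output:

```bash
#!/bin/bash
# Sketch: the HTTP status classification from verify-tunnel-config.sh,
# factored into a function. classify_status is a hypothetical name and
# the labels are simplifications of the script's output strings.
classify_status() {
    local code=$1
    if [ "$code" = "000" ]; then
        echo "FAILED"   # curl timed out or could not connect
    elif [ "$code" -ge 200 ] && [ "$code" -lt 400 ]; then
        echo "OK"       # 2xx success or 3xx redirect
    elif [ "$code" -ge 400 ] && [ "$code" -lt 500 ]; then
        echo "WARN"     # tunnel reachable, but the backend service errored
    else
        echo "FAILED"   # 5xx or anything unexpected
    fi
}

classify_status 200   # prints OK
classify_status 301   # prints OK
classify_status 404   # prints WARN
classify_status 000   # prints FAILED
```

Note that, like the original loop, this treats 3xx redirects as success, since Cloudflare and Nginx routinely answer with redirects for healthy sites.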