Refactor code for improved readability and performance

This commit is contained in:
defiQUG
2025-12-21 22:32:09 -08:00
parent 79e3c02f50
commit b45c2006be
2259 changed files with 380318 additions and 2 deletions


@@ -0,0 +1,97 @@
# Besu Balance Query Script
Query balances from Besu RPC nodes running on VMID 115-117.
**Note**: Only RPC nodes (115-117) expose RPC endpoints. Validators (106-110) and sentries (111-114) don't have RPC enabled.
## Installation
```bash
npm install ethers
```
## Usage
### Basic Usage (with RPC template)
The script defaults to querying RPC nodes 115-117. It uses a template to generate RPC URLs by substituting `{vmid}` with the VMID:
```bash
RPC_TEMPLATE="http://192.168.11.{vmid}:8545" \
node scripts/besu_balances_106_117.js
```
**Note**: If neither `RPC_TEMPLATE` nor `RPC_URLS` is set, the script queries VMIDs 115-117 (the RPC nodes) and resolves each VMID through its built-in VMID-to-IP mapping, falling back to the template `http://192.168.11.{vmid}:8545` for unmapped VMIDs.
### With Custom RPC URLs
You can also provide explicit RPC URLs (comma-separated):
```bash
RPC_URLS="http://192.168.11.13:8545,http://192.168.11.14:8545,http://192.168.11.15:8545" \
node scripts/besu_balances_106_117.js
```
### With Token Addresses
```bash
RPC_TEMPLATE="http://192.168.11.{vmid}:8545" \
WETH9_ADDRESS="0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2" \
WETH10_ADDRESS="0xYourWETH10Address" \
node scripts/besu_balances_106_117.js
```
## Environment Variables
- `RPC_TEMPLATE`: Template for RPC URLs with `{vmid}` placeholder (default: `http://192.168.11.{vmid}:8545`)
- `RPC_URLS`: Comma-separated list of explicit RPC URLs (overrides template if set)
- `VMID_START`: Starting VMID for template (default: `115` - RPC nodes only)
- `VMID_END`: Ending VMID for template (default: `117` - RPC nodes only)
- `WETH9_ADDRESS`: WETH9 token contract address (default: `0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2`)
- `WETH10_ADDRESS`: WETH10 token contract address (optional, skipped if not set)
## Output
The script outputs balance information for each RPC endpoint:
```
VMID: 115
RPC: http://192.168.11.23:8545
chainId: 138
block: 12345
ETH: 1.5 (wei: 1500000000000000000)
WETH9: 0.0 WETH (raw: 0)
WETH10: skipped (missing address)
---
```
## Features
- Queries native ETH balance
- Queries ERC-20 token balances (WETH9, WETH10)
- Fetches token decimals and symbol for formatted display
- Health checks (chainId, blockNumber)
- Concurrent requests (limit: 4)
- Request timeout: 15 seconds
- Continues on failures, reports errors per endpoint
- Exit code: 0 if at least one RPC succeeded, 1 if all failed
## Container IP Mapping (Reference)
For the deployed containers, the VMID to IP mapping is:
- 106 -> 192.168.11.13 (besu-validator-1)
- 107 -> 192.168.11.14 (besu-validator-2)
- 108 -> 192.168.11.15 (besu-validator-3)
- 109 -> 192.168.11.16 (besu-validator-4)
- 110 -> 192.168.11.18 (besu-validator-5)
- 111 -> 192.168.11.19 (besu-sentry-2)
- 112 -> 192.168.11.20 (besu-sentry-3)
- 113 -> 192.168.11.21 (besu-sentry-4)
- 114 -> 192.168.11.22 (besu-sentry-5)
- 115 -> 192.168.11.23 (besu-rpc-1)
- 116 -> 192.168.11.24 (besu-rpc-2)
- 117 -> 192.168.11.25 (besu-rpc-3)
**Note**: Since the IPs don't follow a simple pattern with VMID, it's recommended to use `RPC_URLS`
with explicit addresses or create a custom mapping in the script.
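Since the IPs don't map arithmetically onto VMIDs, one way to assemble an explicit `RPC_URLS` value from the RPC-node rows of the table above is a short loop (a sketch):

```bash
# Build a comma-separated RPC_URLS value from the three RPC-node IPs above.
RPC_IPS="192.168.11.23 192.168.11.24 192.168.11.25"
RPC_URLS=""
for ip in $RPC_IPS; do
  RPC_URLS="${RPC_URLS:+$RPC_URLS,}http://${ip}:8545"
done
echo "$RPC_URLS"
```

Export the result (or inline it) before invoking the script, e.g. `RPC_URLS="$RPC_URLS" node scripts/besu_balances_106_117.js`.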


@@ -0,0 +1,66 @@
# Cloudflare Credentials Web Setup
A web-based interface to configure your Cloudflare API credentials.
## Quick Start
1. **Start the web server:**
```bash
./scripts/start-cloudflare-setup.sh
```
2. **Open in your browser:**
- http://localhost:5000
- http://127.0.0.1:5000
3. **Fill in the form:**
- Enter your Cloudflare email
- Add your Global API Key or API Token
- Optionally add Account ID, Zone ID, Domain, and Tunnel Token
- Click "Save Credentials"
4. **Test your connection:**
- Click "Test API Connection" to verify credentials work
5. **Stop the server:**
- Press `Ctrl+C` in the terminal
## Features
- ✅ View current credential status
- ✅ Update credentials via web form
- ✅ Test API connection
- ✅ Secure (only accessible from localhost)
- ✅ Automatically saves to `.env` file
## Getting Your Credentials
### Global API Key
1. Go to: https://dash.cloudflare.com/profile/api-tokens
2. Scroll to **API Keys** section
3. Click **View** next to **Global API Key**
4. Enter your password
5. Copy the key
### API Token (Recommended)
1. Go to: https://dash.cloudflare.com/profile/api-tokens
2. Click **Create Token**
3. Use **Edit zone DNS** template
4. Add permission: **Account** → **Cloudflare Tunnel** → **Edit**
5. Copy the token
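To confirm a token works independently of this web UI, Cloudflare's documented token-verification endpoint can be queried directly (replace the placeholder with your token):

```bash
curl -s -H "Authorization: Bearer YOUR_API_TOKEN" \
  https://api.cloudflare.com/client/v4/user/tokens/verify
```

A valid token returns `"success": true` in the JSON response.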
### Account ID & Zone ID
- Found in your Cloudflare dashboard
- Or test the connection from the web interface, which can help you locate them
## Troubleshooting
**Port 5000 already in use?**
- The script will show an error
- Kill the process: `lsof -ti:5000 | xargs kill -9`
- Or modify the port in `cloudflare-setup-web.py`
**Can't access from browser?**
- Make sure you're accessing from the same machine
- The server only listens on localhost (127.0.0.1) for security

scripts/README.md Normal file

@@ -0,0 +1,118 @@
# Project Root Scripts
This directory contains utility scripts for managing the Proxmox workspace project.
## Setup Scripts
### `setup.sh`
Initial setup script that creates `.env` file and Claude Desktop configuration.
**Usage:**
```bash
./scripts/setup.sh
```
### `complete-setup.sh`
Complete setup script that performs all setup steps including dependency installation.
**Usage:**
```bash
./scripts/complete-setup.sh
```
### `verify-setup.sh`
Verifies that the workspace is properly configured and all prerequisites are met.
**Usage:**
```bash
./scripts/verify-setup.sh
```
## Environment Configuration
### `configure-env.sh`
Quick configuration script to update `.env` with Proxmox credentials.
**Usage:**
```bash
./scripts/configure-env.sh
```
### `load-env.sh`
Standardized `.env` loader function. Can be sourced by other scripts.
**Usage:**
```bash
source scripts/load-env.sh
load_env_file
```
Or run directly:
```bash
./scripts/load-env.sh
```
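For illustration, a minimal loader along these lines (a hypothetical sketch; the actual `load-env.sh` may differ):

```bash
# Hypothetical sketch of an .env loader; the real scripts/load-env.sh may differ.
load_env_file() {
  local env_file="${1:-$HOME/.env}"
  if [[ ! -f "$env_file" ]]; then
    echo "No env file found at $env_file" >&2
    return 1
  fi
  set -a            # export every variable assigned while sourcing
  # shellcheck disable=SC1090
  source "$env_file"
  set +a
}
```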
## Token Management
### `create-proxmox-token.sh`
Creates a Proxmox API token programmatically.
**Usage:**
```bash
./scripts/create-proxmox-token.sh <host> <user> <password> [token-name]
```
**Example:**
```bash
./scripts/create-proxmox-token.sh 192.168.11.10 root@pam mypassword mcp-server
```
### `update-token.sh`
Interactively updates the `PROXMOX_TOKEN_VALUE` in `~/.env`.
**Usage:**
```bash
./scripts/update-token.sh
```
## Testing & Validation
### `test-connection.sh`
Tests the connection to the Proxmox API using credentials from `~/.env`.
**Usage:**
```bash
./scripts/test-connection.sh
```
### `validate-ml110-deployment.sh`
Comprehensive validation script for deployment to ml110-01.
**Usage:**
```bash
./scripts/validate-ml110-deployment.sh
```
This script validates:
- Prerequisites
- Proxmox connection
- Storage availability
- Template availability
- Configuration files
- Deployment scripts
- Resource requirements
## Script Dependencies
All scripts use the standardized `~/.env` file for configuration. See [docs/ENV_STANDARDIZATION.md](../docs/ENV_STANDARDIZATION.md) for details.
## Environment Variables
All scripts expect these variables in `~/.env`:
- `PROXMOX_HOST` - Proxmox host IP or hostname
- `PROXMOX_PORT` - Proxmox API port (default: 8006)
- `PROXMOX_USER` - Proxmox API user (e.g., root@pam)
- `PROXMOX_TOKEN_NAME` - API token name
- `PROXMOX_TOKEN_VALUE` - API token secret value
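A quick pre-flight check along these lines can catch a missing variable before any of the scripts run (a sketch, not part of the shipped scripts):

```bash
# Sketch: report which required Proxmox variables are missing from the environment.
check_proxmox_env() {
  local var missing=()
  for var in PROXMOX_HOST PROXMOX_PORT PROXMOX_USER PROXMOX_TOKEN_NAME PROXMOX_TOKEN_VALUE; do
    [[ -n "${!var:-}" ]] || missing+=("$var")
  done
  if ((${#missing[@]} > 0)); then
    echo "Missing required variables: ${missing[*]}" >&2
    return 1
  fi
  echo "All required Proxmox variables are set"
}
```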


@@ -0,0 +1,96 @@
#!/bin/bash
# Backup Configuration Files and Validator Keys
# Creates encrypted backups of critical files
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
# Backup configuration
BACKUP_BASE="${BACKUP_BASE:-/backup/smom-dbis-138}"
BACKUP_DIR="$BACKUP_BASE/$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BACKUP_DIR"
log_info "Creating backup in: $BACKUP_DIR"
# Backup deployment configs (if on Proxmox host)
if [[ -d "$PROJECT_ROOT/config" ]]; then
log_info "Backing up deployment configuration files..."
tar -czf "$BACKUP_DIR/deployment-configs.tar.gz" -C "$PROJECT_ROOT" config/ || {
log_warn "Failed to backup deployment configs (may not be on Proxmox host)"
}
fi
# Backup source project configs (if accessible)
SOURCE_PROJECT="${SOURCE_PROJECT:-/home/intlc/projects/smom-dbis-138}"
if [[ -d "$SOURCE_PROJECT/config" ]]; then
log_info "Backing up source project configuration files..."
tar -czf "$BACKUP_DIR/source-configs.tar.gz" -C "$SOURCE_PROJECT" config/ || {
log_warn "Failed to backup source configs"
}
# Backup validator keys (encrypted if gpg available)
if [[ -d "$SOURCE_PROJECT/keys/validators" ]]; then
log_info "Backing up validator keys..."
if command -v gpg >/dev/null 2>&1 && [[ -n "${BACKUP_PASSPHRASE:-}" ]]; then
tar -czf - -C "$SOURCE_PROJECT" keys/validators/ | \
gpg -c --cipher-algo AES256 --batch --yes \
--passphrase "$BACKUP_PASSPHRASE" \
> "$BACKUP_DIR/validator-keys.tar.gz.gpg" 2>/dev/null || {
log_warn "GPG encryption failed, backing up without encryption"
tar -czf "$BACKUP_DIR/validator-keys.tar.gz" -C "$SOURCE_PROJECT" keys/validators/
}
else
log_warn "GPG or BACKUP_PASSPHRASE not available, backing up without encryption"
tar -czf "$BACKUP_DIR/validator-keys.tar.gz" -C "$SOURCE_PROJECT" keys/validators/
fi
fi
fi
# Backup container configurations (if pct available)
if command -v pct >/dev/null 2>&1; then
log_info "Backing up container configurations..."
mkdir -p "$BACKUP_DIR/containers"
for vmid in 1000 1001 1002 1003 1004 1500 1501 1502 1503 2500 2501 2502; do
if pct config "$vmid" >/dev/null 2>&1; then
pct config "$vmid" > "$BACKUP_DIR/containers/container-$vmid.conf" 2>/dev/null || true
fi
done
log_success "Container configs backed up"
fi
# Create backup manifest
cat > "$BACKUP_DIR/manifest.txt" <<MANIFEST
Backup created: $(date)
Backup location: $BACKUP_DIR
Contents:
- deployment-configs.tar.gz
- source-configs.tar.gz
- validator-keys.tar.gz[.gpg]
- containers/ (container configurations)
Restore instructions:
1. Extract configs: tar -xzf deployment-configs.tar.gz
2. Extract source configs: tar -xzf source-configs.tar.gz
3. Decrypt and extract keys: gpg -d validator-keys.tar.gz.gpg | tar -xzf -
4. Restore container configs from containers/ directory
MANIFEST
log_success "Backup complete: $BACKUP_DIR"
# Retention policy: Keep backups for 30 days
log_info "Cleaning up old backups (retention: 30 days)..."
find "$BACKUP_BASE" -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} \; 2>/dev/null || true
log_success "Backup process complete!"


@@ -0,0 +1,198 @@
#!/bin/bash
# Collect enodes from all nodes and generate allowlist
# Usage: Update NODES array with your node IPs, then: bash collect-all-enodes.sh
set -euo pipefail
WORK_DIR="${WORK_DIR:-./besu-enodes-$(date +%Y%m%d-%H%M%S)}"
mkdir -p "$WORK_DIR"
# Node inventory: IP:RPC_PORT:USE_RPC (use_rpc=1 if RPC available, 0 for nodekey)
declare -A NODES=(
["192.168.11.13"]="8545:1" # validator-1
["192.168.11.14"]="8545:1" # validator-2
["192.168.11.15"]="8545:1" # validator-3
["192.168.11.16"]="8545:1" # validator-4
["192.168.11.18"]="8545:1" # validator-5
["192.168.11.19"]="8545:1" # sentry-2
["192.168.11.20"]="8545:1" # sentry-3
["192.168.11.21"]="8545:1" # sentry-4
["192.168.11.22"]="8545:1" # sentry-5
["192.168.11.23"]="8545:1" # rpc-1
["192.168.11.24"]="8545:1" # rpc-2
["192.168.11.25"]="8545:1" # rpc-3
)
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SSH_USER="${SSH_USER:-root}"
SSH_OPTS="${SSH_OPTS:--o StrictHostKeyChecking=accept-new}"
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
validate_enode() {
local enode="$1"
local node_id
node_id=$(echo "$enode" | sed 's|^enode://||' | cut -d'@' -f1 | tr '[:upper:]' '[:lower:]')
if [[ ${#node_id} -ne 128 ]]; then
return 1
fi
if ! echo "$node_id" | grep -qE '^[0-9a-f]{128}$'; then
return 1
fi
return 0
}
extract_via_rpc() {
local ip="$1"
local rpc_port="$2"
local rpc_url="http://${ip}:${rpc_port}"
local response
response=$(curl -s -m 5 -X POST \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"admin_nodeInfo","params":[],"id":1}' \
"${rpc_url}" 2>/dev/null || echo "")
if [[ -z "$response" ]]; then
return 1
fi
# JSON-RPC returned an error object -> treat as failure
if echo "$response" | python3 -c "import sys, json; data=json.load(sys.stdin); sys.exit(0 if 'error' in data else 1)" 2>/dev/null; then
return 1
fi
local enode
enode=$(echo "$response" | python3 -c "import sys, json; print(json.load(sys.stdin).get('result', {}).get('enode', ''))" 2>/dev/null)
if [[ -z "$enode" ]] || [[ "$enode" == "None" ]] || [[ "$enode" == "null" ]]; then
return 1
fi
if validate_enode "$enode"; then
echo "$enode"
return 0
fi
return 1
}
extract_via_ssh_nodekey() {
local ip="$1"
local ssh_target="${SSH_USER}@${ip}"
local enode
enode=$(ssh $SSH_OPTS "$ssh_target" bash << REMOTE_SCRIPT
DATA_PATH="/data/besu"
BESU_BIN="/opt/besu/bin/besu"
HOST_IP="${ip}"
for path in "\${DATA_PATH}/key" "\${DATA_PATH}/nodekey" "/keys/besu/nodekey"; do
if [[ -f "\$path" ]]; then
ENODE=\$("\${BESU_BIN}" public-key export --node-private-key-file="\$path" --format=enode 2>/dev/null | sed "s/@[0-9.]*:/@\${HOST_IP}:/")
if [[ -n "\$ENODE" ]]; then
echo "\$ENODE"
exit 0
fi
fi
done
exit 1
REMOTE_SCRIPT
)
if [[ -n "$enode" ]] && validate_enode "$enode"; then
echo "$enode"
return 0
fi
return 1
}
log_info "Starting enode collection..."
echo ""
COLLECTED_ENODES="$WORK_DIR/collected-enodes.txt"
DUPLICATES="$WORK_DIR/duplicates.txt"
INVALIDS="$WORK_DIR/invalid-enodes.txt"
> "$COLLECTED_ENODES"
> "$DUPLICATES"
> "$INVALIDS"
declare -A ENODE_BY_IP
declare -A NODE_ID_SET
for ip in "${!NODES[@]}"; do
IFS=':' read -r rpc_port use_rpc <<< "${NODES[$ip]}"
log_info "Processing node: $ip"
ENODE=""
if [[ "$use_rpc" == "1" ]]; then
ENODE=$(extract_via_rpc "$ip" "$rpc_port" || echo "")
fi
if [[ -z "$ENODE" ]]; then
ENODE=$(extract_via_ssh_nodekey "$ip" || echo "")
fi
if [[ -z "$ENODE" ]]; then
log_error "Failed to extract enode from $ip"
echo "$ip|FAILED" >> "$INVALIDS"
continue
fi
if ! validate_enode "$ENODE"; then
log_error "Invalid enode format from $ip"
echo "$ip|$ENODE" >> "$INVALIDS"
continue
fi
NODE_ID=$(echo "$ENODE" | sed 's|^enode://||' | cut -d'@' -f1)
ENDPOINT=$(echo "$ENODE" | sed 's|.*@||')
if [[ -n "${NODE_ID_SET[$NODE_ID]:-}" ]]; then
log_warn "Duplicate node ID: ${NODE_ID:0:32}..."
echo "$ip|$ENODE|DUPLICATE_NODE_ID|${NODE_ID_SET[$NODE_ID]}" >> "$DUPLICATES"
continue
fi
if [[ -n "${ENODE_BY_IP[$ENDPOINT]:-}" ]]; then
log_warn "Duplicate endpoint: $ENDPOINT"
echo "$ip|$ENODE|DUPLICATE_ENDPOINT|${ENODE_BY_IP[$ENDPOINT]}" >> "$DUPLICATES"
continue
fi
NODE_ID_SET[$NODE_ID]="$ip"
ENODE_BY_IP[$ENDPOINT]="$ip"
echo "$ip|$ENODE" >> "$COLLECTED_ENODES"
log_success "Collected: $ip"
done
VALID_COUNT=$(wc -l < "$COLLECTED_ENODES" 2>/dev/null || echo "0")
DUP_COUNT=$(wc -l < "$DUPLICATES" 2>/dev/null || echo "0")
INVALID_COUNT=$(wc -l < "$INVALIDS" 2>/dev/null || echo "0")
echo ""
log_info "Collection Summary:"
log_success "Valid enodes: $VALID_COUNT"
[[ "$DUP_COUNT" -gt 0 ]] && log_warn "Duplicates: $DUP_COUNT (see $DUPLICATES)"
[[ "$INVALID_COUNT" -gt 0 ]] && log_error "Invalid: $INVALID_COUNT (see $INVALIDS)"
echo ""
log_info "Output directory: $WORK_DIR"
log_info "Run: bash ${SCRIPT_DIR}/besu-generate-allowlist.sh $COLLECTED_ENODES"


@@ -0,0 +1,68 @@
#!/bin/bash
# Deploy corrected allowlist files to all Proxmox containers
# Usage: bash besu-deploy-allowlist.sh <static-nodes.json> <permissions-nodes.toml> [proxmox-host]
set -euo pipefail
PROXMOX_HOST="${3:-192.168.11.10}"
STATIC_NODES_FILE="${1:-static-nodes.json}"
PERMISSIONS_TOML_FILE="${2:-permissions-nodes.toml}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if [[ ! -f "$STATIC_NODES_FILE" ]] || [[ ! -f "$PERMISSIONS_TOML_FILE" ]]; then
echo "ERROR: Files not found:" >&2
[[ ! -f "$STATIC_NODES_FILE" ]] && echo " - $STATIC_NODES_FILE" >&2
[[ ! -f "$PERMISSIONS_TOML_FILE" ]] && echo " - $PERMISSIONS_TOML_FILE" >&2
exit 1
fi
# Validate files first
echo "Validating files before deployment..."
if ! bash "${SCRIPT_DIR}/besu-validate-allowlist.sh" "$STATIC_NODES_FILE" "$PERMISSIONS_TOML_FILE"; then
echo "ERROR: Validation failed. Fix files before deploying." >&2
exit 1
fi
echo ""
echo "Deploying to Proxmox host: $PROXMOX_HOST"
# Copy files to host
scp -o StrictHostKeyChecking=accept-new \
"$STATIC_NODES_FILE" \
"$PERMISSIONS_TOML_FILE" \
"root@${PROXMOX_HOST}:/tmp/"
# Deploy to all containers
for vmid in 1000 1001 1002 1003 1004 1500 1501 1502 1503 2500 2501 2502; do
echo "Deploying to container $vmid..."
ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" << DEPLOY_SCRIPT
if ! pct status $vmid 2>/dev/null | grep -q running; then
echo " Container $vmid not running, skipping"
exit 0
fi
pct push $vmid /tmp/$(basename $STATIC_NODES_FILE) /etc/besu/static-nodes.json
pct push $vmid /tmp/$(basename $PERMISSIONS_TOML_FILE) /etc/besu/permissions-nodes.toml
pct exec $vmid -- chown besu:besu /etc/besu/static-nodes.json /etc/besu/permissions-nodes.toml
if pct exec $vmid -- test -f /etc/besu/static-nodes.json && \
pct exec $vmid -- test -f /etc/besu/permissions-nodes.toml; then
echo " ✓ Container $vmid: Files deployed"
else
echo " ✗ Container $vmid: Deployment failed"
fi
DEPLOY_SCRIPT
done
# Cleanup
ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" \
"rm -f /tmp/$(basename $STATIC_NODES_FILE) /tmp/$(basename $PERMISSIONS_TOML_FILE)"
echo ""
echo "✓ Deployment complete"
echo ""
echo "Next steps:"
echo "1. Restart Besu services on all containers"
echo "2. Run verification: bash ${SCRIPT_DIR}/besu-verify-peers.sh <rpc-url>"


@@ -0,0 +1,56 @@
#!/bin/bash
# Extract enode from Besu nodekey file using Besu CLI
# Usage: DATA_PATH=/data/besu NODE_IP=192.168.11.13 bash extract-enode-from-nodekey.sh
set -euo pipefail
DATA_PATH="${DATA_PATH:-/data/besu}"
BESU_BIN="${BESU_BIN:-/opt/besu/bin/besu}"
NODE_IP="${NODE_IP:-}"
P2P_PORT="${P2P_PORT:-30303}"
# Find nodekey file
NODEKEY_FILE=""
for path in "${DATA_PATH}/key" "${DATA_PATH}/nodekey" "/keys/besu/nodekey"; do
if [[ -f "$path" ]]; then
NODEKEY_FILE="$path"
break
fi
done
if [[ -z "$NODEKEY_FILE" ]]; then
echo "ERROR: Nodekey file not found in ${DATA_PATH}/key, ${DATA_PATH}/nodekey, or /keys/besu/nodekey" >&2
exit 1
fi
echo "Found nodekey: $NODEKEY_FILE" >&2
# Generate enode using Besu CLI
if [[ -n "$NODE_IP" ]]; then
ENODE=$("${BESU_BIN}" public-key export --node-private-key-file="${NODEKEY_FILE}" --format=enode 2>/dev/null | sed "s/@[0-9.]*:/@${NODE_IP}:/")
else
ENODE=$("${BESU_BIN}" public-key export --node-private-key-file="${NODEKEY_FILE}" --format=enode 2>/dev/null)
fi
if [[ -z "$ENODE" ]]; then
echo "ERROR: Failed to generate enode from nodekey" >&2
exit 1
fi
# Extract and validate node ID length
NODE_ID=$(echo "$ENODE" | sed 's|^enode://||' | cut -d'@' -f1 | tr '[:upper:]' '[:lower:]')
NODE_ID_LEN=${#NODE_ID}
if [[ "$NODE_ID_LEN" -ne 128 ]]; then
echo "ERROR: Invalid node ID length: $NODE_ID_LEN (expected 128)" >&2
echo "Node ID: ${NODE_ID:0:32}...${NODE_ID: -32}" >&2
exit 1
fi
# Validate hex format
if ! echo "$NODE_ID" | grep -qE '^[0-9a-f]{128}$'; then
echo "ERROR: Node ID contains invalid hex characters" >&2
exit 1
fi
echo "$ENODE"


@@ -0,0 +1,55 @@
#!/bin/bash
# Extract enode from running Besu node via JSON-RPC
# Usage: RPC_URL=http://192.168.11.13:8545 NODE_IP=192.168.11.13 bash extract-enode-from-rpc.sh
set -euo pipefail
RPC_URL="${RPC_URL:-http://localhost:8545}"
NODE_IP="${NODE_IP:-}"
# Get node info via JSON-RPC
RESPONSE=$(curl -s -X POST \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"admin_nodeInfo","params":[],"id":1}' \
"${RPC_URL}")
# Check for errors
if echo "$RESPONSE" | python3 -c "import sys, json; data=json.load(sys.stdin); sys.exit(0 if 'error' in data else 1)" 2>/dev/null; then
ERROR_MSG=$(echo "$RESPONSE" | python3 -c "import sys, json; data=json.load(sys.stdin); print(data.get('error', {}).get('message', 'Unknown error'))" 2>/dev/null)
if [[ -n "$ERROR_MSG" ]]; then
echo "ERROR: JSON-RPC error: $ERROR_MSG" >&2
echo "NOTE: Ensure RPC is enabled with --rpc-http-api=ADMIN,NET" >&2
exit 1
fi
fi
# Extract enode
ENODE=$(echo "$RESPONSE" | python3 -c "import sys, json; print(json.load(sys.stdin).get('result', {}).get('enode', ''))" 2>/dev/null)
if [[ -z "$ENODE" ]] || [[ "$ENODE" == "None" ]] || [[ "$ENODE" == "null" ]]; then
echo "ERROR: Could not extract enode from admin_nodeInfo" >&2
echo "Response: $RESPONSE" >&2
exit 1
fi
# Extract and validate node ID
NODE_ID=$(echo "$ENODE" | sed 's|^enode://||' | cut -d'@' -f1 | tr '[:upper:]' '[:lower:]')
NODE_ID_LEN=${#NODE_ID}
if [[ "$NODE_ID_LEN" -ne 128 ]]; then
echo "ERROR: Invalid node ID length: $NODE_ID_LEN (expected 128)" >&2
echo "Enode: $ENODE" >&2
exit 1
fi
# Extract IP and port
ENODE_IP=$(echo "$ENODE" | sed 's|.*@||' | cut -d':' -f1)
ENODE_PORT=$(echo "$ENODE" | sed 's|.*:||')
# Verify IP if provided
if [[ -n "$NODE_IP" ]] && [[ "$ENODE_IP" != "$NODE_IP" ]]; then
echo "WARNING: Enode IP ($ENODE_IP) does not match expected IP ($NODE_IP)" >&2
echo "NOTE: Check --p2p-host and --nat-method configuration" >&2
fi
echo "$ENODE"


@@ -0,0 +1,91 @@
#!/bin/bash
# Generate Besu allowlist files from collected enodes
# Usage: bash besu-generate-allowlist.sh <collected-enodes.txt> [validator-ips...]
set -euo pipefail
COLLECTED_FILE="${1:-}"
OUTPUT_DIR="${OUTPUT_DIR:-.}"
if [[ -z "$COLLECTED_FILE" ]] || [[ ! -f "$COLLECTED_FILE" ]]; then
echo "Usage: $0 <collected-enodes.txt> [validator-ip1] [validator-ip2] ..." >&2
echo "Example: $0 collected-enodes.txt 192.168.11.13 192.168.11.14 192.168.11.15 192.168.11.16 192.168.11.18" >&2
exit 1
fi
shift || true
VALIDATOR_IPS=("$@")
# If no validator IPs provided, use first 5 entries
if [[ ${#VALIDATOR_IPS[@]} -eq 0 ]]; then
VALIDATOR_IPS=($(head -5 "$COLLECTED_FILE" | cut -d'|' -f1))
fi
GREEN='\033[0;32m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_info "Generating allowlist files..."
python3 << PYEOF
import json
import sys
collected_file = '$COLLECTED_FILE'
validator_ips = '${VALIDATOR_IPS[*]}'.split()
output_dir = '$OUTPUT_DIR'
# Read collected enodes
enodes_all = []
enodes_validators = []
with open(collected_file, 'r') as f:
for line in f:
line = line.strip()
if not line or '|' not in line:
continue
parts = line.split('|')
if len(parts) >= 2:
ip = parts[0]
enode = parts[1]
enodes_all.append(enode)
if ip in validator_ips:
enodes_validators.append(enode)
# Sort for determinism
enodes_all.sort()
enodes_validators.sort()
# Generate static-nodes.json (validators only)
static_nodes_file = f'{output_dir}/static-nodes.json'
with open(static_nodes_file, 'w') as f:
json.dump(enodes_validators, f, indent=2)
print(f"Generated {static_nodes_file} with {len(enodes_validators)} validators")
# Generate permissions-nodes.toml (all nodes)
permissions_file = f'{output_dir}/permissions-nodes.toml'
toml_content = f"""# Node Permissioning Configuration
# Lists nodes that are allowed to connect to this node
# Generated: {__import__('datetime').datetime.now().isoformat()}
# Total nodes: {len(enodes_all)}
nodes-allowlist=[
"""
for enode in enodes_all:
toml_content += f' "{enode}",\n'
toml_content = toml_content.rstrip(',\n') + '\n]'
with open(permissions_file, 'w') as f:
f.write(toml_content)
print(f"Generated {permissions_file} with {len(enodes_all)} nodes")
PYEOF
log_success "Files generated in: $OUTPUT_DIR"
log_info " - static-nodes.json (validators)"
log_info " - permissions-nodes.toml (all nodes)"


@@ -0,0 +1,121 @@
#!/bin/bash
# Validate all enodes in generated files
# Usage: bash besu-validate-allowlist.sh <static-nodes.json> <permissions-nodes.toml>
set -uo pipefail  # intentionally no -e: the embedded validators exit nonzero to report error counts
STATIC_NODES="${1:-static-nodes.json}"
PERMISSIONS_TOML="${2:-permissions-nodes.toml}"
ERRORS=0
validate_enode_file() {
local file="$1"
local file_type="$2"
echo "Validating $file_type: $file"
if [[ "$file_type" == "json" ]]; then
python3 << PYEOF
import json
import re
import sys
try:
with open('$file', 'r') as f:
enodes = json.load(f)
except Exception as e:
print(f"ERROR: Failed to read file: {e}", file=sys.stderr)
sys.exit(1)
errors = 0
node_ids_seen = set()
endpoints_seen = set()
for i, enode in enumerate(enodes):
match = re.match(r'enode://([0-9a-fA-F]+)@([0-9.]+):(\d+)', enode)
if not match:
print(f"ERROR: Invalid enode format at index {i}: {enode}", file=sys.stderr)
errors += 1
continue
node_id = match.group(1).lower()
endpoint = f"{match.group(2)}:{match.group(3)}"
if len(node_id) != 128:
print(f"ERROR: Node ID length {len(node_id)} at index {i} (expected 128): {node_id[:32]}...", file=sys.stderr)
errors += 1
continue
if not re.match(r'^[0-9a-f]{128}$', node_id):
print(f"ERROR: Invalid hex in node ID at index {i}: {node_id[:32]}...", file=sys.stderr)
errors += 1
continue
if node_id in node_ids_seen:
print(f"WARNING: Duplicate node ID at index {i}: {node_id[:32]}...", file=sys.stderr)
node_ids_seen.add(node_id)
if endpoint in endpoints_seen:
print(f"WARNING: Duplicate endpoint at index {i}: {endpoint}", file=sys.stderr)
endpoints_seen.add(endpoint)
sys.exit(errors)
PYEOF
ERRORS=$((ERRORS + $?))
else
python3 << PYEOF
import re
import sys
try:
with open('$file', 'r') as f:
content = f.read()
except Exception as e:
print(f"ERROR: Failed to read file: {e}", file=sys.stderr)
sys.exit(1)
enodes = re.findall(r'"enode://([0-9a-fA-F]+)@([0-9.]+):(\d+)"', content)
errors = 0
node_ids_seen = set()
endpoints_seen = set()
for i, (node_id_hex, ip, port) in enumerate(enodes):
node_id = node_id_hex.lower()
endpoint = f"{ip}:{port}"
if len(node_id) != 128:
print(f"ERROR: Node ID length {len(node_id)} at entry {i+1} (expected 128): {node_id[:32]}...", file=sys.stderr)
errors += 1
continue
if not re.match(r'^[0-9a-f]{128}$', node_id):
print(f"ERROR: Invalid hex in node ID at entry {i+1}: {node_id[:32]}...", file=sys.stderr)
errors += 1
continue
if node_id in node_ids_seen:
print(f"WARNING: Duplicate node ID at entry {i+1}: {node_id[:32]}...", file=sys.stderr)
node_ids_seen.add(node_id)
if endpoint in endpoints_seen:
print(f"WARNING: Duplicate endpoint at entry {i+1}: {endpoint}", file=sys.stderr)
endpoints_seen.add(endpoint)
sys.exit(errors)
PYEOF
ERRORS=$((ERRORS + $?))
fi
}
validate_enode_file "$STATIC_NODES" "json"
validate_enode_file "$PERMISSIONS_TOML" "toml"
if [[ $ERRORS -eq 0 ]]; then
echo "✓ All enodes validated successfully"
exit 0
else
echo "✗ Validation failed with $ERRORS errors"
exit 1
fi

scripts/besu-verify-peers.sh Executable file

@@ -0,0 +1,62 @@
#!/bin/bash
# Check peer connections on Besu node
# Usage: bash besu-verify-peers.sh <rpc-url>
# Example: bash besu-verify-peers.sh http://192.168.11.13:8545
set -euo pipefail
RPC_URL="${1:-http://localhost:8545}"
echo "Checking peers on: $RPC_URL"
echo ""
# Get node info
NODE_INFO=$(curl -s -X POST \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"admin_nodeInfo","params":[],"id":1}' \
"${RPC_URL}")
ENODE=$(echo "$NODE_INFO" | python3 -c "import sys, json; print(json.load(sys.stdin).get('result', {}).get('enode', 'ERROR'))" 2>/dev/null)
if [[ "$ENODE" == "ERROR" ]] || [[ -z "$ENODE" ]]; then
echo "ERROR: Could not get node info. Is RPC enabled with ADMIN API?" >&2
exit 1
fi
echo "This node's enode:"
echo "$ENODE"
echo ""
# Get peers
PEERS_RESPONSE=$(curl -s -X POST \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"admin_peers","params":[],"id":2}' \
"${RPC_URL}")
PEERS=$(echo "$PEERS_RESPONSE" | python3 -c "import sys, json; peers=json.load(sys.stdin).get('result', []); print(len(peers))" 2>/dev/null)
PEERS_LIST=$(echo "$PEERS_RESPONSE" | python3 -c "import sys, json; peers=json.load(sys.stdin).get('result', []); [print(f\" - {p.get('enode', 'unknown')}\") for p in peers]" 2>/dev/null)
echo "Connected peers: $PEERS"
echo ""
if [[ "$PEERS" == "0" ]]; then
echo "⚠️ NO PEERS CONNECTED"
echo ""
echo "Possible causes:"
echo "1. Other nodes not running"
echo "2. Firewall blocking port 30303"
echo "3. Malformed enodes in allowlist"
echo "4. Discovery disabled and static-nodes.json incorrect"
echo "5. Permissions enabled but allowlist missing this node"
echo "6. Network connectivity issues"
else
echo "Peer list:"
echo "$PEERS_LIST"
fi
# Check peer details if jq available
if [[ "$PEERS" != "0" ]] && command -v jq >/dev/null 2>&1; then
echo ""
echo "Peer details:"
echo "$PEERS_RESPONSE" | jq -r '.result[] | " - \(.id): \(.name) @ \(.network.remoteAddress)"'
fi

scripts/besu_balances_106_117.js Executable file

@@ -0,0 +1,307 @@
#!/usr/bin/env node
/**
* Query balances from Besu RPC nodes (VMID 115-117 by default)
*
* Note: Only RPC nodes (115-117) expose RPC endpoints.
* Validators (106-110) and sentries (111-114) don't have RPC enabled.
*
* Usage:
* RPC_URLS="http://192.168.11.23:8545,http://192.168.11.24:8545,http://192.168.11.25:8545" \
* WETH9_ADDRESS="0x..." \
* WETH10_ADDRESS="0x..." \
* node scripts/besu_balances_106_117.js
*
* Or use template (defaults to RPC nodes 115-117):
* RPC_TEMPLATE="http://192.168.11.{vmid}:8545" \
* node scripts/besu_balances_106_117.js
*/
import { ethers } from 'ethers';
// Configuration
const WALLET_ADDRESS = '0xa55A4B57A91561e9df5a883D4883Bd4b1a7C4882';
// RPC nodes are 115-117; validators (106-110) and sentries (111-114) don't expose RPC
const VMID_START = process.env.VMID_START ? parseInt(process.env.VMID_START, 10) : 115;
const VMID_END = process.env.VMID_END ? parseInt(process.env.VMID_END, 10) : 117;
const CONCURRENCY_LIMIT = 4;
const REQUEST_TIMEOUT = 15000; // 15 seconds
// ERC-20 minimal ABI
const ERC20_ABI = [
'function balanceOf(address owner) view returns (uint256)',
'function decimals() view returns (uint8)',
'function symbol() view returns (string)'
];
// Default token addresses
const WETH9_ADDRESS = process.env.WETH9_ADDRESS || '0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2';
const WETH10_ADDRESS = process.env.WETH10_ADDRESS || null;
// Maps node IP to VMID (used to infer the VMID from an RPC URL)
const VMID_TO_IP = {
'192.168.11.13': 106,
'192.168.11.14': 107,
'192.168.11.15': 108,
'192.168.11.16': 109,
'192.168.11.18': 110,
'192.168.11.19': 111,
'192.168.11.20': 112,
'192.168.11.21': 113,
'192.168.11.22': 114,
'192.168.11.23': 115,
'192.168.11.24': 116,
'192.168.11.25': 117,
};
// Maps VMID to node IP (used to build default RPC URLs)
const IP_TO_VMID = {
106: '192.168.11.13',
107: '192.168.11.14',
108: '192.168.11.15',
109: '192.168.11.16',
110: '192.168.11.18',
111: '192.168.11.19',
112: '192.168.11.20',
113: '192.168.11.21',
114: '192.168.11.22',
115: '192.168.11.23',
116: '192.168.11.24',
117: '192.168.11.25',
};
// RPC endpoint configuration
function getRpcUrls() {
if (process.env.RPC_URLS) {
return process.env.RPC_URLS.split(',').map(url => url.trim());
}
const template = process.env.RPC_TEMPLATE;
const urls = [];
for (let vmid = VMID_START; vmid <= VMID_END; vmid++) {
if (template && template.includes('{vmid}')) {
// Use template if provided
urls.push(template.replace('{vmid}', vmid.toString()));
} else {
// Use actual IP from mapping (default behavior)
const ip = IP_TO_VMID[vmid];
if (ip) {
urls.push(`http://${ip}:8545`);
} else {
// Fallback to template or direct VMID
const fallbackTemplate = template || 'http://192.168.11.{vmid}:8545';
urls.push(fallbackTemplate.replace('{vmid}', vmid.toString()));
}
}
}
return urls;
}
// Get VMID from URL
function getVmidFromUrl(url) {
// Try IP mapping first
const ipMatch = url.match(/(\d+\.\d+\.\d+\.\d+)/);
if (ipMatch && VMID_TO_IP[ipMatch[1]]) {
return VMID_TO_IP[ipMatch[1]];
}
// Fallback to pattern matching
const match = url.match(/(?:\.|:)(\d{3})(?::|\/)/);
return match ? parseInt(match[1]) : null;
}
// Format balance with decimals
function formatBalance(balance, decimals, symbol) {
const formatted = ethers.formatUnits(balance, decimals);
return `${formatted} ${symbol}`;
}
// Query single RPC endpoint
async function queryRpc(url, walletAddress) {
const vmid = getVmidFromUrl(url) || 'unknown';
const result = {
vmid,
url,
success: false,
chainId: null,
blockNumber: null,
ethBalance: null,
ethBalanceWei: null,
weth9: null,
weth10: null,
errors: []
};
try {
// Create provider with timeout
    const provider = new ethers.JsonRpcProvider(url, undefined, {
      staticNetwork: false,
      batchMaxCount: 1,
      batchMaxSize: 250000
    });
    // Set timeout: an AbortController alone does not cancel in-flight
    // provider calls, so race the health checks against a rejecting timer.
    let timeoutId;
    const timeout = new Promise((_, reject) => {
      timeoutId = setTimeout(() => reject(new Error(`RPC timeout after ${REQUEST_TIMEOUT}ms`)), REQUEST_TIMEOUT);
    });
    try {
      // Health checks (raced against the timeout)
      const health = Promise.all([
        provider.getNetwork().then(n => Number(n.chainId)),
        provider.getBlockNumber()
      ]);
      health.catch(() => {}); // avoid an unhandled rejection if the timeout wins
      const [chainId, blockNumber] = await Promise.race([health, timeout]);
      clearTimeout(timeoutId);
result.chainId = chainId;
result.blockNumber = blockNumber;
// Query ETH balance
const ethBalance = await provider.getBalance(walletAddress);
result.ethBalanceWei = ethBalance.toString();
result.ethBalance = ethers.formatEther(ethBalance);
// Query WETH9 balance
try {
const weth9Contract = new ethers.Contract(WETH9_ADDRESS, ERC20_ABI, provider);
const [balance, decimals, symbol] = await Promise.all([
weth9Contract.balanceOf(walletAddress),
weth9Contract.decimals(),
weth9Contract.symbol()
]);
result.weth9 = {
balance: balance.toString(),
decimals: Number(decimals),
symbol: symbol,
formatted: formatBalance(balance, decimals, symbol)
};
} catch (err) {
result.errors.push(`WETH9: ${err.message}`);
}
// Query WETH10 balance (if address provided)
if (WETH10_ADDRESS) {
try {
const weth10Contract = new ethers.Contract(WETH10_ADDRESS, ERC20_ABI, provider);
const [balance, decimals, symbol] = await Promise.all([
weth10Contract.balanceOf(walletAddress),
weth10Contract.decimals(),
weth10Contract.symbol()
]);
result.weth10 = {
balance: balance.toString(),
decimals: Number(decimals),
symbol: symbol,
formatted: formatBalance(balance, decimals, symbol)
};
} catch (err) {
result.errors.push(`WETH10: ${err.message}`);
}
}
result.success = true;
} catch (err) {
clearTimeout(timeoutId);
throw err;
}
} catch (err) {
result.errors.push(err.message);
result.success = false;
}
return result;
}
// Process URLs with a concurrency limit
async function processWithConcurrencyLimit(urls, walletAddress, limit) {
  const results = [];
  const executing = new Set();
  for (const url of urls) {
    const promise = queryRpc(url, walletAddress).then(result => {
      results.push(result);
      executing.delete(promise); // remove the promise that actually settled
    });
    executing.add(promise);
    if (executing.size >= limit) {
      // Wait for any in-flight query to finish before starting another
      await Promise.race(executing);
    }
  }
  await Promise.all(executing);
  const rank = r => (r.vmid === 'unknown' ? Number.MAX_SAFE_INTEGER : r.vmid);
  return results.sort((a, b) => rank(a) - rank(b));
}
// Format and print results
function printResults(results) {
let successCount = 0;
for (const result of results) {
console.log(`VMID: ${result.vmid}`);
console.log(`RPC: ${result.url}`);
if (result.success) {
successCount++;
console.log(`chainId: ${result.chainId}`);
console.log(`block: ${result.blockNumber}`);
console.log(`ETH: ${result.ethBalance} (wei: ${result.ethBalanceWei})`);
if (result.weth9) {
console.log(`WETH9: ${result.weth9.formatted} (raw: ${result.weth9.balance})`);
} else {
console.log(`WETH9: error (see errors below)`);
}
if (WETH10_ADDRESS) {
if (result.weth10) {
console.log(`WETH10: ${result.weth10.formatted} (raw: ${result.weth10.balance})`);
} else {
console.log(`WETH10: error (see errors below)`);
}
} else {
console.log(`WETH10: skipped (missing address)`);
}
if (result.errors.length > 0) {
console.log(`Errors: ${result.errors.join('; ')}`);
}
} else {
console.log(`Status: FAILED`);
if (result.errors.length > 0) {
console.log(`Errors: ${result.errors.join('; ')}`);
}
}
console.log('---');
}
return successCount;
}
// Main execution
async function main() {
console.log('Querying balances from Besu RPC nodes...');
console.log(`Wallet: ${WALLET_ADDRESS}`);
console.log(`WETH9: ${WETH9_ADDRESS}`);
console.log(`WETH10: ${WETH10_ADDRESS || 'not set (will skip)'}`);
console.log('');
const rpcUrls = getRpcUrls();
console.log(`Checking ${rpcUrls.length} RPC endpoints (concurrency: ${CONCURRENCY_LIMIT})...`);
console.log('');
const results = await processWithConcurrencyLimit(rpcUrls, WALLET_ADDRESS, CONCURRENCY_LIMIT);
const successCount = printResults(results);
console.log('');
console.log(`Summary: ${successCount} out of ${results.length} RPC endpoints succeeded`);
// Exit code: 0 if at least one succeeded, 1 if all failed
process.exit(successCount > 0 ? 0 : 1);
}
main().catch(err => {
console.error('Fatal error:', err);
process.exit(1);
});


@@ -0,0 +1,129 @@
#!/bin/bash
# Simplified Deployment Status Check
# Shows what containers are deployed and their status
set +e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/load-env.sh"
load_env_file
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
PROXMOX_PORT="${PROXMOX_PORT:-8006}"
echo ""
echo "╔════════════════════════════════════════════════════════════════╗"
echo "║ Deployment Status - ${PROXMOX_HOST}"
echo "╚════════════════════════════════════════════════════════════════╝"
echo ""
# Get first node
NODES_RESPONSE=$(curl -k -s -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json/nodes" 2>&1)
FIRST_NODE=$(echo "$NODES_RESPONSE" | python3 -c "import sys, json; print(json.load(sys.stdin)['data'][0]['node'])" 2>/dev/null)
if [ -z "$FIRST_NODE" ]; then
    echo "Error: could not determine a Proxmox node from ${PROXMOX_HOST} (check credentials and connectivity)"
    exit 1
fi
# Get containers
CONTAINERS_RESPONSE=$(curl -k -s -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json/nodes/${FIRST_NODE}/lxc" 2>&1)
# Parse and display
echo "$CONTAINERS_RESPONSE" | python3 << 'PYEOF'
import sys, json
try:
data = json.load(sys.stdin)['data']
if len(data) == 0:
print("No containers found")
sys.exit(0)
# Categorize containers
validators = []
sentries = []
rpc_nodes = []
services = []
monitoring = []
explorer = []
hyperledger = []
other = []
for container in data:
vmid = container['vmid']
name = container.get('name', 'N/A')
status = container.get('status', 'unknown')
if 106 <= vmid <= 109:
validators.append((vmid, name, status))
elif 110 <= vmid <= 114:
sentries.append((vmid, name, status))
elif 115 <= vmid <= 119:
rpc_nodes.append((vmid, name, status))
elif 120 <= vmid <= 129:
services.append((vmid, name, status))
elif 130 <= vmid <= 139:
monitoring.append((vmid, name, status))
elif 140 <= vmid <= 149:
explorer.append((vmid, name, status))
elif 150 <= vmid <= 153:
hyperledger.append((vmid, name, status))
else:
other.append((vmid, name, status))
    # Display by category
    def print_category(title, containers):
        if containers:
            print(f"\n{title} ({len(containers)}):")
            print(f"  {'VMID':<6} {'Name':<30} {'Status':<10}")
            print("  " + "-" * 48)
            for vmid, name, status in sorted(containers):
                if status == "running":
                    status_display = f"\033[32m{status}\033[0m"
                elif status == "stopped":
                    status_display = f"\033[31m{status}\033[0m"
                else:
                    status_display = f"\033[33m{status}\033[0m"
                print(f"  {vmid:<6} {name:<30} {status_display}")
    print_category("Validators", validators)
    print_category("Sentries", sentries)
    print_category("RPC Nodes", rpc_nodes)
    print_category("Services", services)
    print_category("Monitoring", monitoring)
    print_category("Explorer", explorer)
    print_category("Hyperledger", hyperledger)
    if other:
        print(f"\nOther Containers ({len(other)}):")
        for vmid, name, status in sorted(other):
            if status == "running":
                status_display = f"\033[32m{status}\033[0m"
            elif status == "stopped":
                status_display = f"\033[31m{status}\033[0m"
            else:
                status_display = f"\033[33m{status}\033[0m"
            print(f"  VMID {vmid}: {name} ({status_display})")
# Summary
print("\n" + "=" * 60)
print("Summary")
print("=" * 60)
print(f"Total Containers: {len(data)}")
running = sum(1 for c in data if c.get('status') == 'running')
stopped = sum(1 for c in data if c.get('status') == 'stopped')
print(f" Running: {running}")
print(f" Stopped: {stopped}")
print(f"\nSMOM-DBIS-138 Deployment:")
print(f" Validators: {len(validators)}/4")
print(f" Sentries: {len(sentries)}/3")
print(f" RPC Nodes: {len(rpc_nodes)}/3")
print(f" Services: {len(services)}")
print(f" Monitoring: {len(monitoring)}")
print(f" Explorer: {len(explorer)}")
print(f" Hyperledger: {len(hyperledger)}")
if len(validators) == 0 and len(sentries) == 0 and len(rpc_nodes) == 0:
print("\n⚠ No SMOM-DBIS-138 containers found")
print("To deploy: ./scripts/deploy-to-proxmox-host.sh")
else:
print("\n✅ SMOM-DBIS-138 deployment found")
except Exception as e:
print(f"Error: {e}")
sys.exit(1)
PYEOF
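The VMID bucketing used by both status scripts boils down to a range lookup. A minimal sketch, with the range boundaries copied from the script above (the function name is illustrative):

```python
def categorize_vmid(vmid):
    """Return the deployment category for a VMID, per the ranges above."""
    ranges = [
        (106, 109, "validator"),
        (110, 114, "sentry"),
        (115, 119, "rpc"),
        (120, 129, "service"),
        (130, 139, "monitoring"),
        (140, 149, "explorer"),
        (150, 153, "hyperledger"),
    ]
    for lo, hi, label in ranges:
        if lo <= vmid <= hi:
            return label
    return "other"

print(categorize_vmid(107), categorize_vmid(116), categorize_vmid(200))
```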

scripts/check-deployments.sh Executable file

@@ -0,0 +1,150 @@
#!/bin/bash
# Check Deployment Status on Proxmox Host
# Shows what containers are deployed and their status
set +e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/load-env.sh"
load_env_file
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
PROXMOX_PORT="${PROXMOX_PORT:-8006}"
echo ""
echo "╔════════════════════════════════════════════════════════════════╗"
echo "║ Deployment Status - ${PROXMOX_HOST}"
echo "╚════════════════════════════════════════════════════════════════╝"
echo ""
# Get first node
NODES_RESPONSE=$(curl -k -s -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json/nodes" 2>&1)
FIRST_NODE=$(echo "$NODES_RESPONSE" | python3 -c "import sys, json; print(json.load(sys.stdin)['data'][0]['node'])" 2>/dev/null)
if [ -z "$FIRST_NODE" ]; then
    echo "Error: could not determine a Proxmox node from ${PROXMOX_HOST} (check credentials and connectivity)"
    exit 1
fi
# Get containers
CONTAINERS_RESPONSE=$(curl -k -s -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json/nodes/${FIRST_NODE}/lxc" 2>&1)
# Check if response is valid JSON
if ! echo "$CONTAINERS_RESPONSE" | python3 -c "import sys, json; json.load(sys.stdin)" 2>/dev/null; then
echo "Error: Failed to retrieve containers"
echo "Response preview: $(echo "$CONTAINERS_RESPONSE" | head -c 200)"
exit 1
fi
# Parse and display using Python
# Pipe the JSON via stdin with a quoted heredoc; interpolating the response
# into an unquoted heredoc breaks on embedded quotes and backslashes.
echo "$CONTAINERS_RESPONSE" | python3 << 'PYEOF'
import sys, json
try:
    data = json.load(sys.stdin)['data']
if len(data) == 0:
print("No containers found")
print("\nTo deploy:")
print(" ./scripts/deploy-to-proxmox-host.sh")
sys.exit(0)
# Categorize containers
validators = []
sentries = []
rpc_nodes = []
services = []
monitoring = []
explorer = []
hyperledger = []
other = []
for container in data:
vmid = container['vmid']
name = container.get('name', 'N/A')
status = container.get('status', 'unknown')
if 106 <= vmid <= 109:
validators.append((vmid, name, status))
elif 110 <= vmid <= 114:
sentries.append((vmid, name, status))
elif 115 <= vmid <= 119:
rpc_nodes.append((vmid, name, status))
elif 120 <= vmid <= 129:
services.append((vmid, name, status))
elif 130 <= vmid <= 139:
monitoring.append((vmid, name, status))
elif 140 <= vmid <= 149:
explorer.append((vmid, name, status))
elif 150 <= vmid <= 153:
hyperledger.append((vmid, name, status))
else:
other.append((vmid, name, status))
# Display by category
def print_category(title, containers):
if containers:
print(f"\n{title} ({len(containers)}):")
print(f" {'VMID':<6} {'Name':<30} {'Status':<10}")
print(" " + "-" * 48)
for vmid, name, status in sorted(containers):
if status == "running":
status_display = f"\033[32m{status}\033[0m"
elif status == "stopped":
status_display = f"\033[31m{status}\033[0m"
else:
status_display = f"\033[33m{status}\033[0m"
print(f" {vmid:<6} {name:<30} {status_display}")
print_category("Validators", validators)
print_category("Sentries", sentries)
print_category("RPC Nodes", rpc_nodes)
print_category("Services", services)
print_category("Monitoring", monitoring)
print_category("Explorer", explorer)
print_category("Hyperledger", hyperledger)
if other:
print(f"\nOther Containers ({len(other)}):")
for vmid, name, status in sorted(other):
if status == "running":
status_display = f"\033[32m{status}\033[0m"
elif status == "stopped":
status_display = f"\033[31m{status}\033[0m"
else:
status_display = f"\033[33m{status}\033[0m"
print(f" VMID {vmid}: {name} ({status_display})")
# Summary
print("\n" + "=" * 60)
print("Summary")
print("=" * 60)
print(f"Total Containers: {len(data)}")
running = sum(1 for c in data if c.get('status') == 'running')
stopped = sum(1 for c in data if c.get('status') == 'stopped')
print(f" Running: {running}")
print(f" Stopped: {stopped}")
print(f"\nSMOM-DBIS-138 Deployment:")
print(f" Validators: {len(validators)}/4")
print(f" Sentries: {len(sentries)}/3")
print(f" RPC Nodes: {len(rpc_nodes)}/3")
print(f" Services: {len(services)}")
print(f" Monitoring: {len(monitoring)}")
print(f" Explorer: {len(explorer)}")
print(f" Hyperledger: {len(hyperledger)}")
if len(validators) == 0 and len(sentries) == 0 and len(rpc_nodes) == 0:
print("\n⚠ No SMOM-DBIS-138 containers found")
print("To deploy: ./scripts/deploy-to-proxmox-host.sh")
else:
print("\n✅ SMOM-DBIS-138 deployment found")
except Exception as e:
print(f"Error parsing response: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
PYEOF

scripts/check-prerequisites.sh Executable file

@@ -0,0 +1,346 @@
#!/bin/bash
# Comprehensive Prerequisites Check Script
# Validates all prerequisites for the Proxmox workspace
set +e # Don't exit on errors - collect all results
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
# Status tracking
PASSED=0
FAILED=0
WARNINGS=0
pass() {
    echo -e "${GREEN}✓${NC} $1"
    ((PASSED++))
}
fail() {
    echo -e "${RED}✗${NC} $1"
    ((FAILED++))
}
warn() {
    echo -e "${YELLOW}⚠️${NC} $1"
    ((WARNINGS++))
}
info() {
    echo -e "${BLUE}ℹ${NC} $1"
}
section() {
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${CYAN}$1${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
}
echo ""
echo -e "${BLUE}╔════════════════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║${NC}            Prerequisites Check for Proxmox Workspace                   ${BLUE}║${NC}"
echo -e "${BLUE}╚════════════════════════════════════════════════════════════════════════╝${NC}"
echo ""
# ============================================================================
# SECTION 1: SYSTEM PREREQUISITES
# ============================================================================
section "1. SYSTEM PREREQUISITES"
# Check Node.js
if command -v node &> /dev/null; then
NODE_VERSION=$(node --version | sed 's/v//')
NODE_MAJOR=$(echo "$NODE_VERSION" | cut -d. -f1)
if [ "$NODE_MAJOR" -ge 16 ]; then
pass "Node.js installed: v$NODE_VERSION (requires 16+)"
else
fail "Node.js version too old: v$NODE_VERSION (requires 16+)"
fi
else
fail "Node.js not installed (requires 16+)"
fi
# Check pnpm
if command -v pnpm &> /dev/null; then
PNPM_VERSION=$(pnpm --version)
PNPM_MAJOR=$(echo "$PNPM_VERSION" | cut -d. -f1)
if [ "$PNPM_MAJOR" -ge 8 ]; then
pass "pnpm installed: $PNPM_VERSION (requires 8+)"
else
fail "pnpm version too old: $PNPM_VERSION (requires 8+)"
fi
else
fail "pnpm not installed (requires 8+)"
fi
# Check Git
if command -v git &> /dev/null; then
GIT_VERSION=$(git --version | awk '{print $3}')
pass "Git installed: $GIT_VERSION"
else
fail "Git not installed"
fi
# Check required tools
REQUIRED_TOOLS=("curl" "jq" "bash")
for tool in "${REQUIRED_TOOLS[@]}"; do
if command -v "$tool" &> /dev/null; then
pass "$tool installed"
else
fail "$tool not found (required)"
fi
done
# ============================================================================
# SECTION 2: WORKSPACE STRUCTURE
# ============================================================================
section "2. WORKSPACE STRUCTURE"
# Check project root
if [ -d "$PROJECT_ROOT" ]; then
pass "Project root exists: $PROJECT_ROOT"
else
fail "Project root not found: $PROJECT_ROOT"
exit 1
fi
# Check package.json
if [ -f "$PROJECT_ROOT/package.json" ]; then
pass "package.json exists"
else
fail "package.json not found"
fi
# Check pnpm-workspace.yaml
if [ -f "$PROJECT_ROOT/pnpm-workspace.yaml" ]; then
pass "pnpm-workspace.yaml exists"
else
fail "pnpm-workspace.yaml not found"
fi
# Check submodules
if [ -d "$PROJECT_ROOT/mcp-proxmox" ]; then
if [ -f "$PROJECT_ROOT/mcp-proxmox/index.js" ]; then
pass "mcp-proxmox submodule exists and has index.js"
else
warn "mcp-proxmox submodule exists but index.js not found"
fi
else
fail "mcp-proxmox submodule not found"
fi
if [ -d "$PROJECT_ROOT/ProxmoxVE" ]; then
pass "ProxmoxVE submodule exists"
else
warn "ProxmoxVE submodule not found"
fi
if [ -d "$PROJECT_ROOT/smom-dbis-138-proxmox" ]; then
pass "smom-dbis-138-proxmox submodule exists"
else
warn "smom-dbis-138-proxmox submodule not found"
fi
# Check directory structure
[ -d "$PROJECT_ROOT/scripts" ] && pass "scripts/ directory exists" || fail "scripts/ directory not found"
[ -d "$PROJECT_ROOT/docs" ] && pass "docs/ directory exists" || fail "docs/ directory not found"
# ============================================================================
# SECTION 3: DEPENDENCIES
# ============================================================================
section "3. DEPENDENCIES"
# Check if node_modules exists
if [ -d "$PROJECT_ROOT/node_modules" ]; then
pass "node_modules exists (dependencies installed)"
# Check MCP server dependencies
if [ -d "$PROJECT_ROOT/mcp-proxmox/node_modules" ]; then
pass "MCP server dependencies installed"
else
warn "MCP server dependencies not installed (run: pnpm install)"
fi
# Check frontend dependencies
if [ -d "$PROJECT_ROOT/ProxmoxVE/frontend/node_modules" ]; then
pass "Frontend dependencies installed"
else
warn "Frontend dependencies not installed (run: pnpm install)"
fi
else
fail "node_modules not found (run: pnpm install)"
fi
# ============================================================================
# SECTION 4: CONFIGURATION FILES
# ============================================================================
section "4. CONFIGURATION FILES"
# Check .env file
ENV_FILE="$HOME/.env"
if [ -f "$ENV_FILE" ]; then
pass ".env file exists: $ENV_FILE"
# Check for required variables
    source "$SCRIPT_DIR/load-env.sh" 2>/dev/null || true
load_env_file 2>/dev/null || true
[ -n "${PROXMOX_HOST:-}" ] && [ "${PROXMOX_HOST}" != "your-proxmox-ip-or-hostname" ] && \
pass "PROXMOX_HOST configured: $PROXMOX_HOST" || \
warn "PROXMOX_HOST not configured or using placeholder"
[ -n "${PROXMOX_USER:-}" ] && [ "${PROXMOX_USER}" != "your-username" ] && \
pass "PROXMOX_USER configured: $PROXMOX_USER" || \
warn "PROXMOX_USER not configured or using placeholder"
[ -n "${PROXMOX_TOKEN_NAME:-}" ] && [ "${PROXMOX_TOKEN_NAME}" != "your-token-name" ] && \
pass "PROXMOX_TOKEN_NAME configured: $PROXMOX_TOKEN_NAME" || \
warn "PROXMOX_TOKEN_NAME not configured or using placeholder"
if [ -n "${PROXMOX_TOKEN_VALUE:-}" ] && [ "${PROXMOX_TOKEN_VALUE}" != "your-token-secret-here" ] && [ "${PROXMOX_TOKEN_VALUE}" != "your-token-secret" ]; then
pass "PROXMOX_TOKEN_VALUE configured (secret present)"
else
fail "PROXMOX_TOKEN_VALUE not configured or using placeholder"
fi
else
fail ".env file not found: $ENV_FILE"
info "Create it with: ./scripts/setup.sh"
fi
# Check Claude Desktop config
CLAUDE_CONFIG="$HOME/.config/Claude/claude_desktop_config.json"
if [ -f "$CLAUDE_CONFIG" ]; then
pass "Claude Desktop config exists: $CLAUDE_CONFIG"
# Check if MCP server is configured
if grep -q "proxmox" "$CLAUDE_CONFIG" 2>/dev/null; then
pass "MCP server configured in Claude Desktop"
else
warn "MCP server not found in Claude Desktop config"
fi
else
warn "Claude Desktop config not found: $CLAUDE_CONFIG"
info "Create it with: ./scripts/setup.sh"
fi
# Check deployment configs
if [ -d "$PROJECT_ROOT/smom-dbis-138-proxmox/config" ]; then
if [ -f "$PROJECT_ROOT/smom-dbis-138-proxmox/config/proxmox.conf" ]; then
pass "Deployment proxmox.conf exists"
else
warn "Deployment proxmox.conf not found (example exists)"
fi
if [ -f "$PROJECT_ROOT/smom-dbis-138-proxmox/config/network.conf" ]; then
pass "Deployment network.conf exists"
else
warn "Deployment network.conf not found (example exists)"
fi
fi
# ============================================================================
# SECTION 5: SCRIPTS
# ============================================================================
section "5. SCRIPTS"
REQUIRED_SCRIPTS=(
"setup.sh"
"complete-setup.sh"
"verify-setup.sh"
"load-env.sh"
"test-connection.sh"
"validate-ml110-deployment.sh"
)
for script in "${REQUIRED_SCRIPTS[@]}"; do
SCRIPT_PATH="$PROJECT_ROOT/scripts/$script"
if [ -f "$SCRIPT_PATH" ]; then
if [ -x "$SCRIPT_PATH" ]; then
pass "$script exists and is executable"
else
warn "$script exists but is not executable"
fi
else
fail "$script not found"
fi
done
# ============================================================================
# SECTION 6: PROXMOX CONNECTION (if configured)
# ============================================================================
section "6. PROXMOX CONNECTION"
if [ -f "$ENV_FILE" ]; then
    source "$SCRIPT_DIR/load-env.sh" 2>/dev/null || true
load_env_file 2>/dev/null || true
if [ -n "${PROXMOX_HOST:-}" ] && [ -n "${PROXMOX_TOKEN_VALUE:-}" ] && \
[ "${PROXMOX_TOKEN_VALUE}" != "your-token-secret-here" ] && \
[ "${PROXMOX_TOKEN_VALUE}" != "your-token-secret" ]; then
info "Testing connection to ${PROXMOX_HOST}..."
API_RESPONSE=$(curl -k -s -w "\n%{http_code}" -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT:-8006}/api2/json/version" 2>&1)
HTTP_CODE=$(echo "$API_RESPONSE" | tail -1)
if [ "$HTTP_CODE" = "200" ]; then
VERSION=$(echo "$API_RESPONSE" | sed '$d' | python3 -c "import sys, json; print(json.load(sys.stdin)['data']['version'])" 2>/dev/null || echo "unknown")
pass "Proxmox API connection successful (version: $VERSION)"
else
fail "Proxmox API connection failed (HTTP $HTTP_CODE)"
fi
else
warn "Cannot test Proxmox connection - credentials not fully configured"
fi
else
warn "Cannot test Proxmox connection - .env file not found"
fi
# ============================================================================
# SUMMARY
# ============================================================================
section "PREREQUISITES SUMMARY"
echo -e "${CYAN}Results:${NC}"
echo -e " ${GREEN}Passed:${NC} $PASSED"
echo -e " ${RED}Failed:${NC} $FAILED"
echo -e " ${YELLOW}Warnings:${NC} $WARNINGS"
echo ""
if [ $FAILED -eq 0 ]; then
echo -e "${GREEN}✅ All prerequisites met!${NC}"
echo ""
echo "Next steps:"
echo " 1. Run deployment validation: ./scripts/validate-ml110-deployment.sh"
echo " 2. Start MCP server: pnpm mcp:start"
exit 0
else
echo -e "${RED}❌ Some prerequisites are missing${NC}"
echo ""
echo "Please fix the failures above before proceeding."
echo ""
echo "Quick fixes:"
[ $FAILED -gt 0 ] && echo " - Run: ./scripts/complete-setup.sh"
exit 1
fi
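All of the API probes above authenticate with a Proxmox API token passed in a single `Authorization` header. A minimal sketch of building that header (field names follow the curl calls in the script; the credential values are placeholders):

```python
def pve_auth_header(user, token_name, token_value):
    """Build the PVEAPIToken Authorization header used by the curl calls above."""
    return {"Authorization": f"PVEAPIToken={user}!{token_name}={token_value}"}

# Placeholder credentials, matching the PROXMOX_* variables in the script
header = pve_auth_header("root@pam", "mytoken", "secret")
print(header["Authorization"])
```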

scripts/check-vmid-consistency.sh Executable file

@@ -0,0 +1,227 @@
#!/usr/bin/env bash
# Check VMID consistency across all documentation and scripts
# Validates that all references match the expected ranges
set -euo pipefail
# Resolve the project root relative to this script instead of hardcoding it
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
cd "$PROJECT_ROOT"
# Expected VMID ranges
VALIDATORS_START=1000
VALIDATORS_END=1004
VALIDATORS_COUNT=5
SENTRIES_START=1500
SENTRIES_END=1503
SENTRIES_COUNT=4
RPC_START=2500
RPC_END=2502
RPC_COUNT=3
echo "=== VMID Consistency Check ==="
echo ""
echo "Expected Ranges:"
echo " Validators: $VALIDATORS_START-$VALIDATORS_END ($VALIDATORS_COUNT nodes)"
echo " Sentries: $SENTRIES_START-$SENTRIES_END ($SENTRIES_COUNT nodes)"
echo " RPC: $RPC_START-$RPC_END ($RPC_COUNT nodes)"
echo ""
errors=0
warnings=0
# Check for validator VMID inconsistencies
echo "=== Checking Validator VMIDs ==="
echo ""
# Find all validator VMID references
validator_refs=$(grep -rE "\b(100[0-4]|1000-1004|validator.*VMID|VALIDATOR.*=)" \
    --include="*.md" --include="*.sh" --include="*.js" --include="*.py" \
    --include="*.conf" --include="*.example" \
    smom-dbis-138-proxmox/ 2>/dev/null | grep -v ".git" | cut -d: -f1 | sort -u || true)
if [[ -n "$validator_refs" ]]; then
for file in $validator_refs; do
# Check if file contains correct validator ranges
if grep -qE "\b(1000-1004|1000.*1004|100[0-4])\b" "$file" 2>/dev/null; then
# Check for incorrect validator ranges
if grep -qE "\b(106|107|108|109|110|1110|1100-1104)\b" "$file" 2>/dev/null && ! grep -qE "\b(1000|1001|1002|1003|1004)\b" "$file" 2>/dev/null; then
echo "  ❌ $file - Contains OLD validator VMIDs only"
errors=$((errors + 1))
elif grep -qE "\b(106|107|108|109|110)\b" "$file" 2>/dev/null; then
echo " ⚠️ $file - Contains BOTH old and new validator VMIDs"
warnings=$((warnings + 1))
else
echo "  ✅ $file - Validator VMIDs look correct"
fi
fi
done
fi
echo ""
echo "=== Checking Sentry VMIDs ==="
echo ""
# Find all sentry VMID references
sentry_refs=$(grep -rE "\b(150[0-3]|1500-1503|sentry.*VMID|SENTRY.*=)" \
    --include="*.md" --include="*.sh" --include="*.js" --include="*.py" \
    --include="*.conf" --include="*.example" \
    smom-dbis-138-proxmox/ 2>/dev/null | grep -v ".git" | cut -d: -f1 | sort -u || true)
if [[ -n "$sentry_refs" ]]; then
for file in $sentry_refs; do
# Check if file contains correct sentry ranges
if grep -qE "\b(1500-1503|150[0-3])\b" "$file" 2>/dev/null; then
# Check for incorrect sentry ranges
if grep -qE "\b(111|112|113|114|1110|1110-1113)\b" "$file" 2>/dev/null && ! grep -qE "\b(1500|1501|1502|1503)\b" "$file" 2>/dev/null; then
echo "  ❌ $file - Contains OLD sentry VMIDs only"
errors=$((errors + 1))
elif grep -qE "\b(111|112|113|114)\b" "$file" 2>/dev/null; then
echo " ⚠️ $file - Contains BOTH old and new sentry VMIDs"
warnings=$((warnings + 1))
else
echo "  ✅ $file - Sentry VMIDs look correct"
fi
fi
done
fi
echo ""
echo "=== Checking RPC VMIDs ==="
echo ""
# Find all RPC VMID references
rpc_refs=$(grep -rE "\b(250[0-2]|2500-2502|rpc.*VMID|RPC.*=)" \
    --include="*.md" --include="*.sh" --include="*.js" --include="*.py" \
    --include="*.conf" --include="*.example" \
    smom-dbis-138-proxmox/ 2>/dev/null | grep -v ".git" | cut -d: -f1 | sort -u || true)
if [[ -n "$rpc_refs" ]]; then
for file in $rpc_refs; do
# Check if file contains correct RPC ranges
if grep -qE "\b(2500-2502|250[0-2])\b" "$file" 2>/dev/null; then
# Check for incorrect RPC ranges
if grep -qE "\b(115|116|117|1120|1120-1122)\b" "$file" 2>/dev/null && ! grep -qE "\b(2500|2501|2502)\b" "$file" 2>/dev/null; then
echo "  ❌ $file - Contains OLD RPC VMIDs only"
errors=$((errors + 1))
elif grep -qE "\b(115|116|117)\b" "$file" 2>/dev/null; then
echo " ⚠️ $file - Contains BOTH old and new RPC VMIDs"
warnings=$((warnings + 1))
else
echo "  ✅ $file - RPC VMIDs look correct"
fi
fi
done
fi
echo ""
echo "=== Checking Count Consistency ==="
echo ""
# Check for count mismatches
# '|| true' keeps set -e/pipefail from aborting when grep finds no matches
validator_count_refs=$(grep -rE "(VALIDATOR.*COUNT|validators?.*count|5.*validator)" \
    --include="*.md" --include="*.sh" --include="*.conf" \
    smom-dbis-138-proxmox/ 2>/dev/null | grep -v ".git" || true)
sentry_count_refs=$(grep -rE "(SENTRY.*COUNT|sentries?.*count|4.*sentry)" \
    --include="*.md" --include="*.sh" --include="*.conf" \
    smom-dbis-138-proxmox/ 2>/dev/null | grep -v ".git" || true)
rpc_count_refs=$(grep -rE "(RPC.*COUNT|rpc.*count|3.*rpc)" \
    --include="*.md" --include="*.sh" --include="*.conf" \
    smom-dbis-138-proxmox/ 2>/dev/null | grep -v ".git" || true)
# Check validator counts (read line by line; a plain for-loop would split on spaces)
while IFS= read -r line; do
    [[ -z "$line" ]] && continue
    file=$(echo "$line" | cut -d: -f1)
    content=$(echo "$line" | cut -d: -f2-)
    if echo "$content" | grep -qE "\b(4|3|6|7|8)\b" && echo "$content" | grep -qi validator; then
        if ! echo "$content" | grep -qE "\b($VALIDATORS_COUNT|5)\b"; then
            echo "  ⚠️  $file - Validator count may be incorrect: $content"
            warnings=$((warnings + 1))
        fi
    fi
done <<< "$validator_count_refs"
# Check sentry counts
while IFS= read -r line; do
    [[ -z "$line" ]] && continue
    file=$(echo "$line" | cut -d: -f1)
    content=$(echo "$line" | cut -d: -f2-)
    if echo "$content" | grep -qE "\b(3|5|6|7|8)\b" && echo "$content" | grep -qi sentry; then
        if ! echo "$content" | grep -qE "\b($SENTRIES_COUNT|4)\b"; then
            echo "  ⚠️  $file - Sentry count may be incorrect: $content"
            warnings=$((warnings + 1))
        fi
    fi
done <<< "$sentry_count_refs"
# Check RPC counts
while IFS= read -r line; do
    [[ -z "$line" ]] && continue
    file=$(echo "$line" | cut -d: -f1)
    content=$(echo "$line" | cut -d: -f2-)
    if echo "$content" | grep -qE "\b(2|4|5|6)\b" && echo "$content" | grep -qi rpc; then
        if ! echo "$content" | grep -qE "\b($RPC_COUNT|3)\b"; then
            echo "  ⚠️  $file - RPC count may be incorrect: $content"
            warnings=$((warnings + 1))
        fi
    fi
done <<< "$rpc_count_refs"
echo ""
echo "=== Checking Array Definitions ==="
echo ""
# Check for hardcoded VMID arrays
array_files=$(grep -rE "(VALIDATORS|SENTRIES|RPCS?)=\(.*\)" \
    --include="*.sh" --include="*.py" \
    smom-dbis-138-proxmox/ 2>/dev/null | cut -d: -f1 | sort -u || true)
for file in $array_files; do
echo " Checking: $file"
# Check validators array
if grep -qE "VALIDATORS.*=" "$file" 2>/dev/null; then
validator_array=$(grep -A 1 "VALIDATORS.*=" "$file" 2>/dev/null | grep -E "\(.*\)" || true)
if echo "$validator_array" | grep -qE "\b(106|107|108|109|110)\b" && ! echo "$validator_array" | grep -qE "\b(1000|1001|1002|1003|1004)\b"; then
echo " ❌ Validators array contains old VMIDs: $validator_array"
errors=$((errors + 1))
fi
fi
# Check sentries array
if grep -qE "SENTRIES.*=" "$file" 2>/dev/null; then
sentry_array=$(grep -A 1 "SENTRIES.*=" "$file" 2>/dev/null | grep -E "\(.*\)" || true)
if echo "$sentry_array" | grep -qE "\b(111|112|113|114)\b" && ! echo "$sentry_array" | grep -qE "\b(1500|1501|1502|1503)\b"; then
echo " ❌ Sentries array contains old VMIDs: $sentry_array"
errors=$((errors + 1))
fi
fi
# Check RPC array
if grep -qE "RPCS?.*=" "$file" 2>/dev/null; then
rpc_array=$(grep -A 1 -E "RPCS?.*=" "$file" 2>/dev/null | grep -E "\(.*\)" || true)
if echo "$rpc_array" | grep -qE "\b(115|116|117)\b" && ! echo "$rpc_array" | grep -qE "\b(2500|2501|2502)\b"; then
echo " ❌ RPC array contains old VMIDs: $rpc_array"
errors=$((errors + 1))
fi
fi
done
echo ""
echo "=== Summary ==="
echo ""
echo "Errors found: $errors"
echo "Warnings found: $warnings"
echo ""
if [[ $errors -eq 0 && $warnings -eq 0 ]]; then
echo "✅ All VMID references appear consistent!"
exit 0
elif [[ $errors -eq 0 ]]; then
echo "⚠️ Some warnings found - review recommended"
exit 0
else
echo "❌ Errors found - fix required"
exit 1
fi
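The old-versus-new detection above hinges on word-boundary regexes; the core check can be sketched as follows (validator ranges shown; the function name is illustrative):

```python
import re

def has_only_old_validator_vmids(text):
    """True if text mentions old validator VMIDs (106-110) and none of 1000-1004."""
    old = re.search(r"\b(10[6-9]|110)\b", text)
    new = re.search(r"\b100[0-4]\b", text)
    return bool(old) and not new

print(has_only_old_validator_vmids("validators run on VMID 106-110"))
print(has_only_old_validator_vmids("validators run on VMID 1000-1004"))
```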

scripts/clean-ml110.sh Executable file

@@ -0,0 +1,42 @@
#!/usr/bin/env bash
# Clean ml110 - Remove old files before syncing
# This is a separate script for safety
set -euo pipefail
REMOTE_HOST="192.168.11.10"
REMOTE_USER="root"
REMOTE_BASE="/opt"
log_info() { echo "[INFO] $1"; }
log_warn() { echo "[WARN] $1"; }
log_error() { echo "[ERROR] $1"; }
log_warn "=== WARNING: This will DELETE all files in:"
log_warn "  - ${REMOTE_BASE}/smom-dbis-138-proxmox (deleted; a backup is taken first)"
log_warn "  - ${REMOTE_BASE}/smom-dbis-138 (backed up only; config and keys are preserved if you run the sync script)"
log_warn ""
read -p "Are you sure you want to delete these directories? (type 'DELETE' to confirm): " confirm
if [[ "$confirm" != "DELETE" ]]; then
log_info "Aborted by user"
exit 0
fi
# Create backup first
BACKUP_DIR="/opt/backup-$(date +%Y%m%d-%H%M%S)"
log_info "Creating backup at ${BACKUP_DIR}..."
sshpass -p "${REMOTE_PASS:?Set REMOTE_PASS to the remote root password}" ssh -o StrictHostKeyChecking=no "${REMOTE_USER}@${REMOTE_HOST}" \
"mkdir -p ${BACKUP_DIR} && \
(test -d ${REMOTE_BASE}/smom-dbis-138 && cp -r ${REMOTE_BASE}/smom-dbis-138 ${BACKUP_DIR}/ || true) && \
(test -d ${REMOTE_BASE}/smom-dbis-138-proxmox && cp -r ${REMOTE_BASE}/smom-dbis-138-proxmox ${BACKUP_DIR}/ || true) && \
echo 'Backup created'"
log_info "Removing old directories..."
sshpass -p "${REMOTE_PASS:?Set REMOTE_PASS to the remote root password}" ssh -o StrictHostKeyChecking=no "${REMOTE_USER}@${REMOTE_HOST}" \
"rm -rf ${REMOTE_BASE}/smom-dbis-138-proxmox && \
echo '✓ Removed smom-dbis-138-proxmox'"
log_info "✓ Cleanup complete"
log_info "Backup saved at: ${BACKUP_DIR}"

scripts/cleanup-all-old-files.sh Executable file

@@ -0,0 +1,277 @@
#!/usr/bin/env bash
# Comprehensive Cleanup of Old, Backup, and Unreferenced Files
# Safely removes old files from both local projects and remote ml110
#
# Targets:
# - Backup directories (backup-*, *backup*)
# - Temporary key generation directories (temp-all-keys-*)
# - Old log files (logs/*.log older than 30 days)
# - Temporary files (*.bak, *.old, *~, *.swp)
# - Old documentation files that are no longer referenced
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Configuration
DRY_RUN="${DRY_RUN:-true}"
REMOTE_HOST="${REMOTE_HOST:-192.168.11.10}"
REMOTE_USER="${REMOTE_USER:-root}"
REMOTE_PASS="${REMOTE_PASS:-L@kers2010}"
MIN_LOG_AGE_DAYS=30
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
--execute)
DRY_RUN=false
shift
;;
--help)
cat << EOF
Usage: $0 [OPTIONS]
Comprehensive cleanup of old, backup, and unreferenced files.
Options:
--execute Actually delete files (default: dry-run)
--help Show this help
Safety:
- By default, runs in DRY-RUN mode
- Use --execute to actually delete files
- Creates detailed manifest of files to be deleted
EOF
exit 0
;;
*)
log_error "Unknown option: $1"
exit 1
;;
esac
done
# Create cleanup manifest
CLEANUP_LOG="$PROJECT_ROOT/logs/cleanup-$(date +%Y%m%d-%H%M%S).log"
mkdir -p "$PROJECT_ROOT/logs"
> "$CLEANUP_LOG"
log_info "========================================="
log_info "Comprehensive File Cleanup"
log_info "========================================="
log_info "Mode: $([ "$DRY_RUN" == "true" ] && echo "DRY-RUN" || echo "EXECUTE")"
log_info "Log: $CLEANUP_LOG"
log_info ""
TOTAL_FOUND=0
TOTAL_DELETED=0
# Function to safely delete a file/directory
safe_delete() {
local target="$1"
local label="${2:-item}"
if [[ ! -e "$target" ]]; then
return 0
fi
echo "$target" >> "$CLEANUP_LOG"
TOTAL_FOUND=$((TOTAL_FOUND + 1))
if [[ "$DRY_RUN" != "true" ]]; then
if rm -rf "$target" 2>/dev/null; then
TOTAL_DELETED=$((TOTAL_DELETED + 1))
echo "✓ Deleted: $target"
return 0
else
echo "✗ Failed: $target" >&2
return 1
fi
else
echo "Would delete: $target"
return 0
fi
}
# Clean local proxmox project
log_info "=== Cleaning Local Proxmox Project ==="
PROXMOX_DIR="$PROJECT_ROOT"
# Old markdown files in root (status/completion docs that are superseded)
OLD_DOCS_PROXMOX=(
"$PROXMOX_DIR/ACTION_PLAN_NOW.md"
"$PROXMOX_DIR/DEPLOYMENT_IN_PROGRESS.md"
"$PROXMOX_DIR/DEPLOYMENT_SOLUTION.md"
"$PROXMOX_DIR/FINAL_STATUS.txt"
"$PROXMOX_DIR/IMPLEMENTATION_COMPLETE.md"
"$PROXMOX_DIR/NEXT_STEPS_QUICK_REFERENCE.md"
"$PROXMOX_DIR/ORGANIZATION_SUMMARY.md"
"$PROXMOX_DIR/PROJECT_STRUCTURE.md"
"$PROXMOX_DIR/QUICK_DEPLOY_FIX.md"
"$PROXMOX_DIR/QUICK_DEPLOY.md"
"$PROXMOX_DIR/QUICK_START_VALIDATED_SET.md"
"$PROXMOX_DIR/STATUS_FINAL.md"
"$PROXMOX_DIR/STATUS.md"
"$PROXMOX_DIR/VALIDATED_SET_IMPLEMENTATION_SUMMARY.md"
)
for doc in "${OLD_DOCS_PROXMOX[@]}"; do
safe_delete "$doc" "old doc"
done
# Temporary besu-enodes directories
while IFS= read -r dir; do
safe_delete "$dir" "temp enode dir"
done < <(find "$PROXMOX_DIR" -maxdepth 1 -type d -name "besu-enodes-*" 2>/dev/null)
# Old log files in smom-dbis-138-proxmox/logs
if [[ -d "$PROXMOX_DIR/smom-dbis-138-proxmox/logs" ]]; then
while IFS= read -r logfile; do
if [[ -f "$logfile" ]]; then
file_age=$(( ($(date +%s) - $(stat -c %Y "$logfile" 2>/dev/null || echo 0)) / 86400 ))
if [[ $file_age -gt $MIN_LOG_AGE_DAYS ]]; then
safe_delete "$logfile" "old log"
fi
fi
done < <(find "$PROXMOX_DIR/smom-dbis-138-proxmox/logs" -type f -name "*.log" 2>/dev/null)
fi
# Backup/temp files (only in specific project directories)
while IFS= read -r file; do
# Only process files in our project directories
if [[ "$file" == "$PROXMOX_DIR/"* ]] && [[ "$file" != *"/node_modules/"* ]] && [[ "$file" != *"/ProxmoxVE/"* ]] && [[ "$file" != *"/mcp-proxmox/"* ]] && [[ "$file" != *"/the_order/"* ]]; then
safe_delete "$file" "backup/temp file"
fi
done < <(find "$PROXMOX_DIR" -maxdepth 3 -type f \( -name "*.bak" -o -name "*.old" -o -name "*~" -o -name "*.swp" \) 2>/dev/null)
# Clean local smom-dbis-138 project
log_info ""
log_info "=== Cleaning Local smom-dbis-138 Project ==="
# Try different possible locations
SMOM_DIR=""
for possible_dir in "$PROJECT_ROOT/../smom-dbis-138" "/home/intlc/projects/smom-dbis-138"; do
if [[ -d "$possible_dir" ]]; then
SMOM_DIR="$possible_dir"
break
fi
done
if [[ -n "$SMOM_DIR" ]] && [[ -d "$SMOM_DIR" ]]; then
log_info "Using smom-dbis-138 directory: $SMOM_DIR"
# Temporary key generation directories
while IFS= read -r dir; do
safe_delete "$dir" "temp key gen dir"
done < <(find "$SMOM_DIR" -maxdepth 1 -type d -name "temp-all-keys-*" 2>/dev/null)
# Backup key directories (keep only the most recent)
LATEST_BACKUP=$(find "$SMOM_DIR" -maxdepth 1 -type d -name "backup-keys-*" 2>/dev/null | sort | tail -1)
while IFS= read -r dir; do
if [[ "$dir" != "$LATEST_BACKUP" ]]; then
safe_delete "$dir" "old backup keys"
fi
done < <(find "$SMOM_DIR" -maxdepth 1 -type d -name "backup-keys-*" 2>/dev/null)
# Old log files
if [[ -d "$SMOM_DIR/logs" ]]; then
while IFS= read -r logfile; do
if [[ -f "$logfile" ]]; then
file_age=$(( ($(date +%s) - $(stat -c %Y "$logfile" 2>/dev/null || echo 0)) / 86400 ))
if [[ $file_age -gt $MIN_LOG_AGE_DAYS ]]; then
safe_delete "$logfile" "old log"
fi
fi
done < <(find "$SMOM_DIR/logs" -type f -name "*.log" 2>/dev/null)
fi
# Temporary/backup files
while IFS= read -r file; do
safe_delete "$file" "backup/temp file"
done < <(find "$SMOM_DIR" -maxdepth 2 -type f \( -name "*.bak" -o -name "*.old" -o -name "*~" -o -name "*.swp" \) ! -path "*/node_modules/*" 2>/dev/null)
else
log_warn "smom-dbis-138 directory not found (searched $PROJECT_ROOT/../smom-dbis-138 and /home/intlc/projects/smom-dbis-138)"
fi
# Clean remote ml110
log_info ""
log_info "=== Cleaning Remote Host (ml110) ==="
if sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
"${REMOTE_USER}@${REMOTE_HOST}" "echo 'Connected'" 2>/dev/null; then
log_info "Connected to ${REMOTE_HOST}"
# Get list of files to clean
REMOTE_CLEANUP=$(sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd /opt && {
# Find backup/temp directories
find smom-dbis-138* -type d -name '*backup*' 2>/dev/null
find smom-dbis-138* -type d -name 'temp-all-keys-*' 2>/dev/null
# Find old log files (older than $MIN_LOG_AGE_DAYS days)
find smom-dbis-138*/logs -type f -name '*.log' 2>/dev/null | while read -r log; do
age=\$(( (\$(date +%s) - \$(stat -c %Y \"\$log\" 2>/dev/null || echo 0)) / 86400 ))
if [[ \$age -gt $MIN_LOG_AGE_DAYS ]]; then
echo \"\$log\"
fi
done
# Find backup/temp files
find smom-dbis-138* -type f \( -name '*.bak' -o -name '*.old' -o -name '*~' -o -name '*.swp' \) 2>/dev/null
}" 2>/dev/null)
if [[ -n "$REMOTE_CLEANUP" ]]; then
REMOTE_COUNT=0
# read via here-string, not a pipe: a piped while runs in a subshell,
# so REMOTE_COUNT and TOTAL_DELETED updates would be silently lost
while IFS= read -r item; do
if [[ -n "$item" ]]; then
REMOTE_COUNT=$((REMOTE_COUNT + 1))
TOTAL_FOUND=$((TOTAL_FOUND + 1))
echo "/opt/$item" >> "$CLEANUP_LOG"
if [[ "$DRY_RUN" == "true" ]]; then
echo "Would delete (remote): /opt/$item"
elif sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "rm -rf \"/opt/$item\" 2>/dev/null && echo '✓' || echo '✗'" 2>/dev/null | grep -q "✓"; then
TOTAL_DELETED=$((TOTAL_DELETED + 1))
fi
fi
done <<< "$REMOTE_CLEANUP"
log_info "Found $REMOTE_COUNT items on remote"
else
log_info "No cleanup targets found on remote"
fi
else
log_warn "Cannot connect to ${REMOTE_HOST}, skipping remote cleanup"
fi
# Summary
log_info ""
log_info "========================================="
log_info "Cleanup Summary"
log_info "========================================="
log_info "Total items found: $TOTAL_FOUND"
if [[ "$DRY_RUN" == "true" ]]; then
log_warn "DRY-RUN mode: No files were deleted"
log_info "Review the log file: $CLEANUP_LOG"
log_info "Run with --execute to actually delete: $0 --execute"
else
log_success "Total items deleted: $TOTAL_DELETED"
log_info "Cleanup log: $CLEANUP_LOG"
fi
log_info ""
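The dry-run guard in `safe_delete` is the script's core safety mechanism. A stripped-down, standalone version of the same pattern, using a temp file instead of real project paths:

```shell
DRY_RUN="${DRY_RUN:-true}"

safe_delete() {
  target="$1"
  [ -e "$target" ] || return 0
  if [ "$DRY_RUN" = "true" ]; then
    echo "Would delete: $target"
  else
    rm -rf "$target" && echo "Deleted: $target"
  fi
}

victim=$(mktemp)
safe_delete "$victim"   # DRY_RUN=true: reported only, file survives
DRY_RUN=false
safe_delete "$victim"   # now actually removed
```

The default-to-dry-run choice means a bare invocation can never destroy data; deletion requires an explicit opt-in.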

scripts/cleanup-ml110-docs.sh Executable file

@@ -0,0 +1,206 @@
#!/usr/bin/env bash
# Cleanup Documentation on ml110
# Performs the same cleanup as local: deletes obsolete docs and marks historical ones
set -euo pipefail
REMOTE_HOST="192.168.11.10"
REMOTE_USER="root"
REMOTE_PASS="L@kers2010"
REMOTE_BASE="/opt"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
DRY_RUN="${DRY_RUN:-true}"
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
--execute)
DRY_RUN=false
shift
;;
--help)
cat << EOF
Usage: $0 [OPTIONS]
Cleanup documentation on ml110 (same as local cleanup).
Options:
--execute Actually delete files (default: dry-run)
--help Show this help
EOF
exit 0
;;
*)
log_error "Unknown option: $1"
exit 1
;;
esac
done
log_info "========================================="
log_info "Cleanup Documentation on ml110"
log_info "========================================="
log_info "Remote: ${REMOTE_USER}@${REMOTE_HOST}"
log_info "Mode: $([ "$DRY_RUN" == "true" ] && echo "DRY-RUN" || echo "EXECUTE")"
log_info ""
# Test connection
if ! sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
"${REMOTE_USER}@${REMOTE_HOST}" "echo 'Connected'" 2>/dev/null; then
log_error "Cannot connect to ${REMOTE_HOST}"
exit 1
fi
log_success "Connected to ${REMOTE_HOST}"
# Obsolete files to delete
OBSOLETE_FILES=(
"smom-dbis-138-proxmox/docs/DEPLOYMENT_STATUS.md"
"smom-dbis-138-proxmox/docs/DEPLOYMENT_REVIEW_COMPLETE.md"
"smom-dbis-138-proxmox/docs/DEPLOYMENT_REVIEW.md"
"smom-dbis-138-proxmox/docs/DEPLOYMENT_TIME_ESTIMATE.md"
"smom-dbis-138-proxmox/docs/DEPLOYMENT_TIME_ESTIMATE_BESU_ONLY.md"
"smom-dbis-138-proxmox/docs/DEPLOYMENT_VALIDATION_REPORT.md"
"smom-dbis-138-proxmox/docs/DEPLOYED_VMIDS_LIST.md"
"smom-dbis-138-proxmox/docs/DEPLOYMENT_OPTIMIZATION_COMPLETE.md"
"smom-dbis-138-proxmox/docs/DEPLOYMENT_OPTIMIZATION_RECOMMENDATIONS.md"
"smom-dbis-138-proxmox/docs/DEPLOYMENT_RECOMMENDATIONS_STATUS.md"
"smom-dbis-138-proxmox/docs/DEPLOYMENT_CONFIGURATION_VERIFICATION.md"
"smom-dbis-138-proxmox/docs/NEXT_STEPS_COMPREHENSIVE.md"
"smom-dbis-138-proxmox/docs/NEXT_STEPS_COMPLETE.md"
"smom-dbis-138-proxmox/docs/NEXT_STEPS_SUMMARY.md"
"smom-dbis-138-proxmox/docs/COMPLETION_REPORT.md"
"smom-dbis-138-proxmox/docs/FIXES_APPLIED.md"
"smom-dbis-138-proxmox/docs/REVIEW_FIXES_APPLIED.md"
"smom-dbis-138-proxmox/docs/MINOR_OBSERVATIONS_FIXED.md"
"smom-dbis-138-proxmox/docs/NON_CRITICAL_FIXES_COMPLETE.md"
"smom-dbis-138-proxmox/docs/QUICK_WINS_COMPLETED.md"
"smom-dbis-138-proxmox/docs/TASK_COMPLETION_SUMMARY.md"
"smom-dbis-138-proxmox/docs/IMPLEMENTATION_COMPLETE.md"
"smom-dbis-138-proxmox/docs/PREREQUISITES_COMPLETE.md"
"smom-dbis-138-proxmox/docs/SETUP_COMPLETE.md"
"smom-dbis-138-proxmox/docs/SETUP_COMPLETE_FINAL.md"
"smom-dbis-138-proxmox/docs/SETUP_STATUS.md"
"smom-dbis-138-proxmox/docs/VALIDATION_STATUS.md"
"smom-dbis-138-proxmox/docs/CONFIGURATION_ALIGNMENT.md"
"smom-dbis-138-proxmox/docs/REVIEW_INCONSISTENCIES_GAPS.md"
"smom-dbis-138-proxmox/docs/REVIEW_SUMMARY.md"
"smom-dbis-138-proxmox/docs/COMPREHENSIVE_REVIEW.md"
"smom-dbis-138-proxmox/docs/FINAL_REVIEW.md"
"smom-dbis-138-proxmox/docs/DETAILED_ISSUES_REVIEW.md"
"smom-dbis-138-proxmox/docs/RECOMMENDATIONS_OVERVIEW.md"
"smom-dbis-138-proxmox/docs/OS_TEMPLATE_CHANGE.md"
"smom-dbis-138-proxmox/docs/UBUNTU_DEBIAN_ANALYSIS.md"
"smom-dbis-138-proxmox/docs/OS_TEMPLATE_ANALYSIS.md"
"smom-dbis-138-proxmox/docs/DHCP_IP_ADDRESSES.md"
"smom-dbis-138-proxmox/docs/VMID_CONSISTENCY_REPORT.md"
"smom-dbis-138-proxmox/docs/ACTIVE_DOCS_UPDATE_SUMMARY.md"
"smom-dbis-138-proxmox/docs/PROJECT_UPDATE_COMPLETE.md"
)
# Historical files to mark
HISTORICAL_FILES=(
"smom-dbis-138-proxmox/docs/EXPECTED_CONTAINERS.md"
"smom-dbis-138-proxmox/docs/VMID_ALLOCATION.md"
"smom-dbis-138-proxmox/docs/VMID_REFERENCE_AUDIT.md"
"smom-dbis-138-proxmox/docs/VMID_UPDATE_COMPLETE.md"
)
log_info "=== Checking Files on ml110 ==="
# Check which obsolete files exist
# Expand the array with %q so each filename reaches the remote shell as its own word
# (embedding "${OBSOLETE_FILES[@]}" directly in the ssh command string collapses it into one word);
# the trailing 'true' keeps a missing last file from failing the command under set -e
EXISTING_OBSOLETE=$(sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd ${REMOTE_BASE} && for f in $(printf '%q ' "${OBSOLETE_FILES[@]}"); do [ -f \"\$f\" ] && echo \"\$f\"; done; true" 2>/dev/null)
if [[ -n "$EXISTING_OBSOLETE" ]]; then
OBSOLETE_COUNT=$(echo "$EXISTING_OBSOLETE" | wc -l)
log_info "Found $OBSOLETE_COUNT obsolete files to delete"
if [[ "$DRY_RUN" == "true" ]]; then
echo "$EXISTING_OBSOLETE" | while read -r file; do
log_info "Would delete: $file"
done
else
# Delete obsolete files
echo "$EXISTING_OBSOLETE" | while read -r file; do
if sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd ${REMOTE_BASE} && rm -f \"$file\" 2>/dev/null && echo 'deleted'" 2>/dev/null | grep -q "deleted"; then
log_success "Deleted: $file"
else
log_warn "Failed to delete: $file"
fi
done
fi
else
log_info "No obsolete files found to delete"
fi
echo ""
# Mark historical files
log_info "=== Marking Historical Documentation ==="
for file in "${HISTORICAL_FILES[@]}"; do
# Check if file exists and is not already marked
EXISTS=$(sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd ${REMOTE_BASE} && [ -f \"$file\" ] && echo 'exists' || echo 'missing'" 2>/dev/null)
if [[ "$EXISTS" == "exists" ]]; then
ALREADY_MARKED=$(sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd ${REMOTE_BASE} && head -1 \"$file\" 2>/dev/null | grep -q 'HISTORICAL' && echo 'yes' || echo 'no'" 2>/dev/null)
if [[ "$ALREADY_MARKED" == "yes" ]]; then
log_info "Already marked: $file"
else
if [[ "$DRY_RUN" == "true" ]]; then
log_info "Would mark as historical: $file"
else
# Add historical header based on file type
if [[ "$file" == *"EXPECTED_CONTAINERS"* ]]; then
HEADER="<!-- HISTORICAL: This document contains historical VMID ranges (106-117) and is kept for reference only. Current ranges: Validators 1000-1004, Sentries 1500-1503, RPC 2500-2502 -->"
elif [[ "$file" == *"VMID_ALLOCATION"* ]]; then
HEADER="<!-- HISTORICAL: This document contains historical VMID ranges (1100-1122) and is kept for reference only. Current ranges: Validators 1000-1004, Sentries 1500-1503, RPC 2500-2502 -->"
else
HEADER="<!-- HISTORICAL: This document is a historical record and is kept for reference only. -->"
fi
# Add header to file
sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd ${REMOTE_BASE} && sed -i '1i${HEADER}' \"$file\" 2>/dev/null && echo 'marked'" 2>/dev/null | grep -q "marked" && \
log_success "Marked: $file" || \
log_warn "Failed to mark: $file"
fi
fi
else
log_warn "File not found: $file"
fi
done
echo ""
# Summary
log_info "========================================="
log_info "Summary"
log_info "========================================="
if [[ "$DRY_RUN" == "true" ]]; then
log_warn "DRY-RUN mode: No files were modified"
log_info "Run with --execute to actually delete/mark files"
else
log_success "Cleanup completed on ml110"
fi
log_info ""
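The idempotent "mark as historical" step above can be tried locally without ssh. A minimal sketch with a shortened header (assumes GNU `sed`, as on the Debian/Ubuntu targets):

```shell
f=$(mktemp)
printf '%s\n' '# Some old doc' 'body text' > "$f"

HEADER='<!-- HISTORICAL: kept for reference only. -->'

# insert the header only once, mirroring the ALREADY_MARKED check above
if head -1 "$f" | grep -q 'HISTORICAL'; then
  echo "already marked"
else
  sed -i "1i${HEADER}" "$f"
fi
```

Checking the first line before inserting is what makes the operation safe to re-run: a second pass reports "already marked" instead of stacking duplicate headers.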

scripts/cleanup-old-files.sh Executable file

@@ -0,0 +1,295 @@
#!/usr/bin/env bash
# Cleanup Old, Backup, and Unreferenced Files
# Safely removes old files, backups, and unused files from both local and remote
#
# This script identifies and removes:
# - Backup directories (backup-*, *backup*)
# - Temporary files (*.tmp, *.temp, *~, *.swp)
# - Old log files (logs/*.log older than 30 days)
# - Duplicate/unused files
# - Old documentation that's been superseded
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Configuration
DRY_RUN="${DRY_RUN:-true}"
REMOTE_HOST="${REMOTE_HOST:-192.168.11.10}"
REMOTE_USER="${REMOTE_USER:-root}"
CLEAN_LOCAL="${CLEAN_LOCAL:-true}"
CLEAN_REMOTE="${CLEAN_REMOTE:-true}"
MIN_LOG_AGE_DAYS=30
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
--execute)
DRY_RUN=false
shift
;;
--skip-remote)
CLEAN_REMOTE=false
shift
;;
--skip-local)
CLEAN_LOCAL=false
shift
;;
--help)
cat << EOF
Usage: $0 [OPTIONS]
Cleanup old, backup, and unreferenced files from project directories.
Options:
--execute Actually delete files (default: dry-run, only shows what would be deleted)
--skip-remote Skip cleaning remote host (ml110)
--skip-local Skip cleaning local project
--help Show this help
Safety:
- By default, runs in DRY-RUN mode (shows files but doesn't delete)
- Use --execute to actually delete files
- Creates a manifest of files that will be deleted
EOF
exit 0
;;
*)
log_error "Unknown option: $1"
exit 1
;;
esac
done
# Create cleanup manifest
CLEANUP_MANIFEST="$PROJECT_ROOT/logs/cleanup-manifest-$(date +%Y%m%d-%H%M%S).txt"
mkdir -p "$PROJECT_ROOT/logs"
> "$CLEANUP_MANIFEST"
log_info "========================================="
log_info "File Cleanup Script"
log_info "========================================="
log_info "Mode: $([ "$DRY_RUN" == "true" ] && echo "DRY-RUN (no files will be deleted)" || echo "EXECUTE (files will be deleted)")"
log_info "Manifest: $CLEANUP_MANIFEST"
log_info ""
# Function to find and catalog files to delete
find_cleanup_targets() {
local base_dir="$1"
local label="$2"
log_info "=== Scanning $label ==="
local count=0
# Backup directories
while IFS= read -r dir; do
if [[ -d "$dir" ]]; then
echo "$dir" >> "$CLEANUP_MANIFEST"
echo "DIR: $dir"
count=$((count + 1))  # not ((count++)): that returns 1 when count is 0 and aborts under set -e
fi
done < <(find "$base_dir" -type d -name "*backup*" 2>/dev/null)
# Temporary directories
while IFS= read -r dir; do
if [[ -d "$dir" ]] && [[ "$dir" != "$base_dir" ]]; then
echo "$dir" >> "$CLEANUP_MANIFEST"
echo "DIR: $dir"
count=$((count + 1))
fi
done < <(find "$base_dir" -type d \( -name "*tmp*" -o -name "*temp*" \) 2>/dev/null)
# Temporary/backup files
while IFS= read -r file; do
if [[ -f "$file" ]]; then
echo "$file" >> "$CLEANUP_MANIFEST"
echo "FILE: $file"
count=$((count + 1))
fi
done < <(find "$base_dir" -type f \( -name "*.bak" -o -name "*.old" -o -name "*~" -o -name "*.swp" -o -name "*.tmp" -o -name "*.temp" \) 2>/dev/null)
# Old log files (older than MIN_LOG_AGE_DAYS)
if [[ -d "$base_dir/logs" ]]; then
while IFS= read -r file; do
if [[ -f "$file" ]]; then
local file_age=$(( ($(date +%s) - $(stat -c %Y "$file" 2>/dev/null || echo 0)) / 86400 ))
if [[ $file_age -gt $MIN_LOG_AGE_DAYS ]]; then
echo "$file" >> "$CLEANUP_MANIFEST"
echo "OLD LOG ($file_age days): $file"
count=$((count + 1))
fi
fi
done < <(find "$base_dir/logs" -type f -name "*.log" 2>/dev/null)
fi
# temp-all-keys-* directories in smom-dbis-138
if [[ "$base_dir" == *"smom-dbis-138"* ]]; then
while IFS= read -r dir; do
if [[ -d "$dir" ]]; then
echo "$dir" >> "$CLEANUP_MANIFEST"
echo "TEMP KEY GEN: $dir"
count=$((count + 1))
fi
done < <(find "$base_dir" -type d -name "temp-all-keys-*" 2>/dev/null)
fi
log_info "Found $count items to clean"
echo "$count"
}
# Function to delete files from manifest
delete_from_manifest() {
local manifest_file="$1"
local label="$2"
if [[ ! -f "$manifest_file" ]]; then
log_warn "Manifest file not found: $manifest_file"
return 0
fi
local count=$(wc -l < "$manifest_file" | tr -d ' ')
if [[ $count -eq 0 ]]; then
log_info "No files to delete for $label"
return 0
fi
log_info "Deleting $count items from $label..."
local deleted=0
local failed=0
while IFS= read -r target; do
if [[ -z "$target" ]]; then
continue
fi
if [[ -e "$target" ]]; then
if rm -rf "$target" 2>/dev/null; then
deleted=$((deleted + 1))  # not ((deleted++)): returns 1 on the first increment under set -e
else
log_warn "Failed to delete: $target"
failed=$((failed + 1))
fi
fi
done < "$manifest_file"
log_success "Deleted $deleted items, $failed failures"
}
# Clean local project
if [[ "$CLEAN_LOCAL" == "true" ]]; then
log_info ""
log_info "=== Local Project Cleanup ==="
# Clean proxmox project
PROXMOX_CLEANUP="$PROJECT_ROOT/logs/proxmox-cleanup-$(date +%Y%m%d-%H%M%S).txt"
> "$PROXMOX_CLEANUP"
find_cleanup_targets "$PROJECT_ROOT" "proxmox project" | tee -a "$PROXMOX_CLEANUP" | tail -20
proxmox_count=$(tail -1 "$PROXMOX_CLEANUP" | grep -oE '[0-9]+' | head -1 || echo "0")
# Clean smom-dbis-138 project
if [[ -d "$PROJECT_ROOT/../smom-dbis-138" ]]; then
SMOM_CLEANUP="$PROJECT_ROOT/logs/smom-cleanup-$(date +%Y%m%d-%H%M%S).txt"
> "$SMOM_CLEANUP"
find_cleanup_targets "$PROJECT_ROOT/../smom-dbis-138" "smom-dbis-138 project" | tee -a "$SMOM_CLEANUP" | tail -20
smom_count=$(tail -1 "$SMOM_CLEANUP" | grep -oE '[0-9]+' | head -1 || echo "0")
else
smom_count=0
fi
total_local=$((proxmox_count + smom_count))
if [[ "$DRY_RUN" != "true" ]] && [[ $total_local -gt 0 ]]; then
log_info ""
log_warn "Executing deletion of $total_local local items..."
delete_from_manifest "$CLEANUP_MANIFEST" "local project"
fi
fi
# Clean remote host
if [[ "$CLEAN_REMOTE" == "true" ]]; then
log_info ""
log_info "=== Remote Host Cleanup (ml110) ==="
# Test SSH connection
if ! sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
"${REMOTE_USER}@${REMOTE_HOST}" "echo 'Connected'" 2>/dev/null; then
log_warn "Cannot connect to ${REMOTE_HOST}, skipping remote cleanup"
else
log_info "Scanning remote host..."
# Get list of files to clean on remote
REMOTE_CLEANUP_LIST=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd /opt && \
find smom-dbis-138* -type d -name '*backup*' 2>/dev/null && \
find smom-dbis-138* -type d \( -name '*tmp*' -o -name '*temp*' \) 2>/dev/null && \
find smom-dbis-138* -type f \( -name '*.bak' -o -name '*.old' -o -name '*~' -o -name '*.swp' \) 2>/dev/null" 2>/dev/null | head -50)
remote_count=0
if [[ -n "$REMOTE_CLEANUP_LIST" ]]; then
# here-string, not a pipe: a piped while runs in a subshell, so remote_count
# would stay 0 and the deletion step below would never trigger
while IFS= read -r item; do
if [[ -n "$item" ]]; then
echo "/opt/$item" >> "$CLEANUP_MANIFEST"
echo "REMOTE: /opt/$item"
remote_count=$((remote_count + 1))
fi
done <<< "$REMOTE_CLEANUP_LIST"
log_info "Found $remote_count items to clean on remote"
else
log_info "No cleanup targets found on remote"
fi
if [[ "$DRY_RUN" != "true" ]] && [[ $remote_count -gt 0 ]]; then
log_info ""
log_warn "Executing deletion of $remote_count remote items..."
# Delete remote files
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd /opt && \
find smom-dbis-138* -type d -name '*backup*' -exec rm -rf {} + 2>/dev/null; \
find smom-dbis-138* -type d \( -name '*tmp*' -o -name '*temp*' \) -exec rm -rf {} + 2>/dev/null; \
find smom-dbis-138* -type f \( -name '*.bak' -o -name '*.old' -o -name '*~' -o -name '*.swp' \) -delete 2>/dev/null; \
echo 'Remote cleanup completed'"
log_success "Remote cleanup completed"
fi
fi
fi
# Summary
log_info ""
log_info "========================================="
log_info "Cleanup Summary"
log_info "========================================="
log_info "Manifest file: $CLEANUP_MANIFEST"
log_info "Mode: $([ "$DRY_RUN" == "true" ] && echo "DRY-RUN" || echo "EXECUTED")"
log_info ""
if [[ "$DRY_RUN" == "true" ]]; then
log_warn "This was a DRY-RUN. No files were deleted."
log_info "Review the manifest file and run with --execute to delete files:"
log_info " $0 --execute"
else
log_success "Cleanup completed. Check manifest for details: $CLEANUP_MANIFEST"
fi
log_info ""
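The manifest-first approach above (catalogue everything, review, then delete) reduces to a two-pass pattern. A sketch in which a temp directory stands in for the project root:

```shell
root=$(mktemp -d)
touch "$root/a.bak" "$root/b.old" "$root/keep.txt"

manifest="$root/manifest.txt"
# pass 1: record what would be deleted; nothing is touched yet
find "$root" -maxdepth 1 -type f \( -name '*.bak' -o -name '*.old' \) > "$manifest"

# pass 2: delete exactly what the manifest lists, nothing more
while IFS= read -r target; do
  rm -f -- "$target"
done < "$manifest"
```

Because the manifest is written before anything is removed, it doubles as both a review checkpoint and an audit log of what the run deleted.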

scripts/cloudflare-setup-web.py Executable file

@@ -0,0 +1,437 @@
#!/usr/bin/env python3
"""
Cloudflare Credentials Setup Web Interface
Provides a web UI to configure Cloudflare API credentials
"""
import os
import json
import subprocess
import re
from pathlib import Path
from flask import Flask, render_template_string, request, jsonify, redirect, url_for
app = Flask(__name__)
SCRIPT_DIR = Path(__file__).parent.parent
ENV_FILE = SCRIPT_DIR / ".env"
# HTML Template
HTML_TEMPLATE = """
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Cloudflare Credentials Setup</title>
<style>
* { margin: 0; padding: 0; box-sizing: border-box; }
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
min-height: 100vh;
display: flex;
align-items: center;
justify-content: center;
padding: 20px;
}
.container {
background: white;
border-radius: 12px;
box-shadow: 0 20px 60px rgba(0,0,0,0.3);
max-width: 600px;
width: 100%;
padding: 40px;
}
h1 {
color: #333;
margin-bottom: 10px;
font-size: 28px;
}
.subtitle {
color: #666;
margin-bottom: 30px;
font-size: 14px;
}
.form-group {
margin-bottom: 20px;
}
label {
display: block;
margin-bottom: 8px;
color: #333;
font-weight: 500;
font-size: 14px;
}
input[type="text"], input[type="email"], input[type="password"] {
width: 100%;
padding: 12px;
border: 2px solid #e0e0e0;
border-radius: 6px;
font-size: 14px;
transition: border-color 0.3s;
}
input:focus {
outline: none;
border-color: #667eea;
}
.help-text {
font-size: 12px;
color: #888;
margin-top: 4px;
}
.btn {
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
color: white;
border: none;
padding: 14px 28px;
border-radius: 6px;
font-size: 16px;
font-weight: 600;
cursor: pointer;
width: 100%;
transition: transform 0.2s, box-shadow 0.2s;
}
.btn:hover {
transform: translateY(-2px);
box-shadow: 0 10px 20px rgba(102, 126, 234, 0.4);
}
.btn:active {
transform: translateY(0);
}
.btn-secondary {
background: #6c757d;
margin-top: 10px;
}
.alert {
padding: 12px;
border-radius: 6px;
margin-bottom: 20px;
font-size: 14px;
}
.alert-success {
background: #d4edda;
color: #155724;
border: 1px solid #c3e6cb;
}
.alert-error {
background: #f8d7da;
color: #721c24;
border: 1px solid #f5c6cb;
}
.alert-info {
background: #d1ecf1;
color: #0c5460;
border: 1px solid #bee5eb;
}
.status-section {
background: #f8f9fa;
border-radius: 6px;
padding: 20px;
margin-bottom: 20px;
}
.status-item {
display: flex;
justify-content: space-between;
padding: 8px 0;
border-bottom: 1px solid #e0e0e0;
}
.status-item:last-child {
border-bottom: none;
}
.status-label {
color: #666;
font-size: 14px;
}
.status-value {
color: #333;
font-weight: 500;
font-size: 14px;
}
.status-value.valid {
color: #28a745;
}
.status-value.invalid {
color: #dc3545;
}
.links {
margin-top: 20px;
text-align: center;
}
.links a {
color: #667eea;
text-decoration: none;
font-size: 13px;
margin: 0 10px;
}
.links a:hover {
text-decoration: underline;
}
</style>
</head>
<body>
<div class="container">
<h1>🔐 Cloudflare Setup</h1>
<p class="subtitle">Configure your Cloudflare API credentials</p>
{% if message %}
<div class="alert alert-{{ message_type }}">
{{ message }}
</div>
{% endif %}
<div class="status-section">
<h3 style="margin-bottom: 15px; font-size: 16px; color: #333;">Current Status</h3>
<div class="status-item">
<span class="status-label">Email:</span>
<span class="status-value {% if current_email %}valid{% else %}invalid{% endif %}">
{{ current_email or 'Not set' }}
</span>
</div>
<div class="status-item">
<span class="status-label">API Key:</span>
<span class="status-value {% if has_api_key %}valid{% else %}invalid{% endif %}">
{{ 'Set' if has_api_key else 'Not set' }}
</span>
</div>
<div class="status-item">
<span class="status-label">API Token:</span>
<span class="status-value {% if has_api_token %}valid{% else %}invalid{% endif %}">
{{ 'Set' if has_api_token else 'Not set' }}
</span>
</div>
<div class="status-item">
<span class="status-label">Account ID:</span>
<span class="status-value {% if current_account_id %}valid{% else %}invalid{% endif %}">
{{ current_account_id or 'Not set' }}
</span>
</div>
<div class="status-item">
<span class="status-label">Zone ID:</span>
<span class="status-value {% if current_zone_id %}valid{% else %}invalid{% endif %}">
{{ current_zone_id or 'Not set' }}
</span>
</div>
<div class="status-item">
<span class="status-label">Domain:</span>
<span class="status-value {% if current_domain %}valid{% else %}invalid{% endif %}">
{{ current_domain or 'Not set' }}
</span>
</div>
</div>
<form method="POST" action="/save">
<div class="form-group">
<label for="email">Cloudflare Email *</label>
<input type="email" id="email" name="email" value="{{ current_email or '' }}" required>
<div class="help-text">Your Cloudflare account email address</div>
</div>
<div class="form-group">
<label for="api_key">Global API Key</label>
<input type="password" id="api_key" name="api_key" placeholder="Leave empty to keep current">
<div class="help-text">
<a href="https://dash.cloudflare.com/profile/api-tokens" target="_blank">Get from Cloudflare Dashboard</a>
</div>
</div>
<div class="form-group">
<label for="api_token">API Token (Alternative)</label>
<input type="password" id="api_token" name="api_token" placeholder="Leave empty to keep current">
<div class="help-text">
<a href="https://dash.cloudflare.com/profile/api-tokens" target="_blank">Create API Token</a> (Recommended - more secure)
</div>
</div>
<div class="form-group">
<label for="account_id">Account ID</label>
<input type="text" id="account_id" name="account_id" value="{{ current_account_id or '' }}">
<div class="help-text">Your Cloudflare account ID</div>
</div>
<div class="form-group">
<label for="zone_id">Zone ID</label>
<input type="text" id="zone_id" name="zone_id" value="{{ current_zone_id or '' }}">
<div class="help-text">DNS zone ID for your domain</div>
</div>
<div class="form-group">
<label for="domain">Domain</label>
<input type="text" id="domain" name="domain" value="{{ current_domain or '' }}" placeholder="example.com">
<div class="help-text">Your domain name (e.g., d-bis.org)</div>
</div>
<div class="form-group">
<label for="tunnel_token">Tunnel Token</label>
<input type="password" id="tunnel_token" name="tunnel_token" placeholder="Leave empty to keep current">
<div class="help-text">Cloudflare Tunnel service token</div>
</div>
<button type="submit" class="btn">💾 Save Credentials</button>
</form>
<form method="POST" action="/test" style="margin-top: 10px;">
<button type="submit" class="btn btn-secondary">🧪 Test API Connection</button>
</form>
<div class="links">
<a href="https://dash.cloudflare.com/profile/api-tokens" target="_blank">Cloudflare API Tokens</a>
<a href="https://dash.cloudflare.com/" target="_blank">Cloudflare Dashboard</a>
</div>
</div>
</body>
</html>
"""
def load_env():
"""Load current .env file values"""
env_vars = {}
if ENV_FILE.exists():
with open(ENV_FILE, 'r') as f:
for line in f:
line = line.strip()
if line and not line.startswith('#') and '=' in line:
key, value = line.split('=', 1)
key = key.strip()
value = value.strip().strip('"').strip("'")
env_vars[key] = value
return env_vars
def save_env(env_vars):
"""Save environment variables to .env file"""
# Read existing file to preserve comments and structure
lines = []
if ENV_FILE.exists():
with open(ENV_FILE, 'r') as f:
lines = f.readlines()
# Update or add variables
updated_keys = set()
new_lines = []
for line in lines:
stripped = line.strip()
if stripped and not stripped.startswith('#') and '=' in stripped:
key = stripped.split('=', 1)[0].strip()
if key in env_vars:
new_lines.append(f'{key}="{env_vars[key]}"\n')
updated_keys.add(key)
continue
new_lines.append(line)
# Add new variables
for key, value in env_vars.items():
if key not in updated_keys:
new_lines.append(f'{key}="{value}"\n')
with open(ENV_FILE, 'w') as f:
f.writelines(new_lines)
def test_api_credentials(email, api_key, api_token):
    """Test Cloudflare API credentials"""
    import requests
    headers = {"Content-Type": "application/json"}
    if api_token:
        headers["Authorization"] = f"Bearer {api_token}"
        url = "https://api.cloudflare.com/client/v4/user"
    elif api_key and email:
        headers["X-Auth-Email"] = email
        headers["X-Auth-Key"] = api_key
        url = "https://api.cloudflare.com/client/v4/user"
    else:
        return False, "No credentials provided"
    try:
        response = requests.get(url, headers=headers, timeout=10)
        data = response.json()
        if data.get('success'):
            user_email = data.get('result', {}).get('email', '')
            return True, f"✓ Authentication successful! Logged in as: {user_email}"
        else:
            error = data.get('errors', [{}])[0].get('message', 'Unknown error')
            return False, f"✗ Authentication failed: {error}"
    except Exception as e:
        return False, f"✗ Connection error: {str(e)}"

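`test_api_credentials` maps the two Cloudflare auth schemes onto request headers: a scoped API token becomes a `Bearer` header, while the legacy global API key requires the `X-Auth-Email` / `X-Auth-Key` pair. The mapping in isolation, as a sketch:

```python
# Sketch of the credential-to-header mapping used by test_api_credentials:
# a bearer token wins; otherwise the legacy email + global-key pair is used.

def build_auth_headers(email: str, api_key: str, api_token: str):
    headers = {"Content-Type": "application/json"}
    if api_token:
        headers["Authorization"] = f"Bearer {api_token}"
    elif api_key and email:
        headers["X-Auth-Email"] = email
        headers["X-Auth-Key"] = api_key
    else:
        return None  # no usable credentials
    return headers
```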
@app.route('/')
def index():
    """Display the setup form"""
    env = load_env()
    return render_template_string(
        HTML_TEMPLATE,
        current_email=env.get('CLOUDFLARE_EMAIL', ''),
        has_api_key=bool(env.get('CLOUDFLARE_API_KEY', '')),
        has_api_token=bool(env.get('CLOUDFLARE_API_TOKEN', '')),
        current_account_id=env.get('CLOUDFLARE_ACCOUNT_ID', ''),
        current_zone_id=env.get('CLOUDFLARE_ZONE_ID', ''),
        current_domain=env.get('CLOUDFLARE_DOMAIN', ''),
        message=request.args.get('message', ''),
        message_type=request.args.get('type', 'info'),
    )

@app.route('/save', methods=['POST'])
def save():
    """Save credentials to .env file"""
    env = load_env()
    # Update only provided fields
    if request.form.get('email'):
        env['CLOUDFLARE_EMAIL'] = request.form.get('email')
    if request.form.get('api_key'):
        env['CLOUDFLARE_API_KEY'] = request.form.get('api_key')
    if request.form.get('api_token'):
        env['CLOUDFLARE_API_TOKEN'] = request.form.get('api_token')
        # An API token supersedes the global API key, so drop the key
        if 'CLOUDFLARE_API_KEY' in env:
            del env['CLOUDFLARE_API_KEY']
    if request.form.get('account_id'):
        env['CLOUDFLARE_ACCOUNT_ID'] = request.form.get('account_id')
    if request.form.get('zone_id'):
        env['CLOUDFLARE_ZONE_ID'] = request.form.get('zone_id')
    if request.form.get('domain'):
        env['CLOUDFLARE_DOMAIN'] = request.form.get('domain')
    if request.form.get('tunnel_token'):
        env['CLOUDFLARE_TUNNEL_TOKEN'] = request.form.get('tunnel_token')
    save_env(env)
    return redirect(url_for('index', message='Credentials saved successfully!', type='success'))

@app.route('/test', methods=['POST'])
def test():
    """Test API credentials"""
    env = load_env()
    email = request.form.get('email') or env.get('CLOUDFLARE_EMAIL', '')
    api_key = request.form.get('api_key') or env.get('CLOUDFLARE_API_KEY', '')
    api_token = request.form.get('api_token') or env.get('CLOUDFLARE_API_TOKEN', '')
    if not api_token and not (email and api_key):
        return redirect(url_for('index', message='Please provide email and API key, or API token', type='error'))
    success, message = test_api_credentials(email, api_key, api_token)
    return redirect(url_for('index', message=message, type='success' if success else 'error'))

if __name__ == '__main__':
    print("\n" + "=" * 60)
    print("🌐 Cloudflare Credentials Setup Web Interface")
    print("=" * 60)
    print(f"\n📁 .env file location: {ENV_FILE}")
    print("\n🔗 Open in your browser:")
    print("   http://localhost:5000")
    print("   http://127.0.0.1:5000")
    print("\n⚠️  This server is only accessible from localhost")
    print("   Press Ctrl+C to stop the server\n")
    print("=" * 60 + "\n")
    app.run(host='127.0.0.1', port=5000, debug=False)

scripts/complete-setup.sh Executable file

@@ -0,0 +1,276 @@
#!/bin/bash
# Complete Setup Script for Proxmox Workspace
# This script ensures all prerequisites are met and completes all setup steps
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"
echo "🚀 Proxmox Workspace Complete Setup"
echo "===================================="
echo ""
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Check functions
check_pass() {
echo -e "${GREEN}✓${NC} $1"
}
check_warn() {
echo -e "${YELLOW}⚠️${NC} $1"
}
check_fail() {
echo -e "${RED}✗${NC} $1"
exit 1
}
check_info() {
echo -e "${BLUE}ℹ${NC} $1"
}
# Step 1: Verify Prerequisites
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Step 1: Verifying Prerequisites"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
# Check Node.js
if command -v node &> /dev/null; then
NODE_VERSION=$(node --version)
NODE_MAJOR=$(echo "$NODE_VERSION" | cut -d'.' -f1 | sed 's/v//')
if [ "$NODE_MAJOR" -ge 16 ]; then
check_pass "Node.js $NODE_VERSION installed (requires 16+)"
else
check_fail "Node.js version $NODE_VERSION is too old (requires 16+)"
fi
else
check_fail "Node.js is not installed"
fi
# Check pnpm
if command -v pnpm &> /dev/null; then
PNPM_VERSION=$(pnpm --version)
PNPM_MAJOR=$(echo "$PNPM_VERSION" | cut -d'.' -f1)
if [ "$PNPM_MAJOR" -ge 8 ]; then
check_pass "pnpm $PNPM_VERSION installed (requires 8+)"
else
check_fail "pnpm version $PNPM_VERSION is too old (requires 8+)"
fi
else
check_fail "pnpm is not installed. Install with: npm install -g pnpm"
fi
# Check Git
if command -v git &> /dev/null; then
GIT_VERSION=$(git --version | awk '{print $3}')
check_pass "Git $GIT_VERSION installed"
else
check_fail "Git is not installed"
fi
echo ""
# Step 2: Initialize Submodules
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Step 2: Initializing Git Submodules"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
if [ -f ".gitmodules" ] || [ -d ".git" ]; then
check_info "Initializing and updating submodules..."
git submodule update --init --recursive 2>&1 | grep -E "Submodule|Cloning|fatal" || true
check_pass "Submodules initialized"
else
check_warn "Not a git repository, skipping submodule initialization"
fi
echo ""
# Step 3: Install Dependencies
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Step 3: Installing Workspace Dependencies"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
check_info "Installing dependencies for all workspace packages..."
if pnpm install; then
check_pass "All dependencies installed"
else
check_fail "Failed to install dependencies"
fi
echo ""
# Step 4: Verify Workspace Structure
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Step 4: Verifying Workspace Structure"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
# Check workspace config
if [ -f "pnpm-workspace.yaml" ]; then
check_pass "pnpm-workspace.yaml exists"
else
check_fail "pnpm-workspace.yaml not found"
fi
# Check root package.json
if [ -f "package.json" ]; then
check_pass "Root package.json exists"
if grep -q "mcp:start" package.json; then
check_pass "Workspace scripts configured"
fi
else
check_fail "package.json not found"
fi
# Check submodules
if [ -d "mcp-proxmox" ] && [ -f "mcp-proxmox/index.js" ]; then
check_pass "mcp-proxmox submodule present"
else
check_warn "mcp-proxmox submodule may be missing"
fi
if [ -d "ProxmoxVE/frontend" ] && [ -f "ProxmoxVE/frontend/package.json" ]; then
check_pass "ProxmoxVE/frontend submodule present"
else
check_warn "ProxmoxVE/frontend submodule may be missing"
fi
echo ""
# Step 5: Configuration Files
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Step 5: Setting Up Configuration Files"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
ENV_FILE="$HOME/.env"
if [ ! -f "$ENV_FILE" ]; then
check_info "Creating .env file..."
cat > "$ENV_FILE" << 'EOF'
# Proxmox MCP Server Configuration
# Fill in your actual values below
# Proxmox Configuration (REQUIRED)
PROXMOX_HOST=your-proxmox-ip-or-hostname
PROXMOX_USER=root@pam
PROXMOX_TOKEN_NAME=your-token-name
PROXMOX_TOKEN_VALUE=your-token-secret
# Security Settings (REQUIRED)
# ⚠️ WARNING: Setting PROXMOX_ALLOW_ELEVATED=true enables DESTRUCTIVE operations
PROXMOX_ALLOW_ELEVATED=false
# Optional Settings
# PROXMOX_PORT=8006 # Defaults to 8006 if not specified
EOF
check_pass ".env file created at $ENV_FILE"
check_warn "Please edit $ENV_FILE and add your Proxmox credentials"
else
check_pass ".env file exists at $ENV_FILE"
# Check if configured
if grep -q "your-proxmox-ip" "$ENV_FILE" 2>/dev/null || ! grep -q "^PROXMOX_HOST=" "$ENV_FILE" 2>/dev/null; then
check_warn ".env file needs Proxmox credentials configured"
else
check_pass ".env file appears configured"
fi
fi
# Claude Desktop config
CLAUDE_CONFIG_DIR="$HOME/.config/Claude"
CLAUDE_CONFIG="$CLAUDE_CONFIG_DIR/claude_desktop_config.json"
if [ ! -d "$CLAUDE_CONFIG_DIR" ]; then
mkdir -p "$CLAUDE_CONFIG_DIR"
check_pass "Claude config directory created"
fi
if [ ! -f "$CLAUDE_CONFIG" ]; then
check_info "Creating Claude Desktop config..."
cat > "$CLAUDE_CONFIG" << EOF
{
"mcpServers": {
"proxmox": {
"command": "node",
"args": [
"$SCRIPT_DIR/mcp-proxmox/index.js"
]
}
}
}
EOF
check_pass "Claude Desktop config created"
else
check_pass "Claude Desktop config exists"
fi
echo ""
# Step 6: Verify Dependencies
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Step 6: Verifying Package Dependencies"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
# Check MCP server dependencies
if [ -d "mcp-proxmox/node_modules" ]; then
if [ -d "mcp-proxmox/node_modules/@modelcontextprotocol" ]; then
check_pass "MCP server dependencies installed"
else
check_warn "MCP server dependencies may be incomplete"
fi
else
check_warn "MCP server node_modules not found (may need: cd mcp-proxmox && pnpm install)"
fi
# Check frontend dependencies
if [ -d "ProxmoxVE/frontend/node_modules" ]; then
check_pass "Frontend dependencies installed"
else
check_warn "Frontend dependencies not found (may need: cd ProxmoxVE/frontend && pnpm install)"
fi
echo ""
# Step 7: Final Verification
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Step 7: Final Verification"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
# Run the verification script if it exists
if [ -f "verify-setup.sh" ]; then
check_info "Running verification script..."
echo ""
bash verify-setup.sh || check_warn "Some verification checks failed (see above)"
else
check_warn "verify-setup.sh not found"
fi
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "✅ Setup Complete!"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Next steps:"
echo "1. Edit $ENV_FILE with your Proxmox credentials"
echo "2. (Optional) Create Proxmox API token: ./create-proxmox-token.sh"
echo "3. Restart Claude Desktop to load MCP server"
echo "4. Test MCP server: pnpm test:basic"
echo "5. Start MCP server: pnpm mcp:start"
echo ""
echo "Available commands:"
echo " • pnpm mcp:start - Start MCP server"
echo " • pnpm mcp:dev - Start MCP server in watch mode"
echo " • pnpm frontend:dev - Start frontend dev server"
echo " • ./verify-setup.sh - Verify setup"
echo ""

scripts/complete-validation.sh Executable file

@@ -0,0 +1,85 @@
#!/bin/bash
# Complete Validation Script
# Runs all validation checks in sequence
set +e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
echo ""
echo -e "${BLUE}╔════════════════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║${NC} Complete Validation Suite ${BLUE}║${NC}"
echo -e "${BLUE}╚════════════════════════════════════════════════════════════════════════╝${NC}"
echo ""
# Step 1: Prerequisites Check
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${CYAN}Step 1: Prerequisites Check${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
"$SCRIPT_DIR/check-prerequisites.sh"
PREREQ_RESULT=$?
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${CYAN}Step 2: Deployment Validation${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
"$SCRIPT_DIR/validate-ml110-deployment.sh"
DEPLOY_RESULT=$?
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${CYAN}Step 3: Connection Test${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
"$SCRIPT_DIR/test-connection.sh"
CONNECTION_RESULT=$?
# Summary
echo ""
echo -e "${BLUE}╔════════════════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║${NC} Validation Summary ${BLUE}║${NC}"
echo -e "${BLUE}╚════════════════════════════════════════════════════════════════════════╝${NC}"
echo ""
if [ $PREREQ_RESULT -eq 0 ]; then
echo -e "${GREEN}✅ Prerequisites: PASSED${NC}"
else
echo -e "${RED}❌ Prerequisites: FAILED${NC}"
fi
if [ $DEPLOY_RESULT -eq 0 ]; then
echo -e "${GREEN}✅ Deployment Validation: PASSED${NC}"
else
echo -e "${RED}❌ Deployment Validation: FAILED${NC}"
fi
if [ $CONNECTION_RESULT -eq 0 ]; then
echo -e "${GREEN}✅ Connection Test: PASSED${NC}"
else
echo -e "${RED}❌ Connection Test: FAILED${NC}"
fi
echo ""
if [ $PREREQ_RESULT -eq 0 ] && [ $DEPLOY_RESULT -eq 0 ] && [ $CONNECTION_RESULT -eq 0 ]; then
echo -e "${GREEN}✅ All validations passed! System is ready for deployment.${NC}"
exit 0
else
echo -e "${YELLOW}⚠️ Some validations failed. Please review the output above.${NC}"
exit 1
fi


@@ -0,0 +1,155 @@
#!/usr/bin/env bash
# Comprehensive Project Update Script
# Updates all files to ensure consistency with current standards
set -euo pipefail
PROJECT_ROOT="/home/intlc/projects/proxmox"
PROXMOX_PROJECT="$PROJECT_ROOT/smom-dbis-138-proxmox"
echo "=== Comprehensive Project Update ==="
echo ""
echo "This script will:"
echo " 1. Verify VMID consistency (1000-1004, 1500-1503, 2500-2502)"
echo " 2. Check for outdated IP addresses (10.3.1.X should be 192.168.11.X)"
echo " 3. Verify Besu documentation references"
echo " 4. Check for old VMID references (106-122)"
echo ""
cd "$PROJECT_ROOT"
# Expected values
EXPECTED_VALIDATOR_COUNT=5
EXPECTED_VALIDATOR_RANGE="1000-1004"
EXPECTED_SENTRY_COUNT=4
EXPECTED_SENTRY_RANGE="1500-1503"
EXPECTED_RPC_COUNT=3
EXPECTED_RPC_RANGE="2500-2502"
EXPECTED_SUBNET="192.168.11"
OLD_SUBNET="10.3.1"
EXPECTED_GATEWAY="192.168.11.1"
errors=0
warnings=0
echo "=== 1. Checking VMID Ranges ==="
echo ""
# Check proxmox.conf for correct VMID ranges
if grep -q "VALIDATOR_COUNT=5" "$PROXMOX_PROJECT/config/proxmox.conf" 2>/dev/null; then
echo " ✅ VALIDATOR_COUNT=5 in proxmox.conf"
else
echo " ⚠️ VALIDATOR_COUNT not set to 5 in proxmox.conf"
warnings=$((warnings + 1))
fi
if grep -q "SENTRY_COUNT=4" "$PROXMOX_PROJECT/config/proxmox.conf" 2>/dev/null; then
echo " ✅ SENTRY_COUNT=4 in proxmox.conf"
else
echo " ⚠️ SENTRY_COUNT not set to 4 in proxmox.conf"
warnings=$((warnings + 1))
fi
if grep -q "RPC_COUNT=3" "$PROXMOX_PROJECT/config/proxmox.conf" 2>/dev/null; then
echo " ✅ RPC_COUNT=3 in proxmox.conf"
else
echo " ⚠️ RPC_COUNT not set to 3 in proxmox.conf"
warnings=$((warnings + 1))
fi
echo ""
echo "=== 2. Checking IP Address References ==="
echo ""
# Check for old IP subnet
old_ips=$(grep -rE "\b10\.3\.1\." "$PROXMOX_PROJECT" \
--include="*.sh" --include="*.md" --include="*.conf" \
2>/dev/null | grep -v ".git" | grep -v "node_modules" | wc -l || true)
if [[ $old_ips -gt 0 ]]; then
echo " ⚠️ Found $old_ips references to old IP subnet (10.3.1.X)"
echo " These should be updated to 192.168.11.X"
warnings=$((warnings + old_ips))
else
echo " ✅ No old IP subnet references found"
fi
# Check network.conf for correct gateway
if grep -q "GATEWAY=\"$EXPECTED_GATEWAY\"" "$PROXMOX_PROJECT/config/network.conf" 2>/dev/null; then
echo " ✅ Gateway correctly set to $EXPECTED_GATEWAY"
else
echo " ⚠️ Gateway may not be set to $EXPECTED_GATEWAY in network.conf"
warnings=$((warnings + 1))
fi
echo ""
echo "=== 3. Checking Besu Documentation References ==="
echo ""
# Check for generic Besu documentation references
besu_docs=$(grep -rE "Besu.*[Dd]ocumentation|besu.*docs" "$PROXMOX_PROJECT" \
--include="*.md" \
2>/dev/null | grep -v "besu.hyperledger.org" | grep -v "github.com/hyperledger/besu" | wc -l || true)
if [[ $besu_docs -gt 0 ]]; then
echo " ⚠️ Found $besu_docs generic Besu documentation references"
echo " Should reference https://besu.hyperledger.org or https://github.com/hyperledger/besu"
warnings=$((warnings + besu_docs))
else
echo " ✅ All Besu documentation references include official links"
fi
echo ""
echo "=== 4. Checking for Old VMID References ==="
echo ""
# Check for old VMID ranges (106-122)
old_vmids=$(grep -rE "\b(106|107|108|109|110|111|112|113|114|115|116|117|120|121|122)\b" "$PROXMOX_PROJECT" \
--include="*.sh" --include="*.md" --include="*.conf" \
2>/dev/null | grep -v ".git" | grep -v "node_modules" | \
grep -v "1006\|1107\|2106" | grep -v "COMMENT\|#.*old" | wc -l || true)
if [[ $old_vmids -gt 0 ]]; then
echo " ⚠️ Found $old_vmids potential old VMID references"
echo " These should be reviewed to ensure they're not active references"
warnings=$((warnings + old_vmids))
else
echo " ✅ No old VMID references found (or all are commented/contextual)"
fi
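The grep above leans on `\b` word boundaries so that tokens like `2106` or `1150` are not flagged as old VMIDs. A hedged Python equivalent of the same scan (covering VMIDs 106-117 and 120-122, per the script's intent) is handy for sanity-checking what the boundary pattern will and won't match:

```python
# Python sketch of the word-boundary VMID scan performed by the grep above.
# Covers 106-117 and 120-122; boundaries prevent partial-number matches.
import re

OLD_VMIDS = re.compile(r'\b(10[6-9]|11[0-7]|12[0-2])\b')

def has_old_vmid(text: str) -> bool:
    """Return True if the text contains a standalone old VMID."""
    return OLD_VMIDS.search(text) is not None
```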
echo ""
echo "=== 5. Checking Validator Key Count ==="
echo ""
SOURCE_PROJECT="${SOURCE_PROJECT:-../smom-dbis-138}"
if [[ -d "$SOURCE_PROJECT/keys/validators" ]]; then
key_count=$(find "$SOURCE_PROJECT/keys/validators" -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l)
if [[ $key_count -eq $EXPECTED_VALIDATOR_COUNT ]]; then
echo " ✅ Validator key count matches: $key_count"
else
echo " ⚠️ Validator key count mismatch: found $key_count, expected $EXPECTED_VALIDATOR_COUNT"
warnings=$((warnings + 1))
fi
else
echo " ⚠️ Source project keys directory not found"
warnings=$((warnings + 1))
fi
echo ""
echo "=== Summary ==="
echo ""
echo "Errors: $errors"
echo "Warnings: $warnings"
echo ""
if [[ $errors -eq 0 && $warnings -eq 0 ]]; then
echo "✅ Project appears to be fully updated and consistent!"
exit 0
elif [[ $errors -eq 0 ]]; then
echo "⚠️ Some warnings found - review recommended"
exit 0
else
echo "❌ Errors found - fix required"
exit 1
fi

scripts/comprehensive-review.sh Executable file

@@ -0,0 +1,261 @@
#!/usr/bin/env bash
# Comprehensive Review Script
# Checks for inconsistencies, gaps, and dependency issues across the project
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
REPORT_FILE="$PROJECT_ROOT/logs/comprehensive-review-$(date +%Y%m%d-%H%M%S).md"
mkdir -p "$PROJECT_ROOT/logs"
{
cat << EOF
# Comprehensive Project Review Report
Generated: $(date)
## Summary
This report identifies:
- VMID inconsistencies
- IP address inconsistencies
- Configuration gaps
- Missing dependencies
- Unreferenced or obsolete files
---
## 1. VMID Consistency
### Expected VMID Ranges
- **Validators**: 1000-1004 (5 nodes)
- **Sentries**: 1500-1503 (4 nodes)
- **RPC**: 2500-2502 (3 nodes)
### Issues Found
EOF
echo "### Files with Old VMIDs (106-117)"
echo ""
grep -rE "\b(106|107|108|109|110|111|112|113|114|115|116|117)\b" \
"$PROJECT_ROOT/smom-dbis-138-proxmox/" "$PROJECT_ROOT/docs/" 2>/dev/null | \
grep -v node_modules | grep -v ".git" | \
grep -v "EXPECTED_CONTAINERS.md" | \
grep -v "VMID_ALLOCATION.md" | \
grep -v "HISTORICAL" | \
cut -d: -f1 | sort -u | while read -r file; do
echo "- \`$file\`"
done || true
echo ""
echo "---"
echo ""
echo "## 2. IP Address Consistency"
echo ""
echo "### Expected IP Range"
echo "- Base subnet: 192.168.11.0/24"
echo "- Validators: 192.168.11.100-104"
echo "- Sentries: 192.168.11.150-153"
echo "- RPC: 192.168.11.250-252"
echo ""
echo "### Files with Old IPs (10.3.1.X)"
echo ""
grep -rE "10\.3\.1\." "$PROJECT_ROOT/smom-dbis-138-proxmox/" "$PROJECT_ROOT/docs/" 2>/dev/null | \
grep -v node_modules | grep -v ".git" | \
cut -d: -f1 | sort -u | while read -r file; do
echo "- \`$file\`"
done || true
echo ""
echo "---"
echo ""
echo "## 3. Configuration Gaps"
echo ""
# Check for missing config files
if [[ ! -f "$PROJECT_ROOT/smom-dbis-138-proxmox/config/proxmox.conf" ]]; then
echo "- ❌ Missing: \`config/proxmox.conf\`"
else
echo "- ✅ Found: \`config/proxmox.conf\`"
fi
if [[ ! -f "$PROJECT_ROOT/smom-dbis-138-proxmox/config/network.conf" ]]; then
echo "- ❌ Missing: \`config/network.conf\`"
else
echo "- ✅ Found: \`config/network.conf\`"
fi
echo ""
echo "### Key Configuration Variables Check"
echo ""
# Source config to check variables
if [[ -f "$PROJECT_ROOT/smom-dbis-138-proxmox/config/proxmox.conf" ]]; then
source "$PROJECT_ROOT/smom-dbis-138-proxmox/config/proxmox.conf" 2>/dev/null || true
if [[ "${VALIDATOR_COUNT:-}" != "5" ]]; then
echo "- ⚠️ VALIDATOR_COUNT=${VALIDATOR_COUNT:-not set} (expected: 5)"
else
echo "- ✅ VALIDATOR_COUNT=5"
fi
if [[ "${SENTRY_COUNT:-}" != "4" ]]; then
echo "- ⚠️ SENTRY_COUNT=${SENTRY_COUNT:-not set} (expected: 4)"
else
echo "- ✅ SENTRY_COUNT=4"
fi
if [[ "${RPC_COUNT:-}" != "3" ]]; then
echo "- ⚠️ RPC_COUNT=${RPC_COUNT:-not set} (expected: 3)"
else
echo "- ✅ RPC_COUNT=3"
fi
fi
echo ""
echo "---"
echo ""
echo "## 4. Dependencies Review"
echo ""
echo "### Required Tools"
echo ""
# Check for required tools
REQUIRED_TOOLS=(
"pct:Proxmox Container Toolkit"
"jq:JSON processor"
"sshpass:SSH password authentication"
"timeout:Command timeout utility"
"openssl:OpenSSL toolkit"
"curl:HTTP client"
"wget:File downloader"
)
for tool_info in "${REQUIRED_TOOLS[@]}"; do
tool=$(echo "$tool_info" | cut -d: -f1)
desc=$(echo "$tool_info" | cut -d: -f2)
if command -v "$tool" >/dev/null 2>&1; then
echo "- ✅ $tool ($desc)"
else
echo "- ❌ $tool ($desc) - MISSING"
fi
done
echo ""
echo "### Optional Tools"
echo ""
OPTIONAL_TOOLS=(
"quorum-genesis-tool:Genesis configuration generator"
"besu:Hyperledger Besu client"
)
for tool_info in "${OPTIONAL_TOOLS[@]}"; do
tool=$(echo "$tool_info" | cut -d: -f1)
desc=$(echo "$tool_info" | cut -d: -f2)
if command -v "$tool" >/dev/null 2>&1; then
echo "- ✅ $tool ($desc)"
else
echo "- ⚠️ $tool ($desc) - Optional (for key generation)"
fi
done
echo ""
echo "---"
echo ""
echo "## 5. Script Dependencies"
echo ""
echo "### Scripts Checking for Dependencies"
echo ""
# Find scripts that check for tools
grep -rE "(command -v|which|command_exists)" "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/" 2>/dev/null | \
grep -v ".git" | cut -d: -f1 | sort -u | while read -r file; do
echo "- \`$file\`"
done || true
echo ""
echo "---"
echo ""
echo "## 6. Missing or Incomplete Files"
echo ""
# Check for common missing files
MISSING_CHECKS=(
"smom-dbis-138-proxmox/scripts/copy-besu-config.sh:Configuration copy script"
"smom-dbis-138-proxmox/scripts/network/bootstrap-network.sh:Network bootstrap script"
"smom-dbis-138-proxmox/scripts/validation/validate-deployment-comprehensive.sh:Deployment validation script"
)
for check in "${MISSING_CHECKS[@]}"; do
file=$(echo "$check" | cut -d: -f1)
desc=$(echo "$check" | cut -d: -f2)
if [[ -f "$PROJECT_ROOT/$file" ]]; then
echo "- ✅ $desc: \`$file\`"
else
echo "- ❌ Missing: $desc - \`$file\`"
fi
done
echo ""
echo "---"
echo ""
echo "## 7. Documentation Inconsistencies"
echo ""
echo "### Documents with Outdated VMID References"
echo ""
# Check documentation files
OLD_VMID_DOCS=(
"docs/EXPECTED_CONTAINERS.md:References old VMIDs (106-117)"
"docs/VMID_ALLOCATION.md:Contains historical VMID ranges (1100-1122)"
)
for doc_info in "${OLD_VMID_DOCS[@]}"; do
doc=$(echo "$doc_info" | cut -d: -f1)
issue=$(echo "$doc_info" | cut -d: -f2)
if [[ -f "$PROJECT_ROOT/$doc" ]]; then
echo "- ⚠️ \`$doc\` - $issue"
fi
done
echo ""
echo "---"
echo ""
echo "## Recommendations"
echo ""
echo "1. Update files with old VMID references to use current ranges (1000-1004, 1500-1503, 2500-2502)"
echo "2. Update files with old IP addresses (10.3.1.X) to use new range (192.168.11.X)"
echo "3. Review and update historical documentation files"
echo "4. Ensure all required tools are installed on deployment hosts"
echo "5. Verify configuration file consistency across all scripts"
echo ""
} > "$REPORT_FILE"
log_info "========================================="
log_info "Comprehensive Review Complete"
log_info "========================================="
log_info "Report saved to: $REPORT_FILE"
log_info ""
cat "$REPORT_FILE"


@@ -0,0 +1,243 @@
#!/usr/bin/env bash
# Comprehensive Validation Script
# Validates templates, genesis.json, keys, configs, and synchronization
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_section() { echo -e "${CYAN}=== $1 ===${NC}"; }
ERRORS=0
WARNINGS=0
# Source project paths
SMOM_PROJECT="${SMOM_PROJECT:-$PROJECT_ROOT/../smom-dbis-138}"
PROXMOX_PROJECT="$PROJECT_ROOT/smom-dbis-138-proxmox"
log_section "Comprehensive Validation Review"
echo ""
# 1. Template Files Review
log_section "1. Template Files Validation"
TEMPLATE_FILES=(
"$PROXMOX_PROJECT/config/proxmox.conf.example"
"$PROXMOX_PROJECT/config/network.conf.example"
)
for template in "${TEMPLATE_FILES[@]}"; do
if [[ -f "$template" ]]; then
log_success "$(basename "$template") exists"
# Check for old VMID references
if grep -qE "\b(106|107|108|109|110|111|112|113|114|115|116|117)\b" "$template" 2>/dev/null; then
log_error "$(basename "$template") contains old VMID references"
ERRORS=$((ERRORS + 1))
fi
# Check for old IP references
if grep -q "10\.3\.1\." "$template" 2>/dev/null; then
log_error "$(basename "$template") contains old IP addresses (10.3.1.X)"
ERRORS=$((ERRORS + 1))
fi
# Check for correct VMID ranges
if grep -qE "\b(1000|1001|1002|1003|1004|1500|1501|1502|1503|2500|2501|2502)\b" "$template" 2>/dev/null; then
log_success " Contains correct VMID ranges"
fi
# Check for correct IP range
if grep -q "192\.168\.11\." "$template" 2>/dev/null; then
log_success " Contains correct IP range (192.168.11.X)"
fi
else
log_warn "$(basename "$template") not found"
WARNINGS=$((WARNINGS + 1))
fi
done
echo ""
# 2. Genesis.json Validation
log_section "2. Genesis.json Validation"
GENESIS_FILE="$SMOM_PROJECT/config/genesis.json"
if [[ ! -f "$GENESIS_FILE" ]]; then
log_error "genesis.json not found: $GENESIS_FILE"
ERRORS=$((ERRORS + 1))
else
log_success "genesis.json found"
# Check if it's valid JSON
if python3 -c "import json; json.load(open('$GENESIS_FILE'))" 2>/dev/null; then
log_success " Valid JSON format"
# Check for validator addresses in extraData
EXTRA_DATA=$(python3 -c "import json; g=json.load(open('$GENESIS_FILE')); print(g.get('extraData', ''))" 2>/dev/null)
if [[ -n "$EXTRA_DATA" ]]; then
# Remove 0x prefix if present
HEX_DATA="${EXTRA_DATA#0x}"
# Each validator address is 40 hex characters in extraData (20 bytes)
VALIDATOR_COUNT=$(( ${#HEX_DATA} / 40 ))
log_info " Validators in extraData: $VALIDATOR_COUNT"
if [[ $VALIDATOR_COUNT -eq 5 ]]; then
log_success " ✓ Correct number of validators (5)"
elif [[ $VALIDATOR_COUNT -eq 4 ]]; then
log_error " ✗ Only 4 validators found (expected 5)"
ERRORS=$((ERRORS + 1))
else
log_error " ✗ Unexpected validator count: $VALIDATOR_COUNT (expected 5)"
ERRORS=$((ERRORS + 1))
fi
else
log_warn " extraData field not found or empty"
fi
else
log_error " Invalid JSON format"
ERRORS=$((ERRORS + 1))
fi
fi
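The validator count above is derived with the script's own heuristic: length of `extraData` minus the `0x` prefix, divided by 40 hex characters (20 bytes) per address. Real QBFT `extraData` also carries a 32-byte vanity prefix and RLP framing, so this is only an approximation; the same heuristic in Python:

```python
# Mirrors the script's simple heuristic: len(extraData without 0x) / 40.
# NOTE: an approximation only -- real QBFT extraData includes a 32-byte
# vanity prefix and RLP framing that this deliberately ignores.

def validator_count_from_extra_data(extra_data: str) -> int:
    """Approximate validator count from a hex extraData string."""
    hex_data = extra_data[2:] if extra_data.startswith('0x') else extra_data
    return len(hex_data) // 40
```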
echo ""
# 3. Validator Keys Validation
log_section "3. Validator Keys Validation"
EXPECTED_VALIDATORS=5
FOUND_VALIDATORS=0
VALIDATOR_ADDRESSES=()
for i in $(seq 1 $EXPECTED_VALIDATORS); do
KEY_DIR="$SMOM_PROJECT/keys/validators/validator-$i"
if [[ -d "$KEY_DIR" ]]; then
log_success "validator-$i directory exists"
# Check for required key files
KEY_FILES=("key.priv" "key.pem" "pubkey.pem" "address.txt")
for key_file in "${KEY_FILES[@]}"; do
if [[ -f "$KEY_DIR/$key_file" ]]; then
log_success "$key_file"
else
log_error " ✗ Missing: $key_file"
ERRORS=$((ERRORS + 1))
fi
done
# Extract address
if [[ -f "$KEY_DIR/address.txt" ]]; then
ADDR=$(cat "$KEY_DIR/address.txt" | tr -d '\n' | tr '[:upper:]' '[:lower:]')
if [[ ${#ADDR} -eq 42 ]] || [[ ${#ADDR} -eq 40 ]]; then
VALIDATOR_ADDRESSES+=("${ADDR#0x}") # Remove 0x prefix if present
log_info " Address: ${ADDR:0:10}..."
FOUND_VALIDATORS=$((FOUND_VALIDATORS + 1))
else
log_error " ✗ Invalid address format (length: ${#ADDR})"
ERRORS=$((ERRORS + 1))
fi
fi
else
log_error "validator-$i directory missing"
ERRORS=$((ERRORS + 1))
fi
done
if [[ $FOUND_VALIDATORS -eq $EXPECTED_VALIDATORS ]]; then
log_success "All $EXPECTED_VALIDATORS validator keys found"
else
log_error "Only $FOUND_VALIDATORS/$EXPECTED_VALIDATORS validator keys found"
ERRORS=$((ERRORS + 1))
fi
echo ""
# 4. Old References Check
log_section "4. Old References Check"
# Check config files for old VMIDs
OLD_VMID_COUNT=$(grep -rE "\b(106|107|108|109|110|111|112|113|114|115|116|117)\b" \
"$PROXMOX_PROJECT/config" "$PROXMOX_PROJECT/scripts" 2>/dev/null | \
grep -v ".git" | grep -v ".example" | wc -l || true)
if [[ $OLD_VMID_COUNT -eq 0 ]]; then
log_success "No old VMID references (106-117) in config/scripts"
else
log_error "Found $OLD_VMID_COUNT references to old VMIDs (106-117)"
ERRORS=$((ERRORS + 1))
fi
# Check for old IP addresses
OLD_IP_COUNT=$(grep -rE "10\.3\.1\." \
"$PROXMOX_PROJECT/config" "$PROXMOX_PROJECT/scripts" 2>/dev/null | \
grep -v ".git" | grep -v ".example" | wc -l || true)
if [[ $OLD_IP_COUNT -eq 0 ]]; then
log_success "No old IP addresses (10.3.1.X) in config/scripts"
else
log_error "Found $OLD_IP_COUNT references to old IP addresses (10.3.1.X)"
ERRORS=$((ERRORS + 1))
fi
echo ""
# 5. Configuration Completeness
log_section "5. Configuration Completeness"
CONFIG_FILES=(
"$PROXMOX_PROJECT/config/proxmox.conf"
"$PROXMOX_PROJECT/config/network.conf"
)
for config in "${CONFIG_FILES[@]}"; do
if [[ -f "$config" ]]; then
log_success "$(basename "$config") exists"
# Check for correct values
if [[ "$config" == *"proxmox.conf"* ]]; then
VALIDATOR_COUNT=$(grep "^VALIDATOR_COUNT=" "$config" 2>/dev/null | cut -d'=' -f2 | tr -d ' ' || true)
if [[ "$VALIDATOR_COUNT" == "5" ]]; then
log_success " VALIDATOR_COUNT=5 ✓"
else
log_error " VALIDATOR_COUNT=$VALIDATOR_COUNT (expected 5)"
ERRORS=$((ERRORS + 1))
fi
fi
else
log_error "$(basename "$config") missing"
ERRORS=$((ERRORS + 1))
fi
done
echo ""
# Summary
log_section "Validation Summary"
echo "Errors: $ERRORS"
echo "Warnings: $WARNINGS"
echo ""
if [[ $ERRORS -eq 0 ]]; then
log_success "✓ All validations passed!"
exit 0
else
log_error "✗ Validation failed with $ERRORS error(s)"
exit 1
fi


@@ -0,0 +1,469 @@
#!/usr/bin/env bash
# Configure Cloudflare Tunnel Routes and DNS Records via API
# Usage: ./configure-cloudflare-api.sh
# Requires: CLOUDFLARE_API_TOKEN and CLOUDFLARE_ZONE_ID environment variables
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
info() { echo -e "${GREEN}[INFO]${NC} $1"; }
warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
error() { echo -e "${RED}[ERROR]${NC} $1"; }
debug() { echo -e "${BLUE}[DEBUG]${NC} $1"; }
# Check for required tools
if ! command -v curl >/dev/null 2>&1; then
error "curl is required but not installed"
exit 1
fi
if ! command -v jq >/dev/null 2>&1; then
error "jq is required but not installed. Install with: apt-get install jq"
exit 1
fi
# Load environment variables
if [[ -f "$SCRIPT_DIR/../.env" ]]; then
source "$SCRIPT_DIR/../.env"
fi
# Cloudflare API configuration (support multiple naming conventions)
CLOUDFLARE_API_TOKEN="${CLOUDFLARE_API_TOKEN:-}"
CLOUDFLARE_ZONE_ID="${CLOUDFLARE_ZONE_ID:-}"
CLOUDFLARE_ACCOUNT_ID="${CLOUDFLARE_ACCOUNT_ID:-}"
CLOUDFLARE_EMAIL="${CLOUDFLARE_EMAIL:-}"
CLOUDFLARE_API_KEY="${CLOUDFLARE_API_KEY:-}"
DOMAIN="${DOMAIN:-${CLOUDFLARE_DOMAIN:-d-bis.org}}"
# Tunnel configuration (support multiple naming conventions)
# Prefer JWT token from installed service, then env vars
INSTALLED_TOKEN=""
if command -v ssh >/dev/null 2>&1; then
INSTALLED_TOKEN=$(ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST:-192.168.11.10} \
"pct exec 102 -- cat /etc/systemd/system/cloudflared.service 2>/dev/null | grep -o 'tunnel run --token [^ ]*' | cut -d' ' -f3" 2>/dev/null || echo "")
fi
TUNNEL_TOKEN="${INSTALLED_TOKEN:-${TUNNEL_TOKEN:-${CLOUDFLARE_TUNNEL_TOKEN:-eyJhIjoiNTJhZDU3YTcxNjcxYzVmYzAwOWVkZjA3NDQ2NTgxOTYiLCJ0IjoiMTBhYjIyZGEtOGVhMy00ZTJlLWE4OTYtMjdlY2UyMjExYTA1IiwicyI6IlptRXlOMkkyTVRrdE1EZzFNeTAwTkRBNExXSXhaalF0Wm1KaE5XVmpaVEEzTVdGbCJ9}}}"
# RPC endpoint configuration
declare -A RPC_ENDPOINTS=(
[rpc-http-pub]="https://192.168.11.251:443"
[rpc-ws-pub]="https://192.168.11.251:443"
[rpc-http-prv]="https://192.168.11.252:443"
[rpc-ws-prv]="https://192.168.11.252:443"
)
# API base URLs
CF_API_BASE="https://api.cloudflare.com/client/v4"
CF_ZERO_TRUST_API="https://api.cloudflare.com/client/v4/accounts"
# Function to make Cloudflare API request
cf_api_request() {
local method="$1"
local endpoint="$2"
local data="${3:-}"
local url="${CF_API_BASE}${endpoint}"
local headers=()
if [[ -n "$CLOUDFLARE_API_TOKEN" ]]; then
headers+=("-H" "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}")
elif [[ -n "$CLOUDFLARE_API_KEY" ]]; then
# Global API Keys are typically 40 chars, API Tokens are longer
# If no email provided, assume it's an API Token
if [[ -z "$CLOUDFLARE_EMAIL" ]] || [[ ${#CLOUDFLARE_API_KEY} -gt 50 ]]; then
headers+=("-H" "Authorization: Bearer ${CLOUDFLARE_API_KEY}")
else
headers+=("-H" "X-Auth-Email: ${CLOUDFLARE_EMAIL}")
headers+=("-H" "X-Auth-Key: ${CLOUDFLARE_API_KEY}")
fi
else
error "Cloudflare API credentials not found!"
error "Set CLOUDFLARE_API_TOKEN or CLOUDFLARE_EMAIL + CLOUDFLARE_API_KEY"
exit 1
fi
headers+=("-H" "Content-Type: application/json")
local response
if [[ -n "$data" ]]; then
response=$(curl -s -X "$method" "$url" "${headers[@]}" -d "$data")
else
response=$(curl -s -X "$method" "$url" "${headers[@]}")
fi
# Check if response is valid JSON
if ! echo "$response" | jq -e . >/dev/null 2>&1; then
error "Invalid JSON response from API"
debug "Response: $response"
return 1
fi
# Check for API errors
local success=$(echo "$response" | jq -r '.success // false' 2>/dev/null)
if [[ "$success" != "true" ]]; then
local errors=$(echo "$response" | jq -r '.errors[]?.message // .error // "Unknown error"' 2>/dev/null | head -3)
if [[ -z "$errors" ]]; then
errors="API request failed (check response)"
fi
error "API request failed: $errors"
debug "Response: $response"
return 1
fi
echo "$response"
}
# Function to get zone ID from domain
get_zone_id() {
if [[ -n "$CLOUDFLARE_ZONE_ID" ]]; then
echo "$CLOUDFLARE_ZONE_ID"
return 0
fi
info "Getting zone ID for domain: $DOMAIN"
local response=$(cf_api_request "GET" "/zones?name=${DOMAIN}")
local zone_id=$(echo "$response" | jq -r '.result[0].id // empty')
if [[ -z "$zone_id" ]]; then
error "Zone not found for domain: $DOMAIN"
exit 1
fi
info "Zone ID: $zone_id"
echo "$zone_id"
}
# Function to get account ID (needed for Zero Trust API)
get_account_id() {
info "Getting account ID..."
# Try to get from token verification
local response=$(cf_api_request "GET" "/user/tokens/verify")
local account_id=$(echo "$response" | jq -r '.result.id // empty')
if [[ -z "$account_id" ]]; then
# Try alternative: get from accounts list
response=$(cf_api_request "GET" "/accounts")
account_id=$(echo "$response" | jq -r '.result[0].id // empty')
fi
if [[ -z "$account_id" ]]; then
# Last resort: try to get from zone
local zone_id=$(get_zone_id)
response=$(cf_api_request "GET" "/zones/${zone_id}")
account_id=$(echo "$response" | jq -r '.result.account.id // empty')
fi
if [[ -z "$account_id" ]]; then
error "Could not determine account ID"
error "You may need to specify CLOUDFLARE_ACCOUNT_ID in .env file"
exit 1
fi
info "Account ID: $account_id"
echo "$account_id"
}
# Function to extract tunnel ID from token
get_tunnel_id_from_token() {
local token="$1"
# Check if it's a JWT token (has dots)
if [[ "$token" == *.*.* ]]; then
# Decode JWT token (basic base64 decode of payload)
local payload=$(echo "$token" | cut -d'.' -f2)
# Add padding if needed
local padding=$((4 - ${#payload} % 4))
if [[ $padding -ne 4 ]]; then
payload="${payload}$(printf '%*s' $padding | tr ' ' '=')"
fi
# Decode and extract tunnel ID (field 't' contains tunnel ID)
if command -v python3 >/dev/null 2>&1; then
echo "$payload" | python3 -c "import sys, base64, json; payload=sys.stdin.read().strip(); padding=4-len(payload)%4; payload+=('='*padding if padding<4 else ''); data=json.loads(base64.b64decode(payload)); print(data.get('t', ''))" 2>/dev/null || echo ""
else
echo "$payload" | base64 -d 2>/dev/null | jq -r '.t // empty' 2>/dev/null || echo ""
fi
else
# Not dot-delimited: Cloudflare tunnel tokens are usually a single
# base64-encoded JSON blob rather than a JWT, so try decoding it directly
local pad=$(( (4 - ${#token} % 4) % 4 ))
local padded="${token}$(printf '%*s' "$pad" '' | tr ' ' '=')"
echo "$padded" | base64 -d 2>/dev/null | jq -r '.t // empty' 2>/dev/null || echo ""
fi
}
# Function to get tunnel ID
get_tunnel_id() {
local account_id="$1"
local token="$2"
# Try to extract from JWT token first
local tunnel_id=$(get_tunnel_id_from_token "$token")
if [[ -n "$tunnel_id" ]]; then
info "Tunnel ID from token: $tunnel_id"
echo "$tunnel_id"
return 0
fi
# Fallback: list tunnels and find the one
warn "Could not extract tunnel ID from token, listing tunnels..."
local response=$(cf_api_request "GET" "/accounts/${account_id}/cfd_tunnel" 2>/dev/null)
if [[ -z "$response" ]]; then
error "Failed to list tunnels. Check API credentials."
exit 1
fi
local tunnel_id=$(echo "$response" | jq -r '.result[0].id // empty' 2>/dev/null)
if [[ -z "$tunnel_id" ]]; then
error "Could not find tunnel ID"
debug "Response: $response"
exit 1
fi
info "Tunnel ID: $tunnel_id"
echo "$tunnel_id"
}
# Function to get tunnel name
get_tunnel_name() {
local account_id="$1"
local tunnel_id="$2"
local response=$(cf_api_request "GET" "/accounts/${account_id}/cfd_tunnel/${tunnel_id}")
local tunnel_name=$(echo "$response" | jq -r '.result.name // empty')
echo "$tunnel_name"
}
# Function to configure tunnel routes
configure_tunnel_routes() {
local account_id="$1"
local tunnel_id="$2"
local tunnel_name="$3"
info "Configuring tunnel routes for: $tunnel_name"
# Build ingress rules array
local ingress_array="["
local first=true
for subdomain in "${!RPC_ENDPOINTS[@]}"; do
local service="${RPC_ENDPOINTS[$subdomain]}"
local hostname="${subdomain}.${DOMAIN}"
if [[ "$first" == "true" ]]; then
first=false
else
ingress_array+=","
fi
# Determine if WebSocket
local is_ws=false
if [[ "$subdomain" == *"ws"* ]]; then
is_ws=true
fi
# Build ingress rule
# Add noTLSVerify to skip certificate validation (certificates don't have IP SANs)
if [[ "$is_ws" == "true" ]]; then
ingress_array+="{\"hostname\":\"${hostname}\",\"service\":\"${service}\",\"originRequest\":{\"httpHostHeader\":\"${hostname}\",\"noTLSVerify\":true}}"
else
ingress_array+="{\"hostname\":\"${hostname}\",\"service\":\"${service}\",\"originRequest\":{\"noTLSVerify\":true}}"
fi
info " Adding route: ${hostname} → ${service}"
done
# Add catch-all (must be last)
ingress_array+=",{\"service\":\"http_status:404\"}]"
# Create config JSON
local config_data=$(echo "$ingress_array" | jq -c '{
config: {
ingress: .
}
}')
info "Updating tunnel configuration..."
local response=$(cf_api_request "PUT" "/accounts/${account_id}/cfd_tunnel/${tunnel_id}/configurations" "$config_data")
if echo "$response" | jq -e '.success' >/dev/null 2>&1; then
info "✓ Tunnel routes configured successfully"
else
local errors=$(echo "$response" | jq -r '.errors[]?.message // "Unknown error"' | head -3)
error "Failed to configure tunnel routes: $errors"
debug "Response: $response"
return 1
fi
}
# Function to create or update DNS record
create_or_update_dns_record() {
local zone_id="$1"
local name="$2"
local target="$3"
local proxied="${4:-true}"
# Check if record exists
local response=$(cf_api_request "GET" "/zones/${zone_id}/dns_records?name=${name}.${DOMAIN}&type=CNAME")
local record_id=$(echo "$response" | jq -r '.result[0].id // empty')
local data=$(jq -n \
--arg name "${name}.${DOMAIN}" \
--arg target "$target" \
--argjson proxied "$proxied" \
'{
type: "CNAME",
name: $name,
content: $target,
proxied: $proxied,
ttl: 1
}')
if [[ -n "$record_id" ]]; then
info " Updating existing DNS record: ${name}.${DOMAIN}"
response=$(cf_api_request "PUT" "/zones/${zone_id}/dns_records/${record_id}" "$data")
else
info " Creating DNS record: ${name}.${DOMAIN}"
response=$(cf_api_request "POST" "/zones/${zone_id}/dns_records" "$data")
fi
if echo "$response" | jq -e '.success' >/dev/null 2>&1; then
info " ✓ DNS record configured"
else
error " ✗ Failed to configure DNS record"
return 1
fi
}
# Function to configure DNS records
configure_dns_records() {
local zone_id="$1"
local tunnel_id="$2"
local tunnel_target="${tunnel_id}.cfargotunnel.com"
info "Configuring DNS records..."
info "Tunnel target: $tunnel_target"
for subdomain in "${!RPC_ENDPOINTS[@]}"; do
create_or_update_dns_record "$zone_id" "$subdomain" "$tunnel_target" "true"
done
}
# Main execution
main() {
info "Cloudflare API Configuration Script"
info "===================================="
echo ""
# Validate credentials
if [[ -z "$CLOUDFLARE_API_TOKEN" ]] && [[ -z "$CLOUDFLARE_EMAIL" ]] && [[ -z "$CLOUDFLARE_API_KEY" ]]; then
error "Cloudflare API credentials required!"
echo ""
echo "Set one of:"
echo " export CLOUDFLARE_API_TOKEN='your-api-token'"
echo " OR"
echo " export CLOUDFLARE_EMAIL='your-email@example.com'"
echo " export CLOUDFLARE_API_KEY='your-api-key'"
echo ""
echo "You can also create a .env file in the project root with these variables."
exit 1
fi
# If API_KEY is provided but no email, we need email for Global API Key
if [[ -n "$CLOUDFLARE_API_KEY" ]] && [[ -z "$CLOUDFLARE_EMAIL" ]] && [[ -z "$CLOUDFLARE_API_TOKEN" ]]; then
error "CLOUDFLARE_API_KEY requires CLOUDFLARE_EMAIL"
error "Please add CLOUDFLARE_EMAIL to your .env file"
error ""
error "OR create an API Token instead:"
error " 1. Go to: https://dash.cloudflare.com/profile/api-tokens"
error " 2. Create token with: Zone:DNS:Edit, Account:Cloudflare Tunnel:Edit"
error " 3. Set CLOUDFLARE_API_TOKEN in .env"
exit 1
fi
# Get zone ID
local zone_id=$(get_zone_id)
# Get account ID
local account_id="${CLOUDFLARE_ACCOUNT_ID:-}"
if [[ -z "$account_id" ]]; then
account_id=$(get_account_id)
else
info "Using provided Account ID: $account_id"
fi
# Get tunnel ID - try from .env first, then extraction, then API
local tunnel_id="${CLOUDFLARE_TUNNEL_ID:-}"
# If not in .env, try to extract from JWT token
if [[ -z "$tunnel_id" ]] && [[ "$TUNNEL_TOKEN" == *.*.* ]]; then
local payload=$(echo "$TUNNEL_TOKEN" | cut -d'.' -f2)
local padding=$((4 - ${#payload} % 4))
if [[ $padding -ne 4 ]]; then
payload="${payload}$(printf '%*s' $padding | tr ' ' '=')"
fi
if command -v python3 >/dev/null 2>&1; then
tunnel_id=$(echo "$payload" | python3 -c "import sys, base64, json; payload=sys.stdin.read().strip(); padding=4-len(payload)%4; payload+=('='*padding if padding<4 else ''); data=json.loads(base64.b64decode(payload)); print(data.get('t', ''))" 2>/dev/null || echo "")
fi
fi
# If extraction failed, try API (but don't fail if API doesn't work)
if [[ -z "$tunnel_id" ]]; then
tunnel_id=$(get_tunnel_id "$account_id" "$TUNNEL_TOKEN" 2>/dev/null || echo "")
fi
if [[ -z "$tunnel_id" ]]; then
error "Could not determine tunnel ID"
error "Please set CLOUDFLARE_TUNNEL_ID in .env file"
error "Or ensure API credentials are valid to fetch it automatically"
exit 1
fi
info "Using Tunnel ID: $tunnel_id"
local tunnel_name=$(get_tunnel_name "$account_id" "$tunnel_id" 2>/dev/null || echo "tunnel-${tunnel_id:0:8}")
echo ""
info "Configuration Summary:"
echo " Domain: $DOMAIN"
echo " Zone ID: $zone_id"
echo " Account ID: $account_id"
echo " Tunnel: $tunnel_name (ID: $tunnel_id)"
echo ""
# Configure tunnel routes
echo "=========================================="
info "Step 1: Configuring Tunnel Routes"
echo "=========================================="
configure_tunnel_routes "$account_id" "$tunnel_id" "$tunnel_name"
echo ""
echo "=========================================="
info "Step 2: Configuring DNS Records"
echo "=========================================="
configure_dns_records "$zone_id" "$tunnel_id"
echo ""
echo "=========================================="
info "Configuration Complete!"
echo "=========================================="
echo ""
info "Next steps:"
echo " 1. Wait 1-2 minutes for DNS propagation"
echo " 2. Test endpoints:"
echo " curl https://rpc-http-pub.d-bis.org/health"
echo " 3. Verify in Cloudflare Dashboard:"
echo " - Zero Trust → Networks → Tunnels → Check routes"
echo " - DNS → Records → Verify CNAME records"
}
# Run main function
main
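The tunnel-ID extraction in `get_tunnel_id_from_token` boils down to base64-decoding a JSON payload and reading its `t` field. A self-contained sketch of that step (the token here is fabricated for illustration, not a real credential):

```shell
#!/usr/bin/env bash
# Sketch: extract the tunnel ID ("t" field) from a Cloudflare-style tunnel token.
# The token is generated here from fake JSON; real tokens come from the dashboard.
token=$(printf '%s' '{"a":"acct123","t":"tunnel-abc","s":"secret"}' | base64 | tr -d '=\n')
# Tunnel tokens are base64-encoded JSON; re-pad to a multiple of 4 before decoding.
pad=$(( (4 - ${#token} % 4) % 4 ))
padded="$token$(printf '%*s' "$pad" '' | tr ' ' '=')"
tunnel_id=$(printf '%s' "$padded" | base64 -d 2>/dev/null | jq -r '.t // empty')
echo "$tunnel_id"   # tunnel-abc
```

Requires `jq`, which the main script already depends on.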

scripts/configure-env.sh Executable file

@@ -0,0 +1,54 @@
#!/bin/bash
# Quick configuration script to update .env with Proxmox credentials
set -e
HOST="${1:-192.168.11.10}"
USER="${2:-root@pam}"
TOKEN_NAME="${3:-mcp-server}"
echo "Configuring .env file with Proxmox connection..."
echo "Host: $HOST"
echo "User: $USER"
echo "Token Name: $TOKEN_NAME"
echo ""
# Update .env file
cat > "$HOME/.env" << EOF
# Proxmox MCP Server Configuration
# Configured with: $HOST
# Proxmox Configuration
PROXMOX_HOST=$HOST
PROXMOX_USER=$USER
PROXMOX_TOKEN_NAME=$TOKEN_NAME
PROXMOX_TOKEN_VALUE=your-token-secret-here
# Security Settings
# ⚠️ WARNING: Setting PROXMOX_ALLOW_ELEVATED=true enables DESTRUCTIVE operations
PROXMOX_ALLOW_ELEVATED=false
# Optional Settings
PROXMOX_PORT=8006
EOF
echo "✅ .env file updated!"
echo ""
echo "⚠️ IMPORTANT: You need to create the API token and add it to .env"
echo ""
echo "Option 1: Via Proxmox Web UI (Recommended)"
echo " 1. Go to: https://$HOST:8006"
echo " 2. Navigate to: Datacenter → Permissions → API Tokens"
echo " 3. Click 'Add' and create token: $TOKEN_NAME"
echo " 4. Copy the secret value"
echo " 5. Update ~/.env: PROXMOX_TOKEN_VALUE=<paste-secret-here>"
echo ""
echo "Option 2: Try automated token creation"
echo " ./create-proxmox-token.sh $HOST $USER <password> $TOKEN_NAME"
echo ""
echo "Current .env contents:"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat "$HOME/.env" | grep -v "TOKEN_VALUE="
echo "PROXMOX_TOKEN_VALUE=<needs-to-be-added>"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
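The generated `.env` still carries a placeholder token; a minimal sketch of checking for that before use (the temp-file path is illustrative — the script above writes to `$HOME/.env`):

```shell
#!/usr/bin/env bash
# Sketch: refuse to proceed while the .env token is still the placeholder.
# ENV_FILE is a temp file for illustration; the script above writes $HOME/.env.
ENV_FILE=$(mktemp)
cat > "$ENV_FILE" <<'EOF'
PROXMOX_HOST=192.168.11.10
PROXMOX_TOKEN_VALUE=your-token-secret-here
EOF
# shellcheck disable=SC1090
source "$ENV_FILE"
if [ "$PROXMOX_TOKEN_VALUE" = "your-token-secret-here" ]; then
  status="placeholder"
else
  status="configured"
fi
echo "$status"   # placeholder
rm -f "$ENV_FILE"
```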


@@ -0,0 +1,251 @@
#!/usr/bin/env bash
# Configure Nginx for Core RPC Node (VMID 2500)
# This configures Nginx as a reverse proxy for Besu RPC endpoints
set -e
VMID=2500
HOSTNAME="besu-rpc-1"
IP="192.168.11.250"
PROXMOX_HOST="192.168.11.10"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_info "Configuring Nginx for Core RPC Node (VMID $VMID)"
log_info "Hostname: $HOSTNAME"
log_info "IP: $IP"
echo ""
# Create Nginx configuration
sshpass -p "${PROXMOX_PASSWORD:?Set PROXMOX_PASSWORD}" ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash" <<'NGINX_CONFIG_EOF'
cat > /etc/nginx/sites-available/rpc-core <<'EOF'
# HTTP to HTTPS redirect
server {
listen 80;
listen [::]:80;
server_name besu-rpc-1 192.168.11.250 rpc-core.besu.local rpc-core.chainid138.local;
# Redirect all HTTP to HTTPS
return 301 https://$host$request_uri;
}
# HTTPS server - HTTP RPC API (port 8545)
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name besu-rpc-1 192.168.11.250 rpc-core.besu.local rpc-core.chainid138.local;
# SSL configuration
ssl_certificate /etc/nginx/ssl/rpc.crt;
ssl_certificate_key /etc/nginx/ssl/rpc.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
# Logging
access_log /var/log/nginx/rpc-core-http-access.log;
error_log /var/log/nginx/rpc-core-http-error.log;
# Increase timeouts for RPC calls
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
send_timeout 300s;
client_max_body_size 10M;
# HTTP RPC endpoint (port 8545)
location / {
proxy_pass http://127.0.0.1:8545;
proxy_http_version 1.1;
# Headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
# Buffer settings (disable for RPC)
proxy_buffering off;
proxy_request_buffering off;
# CORS headers (if needed for web apps)
add_header Access-Control-Allow-Origin * always;
add_header Access-Control-Allow-Methods "GET, POST, OPTIONS" always;
add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;
# Handle OPTIONS requests
if ($request_method = OPTIONS) {
return 204;
}
}
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# Metrics endpoint (if exposed)
location /metrics {
proxy_pass http://127.0.0.1:9545;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
# HTTPS server - WebSocket RPC API (port 8546)
server {
listen 8443 ssl http2;
listen [::]:8443 ssl http2;
server_name besu-rpc-1 192.168.11.250 rpc-core-ws.besu.local rpc-core-ws.chainid138.local;
# SSL configuration
ssl_certificate /etc/nginx/ssl/rpc.crt;
ssl_certificate_key /etc/nginx/ssl/rpc.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
# Logging
access_log /var/log/nginx/rpc-core-ws-access.log;
error_log /var/log/nginx/rpc-core-ws-error.log;
# WebSocket RPC endpoint (port 8546)
location / {
proxy_pass http://127.0.0.1:8546;
proxy_http_version 1.1;
# WebSocket headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Long timeouts for WebSocket connections
proxy_read_timeout 86400;
proxy_send_timeout 86400;
proxy_connect_timeout 300s;
}
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
}
EOF
# Enable the site
ln -sf /etc/nginx/sites-available/rpc-core /etc/nginx/sites-enabled/
rm -f /etc/nginx/sites-enabled/default
# Test configuration
nginx -t
# Reload Nginx
systemctl enable nginx
systemctl restart nginx
NGINX_CONFIG_EOF
if [ $? -eq 0 ]; then
log_success "Nginx configuration created"
else
log_error "Failed to create Nginx configuration"
exit 1
fi
# Verify Nginx is running
log_info "Verifying Nginx status..."
if sshpass -p "${PROXMOX_PASSWORD:?Set PROXMOX_PASSWORD}" ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- systemctl is-active nginx >/dev/null 2>&1"; then
log_success "Nginx service is active"
else
log_error "Nginx service is not active"
exit 1
fi
# Check if ports are listening
log_info "Checking listening ports..."
PORTS=$(sshpass -p "${PROXMOX_PASSWORD:?Set PROXMOX_PASSWORD}" ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- ss -tlnp 2>&1 | grep -E ':80|:443|:8443' || echo ''")
if echo "$PORTS" | grep -q ':80'; then
log_success "Port 80 is listening"
else
log_warn "Port 80 may not be listening"
fi
if echo "$PORTS" | grep -q ':443'; then
log_success "Port 443 is listening"
else
log_warn "Port 443 may not be listening"
fi
if echo "$PORTS" | grep -q ':8443'; then
log_success "Port 8443 is listening"
else
log_warn "Port 8443 may not be listening"
fi
# Test RPC endpoint through Nginx
log_info "Testing RPC endpoint through Nginx..."
RPC_TEST=$(sshpass -p "${PROXMOX_PASSWORD:?Set PROXMOX_PASSWORD}" ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- timeout 5 curl -k -s -X POST https://localhost:443 \
-H 'Content-Type: application/json' \
-d '{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}' 2>&1 || echo 'FAILED'")
if echo "$RPC_TEST" | grep -q "result"; then
BLOCK_NUM=$(echo "$RPC_TEST" | grep -oP '"result":"\K[^"]+' | head -1)
log_success "RPC endpoint is responding through Nginx!"
log_info "Current block: $BLOCK_NUM"
else
log_warn "RPC endpoint test failed or needs more time"
log_info "Response: $RPC_TEST"
fi
echo ""
log_success "Nginx configuration complete!"
echo ""
log_info "Configuration Summary:"
log_info " - HTTP RPC: https://$IP:443 (proxies to localhost:8545)"
log_info " - WebSocket RPC: https://$IP:8443 (proxies to localhost:8546)"
log_info " - HTTP redirect: http://$IP:80 → https://$IP:443"
log_info " - Health check: https://$IP:443/health"
echo ""
log_info "Next steps:"
log_info " 1. Test from external: curl -k https://$IP:443/health"
log_info " 2. Test RPC: curl -k -X POST https://$IP:443 -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}'"
log_info " 3. Replace self-signed certificate with Let's Encrypt if needed"
log_info " 4. Configure firewall rules if needed"
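The block number returned by `eth_blockNumber` is a hex string; a small sketch of converting it to decimal (the JSON response below is canned rather than fetched live):

```shell
#!/usr/bin/env bash
# Sketch: parse a JSON-RPC eth_blockNumber response and convert the hex result
# to decimal. The response is a canned example; a live call would use curl as above.
response='{"jsonrpc":"2.0","id":1,"result":"0x4b7"}'
hex=$(echo "$response" | jq -r '.result')
dec=$((16#${hex#0x}))   # strip the 0x prefix and interpret as base-16
echo "$dec"             # 1207
```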


@@ -0,0 +1,165 @@
#!/usr/bin/env bash
# Configure additional security features for Nginx on VMID 2500
# - Rate limiting
# - Firewall rules
# - Security headers enhancement
set -e
VMID=2500
PROXMOX_HOST="192.168.11.10"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_info "Configuring additional security features for Nginx on VMID $VMID"
echo ""
# Configure rate limiting in Nginx
log_info "1. Configuring rate limiting..."
sshpass -p "${PROXMOX_PASSWORD:?Set PROXMOX_PASSWORD}" ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash" <<'RATE_LIMIT_EOF'
# Add rate limiting configuration to nginx.conf
if ! grep -q "limit_req_zone" /etc/nginx/nginx.conf; then
# Add rate limiting zones just inside the http block (limit_req_zone is only valid in http context)
sed -i '/^http {/a\\n# Rate limiting zones\nlimit_req_zone $binary_remote_addr zone=rpc_limit:10m rate=10r/s;\nlimit_req_zone $binary_remote_addr zone=rpc_burst:10m rate=50r/s;\nlimit_conn_zone $binary_remote_addr zone=conn_limit:10m;\n' /etc/nginx/nginx.conf
fi
# Update site configuration to use rate limiting
if [ -f /etc/nginx/sites-available/rpc-core ]; then
# Add rate limiting to HTTP RPC location
sed -i '/location \/ {/,/^ }/ {
/proxy_pass http:\/\/127.0.0.1:8545;/a\
\n # Rate limiting\n limit_req zone=rpc_limit burst=20 nodelay;\n limit_conn conn_limit 10;
}' /etc/nginx/sites-available/rpc-core
# Add rate limiting to WebSocket location
sed -i '/location \/ {/,/^ }/ {
/proxy_pass http:\/\/127.0.0.1:8546;/a\
\n # Rate limiting\n limit_req zone=rpc_burst burst=50 nodelay;\n limit_conn conn_limit 5;
}' /etc/nginx/sites-available/rpc-core
fi
# Test configuration
nginx -t
RATE_LIMIT_EOF
if [ $? -eq 0 ]; then
log_success "Rate limiting configured"
else
log_warn "Rate limiting configuration may need manual adjustment"
fi
# Configure firewall rules (if iptables is available)
log_info ""
log_info "2. Configuring firewall rules..."
sshpass -p "${PROXMOX_PASSWORD:?Set PROXMOX_PASSWORD}" ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash" <<'FIREWALL_EOF'
# Check if iptables is available
if command -v iptables >/dev/null 2>&1; then
# Allow HTTP
iptables -A INPUT -p tcp --dport 80 -j ACCEPT 2>/dev/null || true
# Allow HTTPS
iptables -A INPUT -p tcp --dport 443 -j ACCEPT 2>/dev/null || true
# Allow WebSocket HTTPS
iptables -A INPUT -p tcp --dport 8443 -j ACCEPT 2>/dev/null || true
# Allow Besu RPC (internal only)
iptables -A INPUT -p tcp -s 127.0.0.1 --dport 8545 -j ACCEPT 2>/dev/null || true
iptables -A INPUT -p tcp -s 127.0.0.1 --dport 8546 -j ACCEPT 2>/dev/null || true
# Allow Besu P2P (if needed)
iptables -A INPUT -p tcp --dport 30303 -j ACCEPT 2>/dev/null || true
# Allow Besu Metrics (internal only)
iptables -A INPUT -p tcp -s 127.0.0.1 --dport 9545 -j ACCEPT 2>/dev/null || true
echo "Firewall rules configured (may need to be persisted)"
else
echo "iptables not available, skipping firewall configuration"
fi
FIREWALL_EOF
log_success "Firewall rules configured"
# Enhance security headers
log_info ""
log_info "3. Enhancing security headers..."
sshpass -p "${PROXMOX_PASSWORD:?Set PROXMOX_PASSWORD}" ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash" <<'SECURITY_EOF'
if [ -f /etc/nginx/sites-available/rpc-core ]; then
# Add additional security headers if not present
if ! grep -q "Referrer-Policy" /etc/nginx/sites-available/rpc-core; then
sed -i '/add_header X-XSS-Protection/a\
add_header Referrer-Policy "strict-origin-when-cross-origin" always;\
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
' /etc/nginx/sites-available/rpc-core
fi
# Test configuration
nginx -t
fi
SECURITY_EOF
if [ $? -eq 0 ]; then
log_success "Security headers enhanced"
else
log_warn "Security headers may need manual adjustment"
fi
# Reload Nginx
log_info ""
log_info "4. Reloading Nginx..."
sshpass -p "${PROXMOX_PASSWORD:?Set PROXMOX_PASSWORD}" ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- systemctl reload nginx"
if [ $? -eq 0 ]; then
log_success "Nginx reloaded successfully"
else
log_error "Failed to reload Nginx"
exit 1
fi
# Verify configuration
log_info ""
log_info "5. Verifying configuration..."
if sshpass -p "${PROXMOX_PASSWORD:?Set PROXMOX_PASSWORD}" ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- nginx -t 2>&1 | grep -q 'successful'"; then
log_success "Nginx configuration is valid"
else
log_error "Nginx configuration test failed"
exit 1
fi
# Test rate limiting
log_info ""
log_info "6. Testing rate limiting..."
RATE_TEST=$(sshpass -p "${PROXMOX_PASSWORD:?Set PROXMOX_PASSWORD}" ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- timeout 2 curl -k -s -X POST https://localhost:443 \
-H 'Content-Type: application/json' \
-d '{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}' 2>&1 || echo 'TEST'")
if echo "$RATE_TEST" | grep -q "result\|jsonrpc"; then
log_success "RPC endpoint still responding (rate limiting active)"
else
log_warn "Rate limiting test inconclusive"
fi
echo ""
log_success "Security configuration complete!"
echo ""
log_info "Configuration Summary:"
log_info " ✓ Rate limiting: 10 req/s (burst: 20) for HTTP RPC"
log_info " ✓ Rate limiting: 50 req/s (burst: 50) for WebSocket RPC"
log_info " ✓ Connection limiting: 10 connections per IP (HTTP), 5 (WebSocket)"
log_info " ✓ Firewall rules: Configured for ports 80, 443, 8443"
log_info " ✓ Enhanced security headers: Added"
echo ""
log_info "Note: Firewall rules may need to be persisted (iptables-save)"
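For reference, the rate-limiting directives that the `sed` edits above aim to produce look roughly like this in the final Nginx config (zone sizes and rates mirror the script; adjust as needed):

```nginx
# Sketch of the intended result of the sed edits above.
# limit_req_zone / limit_conn_zone must sit inside the http block.
limit_req_zone $binary_remote_addr zone=rpc_limit:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

server {
    location / {
        proxy_pass http://127.0.0.1:8545;
        # Allow short bursts of 20 requests without delay, then reject excess with 503.
        limit_req zone=rpc_limit burst=20 nodelay;
        limit_conn conn_limit 10;
    }
}
```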

scripts/copy-all-to-proxmox.sh Executable file

@@ -0,0 +1,43 @@
#!/usr/bin/env bash
# Copy entire smom-dbis-138-proxmox directory to Proxmox host
# This copies everything needed for deployment
# Suppress locale warnings
export LC_ALL=C
export LANG=C
HOST="${1:-192.168.11.10}"
USER="${2:-root}"
REMOTE_DIR="${3:-/opt/smom-dbis-138-proxmox}"
echo "Copying entire project to $USER@$HOST:$REMOTE_DIR"
# Test connection (suppress locale warnings)
if ! ssh -o BatchMode=yes -o ConnectTimeout=5 "$USER@$HOST" "export LC_ALL=C; export LANG=C; exit" 2>/dev/null; then
echo "❌ Cannot connect to $HOST"
echo " Ensure SSH key is set up: ssh-copy-id $USER@$HOST"
exit 1
fi
# Create remote directory
ssh "$USER@$HOST" "mkdir -p $REMOTE_DIR"
# Copy entire smom-dbis-138-proxmox directory
echo "Copying files (this may take a few minutes)..."
rsync -avz --exclude='.git' --exclude='*.log' \
smom-dbis-138-proxmox/ \
"$USER@$HOST:$REMOTE_DIR/" || {
echo "⚠ rsync not available, using scp..."
scp -r smom-dbis-138-proxmox/* "$USER@$HOST:$REMOTE_DIR/"
}
# Make all scripts executable
ssh "$USER@$HOST" "find $REMOTE_DIR -name '*.sh' -exec chmod +x {} \;"
echo "✅ All files copied to $REMOTE_DIR"
echo ""
echo "SSH to Proxmox host and run:"
echo " ssh $USER@$HOST"
echo " cd $REMOTE_DIR"
echo " sudo ./scripts/deployment/deploy-phased.sh --source-project /path/to/smom-dbis-138"
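The rsync-with-fallback copy pattern can be exercised locally; a minimal sketch using temporary directories (`cp` stands in for the `scp` fallback):

```shell
#!/usr/bin/env bash
# Sketch: the rsync-or-fallback copy pattern from the script, run on local temp dirs.
set -euo pipefail
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/deploy.sh"
mkdir "$src/.git"; echo "x" > "$src/.git/HEAD"   # should be excluded by rsync
if command -v rsync >/dev/null 2>&1; then
  rsync -a --exclude='.git' "$src/" "$dst/"
else
  # cp stands in for the scp fallback when rsync is missing
  cp -r "$src/." "$dst/"
fi
content=$(cat "$dst/deploy.sh")
echo "$content"   # hello
rm -rf "$src" "$dst"
```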


@@ -0,0 +1,55 @@
#!/usr/bin/env bash
# Quick script to copy deployment scripts to Proxmox host
# Usage: ./scripts/copy-scripts-to-proxmox.sh [host] [user]
# Suppress locale warnings
export LC_ALL=C
export LANG=C
HOST="${1:-192.168.11.10}"
USER="${2:-root}"
REMOTE_DIR="/opt/smom-dbis-138-proxmox"
echo "Copying scripts to $USER@$HOST:$REMOTE_DIR"
# Test connection (suppress locale warnings)
if ! ssh -o BatchMode=yes -o ConnectTimeout=5 "$USER@$HOST" "export LC_ALL=C; export LANG=C; exit" 2>/dev/null; then
echo "❌ Cannot connect to $HOST"
echo " Run: ssh $USER@$HOST"
exit 1
fi
# Create directory structure (suppress locale warnings)
ssh "$USER@$HOST" "export LC_ALL=C; export LANG=C; mkdir -p $REMOTE_DIR/{scripts/{deployment,validation,network},config,lib,install}" 2>/dev/null
# Copy deployment scripts
echo "Copying deployment scripts..."
scp smom-dbis-138-proxmox/scripts/deployment/deploy-phased.sh "$USER@$HOST:$REMOTE_DIR/scripts/deployment/" 2>/dev/null || true
scp smom-dbis-138-proxmox/scripts/deployment/pre-cache-os-template.sh "$USER@$HOST:$REMOTE_DIR/scripts/deployment/" 2>/dev/null || true
scp smom-dbis-138-proxmox/scripts/deployment/deploy-besu-nodes.sh "$USER@$HOST:$REMOTE_DIR/scripts/deployment/" 2>/dev/null || true
scp smom-dbis-138-proxmox/scripts/deployment/deploy-ccip-nodes.sh "$USER@$HOST:$REMOTE_DIR/scripts/deployment/" 2>/dev/null || true
# Copy library files (needed by deployment scripts)
echo "Copying library files..."
if [[ -d "smom-dbis-138-proxmox/lib" ]]; then
scp smom-dbis-138-proxmox/lib/*.sh "$USER@$HOST:$REMOTE_DIR/lib/" 2>/dev/null || true
fi
# Copy configuration files
echo "Copying configuration files..."
scp smom-dbis-138-proxmox/config/proxmox.conf "$USER@$HOST:$REMOTE_DIR/config/" 2>/dev/null || true
# Copy validation scripts (optional but useful)
echo "Copying validation scripts..."
scp smom-dbis-138-proxmox/scripts/validation/check-prerequisites.sh "$USER@$HOST:$REMOTE_DIR/scripts/validation/" 2>/dev/null || true
# Make all scripts executable (suppress locale warnings)
ssh "$USER@$HOST" "export LC_ALL=C; export LANG=C; chmod +x $REMOTE_DIR/scripts/**/*.sh 2>/dev/null; chmod +x $REMOTE_DIR/lib/*.sh 2>/dev/null; true" 2>/dev/null
echo "✅ Scripts copied to $REMOTE_DIR/scripts/deployment/"
echo ""
echo "Run on Proxmox host:"
echo " ssh $USER@$HOST"
echo " cd $REMOTE_DIR"
echo " sudo ./scripts/deployment/deploy-phased.sh --source-project /path/to/smom-dbis-138"

scripts/copy-to-proxmox.sh Executable file

@@ -0,0 +1,141 @@
#!/usr/bin/env bash
# Copy Deployment Scripts to Proxmox Host
# Copies all necessary deployment scripts and configuration to the Proxmox host
set -euo pipefail
# Suppress locale warnings
export LC_ALL=C
export LANG=C
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
PROXMOX_USER="${PROXMOX_USER:-root}"
REMOTE_BASE_DIR="${REMOTE_BASE_DIR:-/opt/smom-dbis-138-proxmox}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
echo "========================================="
echo "Copy Deployment Scripts to Proxmox Host"
echo "========================================="
echo ""
echo "Proxmox Host: $PROXMOX_HOST"
echo "User: $PROXMOX_USER"
echo "Remote Directory: $REMOTE_BASE_DIR"
echo ""
# Check if SSH key is available
if ! ssh -o BatchMode=yes -o ConnectTimeout=5 "$PROXMOX_USER@$PROXMOX_HOST" exit 2>/dev/null; then
echo "❌ SSH connection to $PROXMOX_HOST failed"
echo ""
echo "Please ensure:"
echo " 1. SSH key is set up for passwordless access"
echo " 2. Host is reachable: ping $PROXMOX_HOST"
echo " 3. User has access: ssh $PROXMOX_USER@$PROXMOX_HOST"
exit 1
fi
echo "✅ SSH connection successful"
echo ""
# Create remote directory structure
echo "Creating remote directory structure..."
ssh "$PROXMOX_USER@$PROXMOX_HOST" "mkdir -p $REMOTE_BASE_DIR/{scripts/{deployment,validation,network,manage},config,lib,install,logs,docs}" || {
echo "❌ Failed to create remote directories"
exit 1
}
echo "✅ Remote directories created"
echo ""
# Copy configuration files
echo "Copying configuration files..."
scp -r "$PROJECT_ROOT/smom-dbis-138-proxmox/config/"* "$PROXMOX_USER@$PROXMOX_HOST:$REMOTE_BASE_DIR/config/" 2>/dev/null || {
echo "⚠ Configuration files copy had issues (may not exist)"
}
echo "✅ Configuration files copied"
echo ""
# Copy library files
echo "Copying library files..."
if [[ -d "$PROJECT_ROOT/smom-dbis-138-proxmox/lib" ]]; then
scp -r "$PROJECT_ROOT/smom-dbis-138-proxmox/lib/"* "$PROXMOX_USER@$PROXMOX_HOST:$REMOTE_BASE_DIR/lib/" || {
echo "⚠ Library files copy had issues"
}
echo "✅ Library files copied"
else
echo "⚠ lib/ directory not found, skipping"
fi
echo ""
# Copy deployment scripts
echo "Copying deployment scripts..."
if [[ -d "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/deployment" ]]; then
scp "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/deployment/"*.sh "$PROXMOX_USER@$PROXMOX_HOST:$REMOTE_BASE_DIR/scripts/deployment/" || {
echo "❌ Failed to copy deployment scripts"
exit 1
}
# Make scripts executable
ssh "$PROXMOX_USER@$PROXMOX_HOST" "chmod +x $REMOTE_BASE_DIR/scripts/deployment/*.sh"
echo "✅ Deployment scripts copied and made executable"
else
echo "❌ scripts/deployment/ directory not found"
exit 1
fi
echo ""
# Copy validation scripts
echo "Copying validation scripts..."
if [[ -d "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/validation" ]]; then
scp "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/validation/"*.sh "$PROXMOX_USER@$PROXMOX_HOST:$REMOTE_BASE_DIR/scripts/validation/" 2>/dev/null || {
echo "⚠ Validation scripts copy had issues"
}
ssh "$PROXMOX_USER@$PROXMOX_HOST" "chmod +x $REMOTE_BASE_DIR/scripts/validation/*.sh 2>/dev/null || true"
echo "✅ Validation scripts copied"
else
echo "⚠ scripts/validation/ directory not found"
fi
echo ""
# Copy network scripts
echo "Copying network scripts..."
if [[ -d "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/network" ]]; then
scp "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/network/"*.sh "$PROXMOX_USER@$PROXMOX_HOST:$REMOTE_BASE_DIR/scripts/network/" 2>/dev/null || true
ssh "$PROXMOX_USER@$PROXMOX_HOST" "chmod +x $REMOTE_BASE_DIR/scripts/network/*.sh 2>/dev/null || true"
echo "✅ Network scripts copied"
fi
echo ""
# Copy install scripts (if needed)
echo "Copying install scripts..."
if [[ -d "$PROJECT_ROOT/smom-dbis-138-proxmox/install" ]]; then
scp "$PROJECT_ROOT/smom-dbis-138-proxmox/install/"*.sh "$PROXMOX_USER@$PROXMOX_HOST:$REMOTE_BASE_DIR/install/" 2>/dev/null || {
echo "⚠ Install scripts copy had issues"
}
ssh "$PROXMOX_USER@$PROXMOX_HOST" "chmod +x $REMOTE_BASE_DIR/install/*.sh 2>/dev/null || true"
echo "✅ Install scripts copied"
fi
echo ""
# Verify copied files
echo "Verifying copied files..."
echo "Deployment scripts on remote host:"
ssh "$PROXMOX_USER@$PROXMOX_HOST" "ls -lh $REMOTE_BASE_DIR/scripts/deployment/*.sh 2>/dev/null | head -10" || echo " (none found)"
echo ""
echo "========================================="
echo "✅ Files Copied Successfully"
echo "========================================="
echo ""
echo "Remote location: $REMOTE_BASE_DIR"
echo ""
echo "You can now SSH to the Proxmox host and run:"
echo " ssh $PROXMOX_USER@$PROXMOX_HOST"
echo " cd $REMOTE_BASE_DIR"
echo " sudo ./scripts/deployment/deploy-phased.sh --source-project /path/to/smom-dbis-138"
echo ""
echo "Or run remotely:"
echo " ssh $PROXMOX_USER@$PROXMOX_HOST 'cd $REMOTE_BASE_DIR && sudo ./scripts/deployment/deploy-phased.sh --source-project /path/to/smom-dbis-138'"
echo ""


@@ -0,0 +1,193 @@
#!/usr/bin/env bash
# Create DNS record for rpc-core.d-bis.org using Cloudflare API
# Usage: ./create-dns-record-rpc-core.sh [API_TOKEN] [ZONE_ID]
# Supports both API_TOKEN and API_KEY+EMAIL from .env file
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
DOMAIN="rpc-core.d-bis.org"
NAME="rpc-core"
IP="192.168.11.250"
# Load .env if exists
if [ -f "$SCRIPT_DIR/../.env" ]; then
source "$SCRIPT_DIR/../.env" 2>/dev/null
elif [ -f "$SCRIPT_DIR/.env" ]; then
source "$SCRIPT_DIR/.env" 2>/dev/null
fi
# Get API credentials (token or key+email)
if [ -n "$1" ]; then
# Token provided as argument
API_TOKEN="$1"
API_EMAIL=""
API_KEY=""
AUTH_METHOD="token"
log_info "Using API token from argument"
elif [ -n "$CLOUDFLARE_API_TOKEN" ]; then
API_TOKEN="$CLOUDFLARE_API_TOKEN"
API_EMAIL=""
API_KEY=""
AUTH_METHOD="token"
log_info "Using API token from .env"
elif [ -n "$CLOUDFLARE_API_KEY" ] && [ -n "$CLOUDFLARE_EMAIL" ]; then
API_TOKEN=""
API_KEY="$CLOUDFLARE_API_KEY"
API_EMAIL="$CLOUDFLARE_EMAIL"
AUTH_METHOD="key"
log_info "Using API key + email from .env"
else
log_error "No Cloudflare credentials found"
log_info "Usage: $0 [API_TOKEN] [ZONE_ID]"
log_info ""
log_info "Or set in .env file:"
log_info " CLOUDFLARE_API_TOKEN=\"your-token\""
log_info " OR"
log_info " CLOUDFLARE_API_KEY=\"your-key\""
log_info " CLOUDFLARE_EMAIL=\"your-email\""
exit 1
fi
ZONE_ID="${2:-${CLOUDFLARE_ZONE_ID:-}}"
# Set up auth headers
if [ "$AUTH_METHOD" = "token" ]; then
AUTH_HEADER="Authorization: Bearer $API_TOKEN"
AUTH_EXTRA=""
else
AUTH_HEADER="X-Auth-Email: $API_EMAIL"
AUTH_EXTRA="X-Auth-Key: $API_KEY"
fi
# Get Zone ID if not provided
if [ -z "$ZONE_ID" ]; then
log_info "Getting Zone ID for d-bis.org..."
if [ "$AUTH_METHOD" = "token" ]; then
ZONE_RESPONSE=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones?name=d-bis.org" \
-H "$AUTH_HEADER" \
-H "Content-Type: application/json")
else
ZONE_RESPONSE=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones?name=d-bis.org" \
-H "$AUTH_HEADER" \
-H "$AUTH_EXTRA" \
-H "Content-Type: application/json")
fi
ZONE_ID=$(echo "$ZONE_RESPONSE" | grep -o '"id":"[^"]*' | head -1 | cut -d'"' -f4)
if [ -z "$ZONE_ID" ]; then
log_error "Failed to get Zone ID. Check API credentials and domain."
log_info "Response: $(echo "$ZONE_RESPONSE" | head -3)"
exit 1
fi
log_success "Zone ID: $ZONE_ID"
else
log_info "Using Zone ID: $ZONE_ID"
fi
# Check if record already exists
log_info "Checking if DNS record already exists..."
if [ "$AUTH_METHOD" = "token" ]; then
EXISTING=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records?name=$DOMAIN" \
-H "$AUTH_HEADER" \
-H "Content-Type: application/json")
else
EXISTING=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records?name=$DOMAIN" \
-H "$AUTH_HEADER" \
-H "$AUTH_EXTRA" \
-H "Content-Type: application/json")
fi
if echo "$EXISTING" | grep -q '"id"'; then
RECORD_ID=$(echo "$EXISTING" | grep -o '"id":"[^"]*' | head -1 | cut -d'"' -f4)
log_warn "DNS record already exists (ID: $RECORD_ID)"
log_info "Updating existing record..."
# Update existing record
if [ "$AUTH_METHOD" = "token" ]; then
RESPONSE=$(curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
-H "$AUTH_HEADER" \
-H "Content-Type: application/json" \
--data "{
\"type\": \"A\",
\"name\": \"$NAME\",
\"content\": \"$IP\",
\"ttl\": 1,
\"proxied\": false
}")
else
RESPONSE=$(curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
-H "$AUTH_HEADER" \
-H "$AUTH_EXTRA" \
-H "Content-Type: application/json" \
--data "{
\"type\": \"A\",
\"name\": \"$NAME\",
\"content\": \"$IP\",
\"ttl\": 1,
\"proxied\": false
}")
fi
else
log_info "Creating new DNS record..."
# Create new record
if [ "$AUTH_METHOD" = "token" ]; then
RESPONSE=$(curl -s -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records" \
-H "$AUTH_HEADER" \
-H "Content-Type: application/json" \
--data "{
\"type\": \"A\",
\"name\": \"$NAME\",
\"content\": \"$IP\",
\"ttl\": 1,
\"proxied\": false
}")
else
RESPONSE=$(curl -s -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records" \
-H "$AUTH_HEADER" \
-H "$AUTH_EXTRA" \
-H "Content-Type: application/json" \
--data "{
\"type\": \"A\",
\"name\": \"$NAME\",
\"content\": \"$IP\",
\"ttl\": 1,
\"proxied\": false
}")
fi
fi
# Check response
if echo "$RESPONSE" | grep -q '"success":true'; then
log_success "DNS record created/updated successfully!"
# Get record details
RECORD_ID=$(echo "$RESPONSE" | grep -o '"id":"[^"]*' | head -1 | cut -d'"' -f4)
log_info "Record ID: $RECORD_ID"
log_info "Domain: $DOMAIN"
log_info "IP: $IP"
log_info "Proxied: No (DNS only; record is created with proxied=false)"
echo ""
log_info "DNS record created. Wait 2-5 minutes for propagation, then run:"
log_info " pct exec 2500 -- certbot --nginx --non-interactive --agree-tos --email admin@d-bis.org -d rpc-core.d-bis.org --redirect"
else
log_error "Failed to create DNS record"
log_info "Response: $RESPONSE"
exit 1
fi
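The grep/cut JSON parsing in the script above works on well-formed responses but is brittle: it grabs the first `"id"` it finds, which may belong to a nested object. If `jq` is available on the host, a more robust sketch — the sample response below is hypothetical, shaped like a Cloudflare zones reply:

```shell
#!/usr/bin/env bash
# Hypothetical Cloudflare-style response, for illustration only
ZONE_RESPONSE='{"result":[{"id":"023e105f4ecef8ad9ca31a8372d0c353","name":"d-bis.org"}],"success":true}'

# jq selects exactly the zone id, regardless of where other "id" keys appear
ZONE_ID=$(echo "$ZONE_RESPONSE" | jq -r '.result[0].id // empty')

if [ -z "$ZONE_ID" ]; then
echo "Failed to get Zone ID" >&2
exit 1
fi
echo "$ZONE_ID"
```

The same `.result[0].id // empty` pattern applies to the record-lookup and create/update responses as well.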

scripts/create-proxmox-token.sh Executable file

@@ -0,0 +1,95 @@
#!/bin/bash
# Script to create a Proxmox API token via the Proxmox API
#
# Usage:
# ./create-proxmox-token.sh <proxmox-host> <username> <password> <token-name>
#
# Example:
# ./create-proxmox-token.sh 192.168.1.100 root@pam mypassword mcp-server
#
# Note: This requires valid Proxmox credentials and uses the Proxmox API v2
set -e
PROXMOX_HOST="${1:-}"
USERNAME="${2:-}"
PASSWORD="${3:-}"
TOKEN_NAME="${4:-mcp-server}"
PROXMOX_PORT="${PROXMOX_PORT:-8006}"
if [ -z "$PROXMOX_HOST" ] || [ -z "$USERNAME" ] || [ -z "$PASSWORD" ]; then
echo "Usage: $0 <proxmox-host> <username> <password> [token-name]"
echo ""
echo "Example:"
echo " $0 192.168.1.100 root@pam mypassword mcp-server"
echo ""
echo "Environment variables:"
echo " PROXMOX_PORT - Proxmox port (default: 8006)"
exit 1
fi
echo "Creating Proxmox API token..."
echo "Host: $PROXMOX_HOST:$PROXMOX_PORT"
echo "User: $USERNAME"
echo "Token Name: $TOKEN_NAME"
echo ""
# Step 1: Get CSRF token and ticket by authenticating
echo "Authenticating..."
AUTH_RESPONSE=$(curl -s -k --data-urlencode "username=$USERNAME" --data-urlencode "password=$PASSWORD" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json/access/ticket")
if echo "$AUTH_RESPONSE" | grep -q "data"; then
TICKET=$(echo "$AUTH_RESPONSE" | grep -oP '"ticket":"\K[^"]+')
CSRF_TOKEN=$(echo "$AUTH_RESPONSE" | grep -oP '"CSRFPreventionToken":"\K[^"]+')
if [ -z "$TICKET" ] || [ -z "$CSRF_TOKEN" ]; then
echo "Error: Failed to authenticate. Check credentials."
echo "Response: $AUTH_RESPONSE"
exit 1
fi
echo "✓ Authentication successful"
else
echo "Error: Authentication failed"
echo "Response: $AUTH_RESPONSE"
exit 1
fi
# Step 2: Create API token
echo "Creating API token..."
TOKEN_RESPONSE=$(curl -s -k -X POST \
-H "Cookie: PVEAuthCookie=$TICKET" \
-H "CSRFPreventionToken: $CSRF_TOKEN" \
-d "tokenid=${USERNAME}!${TOKEN_NAME}" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json/access/users/${USERNAME}/token/${TOKEN_NAME}")
if echo "$TOKEN_RESPONSE" | grep -q "data"; then
TOKEN_VALUE=$(echo "$TOKEN_RESPONSE" | grep -oP '"value":"\K[^"]+')
if [ -z "$TOKEN_VALUE" ]; then
echo "Error: Token created but could not extract value"
echo "Response: $TOKEN_RESPONSE"
exit 1
fi
echo ""
echo "✅ API Token created successfully!"
echo ""
echo "Add these to your ~/.env file:"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "PROXMOX_HOST=$PROXMOX_HOST"
echo "PROXMOX_USER=$USERNAME"
echo "PROXMOX_TOKEN_NAME=$TOKEN_NAME"
echo "PROXMOX_TOKEN_VALUE=$TOKEN_VALUE"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "⚠️ IMPORTANT: Save the PROXMOX_TOKEN_VALUE immediately!"
echo " This is the only time it will be displayed."
echo ""
else
echo "Error: Failed to create token"
echo "Response: $TOKEN_RESPONSE"
exit 1
fi
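Once created, the token replaces the ticket/CSRF pair with a single `Authorization` header (`PVEAPIToken=USER@REALM!TOKENID=UUID`, per the Proxmox VE API docs). The values below are placeholders:

```shell
#!/usr/bin/env bash
# Placeholder values — substitute the output printed by the script above
PROXMOX_HOST="192.168.1.100"
PROXMOX_PORT="8006"
USERNAME="root@pam"
TOKEN_NAME="mcp-server"
TOKEN_VALUE="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

# Token auth needs no cookie and no CSRFPreventionToken header
AUTH_HEADER="Authorization: PVEAPIToken=${USERNAME}!${TOKEN_NAME}=${TOKEN_VALUE}"
echo "$AUTH_HEADER"

# Example call (commented out — requires a reachable Proxmox host):
# curl -s -k -H "$AUTH_HEADER" "https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json/version"
```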


@@ -0,0 +1,109 @@
#!/usr/bin/env bash
# Deploy Besu temporary VM on ml110
# This script runs the temporary VM deployment on the remote Proxmox host
set -euo pipefail
REMOTE_HOST="192.168.11.10"
REMOTE_USER="root"
REMOTE_PASS="${REMOTE_PASS:?Set REMOTE_PASS in the environment; never hardcode credentials in scripts}"
SOURCE_PROJECT="/opt/smom-dbis-138"
echo "=== Besu Temporary VM Deployment on ml110 ==="
echo ""
echo "Target: ${REMOTE_USER}@${REMOTE_HOST}"
echo "Source Project: $SOURCE_PROJECT"
echo ""
# Test connection
if ! sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
"${REMOTE_USER}@${REMOTE_HOST}" "echo 'Connected'" 2>/dev/null; then
echo "ERROR: Cannot connect to ${REMOTE_HOST}"
exit 1
fi
echo "✓ Connection successful"
echo ""
# Check if deployment script exists
if ! sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" \
"test -f /opt/smom-dbis-138-proxmox/scripts/deployment/deploy-besu-temp-vm-complete.sh" 2>/dev/null; then
echo "ERROR: deploy-besu-temp-vm-complete.sh not found on ${REMOTE_HOST}"
exit 1
fi
echo "✓ Deployment script found"
echo ""
# Check if source project exists
if ! sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" \
"test -d $SOURCE_PROJECT" 2>/dev/null; then
echo "ERROR: Source project $SOURCE_PROJECT not found on ${REMOTE_HOST}"
exit 1
fi
echo "✓ Source project found"
echo ""
echo "Starting temporary VM deployment..."
echo "This will:"
echo " 1. Create a VM (VMID 9000) with 32GB RAM, 8 cores, 500GB disk"
echo " 2. Install Docker and set up directory structure"
echo " 3. Copy configuration files and keys"
echo " 4. Start all 12 Besu containers"
echo ""
echo "This may take 15-30 minutes"
echo ""
# Auto-confirm if AUTO_CONFIRM is set
if [[ "${AUTO_CONFIRM:-}" != "true" ]]; then
read -p "Continue? [y/N]: " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Deployment cancelled"
exit 0
fi
else
echo "Auto-confirming deployment (AUTO_CONFIRM=true)"
fi
# Run deployment with timeout (1 hour max); capture the exit code explicitly,
# since set -e would otherwise abort the script before it can be reported
EXIT_CODE=0
sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" \
"cd /opt/smom-dbis-138-proxmox && \
chmod +x ./scripts/deployment/deploy-besu-temp-vm-complete.sh && \
timeout 3600 ./scripts/deployment/deploy-besu-temp-vm-complete.sh $SOURCE_PROJECT" 2>&1 || EXIT_CODE=$?
if [[ $EXIT_CODE -eq 0 ]]; then
echo ""
echo "✅ Temporary VM deployment completed successfully!"
echo ""
echo "VM Details:"
echo " IP: 192.168.11.90"
echo " VMID: 9000"
echo ""
echo "RPC Endpoints:"
echo " http://192.168.11.90:8545"
echo " http://192.168.11.90:8547"
echo " http://192.168.11.90:8549"
echo ""
echo "Next steps:"
echo " 1. Validate: ssh root@192.168.11.10 'cd /opt/smom-dbis-138-proxmox && ./scripts/validation/validate-besu-temp-vm.sh'"
echo " 2. Monitor: ssh root@192.168.11.90 'docker compose logs -f'"
echo " 3. When ready, migrate to LXC containers"
elif [[ $EXIT_CODE -eq 124 ]]; then
echo ""
echo "⚠ Deployment timed out (1 hour)"
echo "Check the deployment status manually"
else
echo ""
echo "❌ Deployment failed with exit code: $EXIT_CODE"
echo "Check the output above for errors"
fi
exit $EXIT_CODE
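A quick smoke test after deployment is to ask one of the listed RPC endpoints for the current block and confirm it is advancing. The JSON-RPC response below is a hypothetical example; the commented `curl` shows the real call:

```shell
#!/usr/bin/env bash
# Real call (commented out — requires the temp VM to be up):
# RESP=$(curl -s -X POST -H 'Content-Type: application/json' \
#   --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
#   http://192.168.11.90:8545)

# Hypothetical response, for illustration
RESP='{"jsonrpc":"2.0","id":1,"result":"0x1a4"}'

# eth_blockNumber returns hex; bash arithmetic handles 0x-prefixed values
HEX=$(echo "$RESP" | jq -r '.result')
BLOCK=$(( HEX ))
echo "Current block: $BLOCK"
```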


@@ -0,0 +1,176 @@
#!/usr/bin/env bash
# Deploy all contracts to Chain 138
# Usage: ./deploy-contracts-chain138.sh
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
SOURCE_PROJECT="/home/intlc/projects/smom-dbis-138"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Check if source project exists
if [ ! -d "$SOURCE_PROJECT" ]; then
log_error "Source project not found at $SOURCE_PROJECT"
exit 1
fi
# Check if .env exists in source project
if [ ! -f "$SOURCE_PROJECT/.env" ]; then
log_warn ".env file not found in source project"
log_info "Creating .env template..."
cat > "$SOURCE_PROJECT/.env.template" <<EOF
# Chain 138 RPC
RPC_URL_138=http://192.168.11.250:8545
# Deployer
PRIVATE_KEY=<your-private-key>
# Oracle Configuration
ORACLE_PRICE_FEED=<oracle-price-feed-address>
# Reserve Configuration
RESERVE_ADMIN=<admin-address>
TOKEN_FACTORY=<token-factory-address>
# Keeper Configuration
KEEPER_ADDRESS=<keeper-address>
EOF
log_warn "Please create .env file in $SOURCE_PROJECT with your configuration"
exit 1
fi
# Load environment
cd "$SOURCE_PROJECT"
source .env 2>/dev/null || true
RPC_URL="${RPC_URL_138:-http://192.168.11.250:8545}"
PRIVATE_KEY="${PRIVATE_KEY:-}"
if [ -z "$PRIVATE_KEY" ]; then
log_error "PRIVATE_KEY not set in .env file"
exit 1
fi
# Ensure PRIVATE_KEY has 0x prefix
if [[ ! "$PRIVATE_KEY" =~ ^0x ]]; then
export PRIVATE_KEY="0x$PRIVATE_KEY"
fi
log_info "========================================="
log_info "Chain 138 Contract Deployment"
log_info "========================================="
log_info ""
log_info "RPC URL: $RPC_URL"
log_info "Deployer: $(cast wallet address --private-key "$PRIVATE_KEY" 2>/dev/null || echo 'unknown')"
log_info ""
# Step 1: Verify network is ready
log_info "Step 1: Verifying network is ready..."
BLOCK=$(cast block-number --rpc-url "$RPC_URL" 2>/dev/null | xargs printf "%d" 2>/dev/null || echo "0")
CHAIN=$(cast chain-id --rpc-url "$RPC_URL" 2>/dev/null | xargs printf "%d" 2>/dev/null || echo "0")
if [ "$BLOCK" -eq 0 ]; then
log_error "Network is not producing blocks yet"
log_info "Please wait for validators to initialize"
exit 1
fi
if [ "$CHAIN" -ne 138 ]; then
log_error "Chain ID mismatch. Expected 138, got $CHAIN"
exit 1
fi
log_success "Network is ready!"
log_info " Current block: $BLOCK"
log_info " Chain ID: $CHAIN"
log_info ""
# Step 2: Deploy contracts
log_info "========================================="
log_info "Step 2: Deploying Contracts"
log_info "========================================="
log_info ""
DEPLOYMENT_LOG="$PROJECT_ROOT/logs/contract-deployment-$(date +%Y%m%d-%H%M%S).log"
mkdir -p "$PROJECT_ROOT/logs"
# Deploy Oracle
log_info "2.1: Deploying Oracle..."
if forge script script/DeployOracle.s.sol:DeployOracle \
--rpc-url "$RPC_URL" \
--broadcast \
--private-key "$PRIVATE_KEY" \
--legacy -vvv 2>&1 | tee -a "$DEPLOYMENT_LOG" | grep -E "Oracle|Aggregator|Proxy|deployed at:|Error" | head -5; then
log_success "Oracle deployment completed"
else
log_error "Oracle deployment failed"
fi
# Deploy CCIP Router
log_info ""
log_info "2.2: Deploying CCIP Router..."
if forge script script/DeployCCIPRouter.s.sol:DeployCCIPRouter \
--rpc-url "$RPC_URL" \
--broadcast \
--private-key "$PRIVATE_KEY" \
--legacy -vvv 2>&1 | tee -a "$DEPLOYMENT_LOG" | grep -E "CCIP Router|deployed at:|Error" | head -3; then
log_success "CCIP Router deployment completed"
else
log_error "CCIP Router deployment failed"
fi
# Deploy CCIP Sender
log_info ""
log_info "2.3: Deploying CCIP Sender..."
if forge script script/DeployCCIPSender.s.sol:DeployCCIPSender \
--rpc-url "$RPC_URL" \
--broadcast \
--private-key "$PRIVATE_KEY" \
--legacy -vvv 2>&1 | tee -a "$DEPLOYMENT_LOG" | grep -E "CCIPSender|deployed at:|Error" | head -3; then
log_success "CCIP Sender deployment completed"
else
log_error "CCIP Sender deployment failed"
fi
# Deploy Keeper (if Oracle Price Feed is available)
log_info ""
log_info "2.4: Deploying Price Feed Keeper..."
if [ -n "$ORACLE_PRICE_FEED" ]; then
if forge script script/reserve/DeployKeeper.s.sol:DeployKeeper \
--rpc-url "$RPC_URL" \
--broadcast \
--private-key "$PRIVATE_KEY" \
--legacy -vvv 2>&1 | tee -a "$DEPLOYMENT_LOG" | grep -E "PriceFeedKeeper|deployed at:|Error" | head -3; then
log_success "Keeper deployment completed"
else
log_error "Keeper deployment failed"
fi
else
log_warn "Skipping Keeper deployment (ORACLE_PRICE_FEED not set)"
fi
log_info ""
log_success "========================================="
log_success "Deployment Complete!"
log_success "========================================="
log_info ""
log_info "Deployment log: $DEPLOYMENT_LOG"
log_info ""
log_info "Next steps:"
log_info "1. Extract contract addresses from broadcast files"
log_info "2. Update .env file with deployed addresses"
log_info "3. Update service configurations in Proxmox containers"
log_info ""
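The "extract contract addresses" step in the list above can be scripted against Foundry's broadcast output: `forge script --broadcast` writes `broadcast/<Script>.s.sol/<chainid>/run-latest.json` containing a `transactions` array. The sample below is a minimal hypothetical excerpt of that layout:

```shell
#!/usr/bin/env bash
# Minimal hypothetical excerpt of a run-latest.json broadcast file
RUN_JSON='{"transactions":[
{"transactionType":"CREATE","contractName":"Oracle","contractAddress":"0x1111111111111111111111111111111111111111"},
{"transactionType":"CALL","contractName":null,"contractAddress":null}
]}'

# Keep only contract creations and emit NAME=ADDRESS pairs (e.g. for a .env file)
ADDRESSES=$(echo "$RUN_JSON" | jq -r \
'.transactions[] | select(.transactionType == "CREATE") | "\(.contractName)=\(.contractAddress)"')
echo "$ADDRESSES"
```

Pointing the same filter at `broadcast/DeployOracle.s.sol/138/run-latest.json` (and the other scripts' directories) yields lines ready to paste into `.env`.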

scripts/deploy-to-proxmox-host.sh Executable file

@@ -0,0 +1,225 @@
#!/bin/bash
# Script to copy deployment package to Proxmox host and run deployment
# This is the recommended approach for remote deployment
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
# Load environment
source "$SCRIPT_DIR/load-env.sh"
load_env_file
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
DEPLOY_DIR="/opt/smom-dbis-138-proxmox"
DEPLOY_SOURCE="$PROJECT_ROOT/smom-dbis-138-proxmox"
echo ""
echo -e "${CYAN}╔════════════════════════════════════════════════════════════════╗${NC}"
echo -e "${CYAN}${NC} Deploy to Proxmox Host: ${PROXMOX_HOST} ${CYAN}${NC}"
echo -e "${CYAN}╚════════════════════════════════════════════════════════════════╝${NC}"
echo ""
# Check if deployment directory exists locally
if [ ! -d "$DEPLOY_SOURCE" ]; then
echo -e "${RED}❌ Deployment directory not found: $DEPLOY_SOURCE${NC}"
echo -e "${YELLOW}Current directory: $(pwd)${NC}"
echo -e "${YELLOW}Project root: $PROJECT_ROOT${NC}"
exit 1
fi
echo -e "${BLUE}This script will:${NC}"
echo " 1. Copy deployment package to Proxmox host"
echo " 2. SSH into Proxmox host"
echo " 3. Run deployment on Proxmox host"
echo ""
read -p "$(echo -e "${YELLOW}Continue? [y/N]: ${NC}")" -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo -e "${YELLOW}Deployment cancelled${NC}"
exit 0
fi
echo ""
echo -e "${BLUE}Step 1: Copying deployment package to Proxmox host...${NC}"
# Create deployment package (exclude unnecessary files)
TEMP_DIR=$(mktemp -d)
DEPLOY_PACKAGE="$TEMP_DIR/smom-dbis-138-proxmox"
echo -e "${BLUE}Creating deployment package...${NC}"
if command -v rsync &> /dev/null; then
rsync -av --exclude='.git' --exclude='node_modules' --exclude='*.log' \
"$DEPLOY_SOURCE/" "$DEPLOY_PACKAGE/"
else
# Fallback to tar if rsync not available
cd "$PROJECT_ROOT"
tar --exclude='.git' --exclude='node_modules' --exclude='*.log' \
-czf "$TEMP_DIR/smom-dbis-138-proxmox.tar.gz" smom-dbis-138-proxmox/
ssh root@"${PROXMOX_HOST}" "mkdir -p /opt && cd /opt && tar -xzf -" < "$TEMP_DIR/smom-dbis-138-proxmox.tar.gz"
rm -f "$TEMP_DIR/smom-dbis-138-proxmox.tar.gz"
echo -e "${GREEN}✅ Deployment package copied${NC}"
DEPLOY_PACKAGE="" # Skip scp step
fi
if [ -n "$DEPLOY_PACKAGE" ]; then
echo -e "${BLUE}Copying to Proxmox host...${NC}"
# Add host to known_hosts if not present (non-interactive)
if ! ssh-keygen -F "${PROXMOX_HOST}" &>/dev/null; then
echo -e "${YELLOW}Adding ${PROXMOX_HOST} to known_hosts...${NC}"
ssh-keyscan -H "${PROXMOX_HOST}" >> ~/.ssh/known_hosts 2>/dev/null || true
fi
# Use StrictHostKeyChecking=accept-new for first connection (or if key changed)
# Note: Will prompt for password if SSH key not configured
echo -e "${YELLOW}Note: You may be prompted for the root password${NC}"
# Ensure /opt exists on remote host
ssh -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile=~/.ssh/known_hosts \
-o PreferredAuthentications=publickey,password \
root@"${PROXMOX_HOST}" "mkdir -p /opt" || {
echo -e "${RED}❌ Failed to create /opt directory. Check SSH authentication.${NC}"
exit 1
}
# Remove old deployment if exists
ssh -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile=~/.ssh/known_hosts \
-o PreferredAuthentications=publickey,password \
root@"${PROXMOX_HOST}" "rm -rf $DEPLOY_DIR" 2>/dev/null || true
# Copy files
scp -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile=~/.ssh/known_hosts \
-o PreferredAuthentications=publickey,password \
-r "$DEPLOY_PACKAGE" root@"${PROXMOX_HOST}:/opt/" || {
echo -e "${RED}❌ Failed to copy files. Check SSH authentication.${NC}"
echo -e "${YELLOW}You may need to:${NC}"
echo " 1. Set up SSH key: ssh-copy-id root@${PROXMOX_HOST}"
echo " 2. Or enter password when prompted"
exit 1
}
# Verify files were copied
echo -e "${BLUE}Verifying files were copied...${NC}"
ssh -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile=~/.ssh/known_hosts \
-o PreferredAuthentications=publickey,password \
root@"${PROXMOX_HOST}" "ls -la $DEPLOY_DIR/scripts/deployment/ 2>/dev/null | head -5 || echo 'Files not found at expected location'" || true
fi
# Cleanup temp directory
rm -rf "$TEMP_DIR"
echo -e "${GREEN}✅ Deployment package copied${NC}"
echo ""
echo -e "${BLUE}Step 2: Running deployment on Proxmox host...${NC}"
echo ""
# SSH and run deployment (with host key acceptance)
# Note: Will prompt for password if SSH key not configured
echo -e "${YELLOW}Note: You may be prompted for the root password again${NC}"
# Create a temporary script file to avoid heredoc issues
REMOTE_SCRIPT=$(cat << 'REMOTE_SCRIPT_END'
#!/bin/bash
set -e
DEPLOY_DIR="/opt/smom-dbis-138-proxmox"
echo "=========================================="
echo "Remote Deployment Script"
echo "=========================================="
echo "Target directory: $DEPLOY_DIR"
echo ""
# Check if directory exists
if [ ! -d "$DEPLOY_DIR" ]; then
echo "ERROR: Directory $DEPLOY_DIR does not exist"
echo ""
echo "Checking /opt/ contents:"
ls -la /opt/ || echo "Cannot access /opt/"
echo ""
echo "Looking for smom directories:"
find /opt -type d -name "*smom*" 2>/dev/null || echo "No smom directories found"
exit 1
fi
# Change to deployment directory
cd "$DEPLOY_DIR" || {
echo "ERROR: Cannot change to $DEPLOY_DIR"
exit 1
}
echo "Current directory: $(pwd)"
echo ""
echo "Directory structure:"
ls -la
echo ""
# Check for scripts directory
if [ ! -d "scripts" ]; then
echo "ERROR: scripts directory not found"
echo "Available directories:"
find . -maxdepth 2 -type d | head -20
exit 1
fi
# Check for deployment scripts
if [ ! -d "scripts/deployment" ]; then
echo "ERROR: scripts/deployment directory not found"
echo "Available in scripts/:"
ls -la scripts/ || echo "Cannot list scripts/"
exit 1
fi
# Make scripts executable
echo "Making scripts executable..."
find scripts/deployment -name "*.sh" -type f -exec chmod +x {} \; 2>/dev/null || true
find install -name "*.sh" -type f -exec chmod +x {} \; 2>/dev/null || true
# Verify deploy-all.sh exists
if [ ! -f "scripts/deployment/deploy-all.sh" ]; then
echo "ERROR: scripts/deployment/deploy-all.sh not found"
echo ""
echo "Available deployment scripts:"
find scripts/deployment -name "*.sh" -type f || echo "No .sh files found"
exit 1
fi
echo "✅ Found deploy-all.sh"
echo ""
echo "Starting deployment..."
echo "=========================================="
echo ""
# Run deployment
exec ./scripts/deployment/deploy-all.sh
REMOTE_SCRIPT_END
)
# Execute the remote script via SSH
ssh -o StrictHostKeyChecking=accept-new -o UserKnownHostsFile=~/.ssh/known_hosts \
-o PreferredAuthentications=publickey,password \
root@"${PROXMOX_HOST}" bash <<< "$REMOTE_SCRIPT"
echo ""
echo -e "${GREEN}✅ Deployment completed!${NC}"
echo ""
echo "Next steps:"
echo " 1. Verify containers: ssh root@${PROXMOX_HOST} 'pct list'"
echo " 2. Check logs: ssh root@${PROXMOX_HOST} 'tail -f $DEPLOY_DIR/logs/*.log'"
echo " 3. Configure services as needed"
echo ""
echo -e "${BLUE}Tip:${NC} To avoid password prompts, set up SSH key:"
echo " ssh-copy-id root@${PROXMOX_HOST}"


@@ -0,0 +1,272 @@
#!/usr/bin/env bash
# Phased Deployment Orchestrator
# Deploys infrastructure in phases: Besu → CCIP → Other Services
# Allows validation between phases to reduce risk
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Try to find project root - could be at same level or in smom-dbis-138-proxmox subdirectory
if [[ -d "$SCRIPT_DIR/../../smom-dbis-138-proxmox" ]]; then
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../../smom-dbis-138-proxmox" && pwd)"
elif [[ -d "$SCRIPT_DIR/../.." ]]; then
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
else
PROJECT_ROOT="$SCRIPT_DIR/../.."
fi
source "$PROJECT_ROOT/lib/common.sh" 2>/dev/null || {
log_info() { echo "[INFO] $1"; }
log_success() { echo "[✓] $1"; }
log_error() { echo "[ERROR] $1"; exit 1; }
log_warn() { echo "[WARN] $1"; }
command_exists() { command -v "$1" >/dev/null 2>&1; }
}
source "$PROJECT_ROOT/lib/progress-tracking.sh" 2>/dev/null || true
# Load configuration
load_config "$PROJECT_ROOT/config/proxmox.conf" 2>/dev/null || true
# Command line options
SKIP_PHASE1="${SKIP_PHASE1:-false}"
SKIP_PHASE2="${SKIP_PHASE2:-false}"
SKIP_PHASE3="${SKIP_PHASE3:-false}"
SKIP_VALIDATION="${SKIP_VALIDATION:-false}"
SOURCE_PROJECT="${SOURCE_PROJECT:-}"
while [[ $# -gt 0 ]]; do
case $1 in
--skip-phase1)
SKIP_PHASE1=true
shift
;;
--skip-phase2)
SKIP_PHASE2=true
shift
;;
--skip-phase3)
SKIP_PHASE3=true
shift
;;
--skip-validation)
SKIP_VALIDATION=true
shift
;;
--source-project)
SOURCE_PROJECT="$2"
shift 2
;;
--help)
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Phased Deployment Orchestrator"
echo ""
echo "Phases:"
echo " 1. Besu Network (12 containers) - 1.5-2.5 hours"
echo " 2. CCIP Network (41-43 containers) - 2.5-4 hours"
echo " 3. Other Services (14 containers) - 1.5-2.5 hours"
echo ""
echo "Options:"
echo " --skip-phase1 Skip Besu network deployment"
echo " --skip-phase2 Skip CCIP network deployment"
echo " --skip-phase3 Skip other services deployment"
echo " --skip-validation Skip validation between phases"
echo " --source-project PATH Path to source project with config files"
echo " --help Show this help message"
exit 0
;;
*)
log_error "Unknown option: $1"
exit 1
;;
esac
done
log_info "========================================="
log_info "Phased Deployment Orchestrator"
log_info "========================================="
log_info ""
# Check prerequisites
if ! command_exists pct; then
log_error "pct command not found. This script must be run on Proxmox host."
exit 1
fi
if [[ $EUID -ne 0 ]]; then
log_error "This script must be run as root"
exit 1
fi
# Helper function to find script
find_script() {
local script_name="$1"
# Try current directory first
if [[ -f "$SCRIPT_DIR/$script_name" ]]; then
echo "$SCRIPT_DIR/$script_name"
# Try PROJECT_ROOT scripts/deployment
elif [[ -f "$PROJECT_ROOT/scripts/deployment/$script_name" ]]; then
echo "$PROJECT_ROOT/scripts/deployment/$script_name"
# Try smom-dbis-138-proxmox path
elif [[ -f "$(dirname "$SCRIPT_DIR")/smom-dbis-138-proxmox/scripts/deployment/$script_name" ]]; then
echo "$(dirname "$SCRIPT_DIR")/smom-dbis-138-proxmox/scripts/deployment/$script_name"
else
echo ""
fi
}
# Pre-cache OS template (recommendation)
log_info "=== Pre-caching OS Template ==="
PRE_CACHE_SCRIPT=$(find_script "pre-cache-os-template.sh")
if [[ -n "$PRE_CACHE_SCRIPT" ]] && [[ -f "$PRE_CACHE_SCRIPT" ]]; then
"$PRE_CACHE_SCRIPT" || log_warn "Template pre-caching had issues, continuing..."
else
log_warn "pre-cache-os-template.sh not found, skipping template pre-cache"
fi
# Phase 1: Besu Network
if [[ "$SKIP_PHASE1" != "true" ]]; then
log_info ""
log_info "========================================="
log_info "PHASE 1: Besu Network Deployment"
log_info "========================================="
log_info "Containers: 12 (5 validators, 4 sentries, 3 RPC)"
log_info "Estimated time: 90-150 minutes (1.5-2.5 hours)"
log_info ""
DEPLOY_BESU_SCRIPT=$(find_script "deploy-besu-nodes.sh")
if [[ -n "$DEPLOY_BESU_SCRIPT" ]] && [[ -f "$DEPLOY_BESU_SCRIPT" ]]; then
if "$DEPLOY_BESU_SCRIPT"; then
log_success "Phase 1 completed successfully"
else
log_error "Phase 1 failed. Fix issues before continuing."
exit 1
fi
else
log_error "deploy-besu-nodes.sh not found in $SCRIPT_DIR or $PROJECT_ROOT/scripts/deployment"
exit 1
fi
# Copy configuration files
if [[ -n "$SOURCE_PROJECT" ]] && [[ -d "$SOURCE_PROJECT" ]]; then
log_info ""
log_info "Copying Besu configuration files..."
if [[ -f "$PROJECT_ROOT/scripts/copy-besu-config-with-nodes.sh" ]]; then
SOURCE_PROJECT="$SOURCE_PROJECT" "$PROJECT_ROOT/scripts/copy-besu-config-with-nodes.sh" || {
log_error "Failed to copy configuration files"
}
fi
fi
# Validation after Phase 1
if [[ "$SKIP_VALIDATION" != "true" ]]; then
log_info ""
log_info "=== Phase 1 Validation ==="
if [[ -f "$PROJECT_ROOT/scripts/validation/validate-deployment-comprehensive.sh" ]]; then
"$PROJECT_ROOT/scripts/validation/validate-deployment-comprehensive.sh" || {
log_warn "Phase 1 validation had issues. Review before continuing to Phase 2."
read -p "Continue to Phase 2? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
log_error "Deployment paused. Fix Phase 1 issues before continuing."
exit 1
fi
}
fi
fi
else
log_info "Skipping Phase 1 (Besu Network)"
fi
# Phase 2: CCIP Network
if [[ "$SKIP_PHASE2" != "true" ]]; then
log_info ""
log_info "========================================="
log_info "PHASE 2: CCIP Network Deployment"
log_info "========================================="
log_info "Containers: 41-43 (2 ops, 2 mon, 16 commit, 16 exec, 5-7 RMN)"
log_info "Estimated time: 150-240 minutes (2.5-4 hours)"
log_info ""
DEPLOY_CCIP_SCRIPT=$(find_script "deploy-ccip-nodes.sh")
if [[ -n "$DEPLOY_CCIP_SCRIPT" ]] && [[ -f "$DEPLOY_CCIP_SCRIPT" ]]; then
if command_exists init_progress_tracking 2>/dev/null; then
init_progress_tracking 5 "CCIP Network Deployment"
update_progress 1 "Deploying CCIP-OPS nodes"
fi
if "$DEPLOY_CCIP_SCRIPT"; then
if command_exists update_progress 2>/dev/null; then
update_progress 5 "CCIP deployment complete"
complete_progress
fi
log_success "Phase 2 completed successfully"
else
log_error "Phase 2 failed. Fix issues before continuing."
exit 1
fi
else
log_warn "deploy-ccip-nodes.sh not found, skipping CCIP deployment"
fi
else
log_info "Skipping Phase 2 (CCIP Network)"
fi
# Phase 3: Other Services
if [[ "$SKIP_PHASE3" != "true" ]]; then
log_info ""
log_info "========================================="
log_info "PHASE 3: Other Services Deployment"
log_info "========================================="
log_info "Containers: ~14 (Blockscout, Cacti, Fabric, Firefly, Indy, etc.)"
log_info "Estimated time: 90-150 minutes (1.5-2.5 hours)"
log_info ""
# Deploy Hyperledger services
HYPERLEDGER_SCRIPT=$(find_script "deploy-hyperledger-services.sh")
if [[ -n "$HYPERLEDGER_SCRIPT" ]] && [[ -f "$HYPERLEDGER_SCRIPT" ]]; then
log_info "Deploying Hyperledger services..."
"$HYPERLEDGER_SCRIPT" || log_warn "Hyperledger services had issues"
fi
# Deploy explorer
EXPLORER_SCRIPT=$(find_script "deploy-explorer.sh")
if [[ -n "$EXPLORER_SCRIPT" ]] && [[ -f "$EXPLORER_SCRIPT" ]]; then
log_info "Deploying Blockscout explorer..."
"$EXPLORER_SCRIPT" || log_warn "Explorer deployment had issues"
fi
# Deploy other services
SERVICES_SCRIPT=$(find_script "deploy-services.sh")
if [[ -n "$SERVICES_SCRIPT" ]] && [[ -f "$SERVICES_SCRIPT" ]]; then
log_info "Deploying other services..."
"$SERVICES_SCRIPT" || log_warn "Services deployment had issues"
fi
# Deploy monitoring
MONITORING_SCRIPT=$(find_script "deploy-monitoring.sh")
if [[ -n "$MONITORING_SCRIPT" ]] && [[ -f "$MONITORING_SCRIPT" ]]; then
log_info "Deploying monitoring stack..."
"$MONITORING_SCRIPT" || log_warn "Monitoring deployment had issues"
fi
log_success "Phase 3 completed"
else
log_info "Skipping Phase 3 (Other Services)"
fi
# Final validation
if [[ "$SKIP_VALIDATION" != "true" ]]; then
log_info ""
log_info "========================================="
log_info "Final Deployment Validation"
log_info "========================================="
if [[ -f "$PROJECT_ROOT/scripts/validation/validate-deployment-comprehensive.sh" ]]; then
"$PROJECT_ROOT/scripts/validation/validate-deployment-comprehensive.sh"
fi
fi
log_info ""
log_success "Phased deployment completed!"
log_info ""
log_info "Next steps:"
log_info " - Verify all services are running"
log_info " - Check service logs for errors"
log_info " - Monitor blockchain sync progress"
log_info " - Configure CCIP DONs (if Phase 2 completed)"


@@ -0,0 +1,69 @@
#!/usr/bin/env bash
# Pre-cache OS Template - Download Ubuntu 22.04 template before deployment
# This saves 5-10 minutes during deployment
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
source "$PROJECT_ROOT/lib/common.sh" 2>/dev/null || {
# Minimal fallbacks if common.sh is not available (this script also uses
# command_exists and load_config from common.sh)
log_info() { echo "[INFO] $1"; }
log_success() { echo "[✓] $1"; }
log_error() { echo "[ERROR] $1"; exit 1; }
command_exists() { command -v "$1" >/dev/null 2>&1; }
load_config() { [[ -f "$1" ]] && source "$1"; }
}
# Load configuration
load_config "$PROJECT_ROOT/config/proxmox.conf" 2>/dev/null || true
TEMPLATE_NAME="${CONTAINER_OS_TEMPLATE:-local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst}"
TEMPLATE_FILE="${TEMPLATE_NAME##*/}"  # filename part of the template reference
log_info "========================================="
log_info "Pre-cache OS Template"
log_info "========================================="
log_info ""
log_info "Template: $TEMPLATE_NAME"
log_info "File: $TEMPLATE_FILE"
log_info ""
# Check if running on Proxmox host
if ! command_exists pveam; then
log_error "pveam command not found. This script must be run on Proxmox host."
fi
# Check if template already exists
log_info "Checking if template already exists..."
if pveam list local | grep -q "$TEMPLATE_FILE"; then
log_success "Template $TEMPLATE_FILE already exists in local storage"
log_info "No download needed. Deployment will use existing template."
log_info ""
log_info "Template details:"
pveam list local | grep "$TEMPLATE_FILE"
exit 0
fi
# Check available templates
log_info "Checking available templates..."
if ! pveam available | grep -q "$TEMPLATE_FILE"; then
log_error "Template $TEMPLATE_FILE not available. Please check template name."
fi
# Download template
log_info "Downloading template $TEMPLATE_FILE..."
log_info "This may take 5-10 minutes depending on network speed..."
log_info ""
if pveam download local "$TEMPLATE_FILE"; then
log_success "Template downloaded successfully"
log_info ""
log_info "Template is now cached and ready for deployment"
log_info "This saves 5-10 minutes during container creation phase"
log_info ""
log_info "Template details:"
pveam list local | grep "$TEMPLATE_FILE"
else
log_error "Failed to download template"
fi

scripts/detailed-review.sh Executable file

@@ -0,0 +1,259 @@
#!/usr/bin/env bash
# Detailed Once-Over Review Script
# Comprehensive review of all aspects of the project
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_section() { echo -e "${CYAN}=== $1 ===${NC}"; }
REPORT_FILE="$PROJECT_ROOT/logs/detailed-review-$(date +%Y%m%d-%H%M%S).txt"
mkdir -p "$PROJECT_ROOT/logs"
{
log_section "Detailed Project Review"
echo "Generated: $(date)"
echo ""
log_section "1. Configuration File Validation"
# Check config files exist
CONFIG_FILES=(
"smom-dbis-138-proxmox/config/proxmox.conf"
"smom-dbis-138-proxmox/config/network.conf"
)
for config in "${CONFIG_FILES[@]}"; do
if [[ -f "$PROJECT_ROOT/$config" ]]; then
log_success "$config exists"
# Check if file is readable
if [[ -r "$PROJECT_ROOT/$config" ]]; then
log_success " Readable"
else
log_error " Not readable"
fi
else
log_error "$config missing"
fi
done
echo ""
log_section "2. Configuration Value Consistency"
# Source config if possible
if [[ -f "$PROJECT_ROOT/smom-dbis-138-proxmox/config/proxmox.conf" ]]; then
source "$PROJECT_ROOT/smom-dbis-138-proxmox/config/proxmox.conf" 2>/dev/null || true
echo "VMID Ranges:"
echo " VALIDATOR_COUNT: ${VALIDATOR_COUNT:-not set}"
echo " SENTRY_COUNT: ${SENTRY_COUNT:-not set}"
echo " RPC_COUNT: ${RPC_COUNT:-not set}"
echo ""
echo "VMID Starts:"
echo " VMID_VALIDATORS_START: ${VMID_VALIDATORS_START:-not set}"
echo " VMID_SENTRIES_START: ${VMID_SENTRIES_START:-not set}"
echo " VMID_RPC_START: ${VMID_RPC_START:-not set}"
# Validate consistency
if [[ "${VALIDATOR_COUNT:-}" == "5" ]] && [[ "${VMID_VALIDATORS_START:-}" == "1000" ]]; then
log_success "Validators: Correct (5 nodes starting at 1000)"
else
log_warn "Validators: Inconsistent or incorrect"
fi
if [[ "${SENTRY_COUNT:-}" == "4" ]] && [[ "${VMID_SENTRIES_START:-}" == "1500" ]]; then
log_success "Sentries: Correct (4 nodes starting at 1500)"
else
log_warn "Sentries: Inconsistent or incorrect"
fi
if [[ "${RPC_COUNT:-}" == "3" ]] && [[ "${VMID_RPC_START:-}" == "2500" ]]; then
log_success "RPC: Correct (3 nodes starting at 2500)"
else
log_warn "RPC: Inconsistent or incorrect"
fi
fi
echo ""
log_section "3. Script Syntax Validation"
SYNTAX_ERRORS=0
SCRIPT_COUNT=0
while IFS= read -r script; do
if [[ -f "$script" ]] && { [[ -x "$script" ]] || [[ "$script" == *.sh ]]; }; then
SCRIPT_COUNT=$((SCRIPT_COUNT + 1))
if ! bash -n "$script" 2>/dev/null; then
echo " Syntax error: $script"
SYNTAX_ERRORS=$((SYNTAX_ERRORS + 1))
fi
fi
done < <(find "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts" -name "*.sh" -type f 2>/dev/null)
if [[ $SYNTAX_ERRORS -eq 0 ]]; then
log_success "All $SCRIPT_COUNT scripts have valid syntax"
else
log_error "Found $SYNTAX_ERRORS scripts with syntax errors"
fi
echo ""
log_section "4. Script Dependency Check"
# Check for common library usage
LIB_FILES=(
"lib/common.sh"
"lib/container-utils.sh"
"lib/progress-tracking.sh"
)
for lib in "${LIB_FILES[@]}"; do
if [[ -f "$PROJECT_ROOT/smom-dbis-138-proxmox/$lib" ]]; then
log_success "$lib exists"
# Count scripts using it
usage_count=$(grep -r "source.*$lib" "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts" 2>/dev/null | wc -l)
echo " Used by $usage_count scripts"
else
log_warn "$lib missing (may be optional)"
fi
done
echo ""
log_section "5. Hardcoded Path Reference Check"
# Check for problematic hardcoded paths
HARDCODED_PATTERNS=(
"/home/intlc/projects"
"/opt/smom-dbis-138"
"/opt/smom-dbis-138-proxmox"
)
for pattern in "${HARDCODED_PATTERNS[@]}"; do
matches=$(grep -r "$pattern" "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts" 2>/dev/null | grep -v ".git" | grep -v "lib/common.sh" | wc -l)
if [[ $matches -gt 0 ]]; then
log_warn "Found $matches references to hardcoded path: $pattern"
echo " Consider using PROJECT_ROOT or relative paths"
fi
done
echo ""
log_section "6. VMID Array Hardcoding Check"
# Check scripts for hardcoded VMID arrays
HARDCODED_VMIDS=$(grep -rE "VALIDATORS=\(|SENTRIES=\(|RPC_NODES=\(" "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts" 2>/dev/null | grep -v ".git" | wc -l)
if [[ $HARDCODED_VMIDS -gt 0 ]]; then
log_warn "Found $HARDCODED_VMIDS hardcoded VMID array definitions"
echo " These should ideally use config values"
grep -rE "VALIDATORS=\(|SENTRIES=\(|RPC_NODES=\(" "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts" 2>/dev/null | grep -v ".git" | head -5 | sed 's/^/ /'
else
log_success "No hardcoded VMID arrays found (using config values)"
fi
echo ""
log_section "7. Network Configuration Consistency"
if [[ -f "$PROJECT_ROOT/smom-dbis-138-proxmox/config/network.conf" ]]; then
source "$PROJECT_ROOT/smom-dbis-138-proxmox/config/network.conf" 2>/dev/null || true
echo "Network Configuration:"
echo " SUBNET_BASE: ${SUBNET_BASE:-not set}"
echo " GATEWAY: ${GATEWAY:-not set}"
echo " VALIDATORS_START_IP: ${VALIDATORS_START_IP:-not set}"
echo " SENTRIES_START_IP: ${SENTRIES_START_IP:-not set}"
echo " RPC_START_IP: ${RPC_START_IP:-not set}"
if [[ "${SUBNET_BASE:-}" == "192.168.11" ]] && [[ "${GATEWAY:-}" == "192.168.11.1" ]]; then
log_success "Network base and gateway correct"
else
log_warn "Network configuration may be incorrect"
fi
fi
echo ""
log_section "8. Critical File Existence Check"
CRITICAL_FILES=(
"smom-dbis-138-proxmox/scripts/deployment/deploy-validated-set.sh"
"smom-dbis-138-proxmox/scripts/deployment/deploy-besu-nodes.sh"
"smom-dbis-138-proxmox/scripts/copy-besu-config.sh"
"smom-dbis-138-proxmox/scripts/network/bootstrap-network.sh"
"smom-dbis-138-proxmox/scripts/validation/validate-deployment-comprehensive.sh"
"smom-dbis-138-proxmox/scripts/fix-container-ips.sh"
"smom-dbis-138-proxmox/scripts/fix-besu-services.sh"
)
for file in "${CRITICAL_FILES[@]}"; do
if [[ -f "$PROJECT_ROOT/$file" ]]; then
log_success "$(basename "$file")"
else
log_error "Missing: $file"
fi
done
echo ""
log_section "9. IP Address Consistency"
# Check for old IP references in critical files
OLD_IP_COUNT=$(grep -rE "10\.3\.1\." "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts" "$PROJECT_ROOT/smom-dbis-138-proxmox/config" 2>/dev/null | grep -v ".git" | grep -v ".example" | wc -l)
if [[ $OLD_IP_COUNT -eq 0 ]]; then
log_success "No old IP addresses (10.3.1.X) in scripts/config"
else
log_warn "Found $OLD_IP_COUNT references to old IP addresses in scripts/config"
fi
# Check for correct IP range usage
CORRECT_IP_COUNT=$(grep -rE "192\.168\.11\." "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts" "$PROJECT_ROOT/smom-dbis-138-proxmox/config" 2>/dev/null | grep -v ".git" | wc -l)
log_info "Found $CORRECT_IP_COUNT references to correct IP range (192.168.11.X)"
echo ""
log_section "10. VMID Range Consistency"
# Check for old VMID references in critical files
OLD_VMID_COUNT=$(grep -rE "\b(106|107|108|109|110|111|112|113|114|115|116|117)\b" "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts" "$PROJECT_ROOT/smom-dbis-138-proxmox/config" 2>/dev/null | grep -v ".git" | grep -v ".example" | wc -l)
if [[ $OLD_VMID_COUNT -eq 0 ]]; then
log_success "No old VMIDs (106-117) in scripts/config"
else
log_warn "Found $OLD_VMID_COUNT references to old VMIDs in scripts/config"
fi
# Check for correct VMID range usage
CORRECT_VMID_COUNT=$(grep -rE "\b(1000|1001|1002|1003|1004|1500|1501|1502|1503|2500|2501|2502)\b" "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts" "$PROJECT_ROOT/smom-dbis-138-proxmox/config" 2>/dev/null | grep -v ".git" | wc -l)
log_info "Found $CORRECT_VMID_COUNT references to correct VMID ranges"
echo ""
log_section "Review Summary"
echo ""
echo "Configuration: $(if [[ -f "$PROJECT_ROOT/smom-dbis-138-proxmox/config/proxmox.conf" ]] && [[ -f "$PROJECT_ROOT/smom-dbis-138-proxmox/config/network.conf" ]]; then echo "✓ Complete"; else echo "✗ Incomplete"; fi)"
echo "Scripts: $SCRIPT_COUNT scripts checked, $SYNTAX_ERRORS syntax errors"
echo "Hardcoded paths: Check warnings above"
echo "VMID consistency: Check warnings above"
echo "IP consistency: Check warnings above"
echo ""
} | tee "$REPORT_FILE"
log_info "Detailed review report saved to: $REPORT_FILE"


@@ -0,0 +1,55 @@
#!/usr/bin/env bash
# Extract deployed contract addresses from Foundry broadcast files
# Usage: ./extract-contract-addresses.sh [chain-id]
set -e
SOURCE_PROJECT="${SOURCE_PROJECT:-/home/intlc/projects/smom-dbis-138}"
CHAIN_ID="${1:-138}"
BROADCAST_DIR="$SOURCE_PROJECT/broadcast"
OUTPUT_FILE="$SOURCE_PROJECT/deployed-addresses-chain${CHAIN_ID}.txt"
if [ ! -d "$BROADCAST_DIR" ]; then
echo "❌ Broadcast directory not found: $BROADCAST_DIR"
exit 1
fi
echo "Extracting contract addresses from Chain $CHAIN_ID..."
echo ""
# Find latest deployment run
LATEST_RUN=$(find "$BROADCAST_DIR" -type d -path "*/$CHAIN_ID/run-*" | sort -V | tail -1)
if [ -z "$LATEST_RUN" ]; then
echo "❌ No deployment found for Chain ID $CHAIN_ID"
exit 1
fi
echo "Found deployment run: $LATEST_RUN"
echo ""
# Extract addresses from broadcast files
{
echo "# Deployed Contract Addresses - Chain $CHAIN_ID"
echo "# Generated: $(date)"
echo ""
# Extract from each deployment script
for script in "$LATEST_RUN"/*.json; do
if [ -f "$script" ]; then
script_name=$(basename "$script" .json)
# Sanitize to a valid shell variable name (e.g. "run-latest" -> "RUN_LATEST")
var_name=$(echo "${script_name^^}" | tr -c 'A-Z0-9\n' '_')
address=$(jq -r '.transactions[] | select(.transactionType == "CREATE") | .contractAddress' "$script" 2>/dev/null | head -1)
if [ -n "$address" ] && [ "$address" != "null" ]; then
echo "# $script_name"
echo "${var_name}_ADDRESS=$address"
echo ""
fi
fi
done
} > "$OUTPUT_FILE"
echo "✅ Contract addresses extracted to: $OUTPUT_FILE"
echo ""
cat "$OUTPUT_FILE"

scripts/fix-enode-config.py Executable file

@@ -0,0 +1,180 @@
#!/usr/bin/env python3
"""
Fix enode URLs in static-nodes.json and permissions-nodes.toml
This script fixes the critical issues identified:
1. Invalid enode public key length (trailing zeros padding)
2. IP address mismatches between static-nodes.json and permissions-nodes.toml
3. Ensures all enode IDs are exactly 128 hex characters
Usage:
python3 fix-enode-config.py [container_vmid]
If vmid provided, fixes files in that container.
Otherwise, generates corrected files for all containers.
"""
import json
import re
import sys
import subprocess
# Container IP mapping (VMID -> IP)
CONTAINER_IPS = {
106: '192.168.11.13', # besu-validator-1
107: '192.168.11.14', # besu-validator-2
108: '192.168.11.15', # besu-validator-3
109: '192.168.11.16', # besu-validator-4
110: '192.168.11.18', # besu-validator-5
111: '192.168.11.19', # besu-sentry-2
112: '192.168.11.20', # besu-sentry-3
113: '192.168.11.21', # besu-sentry-4
114: '192.168.11.22', # besu-sentry-5
115: '192.168.11.23', # besu-rpc-1
116: '192.168.11.24', # besu-rpc-2
117: '192.168.11.25', # besu-rpc-3
}
def normalize_node_id(node_id_hex):
"""
Normalize node ID to exactly 128 hex characters.
Ethereum node IDs must be exactly 128 hex chars (64 bytes).
This function:
- Removes trailing zeros (padding)
- Pads with leading zeros if needed to reach 128 chars
- Truncates to 128 chars if longer
"""
# Remove '0x' prefix if present
node_id_hex = node_id_hex.lower().replace('0x', '')
# Remove trailing zeros (these are invalid padding)
node_id_hex = node_id_hex.rstrip('0')
# Pad with leading zeros to reach 128 chars if needed
if len(node_id_hex) < 128:
node_id_hex = '0' * (128 - len(node_id_hex)) + node_id_hex
elif len(node_id_hex) > 128:
# If longer, take first 128 (shouldn't happen, but handle it)
node_id_hex = node_id_hex[:128]
return node_id_hex
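# Illustrative self-check (hypothetical hex values, not real node keys):
# a 126-char ID gains two leading zeros, and trailing-zero padding is
# stripped back down to 128 chars.
def _normalize_node_id_selfcheck():
    assert len(normalize_node_id('ab' * 63)) == 128
    assert normalize_node_id('ab' * 64 + '00') == 'ab' * 64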
def extract_node_id_from_enode(enode_url):
"""Extract and normalize node ID from enode URL."""
match = re.search(r'enode://([a-fA-F0-9]+)@', enode_url)
if not match:
return None
node_id_raw = match.group(1)
return normalize_node_id(node_id_raw)
def create_enode_url(node_id, ip, port=30303):
"""Create properly formatted enode URL."""
if len(node_id) != 128:
raise ValueError(f"Node ID must be exactly 128 chars, got {len(node_id)}: {node_id[:32]}...")
return f"enode://{node_id}@{ip}:{port}"
def fix_static_nodes_json(current_file_content, validator_ips):
"""
Fix static-nodes.json to contain only validator enodes with correct IPs.
Args:
current_file_content: Current static-nodes.json content (JSON string)
validator_ips: List of (node_id, ip) tuples for validators
"""
enodes = []
for node_id, ip in validator_ips:
enode = create_enode_url(node_id, ip)
enodes.append(enode)
return json.dumps(enodes, indent=2)
def fix_permissions_nodes_toml(current_file_content, all_node_ips):
"""
Fix permissions-nodes.toml to contain all nodes with correct IPs.
Args:
current_file_content: Current permissions-nodes.toml content
all_node_ips: List of (node_id, ip) tuples for all nodes
"""
# Extract the header/comment
lines = current_file_content.split('\n')
header_lines = []
for line in lines:
if line.strip().startswith('#') or line.strip() == '':
header_lines.append(line)
elif 'nodes-allowlist' in line:
break
# Build the allowlist
allowlist_lines = ['nodes-allowlist=[']
for node_id, ip in all_node_ips:
enode = create_enode_url(node_id, ip)
allowlist_lines.append(f' "{enode}",')
# Remove trailing comma from last entry
if allowlist_lines:
allowlist_lines[-1] = allowlist_lines[-1].rstrip(',')
allowlist_lines.append(']')
return '\n'.join(header_lines) + '\n' + '\n'.join(allowlist_lines)
# Node IDs from source static-nodes.json (these are the 5 validators)
# Note: These may need adjustment - one appears to be 126 chars, we'll normalize
SOURCE_VALIDATOR_NODE_IDS = [
'889ba317e10114a035ef82248a26125fbc00b1cd65fb29a2106584dddd025aa3dda14657bc423e5e8bf7d91a9858e85a',
'2a827fcff14e548b761d18d0d7177745799d880be5ac54fb17d73aa06b105559527c97fec09005ac050e1363f16cb052',
'aeec2f2f7ee15da9bdbf11261d1d1e5526d2d1ca03d66393e131cc70dcea856a9a01ef3488031b769025447e36e14f4e',
'0f647faab18eb3cd1a334ddf397011af768b3311400923b670d9536f5a937aa04071801de095100142da03b233adb5db',
'037c0feeb799e7e98bc99f7c21b8993254cc48f3251c318b211a76aa40d9c373da8c0a1df60804b327b43a222940ebf', # 126 chars - needs padding
]
def main():
print("=" * 70)
print("Fixing Enode URLs in Besu Configuration Files")
print("=" * 70)
print()
# Normalize validator node IDs
normalized_validators = []
for i, node_id_raw in enumerate(SOURCE_VALIDATOR_NODE_IDS):
node_id = normalize_node_id(node_id_raw)
vmid = 106 + i # Validators are 106-110
if vmid in CONTAINER_IPS:
ip = CONTAINER_IPS[vmid]
normalized_validators.append((node_id, ip))
print(f"Validator {vmid} ({ip}):")
print(f" Original: {node_id_raw[:64]}... (len={len(node_id_raw)})")
print(f" Normalized: {node_id[:64]}... (len={len(node_id)})")
print(f" Enode: {create_enode_url(node_id, ip)}")
print()
# For now, we'll use the same 5 validators for static-nodes.json
# and permissions-nodes.toml (this is a simplified version)
# In production, you'd want all nodes in permissions-nodes.toml
# Generate corrected static-nodes.json
static_nodes_json = fix_static_nodes_json("", normalized_validators)
print("Generated static-nodes.json:")
print(static_nodes_json)
print()
# For permissions-nodes.toml, we need all nodes
# For now, use the same validators (you may want to add more)
permissions_toml = fix_permissions_nodes_toml(
"# Node Permissioning Configuration\n# Lists nodes that are allowed to connect to this node\n\n",
normalized_validators
)
print("Generated permissions-nodes.toml:")
print(permissions_toml[:500] + "..." if len(permissions_toml) > 500 else permissions_toml)
print()
print("=" * 70)
print("IMPORTANT: This is a template. You need to:")
print("1. Verify node IDs match actual Besu node keys")
print("2. Add all nodes (validators, sentries, RPC) to permissions-nodes.toml")
print("3. Copy corrected files to all containers")
print("=" * 70)
if __name__ == "__main__":
main()


@@ -0,0 +1,251 @@
#!/usr/bin/env bash
# Practical fix for enode URLs: Extract first 128 chars and map to correct IPs
# Based on Besu requirements: enode IDs must be exactly 128 hex characters
set -euo pipefail
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
WORK_DIR="/tmp/fix-enodes-$$"
mkdir -p "$WORK_DIR"
cleanup() { rm -rf "$WORK_DIR"; }
trap cleanup EXIT
# Container IP mapping
declare -A CONTAINER_IPS=(
[106]="192.168.11.13" # besu-validator-1
[107]="192.168.11.14" # besu-validator-2
[108]="192.168.11.15" # besu-validator-3
[109]="192.168.11.16" # besu-validator-4
[110]="192.168.11.18" # besu-validator-5
[111]="192.168.11.19" # besu-sentry-2
[112]="192.168.11.20" # besu-sentry-3
[113]="192.168.11.21" # besu-sentry-4
[114]="192.168.11.22" # besu-sentry-5
[115]="192.168.11.23" # besu-rpc-1
[116]="192.168.11.24" # besu-rpc-2
[117]="192.168.11.25" # besu-rpc-3
)
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Extract and fix enodes from a container's current static-nodes.json
extract_and_fix_enodes() {
local vmid="$1"
local container_ip="${CONTAINER_IPS[$vmid]}"
log_info "Processing container $vmid ($container_ip)..."
# Get current static-nodes.json
local current_json
current_json=$(ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" \
"pct exec $vmid -- cat /etc/besu/static-nodes.json 2>/dev/null" || echo '[]')
if [[ "$current_json" == "[]" ]] || [[ -z "$current_json" ]]; then
log_warn "Container $vmid: No static-nodes.json found"
return 1
fi
# Extract node IDs (first 128 chars) and create corrected enodes
python3 << PYEOF
import json
import re
import sys
try:
static_nodes = json.loads('''$current_json''')
except:
static_nodes = []
fixed_enodes = []
node_ids_seen = set()
for i, enode in enumerate(static_nodes):
# Extract hex part
match = re.search(r'enode://([a-fA-F0-9]+)@', enode)
if not match:
continue
full_hex = match.group(1).lower()
# Take first 128 chars (removes trailing zeros padding)
node_id = full_hex[:128]
if len(node_id) != 128:
print(f"SKIP: Node ID {i+1} has length {len(node_id)}, not 128", file=sys.stderr)
continue
# Validate hex
if not re.match(r'^[0-9a-f]{128}$', node_id):
print(f"SKIP: Node ID {i+1} contains invalid hex", file=sys.stderr)
continue
# Map the first 5 entries to validator IPs (106-110); keep the original IP otherwise.
# Note: bash expands ${CONTAINER_IPS[...]} here because the heredoc is unquoted.
validator_ips = ["${CONTAINER_IPS[106]}", "${CONTAINER_IPS[107]}", "${CONTAINER_IPS[108]}", "${CONTAINER_IPS[109]}", "${CONTAINER_IPS[110]}"]
if i < len(validator_ips):
ip = validator_ips[i]
else:
# Extract original IP
ip_match = re.search(r'@([0-9.]+):', enode)
ip = ip_match.group(1) if ip_match else "$container_ip"
fixed_enode = f"enode://{node_id}@{ip}:30303"
# Avoid duplicates
if node_id not in node_ids_seen:
fixed_enodes.append(fixed_enode)
node_ids_seen.add(node_id)
# Save to file
with open('$WORK_DIR/static-nodes-${vmid}.json', 'w') as f:
json.dump(fixed_enodes, f, indent=2)
print(f"Fixed {len(fixed_enodes)} enode URLs for container $vmid")
PYEOF
return 0
}
# Generate corrected permissions-nodes.toml from all containers
generate_permissions_toml() {
log_info "Generating corrected permissions-nodes.toml..."
# Collect all unique enodes from all containers
# Unquoted delimiter so $WORK_DIR expands inside the Python code below
python3 << PYEOF
import json
import re
import glob
all_enodes = set()
# Read all fixed static-nodes.json files
for json_file in glob.glob('$WORK_DIR/static-nodes-*.json'):
try:
with open(json_file, 'r') as f:
enodes = json.load(f)
for enode in enodes:
all_enodes.add(enode)
except:
pass
# Sort for consistency
sorted_enodes = sorted(all_enodes)
# Generate TOML
toml_content = """# Node Permissioning Configuration
# Lists nodes that are allowed to connect to this node
# Generated from actual container enodes (first 128 chars of node IDs)
# All validators, sentries, and RPC nodes are included
nodes-allowlist=[
"""
for enode in sorted_enodes:
toml_content += f' "{enode}",\n'
# Remove trailing comma
toml_content = toml_content.rstrip(',\n') + '\n]'
with open('$WORK_DIR/permissions-nodes.toml', 'w') as f:
f.write(toml_content)
print(f"Generated permissions-nodes.toml with {len(sorted_enodes)} unique nodes")
PYEOF
log_success "Generated permissions-nodes.toml"
}
# Deploy corrected files
deploy_files() {
log_info "Deploying corrected files to all containers..."
# First, copy permissions-nodes.toml to host
scp -o StrictHostKeyChecking=accept-new \
"$WORK_DIR/permissions-nodes.toml" \
"root@${PROXMOX_HOST}:/tmp/permissions-nodes-fixed.toml"
for vmid in 106 107 108 109 110 111 112 113 114 115 116 117; do
if ! ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" \
"pct status $vmid 2>/dev/null | grep -q running"; then
log_warn "Container $vmid: not running, skipping"
continue
fi
log_info "Deploying to container $vmid..."
# Copy static-nodes.json (container-specific)
if [[ -f "$WORK_DIR/static-nodes-${vmid}.json" ]]; then
scp -o StrictHostKeyChecking=accept-new \
"$WORK_DIR/static-nodes-${vmid}.json" \
"root@${PROXMOX_HOST}:/tmp/static-nodes-${vmid}.json"
ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" << REMOTE_SCRIPT
pct push $vmid /tmp/static-nodes-${vmid}.json /etc/besu/static-nodes.json
pct exec $vmid -- chown besu:besu /etc/besu/static-nodes.json
rm -f /tmp/static-nodes-${vmid}.json
REMOTE_SCRIPT
fi
# Copy permissions-nodes.toml (same for all)
ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" << REMOTE_SCRIPT
pct push $vmid /tmp/permissions-nodes-fixed.toml /etc/besu/permissions-nodes.toml
pct exec $vmid -- chown besu:besu /etc/besu/permissions-nodes.toml
REMOTE_SCRIPT
log_success "Container $vmid: files deployed"
done
# Cleanup
ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" \
"rm -f /tmp/permissions-nodes-fixed.toml"
}
# Main
main() {
echo "╔════════════════════════════════════════════════════════════════╗"
echo "║ FIX ENODE CONFIGS (EXTRACT FIRST 128 CHARS) ║"
echo "╚════════════════════════════════════════════════════════════════╝"
echo ""
# Process all containers
for vmid in 106 107 108 109 110 111 112 113 114 115 116 117; do
extract_and_fix_enodes "$vmid" || true
done
generate_permissions_toml
echo ""
log_info "Preview of generated files:"
echo ""
echo "=== static-nodes.json (container 106) ==="
cat "$WORK_DIR/static-nodes-106.json" 2>/dev/null | head -10 || echo "Not generated"
echo ""
echo "=== permissions-nodes.toml (first 20 lines) ==="
cat "$WORK_DIR/permissions-nodes.toml" | head -20
echo ""
read -p "Deploy corrected files to all containers? [y/N]: " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
deploy_files
log_success "All files deployed!"
log_info "Next: Restart Besu services on all containers"
else
log_info "Files available in $WORK_DIR for review"
fi
}
main "$@"

scripts/fix-enode-ip-mismatch.sh Executable file

@@ -0,0 +1,155 @@
#!/usr/bin/env bash
# Fix enode URL IP mismatches between static-nodes.json and permissions-nodes.toml
# Based on analysis: extract first 128 chars of node IDs and map to correct IPs
set -euo pipefail
# Container IP mapping
declare -A CONTAINER_IPS=(
[106]="192.168.11.13" # besu-validator-1
[107]="192.168.11.14" # besu-validator-2
[108]="192.168.11.15" # besu-validator-3
[109]="192.168.11.16" # besu-validator-4
[110]="192.168.11.18" # besu-validator-5
[111]="192.168.11.19" # besu-sentry-2
[112]="192.168.11.20" # besu-sentry-3
[113]="192.168.11.21" # besu-sentry-4
[114]="192.168.11.22" # besu-sentry-5
[115]="192.168.11.23" # besu-rpc-1
[116]="192.168.11.24" # besu-rpc-2
[117]="192.168.11.25" # besu-rpc-3
)
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Extract first 128 hex chars from enode URL (removes trailing zeros)
extract_node_id() {
local enode="$1"
local node_id_hex
# Extract hex part between enode:// and @
node_id_hex=$(echo "$enode" | sed -n 's|enode://\([a-fA-F0-9]*\)@.*|\1|p' | tr '[:upper:]' '[:lower:]')
# Take first 128 characters (removes trailing zeros padding)
echo "${node_id_hex:0:128}"
}
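# Illustrative example (hypothetical 130-char hex ID ending in '00' padding):
#   extract_node_id "enode://$(printf 'ab%.0s' {1..64})00@10.0.0.1:30303"
# prints only the first 128 hex characters, dropping the trailing '00'.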
# Generate corrected static-nodes.json for validators
generate_static_nodes() {
local vmid="$1"
local temp_file="/tmp/static-nodes-${vmid}.json"
log_info "Generating static-nodes.json for container $vmid"
# Get current static-nodes.json from container
local current_content
current_content=$(ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST:-192.168.11.10}" "pct exec $vmid -- cat /etc/besu/static-nodes.json 2>/dev/null" || echo '[]')
# Extract node IDs (first 128 chars) and map to validator IPs
python3 << PYEOF
import json
import re
# Read current static-nodes.json
try:
static_nodes = json.loads('''$current_content''')
except:
static_nodes = []
# Node IDs from current file (first 128 chars of each)
node_ids = []
for enode in static_nodes:
match = re.search(r'enode://([a-fA-F0-9]+)@', enode)
if match:
full_hex = match.group(1).lower()
node_id = full_hex[:128] # First 128 chars only
if len(node_id) == 128:
node_ids.append(node_id)
# Validator IPs (106-110)
validator_ips = [
"${CONTAINER_IPS[106]}",
"${CONTAINER_IPS[107]}",
"${CONTAINER_IPS[108]}",
"${CONTAINER_IPS[109]}",
"${CONTAINER_IPS[110]}",
]
# Generate new static-nodes.json with correct IPs
corrected_enodes = []
for i, node_id in enumerate(node_ids[:5]): # First 5 validators
if i < len(validator_ips):
enode = f"enode://{node_id}@{validator_ips[i]}:30303"
corrected_enodes.append(enode)
# Write corrected file
with open('$temp_file', 'w') as f:
json.dump(corrected_enodes, f, indent=2)
print(f"Generated static-nodes.json with {len(corrected_enodes)} validators")
PYEOF
echo "$temp_file"
}
echo "╔════════════════════════════════════════════════════════════════╗"
echo "║ FIXING ENODE URL IP MISMATCHES ║"
echo "╚════════════════════════════════════════════════════════════════╝"
echo ""
echo "This script will:"
echo "1. Extract valid 128-char node IDs (remove trailing zeros)"
echo "2. Map them to correct container IP addresses"
echo "3. Generate corrected static-nodes.json and permissions-nodes.toml"
echo "4. Copy to all containers"
echo ""
echo "NOTE: This is a template script. The actual fix requires:"
echo "- Verifying node IDs match actual Besu node keys"
echo "- Determining which node IDs belong to which containers"
echo "- Ensuring all nodes are included in permissions-nodes.toml"
echo ""
echo "Would you like to continue? (This will create corrected files)"
read -p "Press Enter to continue or Ctrl+C to cancel..."
# For now, demonstrate the concept
log_info "Extracting node IDs from container 107 (running validator)..."
ssh -o StrictHostKeyChecking=accept-new root@192.168.11.10 << 'REMOTE_SCRIPT'
vmid=107
if pct exec $vmid -- test -f /etc/besu/static-nodes.json 2>/dev/null; then
echo "Current static-nodes.json from container $vmid:"
pct exec $vmid -- cat /etc/besu/static-nodes.json | python3 -c "
import json, re, sys
data = json.load(sys.stdin)
print(f'Found {len(data)} enode URLs')
for i, enode in enumerate(data[:3]): # Show first 3
match = re.search(r'enode://([a-fA-F0-9]+)@([0-9.]+):', enode)
if match:
full_hex = match.group(1).lower()
ip = match.group(2)
node_id_128 = full_hex[:128]
print(f'Node {i+1}:')
print(f' Full hex length: {len(full_hex)} chars')
print(f' Node ID (first 128): {node_id_128[:32]}...{node_id_128[-32:]}')
print(f' Current IP: {ip}')
print(f' Has trailing zeros: {\"Yes\" if len(full_hex) > 128 else \"No\"}')
print()
"
else
echo "Container $vmid static-nodes.json not found"
fi
REMOTE_SCRIPT
log_info "Fix script template created. Next steps:"
log_info "1. Determine actual node IDs from Besu node keys"
log_info "2. Map node IDs to container IPs correctly"
log_info "3. Generate corrected files"
log_info "4. Deploy to all containers"

scripts/fix-enodes-besu-native.sh Executable file

@@ -0,0 +1,341 @@
#!/usr/bin/env bash
# Fix enode URLs using Besu's built-in commands
# This script generates correct enode URLs from actual Besu node keys
set -euo pipefail
# Container IP mapping
declare -A CONTAINER_IPS=(
[106]="192.168.11.13" # besu-validator-1
[107]="192.168.11.14" # besu-validator-2
[108]="192.168.11.15" # besu-validator-3
[109]="192.168.11.16" # besu-validator-4
[110]="192.168.11.18" # besu-validator-5
[111]="192.168.11.19" # besu-sentry-2
[112]="192.168.11.20" # besu-sentry-3
[113]="192.168.11.21" # besu-sentry-4
[114]="192.168.11.22" # besu-sentry-5
[115]="192.168.11.23" # besu-rpc-1
[116]="192.168.11.24" # besu-rpc-2
[117]="192.168.11.25" # besu-rpc-3
)
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
WORK_DIR="/tmp/besu-enode-fix-$$"
mkdir -p "$WORK_DIR"
cleanup() {
rm -rf "$WORK_DIR"
}
trap cleanup EXIT
# Extract enode from a running Besu node using admin_nodeInfo
extract_enode_from_running_node() {
local vmid="$1"
local ip="${CONTAINER_IPS[$vmid]}"
log_info "Extracting enode from container $vmid ($ip)..."
# Try to get enode via RPC (works if RPC is enabled)
local enode
enode=$(ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" \
"pct exec $vmid -- curl -s -X POST --data '{\"jsonrpc\":\"2.0\",\"method\":\"admin_nodeInfo\",\"params\":[],\"id\":1}' -H 'Content-Type: application/json' http://localhost:8545 2>/dev/null" | \
python3 -c "import sys, json; data = json.load(sys.stdin); print(data.get('result', {}).get('enode', ''))" 2>/dev/null || echo "")
if [[ -n "$enode" && "$enode" != "null" && "$enode" != "" ]]; then
echo "$enode"
return 0
fi
log_warn "Could not get enode via RPC for container $vmid, trying alternative method..."
return 1
}
# Extract node key path and generate enode using Besu CLI
extract_enode_from_nodekey() {
local vmid="$1"
local ip="${CONTAINER_IPS[$vmid]}"
log_info "Extracting enode from nodekey for container $vmid ($ip)..."
# Find nodekey file (Besu stores it in data directory)
local nodekey_path
nodekey_path=$(ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" \
"pct exec $vmid -- find /data/besu -name 'nodekey*' -o -name 'key.priv' 2>/dev/null | head -1" || echo "")
if [[ -z "$nodekey_path" ]]; then
# Try common locations
for path in "/data/besu/key" "/data/besu/nodekey" "/keys/besu/nodekey"; do
if ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" "pct exec $vmid -- test -f $path 2>/dev/null"; then
nodekey_path="$path"
break
fi
done
fi
if [[ -z "$nodekey_path" ]]; then
log_error "Nodekey not found for container $vmid"
return 1
fi
log_info "Found nodekey at: $nodekey_path"
# Use Besu to export public key in enode format
local enode
enode=$(ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" \
"pct exec $vmid -- /opt/besu/bin/besu public-key export --node-private-key-file=\"$nodekey_path\" --format=enode 2>/dev/null" || echo "")
if [[ -n "$enode" ]]; then
# Replace IP in enode with actual container IP
enode=$(echo "$enode" | sed "s/@[0-9.]*:/@${ip}:/")
echo "$enode"
return 0
fi
log_error "Failed to generate enode from nodekey for container $vmid"
return 1
}
# Validate enode format
validate_enode() {
local enode="$1"
# Extract node ID part
local node_id
node_id=$(echo "$enode" | sed 's|^enode://||' | cut -d'@' -f1 | tr '[:upper:]' '[:lower:]')
# Check length
if [[ ${#node_id} -ne 128 ]]; then
log_error "Invalid enode: node ID length is ${#node_id}, expected 128"
return 1
fi
# Check it's valid hex
if ! echo "$node_id" | grep -qE '^[0-9a-f]{128}$'; then
log_error "Invalid enode: node ID contains non-hex characters"
return 1
fi
log_success "Enode validated: ${node_id:0:16}...${node_id: -16} (128 chars)"
return 0
}
# Collect all enodes from containers
collect_enodes() {
log_info "Collecting enodes from all containers..."
echo "" > "$WORK_DIR/enodes.txt"
local validators=()
local sentries=()
local rpcs=()
for vmid in 106 107 108 109 110; do
local hostname ip enode
hostname=$(ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" \
"pct config $vmid 2>/dev/null | grep '^hostname:' | cut -d' ' -f2" || echo "unknown")
ip="${CONTAINER_IPS[$vmid]}"
# Try running node first, then nodekey
enode=$(extract_enode_from_running_node "$vmid" 2>/dev/null || \
extract_enode_from_nodekey "$vmid" 2>/dev/null || echo "")
if [[ -n "$enode" ]]; then
if validate_enode "$enode"; then
echo "$vmid|$hostname|$ip|$enode|validator" >> "$WORK_DIR/enodes.txt"
validators+=("$enode")
log_success "Container $vmid ($hostname): enode extracted"
else
log_error "Container $vmid: invalid enode format"
fi
else
log_warn "Container $vmid: could not extract enode"
fi
done
for vmid in 111 112 113 114; do
local hostname ip enode
hostname=$(ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" \
"pct config $vmid 2>/dev/null | grep '^hostname:' | cut -d' ' -f2" || echo "unknown")
ip="${CONTAINER_IPS[$vmid]}"
enode=$(extract_enode_from_running_node "$vmid" 2>/dev/null || \
extract_enode_from_nodekey "$vmid" 2>/dev/null || echo "")
if [[ -n "$enode" ]]; then
if validate_enode "$enode"; then
echo "$vmid|$hostname|$ip|$enode|sentry" >> "$WORK_DIR/enodes.txt"
sentries+=("$enode")
log_success "Container $vmid ($hostname): enode extracted"
fi
fi
done
for vmid in 115 116 117; do
local hostname ip enode
hostname=$(ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" \
"pct config $vmid 2>/dev/null | grep '^hostname:' | cut -d' ' -f2" || echo "unknown")
ip="${CONTAINER_IPS[$vmid]}"
enode=$(extract_enode_from_running_node "$vmid" 2>/dev/null || \
extract_enode_from_nodekey "$vmid" 2>/dev/null || echo "")
if [[ -n "$enode" ]]; then
if validate_enode "$enode"; then
echo "$vmid|$hostname|$ip|$enode|rpc" >> "$WORK_DIR/enodes.txt"
rpcs+=("$enode")
log_success "Container $vmid ($hostname): enode extracted"
fi
fi
done
log_info "Collected ${#validators[@]} validator enodes, ${#sentries[@]} sentry enodes, ${#rpcs[@]} RPC enodes"
}
# Generate corrected static-nodes.json (validators only)
generate_static_nodes_json() {
log_info "Generating corrected static-nodes.json..."
python3 << PYEOF
import json
# Read collected enodes
enodes = []
with open('$WORK_DIR/enodes.txt', 'r') as f:
for line in f:
line = line.strip()
if not line:
continue
parts = line.split('|')
if len(parts) >= 5 and parts[4] == 'validator':
enodes.append(parts[3]) # enode URL
# Generate JSON
static_nodes = sorted(set(enodes)) # Remove duplicates, sort
with open('$WORK_DIR/static-nodes.json', 'w') as f:
json.dump(static_nodes, f, indent=2)
print(f"Generated static-nodes.json with {len(static_nodes)} validators")
PYEOF
log_success "Generated static-nodes.json"
}
# Generate corrected permissions-nodes.toml (all nodes)
generate_permissions_nodes_toml() {
log_info "Generating corrected permissions-nodes.toml..."
python3 << PYEOF
import re
# Read collected enodes
enodes = []
with open('$WORK_DIR/enodes.txt', 'r') as f:
for line in f:
line = line.strip()
if not line:
continue
parts = line.split('|')
if len(parts) >= 4:
enodes.append(parts[3]) # enode URL
# Remove duplicates and sort
enodes = sorted(set(enodes))
# Generate TOML
toml_content = """# Node Permissioning Configuration
# Lists nodes that are allowed to connect to this node
# Generated using Besu native commands
# All validators, sentries, and RPC nodes are included
nodes-allowlist=[
"""
for enode in enodes:
toml_content += f' "{enode}",\n'
# Remove trailing comma
toml_content = toml_content.rstrip(',\n') + '\n]'
with open('$WORK_DIR/permissions-nodes.toml', 'w') as f:
f.write(toml_content)
print(f"Generated permissions-nodes.toml with {len(enodes)} nodes")
PYEOF
log_success "Generated permissions-nodes.toml"
}
# Deploy corrected files to containers
deploy_files() {
log_info "Deploying corrected files to containers..."
# Copy static-nodes.json to all containers
for vmid in 106 107 108 109 110 111 112 113 114 115 116 117; do
if ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" "pct status $vmid 2>/dev/null | grep -q running"; then
log_info "Deploying to container $vmid..."
scp -o StrictHostKeyChecking=accept-new "$WORK_DIR/static-nodes.json" \
"root@${PROXMOX_HOST}:/tmp/static-nodes-${vmid}.json"
scp -o StrictHostKeyChecking=accept-new "$WORK_DIR/permissions-nodes.toml" \
"root@${PROXMOX_HOST}:/tmp/permissions-nodes-${vmid}.toml"
ssh -o StrictHostKeyChecking=accept-new "root@${PROXMOX_HOST}" << REMOTE_SCRIPT
pct push $vmid /tmp/static-nodes-${vmid}.json /etc/besu/static-nodes.json
pct push $vmid /tmp/permissions-nodes-${vmid}.toml /etc/besu/permissions-nodes.toml
pct exec $vmid -- chown besu:besu /etc/besu/static-nodes.json /etc/besu/permissions-nodes.toml
rm -f /tmp/static-nodes-${vmid}.json /tmp/permissions-nodes-${vmid}.toml
REMOTE_SCRIPT
log_success "Container $vmid: files deployed"
else
log_warn "Container $vmid: not running, skipping"
fi
done
}
# Main execution
main() {
echo "╔════════════════════════════════════════════════════════════════╗"
echo "║ FIX ENODE URLs USING BESU NATIVE COMMANDS ║"
echo "╚════════════════════════════════════════════════════════════════╝"
echo ""
collect_enodes
generate_static_nodes_json
generate_permissions_nodes_toml
echo ""
log_info "Preview of generated files:"
echo ""
echo "=== static-nodes.json ==="
cat "$WORK_DIR/static-nodes.json" | head -10
echo ""
echo "=== permissions-nodes.toml (first 20 lines) ==="
cat "$WORK_DIR/permissions-nodes.toml" | head -20
echo ""
read -p "Deploy corrected files to all containers? [y/N]: " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
deploy_files
log_success "All files deployed successfully!"
log_info "Next step: Restart Besu services on all containers"
else
log_info "Files are in $WORK_DIR (will be cleaned up on exit)"
log_info "Review them and run deployment manually if needed"
fi
}
main "$@"

scripts/fix-enodes-final.py Normal file

@@ -0,0 +1,107 @@
#!/usr/bin/env python3
"""
Fix Besu enode URLs by extracting first 128 chars and mapping to correct IPs
"""
import json
import re
import sys
# Container IP mapping (VMID -> IP)
# Note: IP addresses will be assigned via DHCP, this is a placeholder mapping
CONTAINER_IPS = {
1000: "192.168.11.100", # validator-1 (DHCP assigned)
1001: "192.168.11.101", # validator-2 (DHCP assigned)
1002: "192.168.11.102", # validator-3 (DHCP assigned)
1003: "192.168.11.103", # validator-4 (DHCP assigned)
1004: "192.168.11.104", # validator-5 (DHCP assigned)
1500: "192.168.11.150", # sentry-1 (DHCP assigned)
1501: "192.168.11.151", # sentry-2 (DHCP assigned)
1502: "192.168.11.152", # sentry-3 (DHCP assigned)
1503: "192.168.11.153", # sentry-4 (DHCP assigned)
2500: "192.168.11.250", # rpc-1 (DHCP assigned)
2501: "192.168.11.251", # rpc-2 (DHCP assigned)
2502: "192.168.11.252", # rpc-3 (DHCP assigned)
}
# Source validator node IDs (from source static-nodes.json, already 128 chars)
SOURCE_VALIDATOR_NODE_IDS = [
"889ba317e10114a035ef82248a26125fbc00b1cd65fb29a2106584dddd025aa3dda14657bc423e5e8bf7d91a9858e85a",
"2a827fcff14e548b761d18d0d7177745799d880be5ac54fb17d73aa06b105559527c97fec09005ac050e1363f16cb052",
"aeec2f2f7ee15da9bdbf11261d1d1e5526d2d1ca03d66393e131cc70dcea856a9a01ef3488031b769025447e36e14f4e",
"0f647faab18eb3cd1a334ddf397011af768b3311400923b670d9536f5a937aa04071801de095100142da03b233adb5db",
"037c0feeb799e7e98bc99f7c21b8993254cc48f3251c318b211a76aa40d9c373da8c0a1df60804b327b43a222940ebf0"
]
# Validator VMIDs (new ranges)
VALIDATOR_VMIDS = [1000, 1001, 1002, 1003, 1004]
def extract_node_id(enode_url):
"""Extract node ID from enode URL, taking first 128 chars"""
match = re.match(r'enode://([0-9a-fA-F]+)@', enode_url)
if not match:
return None
node_id_hex = match.group(1).lower()
# Take first 128 chars (removes trailing zeros padding)
return node_id_hex[:128] if len(node_id_hex) >= 128 else node_id_hex
def normalize_node_id(node_id):
"""Ensure node ID is exactly 128 hex characters (take first 128 if longer)"""
node_id = node_id.lower().strip()
if len(node_id) > 128:
# Take first 128 chars (removes trailing padding)
return node_id[:128]
elif len(node_id) < 128:
# If shorter, it's invalid - return as-is (will fail validation)
return node_id
return node_id
def create_enode(node_id, ip, port=30303):
"""Create enode URL from node ID and IP"""
# Source node IDs are already 128 chars, use as-is
normalized = normalize_node_id(node_id)
if len(normalized) != 128:
raise ValueError(f"Node ID must be exactly 128 chars, got {len(normalized)}")
return f"enode://{normalized}@{ip}:{port}"
def main():
# Map source validator node IDs to container IPs
validator_enodes = []
for i, node_id in enumerate(SOURCE_VALIDATOR_NODE_IDS):
vmid = VALIDATOR_VMIDS[i]
ip = CONTAINER_IPS[vmid]
enode = create_enode(node_id, ip)
validator_enodes.append(enode)
print(f"Validator {vmid} ({ip}): {enode[:80]}...", file=sys.stderr)
# For sentries and RPC nodes, we need to extract from deployed permissions-nodes.toml
# For now, we'll include validators only in static-nodes.json
# and generate permissions-nodes.toml with validators only (will need to add sentries/RPC later)
# Generate static-nodes.json (validators only)
static_nodes = sorted(validator_enodes)
with open('static-nodes.json', 'w') as f:
json.dump(static_nodes, f, indent=2)
print(f"Generated static-nodes.json with {len(static_nodes)} validators", file=sys.stderr)
# Generate permissions-nodes.toml (validators only for now)
toml_content = """# Node Permissioning Configuration
# Lists nodes that are allowed to connect to this node
# Generated from source static-nodes.json (first 128 chars of node IDs)
# Validators only (sentries and RPC nodes need to be added separately)
nodes-allowlist=[
"""
for enode in sorted(validator_enodes):
toml_content += f' "{enode}",\n'
toml_content = toml_content.rstrip(',\n') + '\n]'
with open('permissions-nodes.toml', 'w') as f:
f.write(toml_content)
print(f"Generated permissions-nodes.toml with {len(validator_enodes)} validators", file=sys.stderr)
print("", file=sys.stderr)
print("NOTE: Sentries and RPC nodes need to be added to permissions-nodes.toml", file=sys.stderr)
print(" after extracting their node IDs from running Besu instances.", file=sys.stderr)
if __name__ == '__main__':
main()
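The `create_enode` contract in the script above can be exercised in isolation; this restates its validation logic with a synthetic 128-char placeholder ID (not a real node key):

```python
def create_enode(node_id, ip, port=30303):
    """Build an enode URL, refusing node IDs that are not exactly 128 hex chars."""
    node_id = node_id.lower().strip()[:128]
    if len(node_id) != 128:
        raise ValueError(f"Node ID must be exactly 128 chars, got {len(node_id)}")
    return f"enode://{node_id}@{ip}:{port}"

enode = create_enode("ab" * 64, "192.168.11.100")
assert enode == "enode://" + "ab" * 64 + "@192.168.11.100:30303"
```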


@@ -0,0 +1,97 @@
#!/usr/bin/env bash
# Fix genesis.json to ensure it contains exactly 5 validators
# This script checks if genesis.json needs to be regenerated
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM_PROJECT="${SMOM_PROJECT:-$PROJECT_ROOT/../smom-dbis-138}"
GENESIS_FILE="$SMOM_PROJECT/config/genesis.json"
log_info() { echo "[INFO] $1"; }
log_success() { echo "[✓] $1"; }
log_warn() { echo "[⚠] $1"; }
log_error() { echo "[✗] $1"; }
log_info "=== Genesis.json Validator Check ==="
echo ""
if [[ ! -f "$GENESIS_FILE" ]]; then
log_error "genesis.json not found: $GENESIS_FILE"
exit 1
fi
# Extract validator addresses from keys
log_info "Extracting validator addresses from keys..."
VALIDATOR_ADDRESSES=()
for i in 1 2 3 4 5; do
KEY_DIR="$SMOM_PROJECT/keys/validators/validator-$i"
if [[ -f "$KEY_DIR/address.txt" ]]; then
ADDR=$(tr -d '\n' < "$KEY_DIR/address.txt" | tr '[:upper:]' '[:lower:]' | sed 's/^0x//')
VALIDATOR_ADDRESSES+=("$ADDR")
log_success "validator-$i: 0x${ADDR:0:10}..."
else
log_error "validator-$i address.txt not found"
exit 1
fi
done
if [[ ${#VALIDATOR_ADDRESSES[@]} -ne 5 ]]; then
log_error "Expected 5 validator addresses, found ${#VALIDATOR_ADDRESSES[@]}"
exit 1
fi
log_success "Found all 5 validator addresses"
echo ""
# Check genesis.json
log_info "Analyzing genesis.json extraData..."
VALIDATOR_COUNT=$(python3 << PYEOF
import json
with open('$GENESIS_FILE') as f:
g = json.load(f)
extra = g.get('extraData', '')
hex_data = extra[2:] if extra.startswith('0x') else extra
# Skip vanity (64 hex chars = 32 bytes)
if len(hex_data) > 64:
validator_data = hex_data[64:]
count = len(validator_data) // 40
print(count)
else:
print(0)
PYEOF
)
log_info "Validators in genesis.json extraData: $VALIDATOR_COUNT"
log_info "Expected: 5"
if [[ $VALIDATOR_COUNT -eq 5 ]]; then
log_success "✓ Genesis.json contains correct number of validators (5)"
echo ""
log_info "Genesis.json is correct - no action needed"
exit 0
elif [[ $VALIDATOR_COUNT -eq 0 ]]; then
log_warn "Could not determine validator count from extraData"
log_warn "extraData may be in RLP-encoded format"
echo ""
log_info "Note: QBFT uses dynamic validator management via validator contract"
log_info "The extraData field may contain RLP-encoded data that requires special parsing"
exit 0
else
log_error "✗ Genesis.json contains $VALIDATOR_COUNT validators (expected 5)"
echo ""
log_warn "WARNING: Genesis.json needs to be regenerated with the correct 5 validators"
log_warn "Current validator addresses:"
for i in "${!VALIDATOR_ADDRESSES[@]}"; do
echo " validator-$((i+1)): 0x${VALIDATOR_ADDRESSES[$i]}"
done
echo ""
log_info "To fix: Regenerate genesis.json using quorum-genesis-tool or besu"
log_info "with the 5 validator addresses listed above"
exit 1
fi
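The validator count above relies on the plain concatenated extraData layout: a 32-byte vanity prefix (64 hex chars) followed by one 20-byte address (40 hex chars) per validator. A standalone sketch of that arithmetic; as the script itself warns, QBFT genesis files typically use RLP-encoded extraData, where this simple division does not apply:

```python
def count_extra_data_validators(extra_data):
    """Count 20-byte addresses after the 32-byte vanity prefix (plain layout only)."""
    hex_data = extra_data[2:] if extra_data.startswith('0x') else extra_data
    if len(hex_data) <= 64:
        return 0
    return (len(hex_data) - 64) // 40

# 5 validators: 64 vanity chars + 5 * 40 address chars
extra = '0x' + '00' * 32 + 'aa' * 20 * 5
assert count_extra_data_validators(extra) == 5
```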

scripts/fix-rpc-2500.sh Executable file

@@ -0,0 +1,267 @@
#!/usr/bin/env bash
# Fix RPC-01 (VMID 2500) configuration issues
# Usage: ./fix-rpc-2500.sh
set -e
VMID=2500
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Check if running on Proxmox host
if ! command -v pct &>/dev/null; then
log_error "This script must be run on Proxmox host (pct command not found)"
exit 1
fi
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Fixing RPC-01 (VMID $VMID)"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
# 1. Check container status
log_info "1. Checking container status..."
if ! pct status $VMID 2>/dev/null | grep -q "running"; then
log_warn " Container is not running. Starting..."
pct start $VMID
sleep 5
fi
# 2. Stop service
log_info ""
log_info "2. Stopping Besu RPC service..."
pct exec $VMID -- systemctl stop besu-rpc.service 2>/dev/null || true
# 3. Check which config file the service expects
log_info ""
log_info "3. Checking service configuration..."
SERVICE_FILE="/etc/systemd/system/besu-rpc.service"
CONFIG_PATH=$(pct exec $VMID -- grep "config-file" "$SERVICE_FILE" 2>/dev/null | grep -oP 'config-file=\K[^\s]+' || echo "")
log_info " Service expects config: $CONFIG_PATH"
# Determine correct config file name
if echo "$CONFIG_PATH" | grep -q "config-rpc-public"; then
CONFIG_FILE="/etc/besu/config-rpc-public.toml"
elif echo "$CONFIG_PATH" | grep -q "config-rpc-core"; then
CONFIG_FILE="/etc/besu/config-rpc-core.toml"
else
CONFIG_FILE="/etc/besu/config-rpc.toml"
fi
log_info " Using config file: $CONFIG_FILE"
# 4. Create config file if missing
log_info ""
log_info "4. Ensuring config file exists..."
if ! pct exec $VMID -- test -f "$CONFIG_FILE" 2>/dev/null; then
log_warn " Config file missing. Creating from template..."
# Try to find template
TEMPLATE_FILE="$CONFIG_FILE.template"
if pct exec $VMID -- test -f "$TEMPLATE_FILE" 2>/dev/null; then
pct exec $VMID -- cp "$TEMPLATE_FILE" "$CONFIG_FILE"
log_success " Created from template"
else
log_error " Template not found. Creating minimal config..."
# Create minimal valid config
pct exec $VMID -- bash -c "cat > $CONFIG_FILE <<'EOF'
data-path=\"/data/besu\"
genesis-file=\"/genesis/genesis.json\"
network-id=138
p2p-host=\"0.0.0.0\"
p2p-port=30303
miner-enabled=false
sync-mode=\"FULL\"
fast-sync-min-peers=2
# RPC Configuration
rpc-http-enabled=true
rpc-http-host=\"0.0.0.0\"
rpc-http-port=8545
rpc-http-api=[\"ETH\",\"NET\",\"WEB3\"]
rpc-http-cors-origins=[\"*\"]
rpc-ws-enabled=true
rpc-ws-host=\"0.0.0.0\"
rpc-ws-port=8546
rpc-ws-api=[\"ETH\",\"NET\",\"WEB3\"]
rpc-ws-origins=[\"*\"]
# Metrics
metrics-enabled=true
metrics-port=9545
metrics-host=\"0.0.0.0\"
metrics-push-enabled=false
logging=\"INFO\"
permissions-nodes-config-file-enabled=true
permissions-nodes-config-file=\"/permissions/permissions-nodes.toml\"
permissions-accounts-config-file-enabled=false
tx-pool-max-size=8192
tx-pool-price-bump=10
tx-pool-retention-hours=6
static-nodes-file=\"/genesis/static-nodes.json\"
discovery-enabled=true
privacy-enabled=false
rpc-tx-feecap=\"0x0\"
max-peers=25
EOF"
log_success " Created minimal config"
fi
# Set ownership
pct exec $VMID -- chown besu:besu "$CONFIG_FILE"
else
log_success " Config file exists"
fi
# 5. Remove deprecated options
log_info ""
log_info "5. Removing deprecated configuration options..."
DEPRECATED_OPTS=(
"log-destination"
"max-remote-initiated-connections"
"trie-logs-enabled"
"accounts-enabled"
"database-path"
"rpc-http-host-allowlist"
)
for opt in "${DEPRECATED_OPTS[@]}"; do
if pct exec $VMID -- grep -q "^$opt" "$CONFIG_FILE" 2>/dev/null; then
log_info " Removing: $opt"
pct exec $VMID -- sed -i "/^$opt/d" "$CONFIG_FILE"
fi
done
log_success " Deprecated options removed"
# 6. Ensure RPC is enabled
log_info ""
log_info "6. Verifying RPC is enabled..."
if ! pct exec $VMID -- grep -q "rpc-http-enabled=true" "$CONFIG_FILE" 2>/dev/null; then
log_warn " RPC HTTP not enabled. Enabling..."
pct exec $VMID -- sed -i 's/rpc-http-enabled=false/rpc-http-enabled=true/' "$CONFIG_FILE" 2>/dev/null || \
pct exec $VMID -- bash -c "echo 'rpc-http-enabled=true' >> $CONFIG_FILE"
fi
if ! pct exec $VMID -- grep -q "rpc-ws-enabled=true" "$CONFIG_FILE" 2>/dev/null; then
log_warn " RPC WS not enabled. Enabling..."
pct exec $VMID -- sed -i 's/rpc-ws-enabled=false/rpc-ws-enabled=true/' "$CONFIG_FILE" 2>/dev/null || \
pct exec $VMID -- bash -c "echo 'rpc-ws-enabled=true' >> $CONFIG_FILE"
fi
log_success " RPC endpoints verified"
# 7. Update service file if needed
log_info ""
log_info "7. Verifying service file configuration..."
if ! pct exec $VMID -- grep -q "config-file=$CONFIG_FILE" "$SERVICE_FILE" 2>/dev/null; then
log_warn " Service file references wrong config. Updating..."
pct exec $VMID -- sed -i "s|--config-file=.*|--config-file=$CONFIG_FILE|" "$SERVICE_FILE"
pct exec $VMID -- systemctl daemon-reload
log_success " Service file updated"
else
log_success " Service file is correct"
fi
# 8. Verify required files exist
log_info ""
log_info "8. Checking required files..."
REQUIRED_FILES=(
"/genesis/genesis.json"
"/genesis/static-nodes.json"
"/permissions/permissions-nodes.toml"
)
MISSING_FILES=()
for file in "${REQUIRED_FILES[@]}"; do
if pct exec $VMID -- test -f "$file" 2>/dev/null; then
log_success " Found: $file"
else
log_error " Missing: $file"
MISSING_FILES+=("$file")
fi
done
if [ ${#MISSING_FILES[@]} -gt 0 ]; then
log_warn " Some required files are missing. Service may not start properly."
log_info " Missing files need to be copied from source project."
fi
# 9. Start service
log_info ""
log_info "9. Starting Besu RPC service..."
pct exec $VMID -- systemctl start besu-rpc.service
sleep 5
# 10. Check service status
log_info ""
log_info "10. Checking service status..."
SERVICE_STATUS=$(pct exec $VMID -- systemctl is-active besu-rpc.service 2>&1 || echo "unknown")
if [ "$SERVICE_STATUS" = "active" ]; then
log_success " Service is active!"
else
log_error " Service is not active. Status: $SERVICE_STATUS"
log_info " Recent logs:"
pct exec $VMID -- journalctl -u besu-rpc.service -n 20 --no-pager 2>&1 | tail -20
fi
# 11. Test RPC endpoint
log_info ""
log_info "11. Testing RPC endpoint..."
sleep 3
RPC_TEST=$(pct exec $VMID -- timeout 5 curl -s -X POST http://localhost:8545 \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' 2>&1 || echo "FAILED")
if echo "$RPC_TEST" | grep -q "result"; then
BLOCK_NUM=$(echo "$RPC_TEST" | grep -oP '"result":"\K[^"]+' | head -1)
log_success " RPC endpoint is responding!"
log_info " Current block: $BLOCK_NUM"
else
log_warn " RPC endpoint not responding yet (may need more time to start)"
log_info " Response: $RPC_TEST"
fi
# Summary
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Fix Summary"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
if [ "$SERVICE_STATUS" = "active" ]; then
log_success "✅ RPC-01 (VMID $VMID) is now running!"
log_info ""
log_info "Next steps:"
log_info "1. Monitor logs: pct exec $VMID -- journalctl -u besu-rpc.service -f"
log_info "2. Test RPC: curl -X POST http://192.168.11.250:8545 -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}'"
else
log_error "❌ Service is still not active"
log_info ""
log_info "Troubleshooting:"
log_info "1. Check logs: pct exec $VMID -- journalctl -u besu-rpc.service -n 50"
log_info "2. Verify config: pct exec $VMID -- cat $CONFIG_FILE"
log_info "3. Check for missing files (genesis.json, static-nodes.json, etc.)"
fi
echo ""

scripts/fix-token-reference.sh Executable file

@@ -0,0 +1,72 @@
#!/bin/bash
# Fix token reference - checks if token needs to be updated
# This script helps identify if the token value is still a placeholder
ENV_FILE="$HOME/.env"
if [ ! -f "$ENV_FILE" ]; then
echo "❌ .env file not found: $ENV_FILE"
exit 1
fi
# Check current token value
TOKEN_VALUE=$(grep "^PROXMOX_TOKEN_VALUE=" "$ENV_FILE" | cut -d'=' -f2- | tr -d '"' | tr -d "'")
PLACEHOLDERS=(
"your-token-secret-here"
"your-token-secret"
"your-token-secret-value"
""
)
IS_PLACEHOLDER=false
for placeholder in "${PLACEHOLDERS[@]}"; do
if [ "$TOKEN_VALUE" = "$placeholder" ]; then
IS_PLACEHOLDER=true
break
fi
done
if [ "$IS_PLACEHOLDER" = true ]; then
echo "⚠️ Token value is still a placeholder"
echo ""
echo "Current value: $TOKEN_VALUE"
echo ""
echo "To fix:"
echo " 1. Run: ./scripts/update-token.sh"
echo " 2. Or manually edit: $ENV_FILE"
echo " Change PROXMOX_TOKEN_VALUE to the actual token secret"
echo ""
echo "The token was created with ID: bff429d3-f408-4139-807a-7bf163525275"
echo "You need the SECRET value (shown only once when token was created)"
exit 1
else
TOKEN_LEN=${#TOKEN_VALUE}
if [ "$TOKEN_LEN" -lt 20 ]; then
echo "⚠️  Token value seems too short ($TOKEN_LEN chars)"
echo "   Expected: 36 characters (UUID format)"
else
echo "✅ Token value appears configured ($TOKEN_LEN characters)"
echo " Testing connection..."
# Test connection
source scripts/load-env.sh
load_env_file
API_RESPONSE=$(curl -k -s -w "\n%{http_code}" -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT:-8006}/api2/json/version" 2>&1)
HTTP_CODE=$(echo "$API_RESPONSE" | tail -1)
if [ "$HTTP_CODE" = "200" ]; then
echo "✅ API connection successful!"
exit 0
else
echo "❌ API connection failed (HTTP $HTTP_CODE)"
echo " Token may be incorrect or expired"
exit 1
fi
fi
fi
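The length heuristic above could be tightened to a shape check, since the token secret is expected in UUID format; `looks_like_token_secret` is an illustrative helper (the sample ID below is the token ID already printed by the script, not a secret):

```python
import re

# 8-4-4-4-12 lowercase hex groups, the canonical UUID shape.
UUID_RE = re.compile(r'^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$')

def looks_like_token_secret(value):
    """True when the value matches the UUID shape expected for the token secret."""
    return UUID_RE.match(value.lower()) is not None

assert looks_like_token_secret("bff429d3-f408-4139-807a-7bf163525275")
assert not looks_like_token_secret("your-token-secret-here")
```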

scripts/generate-review-report.sh Executable file

@@ -0,0 +1,113 @@
#!/bin/bash
REPORT="$PROJECT_ROOT/logs/comprehensive-review-$(date +%Y%m%d-%H%M%S).md"
mkdir -p logs
cat > "$REPORT" << 'REPORTEOF'
# Comprehensive Project Review Report
Generated: $(date '+%Y-%m-%d %H:%M:%S')
## Summary
This report identifies inconsistencies, gaps, and dependency issues across the project.
---
## 1. VMID Consistency
### Expected VMID Ranges
- **Validators**: 1000-1004 (5 nodes)
- **Sentries**: 1500-1503 (4 nodes)
- **RPC**: 2500-2502 (3 nodes)
### Files with Old VMID References (106-117)
REPORTEOF
grep -rE "\b(106|107|108|109|110|111|112|113|114|115|116|117)\b" \
smom-dbis-138-proxmox/ docs/ 2>/dev/null | \
grep -v node_modules | grep -v ".git" | \
grep -v "EXPECTED_CONTAINERS.md" | \
grep -v "VMID_ALLOCATION.md" | \
grep -v "HISTORICAL" | \
cut -d: -f1 | sort -u | sed 's|^|- `|;s|$|`|' >> "$REPORT"
cat >> "$REPORT" << 'REPORTEOF'
---
## 2. IP Address Consistency
### Expected IP Range
- Base subnet: 192.168.11.0/24
- Validators: 192.168.11.100-104
- Sentries: 192.168.11.150-153
- RPC: 192.168.11.250-252
### Files with Old IP References (10.3.1.X)
REPORTEOF
grep -rE "10\.3\.1\." smom-dbis-138-proxmox/ docs/ 2>/dev/null | \
grep -v node_modules | grep -v ".git" | \
cut -d: -f1 | sort -u | sed 's|^|- `|;s|$|`|' >> "$REPORT"
cat >> "$REPORT" << 'REPORTEOF'
---
## 3. Dependencies
### Required Tools Status
REPORTEOF
for tool in pct jq sshpass timeout openssl curl wget; do
if command -v "$tool" >/dev/null 2>&1; then
echo "- ✅ $tool" >> "$REPORT"
else
echo "- ❌ $tool - MISSING" >> "$REPORT"
fi
done
cat >> "$REPORT" << 'REPORTEOF'
### Configuration Files Status
REPORTEOF
for file in "smom-dbis-138-proxmox/config/proxmox.conf" "smom-dbis-138-proxmox/config/network.conf"; do
if [[ -f "$file" ]]; then
echo "- ✅ \`$file\`" >> "$REPORT"
else
echo "- ❌ Missing: \`$file\`" >> "$REPORT"
fi
done
cat >> "$REPORT" << 'REPORTEOF'
---
## 4. Documentation Issues
### Documents with Outdated References
- \`docs/EXPECTED_CONTAINERS.md\` - Contains old VMID ranges (106-117)
- \`docs/VMID_ALLOCATION.md\` - Contains historical VMID ranges (1100-1122)
These are historical/migration documents and may be kept for reference but should be clearly marked.
---
## Recommendations
1. Update active documentation files to use current VMID ranges
2. Update IP address references from 10.3.1.X to 192.168.11.X
3. Mark historical documentation files clearly
4. Ensure all required tools are documented and available
5. Verify configuration consistency across all scripts
REPORTEOF
echo "Report generated: $REPORT"
cat "$REPORT"


@@ -0,0 +1,44 @@
# Get Your Cloudflare Global API Key
The current API key in your `.env` file is not authenticating correctly. Follow these steps:
## Option 1: Get Global API Key (Current Method)
1. Go to: https://dash.cloudflare.com/profile/api-tokens
2. Scroll down to **API Keys** section
3. Click **View** next to **Global API Key**
4. Enter your Cloudflare password
5. Copy the **Global API Key** (40 characters, alphanumeric)
6. Update `.env`:
```
CLOUDFLARE_API_KEY="your-global-api-key-here"
CLOUDFLARE_EMAIL="theoracle@defi-oracle.io"
```
## Option 2: Create API Token (Recommended - More Secure)
1. Go to: https://dash.cloudflare.com/profile/api-tokens
2. Click **Create Token**
3. Click **Edit zone DNS** template OR create **Custom token** with:
- **Permissions:**
- **Zone** → **DNS** → **Edit**
- **Account** → **Cloudflare Tunnel** → **Edit**
- **Zone Resources:** Include → Specific zone → `d-bis.org`
- **Account Resources:** Include → Specific account → Your account
4. Click **Continue to summary** → **Create Token**
5. **Copy the token immediately** (you won't see it again!)
6. Update `.env`:
```
CLOUDFLARE_API_TOKEN="your-api-token-here"
# Comment out or remove CLOUDFLARE_API_KEY and CLOUDFLARE_EMAIL
```
## Verify Your API Key
After updating, test with:
```bash
./scripts/test-cloudflare-api.sh
```
You should see: `✓ API Key works! Email: theoracle@defi-oracle.io`
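For a quick manual check outside the helper script, Cloudflare also exposes a token-verification endpoint. The snippet below is a sketch, not project tooling: it assumes you chose Option 2 and exported `CLOUDFLARE_API_TOKEN` in your shell.

```bash
#!/usr/bin/env bash
# Verify an API token directly against Cloudflare's v4 API.
CF_API="https://api.cloudflare.com/client/v4"
if [[ -n "${CLOUDFLARE_API_TOKEN:-}" ]]; then
  # A valid token returns JSON containing "success": true
  curl -s -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" \
    "${CF_API}/user/tokens/verify"
else
  echo "CLOUDFLARE_API_TOKEN is not set; export it before verifying"
fi
```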


@@ -0,0 +1,290 @@
#!/usr/bin/env bash
# Install and configure Nginx on RPC containers with domain-specific SSL on port 443
# Domain mappings:
# rpc-http-pub.d-bis.org → 2501 (HTTP RPC)
# rpc-ws-pub.d-bis.org → 2501 (WebSocket RPC)
# rpc-http-prv.d-bis.org → 2502 (HTTP RPC)
# rpc-ws-prv.d-bis.org → 2502 (WebSocket RPC)
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
info() { echo -e "${GREEN}[INFO]${NC} $1"; }
warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Domain mappings
declare -A RPC_CONFIG=(
[2501_HTTP]="rpc-http-pub.d-bis.org"
[2501_WS]="rpc-ws-pub.d-bis.org"
[2502_HTTP]="rpc-http-prv.d-bis.org"
[2502_WS]="rpc-ws-prv.d-bis.org"
)
declare -A RPC_IPS=(
[2501]="192.168.11.251"
[2502]="192.168.11.252"
)
declare -A RPC_HOSTNAMES=(
[2501]="besu-rpc-2"
[2502]="besu-rpc-3"
)
VMIDS=(2501 2502)
info "Installing Nginx on RPC containers with domain configuration..."
info "Proxmox Host: $PROXMOX_HOST"
info "Containers: ${VMIDS[*]}"
echo ""
# Function to create Nginx config
create_nginx_config() {
local vmid=$1
local http_domain="${RPC_CONFIG[${vmid}_HTTP]}"
local ws_domain="${RPC_CONFIG[${vmid}_WS]}"
local hostname="${RPC_HOSTNAMES[$vmid]}"
local ip="${RPC_IPS[$vmid]}"
local config="# HTTP to HTTPS redirect
server {
listen 80;
listen [::]:80;
server_name $http_domain $ws_domain $hostname $ip;
return 301 https://\$host\$request_uri;
}
# HTTPS server - HTTP RPC API
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name $http_domain $hostname $ip;
ssl_certificate /etc/nginx/ssl/rpc.crt;
ssl_certificate_key /etc/nginx/ssl/rpc.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\" always;
add_header X-Frame-Options \"SAMEORIGIN\" always;
add_header X-Content-Type-Options \"nosniff\" always;
add_header X-XSS-Protection \"1; mode=block\" always;
access_log /var/log/nginx/rpc-http-access.log;
error_log /var/log/nginx/rpc-http-error.log;
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
send_timeout 300s;
location / {
proxy_pass http://127.0.0.1:8545;
proxy_http_version 1.1;
proxy_set_header Host localhost;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_set_header Connection \"\";
proxy_buffering off;
proxy_request_buffering off;
}
location /health {
access_log off;
return 200 \"healthy\\n\";
add_header Content-Type text/plain;
}
}
# HTTPS server - WebSocket RPC API
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name $ws_domain;
ssl_certificate /etc/nginx/ssl/rpc.crt;
ssl_certificate_key /etc/nginx/ssl/rpc.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\" always;
add_header X-Frame-Options \"SAMEORIGIN\" always;
add_header X-Content-Type-Options \"nosniff\" always;
add_header X-XSS-Protection \"1; mode=block\" always;
access_log /var/log/nginx/rpc-ws-access.log;
error_log /var/log/nginx/rpc-ws-error.log;
location / {
proxy_pass http://127.0.0.1:8546;
proxy_http_version 1.1;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection \"upgrade\";
proxy_set_header Host localhost;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_read_timeout 86400;
proxy_send_timeout 86400;
}
location /health {
access_log off;
return 200 \"healthy\\n\";
add_header Content-Type text/plain;
}
}
"
echo "$config"
}
# Function to install Nginx on a container
install_nginx_on_container() {
local vmid=$1
local ip="${RPC_IPS[$vmid]}"
local hostname="${RPC_HOSTNAMES[$vmid]}"
local http_domain="${RPC_CONFIG[${vmid}_HTTP]}"
local ws_domain="${RPC_CONFIG[${vmid}_WS]}"
echo "=========================================="
info "Processing VMID $vmid ($hostname - $ip)"
info " HTTP Domain: $http_domain"
info " WS Domain: $ws_domain"
echo "=========================================="
# Check if container is running
STATUS=$(ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct status $vmid 2>/dev/null | awk '{print \$2}'" 2>/dev/null || echo "unknown")
if [[ "$STATUS" != "running" ]]; then
warn "Container $vmid is not running (status: $STATUS), skipping..."
return 1
fi
# Install Nginx
info "Installing Nginx on VMID $vmid..."
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $vmid -- bash -c '
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq
apt-get install -y -qq nginx openssl
'" || {
error "Failed to install Nginx on VMID $vmid"
return 1
}
info "✓ Nginx installed"
# Generate SSL certificate
info "Generating SSL certificate for $http_domain..."
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $vmid -- bash -c '
mkdir -p /etc/nginx/ssl
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \\
-keyout /etc/nginx/ssl/rpc.key \\
-out /etc/nginx/ssl/rpc.crt \\
-subj \"/CN=$http_domain/O=RPC Node/C=US\" 2>/dev/null
chmod 600 /etc/nginx/ssl/rpc.key
chmod 644 /etc/nginx/ssl/rpc.crt
'" || {
error "Failed to generate SSL certificate"
return 1
}
info "✓ SSL certificate generated"
# Create and deploy Nginx configuration
info "Creating Nginx configuration..."
local nginx_config
nginx_config=$(create_nginx_config "$vmid")
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $vmid -- bash" <<EOF
cat > /etc/nginx/sites-available/rpc <<'NGINX_EOF'
$nginx_config
NGINX_EOF
# Enable the site
ln -sf /etc/nginx/sites-available/rpc /etc/nginx/sites-enabled/
rm -f /etc/nginx/sites-enabled/default
# Test configuration
nginx -t
# Reload Nginx
systemctl enable nginx
systemctl restart nginx
EOF
if [[ $? -eq 0 ]]; then
info "✓ Nginx configured and started"
else
error "Failed to configure Nginx"
return 1
fi
# Verify
if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $vmid -- systemctl is-active nginx >/dev/null 2>&1"; then
info "✓ Nginx service is active"
else
error "Nginx service is not active"
return 1
fi
if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $vmid -- ss -tuln | grep -q ':443'"; then
info "✓ Port 443 is listening"
else
warn "Port 443 may not be listening"
fi
echo ""
return 0
}
# Install on each container
SUCCESS=0
FAILED=0
for vmid in "${VMIDS[@]}"; do
if install_nginx_on_container "$vmid"; then
SUCCESS=$((SUCCESS + 1))
else
FAILED=$((FAILED + 1))
fi
done
# Summary
echo "=========================================="
info "Installation Summary:"
echo " Success: $SUCCESS"
echo " Failed: $FAILED"
echo " Total: ${#VMIDS[@]}"
echo "=========================================="
if [[ $FAILED -gt 0 ]]; then
exit 1
fi
info "Nginx installation complete!"
echo ""
info "Domain mappings configured:"
echo " rpc-http-pub.d-bis.org → VMID 2501 (HTTP RPC on port 443)"
echo " rpc-ws-pub.d-bis.org → VMID 2501 (WebSocket RPC on port 443)"
echo " rpc-http-prv.d-bis.org → VMID 2502 (HTTP RPC on port 443)"
echo " rpc-ws-prv.d-bis.org → VMID 2502 (WebSocket RPC on port 443)"
echo ""
info "Next steps:"
echo " 1. Configure DNS records in Cloudflare to point to container IPs"
echo " 2. Replace self-signed certificates with Let's Encrypt if needed"
echo " 3. Test: curl -k https://rpc-http-pub.d-bis.org/health"
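# Optional WebSocket handshake smoke test (illustrative header values; a
# working upgrade path answers "HTTP/1.1 101 Switching Protocols"):
#   curl -k -i -N https://rpc-ws-pub.d-bis.org/ \
#     -H "Connection: Upgrade" -H "Upgrade: websocket" \
#     -H "Sec-WebSocket-Version: 13" \
#     -H "Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw=="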

scripts/install-nginx-rpc.sh Executable file

@@ -0,0 +1,320 @@
#!/usr/bin/env bash
# Install and configure Nginx on RPC containers (2500-2502) with SSL on port 443
# Usage: ./install-nginx-rpc.sh [vmid1] [vmid2] [vmid3]
# If no VMIDs provided, defaults to 2500, 2501, 2502
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
info() { echo -e "${GREEN}[INFO]${NC} $1"; }
warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Get VMIDs (default to 2500-2502)
if [[ $# -eq 0 ]]; then
VMIDS=(2500 2501 2502)
else
VMIDS=("$@")
fi
# RPC container mapping
declare -A RPC_IPS=(
[2500]="192.168.11.250"
[2501]="192.168.11.251"
[2502]="192.168.11.252"
)
declare -A RPC_HOSTNAMES=(
[2500]="besu-rpc-1"
[2501]="besu-rpc-2"
[2502]="besu-rpc-3"
)
# Domain mappings for each container
declare -A RPC_HTTP_DOMAINS=(
[2501]="rpc-http-pub.d-bis.org"
[2502]="rpc-http-prv.d-bis.org"
)
declare -A RPC_WS_DOMAINS=(
[2501]="rpc-ws-pub.d-bis.org"
[2502]="rpc-ws-prv.d-bis.org"
)
info "Installing Nginx on RPC containers..."
info "Proxmox Host: $PROXMOX_HOST"
info "Containers: ${VMIDS[*]}"
echo ""
# Function to install Nginx on a container
install_nginx_on_container() {
local vmid=$1
local ip="${RPC_IPS[$vmid]}"
local hostname="${RPC_HOSTNAMES[$vmid]}"
local http_domain="${RPC_HTTP_DOMAINS[$vmid]:-}"
local ws_domain="${RPC_WS_DOMAINS[$vmid]:-}"
echo "=========================================="
info "Processing VMID $vmid ($hostname - $ip)"
echo "=========================================="
# Check if container is running
STATUS=$(ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct status $vmid 2>/dev/null | awk '{print \$2}'" 2>/dev/null || echo "unknown")
if [[ "$STATUS" != "running" ]]; then
warn "Container $vmid is not running (status: $STATUS), skipping..."
return 1
fi
# Check if Nginx is already installed
if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $vmid -- which nginx >/dev/null 2>&1"; then
warn "Nginx is already installed on VMID $vmid"
read -p "Reinstall/update configuration? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
info "Skipping VMID $vmid"
return 0
fi
fi
# Install Nginx
info "Installing Nginx on VMID $vmid..."
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $vmid -- bash -c '
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq
apt-get install -y -qq nginx openssl
'" || {
error "Failed to install Nginx on VMID $vmid"
return 1
}
info "✓ Nginx installed"
# Generate self-signed SSL certificate (or use Let's Encrypt later)
info "Generating SSL certificate..."
# Use first domain if available, otherwise use hostname
local cert_cn="${http_domain:-$hostname}"
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $vmid -- bash -c '
mkdir -p /etc/nginx/ssl
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \\
-keyout /etc/nginx/ssl/rpc.key \\
-out /etc/nginx/ssl/rpc.crt \\
-subj \"/CN=$cert_cn/O=RPC Node/C=US\" 2>/dev/null
chmod 600 /etc/nginx/ssl/rpc.key
chmod 644 /etc/nginx/ssl/rpc.crt
'" || {
error "Failed to generate SSL certificate"
return 1
}
info "✓ SSL certificate generated for $cert_cn"
# Create Nginx configuration
info "Creating Nginx configuration..."
# Build server_name list
local server_names="$hostname $ip"
if [[ -n "$http_domain" ]]; then
server_names="$server_names $http_domain"
fi
if [[ -n "$ws_domain" ]]; then
server_names="$server_names $ws_domain"
fi
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $vmid -- bash" <<EOF
cat > /etc/nginx/sites-available/rpc <<'NGINX_CONFIG'
# HTTP to HTTPS redirect
server {
listen 80;
listen [::]:80;
server_name $server_names;
# Redirect all HTTP to HTTPS
return 301 https://\$host\$request_uri;
}
# HTTPS server - HTTP RPC API (for rpc-http-*.d-bis.org)
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name ${http_domain:-$hostname} $hostname $ip;
# SSL configuration
ssl_certificate /etc/nginx/ssl/rpc.crt;
ssl_certificate_key /etc/nginx/ssl/rpc.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
# Logging
access_log /var/log/nginx/rpc-http-access.log;
error_log /var/log/nginx/rpc-http-error.log;
# Increase timeouts for RPC calls
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
send_timeout 300s;
# HTTP RPC endpoint
location / {
proxy_pass http://127.0.0.1:8545;
proxy_http_version 1.1;
# Headers
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_set_header Connection "";
# Buffer settings
proxy_buffering off;
proxy_request_buffering off;
}
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
}
# HTTPS server - WebSocket RPC API (for rpc-ws-*.d-bis.org)
$(if [[ -n "$ws_domain" ]]; then cat <<WS_BLOCK
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name $ws_domain;
# SSL configuration
ssl_certificate /etc/nginx/ssl/rpc.crt;
ssl_certificate_key /etc/nginx/ssl/rpc.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
# Logging
access_log /var/log/nginx/rpc-ws-access.log;
error_log /var/log/nginx/rpc-ws-error.log;
# WebSocket RPC endpoint
location / {
proxy_pass http://127.0.0.1:8546;
proxy_http_version 1.1;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_read_timeout 86400;
proxy_send_timeout 86400;
}
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
}
WS_BLOCK
fi)
NGINX_CONFIG
# Enable the site
ln -sf /etc/nginx/sites-available/rpc /etc/nginx/sites-enabled/
rm -f /etc/nginx/sites-enabled/default
# Test configuration
nginx -t
# Reload Nginx
systemctl enable nginx
systemctl restart nginx
EOF
if [[ $? -eq 0 ]]; then
info "✓ Nginx configured and started"
else
error "Failed to configure Nginx"
return 1
fi
# Verify Nginx is running
if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $vmid -- systemctl is-active nginx >/dev/null 2>&1"; then
info "✓ Nginx service is active"
else
error "Nginx service is not active"
return 1
fi
# Check if port 443 is listening
if ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $vmid -- ss -tuln | grep -q ':443'"; then
info "✓ Port 443 is listening"
else
warn "Port 443 may not be listening"
fi
echo ""
return 0
}
# Install on each container
SUCCESS=0
FAILED=0
for vmid in "${VMIDS[@]}"; do
if install_nginx_on_container "$vmid"; then
SUCCESS=$((SUCCESS + 1))
else
FAILED=$((FAILED + 1))
fi
echo ""
done
# Summary
echo "=========================================="
info "Installation Summary:"
echo " Success: $SUCCESS"
echo " Failed: $FAILED"
echo " Total: ${#VMIDS[@]}"
echo "=========================================="
if [[ $FAILED -gt 0 ]]; then
exit 1
fi
info "Nginx installation complete!"
echo ""
info "Next steps:"
echo " 1. Test HTTPS: curl -k https://<container-ip>:443"
echo " 2. Test RPC: curl -k -X POST https://<container-ip>:443 -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}'"
echo " 3. Replace self-signed certificate with Let's Encrypt if needed"
echo " 4. Configure DNS records to point to container IPs"

scripts/list-containers.sh Executable file

@@ -0,0 +1,46 @@
#!/bin/bash
# Quick container list script
# Simple list of all containers on Proxmox host
set +e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/load-env.sh"
load_env_file
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
PROXMOX_PORT="${PROXMOX_PORT:-8006}"
# Get first node
NODES_RESPONSE=$(curl -k -s -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json/nodes" 2>&1)
FIRST_NODE=$(echo "$NODES_RESPONSE" | python3 -c "import sys, json; print(json.load(sys.stdin)['data'][0]['node'])" 2>/dev/null)
if [[ -z "$FIRST_NODE" ]]; then
echo "Error: could not determine Proxmox node (check API credentials)"
exit 1
fi
# Get containers
CONTAINERS_RESPONSE=$(curl -k -s -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json/nodes/${FIRST_NODE}/lxc" 2>&1)
echo "Containers on ${FIRST_NODE}:"
echo ""
echo "$CONTAINERS_RESPONSE" | python3 -c "
import sys, json
try:
data = json.load(sys.stdin)['data']
if len(data) == 0:
print('No containers found')
else:
print(f"{'VMID':<6} {'Name':<30} {'Status':<10}")
print('-' * 50)
for container in sorted(data, key=lambda x: x['vmid']):
vmid = container['vmid']
name = container.get('name', 'N/A')
status = container.get('status', 'unknown')
print(f'{vmid:<6} {name:<30} {status:<10}')
except:
print('Failed to parse container data')
" 2>/dev/null
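# If jq is installed, the same table can be produced without Python
# (equivalent sketch; field names match the Proxmox LXC API response):
#   echo "$CONTAINERS_RESPONSE" | jq -r \
#     '.data | sort_by(.vmid)[] | "\(.vmid)\t\(.name // "N/A")\t\(.status // "unknown")"'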

scripts/list-proxmox-ips.sh Executable file

@@ -0,0 +1,419 @@
#!/usr/bin/env bash
# List all private IP address assignments in Proxmox
# Works both via API (remote) and direct commands (on Proxmox host)
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Try to load environment if available
if [[ -f "$SCRIPT_DIR/load-env.sh" ]]; then
source "$SCRIPT_DIR/load-env.sh"
load_env_file 2>/dev/null || true
fi
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
PROXMOX_PORT="${PROXMOX_PORT:-8006}"
# Check if running on Proxmox host
ON_PROXMOX_HOST=false
if command -v pct >/dev/null 2>&1 && command -v qm >/dev/null 2>&1; then
ON_PROXMOX_HOST=true
fi
# Function to check if IP is private
is_private_ip() {
local ip="$1"
# Private IP ranges:
# 10.0.0.0/8
# 172.16.0.0/12
# 192.168.0.0/16
if [[ "$ip" =~ ^10\. ]] || \
[[ "$ip" =~ ^172\.(1[6-9]|2[0-9]|3[0-1])\. ]] || \
[[ "$ip" =~ ^192\.168\. ]]; then
return 0
fi
return 1
}
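# Examples (matches the RFC 1918 ranges above, nothing else):
#   is_private_ip 10.0.0.5        -> 0 (private)
#   is_private_ip 172.31.255.1    -> 0 (private, inside 172.16.0.0/12)
#   is_private_ip 172.32.0.1      -> 1 (public, outside 172.16.0.0/12)
#   is_private_ip 192.168.11.250  -> 0 (private)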
# Function to get LXC container IP from config
get_lxc_config_ip() {
local vmid="$1"
local config_file="/etc/pve/lxc/${vmid}.conf"
if [[ -f "$config_file" ]]; then
# Extract IP from net0: or net1: lines
local ip=$(grep -E "^net[0-9]+:" "$config_file" 2>/dev/null | \
grep -oE 'ip=([0-9]{1,3}\.){3}[0-9]{1,3}' | \
cut -d'=' -f2 | head -1)
if [[ -n "$ip" ]]; then
# Remove CIDR notation if present
echo "${ip%/*}"
fi
fi
}
# Function to get LXC container actual IP (if running)
get_lxc_actual_ip() {
local vmid="$1"
if ! pct status "$vmid" 2>/dev/null | grep -q "status: running"; then
return
fi
# Try to get IP from container
local ip=$(timeout 3s pct exec "$vmid" -- ip -4 addr show 2>/dev/null | \
grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]+' | \
grep -v '127.0.0.1' | head -1 | cut -d'/' -f1)
if [[ -n "$ip" ]]; then
echo "$ip"
fi
}
# Function to get VM IP from config (cloud-init or static)
get_vm_config_ip() {
local vmid="$1"
local config_file="/etc/pve/qemu-server/${vmid}.conf"
if [[ -f "$config_file" ]]; then
# Check for cloud-init IP configuration
local ip=$(grep -E "^ipconfig[0-9]+:" "$config_file" 2>/dev/null | \
grep -oE 'ip=([0-9]{1,3}\.){3}[0-9]{1,3}' | \
cut -d'=' -f2 | head -1)
if [[ -n "$ip" ]]; then
echo "${ip%/*}"
fi
fi
}
# Function to get VM actual IP via guest agent
get_vm_actual_ip() {
local vmid="$1"
if ! qm status "$vmid" 2>/dev/null | grep -q "status: running"; then
return
fi
# Try guest agent
local ip=$(timeout 5s qm guest cmd "$vmid" network-get-interfaces 2>/dev/null | \
grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | \
grep -v "127.0.0.1" | head -1)
if [[ -n "$ip" ]]; then
echo "$ip"
fi
}
# Function to get VM IP via ARP table
get_vm_arp_ip() {
local vmid="$1"
local config_file="/etc/pve/qemu-server/${vmid}.conf"
if [[ ! -f "$config_file" ]]; then
return
fi
# Get MAC address from config
local mac=$(grep -E "^net[0-9]+:" "$config_file" 2>/dev/null | \
grep -oE "([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}" | head -1)
if [[ -n "$mac" ]]; then
local ip=$(ip neighbor show 2>/dev/null | grep -i "$mac" | \
grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | head -1)
if [[ -n "$ip" ]]; then
echo "$ip"
fi
fi
}
# Function to extract IP from config content
extract_ip_from_config() {
local config_content="$1"
local type="$2" # "lxc" or "qemu"
if [[ "$type" == "lxc" ]]; then
# Extract IP from net0: or net1: lines for LXC
echo "$config_content" | grep -E "^net[0-9]+:" | \
grep -oE 'ip=([0-9]{1,3}\.){3}[0-9]{1,3}' | \
cut -d'=' -f2 | head -1 | sed 's|/.*||'
else
# Extract IP from ipconfig0: or ipconfig1: lines for VMs
echo "$config_content" | grep -E "^ipconfig[0-9]+:" | \
grep -oE 'ip=([0-9]{1,3}\.){3}[0-9]{1,3}' | \
cut -d'=' -f2 | head -1 | sed 's|/.*||'
fi
}
# Function to get config IP via API
get_config_ip_api() {
local node="$1"
local vmid="$2"
local type="$3" # "qemu" or "lxc"
local config_response=$(curl -k -s -m 5 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json/nodes/${node}/${type}/${vmid}/config" 2>/dev/null)
if [[ -z "$config_response" ]]; then
return
fi
# Extract IP from JSON config
if [[ "$type" == "lxc" ]]; then
# For LXC: extract from net0, net1, etc. fields
echo "$config_response" | python3 -c "
import sys, json
try:
data = json.load(sys.stdin).get('data', {})
# Check net0, net1, net2, etc.
for i in range(10):
net_key = f'net{i}'
if net_key in data:
net_value = data[net_key]
if 'ip=' in net_value:
# Extract IP from ip=192.168.11.100/24 format
import re
match = re.search(r'ip=([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})', net_value)
if match:
print(match.group(1))
break
except:
pass
" 2>/dev/null
else
# For VM: extract from ipconfig0, ipconfig1, etc. fields
echo "$config_response" | python3 -c "
import sys, json
try:
data = json.load(sys.stdin).get('data', {})
# Check ipconfig0, ipconfig1, etc.
for i in range(10):
ipconfig_key = f'ipconfig{i}'
if ipconfig_key in data:
ipconfig_value = data[ipconfig_key]
if 'ip=' in ipconfig_value:
# Extract IP from ip=192.168.11.100/24 format
import re
match = re.search(r'ip=([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})', ipconfig_value)
if match:
print(match.group(1))
break
except:
pass
" 2>/dev/null
fi
}
# Function to list IPs via API
list_ips_via_api() {
local node="$1"
echo "Fetching data from Proxmox API..."
echo ""
# Get all VMs
local vms_response=$(curl -k -s -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json/nodes/${node}/qemu" 2>/dev/null)
# Get all LXC containers
local lxc_response=$(curl -k -s -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json/nodes/${node}/lxc" 2>/dev/null)
echo "VMID Type Name Status Config IP Actual IP"
echo "--------------------------------------------------------------------------------"
# Process VMs - store in temp file to avoid subshell issues
local temp_file=$(mktemp)
echo "$vms_response" | python3 -c "
import sys, json
try:
data = json.load(sys.stdin)['data']
for vm in sorted(data, key=lambda x: x['vmid']):
vmid = vm['vmid']
name = vm.get('name', 'N/A')
status = vm.get('status', 'unknown')
print(f'{vmid}|{name}|{status}')
except:
pass
" 2>/dev/null > "$temp_file"
while IFS='|' read -r vmid name status; do
[[ -z "$vmid" ]] && continue
config_ip=$(get_config_ip_api "$node" "$vmid" "qemu")
if [[ -n "$config_ip" ]] && is_private_ip "$config_ip"; then
printf "%-7s %-6s %-23s %-10s %-15s %-15s\n" \
"$vmid" "VM" "${name:0:23}" "$status" "$config_ip" "N/A (remote)"
elif [[ -z "$config_ip" ]]; then
# Show even without config IP if it's a known VM
printf "%-7s %-6s %-23s %-10s %-15s %-15s\n" \
"$vmid" "VM" "${name:0:23}" "$status" "N/A" "N/A (remote)"
fi
done < "$temp_file"
rm -f "$temp_file"
# Process LXC containers
echo "$lxc_response" | python3 -c "
import sys, json
try:
data = json.load(sys.stdin)['data']
for lxc in sorted(data, key=lambda x: x['vmid']):
vmid = lxc['vmid']
name = lxc.get('name', 'N/A')
status = lxc.get('status', 'unknown')
print(f'{vmid}|{name}|{status}')
except:
pass
" 2>/dev/null > "$temp_file"
while IFS='|' read -r vmid name status; do
[[ -z "$vmid" ]] && continue
config_ip=$(get_config_ip_api "$node" "$vmid" "lxc")
if [[ -n "$config_ip" ]] && is_private_ip "$config_ip"; then
printf "%-7s %-6s %-23s %-10s %-15s %-15s\n" \
"$vmid" "LXC" "${name:0:23}" "$status" "$config_ip" "N/A (remote)"
elif [[ -z "$config_ip" ]]; then
# Show even without config IP if it's a known container
printf "%-7s %-6s %-23s %-10s %-15s %-15s\n" \
"$vmid" "LXC" "${name:0:23}" "$status" "N/A" "N/A (remote)"
fi
done < "$temp_file"
rm -f "$temp_file"
echo ""
echo "Note: Actual IP addresses (runtime) require running on Proxmox host."
echo " Config IP shows static IP configuration from Proxmox config files."
}
# Function to list IPs directly (on Proxmox host)
list_ips_direct() {
echo "Listing private IP address assignments on Proxmox host..."
echo ""
printf "%-7s %-6s %-23s %-10s %-15s %-15s\n" "VMID" "Type" "Name" "Status" "Config IP" "Actual IP"
echo "--------------------------------------------------------------------------------"
# List all LXC containers
if command -v pct >/dev/null 2>&1; then
while IFS= read -r line; do
[[ -z "$line" ]] && continue
local vmid=$(echo "$line" | awk '{print $1}')
local status=$(echo "$line" | awk '{print $2}')
# Get container name
local name="N/A"
local config_file="/etc/pve/lxc/${vmid}.conf"
if [[ -f "$config_file" ]]; then
name=$(grep "^hostname:" "$config_file" 2>/dev/null | cut -d' ' -f2 || echo "N/A")
fi
# Get IPs
local config_ip=$(get_lxc_config_ip "$vmid")
local actual_ip=""
if [[ "$status" == "running" ]]; then
actual_ip=$(get_lxc_actual_ip "$vmid")
fi
# Only show private IPs
local display_config_ip=""
local display_actual_ip=""
if [[ -n "$config_ip" ]] && is_private_ip "$config_ip"; then
display_config_ip="$config_ip"
fi
if [[ -n "$actual_ip" ]] && is_private_ip "$actual_ip"; then
display_actual_ip="$actual_ip"
fi
# Only print if we have at least one private IP
if [[ -n "$display_config_ip" ]] || [[ -n "$display_actual_ip" ]]; then
printf "%-7s %-6s %-23s %-10s %-15s %-15s\n" \
"$vmid" "LXC" "${name:0:23}" "$status" \
"${display_config_ip:-N/A}" "${display_actual_ip:-N/A}"
fi
done < <(pct list 2>/dev/null | tail -n +2)
fi
# List all VMs
if command -v qm >/dev/null 2>&1; then
while IFS= read -r line; do
[[ -z "$line" ]] && continue
local vmid=$(echo "$line" | awk '{print $1}')
local status=$(echo "$line" | awk '{print $2}')
# Get VM name
local name="N/A"
local config_file="/etc/pve/qemu-server/${vmid}.conf"
if [[ -f "$config_file" ]]; then
name=$(grep "^name:" "$config_file" 2>/dev/null | cut -d' ' -f2 || echo "N/A")
fi
# Get IPs
local config_ip=$(get_vm_config_ip "$vmid")
local actual_ip=""
if [[ "$status" == "running" ]]; then
actual_ip=$(get_vm_actual_ip "$vmid")
# Fallback to ARP if guest agent fails
if [[ -z "$actual_ip" ]]; then
actual_ip=$(get_vm_arp_ip "$vmid")
fi
fi
# Only show private IPs
local display_config_ip=""
local display_actual_ip=""
if [[ -n "$config_ip" ]] && is_private_ip "$config_ip"; then
display_config_ip="$config_ip"
fi
if [[ -n "$actual_ip" ]] && is_private_ip "$actual_ip"; then
display_actual_ip="$actual_ip"
fi
# Only print if we have at least one private IP
if [[ -n "$display_config_ip" ]] || [[ -n "$display_actual_ip" ]]; then
printf "%-7s %-6s %-23s %-10s %-15s %-15s\n" \
"$vmid" "VM" "${name:0:23}" "$status" \
"${display_config_ip:-N/A}" "${display_actual_ip:-N/A}"
fi
done < <(qm list 2>/dev/null | tail -n +2)
fi
}
# Main execution
if [[ "$ON_PROXMOX_HOST" == "true" ]]; then
list_ips_direct
else
# Try API method
if [[ -n "${PROXMOX_USER:-}" ]] && [[ -n "${PROXMOX_TOKEN_NAME:-}" ]] && [[ -n "${PROXMOX_TOKEN_VALUE:-}" ]]; then
# Get first node
nodes_response=$(curl -k -s -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json/nodes" 2>/dev/null)
first_node=$(echo "$nodes_response" | python3 -c "import sys, json; print(json.load(sys.stdin)['data'][0]['node'])" 2>/dev/null)
if [[ -n "$first_node" ]]; then
list_ips_via_api "$first_node"
else
echo "Error: Could not connect to Proxmox API or get node list"
echo "Please run this script on the Proxmox host for full IP information"
exit 1
fi
else
echo "Error: Not on Proxmox host and API credentials not configured"
echo "Please either:"
echo " 1. Run this script on the Proxmox host, or"
echo " 2. Configure PROXMOX_USER, PROXMOX_TOKEN_NAME, and PROXMOX_TOKEN_VALUE"
exit 1
fi
fi

scripts/load-env.sh Executable file

@@ -0,0 +1,42 @@
#!/bin/bash
# Standardized .env loader function
# This ensures all scripts use the same ~/.env file consistently
# Load environment variables from ~/.env file
# Usage: source load-env.sh or . load-env.sh
load_env_file() {
local env_file="${HOME}/.env"
if [[ -f "$env_file" ]]; then
# Source PROXMOX_* variables from ~/.env
set -a
source <(grep -E "^PROXMOX_" "$env_file" 2>/dev/null | sed 's/^/export /' || true)
set +a
# Ensure PROXMOX_TOKEN_SECRET is set from PROXMOX_TOKEN_VALUE for backwards compatibility
if [[ -z "${PROXMOX_TOKEN_SECRET:-}" ]] && [[ -n "${PROXMOX_TOKEN_VALUE:-}" ]]; then
export PROXMOX_TOKEN_SECRET="${PROXMOX_TOKEN_VALUE}"
fi
return 0
else
return 1
fi
}
# Run a self-check when executed directly (sourcing callers call load_env_file themselves)
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
# Script is being executed directly
if load_env_file; then
echo "✅ Loaded environment from ~/.env"
echo "PROXMOX_HOST=${PROXMOX_HOST:-not set}"
echo "PROXMOX_USER=${PROXMOX_USER:-not set}"
echo "PROXMOX_TOKEN_NAME=${PROXMOX_TOKEN_NAME:-not set}"
echo "PROXMOX_TOKEN_VALUE=${PROXMOX_TOKEN_VALUE:+***configured***}"
else
echo "❌ ~/.env file not found"
exit 1
fi
fi
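# Expected ~/.env layout (placeholder values; only PROXMOX_* lines are read):
#   PROXMOX_HOST=192.168.11.10
#   PROXMOX_PORT=8006
#   PROXMOX_USER=root@pam
#   PROXMOX_TOKEN_NAME=automation
#   PROXMOX_TOKEN_VALUE=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx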


@@ -0,0 +1,34 @@
#!/bin/bash
# Create Snapshots Before Making Changes
# Usage: ./snapshot-before-change.sh <VMID> [snapshot-name]
set -euo pipefail
VMID="${1:-}"
SNAPSHOT_NAME="${2:-pre-change-$(date +%Y%m%d-%H%M%S)}"
if [[ -z "$VMID" ]]; then
echo "Usage: $0 <VMID> [snapshot-name]"
echo "Example: $0 106 pre-upgrade-20241219"
exit 1
fi
if ! command -v pct >/dev/null 2>&1; then
echo "Error: pct command not found. This script must be run on Proxmox host."
exit 1
fi
if [[ $EUID -ne 0 ]]; then
echo "Error: This script must be run as root"
exit 1
fi
echo "Creating snapshot '$SNAPSHOT_NAME' for container $VMID..."
if pct snapshot "$VMID" "$SNAPSHOT_NAME"; then
echo "✓ Snapshot created successfully"
echo " To restore: pct rollback $VMID $SNAPSHOT_NAME"
echo " To list: pct listsnapshot $VMID"
else
echo "✗ Failed to create snapshot"
exit 1
fi
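# Example: snapshot container 2501 before an nginx change, then roll back
#   ./snapshot-before-change.sh 2501 pre-nginx
#   pct rollback 2501 pre-nginx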


@@ -0,0 +1,31 @@
# Prometheus Configuration for Besu Metrics
# Add this to your prometheus.yml scrape_configs section
scrape_configs:
- job_name: 'besu'
scrape_interval: 15s
static_configs:
# Validators (VMID 1000-1004) - metrics enabled but may not expose RPC
- targets:
- '192.168.11.100:9545' # validator-1 (DHCP assigned)
- '192.168.11.101:9545' # validator-2 (DHCP assigned)
- '192.168.11.102:9545' # validator-3 (DHCP assigned)
- '192.168.11.103:9545' # validator-4 (DHCP assigned)
- '192.168.11.104:9545' # validator-5 (DHCP assigned)
labels:
role: 'validator'
# Sentries (VMID 1500-1503)
- targets:
- '192.168.11.150:9545' # sentry-1 (DHCP assigned)
- '192.168.11.151:9545' # sentry-2 (DHCP assigned)
- '192.168.11.152:9545' # sentry-3 (DHCP assigned)
- '192.168.11.153:9545' # sentry-4 (DHCP assigned)
labels:
role: 'sentry'
# RPC Nodes (VMID 2500-2502)
- targets:
- '192.168.11.250:9545' # rpc-1 (DHCP assigned)
- '192.168.11.251:9545' # rpc-2 (DHCP assigned)
- '192.168.11.252:9545' # rpc-3 (DHCP assigned)
labels:
role: 'rpc'
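The target lists above follow a fixed VMID-to-IP convention (validators .100-.104, sentries .150-.153, RPC nodes .250-.252). A small sketch, assuming that convention holds, that regenerates a target block so new nodes are not added by hand:

```shell
# Emit Prometheus static_config target lines for a host range (IP scheme assumed from the comments above)
emit_targets() {
  local prefix="$1" start="$2" end="$3"
  for last in $(seq "$start" "$end"); do
    echo "      - '${prefix}.${last}:9545'"
  done
}

emit_targets 192.168.11 250 252
```

Piping the output into the `targets:` section keeps the indentation consistent with the YAML above.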

View File

@@ -0,0 +1,51 @@
#!/bin/bash
# Setup Health Check Cron Job
# Installs cron jobs to monitor Besu node health
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
if ! command -v pct >/dev/null 2>&1; then
echo "Error: pct command not found. This script must be run on the Proxmox host."
exit 1
fi
LOG_DIR="$PROJECT_ROOT/logs/health-checks"
mkdir -p "$LOG_DIR"
# Create cron job script
cat > "$PROJECT_ROOT/scripts/monitoring/health-check-cron-wrapper.sh" << 'CRONSCRIPT'
#!/bin/bash
# Health check wrapper for cron
# Checks all Besu nodes and logs results
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
LOG_DIR="$PROJECT_ROOT/logs/health-checks"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
for vmid in 1000 1001 1002 1003 1004 1500 1501 1502 1503 2500 2501 2502; do
if [[ -f "$PROJECT_ROOT/scripts/health/check-node-health.sh" ]]; then
"$PROJECT_ROOT/scripts/health/check-node-health.sh" "$vmid" >> "$LOG_DIR/health-$vmid-$TIMESTAMP.log" 2>&1
fi
done
# Cleanup old logs (keep 7 days)
find "$LOG_DIR" -name "health-*.log" -mtime +7 -delete 2>/dev/null || true
CRONSCRIPT
chmod +x "$PROJECT_ROOT/scripts/monitoring/health-check-cron-wrapper.sh"
# Add to crontab (every 5 minutes)
CRON_JOB="*/5 * * * * $PROJECT_ROOT/scripts/monitoring/health-check-cron-wrapper.sh"
if crontab -l 2>/dev/null | grep -q "health-check-cron-wrapper.sh"; then
echo "Cron job already exists"
else
(crontab -l 2>/dev/null; echo "$CRON_JOB") | crontab -
echo "✓ Health check cron job installed (runs every 5 minutes)"
echo " Logs: $LOG_DIR/"
echo " To remove: crontab -e (then delete the line)"
fi

View File

@@ -0,0 +1,73 @@
#!/bin/bash
# Simple Alert Script
# Sends alerts when Besu services are down
# Can be extended to send email, Slack, etc.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Configuration
ALERT_EMAIL="${ALERT_EMAIL:-}"
ALERT_LOG="$PROJECT_ROOT/logs/alerts.log"
ALERT_SENT_LOG="$PROJECT_ROOT/logs/alerts-sent.log"
# Ensure log directory exists
mkdir -p "$(dirname "$ALERT_LOG")"
log_alert() {
local message="$1"
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
echo "[$timestamp] ALERT: $message" >> "$ALERT_LOG"
# Check if we've already sent this alert (avoid spam)
local alert_key=$(echo "$message" | md5sum | cut -d' ' -f1)
if ! grep -q "$alert_key" "$ALERT_SENT_LOG" 2>/dev/null; then
echo "[$timestamp] $alert_key" >> "$ALERT_SENT_LOG"
# Send email if configured
if [[ -n "$ALERT_EMAIL" ]] && command -v mail >/dev/null 2>&1; then
echo "$message" | mail -s "Besu Alert: Container Issue" "$ALERT_EMAIL" 2>/dev/null || true
fi
# Log to console
echo "ALERT: $message"
fi
}
# Check all containers
for vmid in 1000 1001 1002 1003 1004 1500 1501 1502 1503 2500 2501 2502; do
# Check if container is running
if ! pct status "$vmid" 2>/dev/null | grep -q running; then
log_alert "Container $vmid is not running"
continue
fi
# Determine service name
service_name=""
if [[ $vmid -ge 1000 ]] && [[ $vmid -le 1004 ]]; then
service_name="besu-validator"
elif [[ $vmid -ge 1500 ]] && [[ $vmid -le 1503 ]]; then
service_name="besu-sentry"
elif [[ $vmid -ge 2500 ]] && [[ $vmid -le 2502 ]]; then
service_name="besu-rpc"
fi
# Check service status
if [[ -n "$service_name" ]]; then
if ! pct exec "$vmid" -- systemctl is-active --quiet "$service_name" 2>/dev/null; then
log_alert "Service $service_name on container $vmid is not running"
fi
fi
done
# Check disk space (alert if < 10% free)
for vmid in 1000 1001 1002 1003 1004 1500 1501 1502 1503 2500 2501 2502; do
if pct status "$vmid" 2>/dev/null | grep -q running; then
disk_usage=$(pct exec "$vmid" -- df -h / 2>/dev/null | awk 'NR==2 {print $5}' | tr -d '%' || echo "0")
if [[ "${disk_usage:-0}" -gt 90 ]]; then
log_alert "Container $vmid disk usage is at ${disk_usage}%"
fi
fi
done
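The `alert_key` mechanism above suppresses repeat alerts by hashing the message and recording the hash. A standalone sketch of that dedup idea (the helper names here are illustrative, not part of the script):

```shell
# Hash an alert message so identical alerts map to the same key (mirrors alert_key above)
dedupe_key() {
  printf '%s' "$1" | md5sum | cut -d' ' -f1
}

SENT_LOG=$(mktemp)

# Fire an alert only if its key has not been seen before
maybe_alert() {
  local key
  key=$(dedupe_key "$1")
  if grep -q "$key" "$SENT_LOG" 2>/dev/null; then
    echo "suppressed: $1"
  else
    echo "$key" >> "$SENT_LOG"
    echo "ALERT: $1"
  fi
}

maybe_alert "Container 1000 is not running"   # first occurrence fires
maybe_alert "Container 1000 is not running"   # repeat is suppressed
```

Because the key is derived from the message text, a changed detail (e.g. a different VMID) produces a new key and still alerts.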

194
scripts/prune-historical-docs.sh Executable file
View File

@@ -0,0 +1,194 @@
#!/usr/bin/env bash
# Prune Historical and Obsolete Documentation
# Marks historical docs and removes truly obsolete files
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
DRY_RUN="${DRY_RUN:-true}"
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
--execute)
DRY_RUN=false
shift
;;
--help)
cat << EOF
Usage: $0 [OPTIONS]
Prune historical and obsolete documentation files.
Options:
--execute Actually delete files (default: dry-run)
--help Show this help
EOF
exit 0
;;
*)
log_error "Unknown option: $1"
exit 1
;;
esac
done
log_info "========================================="
log_info "Prune Historical Documentation"
log_info "========================================="
log_info "Mode: $([ "$DRY_RUN" == "true" ] && echo "DRY-RUN" || echo "EXECUTE")"
log_info ""
# Files to mark as historical (add header)
HISTORICAL_FILES=(
"docs/EXPECTED_CONTAINERS.md"
"docs/VMID_ALLOCATION.md"
"docs/VMID_REFERENCE_AUDIT.md"
"docs/VMID_UPDATE_COMPLETE.md"
)
# Files to delete (truly obsolete, superseded by current docs)
OBSOLETE_FILES=(
# Status/completion reports (old, superseded)
"docs/DEPLOYMENT_STATUS.md"
"docs/DEPLOYMENT_REVIEW_COMPLETE.md"
"docs/DEPLOYMENT_REVIEW.md"
"docs/DEPLOYMENT_TIME_ESTIMATE.md"
"docs/DEPLOYMENT_TIME_ESTIMATE_BESU_ONLY.md"
"docs/DEPLOYMENT_VALIDATION_REPORT.md"
"docs/DEPLOYED_VMIDS_LIST.md"
"docs/DEPLOYMENT_OPTIMIZATION_COMPLETE.md"
"docs/DEPLOYMENT_OPTIMIZATION_RECOMMENDATIONS.md"
"docs/DEPLOYMENT_RECOMMENDATIONS_STATUS.md"
"docs/DEPLOYMENT_CONFIGURATION_VERIFICATION.md"
# Old status/completion reports
"docs/NEXT_STEPS_COMPREHENSIVE.md"
"docs/NEXT_STEPS_COMPLETE.md"
"docs/NEXT_STEPS_SUMMARY.md"
"docs/COMPLETION_REPORT.md"
"docs/FIXES_APPLIED.md"
"docs/REVIEW_FIXES_APPLIED.md"
"docs/MINOR_OBSERVATIONS_FIXED.md"
"docs/NON_CRITICAL_FIXES_COMPLETE.md"
"docs/QUICK_WINS_COMPLETED.md"
"docs/TASK_COMPLETION_SUMMARY.md"
"docs/IMPLEMENTATION_COMPLETE.md"
"docs/PREREQUISITES_COMPLETE.md"
"docs/SETUP_COMPLETE.md"
"docs/SETUP_COMPLETE_FINAL.md"
"docs/SETUP_STATUS.md"
"docs/VALIDATION_STATUS.md"
"docs/CONFIGURATION_ALIGNMENT.md"
# Old review documents
"docs/REVIEW_INCONSISTENCIES_GAPS.md"
"docs/REVIEW_SUMMARY.md"
"docs/COMPREHENSIVE_REVIEW.md"
"docs/FINAL_REVIEW.md"
"docs/DETAILED_ISSUES_REVIEW.md"
"docs/RECOMMENDATIONS_OVERVIEW.md"
# OS template analysis (historical)
"docs/OS_TEMPLATE_CHANGE.md"
"docs/UBUNTU_DEBIAN_ANALYSIS.md"
"docs/OS_TEMPLATE_ANALYSIS.md"
# Old DHCP documentation (containers now use static IPs)
"docs/DHCP_IP_ADDRESSES.md"
# VMID consistency reports (superseded by current state)
"docs/VMID_CONSISTENCY_REPORT.md"
"docs/ACTIVE_DOCS_UPDATE_SUMMARY.md"
"docs/PROJECT_UPDATE_COMPLETE.md"
)
# Mark historical files with header
log_info "=== Marking Historical Documentation ==="
for file in "${HISTORICAL_FILES[@]}"; do
if [[ -f "$PROJECT_ROOT/$file" ]]; then
# Check if already marked
if grep -q "^<!-- HISTORICAL: This document contains historical" "$PROJECT_ROOT/$file" 2>/dev/null; then
log_info "Already marked: $file"
else
if [[ "$DRY_RUN" == "true" ]]; then
log_info "Would mark as historical: $file"
else
# Add header comment
if [[ "$file" == *.md ]]; then
# For markdown, add HTML comment at top
TEMP_FILE=$(mktemp)
{
echo "<!-- HISTORICAL: This document contains historical VMID ranges and is kept for reference only. Current ranges: Validators 1000-1004, Sentries 1500-1503, RPC 2500-2502 -->"
echo ""
cat "$PROJECT_ROOT/$file"
} > "$TEMP_FILE"
mv "$TEMP_FILE" "$PROJECT_ROOT/$file"
log_success "Marked as historical: $file"
fi
fi
fi
else
log_warn "File not found: $file"
fi
done
echo ""
# Delete obsolete files
log_info "=== Removing Obsolete Documentation ==="
DELETED_COUNT=0
NOT_FOUND_COUNT=0
for file in "${OBSOLETE_FILES[@]}"; do
if [[ -f "$PROJECT_ROOT/$file" ]]; then
if [[ "$DRY_RUN" == "true" ]]; then
log_info "Would delete: $file"
DELETED_COUNT=$((DELETED_COUNT + 1))
else
if rm -f "$PROJECT_ROOT/$file" 2>/dev/null; then
log_success "Deleted: $file"
DELETED_COUNT=$((DELETED_COUNT + 1))
else
log_error "Failed to delete: $file"
fi
fi
else
NOT_FOUND_COUNT=$((NOT_FOUND_COUNT + 1))
fi
done
echo ""
# Summary
log_info "========================================="
log_info "Summary"
log_info "========================================="
log_info "Historical files marked: ${#HISTORICAL_FILES[@]}"
log_info "Obsolete files $([ "$DRY_RUN" == "true" ] && echo "to delete" || echo "deleted"): $DELETED_COUNT"
log_info "Files not found: $NOT_FOUND_COUNT"
echo ""
if [[ "$DRY_RUN" == "true" ]]; then
log_warn "DRY-RUN mode: No files were modified"
log_info "Run with --execute to actually mark/delete files"
else
log_success "Cleanup completed"
fi

View File

@@ -0,0 +1,135 @@
#!/usr/bin/env bash
# Safely prune old/obsolete documentation and content
# Creates a backup before deletion and provides detailed logging
set -euo pipefail
PROJECT_ROOT="${PROJECT_ROOT:-/home/intlc/projects/proxmox}"
cd "$PROJECT_ROOT"
# Create backup directory
BACKUP_DIR="$PROJECT_ROOT/backup-old-docs-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BACKUP_DIR"
echo "=== Pruning Old Documentation ==="
echo "Backup directory: $BACKUP_DIR"
echo ""
# List of historical/obsolete documents to remove (with confirmation)
OBSOLETE_DOCS=(
# Historical VMID migration docs (superseded by current ranges)
"docs/HISTORICAL_VMID_REFERENCES.md"
"docs/VMID_UPDATE_COMPLETE.md"
"docs/VMID_REFERENCE_AUDIT.md"
"docs/VMID_ALLOCATION.md" # Superseded by VMID_ALLOCATION_FINAL.md
# Old deployment status/completion reports
"docs/DEPLOYMENT_STATUS.md"
"docs/DEPLOYMENT_REVIEW_COMPLETE.md"
"docs/DEPLOYMENT_REVIEW.md"
"docs/DEPLOYMENT_TIME_ESTIMATE.md"
"docs/DEPLOYMENT_TIME_ESTIMATE_BESU_ONLY.md"
"docs/DEPLOYMENT_VALIDATION_REPORT.md"
"docs/DEPLOYED_VMIDS_LIST.md"
"docs/DEPLOYMENT_OPTIMIZATION_COMPLETE.md"
"docs/DEPLOYMENT_OPTIMIZATION_RECOMMENDATIONS.md"
"docs/DEPLOYMENT_RECOMMENDATIONS_STATUS.md"
"docs/DEPLOYMENT_CONFIGURATION_VERIFICATION.md"
# Old status/completion reports
"docs/NEXT_STEPS_COMPREHENSIVE.md"
"docs/NEXT_STEPS_COMPLETE.md"
"docs/NEXT_STEPS_SUMMARY.md"
"docs/COMPLETION_REPORT.md"
"docs/FIXES_APPLIED.md"
"docs/REVIEW_FIXES_APPLIED.md"
"docs/MINOR_OBSERVATIONS_FIXED.md"
"docs/NON_CRITICAL_FIXES_COMPLETE.md"
"docs/QUICK_WINS_COMPLETED.md"
"docs/TASK_COMPLETION_SUMMARY.md"
"docs/IMPLEMENTATION_COMPLETE.md"
"docs/PREREQUISITES_COMPLETE.md"
"docs/SETUP_COMPLETE.md"
"docs/SETUP_COMPLETE_FINAL.md"
"docs/SETUP_STATUS.md"
"docs/VALIDATION_STATUS.md"
"docs/CONFIGURATION_ALIGNMENT.md"
# Old review documents
"docs/REVIEW_INCONSISTENCIES_GAPS.md"
"docs/REVIEW_SUMMARY.md"
"docs/COMPREHENSIVE_REVIEW.md"
"docs/FINAL_REVIEW.md"
"docs/DETAILED_ISSUES_REVIEW.md"
"docs/RECOMMENDATIONS_OVERVIEW.md"
# OS template analysis (historical)
"docs/OS_TEMPLATE_CHANGE.md"
"docs/UBUNTU_DEBIAN_ANALYSIS.md"
"docs/OS_TEMPLATE_ANALYSIS.md"
# Old DHCP documentation (containers now use static IPs)
"docs/DHCP_IP_ADDRESSES.md"
)
echo "Files to be removed (will be backed up first):"
echo ""
for doc in "${OBSOLETE_DOCS[@]}"; do
if [[ -f "$doc" ]]; then
echo " - $doc"
fi
done
echo ""
echo "⚠️ WARNING: This will remove the files listed above"
echo " All files will be backed up to: $BACKUP_DIR"
echo ""
read -p "Continue with pruning? (yes/no): " -r
if [[ ! $REPLY =~ ^[Yy][Ee][Ss]$ ]]; then
echo "Pruning cancelled"
exit 0
fi
echo ""
echo "Creating backups and removing files..."
removed_count=0
skipped_count=0
for doc in "${OBSOLETE_DOCS[@]}"; do
if [[ -f "$doc" ]]; then
# Create backup
backup_path="$BACKUP_DIR/$doc"
mkdir -p "$(dirname "$backup_path")"
cp "$doc" "$backup_path"
# Remove original
rm "$doc"
echo " ✅ Removed: $doc (backed up)"
removed_count=$((removed_count + 1))
else
skipped_count=$((skipped_count + 1))
fi
done
echo ""
echo "=== Pruning Complete ==="
echo " Files removed: $removed_count"
echo " Files skipped (not found): $skipped_count"
echo " Backup location: $BACKUP_DIR"
echo ""
# Create index of removed files
cat > "$BACKUP_DIR/REMOVED_FILES_INDEX.txt" <<EOF
Removed Documentation Files
Generated: $(date)
Files removed:
$(for doc in "${OBSOLETE_DOCS[@]}"; do echo " - $doc"; done)
Total removed: $removed_count
EOF
echo "Backup index created: $BACKUP_DIR/REMOVED_FILES_INDEX.txt"
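The backup-then-remove loop above preserves each file's relative path inside the backup directory before deleting the original. A minimal, self-contained sketch of that pattern (hypothetical file names, run in a temp dir):

```shell
# Copy a file into a mirrored path under backup_root, then remove the original
backup_and_remove() {
  local file="$1" backup_root="$2"
  mkdir -p "$backup_root/$(dirname "$file")"
  cp "$file" "$backup_root/$file" && rm "$file"
}

workdir=$(mktemp -d)
cd "$workdir"
mkdir -p docs
echo "old content" > docs/OBSOLETE.md

backup_and_remove "docs/OBSOLETE.md" "backup"
ls backup/docs   # the relative path docs/ is preserved under backup/
```

Mirroring the path means a restore is a plain `cp -r backup/docs/. docs/` with no name collisions.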

View File

@@ -0,0 +1,61 @@
#!/usr/bin/env bash
# Remove and Purge Containers 106-122
# Run this on the Proxmox host
set -euo pipefail
START_VMID=106
END_VMID=122
echo "========================================="
echo "Remove and Purge Containers 106-122"
echo "========================================="
echo ""
echo "WARNING: This will PERMANENTLY DELETE containers 106 through 122"
echo "All data in these containers will be lost!"
echo ""
read -p "Continue? [y/N]: " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Operation cancelled"
exit 0
fi
echo ""
for vmid in $(seq $START_VMID $END_VMID); do
echo "Processing container $vmid..."
# Check if container exists
if ! pct list 2>/dev/null | grep -q "^\s*$vmid\s"; then
echo "⚠ Container $vmid does not exist, skipping"
continue
fi
# Get container status
status=$(pct status "$vmid" 2>/dev/null | awk '{print $2}' || echo "unknown")
echo " Container $vmid status: $status"
# Stop container if running
if [[ "$status" == "running" ]]; then
echo " Stopping container $vmid..."
pct stop "$vmid" --timeout 30 2>/dev/null || {
echo " Force stopping container $vmid..."
pct stop "$vmid" --skiplock 2>/dev/null || true
}
sleep 2
fi
# Remove and purge container
echo " Removing and purging container $vmid..."
if pct destroy "$vmid" --purge 2>/dev/null; then
echo " ✓ Container $vmid removed and purged"
else
echo " ✗ Failed to remove container $vmid"
fi
done
echo ""
echo "========================================="
echo "Container removal completed!"
echo "========================================="

View File

@@ -0,0 +1,86 @@
#!/usr/bin/env bash
# Remove and prune all containers with VMID 120 and above
# This will STOP, DESTROY, and PRUNE containers
set -euo pipefail
HOST="${1:-192.168.11.10}"
USER="${2:-root}"
PASS="${3:-}"
if [[ -z "$PASS" ]]; then
echo "Usage: $0 [HOST] [USER] [PASSWORD]"
echo "Example: $0 192.168.11.10 root 'password'"
exit 1
fi
export SSHPASS="$PASS"
echo "=== Removing Containers VMID 120+ ==="
echo "Host: $HOST"
echo ""
# Get list of containers to remove
echo "Fetching list of containers VMID 120+..."
containers=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct list | awk 'NR>1 && \$1 >= 120 {print \$1}' | sort -n")
if [[ -z "$containers" ]]; then
echo "No containers found with VMID >= 120"
exit 0
fi
echo "Containers to be removed:"
for vmid in $containers; do
status=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct status $vmid 2>/dev/null | awk '{print \$2}' || echo 'missing'")
name=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct list | awk '\$1 == $vmid {print \$3}' || echo 'unknown'")
echo " VMID $vmid: $name (status: $status)"
done
echo ""
echo "⚠️ WARNING: This will DESTROY all containers VMID 120+"
echo " This action cannot be undone!"
echo ""
# Process each container
for vmid in $containers; do
echo "Processing VMID $vmid..."
# Get container name for logging
name=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct list | awk '\$1 == $vmid {print \$3}' || echo 'unknown'")
# Check if container exists
if ! sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct list | grep -q \"^\s*$vmid\s\""; then
echo " ⚠️ VMID $vmid does not exist, skipping"
continue
fi
# Stop container if running
status=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct status $vmid 2>/dev/null | awk '{print \$2}' || echo 'stopped'")
if [[ "$status" == "running" ]]; then
echo " Stopping container $vmid ($name)..."
sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct stop $vmid" || echo " ⚠️ Failed to stop $vmid (continuing)"
sleep 2
fi
# Destroy container
echo " Destroying container $vmid ($name)..."
if sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct destroy $vmid" 2>&1; then
echo " ✅ Container $vmid destroyed"
else
echo " ❌ Failed to destroy container $vmid"
fi
done
echo ""
echo "=== Cleanup and Pruning ==="
# Prune unused volumes (optional - be careful with this)
echo "Note: Volume pruning is not performed automatically to avoid data loss"
echo "To find and free orphaned volumes manually, run on the Proxmox host:"
echo "  pvesm list <storage>   # list volumes and look for orphans"
echo "  pvesm free <volid>     # free a specific orphaned volume"
echo ""
echo "=== Container Removal Complete ==="
echo "Remaining containers:"
sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct list"
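The remote VMID filter above relies on awk comparing the first column of `pct list` numerically. A local sketch of that filter against sample output (no live host needed; container names are made up):

```shell
# Produce pct-list-style sample output: header row, then VMID / Status / Name columns
sample_pct_list() {
  printf 'VMID       Status     Name\n'
  printf '100        running    infra-dns\n'
  printf '121        stopped    old-besu-node\n'
  printf '2500       running    rpc-1\n'
}

# NR>1 skips the header; $1+0 forces a numeric comparison of the VMID column
sample_pct_list | awk 'NR>1 && $1+0 >= 120 {print $1}'
```

The `$1+0` coercion makes the intent explicit, so the filter cannot accidentally fall back to string comparison.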

107
scripts/remove-containers.sh Executable file
View File

@@ -0,0 +1,107 @@
#!/usr/bin/env bash
# Remove and purge Proxmox LXC containers by VMID range
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Check if running on Proxmox host
if ! command -v pct >/dev/null 2>&1; then
log_error "This script must be run on a Proxmox host (pct command not found)"
exit 1
fi
# Check if running as root
if [[ $EUID -ne 0 ]]; then
log_error "This script must be run as root"
exit 1
fi
# Parse arguments
START_VMID="${1:-106}"
END_VMID="${2:-112}"
if [[ ! "$START_VMID" =~ ^[0-9]+$ ]] || [[ ! "$END_VMID" =~ ^[0-9]+$ ]]; then
log_error "Invalid VMID range. Usage: $0 [START_VMID] [END_VMID]"
exit 1
fi
if [[ $START_VMID -gt $END_VMID ]]; then
log_error "START_VMID ($START_VMID) must be less than or equal to END_VMID ($END_VMID)"
exit 1
fi
log_info "========================================="
log_info "Remove and Purge Containers"
log_info "========================================="
log_info "VMID Range: $START_VMID to $END_VMID"
log_info ""
# Confirm deletion
log_warn "WARNING: This will PERMANENTLY DELETE containers $START_VMID through $END_VMID"
log_warn "All data in these containers will be lost!"
echo ""
read -p "$(echo -e ${YELLOW}Continue? [y/N]: ${NC})" -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
log_info "Operation cancelled"
exit 0
fi
echo ""
# Process each VMID
for vmid in $(seq $START_VMID $END_VMID); do
log_info "Processing container $vmid..."
# Check if container exists
if ! pct list | grep -q "^\s*$vmid\s"; then
log_warn "Container $vmid does not exist, skipping"
continue
fi
# Get container status
status=$(pct status "$vmid" 2>/dev/null | awk '{print $2}' || echo "unknown")
log_info "Container $vmid status: $status"
# Stop container if running
if [[ "$status" == "running" ]]; then
log_info "Stopping container $vmid..."
pct stop "$vmid" --timeout 30 || {
log_warn "Failed to stop container $vmid gracefully, forcing stop..."
pct stop "$vmid" --skiplock || true
}
sleep 2
fi
# Remove container
log_info "Removing container $vmid..."
if pct destroy "$vmid" --purge 2>/dev/null; then
log_success "Container $vmid removed and purged"
else
log_error "Failed to remove container $vmid"
fi
done
log_info ""
log_success "========================================="
log_success "Container removal completed!"
log_success "========================================="
log_info ""
log_info "Removed containers: $START_VMID through $END_VMID"
log_info ""

View File

@@ -0,0 +1,163 @@
#!/usr/bin/env bash
# Review all containers and identify old/unnecessary ones for pruning
# Provides detailed information about each container
set -euo pipefail
HOST="${1:-192.168.11.10}"
USER="${2:-root}"
PASS="${3:-}"
if [[ -z "$PASS" ]]; then
echo "Usage: $0 [HOST] [USER] [PASSWORD]"
echo "Example: $0 192.168.11.10 root 'password'"
exit 1
fi
export SSHPASS="$PASS"
echo "=== Container Review and Analysis ==="
echo "Host: $HOST"
echo ""
# Get all containers
containers=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct list | awk 'NR>1 {print \$1}' | sort -n")
if [[ -z "$containers" ]]; then
echo "No containers found"
exit 0
fi
echo "=== All Containers ==="
echo ""
# Infrastructure containers to keep (typically 100-105)
INFRASTRUCTURE_VMIDS=(100 101 102 103 104 105)
for vmid in $containers; do
status=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct status $vmid 2>/dev/null | awk '{print \$2}' || echo 'missing'")
name=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct list | awk '\$1 == $vmid {print \$3}' || echo 'unknown'")
# Get creation info
created=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "stat -c '%y' /etc/pve/lxc/$vmid.conf 2>/dev/null | cut -d' ' -f1 || echo 'unknown'")
# Get disk usage
disk_size=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct config $vmid 2>/dev/null | grep '^rootfs:' | cut -d',' -f2 | cut -d'=' -f2 || echo 'unknown'")
# Check if infrastructure
is_infra=false
for infra_vmid in "${INFRASTRUCTURE_VMIDS[@]}"; do
if [[ "$vmid" == "$infra_vmid" ]]; then
is_infra=true
break
fi
done
# Determine category (narrower ranges checked before the broad 5000-5999 band,
# otherwise the Cacti and CCIP branches would be unreachable)
category="Other"
if [[ "$is_infra" == "true" ]]; then
category="Infrastructure"
elif [[ "$vmid" -ge 5200 && "$vmid" -lt 5300 ]]; then
category="Cacti/Interop"
elif [[ "$vmid" -ge 5400 && "$vmid" -lt 5600 ]]; then
category="CCIP/Chainlink"
elif [[ "$vmid" -ge 5000 && "$vmid" -lt 6000 ]]; then
category="Explorer/Indexer"
elif [[ "$vmid" -ge 1000 && "$vmid" -lt 5000 ]]; then
category="Besu Network"
elif [[ "$vmid" -ge 6000 && "$vmid" -lt 7000 ]]; then
category="Hyperledger"
fi
printf "VMID %-5s | %-12s | %-20s | %-15s | %-10s | %s\n" \
"$vmid" "$status" "$name" "$category" "$disk_size" "$created"
done
echo ""
echo "=== Summary ==="
echo ""
# Count by category
total=$(echo "$containers" | wc -l)
infra_count=0
stopped_count=0
running_count=0
for vmid in $containers; do
status=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct status $vmid 2>/dev/null | awk '{print \$2}' || echo 'missing'")
if [[ "$status" == "running" ]]; then
running_count=$((running_count + 1))
elif [[ "$status" == "stopped" ]]; then
stopped_count=$((stopped_count + 1))
fi
for infra_vmid in "${INFRASTRUCTURE_VMIDS[@]}"; do
if [[ "$vmid" == "$infra_vmid" ]]; then
infra_count=$((infra_count + 1))
break
fi
done
done
echo "Total containers: $total"
echo " Infrastructure (100-105): $infra_count"
echo " Running: $running_count"
echo " Stopped: $stopped_count"
echo ""
# Identify potential candidates for removal
echo "=== Potential Pruning Candidates ==="
echo ""
candidates_found=false
for vmid in $containers; do
status=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct status $vmid 2>/dev/null | awk '{print \$2}' || echo 'missing'")
name=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct list | awk '\$1 == $vmid {print \$3}' || echo 'unknown'")
# Skip infrastructure
is_infra=false
for infra_vmid in "${INFRASTRUCTURE_VMIDS[@]}"; do
if [[ "$vmid" == "$infra_vmid" ]]; then
is_infra=true
break
fi
done
if [[ "$is_infra" == "true" ]]; then
continue
fi
# Check if stopped for a long time (would need more detailed check)
if [[ "$status" == "stopped" ]]; then
echo " VMID $vmid: $name (stopped - may be candidate for removal)"
candidates_found=true
fi
done
if [[ "$candidates_found" == "false" ]]; then
echo " No obvious candidates found (all infrastructure or running)"
fi
echo ""
echo "=== Storage Usage ==="
sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "df -h | grep -E '(local-lvm|local)' | head -5"
echo ""
echo "=== Recommendations ==="
echo ""
echo "Infrastructure containers (100-105) should be KEPT:"
for infra_vmid in "${INFRASTRUCTURE_VMIDS[@]}"; do
if echo "$containers" | grep -q "^$infra_vmid$"; then
name=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "pct list | awk '\$1 == $infra_vmid {print \$3}' || echo 'unknown'")
echo " - VMID $infra_vmid: $name"
fi
done
echo ""
echo "To remove specific containers, use:"
echo " pct stop <vmid>"
echo " pct destroy <vmid>"
echo ""
echo "Or use the removal script:"
echo " ./scripts/remove-containers-120-plus.sh"

View File

@@ -0,0 +1,170 @@
#!/usr/bin/env bash
# Review project content and identify old/outdated information to prune
# Focuses on old VMID references, obsolete documentation, and outdated configurations
set -euo pipefail
PROJECT_ROOT="${PROJECT_ROOT:-/home/intlc/projects/proxmox}"
cd "$PROJECT_ROOT"
echo "=== Project Content Review and Pruning Analysis ==="
echo ""
# Current valid VMID ranges (DO NOT PRUNE)
CURRENT_VALIDATORS="1000-1004"
CURRENT_SENTRIES="1500-1503"
CURRENT_RPC="2500-2502"
CURRENT_INFRASTRUCTURE="100-105"
# Old VMID ranges that should be removed/updated
OLD_VMIDS="106|107|108|109|110|111|112|113|114|115|116|117|118|119|120|121|122|123"
OLD_RANGES="106-110|111-114|115-117|120-129|130-139"
echo "Current Valid VMID Ranges:"
echo " Infrastructure: $CURRENT_INFRASTRUCTURE (KEEP)"
echo " Validators: $CURRENT_VALIDATORS"
echo " Sentries: $CURRENT_SENTRIES"
echo " RPC: $CURRENT_RPC"
echo ""
echo "=== Files with Old VMID References ==="
echo ""
# Find files with old VMID references
echo "Searching for files referencing old VMIDs ($OLD_VMIDS)..."
OLD_VMID_FILES=$(grep -rE "\b($OLD_VMIDS)\b" --include="*.md" --include="*.sh" --include="*.js" --include="*.py" --include="*.conf" --include="*.example" \
smom-dbis-138-proxmox/ docs/ scripts/ 2>/dev/null | cut -d: -f1 | sort -u)
if [[ -n "$OLD_VMID_FILES" ]]; then
echo "Found $(echo "$OLD_VMID_FILES" | wc -l) files with old VMID references:"
echo ""
for file in $OLD_VMID_FILES; do
# Skip if file references current VMIDs too (may be migration docs)
if grep -qE "\b(1000|1001|1002|1003|1004|1500|1501|1502|1503|2500|2501|2502)\b" "$file" 2>/dev/null; then
echo " ⚠️ $file (has both old and new VMIDs - migration/historical doc?)"
else
echo "$file (old VMIDs only - candidate for update/removal)"
fi
done
else
echo " ✅ No files found with old VMID references"
fi
echo ""
echo "=== Historical/Migration Documents (May be obsolete) ==="
echo ""
HISTORICAL_DOCS=(
"docs/HISTORICAL_VMID_REFERENCES.md"
"docs/VMID_UPDATE_COMPLETE.md"
"docs/VMID_REFERENCE_AUDIT.md"
"docs/VMID_ALLOCATION.md"
"docs/OS_TEMPLATE_CHANGE.md"
"docs/UBUNTU_DEBIAN_ANALYSIS.md"
"docs/OS_TEMPLATE_ANALYSIS.md"
"docs/DEPLOYMENT_REVIEW_COMPLETE.md"
"docs/DEPLOYMENT_REVIEW.md"
"docs/DEPLOYMENT_STATUS.md"
"docs/DEPLOYMENT_OPTIMIZATION_COMPLETE.md"
"docs/DEPLOYMENT_TIME_ESTIMATE.md"
"docs/DEPLOYMENT_TIME_ESTIMATE_BESU_ONLY.md"
"docs/DEPLOYMENT_VALIDATION_REPORT.md"
"docs/DEPLOYMENT_VALIDATION_REQUIREMENTS.md"
"docs/DEPLOYED_VMIDS_LIST.md"
"docs/NEXT_STEPS_COMPREHENSIVE.md"
"docs/NEXT_STEPS_COMPLETE.md"
"docs/NEXT_STEPS_SUMMARY.md"
"docs/COMPLETION_REPORT.md"
"docs/FIXES_APPLIED.md"
"docs/REVIEW_FIXES_APPLIED.md"
"docs/MINOR_OBSERVATIONS_FIXED.md"
"docs/NON_CRITICAL_FIXES_COMPLETE.md"
"docs/QUICK_WINS_COMPLETED.md"
"docs/TASK_COMPLETION_SUMMARY.md"
"docs/IMPLEMENTATION_COMPLETE.md"
"docs/PREREQUISITES_COMPLETE.md"
"docs/SETUP_COMPLETE.md"
"docs/SETUP_COMPLETE_FINAL.md"
"docs/SETUP_STATUS.md"
"docs/VALIDATION_STATUS.md"
"docs/CONFIGURATION_ALIGNMENT.md"
"docs/DEPLOYMENT_CONFIGURATION_VERIFICATION.md"
"docs/REVIEW_INCONSISTENCIES_GAPS.md"
"docs/REVIEW_SUMMARY.md"
"docs/COMPREHENSIVE_REVIEW.md"
"docs/FINAL_REVIEW.md"
"docs/DETAILED_ISSUES_REVIEW.md"
"docs/DEPLOYMENT_OPTIMIZATION_RECOMMENDATIONS.md"
"docs/DEPLOYMENT_RECOMMENDATIONS_STATUS.md"
"docs/RECOMMENDATIONS_OVERVIEW.md"
)
for doc in "${HISTORICAL_DOCS[@]}"; do
if [[ -f "$doc" ]]; then
echo " 📄 $doc"
fi
done
echo ""
echo "=== Duplicate/Similar Documentation Files ==="
echo ""
# Find potentially duplicate documentation
DUPLICATE_PATTERNS=(
"*QUICK_START*"
"*DEPLOYMENT*"
"*NEXT_STEPS*"
"*REVIEW*"
"*COMPLETE*"
"*STATUS*"
)
for pattern in "${DUPLICATE_PATTERNS[@]}"; do
files=$(find docs/ smom-dbis-138-proxmox/docs/ -name "$pattern" -type f 2>/dev/null | sort)
if [[ -n "$files" ]]; then
count=$(echo "$files" | wc -l)
if [[ $count -gt 1 ]]; then
echo " Found $count files matching '$pattern':"
echo "$files" | sed 's/^/ /'
echo ""
fi
fi
done
echo "=== Outdated Configuration Examples ==="
echo ""
# Check for old config examples
if [[ -f "smom-dbis-138-proxmox/config/proxmox.conf.example" ]]; then
if grep -qE "\b(106|110|115|120)\b" smom-dbis-138-proxmox/config/proxmox.conf.example 2>/dev/null; then
echo " ⚠️ smom-dbis-138-proxmox/config/proxmox.conf.example contains old VMID defaults"
fi
fi
echo ""
echo "=== Summary and Recommendations ==="
echo ""
echo "Files to REVIEW for removal/update:"
echo " 1. Historical migration documents (if no longer needed)"
echo " 2. Duplicate documentation files"
echo " 3. Files with old VMID references that don't also reference current ranges"
echo " 4. Old configuration examples"
echo ""
echo "Files to KEEP (may have old references but are historical):"
echo " - Migration/historical reference documents (for context)"
echo " - Current active documentation (even if examples need updating)"
echo " - Configuration files in use"
echo ""
echo "=== Next Steps ==="
echo ""
echo "1. Review historical documents list above - decide which to archive/delete"
echo "2. Update active documentation with old VMID references"
echo "3. Update configuration examples to use current VMID ranges"
echo "4. Remove or archive obsolete status/completion reports"
echo ""
echo "To safely remove files, use:"
echo " ./scripts/prune-old-documentation.sh"
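The VMID scan above wraps the ID list in `\b` word boundaries so that, for example, 106 does not match inside 1106. A quick illustration of why the boundary matters (sample text, not project files):

```shell
# \b keeps 106 from matching inside larger numbers such as 1106 or 2106
sample="ct 106 was replaced; ct 1106 is unrelated; ct 2106 too"
echo "$sample" | grep -oE '\b106\b'   # only the standalone 106 matches
echo "$sample" | grep -cE '\b106\b'   # count of matching lines
```

Without the boundaries, every current VMID that happens to contain an old VMID as a substring would be flagged as a false positive.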

View File

@@ -0,0 +1,183 @@
#!/usr/bin/env bash
# Review ml110 for duplicates, missing files, and gaps
# Comprehensive review of ml110 structure and completeness
set -euo pipefail
REMOTE_HOST="192.168.11.10"
REMOTE_USER="root"
REMOTE_PASS="${REMOTE_PASS:?REMOTE_PASS must be set in the environment; the password is not stored in this script}"
REMOTE_BASE="/opt"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_section() { echo -e "${CYAN}=== $1 ===${NC}"; }
log_section "ml110 Completeness Review"
echo "Remote: ${REMOTE_USER}@${REMOTE_HOST}"
echo ""
# Test connection
if ! sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
"${REMOTE_USER}@${REMOTE_HOST}" "echo 'Connected'" 2>/dev/null; then
log_error "Cannot connect to ${REMOTE_HOST}"
exit 1
fi
log_success "Connected to ${REMOTE_HOST}"
echo ""
# 1. Check for duplicate directories
log_section "1. Duplicate Directories Check"
DUPLICATE_DIRS=$(sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd ${REMOTE_BASE} && find . -maxdepth 1 -type d -name 'smom-dbis-138*' 2>/dev/null" 2>/dev/null)
DIR_COUNT=$(echo "$DUPLICATE_DIRS" | grep -c . || true)
if [[ "$DIR_COUNT" -eq 2 ]]; then
log_success "Expected directories found (smom-dbis-138 and smom-dbis-138-proxmox)"
elif [[ "$DIR_COUNT" -gt 2 ]]; then
log_warn "Found $DIR_COUNT smom-dbis-138* directories (expected 2)"
echo "$DUPLICATE_DIRS"
else
log_info "Directories: $DUPLICATE_DIRS"
fi
echo ""
# 2. Check for duplicate files
log_section "2. Duplicate Files Check"
# Check for files with same name in different locations
DUPLICATE_FILES=$(sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd ${REMOTE_BASE} && find smom-dbis-138* -type f \( -name '*.sh' -o -name '*.conf' -o -name '*.md' \) 2>/dev/null | xargs -n1 basename | sort | uniq -d" 2>/dev/null)
if [[ -n "$DUPLICATE_FILES" ]]; then
log_warn "Found files with duplicate names:"
echo "$DUPLICATE_FILES" | head -10 | sed 's/^/ /'
else
log_success "No duplicate file names found"
fi
echo ""
# 3. Missing Critical Files
log_section "3. Missing Critical Files Check"
CRITICAL_FILES=(
"smom-dbis-138-proxmox/config/proxmox.conf"
"smom-dbis-138-proxmox/config/network.conf"
"smom-dbis-138-proxmox/scripts/deployment/deploy-validated-set.sh"
"smom-dbis-138-proxmox/scripts/copy-besu-config.sh"
"smom-dbis-138-proxmox/scripts/fix-container-ips.sh"
"smom-dbis-138-proxmox/scripts/network/bootstrap-network.sh"
"smom-dbis-138/config/genesis.json"
)
MISSING_COUNT=0
for file in "${CRITICAL_FILES[@]}"; do
EXISTS=$(sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd ${REMOTE_BASE} && [ -f \"$file\" ] && echo 'yes' || echo 'no'" 2>/dev/null)
if [[ "$EXISTS" == "yes" ]]; then
log_success "$(basename "$file")"
else
log_error "Missing: $file"
MISSING_COUNT=$((MISSING_COUNT + 1))
fi
done
echo ""
# 4. Validator Keys Check
log_section "4. Validator Keys Completeness"
VALIDATOR_KEYS=$(sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd ${REMOTE_BASE} && if [ -d 'smom-dbis-138/keys/validators' ]; then find smom-dbis-138/keys/validators -type d -name 'validator-*' 2>/dev/null | sort -V; else echo 'DIR_NOT_FOUND'; fi" 2>/dev/null)
if [[ "$VALIDATOR_KEYS" == "DIR_NOT_FOUND" ]]; then
log_error "keys/validators directory not found"
else
KEY_COUNT=$(echo "$VALIDATOR_KEYS" | grep -c "validator-" || true)
log_info "Found $KEY_COUNT validator key directories"
for i in 1 2 3 4 5; do
EXISTS=$(echo "$VALIDATOR_KEYS" | grep -q "validator-${i}$" && echo "yes" || echo "no")
if [[ "$EXISTS" == "yes" ]]; then
log_success "validator-$i"
else
log_error "Missing: validator-$i"
fi
done
fi
echo ""
# 5. Directory Structure Check
log_section "5. Directory Structure Check"
EXPECTED_DIRS=(
"smom-dbis-138-proxmox/config"
"smom-dbis-138-proxmox/scripts"
"smom-dbis-138-proxmox/scripts/deployment"
"smom-dbis-138-proxmox/scripts/network"
"smom-dbis-138-proxmox/scripts/validation"
"smom-dbis-138/config"
"smom-dbis-138/keys"
"smom-dbis-138/keys/validators"
)
for dir in "${EXPECTED_DIRS[@]}"; do
EXISTS=$(sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd ${REMOTE_BASE} && [ -d \"$dir\" ] && echo 'yes' || echo 'no'" 2>/dev/null)
if [[ "$EXISTS" == "yes" ]]; then
log_success "$dir/"
else
log_error "Missing directory: $dir/"
fi
done
echo ""
# 6. File Count Comparison
log_section "6. File Counts"
CONFIG_COUNT=$(sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd ${REMOTE_BASE} && find smom-dbis-138-proxmox/config -name '*.conf' -type f 2>/dev/null | wc -l" 2>/dev/null)
SCRIPT_COUNT=$(sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd ${REMOTE_BASE} && find smom-dbis-138-proxmox/scripts -name '*.sh' -type f 2>/dev/null | wc -l" 2>/dev/null)
DOC_COUNT=$(sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" "cd ${REMOTE_BASE} && find smom-dbis-138-proxmox/docs -name '*.md' -type f 2>/dev/null | wc -l" 2>/dev/null)
log_info "Configuration files: $CONFIG_COUNT"
log_info "Script files: $SCRIPT_COUNT"
log_info "Documentation files: $DOC_COUNT"
echo ""
# Summary
log_section "Summary"
if [[ $MISSING_COUNT -eq 0 ]]; then
log_success "All critical files present"
else
log_error "Found $MISSING_COUNT missing critical files"
fi
log_info "Review complete"
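The duplicate-file search above leans on find's `-o` operator, where grouping matters: without `\( \)`, `-type f` binds only to the first `-name` test, so a *directory* whose name matches a later pattern slips into the results. A minimal local demonstration (temporary paths, created just for the demo):

```shell
#!/usr/bin/env bash
set -euo pipefail
tmp=$(mktemp -d)
touch "$tmp/a.sh" "$tmp/b.conf"
mkdir "$tmp/e.conf"   # a directory that also matches *.conf
# Ungrouped: (-type f AND -name '*.sh') OR -name '*.conf' -- the directory slips through
ungrouped=$(find "$tmp" -type f -name '*.sh' -o -name '*.conf' | wc -l)
# Grouped: -type f AND (-name '*.sh' OR -name '*.conf') -- regular files only
grouped=$(find "$tmp" -type f \( -name '*.sh' -o -name '*.conf' \) | wc -l)
echo "ungrouped=$ungrouped grouped=$grouped"
rm -rf "$tmp"
```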

View File

@@ -0,0 +1,167 @@
#!/usr/bin/env bash
# Review consistency between proxmox deployment project and source smom-dbis-138 project
set -euo pipefail
PROJECT_ROOT="/home/intlc/projects/proxmox"
SOURCE_PROJECT="/home/intlc/projects/smom-dbis-138"
PROXMOX_PROJECT="$PROJECT_ROOT/smom-dbis-138-proxmox"
echo "=== Cross-Project Consistency Review ==="
echo ""
echo "Source Project: $SOURCE_PROJECT"
echo "Proxmox Project: $PROXMOX_PROJECT"
echo ""
if [[ ! -d "$SOURCE_PROJECT" ]]; then
echo "❌ Source project directory not found: $SOURCE_PROJECT"
exit 1
fi
if [[ ! -d "$PROXMOX_PROJECT" ]]; then
echo "❌ Proxmox project directory not found: $PROXMOX_PROJECT"
exit 1
fi
echo "=== 1. Checking IP Address References ==="
echo ""
# Expected IP ranges
EXPECTED_SUBNET="192.168.11"
OLD_SUBNET="10.3.1"
# Check source project for IP references
echo "Searching for IP address references in source project..."
SOURCE_IPS=$(grep -rE "(10\.3\.1\.|192\.168\.11\.)" "$SOURCE_PROJECT" \
--include="*.md" --include="*.sh" --include="*.js" --include="*.py" \
--include="*.toml" --include="*.json" --include="*.conf" \
2>/dev/null | grep -v node_modules | grep -v ".git" | head -20 || true)
if [[ -n "$SOURCE_IPS" ]]; then
echo "⚠️ IP addresses found in source project:"
echo "$SOURCE_IPS" | head -10
else
echo "✅ No IP address references found (or none in searched file types)"
fi
echo ""
echo "=== 2. Checking VMID References ==="
echo ""
# Check for VMID references in source project
SOURCE_VMIDS=$(grep -rE "\b(VMID|vmid)\b" "$SOURCE_PROJECT" \
--include="*.md" --include="*.sh" \
2>/dev/null | grep -v node_modules | grep -v ".git" | head -20 || true)
if [[ -n "$SOURCE_VMIDS" ]]; then
echo "⚠️ VMID references found in source project:"
echo "$SOURCE_VMIDS" | head -10
else
echo "✅ No VMID references found in source project (expected - it's deployment-specific)"
fi
echo ""
echo "=== 3. Checking Network Configuration Files ==="
echo ""
# Check for network config files
NETWORK_FILES=(
"$SOURCE_PROJECT/config/genesis.json"
"$SOURCE_PROJECT/config/permissions-nodes.toml"
"$SOURCE_PROJECT/config/permissions-accounts.toml"
"$SOURCE_PROJECT/config/config-validator.toml"
"$SOURCE_PROJECT/config/config-sentry.toml"
"$SOURCE_PROJECT/config/config-rpc*.toml"
)
echo "Checking for Besu configuration files in source project:"
for file in "${NETWORK_FILES[@]}"; do
if [[ -f "$file" ]] || compgen -G "$file" >/dev/null; then
echo " ✅ Found: $(basename "$file" 2>/dev/null || echo "$file")"
else
echo " ⚠️ Not found: $(basename "$file" 2>/dev/null || echo "$file")"
fi
done
echo ""
echo "=== 4. Checking Validator Keys ==="
echo ""
KEYS_DIR="$SOURCE_PROJECT/keys/validators"
if [[ -d "$KEYS_DIR" ]]; then
KEY_COUNT=$(find "$KEYS_DIR" -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l)
echo " ✅ Validator keys directory exists: $KEYS_DIR"
echo " Found $KEY_COUNT validator key directories"
if [[ $KEY_COUNT -ne 5 ]]; then
echo " ⚠️ Expected 5 validators, found $KEY_COUNT"
fi
else
echo " ⚠️ Validator keys directory not found: $KEYS_DIR"
fi
echo ""
echo "=== 5. Checking Chain ID Consistency ==="
echo ""
CHAIN_ID_PROXMOX=$(grep -rE "CHAIN_ID|chain.?id|chainId" "$PROXMOX_PROJECT/config" \
--include="*.conf" --include="*.toml" \
2>/dev/null | grep -iE "138" | head -1 || echo "")
CHAIN_ID_SOURCE=$(grep -rE "chain.?id|chainId" "$SOURCE_PROJECT/config" \
--include="*.json" --include="*.toml" \
2>/dev/null | grep -iE "138" | head -1 || echo "")
echo "Chain ID 138 references:"
if [[ -n "$CHAIN_ID_PROXMOX" ]]; then
echo " ✅ Proxmox project: Found Chain ID 138"
else
echo " ⚠️ Proxmox project: Chain ID 138 not explicitly found"
fi
if [[ -n "$CHAIN_ID_SOURCE" ]]; then
echo " ✅ Source project: Found Chain ID 138"
else
echo " ⚠️ Source project: Chain ID 138 not explicitly found"
fi
echo ""
echo "=== 6. Checking Documentation Consistency ==="
echo ""
# Check for README files
echo "Checking README files:"
for readme in "$SOURCE_PROJECT/README.md" "$PROXMOX_PROJECT/README.md"; do
if [[ -f "$readme" ]]; then
echo " ✅ Found: $(basename $(dirname "$readme"))/README.md"
else
echo " ⚠️ Not found: $(basename $(dirname "$readme"))/README.md"
fi
done
echo ""
echo "=== 7. Checking Service Configuration ==="
echo ""
# Check for service directories
SERVICES_DIR="$SOURCE_PROJECT/services"
if [[ -d "$SERVICES_DIR" ]]; then
SERVICE_COUNT=$(find "$SERVICES_DIR" -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l)
echo " ✅ Services directory exists: $SERVICES_DIR"
echo " Found $SERVICE_COUNT service directories"
# List services
echo " Services:"
find "$SERVICES_DIR" -mindepth 1 -maxdepth 1 -type d 2>/dev/null | while read -r dir; do
echo " - $(basename "$dir")"
done
else
echo " ⚠️ Services directory not found: $SERVICES_DIR"
fi
echo ""
echo "=== Summary ==="
echo ""
echo "Review complete. Check warnings above for potential inconsistencies."
echo ""

View File

@@ -0,0 +1,76 @@
#!/usr/bin/env bash
# Run deployment on ml110
# This script provides instructions and can optionally run the deployment
set -euo pipefail
REMOTE_HOST="192.168.11.10"
REMOTE_USER="root"
REMOTE_PASS="L@kers2010"
echo "=== Complete Validated Deployment on ml110 ==="
echo ""
echo "Target: ${REMOTE_USER}@${REMOTE_HOST}"
echo ""
# Test connection
if ! sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
"${REMOTE_USER}@${REMOTE_HOST}" "echo 'Connected'" 2>/dev/null; then
echo "ERROR: Cannot connect to ${REMOTE_HOST}"
exit 1
fi
echo "✓ Connection successful"
echo ""
# Check if script exists on remote
if ! sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" \
"test -f /opt/smom-dbis-138-proxmox/scripts/deployment/deploy-validated-set.sh" 2>/dev/null; then
echo "ERROR: deploy-validated-set.sh not found on ${REMOTE_HOST}"
exit 1
fi
echo "✓ Deployment script found"
echo ""
# Check if source project exists
if ! sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" \
"test -d /opt/smom-dbis-138" 2>/dev/null; then
echo "ERROR: Source project /opt/smom-dbis-138 not found on ${REMOTE_HOST}"
exit 1
fi
echo "✓ Source project found"
echo ""
echo "Starting deployment..."
echo "This may take a while (up to 1 hour for full deployment)"
echo ""
# Run deployment with timeout (2 hours max)
EXIT_CODE=0
sshpass -p "$REMOTE_PASS" ssh -o StrictHostKeyChecking=no \
"${REMOTE_USER}@${REMOTE_HOST}" \
"cd /opt/smom-dbis-138-proxmox && \
chmod +x ./scripts/deployment/deploy-validated-set.sh && \
timeout 7200 ./scripts/deployment/deploy-validated-set.sh \
--source-project /opt/smom-dbis-138" 2>&1 || EXIT_CODE=$?
if [[ $EXIT_CODE -eq 0 ]]; then
echo ""
echo "✅ Deployment completed successfully!"
elif [[ $EXIT_CODE -eq 124 ]]; then
echo ""
echo "⚠ Deployment timed out (2 hours)"
echo "Check the deployment status manually"
else
echo ""
echo "❌ Deployment failed with exit code: $EXIT_CODE"
echo "Check the output above for errors"
fi
exit $EXIT_CODE
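Because the script runs under `set -euo pipefail`, reading `$?` after a failing command only works if the failure is caught inline; otherwise the shell exits before the status can be inspected. A minimal sketch of the capture pattern (`run_step` is a stand-in for the long-running ssh/timeout call):

```shell
#!/usr/bin/env bash
set -euo pipefail
run_step() { return 124; }  # stand-in for the remote deployment; 124 is timeout's exit code
EXIT_CODE=0
run_step || EXIT_CODE=$?    # the || guard stops set -e from aborting here
echo "captured exit code: $EXIT_CODE"
```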

scripts/run-fixes-on-proxmox.sh Executable file
View File

@@ -0,0 +1,111 @@
#!/usr/bin/env bash
# Run Besu fixes on Proxmox host via SSH
# Uses sshpass for password authentication
set -euo pipefail
HOST="${1:-192.168.11.10}"
USER="${2:-root}"
PASS="${3:-}"
REMOTE_DIR="${4:-/opt/smom-dbis-138-proxmox}"
if [[ -z "$PASS" ]]; then
echo "Usage: $0 [HOST] [USER] [PASSWORD] [REMOTE_DIR]"
echo "Example: $0 192.168.11.10 root 'password' /opt/smom-dbis-138-proxmox"
exit 1
fi
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
BESU_PROJECT="$PROJECT_ROOT/smom-dbis-138-proxmox"
echo "=== Running Besu Fixes on Proxmox Host ==="
echo "Host: $HOST"
echo "User: $USER"
echo "Remote Directory: $REMOTE_DIR"
echo ""
# Check if sshpass is installed
if ! command -v sshpass >/dev/null 2>&1; then
echo "❌ sshpass not found. Please install it:"
echo " Ubuntu/Debian: sudo apt-get install sshpass"
echo " macOS: brew install sshpass"
exit 1
fi
# Export password for sshpass
export SSHPASS="$PASS"
# Test connection
echo "Testing SSH connection..."
if ! sshpass -e ssh -o StrictHostKeyChecking=no -o ConnectTimeout=10 "$USER@$HOST" "echo 'Connection successful'" 2>/dev/null; then
echo "❌ Cannot connect to $HOST"
exit 1
fi
echo "✅ SSH connection successful"
echo ""
# Check if remote directory exists and has scripts
echo "Checking remote directory..."
if sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "test -d $REMOTE_DIR/scripts" 2>/dev/null; then
echo "✅ Remote directory exists: $REMOTE_DIR"
HAS_SCRIPTS=$(sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "test -f $REMOTE_DIR/scripts/fix-all-besu.sh && echo yes || echo no" 2>/dev/null)
if [[ "$HAS_SCRIPTS" == "yes" ]]; then
echo "✅ Fix scripts already exist on remote host"
USE_REMOTE=true
else
echo "⚠️ Fix scripts not found, copying them..."
USE_REMOTE=false
fi
else
echo "⚠️ Remote directory not found, creating it..."
sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "mkdir -p $REMOTE_DIR" 2>/dev/null
USE_REMOTE=false
fi
# Copy scripts if needed
if [[ "$USE_REMOTE" == "false" ]]; then
echo "Copying fix scripts to remote host..."
# Create scripts directory
sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "mkdir -p $REMOTE_DIR/scripts" 2>/dev/null
# Copy fix scripts
for script in fix-container-ips.sh fix-besu-services.sh validate-besu-config.sh fix-all-besu.sh; do
if [[ -f "$BESU_PROJECT/scripts/$script" ]]; then
echo " Copying $script..."
sshpass -e scp -o StrictHostKeyChecking=no "$BESU_PROJECT/scripts/$script" "$USER@$HOST:$REMOTE_DIR/scripts/" 2>/dev/null
fi
done
# Copy lib directory if it exists
if [[ -d "$BESU_PROJECT/lib" ]]; then
echo " Copying lib directory..."
sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "mkdir -p $REMOTE_DIR/lib" 2>/dev/null
sshpass -e scp -o StrictHostKeyChecking=no -r "$BESU_PROJECT/lib"/* "$USER@$HOST:$REMOTE_DIR/lib/" 2>/dev/null
fi
# Copy config directory
if [[ -d "$BESU_PROJECT/config" ]]; then
echo " Copying config directory..."
sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "mkdir -p $REMOTE_DIR/config" 2>/dev/null
sshpass -e scp -o StrictHostKeyChecking=no -r "$BESU_PROJECT/config"/* "$USER@$HOST:$REMOTE_DIR/config/" 2>/dev/null
fi
# Make scripts executable
sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "chmod +x $REMOTE_DIR/scripts/*.sh" 2>/dev/null
echo "✅ Scripts copied"
fi
echo ""
echo "=== Running fix-all-besu.sh on Proxmox host ==="
echo ""
# Run the fix script
sshpass -e ssh -o StrictHostKeyChecking=no "$USER@$HOST" "cd $REMOTE_DIR && bash scripts/fix-all-besu.sh"
echo ""
echo "=== Fix execution completed ==="

View File

@@ -0,0 +1,49 @@
#!/bin/bash
# Secure Validator Key Permissions
# Run on Proxmox host after validator keys are deployed
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
if ! command -v pct >/dev/null 2>&1; then
echo "Error: pct command not found. This script must be run on Proxmox host."
exit 1
fi
if [[ $EUID -ne 0 ]]; then
echo "Error: This script must be run as root"
exit 1
fi
# Secure keys in validator containers
for vmid in 1000 1001 1002 1003 1004; do
if pct status "$vmid" 2>/dev/null | grep -q running; then
log_info "Securing keys in container $vmid..."
# Set file permissions to 600 for key files
pct exec "$vmid" -- find /keys/validators -type f \( -name "*.pem" -o -name "*.priv" -o -name "key" \) -exec chmod 600 {} \; 2>/dev/null || true
# Set directory permissions
pct exec "$vmid" -- find /keys/validators -type d -exec chmod 700 {} \; 2>/dev/null || true
# Set ownership to besu:besu
pct exec "$vmid" -- chown -R besu:besu /keys/validators 2>/dev/null || true
log_success "Container $vmid secured"
else
log_warn "Container $vmid is not running, skipping"
fi
done
log_success "Validator key security check complete!"
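The chmod/chown scheme above (600 on key files, 700 on directories) can be sanity-checked locally on a scratch tree before pointing it at containers; `stat -c` is the GNU coreutils form, so this assumes a Linux host:

```shell
#!/usr/bin/env bash
set -euo pipefail
tmp=$(mktemp -d)
mkdir -p "$tmp/validators"
touch "$tmp/validators/key"
# Same find predicates as the pct exec commands, applied to the scratch tree
find "$tmp/validators" -type f -name 'key' -exec chmod 600 {} \;
find "$tmp/validators" -type d -exec chmod 700 {} \;
key_mode=$(stat -c '%a' "$tmp/validators/key")
dir_mode=$(stat -c '%a' "$tmp/validators")
echo "key mode: $key_mode"
echo "dir mode: $dir_mode"
rm -rf "$tmp"
```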

View File

@@ -0,0 +1,157 @@
#!/usr/bin/env bash
# Set root password on all LXC containers
# Usage: ./set-container-passwords.sh [password]
# If no password provided, will prompt for one
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Function to print colored output
info() { echo -e "${GREEN}[INFO]${NC} $1"; }
warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Get password
if [[ $# -eq 1 ]]; then
if [[ "$1" == "--generate" ]] || [[ "$1" == "-g" ]]; then
# Generate secure password
PASSWORD=$(openssl rand -base64 24 | tr -d "=+/" | cut -c1-20)
info "Generated secure password: $PASSWORD"
info "⚠️ SAVE THIS PASSWORD - it will not be shown again!"
echo ""
# Skip prompt if running non-interactively
if [[ -t 0 ]]; then
read -p "Press Enter to continue or Ctrl+C to cancel..."
echo ""
fi
else
PASSWORD="$1"
fi
else
echo "Enter root password for all LXC containers (or press Enter to generate one):"
read -s PASSWORD
echo ""
if [[ -z "$PASSWORD" ]]; then
# Generate secure password
PASSWORD=$(openssl rand -base64 24 | tr -d "=+/" | cut -c1-20)
info "Generated secure password: $PASSWORD"
info "⚠️ SAVE THIS PASSWORD - it will not be shown again!"
echo ""
read -p "Press Enter to continue or Ctrl+C to cancel..."
echo ""
else
echo "Confirm password:"
read -s PASSWORD_CONFIRM
echo ""
if [[ "$PASSWORD" != "$PASSWORD_CONFIRM" ]]; then
error "Passwords do not match!"
exit 1
fi
fi
fi
# Validate length regardless of how the password was supplied
if [[ ${#PASSWORD} -lt 8 ]]; then
error "Password must be at least 8 characters long!"
exit 1
fi
info "Setting root password on LXC containers (VMID 1000+)..."
info "Proxmox Host: $PROXMOX_HOST"
echo ""
# Get list of all LXC containers
ALL_CONTAINERS=$(ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct list | tail -n +2 | awk '{print \$1}'" 2>/dev/null)
if [[ -z "$ALL_CONTAINERS" ]]; then
error "Failed to get container list from Proxmox host"
exit 1
fi
# Filter to only containers >= 1000
CONTAINERS=""
for vmid in $ALL_CONTAINERS; do
if [[ $vmid -ge 1000 ]]; then
CONTAINERS="$CONTAINERS $vmid"
fi
done
CONTAINERS=$(echo $CONTAINERS | xargs) # Trim whitespace
if [[ -z "$CONTAINERS" ]]; then
error "No containers found with VMID >= 1000"
exit 1
fi
# Count containers
CONTAINER_COUNT=$(echo "$CONTAINERS" | wc -w)
info "Found $CONTAINER_COUNT containers (VMID 1000+) to update"
echo ""
# Track results
SUCCESS=0
FAILED=0
SKIPPED=0
# Set password on each container
for vmid in $CONTAINERS; do
# Filter: only process containers with VMID >= 1000
if [[ $vmid -lt 1000 ]]; then
continue
fi
# Check if container is running
STATUS=$(ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct status $vmid 2>/dev/null | awk '{print \$2}'" 2>/dev/null || echo "unknown")
if [[ "$STATUS" != "running" ]]; then
warn "VMID $vmid: Container not running (status: $STATUS), skipping..."
SKIPPED=$((SKIPPED + 1))
continue
fi
# Get container hostname for display
HOSTNAME=$(ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $vmid -- hostname 2>/dev/null" || echo "unknown")
echo -n "VMID $vmid ($HOSTNAME): "
# Set password using chpasswd; catch the exit code inline so set -e doesn't abort
EXIT_CODE=0
RESULT=$(ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $vmid -- chpasswd <<< \"root:${PASSWORD}\" 2>&1" 2>&1) || EXIT_CODE=$?
if [[ $EXIT_CODE -eq 0 ]]; then
echo -e "${GREEN}✓ Password set${NC}"
SUCCESS=$((SUCCESS + 1))
else
echo -e "${RED}✗ Failed${NC}"
if [[ -n "$RESULT" ]]; then
echo " Error: $RESULT"
fi
FAILED=$((FAILED + 1))
fi
done
echo ""
echo "=========================================="
info "Summary:"
echo " Success: $SUCCESS"
echo " Failed: $FAILED"
echo " Skipped: $SKIPPED"
echo " Total: $CONTAINER_COUNT"
echo "=========================================="
if [[ $FAILED -gt 0 ]]; then
exit 1
fi
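The generate-then-validate flow can be exercised on its own; the pipeline below mirrors the script's generator (24 random bytes, base64-encoded, `=+/` stripped, truncated to 20 characters), so it always satisfies the 8-character minimum:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Generate a 20-character password the same way the script does
PASSWORD=$(openssl rand -base64 24 | tr -d "=+/" | cut -c1-20)
if [[ ${#PASSWORD} -lt 8 ]]; then
    echo "generated password unexpectedly short" >&2
    exit 1
fi
echo "length=${#PASSWORD}"
```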

scripts/setup-cloudflare-env.sh Executable file
View File

@@ -0,0 +1,100 @@
#!/usr/bin/env bash
# Interactive setup for Cloudflare API credentials
# Usage: ./setup-cloudflare-env.sh
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ENV_FILE="$SCRIPT_DIR/../.env"
echo "Cloudflare API Credentials Setup"
echo "================================="
echo ""
# Check if .env exists
if [[ -f "$ENV_FILE" ]]; then
echo "Found existing .env file: $ENV_FILE"
read -p "Overwrite? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Keeping existing .env file"
exit 0
fi
fi
echo ""
echo "Choose authentication method:"
echo "1. API Token (Recommended - more secure)"
echo "2. Email + API Key (Legacy)"
read -p "Choice [1-2]: " -n 1 -r
echo ""
if [[ $REPLY == "1" ]]; then
echo ""
echo "Get your API Token from:"
echo " https://dash.cloudflare.com/profile/api-tokens"
echo " Create token with: Zone:Edit, Account:Cloudflare Tunnel:Edit permissions"
echo ""
read -p "Enter Cloudflare API Token: " -s CLOUDFLARE_API_TOKEN
echo ""
if [[ -z "$CLOUDFLARE_API_TOKEN" ]]; then
echo "Error: API Token cannot be empty"
exit 1
fi
cat > "$ENV_FILE" <<EOF
# Cloudflare API Configuration
CLOUDFLARE_API_TOKEN="${CLOUDFLARE_API_TOKEN}"
# Domain
DOMAIN="d-bis.org"
# Optional: Zone ID (will be auto-detected if not set)
# CLOUDFLARE_ZONE_ID=""
# Optional: Account ID (will be auto-detected if not set)
# CLOUDFLARE_ACCOUNT_ID=""
# Tunnel Token (already installed)
TUNNEL_TOKEN="eyJhIjoiNTJhZDU3YTcxNjcxYzVmYzAwOWVkZjA3NDQ2NTgxOTYiLCJ0IjoiMTBhYjIyZGEtOGVhMy00ZTJlLWE4OTYtMjdlY2UyMjExYTA1IiwicyI6IlptRXlOMkkyTVRrdE1EZzFNeTAwTkRBNExXSXhaalF0Wm1KaE5XVmpaVEEzTVdGbCJ9"
EOF
elif [[ $REPLY == "2" ]]; then
echo ""
read -p "Enter Cloudflare Email: " CLOUDFLARE_EMAIL
read -p "Enter Cloudflare API Key: " -s CLOUDFLARE_API_KEY
echo ""
if [[ -z "$CLOUDFLARE_EMAIL" ]] || [[ -z "$CLOUDFLARE_API_KEY" ]]; then
echo "Error: Email and API Key cannot be empty"
exit 1
fi
cat > "$ENV_FILE" <<EOF
# Cloudflare API Configuration
CLOUDFLARE_EMAIL="${CLOUDFLARE_EMAIL}"
CLOUDFLARE_API_KEY="${CLOUDFLARE_API_KEY}"
# Domain
DOMAIN="d-bis.org"
# Optional: Zone ID (will be auto-detected if not set)
# CLOUDFLARE_ZONE_ID=""
# Optional: Account ID (will be auto-detected if not set)
# CLOUDFLARE_ACCOUNT_ID=""
# Tunnel Token (already installed)
TUNNEL_TOKEN="eyJhIjoiNTJhZDU3YTcxNjcxYzVmYzAwOWVkZjA3NDQ2NTgxOTYiLCJ0IjoiMTBhYjIyZGEtOGVhMy00ZTJlLWE4OTYtMjdlY2UyMjExYTA1IiwicyI6IlptRXlOMkkyTVRrdE1EZzFNeTAwTkRBNExXSXhaalF0Wm1KaE5XVmpaVEEzTVdGbCJ9"
EOF
else
echo "Invalid choice"
exit 1
fi
chmod 600 "$ENV_FILE"
echo ""
echo "✓ Credentials saved to: $ENV_FILE"
echo ""
echo "Next step: Run ./scripts/configure-cloudflare-api.sh"

View File

@@ -0,0 +1,206 @@
#!/usr/bin/env bash
# Setup Cloudflare Tunnel for RPC endpoints on VMID 102
# Usage: ./setup-cloudflare-tunnel-rpc.sh <TUNNEL_TOKEN>
# Example: ./setup-cloudflare-tunnel-rpc.sh eyJhIjoiNT...
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.10}"
CLOUDFLARED_VMID="${CLOUDFLARED_VMID:-102}"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
info() { echo -e "${GREEN}[INFO]${NC} $1"; }
warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Check if token provided
if [[ $# -eq 0 ]]; then
error "Tunnel token required!"
echo ""
echo "Usage: $0 <TUNNEL_TOKEN>"
echo ""
echo "Get your token from Cloudflare Dashboard:"
echo " Zero Trust → Networks → Tunnels → Create tunnel → Copy token"
echo ""
exit 1
fi
TUNNEL_TOKEN="$1"
info "Setting up Cloudflare Tunnel for RPC endpoints..."
info "Proxmox Host: $PROXMOX_HOST"
info "Cloudflared Container: VMID $CLOUDFLARED_VMID"
echo ""
# Check if container is running
STATUS=$(ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct status $CLOUDFLARED_VMID 2>/dev/null | awk '{print \$2}'" 2>/dev/null || echo "unknown")
if [[ "$STATUS" != "running" ]]; then
error "Container $CLOUDFLARED_VMID is not running (status: $STATUS)"
exit 1
fi
# Check if cloudflared is installed
if ! ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $CLOUDFLARED_VMID -- which cloudflared >/dev/null 2>&1"; then
info "Installing cloudflared..."
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $CLOUDFLARED_VMID -- bash -c '
mkdir -p --mode=0755 /usr/share/keyrings
curl -fsSL https://pkg.cloudflare.com/cloudflare-public-v2.gpg | tee /usr/share/keyrings/cloudflare-public-v2.gpg >/dev/null
echo \"deb [signed-by=/usr/share/keyrings/cloudflare-public-v2.gpg] https://pkg.cloudflare.com/cloudflared any main\" | tee /etc/apt/sources.list.d/cloudflared.list
apt-get update -qq && apt-get install -y -qq cloudflared
'" || {
error "Failed to install cloudflared"
exit 1
}
info "✓ cloudflared installed"
else
info "✓ cloudflared already installed"
fi
# Stop existing cloudflared service if running
info "Stopping existing cloudflared service..."
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $CLOUDFLARED_VMID -- systemctl stop cloudflared 2>/dev/null || true"
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $CLOUDFLARED_VMID -- systemctl disable cloudflared 2>/dev/null || true"
# Install tunnel service with token
info "Installing tunnel service with token..."
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $CLOUDFLARED_VMID -- cloudflared service install $TUNNEL_TOKEN" || {
error "Failed to install tunnel service"
exit 1
}
info "✓ Tunnel service installed"
# Create tunnel configuration file
info "Creating tunnel configuration for RPC endpoints..."
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $CLOUDFLARED_VMID -- bash" <<'EOF'
cat > /etc/cloudflared/config.yml <<'CONFIG'
# Cloudflare Tunnel Configuration for RPC Endpoints
# This file is auto-generated. Manual edits may be overwritten.
ingress:
# Public HTTP RPC
- hostname: rpc-http-pub.d-bis.org
service: https://192.168.11.251:443
originRequest:
noHappyEyeballs: true
connectTimeout: 30s
tcpKeepAlive: 30s
keepAliveConnections: 100
keepAliveTimeout: 90s
# Public WebSocket RPC
- hostname: rpc-ws-pub.d-bis.org
service: https://192.168.11.251:443
originRequest:
noHappyEyeballs: true
connectTimeout: 30s
tcpKeepAlive: 30s
keepAliveConnections: 100
keepAliveTimeout: 90s
httpHostHeader: rpc-ws-pub.d-bis.org
# Private HTTP RPC
- hostname: rpc-http-prv.d-bis.org
service: https://192.168.11.252:443
originRequest:
noHappyEyeballs: true
connectTimeout: 30s
tcpKeepAlive: 30s
keepAliveConnections: 100
keepAliveTimeout: 90s
# Private WebSocket RPC
- hostname: rpc-ws-prv.d-bis.org
service: https://192.168.11.252:443
originRequest:
noHappyEyeballs: true
connectTimeout: 30s
tcpKeepAlive: 30s
keepAliveConnections: 100
keepAliveTimeout: 90s
httpHostHeader: rpc-ws-prv.d-bis.org
# Catch-all (must be last)
- service: http_status:404
CONFIG
chmod 600 /etc/cloudflared/config.yml
EOF
# set -e aborts above if the heredoc command fails, so reaching here means success
info "✓ Tunnel configuration created"
# Enable and start tunnel service
info "Enabling and starting tunnel service..."
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $CLOUDFLARED_VMID -- systemctl enable cloudflared" || {
warn "Failed to enable service (may already be enabled)"
}
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $CLOUDFLARED_VMID -- systemctl start cloudflared" || {
error "Failed to start tunnel service"
exit 1
}
# Wait a moment for service to start
sleep 2
# Check service status
info "Checking tunnel service status..."
STATUS=$(ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $CLOUDFLARED_VMID -- systemctl is-active cloudflared 2>/dev/null" || echo "inactive")
if [[ "$STATUS" == "active" ]]; then
info "✓ Tunnel service is running"
else
error "Tunnel service is not active"
warn "Checking logs..."
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $CLOUDFLARED_VMID -- journalctl -u cloudflared -n 20 --no-pager"
exit 1
fi
# Show tunnel info
info "Tunnel information:"
ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $CLOUDFLARED_VMID -- cloudflared tunnel info 2>/dev/null | head -10" || {
warn "Could not retrieve tunnel info (may need a moment to connect)"
}
echo ""
info "Cloudflare Tunnel setup complete!"
echo ""
info "Next steps:"
echo " 1. Configure DNS records in Cloudflare:"
echo " - rpc-http-pub.d-bis.org → CNAME → <tunnel-id>.cfargotunnel.com (🟠 Proxied)"
echo " - rpc-ws-pub.d-bis.org → CNAME → <tunnel-id>.cfargotunnel.com (🟠 Proxied)"
echo " - rpc-http-prv.d-bis.org → CNAME → <tunnel-id>.cfargotunnel.com (🟠 Proxied)"
echo " - rpc-ws-prv.d-bis.org → CNAME → <tunnel-id>.cfargotunnel.com (🟠 Proxied)"
echo ""
echo " 2. Verify tunnel status in Cloudflare Dashboard:"
echo " Zero Trust → Networks → Tunnels → Your Tunnel"
echo ""
echo " 3. Test endpoints:"
echo " curl https://rpc-http-pub.d-bis.org/health"
echo ""
info "To view tunnel logs:"
echo " ssh root@$PROXMOX_HOST 'pct exec $CLOUDFLARED_VMID -- journalctl -u cloudflared -f'"
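Once the CNAME records resolve, the tunneled endpoints can be smoke-tested with a standard Ethereum JSON-RPC request; the hostname below matches the ingress rules configured above, and the curl call is left commented since it needs live DNS:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Standard eth_blockNumber request body (JSON-RPC 2.0)
BODY='{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
echo "request body: $BODY"
# Against the live endpoint (requires the CNAME records from step 1):
#   curl -sS -X POST -H 'Content-Type: application/json' \
#     --data "$BODY" https://rpc-http-pub.d-bis.org
```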

View File

@@ -0,0 +1,209 @@
#!/usr/bin/env bash
# Set up Let's Encrypt certificate using DNS-01 challenge for RPC-01 (VMID 2500)
# This is useful when port 80 is not accessible or for internal domains
# Usage: ./setup-letsencrypt-dns-01-rpc-2500.sh <domain> [cloudflare-api-token]
set -e
VMID=2500
PROXMOX_HOST="192.168.11.10"
if [ $# -lt 1 ]; then
echo "Usage: $0 <domain> [cloudflare-api-token]"
echo "Example: $0 rpc-core.yourdomain.com YOUR_CLOUDFLARE_API_TOKEN"
exit 1
fi
DOMAIN="$1"
API_TOKEN="${2:-}"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_info "Setting up Let's Encrypt certificate (DNS-01) for RPC-01 (VMID $VMID)"
log_info "Domain: $DOMAIN"
echo ""
# Check if domain is .local
if echo "$DOMAIN" | grep -q "\.local$"; then
log_error "Let's Encrypt does not support .local domains"
log_info "Please use a public domain (e.g., rpc-core.yourdomain.com)"
exit 1
fi
# Install Certbot
log_info "1. Installing Certbot..."
if ! sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- which certbot >/dev/null 2>&1"; then
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash -c '
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq
apt-get install -y -qq certbot
'"
log_success "Certbot installed"
else
log_success "Certbot already installed"
fi
# Check if Cloudflare API token provided
if [ -n "$API_TOKEN" ]; then
log_info ""
log_info "2. Setting up Cloudflare DNS plugin..."
# Install Cloudflare plugin
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash -c '
export DEBIAN_FRONTEND=noninteractive
apt-get install -y -qq python3-certbot-dns-cloudflare python3-pip
pip3 install -q cloudflare 2>/dev/null || true
'"
# Create credentials file
log_info "Creating Cloudflare credentials file..."
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash -c '
mkdir -p /etc/cloudflare
cat > /etc/cloudflare/credentials.ini <<EOF
dns_cloudflare_api_token = $API_TOKEN
EOF
chmod 600 /etc/cloudflare/credentials.ini
'"
log_success "Cloudflare credentials configured"
# Obtain certificate using DNS-01
log_info ""
log_info "3. Obtaining certificate using DNS-01 challenge..."
log_warn "This will use Let's Encrypt staging server for testing"
log_info "Press Ctrl+C to cancel, or wait 5 seconds..."
sleep 5
CERTBOT_OUTPUT=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- certbot certonly \
--dns-cloudflare \
--dns-cloudflare-credentials /etc/cloudflare/credentials.ini \
--non-interactive \
--agree-tos \
--staging \
--email admin@$(echo $DOMAIN | cut -d. -f2-) \
-d $DOMAIN 2>&1" || echo "FAILED")
if echo "$CERTBOT_OUTPUT" | grep -q "Successfully received certificate\|Congratulations"; then
log_success "Certificate obtained successfully (STAGING)"
log_warn "To get production certificate, run without --staging flag"
else
log_error "Certificate acquisition failed"
log_info "Output: $CERTBOT_OUTPUT"
exit 1
fi
else
log_info ""
log_info "2. Manual DNS-01 challenge setup..."
log_info "No Cloudflare API token provided. Using manual DNS challenge."
log_info ""
log_info "Run this command and follow the prompts:"
log_info " pct exec $VMID -- certbot certonly --manual --preferred-challenges dns -d $DOMAIN"
log_info ""
log_info "You will need to:"
log_info " 1. Add a TXT record to your DNS"
log_info " 2. Wait for DNS propagation"
log_info " 3. Press Enter to continue"
exit 0
fi
# Update Nginx configuration
log_info ""
log_info "4. Updating Nginx configuration..."
CERT_PATH="/etc/letsencrypt/live/$DOMAIN/fullchain.pem"
KEY_PATH="/etc/letsencrypt/live/$DOMAIN/privkey.pem"
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash" <<UPDATE_NGINX
# Update SSL certificate paths in Nginx config
sed -i "s|ssl_certificate /etc/nginx/ssl/rpc.crt;|ssl_certificate $CERT_PATH;|" /etc/nginx/sites-available/rpc-core
sed -i "s|ssl_certificate_key /etc/nginx/ssl/rpc.key;|ssl_certificate_key $KEY_PATH;|" /etc/nginx/sites-available/rpc-core
# Add domain to server_name if not present
if ! grep -q "$DOMAIN" /etc/nginx/sites-available/rpc-core; then
sed -i "s|server_name.*rpc-core.besu.local|server_name $DOMAIN rpc-core.besu.local|" /etc/nginx/sites-available/rpc-core
fi
# Test configuration
nginx -t
UPDATE_NGINX
if [ $? -eq 0 ]; then
log_success "Nginx configuration updated"
else
log_error "Failed to update Nginx configuration"
exit 1
fi
# Reload Nginx
log_info ""
log_info "5. Reloading Nginx..."
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- systemctl reload nginx"
log_success "Nginx reloaded"
# Set up auto-renewal
log_info ""
log_info "6. Setting up auto-renewal..."
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash -c 'systemctl enable certbot.timer && systemctl start certbot.timer'"
log_success "Auto-renewal enabled"
# Verify certificate
log_info ""
log_info "7. Verifying certificate..."
CERT_INFO=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- openssl x509 -in $CERT_PATH -noout -subject -issuer -dates 2>&1")
log_info "Certificate details:"
echo "$CERT_INFO" | while read line; do
log_info " $line"
done
# Test HTTPS
log_info ""
log_info "8. Testing HTTPS endpoint..."
HTTPS_TEST=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- timeout 5 curl -sk -X POST https://localhost:443 \
-H 'Content-Type: application/json' \
-d '{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}' 2>&1" || echo "FAILED")
if echo "$HTTPS_TEST" | grep -q "result"; then
log_success "HTTPS endpoint is working!"
else
log_warn "HTTPS test inconclusive"
fi
echo ""
log_success "Let's Encrypt certificate setup complete!"
echo ""
log_info "Summary:"
log_info " ✓ Certificate obtained for: $DOMAIN"
log_info " ✓ Nginx configuration updated"
log_info " ✓ Auto-renewal enabled"
echo ""
if echo "$CERTBOT_OUTPUT" | grep -q "staging"; then
log_warn "NOTE: Certificate is from STAGING server (for testing)"
log_info "To get production certificate, run:"
log_info " pct exec $VMID -- certbot certonly --dns-cloudflare \\"
log_info " --dns-cloudflare-credentials /etc/cloudflare/credentials.ini \\"
log_info " --non-interactive --agree-tos \\"
log_info " --email admin@$(echo $DOMAIN | cut -d. -f2-) -d $DOMAIN"
fi


@@ -0,0 +1,241 @@
#!/usr/bin/env bash
# Set up Let's Encrypt certificate for RPC-01 (VMID 2500)
# Usage: ./setup-letsencrypt-rpc-2500.sh [domain1] [domain2] ...
# If no domains provided, will use configured server_name from Nginx config
set -e
VMID=2500
PROXMOX_HOST="192.168.11.10"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_info "Setting up Let's Encrypt certificate for RPC-01 (VMID $VMID)"
echo ""
# Get domains from arguments or from Nginx config
if [ $# -gt 0 ]; then
DOMAINS_ARRAY=("$@")
log_info "Using provided domains: ${DOMAINS_ARRAY[*]}"
else
log_info "Extracting domains from Nginx configuration..."
DOMAINS=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- grep -E 'server_name' /etc/nginx/sites-available/rpc-core | \
grep -v '^#' | sed 's/.*server_name //;s/;.*//' | tr ' ' '\n' | \
grep -v '^$' | grep -v '^besu-rpc-1$' | grep -v '^192\.168\.' | head -5" 2>&1)
if [ -z "$DOMAINS" ]; then
log_warn "No domains found in Nginx config"
log_info "Please provide domains as arguments:"
log_info " ./setup-letsencrypt-rpc-2500.sh rpc-core.besu.local rpc-core.chainid138.local"
exit 1
fi
DOMAINS_ARRAY=($DOMAINS)
log_info "Found domains: ${DOMAINS_ARRAY[*]}"
fi
# Check if certbot is installed
log_info ""
log_info "1. Checking Certbot installation..."
if ! sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- which certbot >/dev/null 2>&1"; then
log_info "Installing Certbot..."
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash -c '
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq
apt-get install -y -qq certbot python3-certbot-nginx
'" || {
log_error "Failed to install Certbot"
exit 1
}
log_success "Certbot installed"
else
log_success "Certbot already installed"
fi
# Check if domains are accessible
log_info ""
log_info "2. Verifying domain accessibility..."
for domain in "${DOMAINS_ARRAY[@]}"; do
log_info "Checking domain: $domain"
# Check if domain resolves
RESOLVED_IP=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- getent hosts $domain 2>&1 | awk '{print \$1}' | head -1" || echo "")
if [ -z "$RESOLVED_IP" ]; then
log_warn "Domain $domain does not resolve. DNS may need to be configured."
log_info "Let's Encrypt will use HTTP-01 challenge (requires port 80 accessible)"
else
log_info "Domain $domain resolves to: $RESOLVED_IP"
fi
done
# Check if port 80 is accessible (required for HTTP-01 challenge)
log_info ""
log_info "3. Checking port 80 accessibility..."
if sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- ss -tln | grep -q ':80 '"; then
log_success "Port 80 is listening (required for HTTP-01 challenge)"
else
log_error "Port 80 is not listening. Let's Encrypt HTTP-01 challenge requires port 80."
log_info "Options:"
log_info " 1. Ensure port 80 is accessible from internet"
log_info " 2. Use DNS-01 challenge instead (requires DNS API access)"
exit 1
fi
# Obtain certificate
log_info ""
log_info "4. Obtaining Let's Encrypt certificate..."
log_info "Domains: ${DOMAINS_ARRAY[*]}"
log_warn "This will use Let's Encrypt staging server for testing first"
log_info "Press Ctrl+C to cancel, or wait 5 seconds to continue..."
sleep 5
# Use staging first for testing
STAGING_FLAG="--staging"
log_info "Using Let's Encrypt staging server (for testing)"
# Build certbot command
CERTBOT_CMD="certbot --nginx $STAGING_FLAG --non-interactive --agree-tos --email admin@$(echo ${DOMAINS_ARRAY[0]} | cut -d. -f2-)"
for domain in "${DOMAINS_ARRAY[@]}"; do
CERTBOT_CMD="$CERTBOT_CMD -d $domain"
done
log_info "Running: $CERTBOT_CMD"
# Run certbot
CERTBOT_OUTPUT=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash -c '$CERTBOT_CMD' 2>&1" || echo "FAILED")
if echo "$CERTBOT_OUTPUT" | grep -q "Congratulations\|Successfully"; then
log_success "Certificate obtained successfully!"
# If using staging, offer to get production certificate
if echo "$CERTBOT_CMD" | grep -q "staging"; then
log_info ""
log_warn "Certificate obtained from STAGING server (for testing)"
log_info "To get production certificate, run:"
log_info " pct exec $VMID -- certbot --nginx --non-interactive --agree-tos --email admin@$(echo ${DOMAINS_ARRAY[0]} | cut -d. -f2-)$(printf ' -d %s' "${DOMAINS_ARRAY[@]}")"
fi
else
log_error "Certificate acquisition failed"
log_info "Output: $CERTBOT_OUTPUT"
log_info ""
log_info "Common issues:"
log_info " 1. Domain not accessible from internet (DNS not configured)"
log_info " 2. Port 80 not accessible from internet (firewall/NAT issue)"
log_info " 3. Domain already has certificate (use --force-renewal)"
log_info ""
log_info "For DNS-01 challenge (if HTTP-01 fails):"
log_info " pct exec $VMID -- certbot certonly --manual --preferred-challenges dns -d ${DOMAINS_ARRAY[0]}"
exit 1
fi
# Verify certificate
log_info ""
log_info "5. Verifying certificate..."
CERT_PATH=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- certbot certificates 2>&1 | grep -A1 '${DOMAINS_ARRAY[0]}' | grep 'Certificate Path' | awk '{print \$3}'" || echo "")
if [ -n "$CERT_PATH" ]; then
log_success "Certificate found at: $CERT_PATH"
# Check certificate details
CERT_INFO=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- openssl x509 -in $CERT_PATH -noout -subject -issuer -dates 2>&1")
log_info "Certificate details:"
echo "$CERT_INFO" | while read line; do
log_info " $line"
done
else
log_warn "Could not verify certificate path"
fi
# Test Nginx configuration
log_info ""
log_info "6. Testing Nginx configuration..."
if sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- nginx -t 2>&1 | grep -q 'successful'"; then
log_success "Nginx configuration is valid"
# Reload Nginx
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- systemctl reload nginx"
log_success "Nginx reloaded"
else
log_error "Nginx configuration test failed"
exit 1
fi
# Test HTTPS endpoint
log_info ""
log_info "7. Testing HTTPS endpoint..."
HTTPS_TEST=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- timeout 5 curl -sk -X POST https://localhost:443 \
-H 'Content-Type: application/json' \
-d '{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}' 2>&1" || echo "FAILED")
if echo "$HTTPS_TEST" | grep -q "result"; then
log_success "HTTPS endpoint is working!"
else
log_warn "HTTPS test inconclusive (may need external access)"
fi
# Set up auto-renewal
log_info ""
log_info "8. Setting up auto-renewal..."
if sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- systemctl is-enabled certbot.timer >/dev/null 2>&1"; then
log_success "Certbot timer already enabled"
else
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash -c 'systemctl enable certbot.timer && systemctl start certbot.timer'"
log_success "Certbot timer enabled"
fi
# Test renewal
log_info ""
log_info "9. Testing certificate renewal..."
RENEWAL_TEST=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- certbot renew --dry-run 2>&1 | tail -5")
if echo "$RENEWAL_TEST" | grep -q "The dry run was successful\|Congratulations"; then
log_success "Certificate renewal test passed"
else
log_warn "Renewal test had issues (may be normal for staging cert)"
log_info "Output: $RENEWAL_TEST"
fi
echo ""
log_success "Let's Encrypt certificate setup complete!"
echo ""
log_info "Summary:"
log_info " ✓ Certbot installed"
log_info " ✓ Certificate obtained for: ${DOMAINS_ARRAY[*]}"
log_info " ✓ Nginx configuration updated"
log_info " ✓ Auto-renewal enabled"
echo ""
log_info "Certificate location:"
log_info " $(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} "pct exec $VMID -- certbot certificates 2>&1 | grep -A2 '${DOMAINS_ARRAY[0]}' | head -5")"
echo ""
if echo "$CERTBOT_CMD" | grep -q "staging"; then
log_warn "NOTE: Certificate is from STAGING server (for testing)"
log_info "To get production certificate, run:"
log_info " pct exec $VMID -- certbot --nginx --non-interactive --agree-tos --email admin@$(echo ${DOMAINS_ARRAY[0]} | cut -d. -f2-)$(printf ' -d %s' "${DOMAINS_ARRAY[@]}")"
fi


@@ -0,0 +1,268 @@
#!/usr/bin/env bash
# Complete Let's Encrypt setup using Cloudflare Tunnel
# Falls back to public IP, then DNS-01 if needed
set -e
VMID=2500
DOMAIN="rpc-core.d-bis.org"
NAME="rpc-core"
IP="192.168.11.250"
PUBLIC_IP="45.49.67.248"
PROXMOX_HOST="192.168.11.10"
TUNNEL_VMID=102
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_info "Let's Encrypt Setup - Cloudflare Tunnel Method"
log_info "Domain: $DOMAIN"
echo ""
# Load .env
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if [ -f "$SCRIPT_DIR/../.env" ]; then
source "$SCRIPT_DIR/../.env" 2>/dev/null
fi
# Step 1: Configure Cloudflare Tunnel
log_info "Step 1: Configuring Cloudflare Tunnel route..."
# Get tunnel ID from token
if [ -n "$CLOUDFLARE_TUNNEL_TOKEN" ]; then
TUNNEL_ID=$(echo "$CLOUDFLARE_TUNNEL_TOKEN" | base64 -d 2>/dev/null | python3 -c "import sys, json; data=json.load(sys.stdin); print(data.get('t', ''))" 2>/dev/null || echo "")
if [ -z "$TUNNEL_ID" ]; then
# Fallback: use the account ID (not guaranteed to match the actual tunnel ID)
TUNNEL_ID="${CLOUDFLARE_ACCOUNT_ID:-52ad57a71671c5fc009edf0744658196}"
fi
log_info "Tunnel ID: $TUNNEL_ID"
else
TUNNEL_ID="${CLOUDFLARE_ACCOUNT_ID:-52ad57a71671c5fc009edf0744658196}"
log_warn "Using account ID as tunnel ID: $TUNNEL_ID"
fi
# Check if tunnel is running
if sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $TUNNEL_VMID -- systemctl is-active cloudflared >/dev/null 2>&1"; then
log_success "Cloudflare Tunnel is running"
else
log_warn "Cloudflare Tunnel may not be running on VMID $TUNNEL_VMID"
fi
# Configure tunnel route via API
if [ -n "$CLOUDFLARE_API_KEY" ] && [ -n "$CLOUDFLARE_EMAIL" ] && [ -n "$CLOUDFLARE_ACCOUNT_ID" ] && [ -n "$TUNNEL_ID" ]; then
log_info "Configuring tunnel route via API..."
# Get current tunnel config
TUNNEL_CONFIG=$(curl -s -X GET "https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/cfd_tunnel/$TUNNEL_ID/configurations" \
-H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
-H "X-Auth-Key: $CLOUDFLARE_API_KEY" \
-H "Content-Type: application/json")
# Check if route already exists in config
if echo "$TUNNEL_CONFIG" | grep -q "$DOMAIN"; then
log_info "Tunnel route already exists for $DOMAIN"
else
log_info "Adding tunnel route: $DOMAIN → http://$IP:443"
log_warn "Tunnel route configuration via API is complex - using manual method"
log_info "Please configure in Cloudflare Dashboard:"
log_info " Zero Trust → Networks → Tunnels → Your Tunnel → Configure"
log_info " Add Public Hostname: rpc-core.d-bis.org → http://$IP:443"
log_info ""
log_info "Continuing with DNS setup (tunnel route can be added manually)..."
fi
else
log_warn "Missing credentials for tunnel API configuration"
fi
# Step 2: Update DNS to use Tunnel (CNAME)
log_info ""
log_info "Step 2: Updating DNS to use Cloudflare Tunnel..."
# Delete existing A record if exists
if [ -f "$SCRIPT_DIR/create-dns-record-rpc-core.sh" ]; then
log_info "Checking existing DNS record..."
EXISTING_RECORD=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones/$CLOUDFLARE_ZONE_ID/dns_records?name=$DOMAIN" \
-H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
-H "X-Auth-Key: $CLOUDFLARE_API_KEY" \
-H "Content-Type: application/json")
if echo "$EXISTING_RECORD" | grep -q '"id"'; then
RECORD_ID=$(echo "$EXISTING_RECORD" | grep -o '"id":"[^"]*' | head -1 | cut -d'"' -f4)
log_info "Deleting existing A record..."
curl -s -X DELETE "https://api.cloudflare.com/client/v4/zones/$CLOUDFLARE_ZONE_ID/dns_records/$RECORD_ID" \
-H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
-H "X-Auth-Key: $CLOUDFLARE_API_KEY" \
-H "Content-Type: application/json" >/dev/null
fi
fi
# Create CNAME to tunnel
if [ -z "$TUNNEL_ID" ]; then
log_error "Tunnel ID not found. Cannot create CNAME."
exit 1
fi
TUNNEL_TARGET="${TUNNEL_ID}.cfargotunnel.com"
log_info "Creating CNAME: $NAME → $TUNNEL_TARGET"
CNAME_RESPONSE=$(curl -s -X POST "https://api.cloudflare.com/client/v4/zones/$CLOUDFLARE_ZONE_ID/dns_records" \
-H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
-H "X-Auth-Key: $CLOUDFLARE_API_KEY" \
-H "Content-Type: application/json" \
--data "{
\"type\": \"CNAME\",
\"name\": \"$NAME\",
\"content\": \"$TUNNEL_TARGET\",
\"ttl\": 1,
\"proxied\": true
}")
if echo "$CNAME_RESPONSE" | grep -q '"success":true'; then
log_success "CNAME record created (proxied)"
else
log_error "Failed to create CNAME record"
log_info "Response: $CNAME_RESPONSE"
exit 1
fi
# Step 3: Wait for tunnel route to be active
log_info ""
log_info "Step 3: Waiting for tunnel route to be active (10 seconds)..."
sleep 10
# Step 4: Wait for DNS propagation
log_info ""
log_info "Step 4: Waiting for DNS propagation (30 seconds)..."
sleep 30
# Step 5: Try Let's Encrypt HTTP-01 through tunnel
log_info ""
log_info "Step 5: Attempting Let's Encrypt HTTP-01 challenge through tunnel..."
CERTBOT_OUTPUT=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- certbot --nginx \
--non-interactive \
--agree-tos \
--email admin@d-bis.org \
-d $DOMAIN \
--redirect 2>&1" || echo "FAILED")
if echo "$CERTBOT_OUTPUT" | grep -q "Successfully received certificate\|Congratulations"; then
log_success "Certificate obtained via HTTP-01 through tunnel!"
exit 0
else
log_warn "HTTP-01 through tunnel failed"
log_info "Trying fallback: Public IP method..."
fi
# Fallback: Try with public IP
log_info ""
log_info "Fallback: Trying with public IP $PUBLIC_IP..."
# Update DNS to point to public IP
log_info "Updating DNS to point to public IP..."
# The original A record was deleted above; reuse the CNAME record just created
# (a PUT overwrites it in place as an unproxied A record)
RECORD_ID=$(echo "$CNAME_RESPONSE" | grep -o '"id":"[^"]*' | head -1 | cut -d'"' -f4)
PUBLIC_IP_RESPONSE=$(curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$CLOUDFLARE_ZONE_ID/dns_records/$RECORD_ID" \
-H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
-H "X-Auth-Key: $CLOUDFLARE_API_KEY" \
-H "Content-Type: application/json" \
--data "{
\"type\": \"A\",
\"name\": \"$NAME\",
\"content\": \"$PUBLIC_IP\",
\"ttl\": 1,
\"proxied\": false
}")
if echo "$PUBLIC_IP_RESPONSE" | grep -q '"success":true'; then
log_success "DNS updated to public IP"
sleep 30
CERTBOT_OUTPUT=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- certbot --nginx \
--non-interactive \
--agree-tos \
--email admin@d-bis.org \
-d $DOMAIN \
--redirect 2>&1" || echo "FAILED")
if echo "$CERTBOT_OUTPUT" | grep -q "Successfully received certificate\|Congratulations"; then
log_success "Certificate obtained via HTTP-01 with public IP!"
exit 0
else
log_warn "HTTP-01 with public IP failed"
fi
else
log_warn "Failed to update DNS to public IP"
fi
# Final fallback: DNS-01 challenge
log_info ""
log_info "Final fallback: Using DNS-01 challenge..."
# Install DNS plugin
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- apt-get install -y -qq python3-certbot-dns-cloudflare" || {
log_error "Failed to install certbot-dns-cloudflare"
exit 1
}
# Create credentials file
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash -c '
mkdir -p /etc/cloudflare
cat > /etc/cloudflare/credentials.ini <<EOF
dns_cloudflare_api_token = ${CLOUDFLARE_API_TOKEN:-}
dns_cloudflare_email = $CLOUDFLARE_EMAIL
dns_cloudflare_api_key = $CLOUDFLARE_API_KEY
EOF
chmod 600 /etc/cloudflare/credentials.ini
'"
# Try DNS-01
log_info "Obtaining certificate via DNS-01 challenge..."
CERTBOT_OUTPUT=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- certbot certonly --dns-cloudflare \
--dns-cloudflare-credentials /etc/cloudflare/credentials.ini \
--non-interactive \
--agree-tos \
--email admin@d-bis.org \
-d $DOMAIN 2>&1" || echo "FAILED")
if echo "$CERTBOT_OUTPUT" | grep -q "Successfully received certificate\|Congratulations"; then
log_success "Certificate obtained via DNS-01 challenge!"
# Update Nginx manually
log_info "Updating Nginx configuration..."
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash -c '
sed -i \"s|ssl_certificate /etc/nginx/ssl/rpc.crt;|ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;|\" /etc/nginx/sites-available/rpc-core
sed -i \"s|ssl_certificate_key /etc/nginx/ssl/rpc.key;|ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;|\" /etc/nginx/sites-available/rpc-core
nginx -t && systemctl reload nginx
'"
log_success "Nginx updated with Let's Encrypt certificate!"
exit 0
else
log_error "All methods failed"
log_info "Output: $CERTBOT_OUTPUT"
exit 1
fi


@@ -0,0 +1,193 @@
#!/usr/bin/env bash
# Complete Let's Encrypt setup with automated DNS record creation
# Usage: ./setup-letsencrypt-with-dns.sh [API_TOKEN]
set -e
VMID=2500
DOMAIN="rpc-core.d-bis.org"
NAME="rpc-core"
IP="192.168.11.250"
PROXMOX_HOST="192.168.11.10"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_info "Complete Let's Encrypt Setup with Automated DNS"
log_info "Domain: $DOMAIN"
echo ""
# Get API token
if [ -n "$1" ]; then
API_TOKEN="$1"
log_info "Using provided API token"
elif [ -f .env ]; then
source .env 2>/dev/null
if [ -n "$CLOUDFLARE_API_TOKEN" ]; then
API_TOKEN="$CLOUDFLARE_API_TOKEN"
log_info "Using API token from .env file"
else
log_error "CLOUDFLARE_API_TOKEN not found in .env file"
log_info "Please provide API token: $0 <API_TOKEN>"
exit 1
fi
else
log_error "No API token provided and no .env file found"
log_info "Usage: $0 [API_TOKEN]"
log_info ""
log_info "To get API token:"
log_info " 1. Go to https://dash.cloudflare.com/profile/api-tokens"
log_info " 2. Create Token with: Zone → DNS:Edit → d-bis.org"
exit 1
fi
# Step 1: Create DNS record
log_info ""
log_info "Step 1: Creating DNS record..."
if [ -f scripts/create-dns-record-rpc-core.sh ]; then
./scripts/create-dns-record-rpc-core.sh "$API_TOKEN" 2>&1
DNS_RESULT=$?
else
log_error "create-dns-record-rpc-core.sh not found"
exit 1
fi
if [ $DNS_RESULT -ne 0 ]; then
log_error "Failed to create DNS record"
exit 1
fi
log_success "DNS record created"
# Step 2: Wait for DNS propagation
log_info ""
log_info "Step 2: Waiting for DNS propagation (30 seconds)..."
sleep 30
# Step 3: Verify DNS resolution
log_info ""
log_info "Step 3: Verifying DNS resolution..."
DNS_CHECK=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- getent hosts $DOMAIN 2>&1" || echo "NOT_RESOLVED")
if echo "$DNS_CHECK" | grep -q "NOT_RESOLVED\|not found"; then
log_warn "DNS not yet resolved. Waiting another 30 seconds..."
sleep 30
DNS_CHECK=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- getent hosts $DOMAIN 2>&1" || echo "NOT_RESOLVED")
fi
if echo "$DNS_CHECK" | grep -q "NOT_RESOLVED"; then
log_warn "DNS not resolved yet. It may still be propagating. Continuing anyway..."
elif echo "$DNS_CHECK" | grep -q "$IP"; then
log_success "DNS resolved to expected IP: $IP"
else
log_info "DNS check: $DNS_CHECK"
log_warn "Domain resolved to an unexpected address (possibly proxied). Continuing anyway..."
fi
# Step 4: Obtain Let's Encrypt certificate
log_info ""
log_info "Step 4: Obtaining Let's Encrypt certificate..."
CERTBOT_OUTPUT=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- certbot --nginx \
--non-interactive \
--agree-tos \
--email admin@d-bis.org \
-d $DOMAIN \
--redirect 2>&1" || echo "FAILED")
if echo "$CERTBOT_OUTPUT" | grep -q "Successfully received certificate\|Congratulations"; then
log_success "Certificate obtained successfully!"
elif echo "$CERTBOT_OUTPUT" | grep -q "NXDOMAIN\|DNS problem"; then
log_warn "DNS may still be propagating. Waiting 60 more seconds..."
sleep 60
log_info "Retrying certificate acquisition..."
CERTBOT_OUTPUT=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- certbot --nginx \
--non-interactive \
--agree-tos \
--email admin@d-bis.org \
-d $DOMAIN \
--redirect 2>&1" || echo "FAILED")
if echo "$CERTBOT_OUTPUT" | grep -q "Successfully received certificate\|Congratulations"; then
log_success "Certificate obtained successfully!"
else
log_error "Certificate acquisition failed"
log_info "Output: $CERTBOT_OUTPUT"
log_info ""
log_info "Possible issues:"
log_info " 1. DNS still propagating (wait 5-10 minutes and retry)"
log_info " 2. Port 80 not accessible from internet"
log_info " 3. Firewall blocking Let's Encrypt validation"
exit 1
fi
else
log_error "Certificate acquisition failed"
log_info "Output: $CERTBOT_OUTPUT"
exit 1
fi
# Step 5: Verify certificate
log_info ""
log_info "Step 5: Verifying certificate..."
CERT_INFO=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- certbot certificates 2>&1 | grep -A5 '$DOMAIN'" || echo "")
if [ -n "$CERT_INFO" ]; then
log_success "Certificate verified"
echo "$CERT_INFO" | while read line; do
log_info " $line"
done
else
log_warn "Could not verify certificate details"
fi
# Step 6: Test HTTPS
log_info ""
log_info "Step 6: Testing HTTPS endpoint..."
HTTPS_TEST=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- timeout 5 curl -sk -X POST https://localhost:443 \
-H 'Content-Type: application/json' \
-d '{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}' 2>&1" || echo "FAILED")
if echo "$HTTPS_TEST" | grep -q "result"; then
log_success "HTTPS endpoint is working!"
log_info "Response: $HTTPS_TEST"
else
log_warn "HTTPS test inconclusive"
fi
# Step 7: Verify auto-renewal
log_info ""
log_info "Step 7: Verifying auto-renewal..."
if sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- systemctl is-enabled certbot.timer >/dev/null 2>&1"; then
log_success "Auto-renewal is enabled"
else
log_warn "Auto-renewal may not be enabled"
fi
echo ""
log_success "Let's Encrypt setup complete!"
echo ""
log_info "Summary:"
log_info " ✓ DNS record created: $DOMAIN → $IP"
log_info " ✓ Certificate obtained: $DOMAIN"
log_info " ✓ Nginx configured with Let's Encrypt certificate"
log_info " ✓ Auto-renewal enabled"
echo ""
log_info "Certificate location:"
log_info " /etc/letsencrypt/live/$DOMAIN/"
echo ""
log_info "Test HTTPS:"
log_info " curl -X POST https://$DOMAIN -H 'Content-Type: application/json' -d '{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}'"


@@ -0,0 +1,230 @@
#!/usr/bin/env bash
# Set up basic monitoring for Nginx on VMID 2500
# - Nginx status page
# - Log rotation
# - Basic health check script
set -e
VMID=2500
PROXMOX_HOST="192.168.11.10"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_info "Setting up monitoring for Nginx on VMID $VMID"
echo ""
# Enable Nginx status page
log_info "1. Enabling Nginx status page..."
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash" <<'STATUS_EOF'
# Add status location to nginx.conf if not present
if ! grep -q "stub_status" /etc/nginx/nginx.conf; then
# Add status location in http block
cat >> /etc/nginx/nginx.conf <<'NGINX_STATUS'
# Nginx status page (internal only)
server {
listen 127.0.0.1:8080;
server_name localhost;
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
deny all;
}
}
NGINX_STATUS
fi
# Test configuration
nginx -t
STATUS_EOF
if [ $? -eq 0 ]; then
log_success "Nginx status page enabled on port 8080"
else
log_warn "Status page configuration may need manual adjustment"
fi
# Configure log rotation
log_info ""
log_info "2. Configuring log rotation..."
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash" <<'LOGROTATE_EOF'
cat > /etc/logrotate.d/nginx-rpc <<'LOGROTATE_CONFIG'
/var/log/nginx/rpc-core-*.log {
daily
missingok
rotate 14
compress
delaycompress
notifempty
create 0640 www-data adm
sharedscripts
postrotate
[ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
endscript
}
LOGROTATE_CONFIG
# Test logrotate configuration
logrotate -d /etc/logrotate.d/nginx-rpc
LOGROTATE_EOF
if [ $? -eq 0 ]; then
log_success "Log rotation configured (14 days retention)"
else
log_warn "Log rotation may need manual configuration"
fi
# Create health check script
log_info ""
log_info "3. Creating health check script..."
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash" <<'HEALTH_EOF'
cat > /usr/local/bin/nginx-health-check.sh <<'HEALTH_SCRIPT'
#!/bin/bash
# Nginx health check script
EXIT_CODE=0
# Check Nginx service
if ! systemctl is-active --quiet nginx; then
echo "ERROR: Nginx service is not active"
EXIT_CODE=1
fi
# Check Nginx status page
if ! curl -sf http://127.0.0.1:8080/nginx_status >/dev/null 2>&1; then
echo "WARNING: Nginx status page not accessible"
fi
# Check RPC endpoint
RPC_RESPONSE=$(curl -k -s -X POST https://localhost:443 \
-H 'Content-Type: application/json' \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' 2>&1)
if echo "$RPC_RESPONSE" | grep -q "result"; then
echo "OK: RPC endpoint responding"
else
echo "ERROR: RPC endpoint not responding"
EXIT_CODE=1
fi
# Check ports
for port in 80 443 8443; do
if ! ss -tln | grep -q ":$port "; then
echo "ERROR: Port $port not listening"
EXIT_CODE=1
fi
done
exit $EXIT_CODE
HEALTH_SCRIPT
chmod +x /usr/local/bin/nginx-health-check.sh
HEALTH_EOF
if [ $? -eq 0 ]; then
log_success "Health check script created at /usr/local/bin/nginx-health-check.sh"
else
log_warn "Health check script creation failed"
fi
# Create systemd service for health monitoring (optional)
log_info ""
log_info "4. Creating monitoring service..."
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- bash" <<'SERVICE_EOF'
cat > /etc/systemd/system/nginx-health-monitor.service <<'SERVICE_CONFIG'
[Unit]
Description=Nginx Health Monitor
After=nginx.service
Requires=nginx.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/nginx-health-check.sh
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
SERVICE_CONFIG
# Create timer for periodic health checks
cat > /etc/systemd/system/nginx-health-monitor.timer <<'TIMER_CONFIG'
[Unit]
Description=Run Nginx Health Check Every 5 Minutes
Requires=nginx-health-monitor.service
[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
Unit=nginx-health-monitor.service
[Install]
WantedBy=timers.target
TIMER_CONFIG
systemctl daemon-reload
systemctl enable nginx-health-monitor.timer
systemctl start nginx-health-monitor.timer
SERVICE_EOF
if [ $? -eq 0 ]; then
log_success "Health monitoring service enabled (checks every 5 minutes)"
else
log_warn "Health monitoring service may need manual setup"
fi
# Reload Nginx
log_info ""
log_info "5. Reloading Nginx..."
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- systemctl reload nginx"
if [ $? -eq 0 ]; then
log_success "Nginx reloaded"
else
log_error "Failed to reload Nginx"
exit 1
fi
# Test health check
log_info ""
log_info "6. Testing health check..."
HEALTH_RESULT=$(sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no root@${PROXMOX_HOST} \
"pct exec $VMID -- /usr/local/bin/nginx-health-check.sh 2>&1")
echo "$HEALTH_RESULT"
if echo "$HEALTH_RESULT" | grep -q "OK"; then
log_success "Health check passed"
else
log_warn "Health check found issues (review output above)"
fi
echo ""
log_success "Monitoring setup complete!"
echo ""
log_info "Monitoring Summary:"
log_info " ✓ Nginx status page: http://127.0.0.1:8080/nginx_status"
log_info " ✓ Log rotation: 14 days retention, daily rotation"
log_info " ✓ Health check script: /usr/local/bin/nginx-health-check.sh"
log_info " ✓ Health monitoring service: Runs every 5 minutes"
echo ""
log_info "View status: pct exec $VMID -- curl http://127.0.0.1:8080/nginx_status"
log_info "Run health check: pct exec $VMID -- /usr/local/bin/nginx-health-check.sh"

scripts/setup-ssh-keys.md Normal file
@@ -0,0 +1,39 @@
# SSH Key Authentication Setup Guide
## On Local Machine
1. Generate an SSH key pair (if one does not already exist):
```bash
ssh-keygen -t ed25519 -C "proxmox-deployment" -f ~/.ssh/id_ed25519_proxmox
```
2. Copy public key to Proxmox host:
```bash
ssh-copy-id -i ~/.ssh/id_ed25519_proxmox.pub root@192.168.11.10
```
## On Proxmox Host
1. Edit SSH config:
```bash
nano /etc/ssh/sshd_config
```
2. Set these options:
```
PasswordAuthentication no
PubkeyAuthentication yes
```
3. Restart SSH service:
```bash
systemctl restart sshd
```
## Test
```bash
ssh -i ~/.ssh/id_ed25519_proxmox root@192.168.11.10
```
**Note**: Keep password authentication enabled until SSH keys are verified working!
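
## Optional: Host Alias

With key access verified, a `~/.ssh/config` entry saves typing the `-i` flag on every connection (the `proxmox` alias below is just an example name):

```
Host proxmox
    HostName 192.168.11.10
    User root
    IdentityFile ~/.ssh/id_ed25519_proxmox
    IdentitiesOnly yes
```

After that, `ssh proxmox` connects directly.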

scripts/setup.sh Executable file
@@ -0,0 +1,91 @@
#!/bin/bash
# Setup script for Proxmox MCP Server and workspace
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ENV_FILE="$HOME/.env"
ENV_EXAMPLE="$SCRIPT_DIR/.env.example"
CLAUDE_CONFIG_DIR="$HOME/.config/Claude"
CLAUDE_CONFIG="$CLAUDE_CONFIG_DIR/claude_desktop_config.json"
CLAUDE_CONFIG_EXAMPLE="$SCRIPT_DIR/claude_desktop_config.json.example"
echo "🚀 Proxmox MCP Server Setup"
echo "============================"
echo ""
# Step 1: Check if .env exists
if [ ! -f "$ENV_FILE" ]; then
echo "📝 Creating .env file from template..."
if [ -f "$ENV_EXAMPLE" ]; then
cp "$ENV_EXAMPLE" "$ENV_FILE"
echo "✅ Created $ENV_FILE"
echo "⚠️ Please edit $ENV_FILE and add your Proxmox credentials!"
else
# Create .env file directly if template doesn't exist
echo "📝 Creating .env file directly..."
cat > "$ENV_FILE" << 'EOF'
# Proxmox MCP Server Configuration
# Fill in your actual values below
# Proxmox Configuration (REQUIRED)
PROXMOX_HOST=your-proxmox-ip-or-hostname
PROXMOX_USER=root@pam
PROXMOX_TOKEN_NAME=your-token-name
PROXMOX_TOKEN_VALUE=your-token-secret
# Security Settings (REQUIRED)
# ⚠️ WARNING: Setting PROXMOX_ALLOW_ELEVATED=true enables DESTRUCTIVE operations
PROXMOX_ALLOW_ELEVATED=false
# Optional Settings
# PROXMOX_PORT=8006 # Defaults to 8006 if not specified
EOF
echo "✅ Created $ENV_FILE"
echo "⚠️ Please edit $ENV_FILE and add your Proxmox credentials!"
fi
else
echo "✅ .env file already exists at $ENV_FILE"
fi
# Step 2: Setup Claude Desktop config
echo ""
echo "📝 Setting up Claude Desktop configuration..."
if [ ! -d "$CLAUDE_CONFIG_DIR" ]; then
mkdir -p "$CLAUDE_CONFIG_DIR"
echo "✅ Created Claude config directory: $CLAUDE_CONFIG_DIR"
fi
if [ ! -f "$CLAUDE_CONFIG" ]; then
echo "📝 Creating Claude Desktop config from template..."
if [ -f "$CLAUDE_CONFIG_EXAMPLE" ]; then
# Replace the path in the example with the actual path
sed "s|/home/intlc/projects/proxmox|$SCRIPT_DIR|g" "$CLAUDE_CONFIG_EXAMPLE" > "$CLAUDE_CONFIG"
echo "✅ Created $CLAUDE_CONFIG"
echo "⚠️ Please verify the configuration and restart Claude Desktop!"
else
echo "❌ Template file not found: $CLAUDE_CONFIG_EXAMPLE"
exit 1
fi
else
echo "✅ Claude Desktop config already exists at $CLAUDE_CONFIG"
echo "⚠️ Skipping creation to avoid overwriting existing config"
echo " If you want to update it, manually edit: $CLAUDE_CONFIG"
fi
# Step 3: Install dependencies
echo ""
echo "📦 Installing workspace dependencies..."
cd "$SCRIPT_DIR"
pnpm install
echo ""
echo "✅ Setup complete!"
echo ""
echo "Next steps:"
echo "1. Edit $ENV_FILE with your Proxmox credentials"
echo "2. Verify Claude Desktop config at $CLAUDE_CONFIG"
echo "3. Restart Claude Desktop"
echo "4. Test the MCP server with: pnpm test:basic"

scripts/ssh-proxmox.sh Executable file
@@ -0,0 +1,19 @@
#!/usr/bin/env bash
# SSH to Proxmox host with locale warnings suppressed
# Usage: ./scripts/ssh-proxmox.sh [command]
HOST="${PROXMOX_HOST:-192.168.11.10}"
USER="${PROXMOX_USER:-root}"
# Suppress locale warnings
export LC_ALL=C
export LANG=C
if [[ $# -eq 0 ]]; then
# Interactive SSH: force a TTY so the remote shell is usable
ssh -t "$USER@$HOST" "export LC_ALL=C LANG=C; exec bash"
else
# Execute command; $* joins all arguments into one remote command string
# ("$@" inside a double-quoted string would splice the remote command apart)
ssh "$USER@$HOST" "export LC_ALL=C LANG=C; $*"
fi

@@ -0,0 +1,30 @@
#!/usr/bin/env bash
# Start the Cloudflare credentials setup web interface
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
VENV_DIR="$PROJECT_DIR/venv"
cd "$PROJECT_DIR"
# Create virtual environment if it doesn't exist
if [[ ! -d "$VENV_DIR" ]]; then
echo "Creating virtual environment..."
python3 -m venv "$VENV_DIR"
fi
# Activate virtual environment
source "$VENV_DIR/bin/activate"
# Install dependencies if needed
if ! python -c "import flask" 2>/dev/null; then
echo "Installing Flask and requests..."
pip install flask requests --quiet
fi
# Start the web server
echo "Starting Cloudflare Setup Web Interface..."
python "$SCRIPT_DIR/cloudflare-setup-web.py"

scripts/sync-to-ml110.sh Executable file
@@ -0,0 +1,147 @@
#!/usr/bin/env bash
# Sync verified working files to ml110 (192.168.11.10)
# This script deletes old files and copies only verified working files
set -euo pipefail
REMOTE_HOST="192.168.11.10"
REMOTE_USER="root"
REMOTE_BASE="/opt"
LOCAL_PROJECT_ROOT="/home/intlc/projects/proxmox"
SOURCE_PROJECT="$LOCAL_PROJECT_ROOT/smom-dbis-138-proxmox"
SOURCE_SMOM="/home/intlc/projects/smom-dbis-138"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
log_info() { echo -e "${GREEN}[INFO]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Check if sshpass is available
if ! command -v sshpass &> /dev/null; then
log_error "sshpass is required. Install with: apt-get install sshpass"
exit 1
fi
# Check if rsync is available
if ! command -v rsync &> /dev/null; then
log_error "rsync is required. Install with: apt-get install rsync"
exit 1
fi
log_info "=== Syncing Verified Working Files to ml110 ==="
log_info "Remote: ${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_BASE}"
log_info ""
# Test SSH connection
log_info "Testing SSH connection..."
if ! sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
"${REMOTE_USER}@${REMOTE_HOST}" "echo 'Connection successful'" 2>/dev/null; then
log_error "Cannot connect to ${REMOTE_HOST}. Check network and credentials."
exit 1
fi
log_info "✓ SSH connection successful"
echo ""
# Step 1: Backup existing files (optional but recommended)
log_info "=== Step 1: Creating backup of existing files ==="
BACKUP_DIR="/opt/backup-$(date +%Y%m%d-%H%M%S)"
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no "${REMOTE_USER}@${REMOTE_HOST}" \
"mkdir -p ${BACKUP_DIR} && \
(test -d ${REMOTE_BASE}/smom-dbis-138 && cp -r ${REMOTE_BASE}/smom-dbis-138 ${BACKUP_DIR}/ || true) && \
(test -d ${REMOTE_BASE}/smom-dbis-138-proxmox && cp -r ${REMOTE_BASE}/smom-dbis-138-proxmox ${BACKUP_DIR}/ || true) && \
echo 'Backup created at ${BACKUP_DIR}'"
log_info "✓ Backup created at ${BACKUP_DIR}"
echo ""
# Step 2: Delete old files
log_info "=== Step 2: Removing old files ==="
log_warn "This will delete all existing files in /opt/smom-dbis-138-proxmox (/opt/smom-dbis-138 is preserved; a backup was created in Step 1)"
# Auto-confirm for non-interactive execution
log_info "Auto-confirming deletion (non-interactive mode)"
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no "${REMOTE_USER}@${REMOTE_HOST}" \
"rm -rf ${REMOTE_BASE}/smom-dbis-138-proxmox && \
echo '✓ Removed old smom-dbis-138-proxmox directory'"
log_info "✓ Old files removed"
echo ""
# Step 3: Copy verified working files
log_info "=== Step 3: Copying verified working files ==="
# Copy smom-dbis-138-proxmox (entire directory - all files are verified)
log_info "Copying smom-dbis-138-proxmox directory..."
rsync -avz --delete \
--exclude='.git' \
--exclude='*.log' \
--exclude='logs/*' \
--exclude='node_modules' \
--exclude='__pycache__' \
-e "sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no" \
"${SOURCE_PROJECT}/" \
"${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_BASE}/smom-dbis-138-proxmox/"
log_info "✓ smom-dbis-138-proxmox copied"
echo ""
# Copy smom-dbis-138 (only config and keys, preserve existing keys)
log_info "Copying smom-dbis-138 (config and keys only)..."
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no "${REMOTE_USER}@${REMOTE_HOST}" \
"mkdir -p ${REMOTE_BASE}/smom-dbis-138/config ${REMOTE_BASE}/smom-dbis-138/keys/validators ${REMOTE_BASE}/smom-dbis-138/keys/oracle"
# Copy config files
rsync -avz \
-e "sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no" \
"${SOURCE_SMOM}/config/" \
"${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_BASE}/smom-dbis-138/config/"
# Copy keys (preserve existing validator keys, but update if newer)
rsync -avz \
-e "sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no" \
"${SOURCE_SMOM}/keys/" \
"${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_BASE}/smom-dbis-138/keys/"
log_info "✓ smom-dbis-138 config and keys copied"
echo ""
# Step 4: Set permissions
log_info "=== Step 4: Setting permissions ==="
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no "${REMOTE_USER}@${REMOTE_HOST}" \
"chmod +x ${REMOTE_BASE}/smom-dbis-138-proxmox/scripts/*.sh && \
chmod +x ${REMOTE_BASE}/smom-dbis-138-proxmox/scripts/*/*.sh && \
chmod +x ${REMOTE_BASE}/smom-dbis-138-proxmox/install/*.sh && \
find ${REMOTE_BASE}/smom-dbis-138/keys -type f \( -name '*.priv' -o -name '*.pem' \) -exec chmod 600 {} + 2>/dev/null || true && \
echo '✓ Permissions set'"
log_info "✓ Permissions configured"
echo ""
# Step 5: Verify copied files
log_info "=== Step 5: Verifying copied files ==="
sshpass -p 'L@kers2010' ssh -o StrictHostKeyChecking=no "${REMOTE_USER}@${REMOTE_HOST}" \
"echo 'smom-dbis-138-proxmox:' && \
ls -la ${REMOTE_BASE}/smom-dbis-138-proxmox/ | head -10 && \
echo '' && \
echo 'smom-dbis-138:' && \
ls -la ${REMOTE_BASE}/smom-dbis-138/ | head -10 && \
echo '' && \
echo 'Validator keys:' && \
ls -d ${REMOTE_BASE}/smom-dbis-138/keys/validators/*/ 2>/dev/null | wc -l && \
echo 'validator directories found'"
log_info "✓ Verification complete"
echo ""
log_info "=== Sync Complete ==="
log_info "Files synced to: ${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_BASE}"
log_info "Backup location: ${BACKUP_DIR}"
log_info ""
log_info "Next steps:"
log_info " 1. SSH to ml110: ssh ${REMOTE_USER}@${REMOTE_HOST}"
log_info " 2. Verify files: cd ${REMOTE_BASE}/smom-dbis-138-proxmox && ls -la"
log_info " 3. Check config: cat config/proxmox.conf"
log_info " 4. Run deployment: ./deploy-all.sh"

scripts/test-cloudflare-api.sh Executable file
@@ -0,0 +1,57 @@
#!/usr/bin/env bash
# Quick test script to verify Cloudflare API credentials
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Load .env if exists
if [[ -f "$SCRIPT_DIR/../.env" ]]; then
source "$SCRIPT_DIR/../.env"
fi
CLOUDFLARE_API_KEY="${CLOUDFLARE_API_KEY:-}"
CLOUDFLARE_EMAIL="${CLOUDFLARE_EMAIL:-}"
CLOUDFLARE_API_TOKEN="${CLOUDFLARE_API_TOKEN:-}"
echo "Testing Cloudflare API credentials..."
echo ""
if [[ -n "$CLOUDFLARE_API_TOKEN" ]]; then
echo "Testing with API Token..."
response=$(curl -s -X GET "https://api.cloudflare.com/client/v4/user" \
-H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" \
-H "Content-Type: application/json")
success=$(echo "$response" | jq -r '.success // false')
if [[ "$success" == "true" ]]; then
email=$(echo "$response" | jq -r '.result.email // "N/A"')
echo "✓ API Token works! Email: $email"
else
error=$(echo "$response" | jq -r '.errors[0].message // "Unknown error"')
echo "✗ API Token failed: $error"
fi
elif [[ -n "$CLOUDFLARE_API_KEY" ]]; then
if [[ -z "$CLOUDFLARE_EMAIL" ]]; then
echo "✗ CLOUDFLARE_API_KEY requires CLOUDFLARE_EMAIL"
echo " Please add CLOUDFLARE_EMAIL to .env file"
else
echo "Testing with API Key + Email..."
response=$(curl -s -X GET "https://api.cloudflare.com/client/v4/user" \
-H "X-Auth-Email: ${CLOUDFLARE_EMAIL}" \
-H "X-Auth-Key: ${CLOUDFLARE_API_KEY}" \
-H "Content-Type: application/json")
success=$(echo "$response" | jq -r '.success // false')
if [[ "$success" == "true" ]]; then
email=$(echo "$response" | jq -r '.result.email // "N/A"')
echo "✓ API Key works! Email: $email"
else
error=$(echo "$response" | jq -r '.errors[0].message // "Unknown error"')
echo "✗ API Key failed: $error"
fi
fi
else
echo "✗ No API credentials found in .env"
fi

scripts/test-connection.sh Executable file
@@ -0,0 +1,90 @@
#!/bin/bash
# Quick connection test script for Proxmox MCP Server
set -e
ENV_FILE="$HOME/.env"
echo "🔍 Testing Proxmox MCP Server Connection"
echo "========================================"
echo ""
# Load environment variables
if [ -f "$ENV_FILE" ]; then
source <(grep -E "^PROXMOX_" "$ENV_FILE" | sed 's/^/export /')
else
echo "❌ .env file not found at $ENV_FILE"
exit 1
fi
echo "Connection Details:"
echo " Host: ${PROXMOX_HOST:-not set}"
echo " User: ${PROXMOX_USER:-not set}"
echo " Token: ${PROXMOX_TOKEN_NAME:-not set}"
echo " Port: ${PROXMOX_PORT:-8006}"
echo ""
# Test 1: Direct API connection
echo "Test 1: Direct Proxmox API Connection"
echo "--------------------------------------"
if [ -z "$PROXMOX_TOKEN_VALUE" ] || [ "$PROXMOX_TOKEN_VALUE" = "your-token-secret-here" ]; then
echo "⚠️ Token value not configured"
else
API_RESPONSE=$(curl -s -k -H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${PROXMOX_HOST}:${PROXMOX_PORT:-8006}/api2/json/version" 2>&1)
if echo "$API_RESPONSE" | grep -q "data"; then
VERSION=$(echo "$API_RESPONSE" | grep -oP '"data":{"version":"\K[^"]+')
echo "✅ API connection successful"
echo " Proxmox version: $VERSION"
else
echo "❌ API connection failed"
echo " Response: $API_RESPONSE"
fi
fi
echo ""
echo "Test 2: MCP Server Tool List"
echo "-----------------------------"
cd "$(dirname "$0")/../mcp-proxmox" || exit 1
TIMEOUT=5
REQUEST='{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
RESPONSE=$(timeout $TIMEOUT node index.js <<< "$REQUEST" 2>&1 | grep -E "^\{" || echo "")
if echo "$RESPONSE" | grep -q "tools"; then
TOOL_COUNT=$(echo "$RESPONSE" | grep -o '"name"' | wc -l)
echo "✅ MCP server responding"
echo " Tools available: $TOOL_COUNT"
else
echo "⚠️ MCP server response unclear"
echo " Response preview: $(echo "$RESPONSE" | head -c 100)..."
fi
echo ""
echo "Test 3: Get Proxmox Nodes"
echo "-------------------------"
REQUEST='{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"proxmox_get_nodes","arguments":{}}}'
RESPONSE=$(timeout $TIMEOUT node index.js <<< "$REQUEST" 2>&1 | grep -E "^\{" || echo "")
if echo "$RESPONSE" | grep -q "data\|nodes\|Node"; then
echo "✅ Successfully retrieved Proxmox nodes"
NODES=$(echo "$RESPONSE" | grep -oP '"text":"[^"]*Node[^"]*"' | head -3 || echo "")
if [ -n "$NODES" ]; then
echo "$NODES" | sed 's/"text":"/ /' | sed 's/"$//'
fi
else
ERROR=$(echo "$RESPONSE" | grep -oP '"message":"\K[^"]*' || echo "Unknown error")
echo "⚠️ Could not retrieve nodes"
echo " Error: $ERROR"
fi
echo ""
echo "========================================"
echo "Connection test complete!"
echo ""
echo "If all tests passed, your MCP server is ready to use."
echo "Start it with: pnpm mcp:start"

scripts/test-scripts-dry-run.sh Executable file
@@ -0,0 +1,95 @@
#!/bin/bash
# Dry-run test of deployment scripts (validates structure without executing)
# Run this locally before deploying to Proxmox host
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
echo "========================================="
echo "Dry-Run Test - Script Validation"
echo "========================================="
echo ""
ERRORS=0
# Test 1: Script syntax
echo "1. Testing script syntax..."
for script in \
"$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/network/bootstrap-network.sh" \
"$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/validation/validate-validator-set.sh" \
"$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/deployment/deploy-validated-set.sh" \
"$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/deployment/bootstrap-quick.sh" \
"$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/health/check-node-health.sh"; do
if bash -n "$script" 2>&1; then
echo " ✓ $(basename "$script")"
else
echo " ✗ $(basename "$script") - Syntax error"
ERRORS=$((ERRORS + 1))
fi
done
# Test 2: Script executability
echo ""
echo "2. Checking script permissions..."
for script in \
"$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/network/bootstrap-network.sh" \
"$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/validation/validate-validator-set.sh" \
"$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/deployment/deploy-validated-set.sh" \
"$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/deployment/bootstrap-quick.sh" \
"$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/health/check-node-health.sh"; do
if [[ -x "$script" ]]; then
echo " ✓ $(basename "$script") is executable"
else
echo " ✗ $(basename "$script") is not executable"
ERRORS=$((ERRORS + 1))
fi
done
# Test 3: Required dependencies
echo ""
echo "3. Checking required dependencies..."
REQUIRED_FILES=(
"$PROJECT_ROOT/smom-dbis-138-proxmox/lib/common.sh"
"$PROJECT_ROOT/smom-dbis-138-proxmox/config/proxmox.conf.example"
)
for file in "${REQUIRED_FILES[@]}"; do
if [[ -f "$file" ]]; then
echo " ✓ $(basename "$file")"
else
echo " ✗ $(basename "$file") - Missing"
ERRORS=$((ERRORS + 1))
fi
done
# Test 4: Help/usage messages
echo ""
echo "4. Testing help/usage messages..."
if bash "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/deployment/deploy-validated-set.sh" --help >/dev/null 2>&1; then
echo " ✓ deploy-validated-set.sh --help works"
else
echo " ✗ deploy-validated-set.sh --help failed"
ERRORS=$((ERRORS + 1))
fi
if bash "$PROJECT_ROOT/smom-dbis-138-proxmox/scripts/health/check-node-health.sh" 2>&1 | grep -q "Usage:"; then
echo " ✓ check-node-health.sh usage message works"
else
echo " ✗ check-node-health.sh usage message failed"
ERRORS=$((ERRORS + 1))
fi
# Summary
echo ""
echo "========================================="
if [[ $ERRORS -eq 0 ]]; then
echo "✓ All tests passed!"
echo "Scripts are ready for deployment."
exit 0
else
echo "✗ Found $ERRORS error(s)"
echo "Please fix errors before deployment."
exit 1
fi

scripts/troubleshoot-rpc-2500.sh Executable file
@@ -0,0 +1,239 @@
#!/usr/bin/env bash
# Troubleshoot RPC-01 at VMID 2500
# Usage: ./troubleshoot-rpc-2500.sh
set -e
VMID=2500
CONTAINER_NAME="besu-rpc-1"
EXPECTED_IP="192.168.11.250"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Troubleshooting RPC-01 (VMID $VMID)"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
# Check if running on Proxmox host
if ! command -v pct &>/dev/null; then
log_error "This script must be run on Proxmox host (pct command not found)"
exit 1
fi
# 1. Check container status
log_info "1. Checking container status..."
CONTAINER_STATUS=$(pct status $VMID 2>&1 | grep -oP 'status: \K\w+' || echo "unknown")
log_info " Status: $CONTAINER_STATUS"
if [ "$CONTAINER_STATUS" != "running" ]; then
log_warn " Container is not running. Attempting to start..."
pct start $VMID 2>&1 || true  # don't let set -e abort before the status re-check below
sleep 5
CONTAINER_STATUS=$(pct status $VMID 2>&1 | grep -oP 'status: \K\w+' || echo "unknown")
if [ "$CONTAINER_STATUS" != "running" ]; then
log_error " Failed to start container"
log_info " Check container logs: pct config $VMID"
exit 1
fi
log_success " Container started"
fi
# 2. Check network configuration
log_info ""
log_info "2. Checking network configuration..."
ACTUAL_IP=$(pct exec $VMID -- ip -4 addr show eth0 2>/dev/null | grep -oP 'inet \K[0-9.]+' | head -1 || echo "")
if [ -n "$ACTUAL_IP" ]; then
log_info " IP Address: $ACTUAL_IP"
if [ "$ACTUAL_IP" = "$EXPECTED_IP" ]; then
log_success " IP address matches expected: $EXPECTED_IP"
else
log_warn " IP address mismatch. Expected: $EXPECTED_IP, Got: $ACTUAL_IP"
fi
else
log_error " Could not determine IP address"
fi
# 3. Check service status
log_info ""
log_info "3. Checking Besu RPC service status..."
SERVICE_STATUS=$(pct exec $VMID -- systemctl is-active besu-rpc.service 2>&1 || echo "unknown")
SERVICE_ENABLED=$(pct exec $VMID -- systemctl is-enabled besu-rpc.service 2>&1 || echo "unknown")
log_info " Service Status: $SERVICE_STATUS"
log_info " Service Enabled: $SERVICE_ENABLED"
if [ "$SERVICE_STATUS" != "active" ]; then
log_warn " Service is not active"
# Check recent logs for errors
log_info ""
log_info " Recent service logs (last 20 lines):"
pct exec $VMID -- journalctl -u besu-rpc.service -n 20 --no-pager 2>&1 | tail -20
fi
# 4. Check configuration files
log_info ""
log_info "4. Checking configuration files..."
CONFIG_FILE="/etc/besu/config-rpc.toml"
if pct exec $VMID -- test -f "$CONFIG_FILE" 2>/dev/null; then
log_success " Config file exists: $CONFIG_FILE"
# Check for common configuration issues
log_info " Checking for configuration issues..."
# Check RPC is enabled
if pct exec $VMID -- grep -q "rpc-http-enabled=true" "$CONFIG_FILE" 2>/dev/null; then
log_success " RPC HTTP is enabled"
else
log_error " RPC HTTP is NOT enabled!"
fi
# Check RPC port
RPC_PORT=$(pct exec $VMID -- grep -oP 'rpc-http-port=\K\d+' "$CONFIG_FILE" 2>/dev/null | head -1 || echo "")
if [ -n "$RPC_PORT" ]; then
log_info " RPC HTTP Port: $RPC_PORT"
if [ "$RPC_PORT" = "8545" ]; then
log_success " Port is correct (8545)"
else
log_warn " Port is $RPC_PORT (expected 8545)"
fi
fi
# Check for deprecated options
DEPRECATED_OPTS=$(pct exec $VMID -- grep -E "log-destination|max-remote-initiated-connections|trie-logs-enabled|accounts-enabled|database-path|rpc-http-host-allowlist" "$CONFIG_FILE" 2>/dev/null | wc -l)
if [ "$DEPRECATED_OPTS" -gt 0 ]; then
log_error " Found $DEPRECATED_OPTS deprecated configuration options"
log_info " Deprecated options found:"
pct exec $VMID -- grep -E "log-destination|max-remote-initiated-connections|trie-logs-enabled|accounts-enabled|database-path|rpc-http-host-allowlist" "$CONFIG_FILE" 2>/dev/null
else
log_success " No deprecated options found"
fi
else
log_error " Config file NOT found: $CONFIG_FILE"
log_info " Expected location: $CONFIG_FILE"
fi
# 5. Check required files
log_info ""
log_info "5. Checking required files..."
GENESIS_FILE="/genesis/genesis.json"
STATIC_NODES="/genesis/static-nodes.json"
PERMISSIONS_NODES="/permissions/permissions-nodes.toml"
for file in "$GENESIS_FILE" "$STATIC_NODES" "$PERMISSIONS_NODES"; do
if pct exec $VMID -- test -f "$file" 2>/dev/null; then
log_success " Found: $file"
else
log_error " Missing: $file"
fi
done
# 6. Check ports
log_info ""
log_info "6. Checking listening ports..."
PORTS=$(pct exec $VMID -- ss -tlnp 2>/dev/null | grep -E "8545|8546|30303|9545" || echo "")
if [ -n "$PORTS" ]; then
log_success " Listening ports:"
echo "$PORTS" | while read line; do
log_info " $line"
done
else
log_error " No Besu ports are listening (8545, 8546, 30303, 9545)"
fi
# 7. Check RPC endpoint
log_info ""
log_info "7. Testing RPC endpoint..."
RPC_RESPONSE=$(pct exec $VMID -- curl -s -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
http://localhost:8545 2>&1 || echo "FAILED")
if echo "$RPC_RESPONSE" | grep -q "result"; then
BLOCK_NUM=$(echo "$RPC_RESPONSE" | grep -oP '"result":"\K[^"]+' | head -1)
log_success " RPC endpoint is responding"
log_info " Current block: $BLOCK_NUM"
else
log_error " RPC endpoint is NOT responding"
log_info " Response: $RPC_RESPONSE"
fi
# 8. Check process
log_info ""
log_info "8. Checking Besu process..."
BESU_PROCESS=$(pct exec $VMID -- ps aux | grep -E "[b]esu|java.*besu" | head -1 || echo "")
if [ -n "$BESU_PROCESS" ]; then
log_success " Besu process is running"
log_info " Process: $(echo "$BESU_PROCESS" | awk '{print $2, $11, $12, $13}')"
else
log_error " Besu process is NOT running"
fi
# 9. Check disk space
log_info ""
log_info "9. Checking disk space..."
DISK_USAGE=$(pct exec $VMID -- df -h / 2>/dev/null | tail -1 | awk '{print $5}' || echo "unknown")
log_info " Disk usage: $DISK_USAGE"
# 10. Check memory
log_info ""
log_info "10. Checking memory..."
MEM_INFO=$(pct exec $VMID -- free -h 2>/dev/null | grep Mem || echo "")
if [ -n "$MEM_INFO" ]; then
log_info " $MEM_INFO"
fi
# 11. Recent errors
log_info ""
log_info "11. Checking for recent errors in logs..."
RECENT_ERRORS=$(pct exec $VMID -- journalctl -u besu-rpc.service --since "10 minutes ago" --no-pager 2>&1 | \
grep -iE "error|fail|exception|unable|cannot" | tail -10 || echo "")
if [ -n "$RECENT_ERRORS" ]; then
log_error " Recent errors found:"
echo "$RECENT_ERRORS" | while read line; do
log_error " $line"
done
else
log_success " No recent errors found"
fi
# Summary
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Troubleshooting Summary"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
# Provide recommendations
if [ "$SERVICE_STATUS" != "active" ]; then
log_warn "RECOMMENDATIONS:"
log_info "1. Check service logs: pct exec $VMID -- journalctl -u besu-rpc.service -f"
log_info "2. Verify configuration: pct exec $VMID -- cat /etc/besu/config-rpc.toml"
log_info "3. Restart service: pct exec $VMID -- systemctl restart besu-rpc.service"
log_info "4. Check for configuration errors in logs"
fi
if [ -z "$PORTS" ]; then
log_warn "RECOMMENDATIONS:"
log_info "1. Service may not be running - check service status"
log_info "2. Check firewall rules if service is running"
log_info "3. Verify RPC is enabled in config file"
fi
echo ""
log_info "For detailed logs, run:"
log_info " pct exec $VMID -- journalctl -u besu-rpc.service -f"
echo ""

@@ -0,0 +1,83 @@
#!/usr/bin/env bash
# Update service .env files with deployed contract addresses
# Usage: ./update-service-configs.sh <contract-addresses-file>
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ADDRESSES_FILE="${1:-/home/intlc/projects/smom-dbis-138/deployed-addresses-chain138.txt}"
if [ ! -f "$ADDRESSES_FILE" ]; then
echo "❌ Addresses file not found: $ADDRESSES_FILE"
exit 1
fi
# Source addresses
source "$ADDRESSES_FILE" 2>/dev/null || true
RPC_URL="http://192.168.11.250:8545"
WS_URL="ws://192.168.11.250:8546"
# Function to update container .env
update_container_env() {
local vmid="$1"
local service_name="$2"
local env_content="$3"
if ! pct status "$vmid" &>/dev/null; then
echo "⚠️ Container $vmid not found, skipping $service_name"
return
fi
echo "Updating $service_name (VMID $vmid)..."
pct exec "$vmid" -- bash -c "cat > /opt/$service_name/.env <<'ENVEOF'
$env_content
ENVEOF" 2>/dev/null && echo "✅ Updated $service_name" || echo "❌ Failed to update $service_name"
}
# Oracle Publisher (VMID 3500)
if [ -n "$ORACLE_CONTRACT_ADDRESS" ] || [ -n "$AGGREGATOR_ADDRESS" ]; then
ORACLE_ADDR="${ORACLE_CONTRACT_ADDRESS:-$AGGREGATOR_ADDRESS}"
update_container_env 3500 "oracle-publisher" "RPC_URL_138=$RPC_URL
WS_URL_138=$WS_URL
ORACLE_CONTRACT_ADDRESS=$ORACLE_ADDR
PRIVATE_KEY=\${PRIVATE_KEY}
UPDATE_INTERVAL=60
METRICS_PORT=8000"
fi
# CCIP Monitor (VMID 3501)
if [ -n "$CCIP_ROUTER_ADDRESS" ] && [ -n "$CCIP_SENDER_ADDRESS" ]; then
update_container_env 3501 "ccip-monitor" "RPC_URL_138=$RPC_URL
CCIP_ROUTER_ADDRESS=$CCIP_ROUTER_ADDRESS
CCIP_SENDER_ADDRESS=$CCIP_SENDER_ADDRESS
LINK_TOKEN_ADDRESS=\${LINK_TOKEN_ADDRESS}
METRICS_PORT=8000
CHECK_INTERVAL=60"
fi
# Keeper (VMID 3502)
if [ -n "$PRICE_FEED_KEEPER_ADDRESS" ]; then
update_container_env 3502 "keeper" "RPC_URL_138=$RPC_URL
PRICE_FEED_KEEPER_ADDRESS=$PRICE_FEED_KEEPER_ADDRESS
KEEPER_PRIVATE_KEY=\${KEEPER_PRIVATE_KEY}
UPDATE_INTERVAL=300
HEALTH_PORT=3000"
fi
# Financial Tokenization (VMID 3503)
if [ -n "$RESERVE_SYSTEM_ADDRESS" ]; then
update_container_env 3503 "financial-tokenization" "FIREFLY_API_URL=http://192.168.11.66:5000
FIREFLY_API_KEY=\${FIREFLY_API_KEY}
BESU_RPC_URL=$RPC_URL
CHAIN_ID=138
RESERVE_SYSTEM=$RESERVE_SYSTEM_ADDRESS
FLASK_ENV=production
FLASK_PORT=5001"
fi
echo ""
echo "✅ Service configurations updated"
echo ""
echo "Note: Private keys and API keys need to be set manually"

scripts/update-token.sh Executable file
@@ -0,0 +1,52 @@
#!/bin/bash
# Script to update PROXMOX_TOKEN_VALUE in .env file
ENV_FILE="$HOME/.env"
if [ ! -f "$ENV_FILE" ]; then
echo "❌ .env file not found at $ENV_FILE"
exit 1
fi
echo "🔐 Update Proxmox API Token"
echo "============================"
echo ""
echo "Please paste the token secret you copied from Proxmox UI:"
echo "(The secret will be hidden as you type)"
echo ""
read -rs TOKEN_VALUE
if [ -z "$TOKEN_VALUE" ]; then
echo "❌ No token value provided"
exit 1
fi
# Update the .env file
if grep -q "^PROXMOX_TOKEN_VALUE=" "$ENV_FILE"; then
# Escape characters that are special in the sed replacement text (&, \, and the | delimiter)
SAFE_TOKEN=$(printf '%s' "$TOKEN_VALUE" | sed 's/[&|\\]/\\&/g')
sed -i "s|^PROXMOX_TOKEN_VALUE=.*|PROXMOX_TOKEN_VALUE=$SAFE_TOKEN|" "$ENV_FILE"
echo ""
echo "✅ Token updated in $ENV_FILE"
else
echo "❌ PROXMOX_TOKEN_VALUE not found in .env file"
exit 1
fi
echo ""
echo "Verifying configuration..."
if grep -qF "PROXMOX_TOKEN_VALUE=$TOKEN_VALUE" "$ENV_FILE"; then
echo "✅ Token successfully configured!"
echo ""
echo "Current configuration:"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
grep "^PROXMOX_" "$ENV_FILE" | grep -v "TOKEN_VALUE" | sed 's/=.*/=***/'
echo "PROXMOX_TOKEN_VALUE=***configured***"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "You can now test the connection:"
echo " ./verify-setup.sh"
echo " pnpm test:basic"
else
echo "⚠️ Token may not have been updated correctly"
fi

@@ -0,0 +1,555 @@
#!/bin/bash
# Comprehensive Deployment Validation Script for ml110-01
# Validates all prerequisites, configurations, and deployment readiness
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$SCRIPT_DIR"
DEPLOY_DIR="$PROJECT_ROOT/smom-dbis-138-proxmox"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
# Status counters
PASSED=0
FAILED=0
WARNINGS=0
# Functions
# Use VAR=$((VAR + 1)) rather than ((VAR++)): under `set -e`,
# ((VAR++)) returns a non-zero status when VAR is 0 and aborts the script.
pass() {
echo -e "${GREEN}✓${NC} $1"
PASSED=$((PASSED + 1))
}
fail() {
echo -e "${RED}✗${NC} $1"
FAILED=$((FAILED + 1))
}
warn() {
echo -e "${YELLOW}⚠️${NC} $1"
WARNINGS=$((WARNINGS + 1))
}
info() {
echo -e "${BLUE}ℹ${NC} $1"
}
section() {
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${CYAN}$1${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
}
# Load environment
ENV_FILE="$HOME/.env"
if [ -f "$ENV_FILE" ]; then
source <(grep -E "^PROXMOX_" "$ENV_FILE" 2>/dev/null | sed 's/^/export /' || true)
fi
TARGET_HOST="${PROXMOX_HOST:-192.168.11.10}"
TARGET_NODE="${TARGET_NODE:-ml110-01}"
echo ""
echo -e "${BLUE}╔════════════════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║${NC}  Deployment Validation for ml110-01 (${TARGET_HOST})  ${BLUE}║${NC}"
echo -e "${BLUE}╚════════════════════════════════════════════════════════════════════════╝${NC}"
echo ""
# ============================================================================
# SECTION 1: PREREQUISITES
# ============================================================================
section "1. PREREQUISITES CHECK"
# Check if running on Proxmox host or remote
if command -v pct &> /dev/null; then
info "Running on Proxmox host - can execute pct commands directly"
ON_PROXMOX_HOST=true
else
info "Running remotely - will use Proxmox API"
ON_PROXMOX_HOST=false
fi
# Check required tools
REQUIRED_TOOLS=("curl" "jq" "git" "bash")
for tool in "${REQUIRED_TOOLS[@]}"; do
if command -v "$tool" &> /dev/null; then
pass "$tool installed"
else
fail "$tool not found (required)"
fi
done
# Check deployment directory
if [ -d "$DEPLOY_DIR" ]; then
pass "Deployment directory exists: $DEPLOY_DIR"
else
fail "Deployment directory not found: $DEPLOY_DIR"
fi
# ============================================================================
# SECTION 2: PROXMOX CONNECTION
# ============================================================================
section "2. PROXMOX CONNECTION VALIDATION"
# Test API connectivity
info "Testing connection to $TARGET_HOST:8006"
API_RESPONSE=$(curl -k -s -w "\n%{http_code}" -H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${TARGET_HOST}:8006/api2/json/version" 2>&1 || echo "ERROR")
HTTP_CODE=$(echo "$API_RESPONSE" | tail -1)
RESPONSE_BODY=$(echo "$API_RESPONSE" | sed '$d')
if [ "$HTTP_CODE" = "200" ]; then
VERSION=$(echo "$RESPONSE_BODY" | jq -r '.data.version' 2>/dev/null || echo "unknown")
pass "API connection successful (Proxmox version: $VERSION)"
else
fail "API connection failed (HTTP $HTTP_CODE)"
if [ -n "$RESPONSE_BODY" ]; then
info "Response: $(echo "$RESPONSE_BODY" | head -c 200)"
fi
fi
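# Helper sketch for the pattern above: -w "\n%{http_code}" makes curl append the
# status code on its own line, so tail -1 / sed '$d' split status from body. A
# small wrapper keeps the two-step split in one place (hypothetical helper, not
# used elsewhere in this script).

```shell
split_status() {
HTTP_CODE=$(echo "$1" | tail -1)
RESPONSE_BODY=$(echo "$1" | sed '$d')
}
```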
# Get node list
info "Retrieving node list..."
NODES_RESPONSE=$(curl -k -s -H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${TARGET_HOST}:8006/api2/json/nodes" 2>&1)
if echo "$NODES_RESPONSE" | jq -e '.data' &>/dev/null; then
NODE_COUNT=$(echo "$NODES_RESPONSE" | jq '.data | length')
NODE_NAMES=$(echo "$NODES_RESPONSE" | jq -r '.data[].node' | tr '\n' ' ')
pass "Retrieved $NODE_COUNT node(s): $NODE_NAMES"
# Check if target node exists
if echo "$NODES_RESPONSE" | jq -e ".data[] | select(.node==\"$TARGET_NODE\")" &>/dev/null; then
pass "Target node '$TARGET_NODE' found"
else
FIRST_NODE=$(echo "$NODES_RESPONSE" | jq -r '.data[0].node')
warn "Target node '$TARGET_NODE' not found, using first node: $FIRST_NODE"
TARGET_NODE="$FIRST_NODE"
fi
else
fail "Failed to retrieve nodes"
fi
# Get node status
info "Checking node status for $TARGET_NODE..."
NODE_STATUS=$(curl -k -s -H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${TARGET_HOST}:8006/api2/json/nodes/${TARGET_NODE}/status" 2>&1)
if echo "$NODE_STATUS" | jq -e '.data.status' &>/dev/null; then
STATUS=$(echo "$NODE_STATUS" | jq -r '.data.status')
if [ "$STATUS" = "online" ]; then
pass "Node $TARGET_NODE is online"
else
fail "Node $TARGET_NODE status: $STATUS"
fi
else
warn "Could not retrieve node status"
fi
# ============================================================================
# SECTION 3: STORAGE VALIDATION
# ============================================================================
section "3. STORAGE VALIDATION"
# Get storage list
info "Checking storage availability..."
STORAGE_RESPONSE=$(curl -k -s -H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${TARGET_HOST}:8006/api2/json/nodes/${TARGET_NODE}/storage" 2>&1)
if echo "$STORAGE_RESPONSE" | jq -e '.data' &>/dev/null; then
STORAGE_COUNT=$(echo "$STORAGE_RESPONSE" | jq '.data | length')
pass "Found $STORAGE_COUNT storage pool(s)"
# Check for common storage types
STORAGE_TYPES=$(echo "$STORAGE_RESPONSE" | jq -r '.data[].type' | sort -u | tr '\n' ' ')
info "Storage types available: $STORAGE_TYPES"
# Check for container storage (rootdir)
CONTAINER_STORAGE=$(echo "$STORAGE_RESPONSE" | jq -r '.data[] | select(.content | contains("rootdir")) | .storage' | head -1)
if [ -n "$CONTAINER_STORAGE" ]; then
pass "Container storage found: $CONTAINER_STORAGE"
# Check storage capacity
STORAGE_INFO=$(echo "$STORAGE_RESPONSE" | jq ".data[] | select(.storage==\"$CONTAINER_STORAGE\")")
if echo "$STORAGE_INFO" | jq -e '.content' &>/dev/null; then
TOTAL=$(echo "$STORAGE_INFO" | jq -r '.total // 0' | numfmt --to=iec-i --suffix=B 2>/dev/null || echo "unknown")
FREE=$(echo "$STORAGE_INFO" | jq -r '.avail // 0' | numfmt --to=iec-i --suffix=B 2>/dev/null || echo "unknown")
USED=$(echo "$STORAGE_INFO" | jq -r '.used // 0' | numfmt --to=iec-i --suffix=B 2>/dev/null || echo "unknown")
info "Storage $CONTAINER_STORAGE: Total=$TOTAL, Free=$FREE, Used=$USED"
fi
else
fail "No container storage (rootdir) found"
fi
# Check for template storage (vztmpl)
TEMPLATE_STORAGE=$(echo "$STORAGE_RESPONSE" | jq -r '.data[] | select(.content | contains("vztmpl")) | .storage' | head -1)
if [ -n "$TEMPLATE_STORAGE" ]; then
pass "Template storage found: $TEMPLATE_STORAGE"
else
warn "No template storage (vztmpl) found"
fi
else
fail "Failed to retrieve storage information"
fi
# ============================================================================
# SECTION 4: TEMPLATE VALIDATION
# ============================================================================
section "4. LXC TEMPLATE VALIDATION"
if [ -n "$TEMPLATE_STORAGE" ]; then
info "Checking templates on storage: $TEMPLATE_STORAGE"
TEMPLATES_RESPONSE=$(curl -k -s -H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${TARGET_HOST}:8006/api2/json/nodes/${TARGET_NODE}/storage/${TEMPLATE_STORAGE}/content?content=vztmpl" 2>&1)
if echo "$TEMPLATES_RESPONSE" | jq -e '.data' &>/dev/null; then
TEMPLATE_COUNT=$(echo "$TEMPLATES_RESPONSE" | jq '.data | length')
pass "Found $TEMPLATE_COUNT template(s)"
# Check for Debian 12 template
DEBIAN_TEMPLATE=$(echo "$TEMPLATES_RESPONSE" | jq -r '.data[] | select(.volid | contains("debian-12")) | .volid' | head -1)
if [ -n "$DEBIAN_TEMPLATE" ]; then
pass "Debian 12 template found: $DEBIAN_TEMPLATE"
else
fail "Debian 12 template not found (required for deployment)"
info "Available templates:"
echo "$TEMPLATES_RESPONSE" | jq -r '.data[].volid' | head -10 | sed 's/^/ - /'
fi
else
warn "Could not retrieve template list"
fi
else
warn "Cannot check templates - no template storage found"
fi
# ============================================================================
# SECTION 5: CONFIGURATION FILES
# ============================================================================
section "5. CONFIGURATION FILES VALIDATION"
# Check if config files exist
CONFIG_DIR="$DEPLOY_DIR/config"
if [ -d "$CONFIG_DIR" ]; then
pass "Config directory exists: $CONFIG_DIR"
# Check proxmox.conf
if [ -f "$CONFIG_DIR/proxmox.conf" ]; then
pass "proxmox.conf exists"
source "$CONFIG_DIR/proxmox.conf" 2>/dev/null || true
# Validate config values
if [ -n "${PROXMOX_HOST:-}" ]; then
if [ "$PROXMOX_HOST" = "$TARGET_HOST" ]; then
pass "PROXMOX_HOST configured: $PROXMOX_HOST"
elif [ "$PROXMOX_HOST" = "proxmox.example.com" ]; then
warn "PROXMOX_HOST still set to example: $PROXMOX_HOST"
else
warn "PROXMOX_HOST mismatch: configured=$PROXMOX_HOST, expected=$TARGET_HOST"
fi
else
fail "PROXMOX_HOST not set in proxmox.conf"
fi
if [ -n "${PROXMOX_NODE:-}" ]; then
pass "PROXMOX_NODE configured: $PROXMOX_NODE"
else
warn "PROXMOX_NODE not set (will use default)"
fi
if [ -n "${PROXMOX_STORAGE:-}" ]; then
pass "PROXMOX_STORAGE configured: $PROXMOX_STORAGE"
else
warn "PROXMOX_STORAGE not set (will use default)"
fi
if [ -n "${CONTAINER_OS_TEMPLATE:-}" ]; then
pass "CONTAINER_OS_TEMPLATE configured: $CONTAINER_OS_TEMPLATE"
else
warn "CONTAINER_OS_TEMPLATE not set (will use default)"
fi
else
if [ -f "$CONFIG_DIR/proxmox.conf.example" ]; then
fail "proxmox.conf not found (example exists - needs to be copied and configured)"
else
fail "proxmox.conf not found"
fi
fi
# Check network.conf
if [ -f "$CONFIG_DIR/network.conf" ]; then
pass "network.conf exists"
source "$CONFIG_DIR/network.conf" 2>/dev/null || true
if [ -n "${SUBNET_BASE:-}" ]; then
pass "SUBNET_BASE configured: $SUBNET_BASE"
else
warn "SUBNET_BASE not set"
fi
if [ -n "${GATEWAY:-}" ]; then
pass "GATEWAY configured: $GATEWAY"
else
warn "GATEWAY not set"
fi
else
if [ -f "$CONFIG_DIR/network.conf.example" ]; then
warn "network.conf not found (example exists)"
else
warn "network.conf not found"
fi
fi
else
fail "Config directory not found: $CONFIG_DIR"
fi
# ============================================================================
# SECTION 6: DEPLOYMENT SCRIPTS
# ============================================================================
section "6. DEPLOYMENT SCRIPTS VALIDATION"
SCRIPTS_DIR="$DEPLOY_DIR/scripts/deployment"
if [ -d "$SCRIPTS_DIR" ]; then
pass "Deployment scripts directory exists"
REQUIRED_SCRIPTS=(
"deploy-all.sh"
"deploy-besu-nodes.sh"
"deploy-services.sh"
"deploy-hyperledger-services.sh"
"deploy-monitoring.sh"
"deploy-explorer.sh"
)
for script in "${REQUIRED_SCRIPTS[@]}"; do
SCRIPT_PATH="$SCRIPTS_DIR/$script"
if [ -f "$SCRIPT_PATH" ]; then
if [ -x "$SCRIPT_PATH" ]; then
pass "$script exists and is executable"
else
warn "$script exists but is not executable"
fi
# Check for syntax errors (bash -n exits nonzero on a parse error)
if ! bash -n "$SCRIPT_PATH" 2>/dev/null; then
fail "$script has syntax errors"
fi
else
warn "$script not found"
fi
done
else
fail "Deployment scripts directory not found: $SCRIPTS_DIR"
fi
# Check common library
LIB_DIR="$DEPLOY_DIR/lib"
if [ -f "$LIB_DIR/common.sh" ]; then
pass "common.sh library exists"
else
fail "common.sh library not found"
fi
if [ -f "$LIB_DIR/proxmox-api.sh" ]; then
pass "proxmox-api.sh library exists"
else
warn "proxmox-api.sh library not found"
fi
# ============================================================================
# SECTION 7: RESOURCE REQUIREMENTS
# ============================================================================
section "7. RESOURCE REQUIREMENTS VALIDATION"
# Calculate total resource requirements
TOTAL_MEMORY=0
TOTAL_CORES=0
TOTAL_DISK=0
# Load resource config if available
if [ -f "$CONFIG_DIR/proxmox.conf" ]; then
source "$CONFIG_DIR/proxmox.conf" 2>/dev/null || true
# Calculate based on default counts
VALIDATOR_COUNT="${VALIDATORS_COUNT:-4}"
SENTRY_COUNT="${SENTRIES_COUNT:-3}"
RPC_COUNT="${RPC_COUNT:-3}"
SERVICE_COUNT="${SERVICES_COUNT:-4}"
# Per-container sizes (VALIDATOR_MEMORY etc.) come from proxmox.conf; unset values evaluate to 0 in bash arithmetic
TOTAL_MEMORY=$((VALIDATOR_MEMORY * VALIDATOR_COUNT + SENTRY_MEMORY * SENTRY_COUNT + RPC_MEMORY * RPC_COUNT + SERVICE_MEMORY * SERVICE_COUNT + MONITORING_MEMORY))
TOTAL_CORES=$((VALIDATOR_CORES * VALIDATOR_COUNT + SENTRY_CORES * SENTRY_COUNT + RPC_CORES * RPC_COUNT + SERVICE_CORES * SERVICE_COUNT + MONITORING_CORES))
TOTAL_DISK=$((VALIDATOR_DISK * VALIDATOR_COUNT + SENTRY_DISK * SENTRY_COUNT + RPC_DISK * RPC_COUNT + SERVICE_DISK * SERVICE_COUNT + MONITORING_DISK))
info "Estimated resource requirements:"
info " Memory: $((TOTAL_MEMORY / 1024))GB"
info " CPU Cores: $TOTAL_CORES"
info " Disk: ${TOTAL_DISK}GB"
# Get node resources
if [ -n "$NODE_STATUS" ]; then
NODE_MEMORY=$(echo "$NODE_STATUS" | jq -r '.data.memory.total // 0')
NODE_FREE_MEMORY=$(echo "$NODE_STATUS" | jq -r '.data.memory.free // 0')
NODE_CPU_COUNT=$(echo "$NODE_STATUS" | jq -r '.data.cpuinfo.cpus // 0')
if [ "$NODE_MEMORY" != "0" ] && [ -n "$NODE_MEMORY" ]; then
MEMORY_GB=$((NODE_MEMORY / 1024 / 1024 / 1024))
FREE_MEMORY_GB=$((NODE_FREE_MEMORY / 1024 / 1024 / 1024))
info "Node resources:"
info " Total Memory: ${MEMORY_GB}GB (Free: ${FREE_MEMORY_GB}GB)"
info " CPU Cores: $NODE_CPU_COUNT"
if [ $FREE_MEMORY_GB -lt $((TOTAL_MEMORY / 1024)) ]; then
warn "Insufficient free memory: need $((TOTAL_MEMORY / 1024))GB, have ${FREE_MEMORY_GB}GB"
else
pass "Sufficient memory available"
fi
if [ $NODE_CPU_COUNT -lt $TOTAL_CORES ]; then
warn "Limited CPU cores: need $TOTAL_CORES, have $NODE_CPU_COUNT (containers can share CPU)"
else
pass "Sufficient CPU cores available"
fi
fi
fi
fi
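# Helper sketch: wraps the numfmt call used above so raw byte counts from the
# API render as IEC sizes (e.g. 137438953472 -> 128GiB); falls back to the raw
# value where GNU coreutils numfmt is unavailable. Hypothetical helper, shown
# for illustration only.

```shell
human_bytes() {
echo "$1" | numfmt --to=iec-i --suffix=B 2>/dev/null || echo "$1"
}
```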
# ============================================================================
# SECTION 8: NETWORK CONFIGURATION
# ============================================================================
section "8. NETWORK CONFIGURATION VALIDATION"
# Get network interfaces
info "Checking network configuration..."
NETWORK_RESPONSE=$(curl -k -s -H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${TARGET_HOST}:8006/api2/json/nodes/${TARGET_NODE}/network" 2>&1)
if echo "$NETWORK_RESPONSE" | jq -e '.data' &>/dev/null; then
BRIDGES=$(echo "$NETWORK_RESPONSE" | jq -r '.data[] | select(.type=="bridge") | .iface' | tr '\n' ' ')
if [ -n "$BRIDGES" ]; then
pass "Network bridges found: $BRIDGES"
# Check for vmbr0
if echo "$BRIDGES" | grep -q "vmbr0"; then
pass "vmbr0 bridge exists (required)"
else
warn "vmbr0 bridge not found (may cause network issues)"
fi
else
warn "No network bridges found"
fi
else
warn "Could not retrieve network configuration"
fi
# ============================================================================
# SECTION 9: EXISTING CONTAINERS
# ============================================================================
section "9. EXISTING CONTAINERS CHECK"
info "Checking for existing containers..."
CONTAINERS_RESPONSE=$(curl -k -s -H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${TARGET_HOST}:8006/api2/json/nodes/${TARGET_NODE}/lxc" 2>&1)
if echo "$CONTAINERS_RESPONSE" | jq -e '.data' &>/dev/null; then
EXISTING_COUNT=$(echo "$CONTAINERS_RESPONSE" | jq '.data | length')
if [ "$EXISTING_COUNT" -gt 0 ]; then
info "Found $EXISTING_COUNT existing container(s)"
# Check for VMID conflicts
EXPECTED_VMIDS=(106 107 108 109 110 111 112 115 116 117 120 121 122 130 140 150 151 152 153)
CONFLICTS=()
for vmid in "${EXPECTED_VMIDS[@]}"; do
if echo "$CONTAINERS_RESPONSE" | jq -e ".data[] | select(.vmid==$vmid)" &>/dev/null; then
CONFLICTS+=("$vmid")
fi
done
if [ ${#CONFLICTS[@]} -gt 0 ]; then
warn "VMID conflicts detected: ${CONFLICTS[*]} (containers already exist)"
else
pass "No VMID conflicts detected"
fi
else
pass "No existing containers (clean slate)"
fi
else
warn "Could not retrieve container list"
fi
# ============================================================================
# SECTION 10: INSTALLATION SCRIPTS
# ============================================================================
section "10. INSTALLATION SCRIPTS VALIDATION"
INSTALL_DIR="$DEPLOY_DIR/install"
if [ -d "$INSTALL_DIR" ]; then
pass "Install scripts directory exists"
REQUIRED_INSTALL_SCRIPTS=(
"besu-validator-install.sh"
"besu-sentry-install.sh"
"besu-rpc-install.sh"
"oracle-publisher-install.sh"
"ccip-monitor-install.sh"
"keeper-install.sh"
)
for script in "${REQUIRED_INSTALL_SCRIPTS[@]}"; do
if [ -f "$INSTALL_DIR/$script" ]; then
pass "$script exists"
else
warn "$script not found"
fi
done
else
fail "Install scripts directory not found: $INSTALL_DIR"
fi
# ============================================================================
# SUMMARY
# ============================================================================
section "VALIDATION SUMMARY"
echo -e "${CYAN}Results:${NC}"
echo -e " ${GREEN}Passed:${NC} $PASSED"
echo -e " ${RED}Failed:${NC} $FAILED"
echo -e " ${YELLOW}Warnings:${NC} $WARNINGS"
echo ""
if [ $FAILED -eq 0 ]; then
echo -e "${GREEN}✅ Deployment validation PASSED${NC}"
echo ""
echo "Next steps:"
echo " 1. Review configuration files in $DEPLOY_DIR/config/"
echo " 2. Run deployment: cd $DEPLOY_DIR && ./scripts/deployment/deploy-all.sh"
exit 0
else
echo -e "${RED}❌ Deployment validation FAILED${NC}"
echo ""
echo "Please fix the issues above before proceeding with deployment."
exit 1
fi


@@ -0,0 +1,449 @@
#!/bin/bash
# Comprehensive Deployment Validation for ml110-01
# Validates all prerequisites, configurations, and deployment readiness
set +e # Don't exit on errors - we want to collect all results
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
DEPLOY_DIR="$PROJECT_ROOT/smom-dbis-138-proxmox"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
# Status tracking
declare -a PASSED_ITEMS
declare -a FAILED_ITEMS
declare -a WARNING_ITEMS
pass() {
echo -e "${GREEN}✓${NC} $1"
PASSED_ITEMS+=("$1")
}
fail() {
echo -e "${RED}✗${NC} $1"
FAILED_ITEMS+=("$1")
}
warn() {
echo -e "${YELLOW}⚠️${NC} $1"
WARNING_ITEMS+=("$1")
}
info() {
echo -e "${BLUE}ℹ️${NC} $1"
}
section() {
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${CYAN}$1${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
}
# Load environment
ENV_FILE="$HOME/.env"
if [ -f "$ENV_FILE" ]; then
source <(grep -E "^PROXMOX_" "$ENV_FILE" 2>/dev/null | sed 's/^/export /' || true)
fi
TARGET_HOST="${PROXMOX_HOST:-192.168.11.10}"
TARGET_NODE="${PROXMOX_NODE:-ml110-01}"
echo ""
echo -e "${BLUE}╔════════════════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║${NC} Deployment Validation for ml110-01 (${TARGET_HOST}) ${BLUE}║${NC}"
echo -e "${BLUE}╚════════════════════════════════════════════════════════════════════════╝${NC}"
echo ""
# ============================================================================
# SECTION 1: PREREQUISITES
# ============================================================================
section "1. PREREQUISITES CHECK"
# Check required tools (python3 is used throughout this script for JSON parsing)
REQUIRED_TOOLS=("curl" "jq" "git" "bash" "python3")
for tool in "${REQUIRED_TOOLS[@]}"; do
if command -v "$tool" &> /dev/null; then
VERSION=$($tool --version 2>&1 | head -1 | cut -d' ' -f1-3)
pass "$tool installed ($VERSION)"
else
fail "$tool not found (required)"
fi
done
# Check deployment directory
if [ -d "$DEPLOY_DIR" ]; then
pass "Deployment directory exists: $DEPLOY_DIR"
else
fail "Deployment directory not found: $DEPLOY_DIR"
exit 1
fi
# ============================================================================
# SECTION 2: PROXMOX CONNECTION
# ============================================================================
section "2. PROXMOX CONNECTION VALIDATION"
if [ -z "${PROXMOX_USER:-}" ] || [ -z "${PROXMOX_TOKEN_NAME:-}" ] || [ -z "${PROXMOX_TOKEN_VALUE:-}" ]; then
fail "Proxmox credentials not configured in .env"
info "Required: PROXMOX_USER, PROXMOX_TOKEN_NAME, PROXMOX_TOKEN_VALUE"
else
pass "Proxmox credentials found in .env"
fi
# Test API connectivity
info "Testing connection to $TARGET_HOST:8006"
API_RESPONSE=$(curl -k -s -w "\n%{http_code}" -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${TARGET_HOST}:8006/api2/json/version" 2>&1)
HTTP_CODE=$(echo "$API_RESPONSE" | tail -1)
RESPONSE_BODY=$(echo "$API_RESPONSE" | sed '$d')
if [ "$HTTP_CODE" = "200" ]; then
VERSION=$(echo "$RESPONSE_BODY" | python3 -c "import sys, json; print(json.load(sys.stdin)['data']['version'])" 2>/dev/null || echo "unknown")
pass "API connection successful (Proxmox version: $VERSION)"
else
fail "API connection failed (HTTP $HTTP_CODE)"
if [ -n "$RESPONSE_BODY" ]; then
info "Response: $(echo "$RESPONSE_BODY" | head -c 200)"
fi
fi
# Get node list
info "Retrieving node list..."
NODES_RESPONSE=$(curl -k -s -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${TARGET_HOST}:8006/api2/json/nodes" 2>&1)
if echo "$NODES_RESPONSE" | python3 -c "import sys, json; json.load(sys.stdin)['data']" 2>/dev/null; then
NODE_COUNT=$(echo "$NODES_RESPONSE" | python3 -c "import sys, json; print(len(json.load(sys.stdin)['data']))" 2>/dev/null)
NODE_NAMES=$(echo "$NODES_RESPONSE" | python3 -c "import sys, json; print(' '.join([n['node'] for n in json.load(sys.stdin)['data']]))" 2>/dev/null)
pass "Retrieved $NODE_COUNT node(s): $NODE_NAMES"
# Check if target node exists
if echo "$NODES_RESPONSE" | python3 -c "import sys, json; data=json.load(sys.stdin)['data']; exit(0 if any(n['node']=='${TARGET_NODE}' for n in data) else 1)" 2>/dev/null; then
pass "Target node '$TARGET_NODE' found"
else
FIRST_NODE=$(echo "$NODES_RESPONSE" | python3 -c "import sys, json; print(json.load(sys.stdin)['data'][0]['node'])" 2>/dev/null)
warn "Target node '$TARGET_NODE' not found, will use: $FIRST_NODE"
TARGET_NODE="$FIRST_NODE"
fi
else
fail "Failed to retrieve nodes"
fi
# ============================================================================
# SECTION 3: STORAGE VALIDATION
# ============================================================================
section "3. STORAGE VALIDATION"
STORAGE_RESPONSE=$(curl -k -s -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${TARGET_HOST}:8006/api2/json/nodes/${TARGET_NODE}/storage" 2>&1)
if echo "$STORAGE_RESPONSE" | python3 -c "import sys, json; json.load(sys.stdin)['data']" 2>/dev/null; then
STORAGE_COUNT=$(echo "$STORAGE_RESPONSE" | python3 -c "import sys, json; print(len(json.load(sys.stdin)['data']))" 2>/dev/null)
pass "Found $STORAGE_COUNT storage pool(s)"
# Check for container storage
CONTAINER_STORAGE=$(echo "$STORAGE_RESPONSE" | python3 -c "import sys, json; data=json.load(sys.stdin)['data']; result=[s['storage'] for s in data if 'rootdir' in s.get('content', '').split(',')]; print(result[0] if result else '')" 2>/dev/null)
if [ -n "$CONTAINER_STORAGE" ]; then
pass "Container storage found: $CONTAINER_STORAGE"
else
fail "No container storage (rootdir) found"
fi
# Check for template storage
TEMPLATE_STORAGE=$(echo "$STORAGE_RESPONSE" | python3 -c "import sys, json; data=json.load(sys.stdin)['data']; result=[s['storage'] for s in data if 'vztmpl' in s.get('content', '').split(',')]; print(result[0] if result else '')" 2>/dev/null)
if [ -n "$TEMPLATE_STORAGE" ]; then
pass "Template storage found: $TEMPLATE_STORAGE"
else
warn "No template storage (vztmpl) found"
fi
else
fail "Failed to retrieve storage information"
fi
# ============================================================================
# SECTION 4: TEMPLATE VALIDATION
# ============================================================================
section "4. LXC TEMPLATE VALIDATION"
if [ -n "$TEMPLATE_STORAGE" ]; then
info "Checking templates on storage: $TEMPLATE_STORAGE"
TEMPLATES_RESPONSE=$(curl -k -s -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${TARGET_HOST}:8006/api2/json/nodes/${TARGET_NODE}/storage/${TEMPLATE_STORAGE}/content?content=vztmpl" 2>&1)
if echo "$TEMPLATES_RESPONSE" | python3 -c "import sys, json; json.load(sys.stdin)['data']" 2>/dev/null; then
TEMPLATE_COUNT=$(echo "$TEMPLATES_RESPONSE" | python3 -c "import sys, json; print(len(json.load(sys.stdin)['data']))" 2>/dev/null)
pass "Found $TEMPLATE_COUNT template(s)"
# Check for Debian 12 template
DEBIAN_TEMPLATE=$(echo "$TEMPLATES_RESPONSE" | python3 -c "import sys, json; data=json.load(sys.stdin)['data']; result=[t['volid'] for t in data if 'debian-12' in t.get('volid', '')]; print(result[0] if result else '')" 2>/dev/null)
if [ -n "$DEBIAN_TEMPLATE" ]; then
pass "Debian 12 template found: $DEBIAN_TEMPLATE"
else
fail "Debian 12 template not found (required)"
info "Available templates:"
echo "$TEMPLATES_RESPONSE" | python3 -c "import sys, json; [print(' -', t['volid']) for t in json.load(sys.stdin)['data'][:10]]" 2>/dev/null
fi
else
warn "Could not retrieve template list"
fi
else
warn "Cannot check templates - no template storage configured"
fi
# ============================================================================
# SECTION 5: CONFIGURATION FILES
# ============================================================================
section "5. CONFIGURATION FILES VALIDATION"
CONFIG_DIR="$DEPLOY_DIR/config"
if [ -d "$CONFIG_DIR" ]; then
pass "Config directory exists: $CONFIG_DIR"
# Check proxmox.conf
if [ -f "$CONFIG_DIR/proxmox.conf" ]; then
pass "proxmox.conf exists"
source "$CONFIG_DIR/proxmox.conf" 2>/dev/null || true
if [ -n "${PROXMOX_HOST:-}" ]; then
if [ "$PROXMOX_HOST" = "$TARGET_HOST" ]; then
pass "PROXMOX_HOST configured correctly: $PROXMOX_HOST"
elif [ "$PROXMOX_HOST" = "proxmox.example.com" ]; then
warn "PROXMOX_HOST still set to example value"
else
warn "PROXMOX_HOST mismatch: $PROXMOX_HOST (expected: $TARGET_HOST)"
fi
else
fail "PROXMOX_HOST not set in proxmox.conf"
fi
[ -n "${PROXMOX_NODE:-}" ] && pass "PROXMOX_NODE configured: $PROXMOX_NODE" || warn "PROXMOX_NODE not set"
[ -n "${PROXMOX_STORAGE:-}" ] && pass "PROXMOX_STORAGE configured: $PROXMOX_STORAGE" || warn "PROXMOX_STORAGE not set"
[ -n "${CONTAINER_OS_TEMPLATE:-}" ] && pass "CONTAINER_OS_TEMPLATE configured: $CONTAINER_OS_TEMPLATE" || warn "CONTAINER_OS_TEMPLATE not set"
else
if [ -f "$CONFIG_DIR/proxmox.conf.example" ]; then
fail "proxmox.conf not found - copy from proxmox.conf.example and configure"
else
fail "proxmox.conf not found"
fi
fi
# Check network.conf
if [ -f "$CONFIG_DIR/network.conf" ]; then
pass "network.conf exists"
source "$CONFIG_DIR/network.conf" 2>/dev/null || true
[ -n "${SUBNET_BASE:-}" ] && pass "SUBNET_BASE configured: $SUBNET_BASE" || warn "SUBNET_BASE not set"
[ -n "${GATEWAY:-}" ] && pass "GATEWAY configured: $GATEWAY" || warn "GATEWAY not set"
else
if [ -f "$CONFIG_DIR/network.conf.example" ]; then
warn "network.conf not found (example exists)"
else
warn "network.conf not found"
fi
fi
else
fail "Config directory not found: $CONFIG_DIR"
fi
# ============================================================================
# SECTION 6: DEPLOYMENT SCRIPTS
# ============================================================================
section "6. DEPLOYMENT SCRIPTS VALIDATION"
SCRIPTS_DIR="$DEPLOY_DIR/scripts/deployment"
if [ -d "$SCRIPTS_DIR" ]; then
pass "Deployment scripts directory exists"
REQUIRED_SCRIPTS=(
"deploy-all.sh"
"deploy-besu-nodes.sh"
"deploy-services.sh"
"deploy-hyperledger-services.sh"
"deploy-monitoring.sh"
"deploy-explorer.sh"
)
for script in "${REQUIRED_SCRIPTS[@]}"; do
SCRIPT_PATH="$SCRIPTS_DIR/$script"
if [ -f "$SCRIPT_PATH" ]; then
if [ -x "$SCRIPT_PATH" ]; then
pass "$script exists and is executable"
# Check syntax (bash -n exits nonzero on a parse error)
if ! bash -n "$SCRIPT_PATH" 2>/dev/null; then
fail "$script has syntax errors"
fi
else
warn "$script exists but is not executable"
fi
else
warn "$script not found"
fi
done
else
fail "Deployment scripts directory not found: $SCRIPTS_DIR"
fi
# Check libraries
LIB_DIR="$DEPLOY_DIR/lib"
[ -f "$LIB_DIR/common.sh" ] && pass "common.sh library exists" || fail "common.sh library not found"
[ -f "$LIB_DIR/proxmox-api.sh" ] && pass "proxmox-api.sh library exists" || warn "proxmox-api.sh library not found"
# ============================================================================
# SECTION 7: INSTALLATION SCRIPTS
# ============================================================================
section "7. INSTALLATION SCRIPTS VALIDATION"
INSTALL_DIR="$DEPLOY_DIR/install"
if [ -d "$INSTALL_DIR" ]; then
pass "Install scripts directory exists"
REQUIRED_INSTALL_SCRIPTS=(
"besu-validator-install.sh"
"besu-sentry-install.sh"
"besu-rpc-install.sh"
"oracle-publisher-install.sh"
"ccip-monitor-install.sh"
"keeper-install.sh"
"monitoring-stack-install.sh"
"blockscout-install.sh"
)
for script in "${REQUIRED_INSTALL_SCRIPTS[@]}"; do
[ -f "$INSTALL_DIR/$script" ] && pass "$script exists" || warn "$script not found"
done
else
fail "Install scripts directory not found: $INSTALL_DIR"
fi
# ============================================================================
# SECTION 8: RESOURCE REQUIREMENTS
# ============================================================================
section "8. RESOURCE REQUIREMENTS VALIDATION"
# Load config if available
if [ -f "$CONFIG_DIR/proxmox.conf" ]; then
source "$CONFIG_DIR/proxmox.conf" 2>/dev/null || true
# Estimate resources
VALIDATOR_COUNT="${VALIDATORS_COUNT:-4}"
SENTRY_COUNT="${SENTRIES_COUNT:-3}"
RPC_COUNT="${RPC_COUNT:-3}"
# Per-container sizes (VALIDATOR_MEMORY etc.) come from proxmox.conf; unset values evaluate to 0 in bash arithmetic
ESTIMATED_MEMORY=$(( (VALIDATOR_MEMORY * VALIDATOR_COUNT) + (SENTRY_MEMORY * SENTRY_COUNT) + (RPC_MEMORY * RPC_COUNT) + MONITORING_MEMORY ))
ESTIMATED_DISK=$(( (VALIDATOR_DISK * VALIDATOR_COUNT) + (SENTRY_DISK * SENTRY_COUNT) + (RPC_DISK * RPC_COUNT) + MONITORING_DISK ))
info "Estimated requirements:"
info " Memory: ~$((ESTIMATED_MEMORY / 1024))GB"
info " Disk: ~${ESTIMATED_DISK}GB"
# Get node resources
NODE_STATUS=$(curl -k -s -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${TARGET_HOST}:8006/api2/json/nodes/${TARGET_NODE}/status" 2>&1)
if echo "$NODE_STATUS" | python3 -c "import sys, json; json.load(sys.stdin)['data']" 2>/dev/null; then
NODE_MEMORY=$(echo "$NODE_STATUS" | python3 -c "import sys, json; print(json.load(sys.stdin)['data']['memory']['total'])" 2>/dev/null || echo "0")
NODE_FREE_MEMORY=$(echo "$NODE_STATUS" | python3 -c "import sys, json; print(json.load(sys.stdin)['data']['memory']['free'])" 2>/dev/null || echo "0")
if [ "$NODE_MEMORY" != "0" ] && [ -n "$NODE_MEMORY" ]; then
MEMORY_GB=$((NODE_MEMORY / 1024 / 1024 / 1024))
FREE_MEMORY_GB=$((NODE_FREE_MEMORY / 1024 / 1024 / 1024))
info "Node resources: ${MEMORY_GB}GB total, ${FREE_MEMORY_GB}GB free"
if [ $FREE_MEMORY_GB -ge $((ESTIMATED_MEMORY / 1024)) ]; then
pass "Sufficient memory available"
else
warn "Limited free memory: need ~$((ESTIMATED_MEMORY / 1024))GB, have ${FREE_MEMORY_GB}GB"
fi
fi
fi
fi
# ============================================================================
# SECTION 9: EXISTING CONTAINERS
# ============================================================================
section "9. EXISTING CONTAINERS CHECK"
CONTAINERS_RESPONSE=$(curl -k -s -m 10 \
-H "Authorization: PVEAPIToken=${PROXMOX_USER}!${PROXMOX_TOKEN_NAME}=${PROXMOX_TOKEN_VALUE}" \
"https://${TARGET_HOST}:8006/api2/json/nodes/${TARGET_NODE}/lxc" 2>&1)
if echo "$CONTAINERS_RESPONSE" | python3 -c "import sys, json; json.load(sys.stdin)['data']" 2>/dev/null; then
EXISTING_COUNT=$(echo "$CONTAINERS_RESPONSE" | python3 -c "import sys, json; print(len(json.load(sys.stdin)['data']))" 2>/dev/null)
if [ "$EXISTING_COUNT" -gt 0 ]; then
info "Found $EXISTING_COUNT existing container(s)"
# Check for VMID conflicts (the shell array is expanded into the python snippet
# so the expected list is defined only once)
EXPECTED_VMIDS=(106 107 108 109 110 111 112 115 116 117 120 121 122 130 140 150 151 152 153)
CONFLICTS=$(echo "$CONTAINERS_RESPONSE" | python3 -c "
import sys, json
data = json.load(sys.stdin)['data']
existing = {c['vmid'] for c in data}
expected = [int(v) for v in '${EXPECTED_VMIDS[*]}'.split()]
conflicts = [v for v in expected if v in existing]
print(' '.join(map(str, conflicts)))" 2>/dev/null)
if [ -n "$CONFLICTS" ]; then
warn "VMID conflicts: $CONFLICTS (containers already exist)"
else
pass "No VMID conflicts detected"
fi
else
pass "No existing containers (clean slate)"
fi
else
warn "Could not retrieve container list"
fi
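# jq equivalent of the python conflict check above (sketch): the other validator
# script in this commit performs the same test with jq, so either tool works.
# Assigned to a separate variable so the behavior above is unchanged.

```shell
JQ_CONFLICTS=$(echo "$CONTAINERS_RESPONSE" | jq -r \
'[.data[].vmid] as $have
| [106, 107, 108, 109, 110, 111, 112, 115, 116, 117, 120, 121, 122, 130, 140, 150, 151, 152, 153]
| map(select(. as $v | $have | index($v))) | map(tostring) | join(" ")' 2>/dev/null)
```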
# ============================================================================
# SUMMARY
# ============================================================================
section "VALIDATION SUMMARY"
TOTAL_PASSED=${#PASSED_ITEMS[@]}
TOTAL_FAILED=${#FAILED_ITEMS[@]}
TOTAL_WARNINGS=${#WARNING_ITEMS[@]}
echo -e "${CYAN}Results:${NC}"
echo -e " ${GREEN}Passed:${NC} $TOTAL_PASSED"
echo -e " ${RED}Failed:${NC} $TOTAL_FAILED"
echo -e " ${YELLOW}Warnings:${NC} $TOTAL_WARNINGS"
echo ""
if [ $TOTAL_FAILED -eq 0 ]; then
echo -e "${GREEN}✅ Deployment validation PASSED${NC}"
echo ""
echo "Next steps:"
echo " 1. Review and update configuration files in $CONFIG_DIR/"
echo " 2. Run deployment: cd $DEPLOY_DIR && ./scripts/deployment/deploy-all.sh"
exit 0
else
echo -e "${RED}❌ Deployment validation FAILED${NC}"
echo ""
echo "Critical issues found. Please fix the failures above before deployment."
exit 1
fi


@@ -0,0 +1,374 @@
#!/usr/bin/env bash
# Prerequisites Check Script
# Validates that all required files and directories exist in the source project
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
source "$PROJECT_ROOT/lib/common.sh" 2>/dev/null || {
log_info() { echo "[INFO] $1"; }
log_success() { echo "[✓] $1"; }
log_warn() { echo "[WARNING] $1"; }
log_error() { echo "[ERROR] $1"; }
}
# Source project path
SOURCE_PROJECT="${1:-}"
if [[ -z "$SOURCE_PROJECT" ]]; then
SOURCE_PROJECT="/home/intlc/projects/smom-dbis-138"
log_info "Using default source project: $SOURCE_PROJECT"
fi
# Convert to absolute path (fall back to a $PROJECT_ROOT-relative guess if cd fails)
if [[ "$SOURCE_PROJECT" != /* ]]; then
SOURCE_PROJECT="$(cd "$PROJECT_ROOT" && cd "$SOURCE_PROJECT" 2>/dev/null && pwd || echo "$PROJECT_ROOT/$SOURCE_PROJECT")"
fi
log_info "========================================="
log_info "Prerequisites Check"
log_info "========================================="
log_info ""
log_info "Source project: $SOURCE_PROJECT"
log_info ""
ERRORS=0
WARNINGS=0
# Check source project exists
if [[ ! -d "$SOURCE_PROJECT" ]]; then
log_error "Source project directory not found: $SOURCE_PROJECT"
exit 1
fi
log_success "Source project directory exists"
# Check for required top-level directories
log_info ""
log_info "=== Checking Required Directories ==="
REQUIRED_DIRS=(
"config"
"keys/validators"
)
for dir in "${REQUIRED_DIRS[@]}"; do
if [[ -d "$SOURCE_PROJECT/$dir" ]]; then
log_success "$dir/ exists"
else
log_error "$dir/ missing"
ERRORS=$((ERRORS + 1))
fi
done
# Check for config/nodes/ structure (node-specific directories)
HAS_NODE_DIRS=false
if [[ -d "$SOURCE_PROJECT/config/nodes" ]]; then
log_success "✓ config/nodes/ directory exists"
HAS_NODE_DIRS=true
# Check for node subdirectories
log_info ""
log_info "Checking node-specific directories in config/nodes/:"
NODE_DIRS=$(find "$SOURCE_PROJECT/config/nodes" -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l)
if [[ $NODE_DIRS -gt 0 ]]; then
log_success "✓ Found $NODE_DIRS node directory(ies)"
log_info " Sample node directories:"
find "$SOURCE_PROJECT/config/nodes" -mindepth 1 -maxdepth 1 -type d 2>/dev/null | head -5 | while read -r dir; do
log_info " - $(basename "$dir")"
done
else
log_warn "⚠ config/nodes/ exists but is empty"
WARNINGS=$((WARNINGS + 1))
fi
else
log_info "config/nodes/ not found (using flat config structure)"
fi
# Check for required files (flat structure)
log_info ""
log_info "=== Checking Required Files ==="
REQUIRED_FILES=(
"config/genesis.json"
"config/permissions-nodes.toml"
"config/permissions-accounts.toml"
)
OPTIONAL_FILES=(
"config/static-nodes.json"
"config/config-validator.toml"
"config/config-sentry.toml"
"config/config-rpc-public.toml"
"config/config-rpc-core.toml"
)
for file in "${REQUIRED_FILES[@]}"; do
if [[ -f "$SOURCE_PROJECT/$file" ]]; then
log_success "$file exists"
else
log_error "$file missing (REQUIRED)"
ERRORS=$((ERRORS + 1))
fi
done
for file in "${OPTIONAL_FILES[@]}"; do
if [[ -f "$SOURCE_PROJECT/$file" ]]; then
log_success "$file exists"
else
log_warn "$file not found (optional, may be in config/nodes/)"
WARNINGS=$((WARNINGS + 1))
fi
done
# Check node-specific files if config/nodes/ exists
if [[ "$HAS_NODE_DIRS" == "true" ]]; then
log_info ""
log_info "=== Checking Node-Specific Files ==="
# Check first node directory for expected files
FIRST_NODE_DIR=$(find "$SOURCE_PROJECT/config/nodes" -mindepth 1 -maxdepth 1 -type d 2>/dev/null | head -1)
if [[ -n "$FIRST_NODE_DIR" ]] && [[ -d "$FIRST_NODE_DIR" ]]; then
NODE_NAME=$(basename "$FIRST_NODE_DIR")
log_info "Checking structure in: config/nodes/$NODE_NAME/"
NODE_FILES=(
"config.toml"
"nodekey"
"nodekey.pub"
)
for file in "${NODE_FILES[@]}"; do
if [[ -f "$FIRST_NODE_DIR/$file" ]]; then
log_success "✓ config/nodes/$NODE_NAME/$file exists"
else
log_warn "⚠ config/nodes/$NODE_NAME/$file not found"
WARNINGS=$((WARNINGS + 1))
fi
done
# List all files in the node directory
log_info ""
log_info "Files in config/nodes/$NODE_NAME/:"
find "$FIRST_NODE_DIR" -maxdepth 1 -type f 2>/dev/null | while read -r file; do
log_info " - $(basename "$file")"
done
fi
fi
# Check validator keys
log_info ""
log_info "=== Checking Validator Keys ==="
KEYS_DIR="$SOURCE_PROJECT/keys/validators"
if [[ -d "$KEYS_DIR" ]]; then
KEY_COUNT=$(find "$KEYS_DIR" -mindepth 1 -maxdepth 1 -type d -name "validator-*" 2>/dev/null | wc -l)
if [[ $KEY_COUNT -gt 0 ]]; then
log_success "✓ Found $KEY_COUNT validator key directory(ies)"
log_info " Validator directories:"
find "$KEYS_DIR" -mindepth 1 -maxdepth 1 -type d -name "validator-*" 2>/dev/null | while read -r dir; do
log_info " - $(basename "$dir")"
done
# Try to get expected count from config files, or use default
EXPECTED_VALIDATORS=5
# Try multiple possible config file locations
CONFIG_PATHS=(
"$SCRIPT_DIR/../../smom-dbis-138-proxmox/config/proxmox.conf" # scripts/validation -> sibling smom-dbis-138-proxmox/config
"$SCRIPT_DIR/../../config/proxmox.conf" # smom-dbis-138-proxmox/scripts/validation -> smom-dbis-138-proxmox/config
"$SCRIPT_DIR/../../../smom-dbis-138-proxmox/config/proxmox.conf" # one directory further up -> smom-dbis-138-proxmox/config
"$PROJECT_ROOT/smom-dbis-138-proxmox/config/proxmox.conf"
"$PROJECT_ROOT/config/proxmox.conf"
"$(dirname "$SOURCE_PROJECT")/../proxmox/smom-dbis-138-proxmox/config/proxmox.conf"
"/home/intlc/projects/proxmox/smom-dbis-138-proxmox/config/proxmox.conf"
)
for CONFIG_FILE in "${CONFIG_PATHS[@]}"; do
# Resolve relative paths; keep the original value if resolution fails
if [[ "$CONFIG_FILE" != /* ]]; then
RESOLVED_DIR="$(cd "$SCRIPT_DIR" && cd "$(dirname "$CONFIG_FILE")" 2>/dev/null && pwd)" || RESOLVED_DIR=""
if [[ -n "$RESOLVED_DIR" ]]; then
CONFIG_FILE="$RESOLVED_DIR/$(basename "$CONFIG_FILE")"
fi
fi
if [[ -f "$CONFIG_FILE" ]]; then
# Extract VALIDATOR_COUNT value, handling comments on same line
CONFIG_VALIDATOR_COUNT=$(grep "^VALIDATOR_COUNT=" "$CONFIG_FILE" 2>/dev/null | head -1 | sed 's/^VALIDATOR_COUNT=//' | sed 's/[[:space:]]*#.*$//' | tr -d ' "' || echo "")
if [[ -n "$CONFIG_VALIDATOR_COUNT" ]] && [[ "$CONFIG_VALIDATOR_COUNT" =~ ^[0-9]+$ ]]; then
EXPECTED_VALIDATORS="$CONFIG_VALIDATOR_COUNT"
log_info " Using VALIDATOR_COUNT=$EXPECTED_VALIDATORS from $(basename "$(dirname "$CONFIG_FILE")")/$(basename "$CONFIG_FILE")"
break
fi
fi
done
if [[ $KEY_COUNT -eq $EXPECTED_VALIDATORS ]]; then
log_success "✓ Validator key count matches expected: $EXPECTED_VALIDATORS"
else
log_warn "⚠ Validator key count mismatch: expected $EXPECTED_VALIDATORS, found $KEY_COUNT"
log_info " Note: This may be acceptable if you're deploying fewer validators than configured"
log_info " Update VALIDATOR_COUNT=$KEY_COUNT in config/proxmox.conf if this is intentional"
WARNINGS=$((WARNINGS + 1))
fi
else
log_warn "⚠ keys/validators/ exists but no validator-* directories found"
WARNINGS=$((WARNINGS + 1))
fi
else
log_error "✗ keys/validators/ directory missing"
ERRORS=$((ERRORS + 1))
fi
# Validate genesis.json structure
log_info ""
log_info "=== Validating Genesis.json ==="
GENESIS_FILE="$SOURCE_PROJECT/config/genesis.json"
if [[ -f "$GENESIS_FILE" ]]; then
# Check JSON syntax
if python3 -m json.tool "$GENESIS_FILE" >/dev/null 2>&1; then
log_success "✓ genesis.json syntax valid"
# Check for QBFT configuration
if python3 -c "import sys, json; data=json.load(open('$GENESIS_FILE')); exit(0 if 'config' in data and 'qbft' in data.get('config', {}) else 1)" 2>/dev/null; then
log_success "✓ QBFT configuration present in genesis.json"
else
log_error "✗ QBFT configuration missing in genesis.json"
ERRORS=$((ERRORS + 1))
fi
# Check for extraData field
if python3 -c "import sys, json; data=json.load(open('$GENESIS_FILE')); exit(0 if 'extraData' in data else 1)" 2>/dev/null; then
log_success "✓ extraData field present in genesis.json"
# Validate extraData format
EXTRA_DATA=$(python3 -c "import sys, json; print(json.load(open('$GENESIS_FILE')).get('extraData', ''))" 2>/dev/null)
if [[ "$EXTRA_DATA" =~ ^0x[0-9a-fA-F]*$ ]] || [[ -z "$EXTRA_DATA" ]]; then
log_success "✓ extraData format valid"
else
log_error "✗ extraData format invalid: $EXTRA_DATA"
ERRORS=$((ERRORS + 1))
fi
else
log_error "✗ extraData field missing in genesis.json"
ERRORS=$((ERRORS + 1))
fi
# For dynamic validators, verify no validators array in QBFT
if python3 -c "import sys, json; data=json.load(open('$GENESIS_FILE')); qbft=data.get('config', {}).get('qbft', {}); exit(0 if 'validators' not in qbft else 1)" 2>/dev/null; then
log_success "✓ Dynamic validator setup confirmed (no validators array in QBFT)"
else
log_warn "⚠ Validators array found in QBFT config (expected for dynamic validators)"
WARNINGS=$((WARNINGS + 1))
fi
else
log_error "✗ genesis.json syntax invalid"
ERRORS=$((ERRORS + 1))
fi
else
log_error "✗ genesis.json not found"
ERRORS=$((ERRORS + 1))
fi
# Check for CCIP configuration files (if CCIP deployment planned)
# Note: these checks must run before the summary below, whose exit statements end the script
log_info ""
log_info "=== Checking CCIP Configuration Files ==="
CCIP_REQUIRED_FILES=(
"config/ccip/ops-config.yaml"
"config/ccip/commit-config.yaml"
"config/ccip/exec-config.yaml"
"config/ccip/rmn-config.yaml"
)
CCIP_OPTIONAL_FILES=(
"config/ccip/monitor-config.yaml"
"config/ccip/secrets.yaml"
)
if [[ -d "$SOURCE_PROJECT/config/ccip" ]] || [[ -d "$SOURCE_PROJECT/ccip" ]]; then
log_info "CCIP configuration directory found"
for file in "${CCIP_REQUIRED_FILES[@]}"; do
if [[ -f "$SOURCE_PROJECT/$file" ]]; then
log_success "$file exists"
else
# Try alternative location (ccip/ at the project root)
alt_file="${file/config\/ccip/ccip}"
if [[ -f "$SOURCE_PROJECT/$alt_file" ]]; then
log_success "$alt_file exists (alternative location)"
else
log_warn "$file not found (may be optional or in a different location)"
WARNINGS=$((WARNINGS + 1))
fi
fi
done
for file in "${CCIP_OPTIONAL_FILES[@]}"; do
if [[ -f "$SOURCE_PROJECT/$file" ]]; then
log_success "$file exists"
else
log_info " (optional) $file not found"
fi
done
else
log_info "CCIP configuration directory not found (CCIP deployment may not be planned)"
fi
# Check for other service configurations
log_info ""
log_info "=== Checking Other Service Configuration Files ==="
SERVICE_CONFIGS=(
"config/blockscout.env"
"config/firefly/config.yaml"
"config/fabric/network-config.yaml"
"config/indy/pool-config.yaml"
"config/cacti/config.yaml"
)
for config in "${SERVICE_CONFIGS[@]}"; do
service_name=$(basename "$(dirname "$config")")
if [[ -f "$SOURCE_PROJECT/$config" ]]; then
log_success "$config exists"
else
log_info " (optional) $config not found - $service_name may use different config location"
fi
done
# Summary
log_info ""
log_info "========================================="
log_info "Prerequisites Check Summary"
log_info "========================================="
log_info "Errors: $ERRORS"
log_info "Warnings: $WARNINGS"
log_info ""
if [[ $ERRORS -eq 0 ]]; then
if [[ $WARNINGS -eq 0 ]]; then
log_success "✓ All prerequisites met!"
log_info ""
log_info "Recommended next steps:"
log_info " 1. Verify storage configuration: sudo ./scripts/validation/verify-storage-config.sh"
log_info " 2. Verify network configuration: ./scripts/validation/verify-network-config.sh"
log_info " 3. Pre-cache OS template: sudo ./scripts/deployment/pre-cache-os-template.sh"
log_info ""
log_info "Or run all verifications: ./scripts/validation/verify-all.sh $SOURCE_PROJECT"
exit 0
else
log_warn "⚠ Prerequisites met with $WARNINGS warning(s)"
log_info "Review warnings above, but deployment may proceed"
log_info ""
log_info "Recommended next steps:"
log_info " 1. Review warnings above"
log_info " 2. Verify storage configuration: sudo ./scripts/validation/verify-storage-config.sh"
log_info " 3. Verify network configuration: ./scripts/validation/verify-network-config.sh"
exit 0
fi
else
log_error "✗ Prerequisites check failed with $ERRORS error(s)"
log_info "Please fix the errors before proceeding with deployment"
exit 1
fi
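The genesis.json section of this script launches python3 once per field. As a sketch, the same checks can be consolidated into a single interpreter call (the sample genesis generated here is illustrative only):

```bash
#!/usr/bin/env bash
set -euo pipefail
# Sketch: the per-field python3 calls collapsed into one interpreter run.
GENESIS_FILE="$(mktemp)"
cat > "$GENESIS_FILE" <<'JSON'
{"config": {"qbft": {"blockperiodseconds": 5}}, "extraData": "0xdeadbeef"}
JSON
python3 - "$GENESIS_FILE" <<'PY'
import json, re, sys

path = sys.argv[1]
errors = []
try:
    data = json.load(open(path))
except (OSError, json.JSONDecodeError) as e:
    sys.exit(f"genesis.json unreadable or invalid: {e}")

if "qbft" not in data.get("config", {}):
    errors.append("QBFT configuration missing")
extra = data.get("extraData")
if extra is None:
    errors.append("extraData field missing")
elif not re.fullmatch(r"0x[0-9a-fA-F]*", extra):
    errors.append("extraData format invalid: " + extra)
if "validators" in data.get("config", {}).get("qbft", {}):
    errors.append("validators array present (dynamic setup expected)")

for e in errors:
    print("ERROR:", e)
sys.exit(1 if errors else 0)
PY
echo "genesis checks passed"
rm -f "$GENESIS_FILE"
```

Reading the file once also means a syntax error is reported exactly once rather than by every subsequent check.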


@@ -0,0 +1,89 @@
#!/usr/bin/env bash
# Quick Verification - Run All Checks
# Convenience script to run all verification checks before deployment
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Try to find project root - could be at same level or in smom-dbis-138-proxmox subdirectory
if [[ -d "$SCRIPT_DIR/../../smom-dbis-138-proxmox" ]]; then
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../../smom-dbis-138-proxmox" && pwd)"
elif [[ -d "$SCRIPT_DIR/../.." ]]; then
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
else
PROJECT_ROOT="$SCRIPT_DIR/../.."
fi
SOURCE_PROJECT="${1:-/home/intlc/projects/smom-dbis-138}"
echo "════════════════════════════════════════════════════════"
echo " Complete Verification - All Checks"
echo "════════════════════════════════════════════════════════"
echo ""
echo "Source Project: $SOURCE_PROJECT"
echo ""
# 1. Prerequisites Check
echo "=== 1. Prerequisites Check ==="
if [[ -f "$SCRIPT_DIR/check-prerequisites.sh" ]]; then
"$SCRIPT_DIR/check-prerequisites.sh" "$SOURCE_PROJECT" || {
echo ""
echo "❌ Prerequisites check failed. Fix issues before continuing."
exit 1
}
elif [[ -f "$PROJECT_ROOT/scripts/validation/check-prerequisites.sh" ]]; then
"$PROJECT_ROOT/scripts/validation/check-prerequisites.sh" "$SOURCE_PROJECT" || {
echo ""
echo "❌ Prerequisites check failed. Fix issues before continuing."
exit 1
}
else
echo "⚠ check-prerequisites.sh not found, skipping..."
fi
echo ""
echo "=== 2. Storage Configuration (requires root on Proxmox host) ==="
if [[ $EUID -eq 0 ]] && command -v pvesm &>/dev/null; then
if [[ -f "$SCRIPT_DIR/verify-storage-config.sh" ]]; then
"$SCRIPT_DIR/verify-storage-config.sh" || {
echo ""
echo "⚠ Storage verification had issues. Review output above."
}
elif [[ -f "$PROJECT_ROOT/scripts/validation/verify-storage-config.sh" ]]; then
"$PROJECT_ROOT/scripts/validation/verify-storage-config.sh" || {
echo ""
echo "⚠ Storage verification had issues. Review output above."
}
else
echo "⚠ verify-storage-config.sh not found, skipping..."
fi
else
echo " Skipping (not running as root on Proxmox host)"
echo " To verify storage, run: sudo ./scripts/validation/verify-storage-config.sh"
fi
echo ""
echo "=== 3. Network Configuration ==="
if [[ -f "$SCRIPT_DIR/verify-network-config.sh" ]]; then
"$SCRIPT_DIR/verify-network-config.sh" || {
echo ""
echo "⚠ Network verification had issues. Review output above."
}
elif [[ -f "$PROJECT_ROOT/scripts/validation/verify-network-config.sh" ]]; then
"$PROJECT_ROOT/scripts/validation/verify-network-config.sh" || {
echo ""
echo "⚠ Network verification had issues. Review output above."
}
else
echo "⚠ verify-network-config.sh not found, skipping..."
fi
echo ""
echo "════════════════════════════════════════════════════════"
echo " Verification Complete"
echo "════════════════════════════════════════════════════════"
echo ""
echo "If all checks passed, you're ready to deploy:"
echo " sudo ./scripts/deployment/deploy-phased.sh --source-project $SOURCE_PROJECT"
echo ""
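All three checks in this script repeat the same two-location lookup. A sketch of a helper that removes the duplication (`run_check` is a name introduced here, not part of the scripts above):

```bash
#!/usr/bin/env bash
# Sketch: try $SCRIPT_DIR first, then $PROJECT_ROOT/scripts/validation, warn if absent.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-.}")" && pwd)"
PROJECT_ROOT="${PROJECT_ROOT:-$SCRIPT_DIR/../..}"

run_check() {
    local name="$1"; shift
    local candidate
    for candidate in "$SCRIPT_DIR/$name" "$PROJECT_ROOT/scripts/validation/$name"; do
        if [[ -f "$candidate" ]]; then
            "$candidate" "$@" || echo "⚠ $name reported issues. Review output above."
            return 0
        fi
    done
    echo "⚠ $name not found, skipping..."
}

run_check verify-network-config.sh
```

With the helper, adding a fourth verification step is a one-line change instead of another if/elif/else block.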


@@ -0,0 +1,181 @@
#!/usr/bin/env bash
# Verify Network Configuration
# Checks network connectivity and bandwidth to Proxmox host
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
source "$PROJECT_ROOT/lib/common.sh" 2>/dev/null || {
log_info() { echo "[INFO] $1"; }
log_success() { echo "[✓] $1"; }
log_error() { echo "[ERROR] $1"; }
log_warn() { echo "[WARN] $1"; }
}
# Load configuration
load_config "$PROJECT_ROOT/config/proxmox.conf" 2>/dev/null || true
log_info "========================================="
log_info "Network Configuration Verification"
log_info "========================================="
log_info ""
# Check network connectivity
log_info "=== Network Connectivity ==="
# Get Proxmox host IP/name (if configured)
PROXMOX_HOST="${PROXMOX_HOST:-localhost}"
log_info "Proxmox host: $PROXMOX_HOST"
# Check if we can resolve the host
if command_exists ping; then
if ping -c 1 -W 2 "$PROXMOX_HOST" &>/dev/null; then
log_success "Proxmox host is reachable"
# Get average latency
LATENCY=$(ping -c 3 -W 2 "$PROXMOX_HOST" 2>/dev/null | grep "avg" | awk -F'/' '{print $5}' || echo "N/A")
if [[ "$LATENCY" != "N/A" ]]; then
log_info " Average latency: ${LATENCY}ms"
if (( $(echo "$LATENCY > 100" | bc -l 2>/dev/null || echo 0) )); then
log_warn " High latency detected (>100ms). May impact deployment performance."
fi
fi
else
log_warn "Cannot ping Proxmox host (may be normal if running locally)"
fi
fi
# Check network interfaces
log_info ""
log_info "=== Network Interfaces ==="
if [[ -f /proc/net/dev ]]; then
# List active network interfaces
INTERFACES=$(ip -o link show 2>/dev/null | awk -F': ' '{print $2}' | grep -vx "lo" || true)
if [[ -n "$INTERFACES" ]]; then
for iface in $INTERFACES; do
if ip link show "$iface" 2>/dev/null | grep -q "state UP"; then
log_info " $iface: UP"
# Get link speed if available
if [[ -f "/sys/class/net/$iface/speed" ]]; then
SPEED=$(cat "/sys/class/net/$iface/speed" 2>/dev/null || echo "unknown")
if [[ "$SPEED" != "-1" ]] && [[ "$SPEED" != "unknown" ]]; then
if [[ $SPEED -ge 1000 ]]; then
log_success " Speed: ${SPEED}Mbps (Gigabit or better - recommended)"
elif [[ $SPEED -ge 100 ]]; then
log_warn " Speed: ${SPEED}Mbps (Fast Ethernet - may limit performance)"
else
log_warn " Speed: ${SPEED}Mbps (Slow - may significantly impact deployment)"
fi
fi
fi
else
log_info " $iface: DOWN"
fi
done
fi
else
log_warn "Cannot check network interfaces (/proc/net/dev not available)"
fi
# Check Proxmox bridge configuration (if on Proxmox host)
log_info ""
log_info "=== Proxmox Bridge Configuration ==="
CONFIGURED_BRIDGE="${PROXMOX_BRIDGE:-vmbr0}"
log_info "Configured bridge: $CONFIGURED_BRIDGE"
if command_exists ip; then
if ip link show "$CONFIGURED_BRIDGE" &>/dev/null; then
log_success "Bridge '$CONFIGURED_BRIDGE' exists"
BRIDGE_STATUS=$(ip link show "$CONFIGURED_BRIDGE" 2>/dev/null | grep -o "state [A-Z]*" | awk '{print $2}')
log_info " Status: $BRIDGE_STATUS"
if [[ "$BRIDGE_STATUS" == "UP" ]]; then
log_success " Bridge is UP"
else
log_warn " Bridge is DOWN - containers may not have network access"
fi
else
log_error "Bridge '$CONFIGURED_BRIDGE' not found"
log_info " Available bridges:"
ip link show type bridge 2>/dev/null | grep "^[0-9]" | awk -F': ' '{print " - " $2}' || log_info " (none found)"
fi
fi
# Check DNS resolution (important for package downloads)
log_info ""
log_info "=== DNS Configuration ==="
if command_exists nslookup || command_exists host; then
TEST_DOMAIN="github.com"
if nslookup "$TEST_DOMAIN" &>/dev/null || host "$TEST_DOMAIN" &>/dev/null; then
log_success "DNS resolution working ($TEST_DOMAIN)"
else
log_error "DNS resolution failed - package downloads may fail"
fi
else
log_warn "Cannot test DNS (nslookup/host not available)"
fi
# Check internet connectivity (for package downloads)
log_info ""
log_info "=== Internet Connectivity ==="
if command_exists curl; then
if curl -s --max-time 5 https://github.com &>/dev/null; then
log_success "Internet connectivity working (GitHub reachable)"
# Test download speed (rough estimate)
log_info " Testing download speed..."
SPEED_TEST=$(curl -s -o /dev/null -w "%{speed_download}" --max-time 10 https://github.com 2>/dev/null || echo "0")
if [[ -n "$SPEED_TEST" ]] && [[ "$SPEED_TEST" != "0" ]]; then
# Convert bytes/s to Mbps
SPEED_MBPS=$(echo "scale=2; $SPEED_TEST * 8 / 1024 / 1024" | bc 2>/dev/null || echo "N/A")
if [[ "$SPEED_MBPS" != "N/A" ]]; then
log_info " Approximate download speed: ${SPEED_MBPS} Mbps"
if (( $(echo "$SPEED_MBPS > 100" | bc -l 2>/dev/null || echo 0) )); then
log_success " Speed is good for deployment (recommended: >100 Mbps)"
elif (( $(echo "$SPEED_MBPS > 10" | bc -l 2>/dev/null || echo 0) )); then
log_warn " Speed is acceptable but may slow deployment (recommended: >100 Mbps)"
else
log_warn " Speed is slow - deployment may be significantly slower"
log_info " Consider:"
log_info " • Using local package mirrors"
log_info " • Pre-caching packages"
log_info " • Upgrading network connection"
fi
fi
fi
else
log_warn "Internet connectivity test failed (may be normal in isolated networks)"
fi
else
log_warn "Cannot test internet connectivity (curl not available)"
fi
# Network optimization recommendations
log_info ""
log_info "=== Network Optimization Recommendations ==="
log_info ""
log_info "For optimal deployment performance:"
log_info " ✓ Use Gigabit Ethernet (1 Gbps) or faster"
log_info " ✓ Ensure low latency (<50ms) to Proxmox host"
log_info " ✓ Use wired connection instead of wireless"
log_info " ✓ Consider local package mirrors for apt"
log_info " ✓ Pre-cache OS templates (saves 5-10 minutes)"
log_info " ✓ Monitor network usage during deployment"
log_info ""
log_info "Expected network usage:"
log_info " • OS template download: ~200-500 MB (one-time)"
log_info " • Package downloads: ~500 MB - 2 GB per container"
log_info " • Configuration file transfers: Minimal"
log_info " • Total for 67 containers: ~35-135 GB"
log_info ""
log_success "Network configuration verification complete"
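The speed thresholds in this script lean on `bc`, which minimal hosts often lack. A dependency-free variant of the same classification (a sketch; `classify_speed` is a name introduced here):

```bash
#!/usr/bin/env bash
# Sketch: classify a download speed given in bytes/s (curl's %{speed_download})
# using awk for the float math instead of bc.
classify_speed() {
    awk -v bps="$1" 'BEGIN {
        mbps = bps * 8 / 1024 / 1024
        if (mbps > 100)      print "good"
        else if (mbps > 10)  print "acceptable"
        else                 print "slow"
    }'
}

classify_speed 26214400   # 25 MiB/s ≈ 200 Mbps -> good
```

awk is part of every POSIX base system, so the check works even where `bc` was never installed.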


@@ -0,0 +1,119 @@
#!/usr/bin/env bash
# Verify Storage Configuration
# Checks if storage is configured for optimal deployment performance
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
source "$PROJECT_ROOT/lib/common.sh" 2>/dev/null || {
log_info() { echo "[INFO] $1"; }
log_success() { echo "[✓] $1"; }
log_error() { echo "[ERROR] $1"; }
log_warn() { echo "[WARN] $1"; }
}
# Load configuration
load_config "$PROJECT_ROOT/config/proxmox.conf" 2>/dev/null || true
log_info "========================================="
log_info "Storage Configuration Verification"
log_info "========================================="
log_info ""
# Check if running on Proxmox host
if ! command_exists pvesm; then
log_error "pvesm command not found. This script must be run on Proxmox host."
exit 1
fi
# Get configured storage
CONFIGURED_STORAGE="${PROXMOX_STORAGE:-local-lvm}"
log_info "Configured storage: $CONFIGURED_STORAGE"
# List available storage
log_info ""
log_info "=== Available Storage ==="
pvesm status 2>/dev/null || {
log_error "Failed to list storage. Are you running as root?"
exit 1
}
log_info ""
log_info "=== Storage Details ==="
# Check if configured storage exists
STORAGE_LINE=$(pvesm status 2>/dev/null | awk -v s="$CONFIGURED_STORAGE" '$1 == s' || true)
if [[ -n "$STORAGE_LINE" ]]; then
log_success "Configured storage '$CONFIGURED_STORAGE' exists"
# Parse details from the single matching line (an exact field match avoids "local" also matching "local-lvm")
STORAGE_TYPE=$(awk '{print $2}' <<< "$STORAGE_LINE")
STORAGE_STATUS=$(awk '{print $3}' <<< "$STORAGE_LINE")
STORAGE_TOTAL=$(awk '{print $4}' <<< "$STORAGE_LINE")
STORAGE_USED=$(awk '{print $5}' <<< "$STORAGE_LINE")
STORAGE_AVAIL=$(awk '{print $6}' <<< "$STORAGE_LINE")
log_info " Type: $STORAGE_TYPE"
log_info " Status: $STORAGE_STATUS"
log_info " Total: $STORAGE_TOTAL"
log_info " Used: $STORAGE_USED"
log_info " Available: $STORAGE_AVAIL"
# Check storage type
log_info ""
log_info "=== Storage Type Analysis ==="
if [[ "$STORAGE_TYPE" == "lvm" ]] || [[ "$STORAGE_TYPE" == "lvmthin" ]] || [[ "$STORAGE_TYPE" == "zfspool" ]] || [[ "$STORAGE_TYPE" == "dir" ]]; then
if echo "$CONFIGURED_STORAGE" | grep -q "local"; then
log_success "Storage is local (recommended for deployment performance)"
log_info " Local storage provides:"
log_info " • Faster container creation (15-30 min saved)"
log_info " • Faster OS template installation"
log_info " • Lower latency for I/O operations"
else
log_warn "Storage appears to be network-based"
log_info " Network storage considerations:"
log_info " • Slower container creation"
log_info " • Higher latency"
log_info " • May benefit from caching"
log_info ""
log_info " Recommendation: Use local storage if possible for deployment"
fi
else
log_warn "Storage type '$STORAGE_TYPE' detected"
log_info " Verify this is optimal for your deployment needs"
fi
# Check available space (estimate requirement: ~100GB per container, 67 containers = ~6.7TB)
log_info ""
log_info "=== Storage Capacity Check ==="
ESTIMATED_NEED="6.7T" # Rough estimate for 67 containers
log_info "Estimated storage needed: ~$ESTIMATED_NEED (for 67 containers)"
log_info "Available storage: $STORAGE_AVAIL"
# Note: Actual space check would require parsing storage sizes
log_info " Verify sufficient space is available for deployment"
else
log_error "Configured storage '$CONFIGURED_STORAGE' not found"
log_info ""
log_info "Available storage options:"
pvesm status 2>/dev/null | awk '{print " - " $1 " (" $2 ")"}'
log_info ""
log_info "To fix: Update PROXMOX_STORAGE in config/proxmox.conf"
exit 1
fi
log_info ""
log_info "=== Storage Performance Recommendations ==="
log_info ""
log_info "For optimal deployment performance:"
log_info " ✓ Use local storage (local-lvm, local, local-zfs)"
log_info " ✓ Ensure sufficient available space"
log_info " ✓ Monitor storage I/O during deployment"
log_info " ✓ Consider SSD-based storage for faster operations"
log_info ""
log_success "Storage configuration verification complete"
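The capacity check in this script stops at a reminder to verify space manually. A sketch of the numeric comparison, assuming the Available column from `pvesm status` is reported in KiB (verify the unit on your Proxmox release; `check_capacity` is a name introduced here):

```bash
#!/usr/bin/env bash
# Sketch: numeric storage-capacity check for the deployment estimate above.
check_capacity() {
    local avail_kib="$1" needed_gib="$2"
    local needed_kib=$(( needed_gib * 1024 * 1024 ))
    if (( avail_kib >= needed_kib )); then
        echo "OK: ${avail_kib} KiB available >= ${needed_gib} GiB needed"
    else
        echo "INSUFFICIENT: ${avail_kib} KiB available < ${needed_gib} GiB needed"
    fi
}

# 67 containers at ~100 GiB each ≈ 6700 GiB
check_capacity 8000000000 6700
```

In the real script the first argument would come from the parsed `pvesm status` line rather than a literal.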


@@ -0,0 +1,39 @@
#!/bin/bash
# Verify deployment files are present before deployment
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
DEPLOY_SOURCE="$PROJECT_ROOT/smom-dbis-138-proxmox"
echo "Checking deployment files..."
echo ""
REQUIRED_FILES=(
"scripts/deployment/deploy-all.sh"
"scripts/deployment/deploy-besu-nodes.sh"
"scripts/deployment/deploy-services.sh"
"install/besu-validator-install.sh"
"install/besu-sentry-install.sh"
"config/proxmox.conf"
)
MISSING=0
for file in "${REQUIRED_FILES[@]}"; do
if [ -f "$DEPLOY_SOURCE/$file" ]; then
echo "✓ $file"
else
echo "✗ $file (missing)"
MISSING=$((MISSING + 1))
fi
done
echo ""
if [ $MISSING -eq 0 ]; then
echo "✅ All required files present"
exit 0
else
echo "❌ $MISSING required file(s) missing"
exit 1
fi

scripts/verify-ml110-sync.sh Executable file

@@ -0,0 +1,68 @@
#!/usr/bin/env bash
# Verify files synced to ml110 match local verified files
set -euo pipefail
REMOTE_HOST="192.168.11.10"
REMOTE_USER="root"
REMOTE_BASE="/opt"
LOCAL_PROJECT_ROOT="/home/intlc/projects/proxmox"
SOURCE_PROJECT="$LOCAL_PROJECT_ROOT/smom-dbis-138-proxmox"
log_info() { echo "[INFO] $1"; }
log_success() { echo "[✓] $1"; }
log_warn() { echo "[⚠] $1"; }
log_error() { echo "[✗] $1"; }
log_info "=== Verifying ml110 Sync ==="
log_info ""
# Check critical files
CRITICAL_FILES=(
"smom-dbis-138-proxmox/config/proxmox.conf"
"smom-dbis-138-proxmox/config/network.conf"
"smom-dbis-138-proxmox/config/inventory.example"
"smom-dbis-138-proxmox/scripts/fix-container-ips.sh"
"smom-dbis-138-proxmox/scripts/fix-besu-services.sh"
"smom-dbis-138-proxmox/scripts/validate-besu-config.sh"
"smom-dbis-138-proxmox/scripts/fix-all-besu.sh"
"smom-dbis-138-proxmox/deploy-all.sh"
"smom-dbis-138/config/genesis.json"
"smom-dbis-138/config/permissions-nodes.toml"
"smom-dbis-138/keys/validators/validator-1/key.priv"
)
log_info "Checking critical files..."
missing=0
for file in "${CRITICAL_FILES[@]}"; do
# Password comes from the SSHPASS environment variable (sshpass -e); never hardcode credentials
if sshpass -e ssh -o StrictHostKeyChecking=no "${REMOTE_USER}@${REMOTE_HOST}" \
"test -f ${REMOTE_BASE}/${file}" 2>/dev/null; then
log_success "${file}"
else
log_error "${file} - MISSING"
missing=$((missing + 1))
fi
done
echo ""
log_info "Checking validator keys..."
key_count=$(sshpass -e ssh -o StrictHostKeyChecking=no "${REMOTE_USER}@${REMOTE_HOST}" \
"find ${REMOTE_BASE}/smom-dbis-138/keys/validators -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l" | tr -d ' ')
if [[ $key_count -eq 4 ]]; then
log_warn "Found $key_count validator keys (expected 5 - validator-5 missing)"
elif [[ $key_count -eq 5 ]]; then
log_success "Found $key_count validator keys"
else
log_error "Unexpected validator key count: $key_count"
fi
echo ""
if [[ $missing -eq 0 ]]; then
log_success "All critical files present!"
exit 0
else
log_error "$missing critical files missing"
exit 1
fi
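Existence tests alone cannot confirm that the synced copies *match* the local verified files, which is what this script's header promises. A sketch of a checksum-based comparison (`verify_checksum` is a name introduced here; the password is assumed to be exported as SSHPASS for `sshpass -e`):

```bash
#!/usr/bin/env bash
# Sketch: compare local and remote copies by SHA-256 instead of bare existence.
REMOTE="root@192.168.11.10"

verify_checksum() {
    local local_file="$1" remote_file="$2"
    local local_sum remote_sum
    local_sum=$(sha256sum "$local_file" | awk '{print $1}')
    remote_sum=$(sshpass -e ssh -o StrictHostKeyChecking=no "$REMOTE" \
        "sha256sum '$remote_file'" | awk '{print $1}')
    [[ "$local_sum" == "$remote_sum" ]]
}

# Example (hypothetical paths):
# verify_checksum "$SOURCE_PROJECT/config/genesis.json" /opt/smom-dbis-138/config/genesis.json
```

A mismatch here catches partially transferred or stale files that a `test -f` probe would report as present.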

Some files were not shown because too many files have changed in this diff.