Nginx RPC-01 (VMID 2500) - Complete Setup Summary
Date: $(date)
Container: besu-rpc-1 (Core RPC Node)
VMID: 2500
IP: 192.168.11.250
✅ Installation Complete
Nginx has been fully installed, configured, and secured on VMID 2500.
📋 What Was Configured
1. Core Nginx Installation ✅
- Nginx: Installed and running
- OpenSSL: Installed for certificate generation
- SSL Certificate: Self-signed certificate (10-year validity)
- Service: Enabled and active
2. Reverse Proxy Configuration ✅
Ports:
- 80: HTTP to HTTPS redirect
- 443: HTTPS RPC API (proxies to Besu port 8545)
- 8443: HTTPS WebSocket RPC (proxies to Besu port 8546)
Server Names:
- besu-rpc-1
- 192.168.11.250
- rpc-core.besu.local
- rpc-core.chainid138.local
- rpc-core-ws.besu.local
- rpc-core-ws.chainid138.local
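The port and server-name layout above can be sketched as an Nginx server block. Paths and the upstream port follow the values in this summary, but the directives shown are illustrative, not a dump of the live config:

```nginx
# Sketch of the 443 -> 8545 reverse-proxy server block (illustrative)
server {
    listen 443 ssl;
    server_name rpc-core.besu.local rpc-core.chainid138.local;

    ssl_certificate     /etc/nginx/ssl/rpc.crt;
    ssl_certificate_key /etc/nginx/ssl/rpc.key;
    ssl_protocols       TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://127.0.0.1:8545;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
The 8443 WebSocket server block follows the same pattern against port 8546, with `proxy_set_header Upgrade`/`Connection` added for the WebSocket handshake.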
3. Security Features ✅
SSL/TLS
- Protocols: TLSv1.2, TLSv1.3
- Ciphers: Strong ciphers (ECDHE, DHE)
- Certificate: Self-signed (replace with Let's Encrypt for production)
Security Headers
- Strict-Transport-Security: 1 year HSTS
- X-Frame-Options: SAMEORIGIN
- X-Content-Type-Options: nosniff
- X-XSS-Protection: 1; mode=block
- Referrer-Policy: strict-origin-when-cross-origin
- Permissions-Policy: Restricted
Rate Limiting
- HTTP RPC: 10 requests/second (burst: 20)
- WebSocket RPC: 50 requests/second (burst: 50)
- Connection Limiting: 10 connections per IP (HTTP), 5 (WebSocket)
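The limits above map onto Nginx's `limit_req`/`limit_conn` modules roughly as follows; zone names and sizes here are hypothetical, only the rates and burst values come from this summary:

```nginx
# Illustrative rate/connection limiting matching the numbers above
limit_req_zone  $binary_remote_addr zone=rpc_http:10m rate=10r/s;
limit_req_zone  $binary_remote_addr zone=rpc_ws:10m   rate=50r/s;
limit_conn_zone $binary_remote_addr zone=rpc_conn:10m;

server {
    # inside the HTTP RPC server block:
    limit_req  zone=rpc_http burst=20 nodelay;
    limit_conn rpc_conn 10;
}
```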
Firewall Rules
- Port 80: Allowed (HTTP redirect)
- Port 443: Allowed (HTTPS RPC)
- Port 8443: Allowed (HTTPS WebSocket)
- Port 8545: Internal only (127.0.0.1)
- Port 8546: Internal only (127.0.0.1)
- Port 30303: Allowed (Besu P2P)
- Port 9545: Internal only (127.0.0.1, Metrics)
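As a sketch, the policy above corresponds to an iptables ruleset along these lines (an illustrative `iptables-restore` excerpt, not the rules actually installed on the container):

```text
# Hypothetical /etc/iptables/rules.v4 excerpt implementing the policy above
*filter
-A INPUT -p tcp --dport 80    -j ACCEPT
-A INPUT -p tcp --dport 443   -j ACCEPT
-A INPUT -p tcp --dport 8443  -j ACCEPT
-A INPUT -p tcp --dport 30303 -j ACCEPT
-A INPUT -p tcp --dport 8545 ! -i lo -j DROP
-A INPUT -p tcp --dport 8546 ! -i lo -j DROP
-A INPUT -p tcp --dport 9545 ! -i lo -j DROP
COMMIT
```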
4. Monitoring Setup ✅
Nginx Status Page
- URL: http://127.0.0.1:8080/nginx_status
- Access: Internal only (127.0.0.1)
- Metrics: Active connections, requests, etc.
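For reference, Nginx's stub_status page returns a small plain-text report shaped like this (the numbers here are illustrative):

```text
Active connections: 2
server accepts handled requests
 1234 1234 5678
Reading: 0 Writing: 1 Waiting: 1
```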
Log Rotation
- Retention: 14 days
- Rotation: Daily
- Compression: Enabled (delayed)
- Logs: /var/log/nginx/rpc-core-*.log
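The retention settings above translate into a logrotate entry along these lines; the file name `/etc/logrotate.d/nginx-rpc` is a hypothetical example of where such an entry would live:

```text
# Hypothetical /etc/logrotate.d/nginx-rpc entry matching the settings above
/var/log/nginx/rpc-core-*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        [ -f /run/nginx.pid ] && kill -USR1 "$(cat /run/nginx.pid)"
    endscript
}
```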
Health Check
- Script: /usr/local/bin/nginx-health-check.sh
- Service: nginx-health-monitor.service
- Timer: Runs every 5 minutes
- Checks: Service status, RPC endpoint, ports
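A timer unit implementing the 5-minute schedule would look roughly like this; the unit contents are a sketch, only the unit names and interval come from this summary:

```ini
# Hypothetical nginx-health-monitor.timer matching the schedule above
[Unit]
Description=Run nginx health check every 5 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
Unit=nginx-health-monitor.service

[Install]
WantedBy=timers.target
```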
🧪 Testing & Verification
Health Check
```shell
# From container
pct exec 2500 -- curl -k https://localhost:443/health
# Returns: healthy

# Health check script
pct exec 2500 -- /usr/local/bin/nginx-health-check.sh
```
RPC Endpoint
```shell
# Get block number
curl -k -X POST https://192.168.11.250:443 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'

# Get chain ID
curl -k -X POST https://192.168.11.250:443 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
```
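A small helper keeps the JSON-RPC payloads off the command line; `rpc_payload` and `hex_to_dec` are hypothetical convenience functions, not part of the installed tooling. Note that results such as `eth_blockNumber` come back hex-encoded and can be decoded with shell arithmetic:

```shell
# Hypothetical helper: build a JSON-RPC 2.0 payload for a parameterless method
rpc_payload() {
  printf '{"jsonrpc":"2.0","method":"%s","params":[],"id":%d}' "$1" "${2:-1}"
}

# Usage against the node:
# curl -k -X POST https://192.168.11.250:443 \
#   -H 'Content-Type: application/json' \
#   -d "$(rpc_payload eth_blockNumber)"

# Decode a hex result such as "0x1b4" to decimal
hex_to_dec() { printf '%d' "$(( $1 ))"; }
```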
Nginx Status
```shell
pct exec 2500 -- curl http://127.0.0.1:8080/nginx_status
```
Rate Limiting Test
```shell
# Test rate limiting: fire 25 concurrent requests. With a 10 r/s limit and
# burst of 20, the excess requests should come back as 503
for i in {1..25}; do
  curl -k -s -o /dev/null -w '%{http_code}\n' -X POST https://192.168.11.250:443 \
    -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' &
done
wait
```
📊 Configuration Files
Main Configuration
- Site Config: /etc/nginx/sites-available/rpc-core
- Enabled Link: /etc/nginx/sites-enabled/rpc-core
- Nginx Config: /etc/nginx/nginx.conf
SSL Certificates
- Certificate: /etc/nginx/ssl/rpc.crt
- Private Key: /etc/nginx/ssl/rpc.key
Logs
- HTTP Access: /var/log/nginx/rpc-core-http-access.log
- HTTP Error: /var/log/nginx/rpc-core-http-error.log
- WebSocket Access: /var/log/nginx/rpc-core-ws-access.log
- WebSocket Error: /var/log/nginx/rpc-core-ws-error.log
Scripts
- Health Check: /usr/local/bin/nginx-health-check.sh
- Configuration Script: scripts/configure-nginx-rpc-2500.sh
- Security Script: scripts/configure-nginx-security-2500.sh
- Monitoring Script: scripts/setup-nginx-monitoring-2500.sh
🔧 Management Commands
Service Management
```shell
# Check status
pct exec 2500 -- systemctl status nginx

# Reload configuration
pct exec 2500 -- systemctl reload nginx

# Restart service
pct exec 2500 -- systemctl restart nginx

# Test configuration
pct exec 2500 -- nginx -t
```
Monitoring
```shell
# View status page
pct exec 2500 -- curl http://127.0.0.1:8080/nginx_status

# Run health check
pct exec 2500 -- /usr/local/bin/nginx-health-check.sh

# View logs
pct exec 2500 -- tail -f /var/log/nginx/rpc-core-http-access.log
pct exec 2500 -- tail -f /var/log/nginx/rpc-core-http-error.log

# Check health monitor
pct exec 2500 -- systemctl status nginx-health-monitor.timer
pct exec 2500 -- journalctl -u nginx-health-monitor.service -n 20
```
Firewall
```shell
# View firewall rules
pct exec 2500 -- iptables -L -n

# Save firewall rules (if needed); run the redirect inside the container,
# otherwise the output file is written on the Proxmox host instead
pct exec 2500 -- sh -c 'iptables-save > /etc/iptables/rules.v4'
```
🔐 Security Recommendations
Production Checklist
- Replace self-signed certificate with Let's Encrypt
- Configure DNS records for domain names
- Review and adjust CORS settings
- Configure IP allowlist if needed
- Set up fail2ban for additional protection
- Enable additional logging/auditing
- Review rate limiting thresholds
- Set up external monitoring (Prometheus/Grafana)
Let's Encrypt Certificate
Note: Certbot validates domains over public DNS, so the `.local` names used in this setup will not pass validation. Substitute publicly resolvable domains (with matching DNS records) before running these commands.

```shell
# Install Certbot
pct exec 2500 -- apt-get install -y certbot python3-certbot-nginx

# Obtain certificate (replace the .local names with public domains)
pct exec 2500 -- certbot --nginx \
  -d rpc-core.besu.local \
  -d rpc-core.chainid138.local

# Test renewal
pct exec 2500 -- certbot renew --dry-run
```
📈 Performance Tuning
Current Settings
- Proxy Timeouts: 300s (5 minutes)
- WebSocket Timeouts: 86400s (24 hours)
- Client Max Body Size: 10M
- Buffering: Disabled (real-time RPC)
Adjust if Needed
Edit /etc/nginx/sites-available/rpc-core:
- proxy_read_timeout: Adjust for long-running queries
- proxy_send_timeout: Adjust for large responses
- client_max_body_size: Increase if needed
- Rate limiting thresholds: Adjust based on usage
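As directives, the current values look like this; placement inside the `location` blocks of `/etc/nginx/sites-available/rpc-core` is illustrative:

```nginx
# Current tuning values as directives (illustrative placement)
proxy_read_timeout   300s;
proxy_send_timeout   300s;
client_max_body_size 10m;
proxy_buffering      off;

# The WebSocket location uses the longer idle timeout:
# proxy_read_timeout 86400s;
```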
🔄 Integration Options
Option 1: Standalone (Current)
Nginx handles SSL termination and routing directly on the RPC node.
Pros:
- Direct control
- No additional dependencies
- Simple architecture
Cons:
- Certificate management per node
- No centralized management
Option 2: With nginx-proxy-manager (VMID 105)
Use nginx-proxy-manager as central proxy, forward to Nginx on RPC nodes.
Configuration:
- Domain: rpc-core.besu.local
- Forward to: 192.168.11.250:443 (HTTPS)
- SSL: Handle at nginx-proxy-manager or pass through
Pros:
- Centralized management
- Single SSL certificate management
- Easy to add/remove nodes
Option 3: Direct to Besu
Remove Nginx from RPC nodes, use nginx-proxy-manager directly to Besu.
Configuration:
- Forward to: 192.168.11.250:8545 (HTTP)
- SSL: Handle at nginx-proxy-manager
Pros:
- Simplest architecture
- Single point of SSL termination
- Less resource usage on RPC nodes
✅ Verification Checklist
- Nginx installed
- SSL certificate generated
- Configuration file created
- Site enabled
- Nginx service active
- Port 80 listening (HTTP redirect)
- Port 443 listening (HTTPS RPC)
- Port 8443 listening (HTTPS WebSocket)
- Configuration test passed
- RPC endpoint responding
- Health check working
- Rate limiting configured
- Security headers configured
- Firewall rules configured
- Log rotation configured
- Monitoring enabled
- Health check service active
📚 Related Documentation
- Nginx RPC 2500 Configuration
- Nginx Architecture for RPC Nodes
- RPC Node Types Architecture
- Cloudflare Nginx Integration
🎯 Summary
Status: ✅ FULLY CONFIGURED AND OPERATIONAL
All next steps have been completed:
- ✅ Nginx installed and configured
- ✅ SSL/TLS encryption enabled
- ✅ Security features configured (rate limiting, headers, firewall)
- ✅ Monitoring setup (status page, health checks, log rotation)
- ✅ Documentation created
The RPC node is now ready for production use with proper security and monitoring in place.
Setup Date: $(date)
Last Updated: $(date)