Complete markdown files cleanup and organization
- Organized 252 files across project
- Root directory: 187 → 2 files (98.9% reduction)
- Moved configuration guides to docs/04-configuration/
- Moved troubleshooting guides to docs/09-troubleshooting/
- Moved quick start guides to docs/01-getting-started/
- Moved reports to reports/ directory
- Archived temporary files
- Generated comprehensive reports and documentation
- Created maintenance scripts and guides

All files organized according to established standards.
docs/01-getting-started/CHAIN138_QUICK_START.md (new file, 172 lines)

# ChainID 138 Configuration - Quick Start Guide

**Quick reference for configuring Besu nodes for ChainID 138**

---

## 🚀 Quick Start

### Step 1: Run Main Configuration

```bash
cd /home/intlc/projects/proxmox
./scripts/configure-besu-chain138-nodes.sh
```

**What it does:**
- Collects enodes from all Besu nodes
- Generates `static-nodes.json` and `permissioned-nodes.json`
- Deploys to all containers (including new: 1504, 2503)
- Configures discovery settings
- Restarts Besu services

**Expected time:** 5-10 minutes
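The `static-nodes.json` file the script generates is just a JSON array of enode URLs, one entry per peer. A minimal sketch of that shape, using hypothetical enode values (the real entries are derived from each node's public key by the script):

```python
import json

# Hypothetical enode entries - real values come from each node's public key.
static_nodes = [
    "enode://aa...a1@192.168.11.100:30303",
    "enode://bb...b2@192.168.11.101:30303",
]

# Besu expects a plain JSON array in /var/lib/besu/static-nodes.json.
print(json.dumps(static_nodes, indent=2))
```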

---

### Step 2: Verify Configuration

```bash
./scripts/verify-chain138-config.sh
```

**What it checks:**
- Files exist and are readable
- Discovery settings are correct
- Peer connections are working

---

## 📋 Node List

| VMID | Hostname | Role | Discovery |
|------|----------|------|-----------|
| 1000-1004 | besu-validator-* | Validator | Enabled |
| 1500-1504 | besu-sentry-* | Sentry | Enabled |
| 2500 | besu-rpc-core | RPC Core | **Disabled** |
| 2501 | besu-rpc-perm | RPC Permissioned | Enabled |
| 2502 | besu-rpc-public | RPC Public | Enabled |
| 2503 | besu-rpc-4 | RPC Permissioned | **Disabled** |

---

## 🔧 Manual Steps (if needed)

### Check Configuration Files

```bash
# On Proxmox host
pct exec <VMID> -- ls -la /var/lib/besu/static-nodes.json
pct exec <VMID> -- ls -la /var/lib/besu/permissions/permissioned-nodes.json
```

### Check Discovery Setting

```bash
# For RPC nodes that should have discovery disabled (2500, 2503)
pct exec 2503 -- grep discovery-enabled /etc/besu/*.toml
```

### Check Peer Count

```bash
# Via RPC
curl -X POST http://<RPC_IP>:8545 \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}'
```
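The `net_peerCount` call returns the count as a hex string in the `result` field. A small sketch for decoding a response like the one above (the sample payload below is illustrative, not captured from the network):

```python
import json

# Illustrative response body from the curl call above.
response = '{"jsonrpc":"2.0","id":1,"result":"0x5"}'

# The result is hex-encoded; convert to an integer peer count.
peer_count = int(json.loads(response)["result"], 16)
print(f"peers: {peer_count}")  # peers: 5
```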

### Restart Besu Service

```bash
pct exec <VMID> -- systemctl restart besu*.service
pct exec <VMID> -- systemctl status besu*.service
```

---

## 🐛 Troubleshooting

### Issue: Node not connecting to peers

1. **Check files exist:**
   ```bash
   pct exec <VMID> -- ls -la /var/lib/besu/static-nodes.json
   ```

2. **Fix file ownership:**
   ```bash
   pct exec <VMID> -- chown -R besu:besu /var/lib/besu
   ```

3. **Check network connectivity:**
   ```bash
   pct exec <VMID> -- ping <PEER_IP>
   ```

### Understanding: RPC Nodes Reporting chainID 0x1 to MetaMask

**Note**: This is **intentional behavior** for wallet compatibility. RPC nodes report `chainID = 0x1` (Ethereum mainnet) to MetaMask wallets to work around MetaMask's technical limitations for regulated financial entities.

**How it works:**
- Nodes are connected to ChainID 138 (private network)
- Nodes report chainID 0x1 to MetaMask (wallet compatibility)
- Discovery is disabled to prevent actual connection to Ethereum mainnet
- MetaMask works with the private network while thinking it's mainnet

**If discovery needs to be disabled (should already be configured):**

```bash
for vmid in 2503 2504 2505 2506 2507 2508; do
  pct exec $vmid -- sed -i 's/^discovery-enabled=.*/discovery-enabled=false/' /etc/besu/*.toml
  pct exec $vmid -- systemctl restart besu*.service
done
```

### Issue: Permission denied errors

```bash
# Fix ownership
pct exec <VMID> -- chown -R besu:besu /var/lib/besu
pct exec <VMID> -- chmod 644 /var/lib/besu/static-nodes.json
pct exec <VMID> -- chmod 644 /var/lib/besu/permissions/permissioned-nodes.json
```

---

## 📚 Scripts Reference

| Script | Purpose |
|--------|---------|
| `configure-besu-chain138-nodes.sh` | Main configuration script |
| `setup-new-chain138-containers.sh` | Quick setup for new containers |
| `verify-chain138-config.sh` | Verify configuration |

---

## 📖 Full Documentation

- **Complete Guide:** [CHAIN138_BESU_CONFIGURATION.md](CHAIN138_BESU_CONFIGURATION.md)
- **Summary:** [CHAIN138_CONFIGURATION_SUMMARY.md](CHAIN138_CONFIGURATION_SUMMARY.md)

---

## ✅ Checklist

- [ ] Run main configuration script
- [ ] Verify all nodes have configuration files
- [ ] Check discovery settings (disabled for 2500, 2503)
- [ ] Verify peer connections
- [ ] Test RPC endpoints
- [ ] Check service status on all nodes

---

## 🆘 Support

If you encounter issues:

1. Check logs: `pct exec <VMID> -- journalctl -u besu*.service -n 50`
2. Run verification: `./scripts/verify-chain138-config.sh`
3. Review documentation: `docs/CHAIN138_BESU_CONFIGURATION.md`

docs/01-getting-started/LIST_VMS_QUICK_START.md (new file, 56 lines)

# Quick Start: List All Proxmox VMs

## Quick Start (Python Script)

```bash
# 1. Install dependencies (if not already installed)
cd /home/intlc/projects/proxmox
source venv/bin/activate
pip install proxmoxer requests

# 2. Ensure ~/.env has Proxmox credentials
# (Should already be configured)

# 3. Run the script
python3 list_vms.py
```

## Quick Start (Shell Script)

```bash
# 1. Set Proxmox host (or use default)
export PROXMOX_HOST=192.168.11.10
export PROXMOX_USER=root

# 2. Run the script
./list_vms.sh
```

## Expected Output

```
VMID   | Name                    | Type | IP Address        | FQDN                    | Description
-------|-------------------------|------|-------------------|-------------------------|----------------
100    | vm-example              | QEMU | 192.168.1.100     | vm-example.local        | Example VM
101    | container-example       | LXC  | 192.168.1.101     | container.local         | Example container
```

## Troubleshooting

**Connection timeout?**
- Check: `ping $(grep PROXMOX_HOST ~/.env | cut -d= -f2)`
- Verify firewall allows port 8006

**Authentication failed?**
- Check credentials in `~/.env`
- Verify API token is valid

**No IP addresses?**
- QEMU: Install QEMU guest agent in VM
- LXC: Container must be running

## Files

- `list_vms.py` - Python script (recommended)
- `list_vms.sh` - Shell script (requires SSH)
- `LIST_VMS_README.md` - Full documentation

docs/01-getting-started/LIST_VMS_README.md (new file, 147 lines)

# List Proxmox VMs Scripts

Two scripts to list all Proxmox VMs with VMID, Name, IP Address, FQDN, and Description.

## Scripts

### 1. `list_vms.py` (Python - Recommended)

Python script using the Proxmox API. More robust and feature-rich.

**Features:**
- Supports both API token and password authentication
- Automatically loads credentials from `~/.env` file
- Retrieves IP addresses via QEMU guest agent or network config
- Gets FQDN from hostname configuration
- Handles both QEMU VMs and LXC containers
- Graceful error handling

**Prerequisites:**
```bash
pip install proxmoxer requests
# Or if using venv:
source venv/bin/activate
pip install proxmoxer requests
```

**Usage:**

**Option 1: Using ~/.env file (Recommended)**
```bash
# Create/edit ~/.env file with:
PROXMOX_HOST=your-proxmox-host
PROXMOX_USER=root@pam
PROXMOX_TOKEN_NAME=your-token-name
PROXMOX_TOKEN_VALUE=your-token-value
# OR use password:
PROXMOX_PASSWORD=your-password

# Then run:
python3 list_vms.py
```

**Option 2: Environment variables**
```bash
export PROXMOX_HOST=your-proxmox-host
export PROXMOX_USER=root@pam
export PROXMOX_TOKEN_NAME=your-token-name
export PROXMOX_TOKEN_VALUE=your-token-value
python3 list_vms.py
```

**Option 3: JSON config file**
```bash
export PROXMOX_MCP_CONFIG=/path/to/config.json
python3 list_vms.py
```

### 2. `list_vms.sh` (Shell Script)

Shell script using `pvesh` via SSH. Requires SSH access to the Proxmox node.

**Prerequisites:**
- SSH access to Proxmox node
- `pvesh` command available on Proxmox node
- Python3 for JSON parsing

**Usage:**
```bash
export PROXMOX_HOST=your-proxmox-host
export PROXMOX_USER=root
./list_vms.sh
```

## Output Format

Both scripts output a formatted table:

```
VMID   | Name                    | Type | IP Address        | FQDN                    | Description
-------|-------------------------|------|-------------------|-------------------------|----------------
100    | vm-example              | QEMU | 192.168.1.100     | vm-example.local        | Example VM
101    | container-example       | LXC  | 192.168.1.101     | container.local         | Example container
```

## How IP Addresses are Retrieved

### For QEMU VMs:
1. First tries QEMU guest agent (`network-get-interfaces`)
2. Falls back to network configuration parsing
3. Shows "N/A" if neither method works

### For LXC Containers:
1. Executes `hostname -I` command inside container
2. Filters out localhost addresses
3. Shows "N/A" if command fails or container is stopped
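The LXC steps above boil down to picking the first non-localhost address from `hostname -I` output. A minimal sketch of that filtering logic (the function name is illustrative, not taken from `list_vms.py`):

```python
def pick_primary_ip(hostname_i_output: str) -> str:
    """Pick the first non-localhost address from `hostname -I` output.

    Returns "N/A" when no usable address is found, mirroring the
    scripts' behaviour described above.
    """
    for addr in hostname_i_output.split():
        if not addr.startswith("127.") and addr != "::1":
            return addr
    return "N/A"
```

For example, `pick_primary_ip("127.0.0.1 192.168.1.101")` skips the loopback entry and returns `192.168.1.101`.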

## How FQDN is Retrieved

1. Gets hostname from VM/container configuration
2. For running VMs, tries to execute `hostname -f` command
3. Falls back to hostname from config if command fails
4. Shows "N/A" if no hostname is configured

## Troubleshooting

### Connection Timeout
- Verify Proxmox host is reachable: `ping your-proxmox-host`
- Check firewall rules allow port 8006
- Verify credentials in `~/.env` are correct

### Authentication Failed
- Verify API token is valid and not expired
- Check user permissions in Proxmox
- Try using password authentication instead

### IP Address Shows "N/A"
- For QEMU: Ensure QEMU guest agent is installed and running in VM
- For LXC: Container must be running to execute commands
- Check network configuration in VM/container

### FQDN Shows "N/A"
- Set hostname in VM/container configuration
- For running VMs, ensure hostname command is available

## Examples

### List all VMs
```bash
python3 list_vms.py
```

### List VMs from specific host
```bash
PROXMOX_HOST=192.168.11.10 python3 list_vms.py
```

### Using shell script
```bash
PROXMOX_HOST=192.168.11.10 PROXMOX_USER=root ./list_vms.sh
```

## Notes

- Scripts automatically sort VMs by VMID
- Both QEMU VMs and LXC containers are included
- Scripts handle missing information gracefully (shows "N/A")
- Python script is recommended for better error handling and features

docs/01-getting-started/METAMASK_QUICK_START_GUIDE.md (new file, 270 lines)

# MetaMask Quick Start Guide - ChainID 138

**Date**: $(date)
**Network**: SMOM-DBIS-138 (ChainID 138)
**Purpose**: Get started with MetaMask on ChainID 138 in 5 minutes

---

## 🚀 Quick Start (5 Minutes)

### Step 1: Add Network to MetaMask

**Option A: Manual Addition** (Recommended for first-time users)

1. Open MetaMask extension
2. Click network dropdown (top of MetaMask)
3. Click "Add Network" → "Add a network manually"
4. Enter the following:
   - **Network Name**: `Defi Oracle Meta Mainnet` or `SMOM-DBIS-138`
   - **RPC URL**: `https://rpc-http-pub.d-bis.org` ⚠️ **Important: Must be public endpoint**
   - **Chain ID**: `138` (must be decimal, not hex)
   - **Currency Symbol**: `ETH`
   - **Block Explorer URL**: `https://explorer.d-bis.org` (optional)
5. Click "Save"

**Note**: If you get a "Could not fetch chain ID" error, the RPC endpoint may require authentication. The public endpoint (`rpc-http-pub.d-bis.org`) should NOT require authentication. If it does, contact network administrators.
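MetaMask's manual form expects the chain ID in decimal (`138`), while the programmatic `wallet_addEthereumChain` call uses the hex form (`0x8a`). They are the same value, as this quick check shows:

```python
# 138 (decimal, used in the MetaMask UI) and 0x8a (hex, used by
# wallet_addEthereumChain) are the same chain ID.
decimal_id = 138
hex_id = hex(decimal_id)   # '0x8a'
assert int(hex_id, 16) == decimal_id
print(decimal_id, hex_id)  # 138 0x8a
```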

**Option B: Programmatic Addition** (For dApps)

If you're building a dApp, you can add the network programmatically:

```javascript
await window.ethereum.request({
  method: 'wallet_addEthereumChain',
  params: [{
    chainId: '0x8a', // 138 in hex
    chainName: 'SMOM-DBIS-138',
    nativeCurrency: {
      name: 'Ether',
      symbol: 'ETH',
      decimals: 18
    },
    rpcUrls: ['https://rpc-http-pub.d-bis.org'],
    blockExplorerUrls: ['https://explorer.d-bis.org']
  }]
});
```

---

### Step 2: Import Tokens

**WETH9 (Wrapped Ether)**

1. In MetaMask, click "Import tokens"
2. Enter:
   - **Token Contract Address**: `0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2`
   - **Token Symbol**: `WETH`
   - **Decimals of Precision**: `18` ⚠️ **Important: Must be 18**
3. Click "Add Custom Token"

**WETH10 (Wrapped Ether v10)**

1. Click "Import tokens" again
2. Enter:
   - **Token Contract Address**: `0xf4BB2e28688e89fCcE3c0580D37d36A7672E8A9f`
   - **Token Symbol**: `WETH10`
   - **Decimals of Precision**: `18`
3. Click "Add Custom Token"

**Note**: If you see incorrect balances (like "6,000,000,000.0T"), ensure decimals are set to 18. See [WETH9 Display Fix](./METAMASK_WETH9_FIX_INSTRUCTIONS.md) for details.

---

### Step 3: Get Test ETH

**For Testing Purposes**:

If you need test ETH on ChainID 138:
1. Contact network administrators
2. Use a faucet (if available)
3. Bridge from another chain (if configured)

**Current Network Status**:
- ✅ Network: Operational
- ✅ RPC: `https://rpc-core.d-bis.org`
- ✅ Explorer: `https://explorer.d-bis.org`

---

### Step 4: Verify Connection

**Check Network**:
1. In MetaMask, verify you're on "SMOM-DBIS-138"
2. Check your ETH balance (should display correctly)
3. Verify token balances (WETH, WETH10)

**Test Transaction** (Optional):
1. Send a small amount of ETH to another address
2. Verify the transaction appears in the block explorer
3. Confirm the balance updates

---

## 📊 Reading Price Feeds

### Get ETH/USD Price

**Oracle Contract**: `0x3304b747e565a97ec8ac220b0b6a1f6ffdb837e6`

**Using Web3.js**:
```javascript
const Web3 = require('web3');
const web3 = new Web3('https://rpc-core.d-bis.org');

const oracleABI = [{
  "inputs": [],
  "name": "latestRoundData",
  "outputs": [
    {"name": "roundId", "type": "uint80"},
    {"name": "answer", "type": "int256"},
    {"name": "startedAt", "type": "uint256"},
    {"name": "updatedAt", "type": "uint256"},
    {"name": "answeredInRound", "type": "uint80"}
  ],
  "stateMutability": "view",
  "type": "function"
}];

const oracle = new web3.eth.Contract(oracleABI, '0x3304b747e565a97ec8ac220b0b6a1f6ffdb837e6');

async function getPrice() {
  const result = await oracle.methods.latestRoundData().call();
  // web3 returns integers as strings; convert, then scale from 8 decimals
  const price = Number(result.answer) / 1e8;
  console.log(`ETH/USD: $${price}`);
  return price;
}

getPrice();
```

**Using Ethers.js**:
```javascript
const { ethers } = require('ethers');
const provider = new ethers.providers.JsonRpcProvider('https://rpc-core.d-bis.org');

// Name the outputs so the result can be accessed as result.answer
const oracleABI = [
  "function latestRoundData() external view returns (uint80 roundId, int256 answer, uint256 startedAt, uint256 updatedAt, uint80 answeredInRound)"
];

const oracle = new ethers.Contract(
  '0x3304b747e565a97ec8ac220b0b6a1f6ffdb837e6',
  oracleABI,
  provider
);

async function getPrice() {
  const result = await oracle.latestRoundData();
  const price = result.answer.toNumber() / 1e8;
  console.log(`ETH/USD: $${price}`);
  return price;
}

getPrice();
```

---

## 🔧 Common Tasks

### Send ETH

1. Click "Send" in MetaMask
2. Enter recipient address
3. Enter amount
4. Review gas fees
5. Confirm transaction

### Wrap ETH to WETH9

1. Go to WETH9 contract: `0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2`
2. Call `deposit()` function
3. Send ETH amount with transaction
4. Receive WETH9 tokens

### Check Transaction Status

1. Copy transaction hash from MetaMask
2. Visit: `https://explorer.d-bis.org/tx/<tx-hash>`
3. View transaction details, gas used, status

---

## ⚠️ Troubleshooting

### Network Not Connecting

**Issue**: Can't connect to network

**Solutions**:
1. Verify RPC URL: `https://rpc-core.d-bis.org`
2. Check Chain ID: must be `138` in decimal (`0x8a` in hex)
3. Try removing and re-adding the network
4. Clear MetaMask cache and reload

### Token Balance Display Incorrect

**Issue**: Shows "6,000,000,000.0T WETH" instead of "6 WETH"

**Solution**:
- Remove token from MetaMask
- Re-import with decimals set to `18`
- See [WETH9 Display Fix](./METAMASK_WETH9_FIX_INSTRUCTIONS.md) for details

### Price Feed Not Updating

**Issue**: Oracle price seems stale

**Solutions**:
1. Check Oracle contract: `0x3304b747e565a97ec8ac220b0b6a1f6ffdb837e6`
2. Verify `updatedAt` timestamp is recent (within 60 seconds)
3. Check Oracle Publisher service status
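Step 2 above can be automated: given the `updatedAt` value from `latestRoundData()`, treat the feed as stale when it is older than 60 seconds. A minimal sketch (the 60-second threshold comes from this guide, not from the contract):

```python
import time

STALE_AFTER_SECONDS = 60  # threshold from this guide, not from the contract

def is_stale(updated_at: int, now=None) -> bool:
    """Return True when the oracle's updatedAt timestamp is too old."""
    now = time.time() if now is None else now
    return (now - updated_at) > STALE_AFTER_SECONDS

# Example with fixed timestamps:
print(is_stale(1_700_000_000, now=1_700_000_120))  # True  (120s old)
print(is_stale(1_700_000_000, now=1_700_000_030))  # False (30s old)
```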

### Transaction Failing

**Issue**: Transactions not going through

**Solutions**:
1. Check you have sufficient ETH for gas
2. Verify the correct network is selected
3. Check transaction nonce (may need to reset)
4. Increase gas limit if needed

---

## 📚 Additional Resources

- [Full Integration Requirements](./METAMASK_FULL_INTEGRATION_REQUIREMENTS.md)
- [Oracle Integration Guide](./METAMASK_ORACLE_INTEGRATION.md)
- [WETH9 Display Bug Fix](./METAMASK_WETH9_FIX_INSTRUCTIONS.md)
- [Contract Addresses Reference](./CONTRACT_ADDRESSES_REFERENCE.md)

---

## ✅ Verification Checklist

After setup, verify:

- [ ] Network "SMOM-DBIS-138" appears in MetaMask
- [ ] Can switch to ChainID 138 network
- [ ] ETH balance displays correctly
- [ ] WETH9 token imported with correct decimals (18)
- [ ] WETH10 token imported with correct decimals (18)
- [ ] Can read price from Oracle contract
- [ ] Can send test transaction
- [ ] Transaction appears in block explorer

---

## 🎯 Next Steps

1. **Explore dApps**: Connect to dApps built on ChainID 138
2. **Bridge Assets**: Use CCIP bridges to transfer assets cross-chain
3. **Deploy Contracts**: Deploy your own smart contracts
4. **Build dApps**: Create applications using the network

---

**Last Updated**: $(date)

docs/01-getting-started/REMINING_STEPS_QUICK_REFERENCE.md (new file, 34 lines)

# Remaining Steps - Quick Reference

## ✅ Completed
- All contracts deployed (7/7) ✅
- All contracts have bytecode ✅
- CCIP Monitor service running ✅
- Service configurations updated ✅

## ⏳ Remaining Steps

### 1. Verify Contracts on Blockscout (High Priority)
```bash
./scripts/verify-all-contracts.sh 0.8.20
```
Status: 0/7 verified

### 2. Validate Contract Functionality (Medium Priority)
- Test contract functions
- Verify events
- Test integrations

### 3. Update Documentation (Low Priority)
- Update verification status
- Document results

## Tools
- Verify: `./scripts/verify-all-contracts.sh`
- Check: `./scripts/check-all-contracts-status.sh`
- Monitor: `./scripts/check-ccip-monitor.sh`

## Documentation
- `docs/ALL_REMAINING_STEPS.md` - Complete list
- `docs/BLOCKSCOUT_VERIFICATION_GUIDE.md` - Verification guide
- `docs/CONTRACT_VALIDATION_CHECKLIST.md` - Validation checklist

docs/01-getting-started/THIRDWEB_RPC_CLOUDFLARE_QUICKSTART.md (new file, 240 lines)

# ThirdWeb RPC (VMID 2400) - Cloudflare Tunnel Quick Start

**Status:** Ready to Execute
**VMID:** 2400
**IP:** 192.168.11.240
**Domain:** `defi-oracle.io`
**FQDN:** `rpc.public-0138.defi-oracle.io`

---

## Overview

This guide sets up a Cloudflare tunnel for VMID 2400 (the ThirdWeb RPC node), since we can't access pve2, where the existing tunnel is located.

---

## Step 1: Create Cloudflare Tunnel (Manual - Cloudflare Dashboard)

### 1.1 Go to Cloudflare Dashboard

1. Open: https://one.dash.cloudflare.com/
2. Log in to your Cloudflare account

### 1.2 Navigate to Tunnels

1. Click **Zero Trust** (in the left sidebar)
2. Click **Networks** → **Tunnels**

### 1.3 Create New Tunnel

1. Click the **Create a tunnel** button (top right)
2. Select **Cloudflared** as the connector type
3. Name: `thirdweb-rpc-2400`
4. Click **Save tunnel**

### 1.4 Copy the Tunnel Token

After creating the tunnel, you'll see a screen with a token. It looks like:
```
eyJhIjoiNTJhZDU3YTcxNjcxYzVmYzAwOWVkZjA3NDQ2NTgxOTYiLCJ0Ijoi...
```

**IMPORTANT:** Copy this entire token - you'll need it in the next step.

---

## Step 2: Run the Installation Script (Automated)

### 2.1 Run the Script

```bash
cd /home/intlc/projects/proxmox

# Replace <TUNNEL_TOKEN> with the token you copied in Step 1.4
./scripts/setup-cloudflared-vmid2400.sh <TUNNEL_TOKEN>
```

**Example:**
```bash
./scripts/setup-cloudflared-vmid2400.sh eyJhIjoiNTJhZDU3YTcxNjcxYzVmYzAwOWVkZjA3NDQ2NTgxOTYiLCJ0Ijoi...
```

The script will:
- ✅ Check SSH access to Proxmox host (192.168.11.10)
- ✅ Verify VMID 2400 is running
- ✅ Install cloudflared in the container
- ✅ Install and start the tunnel service
- ✅ Verify the setup

---

## Step 3: Configure Tunnel Route (Manual - Cloudflare Dashboard)

### 3.1 Go Back to Tunnel Configuration

1. In Cloudflare Dashboard: **Zero Trust** → **Networks** → **Tunnels**
2. Click on your tunnel name: `thirdweb-rpc-2400`
3. Click **Configure** button

### 3.2 Add Public Hostname

1. Go to **Public Hostname** tab
2. Click **Add a public hostname**

### 3.3 Configure the Route

Fill in the following:

```
Subdomain:    rpc.public-0138
Domain:       defi-oracle.io
Service Type: HTTP
URL:          http://127.0.0.1:8545
```

**Important Notes:**
- The subdomain is `rpc.public-0138` (not just `rpc`)
- The full domain will be: `rpc.public-0138.defi-oracle.io`
- Use `http://127.0.0.1:8545` to connect directly to Besu RPC
- If you have Nginx on port 443, use `https://127.0.0.1:443` instead

### 3.4 Save Configuration

1. Click **Save hostname**
2. Wait a few seconds for the configuration to apply

---

## Step 4: Configure DNS Record (Manual - Cloudflare Dashboard)

### 4.1 Navigate to DNS

1. In Cloudflare Dashboard, go to your account overview
2. Select domain: **defi-oracle.io**
3. Click **DNS** in the left sidebar
4. Click **Records**

### 4.2 Add CNAME Record

1. Click **Add record**

2. Fill in:
   ```
   Type:   CNAME
   Name:   rpc.public-0138
   Target: <your-tunnel-id>.cfargotunnel.com
   Proxy:  🟠 Proxied (orange cloud)
   TTL:    Auto
   ```

3. **To find your tunnel ID:**
   - Go back to **Zero Trust** → **Networks** → **Tunnels**
   - Click on your tunnel: `thirdweb-rpc-2400`
   - The tunnel ID is shown in the URL or in the tunnel details
   - Format: `xxxx-xxxx-xxxx-xxxx` (UUID format)

### 4.3 Save DNS Record

1. Click **Save**
2. Wait 1-2 minutes for DNS propagation

---

## Step 5: Verify Setup

### 5.1 Check Tunnel Status

```bash
# From your local machine, check if the tunnel is running
ssh root@192.168.11.10 "pct exec 2400 -- systemctl status cloudflared"
```

### 5.2 Test DNS Resolution

```bash
# Test DNS resolution
dig rpc.public-0138.defi-oracle.io
nslookup rpc.public-0138.defi-oracle.io

# Should resolve to Cloudflare IPs (if proxied) or tunnel endpoint
```

### 5.3 Test RPC Endpoint

```bash
# Test HTTP RPC endpoint
curl -k https://rpc.public-0138.defi-oracle.io \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'

# Expected: JSON response with block number
```

### 5.4 Verify in Cloudflare Dashboard

1. Go to **Zero Trust** → **Networks** → **Tunnels**
2. Click on `thirdweb-rpc-2400`
3. Status should show **Healthy** (green)
4. You should see the hostname `rpc.public-0138.defi-oracle.io` listed

---

## Troubleshooting

### Tunnel Not Connecting

```bash
# Check cloudflared logs inside the container
ssh root@192.168.11.10 "pct exec 2400 -- journalctl -u cloudflared -f"

# Check if the service is running
ssh root@192.168.11.10 "pct exec 2400 -- systemctl status cloudflared"
```

### DNS Not Resolving

- Wait a few more minutes for DNS propagation
- Verify the CNAME target matches your tunnel ID
- Check that the tunnel is healthy in Cloudflare Dashboard

### Connection Refused

```bash
# Verify Besu RPC is running
ssh root@192.168.11.10 "pct exec 2400 -- systemctl status besu-rpc"

# Test Besu RPC locally
ssh root@192.168.11.10 "pct exec 2400 -- curl -X POST http://127.0.0.1:8545 \
  -H 'Content-Type: application/json' \
  -d '{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}'"
```

---

## Summary

After completing all steps:

✅ Cloudflare tunnel created
✅ Cloudflared installed on VMID 2400
✅ Tunnel service running and connected
✅ Tunnel route configured for `rpc.public-0138.defi-oracle.io`
✅ DNS CNAME record created
✅ RPC endpoint accessible at `https://rpc.public-0138.defi-oracle.io`

**Next Steps:**
- Update the Thirdweb listing with the new RPC URL
- Test with the Thirdweb SDK
- Monitor tunnel status

---

## Quick Reference

**Script Location:** `scripts/setup-cloudflared-vmid2400.sh`
**Documentation:** `docs/04-configuration/THIRDWEB_RPC_CLOUDFLARE_SETUP.md`
**VMID:** 2400
**IP:** 192.168.11.240
**FQDN:** `rpc.public-0138.defi-oracle.io`

docs/01-getting-started/THIRDWEB_RPC_NEXT_STEPS.md (new file, 421 lines)

# ThirdWeb RPC Nodes - Complete Next Steps

## Overview
This document lists all next steps to complete the ThirdWeb RPC node setup, from deployment to integration.

---

## Phase 1: Deploy Containers

### Step 1.1: Run the Setup Script
```bash
cd /home/intlc/projects/proxmox
./scripts/setup-thirdweb-rpc-nodes.sh
```

**Expected outcome:**
- Creates 3 LXC containers (VMIDs 2400-2402)
- Installs Besu RPC software
- Configures static IPs (192.168.11.240-242)
- Sets up systemd services

**Troubleshooting:**
- If containers fail to create, check storage: `ssh root@192.168.11.10 'pvesm status'`
- Verify template exists: `ssh root@192.168.11.10 'pvesm list local'`
- Check SSH access: `ssh root@192.168.11.10 'echo OK'`

---

## Phase 2: Verify Deployment

### Step 2.1: Check Container Status
```bash
# List all ThirdWeb containers
ssh root@192.168.11.10 "pct list | grep -E '240[0-2]'"

# Check individual container status
ssh root@192.168.11.10 "pct status 2400"
ssh root@192.168.11.10 "pct status 2401"
ssh root@192.168.11.10 "pct status 2402"
```

**Expected output:**
```
2400  2400  thirdweb-rpc-1  running
2401  2401  thirdweb-rpc-2  running
2402  2402  thirdweb-rpc-3  running
```
|
||||
|
||||
### Step 2.2: Verify IP Addresses
|
||||
```bash
|
||||
# Check IP configuration for each container
|
||||
ssh root@192.168.11.10 "pct exec 2400 -- hostname -I"
|
||||
ssh root@192.168.11.10 "pct exec 2401 -- hostname -I"
|
||||
ssh root@192.168.11.10 "pct exec 2402 -- hostname -I"
|
||||
```
|
||||
|
||||
**Expected output:**
|
||||
- Container 2400: `192.168.11.240`
|
||||
- Container 2401: `192.168.11.241`
|
||||
- Container 2402: `192.168.11.242`
|
||||
|
||||
### Step 2.3: Test Network Connectivity
|
||||
```bash
|
||||
# Ping each container
|
||||
ping -c 3 192.168.11.240
|
||||
ping -c 3 192.168.11.241
|
||||
ping -c 3 192.168.11.242
|
||||
|
||||
# Test port accessibility
|
||||
nc -zv 192.168.11.240 8545 # HTTP RPC
|
||||
nc -zv 192.168.11.240 8546 # WebSocket RPC
|
||||
nc -zv 192.168.11.240 9545 # Metrics
|
||||
```
|
||||
|
||||
---
|
## Phase 3: Configure Besu Services

### Step 3.1: Verify Besu Installation

```bash
# Check Besu version on each container
ssh root@192.168.11.10 "pct exec 2400 -- /opt/besu/bin/besu --version"
ssh root@192.168.11.10 "pct exec 2401 -- /opt/besu/bin/besu --version"
ssh root@192.168.11.10 "pct exec 2402 -- /opt/besu/bin/besu --version"
```

### Step 3.2: Verify Configuration Files

```bash
# Check the config file exists and is correct
ssh root@192.168.11.10 "pct exec 2400 -- cat /etc/besu/config-rpc-thirdweb.toml"
```

**Verify key settings:**

- `network-id=138`
- `rpc-http-enabled=true`
- `rpc-http-port=8545`
- `rpc-ws-enabled=true`
- `rpc-ws-port=8546`
- `rpc-http-api=["ETH","NET","WEB3","DEBUG","TRACE"]`

### Step 3.3: Check Genesis and Permissions Files

```bash
# Verify the genesis file exists
ssh root@192.168.11.10 "pct exec 2400 -- ls -la /genesis/genesis.json"

# Verify the static nodes file exists
ssh root@192.168.11.10 "pct exec 2400 -- ls -la /genesis/static-nodes.json"

# Verify the permissions file exists
ssh root@192.168.11.10 "pct exec 2400 -- ls -la /permissions/permissions-nodes.toml"
```

**If files are missing:**

- Copy from existing RPC nodes or the source project
- See the `smom-dbis-138/genesis/` and `smom-dbis-138/permissions/` directories

---
## Phase 4: Start and Monitor Services

### Step 4.1: Start Besu Services

```bash
# Start services on all containers
ssh root@192.168.11.10 "pct exec 2400 -- systemctl start besu-rpc.service"
ssh root@192.168.11.10 "pct exec 2401 -- systemctl start besu-rpc.service"
ssh root@192.168.11.10 "pct exec 2402 -- systemctl start besu-rpc.service"

# Enable auto-start on boot
ssh root@192.168.11.10 "pct exec 2400 -- systemctl enable besu-rpc.service"
ssh root@192.168.11.10 "pct exec 2401 -- systemctl enable besu-rpc.service"
ssh root@192.168.11.10 "pct exec 2402 -- systemctl enable besu-rpc.service"
```

### Step 4.2: Check Service Status

```bash
# Check if services are running
ssh root@192.168.11.10 "pct exec 2400 -- systemctl status besu-rpc.service"
ssh root@192.168.11.10 "pct exec 2401 -- systemctl status besu-rpc.service"
ssh root@192.168.11.10 "pct exec 2402 -- systemctl status besu-rpc.service"
```

**Expected status:** `Active: active (running)`

### Step 4.3: Monitor Service Logs

```bash
# View recent logs
ssh root@192.168.11.10 "pct exec 2400 -- journalctl -u besu-rpc.service -n 100"

# Follow logs in real-time (Ctrl+C to exit)
ssh root@192.168.11.10 "pct exec 2400 -- journalctl -u besu-rpc.service -f"
```

**Look for:**

- `Besu is listening on` messages
- A `P2P started` message
- Any error messages

---
## Phase 5: Test RPC Endpoints

### Step 5.1: Test HTTP RPC Endpoints

```bash
# Test each RPC endpoint
curl -X POST http://192.168.11.240:8545 \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'

curl -X POST http://192.168.11.241:8545 \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'

curl -X POST http://192.168.11.242:8545 \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```

**Expected response:**

```json
{"jsonrpc":"2.0","id":1,"result":"0x..."}
```
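The `result` field is a hex-encoded block number. A quick shell sketch for converting it to decimal; the sample response below is hard-coded for illustration, and in practice you would pipe the `curl` output in:

```bash
# Sample eth_blockNumber response; substitute the real curl output in practice
response='{"jsonrpc":"2.0","id":1,"result":"0x1b4"}'

# Extract the hex result field (a sed pattern is enough for this fixed shape)
hex=$(printf '%s' "$response" | sed -n 's/.*"result":"\(0x[0-9a-fA-F]*\)".*/\1/p')

# Shell arithmetic understands the 0x prefix, so this prints the decimal height
echo $((hex))  # → 436
```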
### Step 5.2: Test WebSocket Endpoints

```bash
# Install wscat if needed: npm install -g wscat

# Test WebSocket connection
wscat -c ws://192.168.11.240:8546

# Then send: {"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}
```

### Step 5.3: Test Additional RPC Methods

```bash
# Get chain ID
curl -X POST http://192.168.11.240:8545 \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'

# Get network ID
curl -X POST http://192.168.11.240:8545 \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"net_version","params":[],"id":1}'

# Get client version
curl -X POST http://192.168.11.240:8545 \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"web3_clientVersion","params":[],"id":1}'
```

### Step 5.4: Check Metrics Endpoints

```bash
# Check metrics (Prometheus format)
curl http://192.168.11.240:9545/metrics | head -20
```

---
## Phase 6: ThirdWeb Integration

### Step 6.1: Configure ThirdWeb SDK

**JavaScript/TypeScript:**

```javascript
import { ThirdwebSDK } from "@thirdweb-dev/sdk";

// HTTP RPC endpoint
const sdk = new ThirdwebSDK("http://192.168.11.240:8545", {
  supportedChains: [138], // Your ChainID
});

// Or with WebSocket for subscriptions
const wsSdk = new ThirdwebSDK("ws://192.168.11.240:8546", {
  supportedChains: [138],
});
```

### Step 6.2: Set Environment Variables

```bash
# Add to your .env file
echo "THIRDWEB_RPC_URL=http://192.168.11.240:8545" >> .env
echo "THIRDWEB_RPC_WS_URL=ws://192.168.11.240:8546" >> .env
echo "THIRDWEB_CHAIN_ID=138" >> .env
```

### Step 6.3: Configure ThirdWeb Dashboard

1. Go to ThirdWeb Dashboard → Settings → Networks
2. Click "Add Custom Network"
3. Enter:
   - **Network Name**: ChainID 138 (Custom)
   - **RPC URL**: `http://192.168.11.240:8545`
   - **Chain ID**: `138`
   - **Currency Symbol**: Your token symbol
   - **Block Explorer**: (Optional) Your explorer URL

### Step 6.4: Test ThirdWeb Connection

```javascript
// Test connection
const provider = await sdk.getProvider();
const network = await provider.getNetwork();
console.log("Connected to:", network.chainId);
```
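Independent of the SDK, it is worth sanity-checking that the node really serves chain 138: `eth_chainId` returns a hex string, and 138 decimal is `0x8a`. A minimal shell sketch with the hex value inlined (in practice, extract it from the `curl` response in Step 5.3):

```bash
expected_chain_id=138
# eth_chainId returns a hex string; 138 decimal is 0x8a
returned_hex="0x8a"

# Shell arithmetic converts the 0x-prefixed hex for comparison
if [ "$((returned_hex))" -eq "$expected_chain_id" ]; then
  echo "chain id OK (${returned_hex} = ${expected_chain_id})"
else
  echo "chain id MISMATCH: got ${returned_hex}, expected ${expected_chain_id}"
fi
```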
---

## Phase 7: Production Configuration

### Step 7.1: Set Up Load Balancing (Optional)

**Nginx Configuration:**

```nginx
upstream thirdweb_rpc {
    least_conn;
    server 192.168.11.240:8545;
    server 192.168.11.241:8545;
    server 192.168.11.242:8545;
}

server {
    listen 80;
    server_name rpc.thirdweb.yourdomain.com;

    location / {
        proxy_pass http://thirdweb_rpc;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}
```

### Step 7.2: Configure Cloudflare Tunnel (Optional)

**Add to cloudflared config:**

```yaml
ingress:
  - hostname: rpc-thirdweb.d-bis.org
    service: http://192.168.11.240:8545
  - hostname: rpc-thirdweb-2.d-bis.org
    service: http://192.168.11.241:8545
  - hostname: rpc-thirdweb-3.d-bis.org
    service: http://192.168.11.242:8545
```

### Step 7.3: Set Up Monitoring

**Monitor metrics:** set up Prometheus scraping by adding the three nodes as targets in `prometheus.yml`:

```yaml
scrape_configs:
  - job_name: 'thirdweb-rpc'
    static_configs:
      - targets:
          - '192.168.11.240:9545'
          - '192.168.11.241:9545'
          - '192.168.11.242:9545'
```
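With the targets scraped, Prometheus can also alert when a node drops out. A sketch of a rule file; the group name, alert name, and 2-minute window are illustrative choices, not project conventions:

```yaml
# alert-rules.yml — reference it from prometheus.yml under rule_files:
groups:
  - name: thirdweb-rpc
    rules:
      - alert: ThirdwebRpcDown
        # up == 0 means the last scrape of that target failed
        expr: up{job="thirdweb-rpc"} == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "ThirdWeb RPC node {{ $labels.instance }} is down"
```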
---

## Phase 8: Documentation and Maintenance

### Step 8.1: Update Documentation

- [ ] Update infrastructure documentation with the new IPs
- [ ] Document ThirdWeb RPC endpoints
- [ ] Add monitoring dashboards
- [ ] Document the load balancing setup (if applicable)

### Step 8.2: Create Backup Procedures

```bash
# Backup Besu data directories
ssh root@192.168.11.10 "pct exec 2400 -- tar -czf /tmp/besu-backup-$(date +%Y%m%d).tar.gz /data/besu"

# Backup configuration files
ssh root@192.168.11.10 "pct exec 2400 -- tar -czf /tmp/besu-config-$(date +%Y%m%d).tar.gz /etc/besu"
```

### Step 8.3: Set Up Health Checks

**Create health check script:**

```bash
#!/bin/bash
# health-check-thirdweb-rpc.sh

for ip in 192.168.11.240 192.168.11.241 192.168.11.242; do
  if curl -s -X POST http://${ip}:8545 \
    -H 'Content-Type: application/json' \
    --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
    | grep -q "result"; then
    echo "✓ ${ip}:8545 is healthy"
  else
    echo "✗ ${ip}:8545 is down"
  fi
done
```
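For cron-based alerting, the script's output can simply be grepped for the failure marker. The sketch below simulates the script's output in a variable so it is self-contained; in cron you would pipe `./health-check-thirdweb-rpc.sh` directly instead:

```bash
# Stand-in for the health check script's output; in cron, pipe the script itself
health_output='✓ 192.168.11.240:8545 is healthy
✗ 192.168.11.241:8545 is down'

# Any line containing ✗ means at least one node failed the check
if printf '%s\n' "$health_output" | grep -q '✗'; then
  echo "ALERT: one or more RPC nodes unhealthy"
else
  echo "all RPC nodes healthy"
fi
```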
---

## Troubleshooting Checklist

If containers fail to start:

- [ ] Check storage availability: `pvesm status`
- [ ] Verify the template exists: `pvesm list local`
- [ ] Inspect the container configuration: `pct config <VMID>`

If Besu services fail:

- [ ] Check service logs: `journalctl -u besu-rpc.service -f`
- [ ] Verify the config file parses: start Besu in the foreground with `--config-file=/etc/besu/config-rpc-thirdweb.toml` and watch for TOML errors
- [ ] Check disk space: `df -h`
- [ ] Verify network connectivity to validators/sentries

If RPC endpoints don't respond:

- [ ] Verify firewall rules: `iptables -L -n | grep 8545`
- [ ] Check Besu is listening: `netstat -tlnp | grep 8545`
- [ ] Verify chain sync: check the logs for sync progress
- [ ] Test connectivity: `ping` and `nc` tests

---

## Quick Reference Commands

```bash
# Status check
ssh root@192.168.11.10 "pct list | grep 240"

# Restart all services
for vmid in 2400 2401 2402; do
  ssh root@192.168.11.10 "pct exec $vmid -- systemctl restart besu-rpc.service"
done

# View all logs
for vmid in 2400 2401 2402; do
  echo "=== Container $vmid ==="
  ssh root@192.168.11.10 "pct exec $vmid -- journalctl -u besu-rpc.service -n 20"
done

# Test all endpoints
for ip in 240 241 242; do
  curl -X POST http://192.168.11.${ip}:8545 \
    -H 'Content-Type: application/json' \
    --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
done
```

---

## Completion Checklist

- [ ] All containers created and running
- [ ] IP addresses configured correctly
- [ ] Besu services started and enabled
- [ ] RPC endpoints responding
- [ ] ThirdWeb SDK configured
- [ ] Load balancing configured (if needed)
- [ ] Monitoring set up (if needed)
- [ ] Documentation updated
- [ ] Health checks implemented
73
docs/01-getting-started/THIRDWEB_RPC_QUICKSTART.md
Normal file
@@ -0,0 +1,73 @@
# ThirdWeb RPC Nodes - Quick Start

## Summary

Setup complete! Ready to deploy the ThirdWeb RPC node LXC containers.

## What Was Created

1. **Setup Script**: `scripts/setup-thirdweb-rpc-nodes.sh`
   - Creates 3 LXC containers (VMIDs 2400-2402)
   - Installs and configures Besu RPC nodes
   - Optimized for ThirdWeb SDK integration

2. **Configuration**: `smom-dbis-138/config/config-rpc-thirdweb.toml`
   - ThirdWeb-optimized Besu configuration
   - WebSocket support enabled
   - Extended APIs (DEBUG, TRACE)
   - Increased transaction pool and timeout settings

3. **Documentation**: `docs/THIRDWEB_RPC_SETUP.md`
   - Complete setup and usage guide
   - Integration examples
   - Troubleshooting tips

## Container Details

| VMID | Hostname | IP Address | Status |
|------|----------|------------|--------|
| 2400 | thirdweb-rpc-1 | 192.168.11.240 | Ready to deploy |
| 2401 | thirdweb-rpc-2 | 192.168.11.241 | Ready to deploy |
| 2402 | thirdweb-rpc-3 | 192.168.11.242 | Ready to deploy |

**Note**: VMIDs align with IP addresses - VMID 2400 = 192.168.11.240
## Quick Deploy

```bash
# Run the setup script
cd /home/intlc/projects/proxmox
./scripts/setup-thirdweb-rpc-nodes.sh
```

## RPC Endpoints

After deployment, you'll have:

- **HTTP RPC**: `http://192.168.11.240:8545`
- **WebSocket RPC**: `ws://192.168.11.240:8546`
- **Metrics**: `http://192.168.11.240:9545/metrics`

## ThirdWeb Integration

```javascript
import { ThirdwebSDK } from "@thirdweb-dev/sdk";

const sdk = new ThirdwebSDK("http://192.168.11.240:8545", {
  supportedChains: [138],
});
```

## Next Steps

1. Review the full documentation: `docs/THIRDWEB_RPC_SETUP.md`
2. Run the setup script to create the containers
3. Verify the endpoints are accessible
4. Configure the ThirdWeb Dashboard to use the RPC endpoints
5. Test with your ThirdWeb dApps

## Support

- Check container status: `ssh root@192.168.11.10 'pct list | grep 240'`
- View logs: `ssh root@192.168.11.10 'pct exec 2400 -- journalctl -u besu-rpc.service -f'`
- Test RPC: `curl -X POST http://192.168.11.240:8545 -H 'Content-Type: application/json' --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'`