chore: update submodule references and documentation
Some checks failed
Deploy to Phoenix / deploy (push) Has been cancelled
- Marked submodules ai-mcp-pmm-controller, explorer-monorepo, and smom-dbis-138 as dirty to reflect recent changes.
- Updated documentation to clarify operator script usage, including dotenv loading and task execution instructions.
- Enhanced the README and various index files to provide clearer navigation and task completion guidance.

Made-with: Cursor
@@ -26,9 +26,10 @@
## Optional work

- **Checklist:** [OPTIONAL_TASKS_CHECKLIST.md](OPTIONAL_TASKS_CHECKLIST.md) — consolidated optional tasks (Done / Pending / Operator-only).
- **Infrastructure:** Phase 1 VLAN, NPMplus HA. (2506–2508 destroyed 2026-02-08; RPC 2500–2505 only.) [OPTIONAL_RECOMMENDATIONS_INDEX.md](../OPTIONAL_RECOMMENDATIONS_INDEX.md), [MISSING_CONTAINERS_LIST.md](../03-deployment/MISSING_CONTAINERS_LIST.md), [NPMPLUS_HA_SETUP_GUIDE.md](../04-configuration/NPMPLUS_HA_SETUP_GUIDE.md).
- **Docs/tooling:** Documentation consolidation; Paymaster deploy when ready.
- **MetaMask/explorer:** Token-aggregation, CoinGecko, Snap features, explorer enhancements. [COINGECKO_SUBMISSION.md](../../smom-dbis-138/services/token-aggregation/docs/COINGECKO_SUBMISSION.md).
- **MetaMask/explorer:** Token-aggregation, CoinGecko, Snap features, explorer enhancements; Wallet link runbook: [EXPLORER_WALLET_LINK_QUICK_WIN.md](../04-configuration/EXPLORER_WALLET_LINK_QUICK_WIN.md). [COINGECKO_SUBMISSION.md](../../smom-dbis-138/services/token-aggregation/docs/COINGECKO_SUBMISSION.md).

---
@@ -13,7 +13,7 @@

## Remaining tasks (summary)

Steps 1–2 and the Chain 138 “all in one” run (step 3) are **done** (2026-03-02). **Task check (2026-03-02):** Each remaining task was verified; see [TASK_CHECK_REPORT.md](TASK_CHECK_REPORT.md) for per-task status and what can be completed only by Operator/LAN or externally. What remains:
Steps 1–2 and the Chain 138 “all in one” run (step 3) are **done** (2026-03-02). **Single-page summary of what remains:** [REMAINING_SUMMARY.md](REMAINING_SUMMARY.md) (operator/LAN and external only). **Task check (2026-03-02):** See [TASK_CHECK_REPORT.md](TASK_CHECK_REPORT.md) for per-task status. What remains:

| # | Task | Who | Command / doc |
|---|------|-----|----------------|
@@ -3,6 +3,10 @@
**Last Updated:** 2026-03-02
**Purpose:** Single list of what **you** need to do next (no infra/automation). Everything else the repo can do has been completed or documented.

**Completed (next steps run):** `run-completable-tasks-from-anywhere.sh` — config OK, on-chain 38/38, validation OK, reconcile-env. `preflight-chain138-deploy.sh` — passed. `run-all-next-steps-chain138.sh` — preflight passed; TransactionMirror and cUSDT/cUSDC pool already present; all 12 c* already GRU-registered; verification 38/38. `validate-config-files.sh` — passed. `run-e2e-flow-tasks-full-parallel.sh --dry-run` — waves E0–E7 listed.

**Continue and complete (2026-02-27):** Re-ran `run-completable-tasks-from-anywhere.sh` — all 4 steps passed (config, on-chain 38/38, validation, reconcile-env). Re-ran `run-all-operator-tasks-from-lan.sh --skip-backup` — dotenv loaded automatically; Blockscout verification completed (W0-1 NPMplus failed off-LAN as expected). Docs: REMAINING_SUMMARY "Continue and complete" section added; TODOS_CONSOLIDATED and NEXT_STEPS_FOR_YOU updated for operator script loading dotenv.

**Completed 2026-03-02:** Documentation consolidation: [MASTER_INDEX.md](../MASTER_INDEX.md), [README.md](../README.md), [RUNBOOKS_MASTER_INDEX.md](../RUNBOOKS_MASTER_INDEX.md) created; deprecated content (ALL_IMPROVEMENTS_AND_GAPS_INDEX) marked redirect-only. `run-completable-tasks-from-anywhere.sh` run: config OK, on-chain 38/38, validation OK, reconcile-env. **Preflight** and **run-all-next-steps-chain138.sh** run: preflight passed; mirror/pool already deployed; all 12 c* already registered as GRU; verification 38/38. Next steps index and TODOS_CONSOLIDATED updated.

**Completed 2026-02-27:** Chain 138 "run all next steps" script added: `./scripts/deployment/run-all-next-steps-chain138.sh` (preflight → mirror+pool → register c* as GRU → verify). Docs updated: NEXT_STEPS_INDEX, DEPLOYMENT_ORDER_OF_OPERATIONS, TODOS_CONSOLIDATED, CONTRACT_NEXT_STEPS_LIST.
@@ -62,7 +66,7 @@ These can be run from your current machine (dev, WSL, CI) without Proxmox or Led
- **Lighter option:** `./scripts/maintenance/address-all-remaining-502s.sh` — backends + NPMplus proxy (if `NPM_PASSWORD` in .env) + RPC diagnostics; add `--run-besu-fix --e2e` to fix Besu config and re-run E2E.
- Full runbook: [502_DEEP_DIVE_ROOT_CAUSES_AND_FIXES.md](502_DEEP_DIVE_ROOT_CAUSES_AND_FIXES.md).

**Single script (from repo root on LAN with smom-dbis-138/.env):**
**Single script (from repo root on LAN; loads dotenv automatically from .env and smom-dbis-138/.env):**

- `./scripts/run-all-operator-tasks-from-lan.sh --dry-run` — print steps
- `./scripts/run-all-operator-tasks-from-lan.sh` — backup + Blockscout verify
- `./scripts/run-all-operator-tasks-from-lan.sh --deploy` — + deploy contracts (phased + TransactionMirror if needed)
@@ -77,8 +81,9 @@ These can be run from your current machine (dev, WSL, CI) without Proxmox or Led
- **Blockscout verification:** From a host that can reach Blockscout (e.g. LAN), run:

  ```bash
  source smom-dbis-138/.env 2>/dev/null; ./scripts/verify/run-contract-verification-with-proxy.sh
  ./scripts/run-all-operator-tasks-from-lan.sh --skip-backup
  ```

  (Script loads dotenv from .env and smom-dbis-138/.env automatically.) Or run only verify: `./scripts/verify/run-contract-verification-with-proxy.sh` after sourcing .env.
  Or verify each contract manually at https://explorer.d-bis.org/address/<ADDRESS>#verify-contract.

- **On-chain contract check:** Re-run when you add new contracts (or to confirm from LAN):
@@ -1,10 +1,12 @@
# Next Steps — Index

**Last Updated:** 2026-03-02
**Last Updated:** 2026-03-04
**Purpose:** Single entry point for "what to do next." Pick by audience and granularity.

**Documentation index:** [../MASTER_INDEX.md](../MASTER_INDEX.md) — canonical docs, deprecated list, and navigation.

**Continue and complete (operator/LAN):** (1) `./scripts/run-completable-tasks-from-anywhere.sh` then (2) `./scripts/run-all-operator-tasks-from-lan.sh` (use `--skip-backup` if `NPM_PASSWORD` not set). Operator scripts load dotenv automatically.

---

## Next steps (ordered)
@@ -19,7 +21,7 @@
| 6 | Repos & PRs (Ledger, Trust, Chainlist, on-ramps; forms pending) | [REPOSITORIES_AND_PRS_CHAIN138.md](REPOSITORIES_AND_PRS_CHAIN138.md) | Remaining (External) |
| 7 | PR-ready files (Chainlist, Trust Wallet) | [04-configuration/pr-ready/README.md](../04-configuration/pr-ready/README.md) | Remaining |

**Remaining tasks (full list):** [NEXT_STEPS_AND_REMAINING_TODOS.md](NEXT_STEPS_AND_REMAINING_TODOS.md) § Remaining tasks.
**Remaining (one page):** [REMAINING_SUMMARY.md](REMAINING_SUMMARY.md) — in-repo complete; operator/LAN and external only. **Remaining tasks (full list):** [NEXT_STEPS_AND_REMAINING_TODOS.md](NEXT_STEPS_AND_REMAINING_TODOS.md) § Remaining tasks.

**Full list:** [NEXT_STEPS_AND_REMAINING_TODOS.md](NEXT_STEPS_AND_REMAINING_TODOS.md) § Next steps (ordered). **E2E flows (swap, bridge, swap-bridge-swap):** [TASKS_TO_INCREASE_ALL_E2E_FLOWS.md](TASKS_TO_INCREASE_ALL_E2E_FLOWS.md). Run E2E tasks in full parallel: `./scripts/run-e2e-flow-tasks-full-parallel.sh [--dry-run] [--wave E1]`. **Task list review (deprecated/duplicates):** [TASK_LIST_REVIEW_2026_03_01.md](TASK_LIST_REVIEW_2026_03_01.md).
@@ -11,7 +11,7 @@
This document is the **single source of truth** for all next steps and remaining tasks across the project. Use it for prioritization, sprint planning, and status reporting.

**Consolidated checklist (all next steps + remaining TODOs):** [NEXT_STEPS_AND_REMAINING_TODOS.md](NEXT_STEPS_AND_REMAINING_TODOS.md) — single list with Operator/LAN vs in-repo marked. **Single-file task list:** [TODOS_CONSOLIDATED.md](TODOS_CONSOLIDATED.md).
**Consolidated checklist (all next steps + remaining TODOs):** [NEXT_STEPS_AND_REMAINING_TODOS.md](NEXT_STEPS_AND_REMAINING_TODOS.md) — single list with Operator/LAN vs in-repo marked. **Single-file task list:** [TODOS_CONSOLIDATED.md](TODOS_CONSOLIDATED.md). **Optional tasks only (Done / Pending):** [OPTIONAL_TASKS_CHECKLIST.md](OPTIONAL_TASKS_CHECKLIST.md).

**Your next actions:** [NEXT_STEPS_FOR_YOU.md](NEXT_STEPS_FOR_YOU.md) — Ledger form ✅ submitted (2026-02-13); all remaining steps optional (Blockscout, on-chain check, etc.).
**Remaining components, tasks, and all recommendations:** [REMAINING_COMPONENTS_TASKS_AND_RECOMMENDATIONS.md](REMAINING_COMPONENTS_TASKS_AND_RECOMMENDATIONS.md) — single list of what’s left and what to implement.
docs/00-meta/OPERATOR_CREDENTIALS_CHECKLIST.md — 69 lines (new file)
@@ -0,0 +1,69 @@
# Operator Credentials and Secrets — Checklist

**Purpose:** Before running Operator/LAN tasks, confirm you have the required credentials and access. **Does Operator/LAN have all necessary creds?** Use this checklist; if any row is **No**, obtain or set that credential before running the task.

**Where to set:** Unless noted, use `smom-dbis-138/.env` (gitignored). Copy from `smom-dbis-138/.env.example` or see [REMAINING_WORK_DETAILED_STEPS](REMAINING_WORK_DETAILED_STEPS.md) for per-step blockers.
 
**Operator scripts load dotenv automatically:** [run-all-operator-tasks-from-lan.sh](../../scripts/run-all-operator-tasks-from-lan.sh) and [run-operator-tasks-from-lan.sh](../../scripts/run-operator-tasks-from-lan.sh) source `scripts/lib/load-project-env.sh`, which loads repo root `.env` and `smom-dbis-138/.env`. No need to `source .env` before running.
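In outline, a loader like `scripts/lib/load-project-env.sh` only needs to source each env file that exists with auto-export enabled. The sketch below is a hypothetical minimal version (assuming plain `KEY=value` lines); the repo's actual script may do more (ordering, validation):

```bash
#!/bin/sh
# Hypothetical sketch of a project dotenv loader (the real
# scripts/lib/load-project-env.sh may differ): source each env file
# that exists, auto-exporting every assignment it contains.
load_project_env() {
  for f in .env smom-dbis-138/.env; do
    if [ -f "$f" ]; then
      set -a          # export all variables assigned while sourcing
      # shellcheck disable=SC1090
      . "$f"
      set +a
    fi
  done
}
```

With a loader like this sourced by the operator scripts, `PRIVATE_KEY`, `NPM_PASSWORD`, etc. are available without a manual `source .env`.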
 
---
## Required credentials (summary)

| Credential / access | Used for | Where to set / get |
|--------------------|----------|---------------------|
| **LAN (192.168.11.x)** | NPMplus API, RPC, Blockscout, Proxmox | Be on same network or VPN |
| **PRIVATE_KEY** (64-char hex, no 0x) | Chain 138 deploy, bridge send, any `forge script --broadcast` | `smom-dbis-138/.env` |
| **RPC_URL_138** (Chain 138 Core) | Deploy, verify, on-chain check | e.g. `http://192.168.11.211:8545` in `.env` |
| **NPM_PASSWORD** | NPMplus backup, proxy host updates (502 fix) | `smom-dbis-138/.env` or root `.env`; from NPMplus UI |
| **SSH to Proxmox** (e.g. root@192.168.11.10) | run-all-maintenance-via-proxmox-ssh, VM/CT creation, token-aggregation fix | SSH key or password to Proxmox host |
| **LINK** (on Chain 138 for bridge) | sendCrossChain (real); CCIP fees | Deployer wallet must hold LINK and approve bridge |
| **Native gas (ETH/138)** | All Chain 138 deploys and txs | Deployer `0x4A66...` funded on 138 |
| **Per-chain RPC + gas (Celo, Wemix, Gnosis)** | CCIP bridges deploy | CELO ~0.1, WEMIX ~0.4; RPC URLs in .env |
| **ADD_LIQUIDITY_* amounts + token balance** | Add liquidity to PMM pools | Deployer holds cUSDT/cUSDC/USDT/USDC; set in .env or runbook |

---
## Per-task requirements (Operator/LAN)

| Task | LAN | PRIVATE_KEY | NPM_PASSWORD | RPC_URL_138 | SSH Proxmox | Other |
|------|-----|-------------|--------------|-------------|-------------|--------|
| Full deployment order (Phase 0–6) | Yes | Yes | — | Yes | Optional | Gas on 138; per-phase env (see runbook) |
| Add liquidity (PMM pools) | Yes | Yes | — | Yes | — | Token balance; ADD_LIQUIDITY_BASE_AMOUNT, ADD_LIQUIDITY_QUOTE_AMOUNT |
| run-all-operator-tasks-from-lan (backup + verify) | Yes | — | Yes (backup) | Yes (verify) | Optional | Blockscout reachable |
| run-all-operator-tasks-from-lan --deploy | Yes | Yes | Yes | Yes | Optional | Gas on 138 |
| E2E 502 fix (address-all-remaining-502s) | Yes | — | Yes (NPMplus proxy update) | — | Yes (Besu fix) | Proxmox reachable |
| Blockscout verification only | Yes | — | — | Yes | — | Host can reach explorer.d-bis.org |
| Gnosis / Celo / Wemix CCIP bridges | Yes | Yes | — | Yes + per-chain RPC | — | Per-chain gas (xDAI, CELO, WEMIX); CCIP router/LINK addresses in .env |
| LINK support on Mainnet relay | Yes | Yes (if deploy) | — | Yes | Yes (restart relay) | Mainnet RPC; LINK on mainnet if funding relay |
| sendCrossChain (real) | Yes | Yes | — | Yes | — | LINK approved for bridge; recipient address |
| NPMplus backup | Yes | — | Yes | — | — | NPMplus API reachable |
| NPMplus RPC proxy fix (405) | Yes | — | Yes | — | — | — |
| Token-aggregation DB + migrations | Yes | — | — | — | Yes | PostgreSQL on VMID 5000 or same host; DATABASE_URL |
| Explorer Wallet link (edit nav) | — | — | — | — | Yes (to explorer VM) | SSH to VMID 5000 or host serving explorer |
| E2E flow waves E1–E7 | Yes | Yes (if deploy/fund) | Yes (if NPM) | Yes | Optional | Depends on wave; see TASKS_TO_INCREASE_ALL_E2E_FLOWS |

**—** = not required for that task.

---
## Quick verification (do you have them?)
```bash
# From repo root, with smom-dbis-138/.env present:
source smom-dbis-138/.env 2>/dev/null
echo "PRIVATE_KEY set: $( [ -n "$PRIVATE_KEY" ] && echo yes || echo no )"
echo "NPM_PASSWORD set: $( [ -n "$NPM_PASSWORD" ] && echo yes || echo no )"
echo "RPC_URL_138 set: $( [ -n "$RPC_URL_138" ] && echo yes || echo no )"
# LAN: ping or curl from your machine to 192.168.11.211:8545 (or your RPC host)
# SSH: ssh root@192.168.11.10 (or your Proxmox host) echo ok
```
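The echo lines above can be folded into one reusable check that fails fast when something is missing. This is a hypothetical helper, not one of the repo's scripts:

```bash
#!/bin/sh
# Hypothetical sketch: report each required variable and return non-zero
# if any is unset or empty (the checklist above uses plain echo lines).
require_env() {
  missing=0
  for name in "$@"; do
    # indirect lookup of $name via eval (POSIX sh has no ${!name})
    eval "val=\${$name:-}"
    if [ -z "$val" ]; then
      echo "MISSING: $name"
      missing=1
    else
      echo "ok: $name"
    fi
  done
  return "$missing"
}
```

Usage: `require_env PRIVATE_KEY NPM_PASSWORD RPC_URL_138 || exit 1` after sourcing the env files.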
 
---
## References

- **Operator commands:** [OPERATOR_READY_CHECKLIST.md](OPERATOR_READY_CHECKLIST.md)
- **LAN + secrets steps:** [STEPS_FROM_PROXMOX_OR_LAN_WITH_SECRETS.md](STEPS_FROM_PROXMOX_OR_LAN_WITH_SECRETS.md)
- **Wave 0 (sendCrossChain, backup):** [REMAINING_WORK_DETAILED_STEPS.md](REMAINING_WORK_DETAILED_STEPS.md) § W0-2, W0-3
- **Remaining summary:** [REMAINING_SUMMARY.md](REMAINING_SUMMARY.md)
@@ -1,10 +1,14 @@
# Operator Ready Checklist — Copy-Paste Commands

**Last Updated:** 2026-03-02
**Last Updated:** 2026-03-04
**Purpose:** Single page with exact commands to complete every pending todo. Run from **repo root** on a host with **LAN** access (and `smom-dbis-138/.env` with `PRIVATE_KEY`, `NPM_PASSWORD` where noted).

**Do you have all necessary creds?** See [OPERATOR_CREDENTIALS_CHECKLIST.md](OPERATOR_CREDENTIALS_CHECKLIST.md) — per-task list of LAN, PRIVATE_KEY, NPM_PASSWORD, RPC_URL_138, SSH, LINK, gas, token balance.

**From anywhere (no LAN):** `./scripts/run-completable-tasks-from-anywhere.sh`

**Remaining for full network coverage (13-chain max execution):** [REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE.md](../03-deployment/REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE.md) — Phase A (mint + add liquidity 138) → B (Celo/Wemix CCIP + LINK) → C (cW* + edge pools). **2026-03-04:** Celo CCIP bridges ✅ deployed; Phase C runbook and Phase D checklist added. Mint (A.1): retry if it times out; Wemix needs 0.4 WEMIX.

---

## 1. High: Gnosis, Celo, Wemix CCIP bridges
docs/00-meta/OPTIONAL_TASKS_CHECKLIST.md — 80 lines (new file)
@@ -0,0 +1,80 @@
# Optional Tasks — Consolidated Checklist

**Purpose:** Single checklist of optional tasks across the repo with status (Done / Pending / Operator-only). Use for prioritization and tracking.

**Sources:** [REMAINING_TASKS.md](../REMAINING_TASKS.md), [DBIS_RAIL_AND_PROJECT_COMPLETION_MASTER_V1.md](../dbis-rail/DBIS_RAIL_AND_PROJECT_COMPLETION_MASTER_V1.md) § B/D/E, [IMPLEMENTATION_CHECKLIST.md](../10-best-practices/IMPLEMENTATION_CHECKLIST.md), [OPTIONAL_RECOMMENDATIONS_INDEX.md](../OPTIONAL_RECOMMENDATIONS_INDEX.md).

---

## Completed (optional)

| Task | Source | Notes |
|------|--------|--------|
| MCP plan upgrades (8 items: multi-chain allowlist, Uniswap get_pool_state, bot_state, webhook, merge script, rate limits, audit log, router stub) | MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES §5 | [Implementation status](../03-deployment/MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md#51-implementation-status-all-completed) |
| Allowlist sync with mesh (generate-mcp-allowlist-from-chain138.sh) | PMM plan | Script + doc |
| Per-chain allowlist from deployment-status | SINGLE_SIDED runbook | generate-mcp-allowlist-from-deployment-status.sh |
| Merge multi-chain allowlist script | MCP plan rec #5 | scripts/merge-mcp-allowlist-multichain.sh |
| Explorer Wallet link runbook | Quick win | [EXPLORER_WALLET_LINK_QUICK_WIN.md](../04-configuration/EXPLORER_WALLET_LINK_QUICK_WIN.md) — runbook written; operator still runs steps on VM |

---
## Pending — Quick wins (< 1 hour)

| Task | Effort | Blocker | Reference |
|------|--------|---------|------------|
| Add Wallet link to explorer navbar | 15 min | SSH to explorer VM | [EXPLORER_WALLET_LINK_QUICK_WIN.md](../04-configuration/EXPLORER_WALLET_LINK_QUICK_WIN.md) |
| CoinGecko submission | 1 hour | External | [COINGECKO_SUBMISSION_GUIDE.md](../04-configuration/coingecko/COINGECKO_SUBMISSION_GUIDE.md) |
| Consensys outreach | 1 hour | External | metamask-integration/docs/CONSENSYS_OUTREACH_PACKAGE.md |
| Test Snap in MetaMask Flask | 1 hour | Local/Flask | REMAINING_TASKS § Quick Wins |

---
## Pending — MetaMask & Explorer (optional)

| Task | Priority | Reference |
|------|----------|-----------|
| Token-aggregation service deployment | Medium | REMAINING_TASKS § 1; smom-dbis-138/services/token-aggregation |
| Chain 138 Snap: market data, swap, bridge flows | Low | REMAINING_TASKS § 1, § 4 |
| Explorer: sync status indicator, network selector, dark mode | Low | REMAINING_TASKS § 2 |
| Token-aggregation: production deploy, API keys, monitoring, auth | Medium/Low | REMAINING_TASKS § 3 |

---
## Pending — DBIS Rail optional (B, D, E)

| ID | Task | Reference |
|----|------|-----------|
| B1–B7 | Signer effective-from/revoked-at; idempotency; Merkle root; validator governance; Public Overview PDF; control mapping; code audit | [DBIS_RAIL_AND_PROJECT_COMPLETION_MASTER_V1.md](../dbis-rail/DBIS_RAIL_AND_PROJECT_COMPLETION_MASTER_V1.md) § B |
| D1–D7 | EnhancedSwapRouter; trustless stack; CCIP other chains; LINK relay; cW* edge pools; R1–R24; full 139 recommendations | § D |
| E1–E5 | Wave 0–3 (NPMplus, backup, sendCrossChain, validation); Phase 1 VLAN; NPMplus HA | § E |

---
## Pending — Implementation checklist (security, monitoring, backup)

| Category | Tasks | Reference |
|----------|--------|-----------|
| Security | .env permissions; validator key permissions; SSH key auth; firewall Proxmox API; VLANs | [IMPLEMENTATION_CHECKLIST.md](../10-best-practices/IMPLEMENTATION_CHECKLIST.md) § High |
| Monitoring | Metrics (9545); health checks; alert script | § High |
| Backup | Automated backup; validator key backup (encrypted); config backup | § High |
| Testing / Docs | Integration tests for deploy scripts; runbooks (validator add/remove, upgrade, key rotation) | § High |
| Medium/Low | Retry/error handling; structured logging; performance; automation; UI/security | § Medium, § Low |

---
## Operator-only (LAN / credentials / external)

| Task | Notes |
|------|--------|
| Wave 0: NPMplus RPC fix, sendCrossChain (real), NPMplus backup | [COMPLETE_REQUIRED_OPTIONAL_RECOMMENDED_INDEX.md](COMPLETE_REQUIRED_OPTIONAL_RECOMMENDED_INDEX.md) |
| Run validation (run-all-validation.sh, validate-config-files.sh) | Wave 1 |
| Add Wallet link (run runbook on explorer VM) | [EXPLORER_WALLET_LINK_QUICK_WIN.md](../04-configuration/EXPLORER_WALLET_LINK_QUICK_WIN.md) |
| Token-aggregation deploy (PostgreSQL, env) | Requires host/DB |
| CoinGecko / Consensys | External submission |

---
## Maintenance

- Update this checklist when optional items are completed or new ones are added.
- Link from [OPTIONAL_RECOMMENDATIONS_INDEX.md](../OPTIONAL_RECOMMENDATIONS_INDEX.md) and [NEXT_STEPS_MASTER.md](NEXT_STEPS_MASTER.md) as needed.
docs/00-meta/REMAINING_SUMMARY.md — 92 lines (new file)
@@ -0,0 +1,92 @@
# Remaining Work — Summary

**Last Updated:** 2026-02-27
**Purpose:** Single place for what remains. All in-repo runnable tasks are **complete**; remaining work is **operator/LAN** or **external**.

---

## Continue and complete (run these)

To complete all automatable steps from this repo:

1. **From anywhere (no LAN):**
   `./scripts/run-completable-tasks-from-anywhere.sh`
   — Config validation, on-chain 38/38 check, run-all-validation --skip-genesis, reconcile-env.

2. **From LAN (with dotenv):**
   `./scripts/run-all-operator-tasks-from-lan.sh`
   — Loads dotenv from repo `.env` and `smom-dbis-138/.env` automatically. Runs NPMplus RPC fix, backup (if NPM_PASSWORD set), Blockscout verification. Add `--deploy` or `--create-vms` as needed.

Optional: `--skip-backup` if NPM_PASSWORD not set; `--dry-run` to print steps only.

---
## In-repo (complete)

| Item | Status |
|------|--------|
| Config validation | ✅ `validate-config-files.sh` passed |
| On-chain check (Chain 138) | ✅ 38/38 contracts present |
| run-all-validation (--skip-genesis) | ✅ Passed |
| Preflight (dotenv, RPC, nonce) | ✅ Passed |
| run-all-next-steps-chain138 | ✅ Preflight; mirror/pool present; 12 c* GRU-registered; 38/38 verify |
| run-completable-tasks-from-anywhere | ✅ All 4 steps passed |
| MCP plan upgrades (8 items) | ✅ Implemented (multi-chain, Uniswap, bot_state, webhook, merge script, limits, audit, router stub) |
| Optional docs/runbooks | ✅ Explorer Wallet link runbook; optional tasks checklist; merge allowlist script |

**Re-run anytime:** `./scripts/run-completable-tasks-from-anywhere.sh`, `./scripts/deployment/preflight-chain138-deploy.sh`, `./scripts/deployment/run-all-next-steps-chain138.sh`.

---
## Operator / LAN — Do you have the necessary creds?

**Check before running:** [OPERATOR_CREDENTIALS_CHECKLIST.md](OPERATOR_CREDENTIALS_CHECKLIST.md) — per-task list of required credentials (LAN, PRIVATE_KEY, NPM_PASSWORD, RPC_URL_138, SSH to Proxmox, LINK, gas, token balance). If any required credential is missing, obtain or set it first (e.g. in `smom-dbis-138/.env`).

**Summary of what Operator/LAN typically needs:**
- **LAN** — host on 192.168.11.x (or VPN) to reach NPMplus, RPC, Blockscout, Proxmox.
- **PRIVATE_KEY** — for any deploy or on-chain tx (Chain 138 and bridges).
- **NPM_PASSWORD** — for NPMplus backup and proxy updates (502 fix).
- **RPC_URL_138** — Chain 138 Core RPC (e.g. http://192.168.11.211:8545).
- **SSH to Proxmox** — for maintenance scripts, token-aggregation fix, Explorer VM edit.
- **LINK** (on 138) — for sendCrossChain and CCIP fees; deployer must approve bridge.
- **Gas** — deployer funded on Chain 138 (and on Celo/Wemix/Gnosis if deploying CCIP there).
- **Token balance** — for add liquidity: deployer holds cUSDT/cUSDC/USDT/USDC; set ADD_LIQUIDITY_* in .env.

---
## Remaining — Operator / LAN

| # | Task | Command / doc |
|---|------|----------------|
| 1 | Full deployment order (Phase 0–6) | [DEPLOYMENT_ORDER_OF_OPERATIONS.md](../03-deployment/DEPLOYMENT_ORDER_OF_OPERATIONS.md) |
| 2 | Add liquidity (PMM pools), ensure DODOPMMProvider registered | [PRE_DEPLOYMENT_CHECKLIST](../03-deployment/PRE_DEPLOYMENT_CHECKLIST.md), [LIQUIDITY_POOLS_MASTER_MAP](../11-references/LIQUIDITY_POOLS_MASTER_MAP.md) |
| 3 | Blockscout verify, 502 fix, NPMplus backup, optional deploy | `./scripts/run-all-operator-tasks-from-lan.sh [--deploy]` · [OPERATOR_READY_CHECKLIST.md](OPERATOR_READY_CHECKLIST.md) |
| 4 | E2E 502 fix | `./scripts/maintenance/address-all-remaining-502s.sh [--run-besu-fix] [--e2e]` · [502_DEEP_DIVE_ROOT_CAUSES_AND_FIXES.md](502_DEEP_DIVE_ROOT_CAUSES_AND_FIXES.md) |
| 5 | Gnosis / Celo / Wemix CCIP bridges | [CONFIG_READY_CHAINS_COMPLETION_RUNBOOK](../07-ccip/CONFIG_READY_CHAINS_COMPLETION_RUNBOOK.md) |
| 6 | LINK support on Mainnet relay | [RELAY_BRIDGE_ADD_LINK_SUPPORT_RUNBOOK](../07-ccip/RELAY_BRIDGE_ADD_LINK_SUPPORT_RUNBOOK.md) |
| 7 | E2E flow waves E1–E7 (liquidity, CCIP fund, token-aggregation, Blockscout, L2 PMM, bridge UI) | `./scripts/run-e2e-flow-tasks-full-parallel.sh` · [TASKS_TO_INCREASE_ALL_E2E_FLOWS.md](TASKS_TO_INCREASE_ALL_E2E_FLOWS.md) |
| 8 | Token-aggregation DB + deploy | `./scripts/apply-token-aggregation-fix.sh` (VMID 5000; may need postgres) |
| 9 | Explorer Wallet link (add to navbar) | [EXPLORER_WALLET_LINK_QUICK_WIN.md](../04-configuration/EXPLORER_WALLET_LINK_QUICK_WIN.md) — run on explorer VM |

---
## Remaining — External / third-party

| # | Task | Doc |
|---|------|-----|
| 1 | Ledger | Tally form submitted; await response. [ADD_CHAIN138_TO_LEDGER_LIVE](../04-configuration/ADD_CHAIN138_TO_LEDGER_LIVE.md) |
| 2 | Trust Wallet | Open PR to wallet-core. [ADD_CHAIN138_TO_TRUST_WALLET](../04-configuration/ADD_CHAIN138_TO_TRUST_WALLET.md) |
| 3 | Consensys | Outreach for Swaps/Bridge. [CONSENSYS_OUTREACH_PACKAGE](../../metamask-integration/docs/CONSENSYS_OUTREACH_PACKAGE.md) |
| 4 | CoinGecko / CMC | Submit chain and tokens. [COINGECKO_SUBMISSION_GUIDE](../04-configuration/coingecko/COINGECKO_SUBMISSION_GUIDE.md) |
| 5 | Chainlist / PR-ready | [04-configuration/pr-ready/README.md](../04-configuration/pr-ready/README.md) |
| 6 | On-ramps / off-ramps | [REPOSITORIES_AND_PRS_CHAIN138.md](REPOSITORIES_AND_PRS_CHAIN138.md) |

---
## References

- **Operator credentials (do you have them?):** [OPERATOR_CREDENTIALS_CHECKLIST.md](OPERATOR_CREDENTIALS_CHECKLIST.md)
- **Full remaining list:** [NEXT_STEPS_AND_REMAINING_TODOS.md](NEXT_STEPS_AND_REMAINING_TODOS.md)
- **Operator copy-paste:** [OPERATOR_READY_CHECKLIST.md](OPERATOR_READY_CHECKLIST.md)
- **What’s left (operator + external):** [WHATS_LEFT_OPERATOR_AND_EXTERNAL.md](WHATS_LEFT_OPERATOR_AND_EXTERNAL.md)
- **Optional tasks:** [OPTIONAL_TASKS_CHECKLIST.md](OPTIONAL_TASKS_CHECKLIST.md)
@@ -1,8 +1,8 @@
# Remaining Tasks

**Last Updated:** 2026-03-02
**Purpose:** Single-page list of what is left to do. Completed: preflight, run-all-next-steps-chain138 (38/38 on-chain, 12 c* GRU-registered), nginx+explorer config, Blockscout verification run, E2E wave E3.
**Detail:** [NEXT_STEPS_AND_REMAINING_TODOS.md](NEXT_STEPS_AND_REMAINING_TODOS.md) § Remaining tasks · **Operator commands:** [OPERATOR_READY_CHECKLIST.md](OPERATOR_READY_CHECKLIST.md).
**Last Updated:** 2026-02-27
**Purpose:** Single-page list of what is left to do. **In-repo: complete** (completable tasks, preflight, run-all-next-steps-chain138: 38/38 on-chain, 12 c* GRU-registered; MCP plan upgrades; optional runbooks).
**Summary of all remaining (operator + external):** [REMAINING_SUMMARY.md](00-meta/REMAINING_SUMMARY.md). **Detail:** [NEXT_STEPS_AND_REMAINING_TODOS.md](00-meta/NEXT_STEPS_AND_REMAINING_TODOS.md) § Remaining tasks · **Operator commands:** [OPERATOR_READY_CHECKLIST.md](00-meta/OPERATOR_READY_CHECKLIST.md).

**Task check (2026-03-02):** Each task below was verified before completion. See **[TASK_CHECK_REPORT.md](TASK_CHECK_REPORT.md)** for per-task status, what is already done (e.g. Phase 0–3, DODOPMMProvider, pools), and what still requires Operator/LAN or external submission. Completable + preflight both passed.
docs/00-meta/REQUIRED_FIXES_GAPS_AND_DEPLOYMENTS_LIST.md — 180 lines (new file)
@@ -0,0 +1,180 @@
|
||||
# Required Fixes, Gaps, and Additional Deployments — Master List
|
||||
|
||||
**Last Updated:** 2026-03-04
|
||||
**Purpose:** Single consolidated list of all required fixes, gaps, and additional deployments. Sources: REQUIRED_FIXES_AND_DEPLOYMENTS_STATUS, REMAINING_SUMMARY, TOKEN_CONTRACT_DEPLOYMENTS_REMAINING, PRE_DEPLOYMENT_CHECKLIST, RECOMMENDATIONS_AND_FIXES_BEFORE_DEPLOY, DETAILED_GAPS_AND_ISSUES_LIST, GAPS_STATUS, WHATS_LEFT_OPERATOR_AND_EXTERNAL, and token-aggregation build.
|
||||
|
||||
---
|
||||
|
||||
## Verified (LAN/Operator) — 2026-03-03
|
||||
|
||||
Commands run from repo root on operator/LAN host. Use as baseline; re-run when env or network changes.
|
||||
|
||||
| Check | Command | Result |
|-------|---------|--------|
| Preflight | `./scripts/deployment/preflight-chain138-deploy.sh` | **PASSED** — dotenv, RPC_URL_138, PRIVATE_KEY, nonce consistent, Core RPC chainId 138. |
| Core RPC (2101) | `curl -s -o /dev/null -w "%{http_code}" http://192.168.11.211:8545` | **200/201** — reachable. |
| Deployer balance | `RPC_URL_138=http://192.168.11.211:8545 ./scripts/deployment/check-deployer-balance-chain138-and-funding-plan.sh` | **OK** — native ETH sufficient; WETH/cUSDT/cUSDC = 0 (add liquidity blocked until tokens funded). |
| On-chain contracts | `./scripts/verify/check-contracts-on-chain-138.sh http://192.168.11.211:8545` | **38 present, 0 missing.** |
| Clear-tx-pool script | `test -f scripts/clear-all-transaction-pools.sh` | **Exists.** |
| Maintenance scripts | `make-rpc-vmids-writable-via-ssh.sh`, `health-check-rpc-2101.sh` | **Exist.** |
| Test-all-contracts script | `test -f scripts/deployment/test-all-contracts-before-deploy.sh` | **Exists.** |
| Token-aggregation build | `cd smom-dbis-138/services/token-aggregation && npm run build` | **PASSES** (fixed 2026-03-03: token-mapping, bridge route, cross-chain-bridges config, indexer types). See §1.3 for the historical fixes. |
| Token-aggregation /health | `curl -s -o /dev/null -w "%{http_code}" http://192.168.11.140:3001/health` (or localhost:3001) | **200** — service running and healthy at the tested endpoint. |
| DODOPMMIntegration token addresses (2026-03-04) | `eth_call` to `compliantUSDT()` / `compliantUSDC()` at `0x79cdbaFBaA0FdF9F55D26F360F54cddE5c743F7D` | **PASSED** — returns canonical cUSDT/cUSDC; Explorer, mint script, and PMM aligned. See [EXPLORER_TOKEN_LIST_CROSSCHECK](../11-references/EXPLORER_TOKEN_LIST_CROSSCHECK.md) §8. |
**Remaining to complete (verified):** add liquidity to PMM pools once the deployer holds cUSDT/cUSDC (or mint); Wemix CCIP bridge (Celo deployed 2026-03-04); LINK relay; operator-run Blockscout verification (`run-all-operator-tasks-from-lan.sh`); E2E 502 fix; external items (Ledger, Trust, CoinGecko/CMC, on-ramps). See §4–5 and [TODOS_CONSOLIDATED](TODOS_CONSOLIDATED.md).

---
## 1. Required fixes (blocking)

### 1.1 RPC 2101 (Core) — read-only

- **Status:** Not fixed; the issue recurs when host storage I/O errors occur.
- **Action:** Run `./scripts/maintenance/make-rpc-vmids-writable-via-ssh.sh`, then `./scripts/maintenance/health-check-rpc-2101.sh`. Do not use the Public RPC for deployments.
- **Ref:** [RPC_2101_READONLY_FIX.md](../03-deployment/RPC_2101_READONLY_FIX.md), [REQUIRED_FIXES_AND_DEPLOYMENTS_STATUS.md](../03-deployment/REQUIRED_FIXES_AND_DEPLOYMENTS_STATUS.md).
### 1.2 Stuck transactions / nonce

- **Action:** Run `./scripts/clear-all-transaction-pools.sh` (validators + 2101 + 2201); wait ~60 s before deploying. Use deploy scripts that check the nonce (e.g. `deploy-transaction-mirror-and-pmm-pool-after-txpool-clear.sh`).
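A quick way to see whether a nonce is stuck is to compare the deployer's `latest` and `pending` transaction counts; the repo scripts do this internally, so the snippet below is only a hand-check sketch (the RPC calls are shown as comments, `<deployer>` is a placeholder, and the example values are assumptions):

```shell
# Fetch both counts from the Core RPC (standard eth_getTransactionCount):
#   curl -s -X POST -H 'Content-Type: application/json' "$RPC_URL_138" \
#     -d '{"jsonrpc":"2.0","method":"eth_getTransactionCount","params":["<deployer>","latest"],"id":1}'
#   ...and the same with "pending". Example decimal values:
latest=26
pending=28
if [ "$pending" -gt "$latest" ]; then
  echo "stuck: $((pending - latest)) queued tx(s); clear pools and wait ~60 s"
else
  echo "nonce consistent"
fi
```

When the two counts match, the pool is clean; any gap means queued transactions that `clear-all-transaction-pools.sh` should flush first.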
### 1.3 Token-aggregation service — TypeScript build (fixed 2026-03-03)

**Status: fixed.** The token-aggregation service now builds. Historical fixes applied:

| Error | File | Fix |
|-------|------|-----|
| Duplicate identifier `require`; `import.meta` not allowed | `src/api/routes/token-mapping.ts` (lines 14–15) | Remove or replace the `require`/`import.meta` usage; or set tsconfig `module` to `NodeNext`/`ES2020` and fix the duplicate. |
| Cannot find module `./routes/bridge` | `src/api/server.ts` | Create `src/api/routes/bridge.ts`, or remove the import and route mount if bridge lives elsewhere. |
| Cannot find module `../config/cross-chain-bridges` | `src/indexer/cross-chain-indexer.ts` | Create `src/config/cross-chain-bridges.ts` or point to the existing bridge config. |
| Parameter implicitly has `any` type | `src/indexer/cross-chain-indexer.ts` (lines 107, 256, 382, 409, 410) | Add explicit types for `l`, `b`, `lane`. |

**Ref:** `smom-dbis-138/services/token-aggregation/` — run `npm run build` to verify.

---
## 2. Gaps (missing or incomplete)

### 2.1 Pre-deployment / env

- **Core RPC = IP:port:** In `smom-dbis-138/.env` set `RPC_URL_138=http://192.168.11.211:8545` (not the FQDN). See [RPC_ENDPOINTS_MASTER](../04-configuration/RPC_ENDPOINTS_MASTER.md).
- **Deployer gas (Chain 138):** ≥ ~0.006 ETH (1–2 ETH recommended). Check: `./scripts/deployment/check-deployer-balance-chain138-and-funding-plan.sh`.
- **Env from smom-dbis-138/.env only:** Required: `PRIVATE_KEY`, `RPC_URL_138`. For PMM: `DODO_PMM_INTEGRATION_ADDRESS`, `DODO_PMM_PROVIDER_ADDRESS`, pool addresses. Verify: `cd smom-dbis-138 && ./scripts/deployment/check-env-required.sh`.
- **POOL_MANAGER_ROLE:** The deployer must hold this role on DODOPMMIntegration for pool creation and DODOPMMProvider registration.
- **TRANSACTION_MIRROR_ADDRESS:** Set in `smom-dbis-138/.env` after deploy (from the script output).
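As a quick sanity check before running a deploy, the two hard-required variables can be probed directly; a minimal POSIX-shell sketch (the canonical check remains `check-env-required.sh`):

```shell
# Load the dotenv if present, then report any missing required variable.
set -a; [ -f smom-dbis-138/.env ] && . ./smom-dbis-138/.env; set +a
missing=""
[ -n "${PRIVATE_KEY:-}" ] || missing="$missing PRIVATE_KEY"
[ -n "${RPC_URL_138:-}" ] || missing="$missing RPC_URL_138"
if [ -z "$missing" ]; then echo "env OK"; else echo "missing:$missing"; fi
```

The PMM-specific variables (`DODO_PMM_*`, pool addresses) can be appended to the same pattern when running pool scripts.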
### 2.2 Config / canonical

- **Wemix (1111) token addresses:** Confirm WETH, USDT, USDC on scan.wemix.com; update `config/token-mapping-multichain.json` and WEMIX_TOKEN_VERIFICATION.md if needed.
- **Canonical addresses:** CUSDC_ADDRESS_138, CUSDT_ADDRESS_138 (and others) in env or smart-contracts-master.json; token-aggregation uses the env override.
- **CCIPWETH9Bridge:** Use the canonical bridge only; set `CCIPWETH9_BRIDGE_CHAIN138` in env. Do not use the deprecated address.
- **Token mapping:** When adding tokens, update `config/token-mapping.json` and optionally CHAIN138_TOKEN_ADDRESSES.
### 2.3 Explorer / token-aggregation ops

- **Token-aggregation DB:** If `/health` returns "database token_aggregation does not exist", create the DB, run migrations, set `DATABASE_URL`, and restart the service. See [DETAILED_GAPS_AND_ISSUES_LIST](../04-configuration/DETAILED_GAPS_AND_ISSUES_LIST.md) §2.
- **Nginx proxy order (VMID 5000):** Ensure `location /api/v1/` is defined **before** `location /api/` so token-aggregation serves `/api/v1/`. Use `fix-nginx-conflicts-vmid5000.sh`.
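The location-order requirement can be checked mechanically; a hedged sketch using a sample config (the real path on VMID 5000 and the upstream ports are assumptions, and `fix-nginx-conflicts-vmid5000.sh` remains the canonical fix):

```shell
# Write a sample config, then verify /api/v1/ is declared before /api/.
cat > /tmp/sample-explorer.conf <<'EOF'
location /api/v1/ { proxy_pass http://127.0.0.1:3001; }
location /api/    { proxy_pass http://127.0.0.1:4000; }
EOF
if awk '/location \/api\/v1\//{v1=NR} /location \/api\/ /{api=NR} END{exit !(v1 && api && v1<api)}' /tmp/sample-explorer.conf; then
  echo "order OK: /api/v1/ is declared before /api/"
else
  echo "fix order: move the /api/v1/ block above /api/"
fi
```

Point the same `awk` at the live config on the VM to verify the order described above before reloading nginx.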
### 2.4 Real-robinhood / heatmap

- **Heatmap API:** Implemented in token-aggregation, but the service must build and run (see §1.3).
- **Bridge/oracle metrics:** `/v1/routes/health` and `/v1/bridges/metrics` are currently stubs; fill from relay/CCIP telemetry when available.
- **Stabilization dashboard page:** Placeholder until oracle_metrics and peg-band data are wired.

---
## 3. Additional deployments

**Ordered plan for full network coverage:** [REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE](../03-deployment/REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE.md) — Phase A (hub liquidity: mint + add liquidity) → Phase B (Celo/Wemix CCIP + LINK funding) → Phase C (cW* + edge pools on public chains) → Phase D (optional: XAU, vaults, trustless). **2026-03-04:** Celo CCIP bridges deployed; Phase C runbook: [PHASE_C_CW_AND_EDGE_POOLS_RUNBOOK](../03-deployment/PHASE_C_CW_AND_EDGE_POOLS_RUNBOOK.md); Phase D: [PHASE_D_OPTIONAL_CHECKLIST](../03-deployment/PHASE_D_OPTIONAL_CHECKLIST.md). Mint (A.1) was attempted but the tx timed out — retry; Wemix (B.2) is blocked until the deployer holds 0.4 WEMIX.
### 3.1 Chain 138 — already done (for reference)

- TransactionMirror, DODOPMMIntegration, three PMM pools (cUSDT/cUSDC, cUSDT/USDT, cUSDC/USDC), DODOPMMProvider, CompliantFiatTokens (10 tokens). On-chain verification: 38/38. See [REQUIRED_FIXES_AND_DEPLOYMENTS_STATUS](../03-deployment/REQUIRED_FIXES_AND_DEPLOYMENTS_STATUS.md).
### 3.2 Chain 138 — remaining (optional / follow-on)

| Item | Status | Script / notes |
|------|--------|----------------|
| **EnhancedSwapRouter** | Not deployed | Deploy when Uniswap/Balancer pools exist on 138; set quoter/poolId. |
| **Add liquidity to PMM pools** | Pending | Use [ADD_LIQUIDITY_PMM_CHAIN138_RUNBOOK](../03-deployment/ADD_LIQUIDITY_PMM_CHAIN138_RUNBOOK.md); set ADD_LIQUIDITY_* in .env; approve and addLiquidity per pool. |
| **Optional cCADT** | Not deployed | Add to DeployCompliantFiatTokens.s.sol if a Tether-style CAD token is needed. |
| **cAUSDT** | Not deployed | No script in repo; deploy when the Alltra compliant USD token is defined. |
| **Vault ac* / vdc* / sdc*** | After base tokens | DeployAcVdcSdcVaults; extend for each new base token. |
### 3.3 Token deployments — remaining (by category)

| Category | Chain(s) | What | Ref |
|----------|----------|------|-----|
| **Canonical 138 (extra)** | 138 | cEURC, cEURT, cGBP*, cAUD*, cJPY*, cCHF*, cCADC, cXAU* — **Done** via DeployCompliantFiatTokens. Optional: cCADT. | [TOKEN_CONTRACT_DEPLOYMENTS_REMAINING](../11-references/TOKEN_CONTRACT_DEPLOYMENTS_REMAINING.md) |
| **ALL Mainnet (Alltra)** | 651940 | ACADT (and optionally ACADC) — no script in repo; TBD when Alltra adds CAD. | Same. |
| **Compliant Wrapped (cW*)** | 1, 56, 137, 10, 42161, 8453, 43114, etc. | Deploy or bridge cW* per chain; create/fund PMM edge pools per the pool matrix. | Same; [LIQUIDITY_POOLS_MASTER_MAP](../11-references/LIQUIDITY_POOLS_MASTER_MAP.md). |
| **D-WIN W on 138 / 651940** | 138, 651940 | Optional; extend DeployISO4217WSystem if desired. | Same. |
### 3.4 Cross-chain / CCIP / bridge

| Item | Status | Action |
|------|--------|--------|
| **Gnosis CCIP bridges** | Deployed (2026-03-04) | WETH9 `0x4ab39b5BaB7b463435209A9039bd40Cf241F5a82`, WETH10 `0xC15ACdBAC59B3C7Cb4Ea4B3D58334A4b143B4b44`; .env updated. Run complete-config once the Chain 138 RPC confirms the txs. |
| **Celo, Wemix CCIP** | Partial | Celo deployed 2026-03-04. Wemix pending: fund the deployer (~0.4 WEMIX), run `deploy-bridges-config-ready-chains.sh wemix`, then `complete-config-ready-chains.sh`; fund LINK. [CONFIG_READY_CHAINS_COMPLETION_RUNBOOK](../07-ccip/CONFIG_READY_CHAINS_COMPLETION_RUNBOOK.md). |
| **LINK support on Mainnet relay** | Pending | [RELAY_BRIDGE_ADD_LINK_SUPPORT_RUNBOOK](../07-ccip/RELAY_BRIDGE_ADD_LINK_SUPPORT_RUNBOOK.md). |
| **Fund CCIP bridges with LINK** | Pending | Run `./scripts/deployment/fund-ccip-bridges-with-link.sh` (after a dry-run check). |
| **AlltraAdapter setBridgeFee** | After deploy | Call `setBridgeFee(uint256)`; set `ALLTRA_BRIDGE_FEE`, `ALLTRA_ADAPTER_CHAIN138` in .env. |
### 3.5 Mainnet / L2 dry-run and deploy

- **Mainnet dry-run:** Run when the mainnet RPC is reachable: `./scripts/deployment/dry-run-mainnet-deployment.sh` (or per-script with `--dry-run`). Requires `PRIVATE_KEY` and `ETHEREUM_MAINNET_RPC` in .env.
- **cW* and PMM on public chains:** No deployment from this repo yet; once a path exists (bridge + factory or DODO), run a gas estimate and dry-run per chain.

---
## 4. Operator / LAN tasks (run from a host with LAN + creds)

| # | Task | Command / doc |
|---|------|----------------|
| 1 | Full deployment order (Phases 0–6) | [DEPLOYMENT_ORDER_OF_OPERATIONS](../03-deployment/DEPLOYMENT_ORDER_OF_OPERATIONS.md) |
| 2 | Add liquidity (PMM pools); DODOPMMProvider registered | [PRE_DEPLOYMENT_CHECKLIST](../03-deployment/PRE_DEPLOYMENT_CHECKLIST.md), [ADD_LIQUIDITY_PMM_CHAIN138_RUNBOOK](../03-deployment/ADD_LIQUIDITY_PMM_CHAIN138_RUNBOOK.md) |
| 3 | Blockscout verify, 502 fix, NPMplus backup, optional deploy | `./scripts/run-all-operator-tasks-from-lan.sh [--deploy]` · [OPERATOR_READY_CHECKLIST](OPERATOR_READY_CHECKLIST.md) |
| 4 | E2E 502 fix | `./scripts/maintenance/address-all-remaining-502s.sh [--run-besu-fix] [--e2e]` · 502_DEEP_DIVE_ROOT_CAUSES_AND_FIXES |
| 5 | Token-aggregation DB + migrations + restart | Create the DB if needed; run migrations; restart the service. [TOKEN_AGGREGATION_REPORT_API_RUNBOOK](../04-configuration/TOKEN_AGGREGATION_REPORT_API_RUNBOOK.md) |
| 6 | Explorer Wallet link (navbar) | [EXPLORER_WALLET_LINK_QUICK_WIN](../04-configuration/EXPLORER_WALLET_LINK_QUICK_WIN.md) — run on the explorer VM |
| 7 | Apply nginx + explorer config on VMID 5000 | `./scripts/apply-remaining-operator-fixes.sh` (if not already done) |

**Credentials:** [OPERATOR_CREDENTIALS_CHECKLIST](OPERATOR_CREDENTIALS_CHECKLIST.md).

---
## 5. External / third-party (outreach or submission)

| # | Task | Doc |
|---|------|-----|
| 1 | Ledger | Tally form submitted; awaiting response. [ADD_CHAIN138_TO_LEDGER_LIVE](../04-configuration/ADD_CHAIN138_TO_LEDGER_LIVE.md) |
| 2 | Trust Wallet | Open a PR against trustwallet/wallet-core. [ADD_CHAIN138_TO_TRUST_WALLET](../04-configuration/ADD_CHAIN138_TO_TRUST_WALLET.md) |
| 3 | Consensys | Outreach for Swaps/Bridge. [CONSENSYS_OUTREACH_PACKAGE](../../metamask-integration/docs/CONSENSYS_OUTREACH_PACKAGE.md) |
| 4 | CoinGecko / CMC | Submit the chain and tokens. [CMC_COINGECKO_SUBMISSION_RUNBOOK](../04-configuration/coingecko/CMC_COINGECKO_SUBMISSION_RUNBOOK.md) |
| 5 | Chainlist / PR-ready | [04-configuration/pr-ready/README.md](../04-configuration/pr-ready/README.md) |
| 6 | On-ramps / off-ramps | [REPOSITORIES_AND_PRS_CHAIN138](REPOSITORIES_AND_PRS_CHAIN138.md) |

---
## 6. Quick reference — run before any deploy

1. **Preflight:** `./scripts/deployment/preflight-chain138-deploy.sh [--cost]`
2. **Test contracts:** `./scripts/deployment/test-all-contracts-before-deploy.sh` (optionally `--no-match "Fork|Mainnet|Integration|e2e"` for unit tests only)
3. **Gas check:** `RPC_URL_138=http://192.168.11.211:8545 ./scripts/deployment/check-deployer-balance-chain138-and-funding-plan.sh`
4. **No stuck nonce:** If needed, run `./scripts/clear-all-transaction-pools.sh`, then wait 60 s
5. **Core RPC writable:** If read-only, run `./scripts/maintenance/make-rpc-vmids-writable-via-ssh.sh`, then `./scripts/maintenance/health-check-rpc-2101.sh`

---
## References

- [REQUIRED_FIXES_AND_DEPLOYMENTS_STATUS](../03-deployment/REQUIRED_FIXES_AND_DEPLOYMENTS_STATUS.md)
- [REMAINING_SUMMARY](REMAINING_SUMMARY.md)
- [TOKEN_CONTRACT_DEPLOYMENTS_REMAINING](../11-references/TOKEN_CONTRACT_DEPLOYMENTS_REMAINING.md)
- [PRE_DEPLOYMENT_CHECKLIST](../03-deployment/PRE_DEPLOYMENT_CHECKLIST.md)
- [RECOMMENDATIONS_AND_FIXES_BEFORE_DEPLOY](../03-deployment/RECOMMENDATIONS_AND_FIXES_BEFORE_DEPLOY.md)
- [DETAILED_GAPS_AND_ISSUES_LIST](../04-configuration/DETAILED_GAPS_AND_ISSUES_LIST.md)
- [GAPS_STATUS](GAPS_STATUS.md)
- [WHATS_LEFT_OPERATOR_AND_EXTERNAL](WHATS_LEFT_OPERATOR_AND_EXTERNAL.md)
- [OPERATOR_READY_CHECKLIST](OPERATOR_READY_CHECKLIST.md)
- [OPERATOR_CREDENTIALS_CHECKLIST](OPERATOR_CREDENTIALS_CHECKLIST.md)
---

**From anywhere (no LAN/creds):** See [run-completable-tasks-from-anywhere.sh](../../scripts/run-completable-tasks-from-anywhere.sh) — config validation, on-chain check (SKIP_EXIT=1 if RPC unreachable), run-all-validation --skip-genesis, reconcile-env.

**Single script (LAN + secrets):** [run-all-operator-tasks-from-lan.sh](../../scripts/run-all-operator-tasks-from-lan.sh) — **always loads dotenv** from the repo `.env` and `smom-dbis-138/.env` (NPM_PASSWORD, PRIVATE_KEY, RPC, etc.). Optional phases: backup, contract verify, contract deploy, VM/container creation. Use `--dry-run` to print the steps.

---
---

# TODOs — Consolidated Task List

**Last Updated:** 2026-03-04
**Purpose:** Single checklist of all next steps and remaining tasks. Source of truth for the full list: [NEXT_STEPS_AND_REMAINING_TODOS.md](NEXT_STEPS_AND_REMAINING_TODOS.md). **Token deployments remaining:** [TOKEN_CONTRACT_DEPLOYMENTS_REMAINING.md](../11-references/TOKEN_CONTRACT_DEPLOYMENTS_REMAINING.md). **Verified list (LAN/Operator):** [REQUIRED_FIXES_GAPS_AND_DEPLOYMENTS_LIST.md](REQUIRED_FIXES_GAPS_AND_DEPLOYMENTS_LIST.md) — run the bash/curl checks to confirm; doc updated 2026-03-03.

**Quick run:** From anywhere (no LAN): `./scripts/run-completable-tasks-from-anywhere.sh`. Before a Chain 138 deploy: `./scripts/deployment/preflight-chain138-deploy.sh [--cost]`. **Chain 138 next steps (all in one):** `./scripts/deployment/run-all-next-steps-chain138.sh [--dry-run] [--skip-mirror] [--skip-register-gru] [--skip-verify]` — preflight → mirror+pool → register c* as GRU → verify. From LAN with secrets: `./scripts/run-all-operator-tasks-from-lan.sh [--deploy] [--create-vms]`. **E2E flows (full parallel):** `./scripts/run-e2e-flow-tasks-full-parallel.sh [--dry-run]` — [TASKS_TO_INCREASE_ALL_E2E_FLOWS](TASKS_TO_INCREASE_ALL_E2E_FLOWS.md).

**Full deployment order:** [DEPLOYMENT_ORDER_OF_OPERATIONS.md](../03-deployment/DEPLOYMENT_ORDER_OF_OPERATIONS.md) — Phases 0–6. **Remaining for full network coverage:** [REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE.md](../03-deployment/REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE.md) — A: mint + add liquidity (138) → B–D. **Status to continue (before the Phase A mint):** [REMAINING_DEPLOYMENTS § Status to continue](../03-deployment/REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE.md) and [CORE_RPC_2101_2102_TXPOOL_ADMIN_STATUS §7](../04-configuration/CORE_RPC_2101_2102_TXPOOL_ADMIN_STATUS.md) — restart validator 1004, clear the tx pool, then mint. **Phase execution 2026-03-04:** A.1 attempted (tx timeout); A.2 pending; B.1 Celo ✅; B.2 Wemix blocked; B.3 fund-ccip ready; Phase C/D runbooks available. **Full plan:** [COMPLETE_REQUIRED_OPTIONAL_RECOMMENDED_INDEX.md](COMPLETE_REQUIRED_OPTIONAL_RECOMMENDED_INDEX.md).

**Last completable run (2026-03-02):** Config validation OK; on-chain 38/38; run-all-validation --skip-genesis OK; reconcile-env. **Preflight** and **run-all-next-steps-chain138.sh** run: preflight passed; mirror/pool present; 12 c* tokens already GRU-registered; verification 38/38. Documentation: MASTER_INDEX, README, RUNBOOKS_MASTER_INDEX created; deprecated list and consolidation plan updated. Progress indicators (Step 1/4–4/4) in run-completable-tasks-from-anywhere.sh. E2E flow tasks script and doc updates (ADDRESS_MATRIX_AND_STATUS, RECOMMENDATIONS R2, NEXT_STEPS_FOR_YOU) completed. **Optional completed (2026-02-27 / 2026-03-01):** DeployCompliantFiatTokens (10 tokens); Blockscout verification; MCP allowlist-138; add-liquidity runbook; token-aggregation fallbacks + ENV_EXAMPLE_CONTENT; E2E routing verification; PMM_DEX_ROUTING_STATUS + REQUIRED_FIXES_AND_DEPLOYMENTS_STATUS updated; cCADT line (commented) in DeployCompliantFiatTokens.s.sol. **Within-scope list (2026-02-27):** CompliantWrappedToken.sol; DeployCompliantFiatTokensForChain.s.sol (c* on any chain); DeployCWTokens.s.sol (cWUSDT/cWUSDC); deploy-tokens-and-weth-all-chains-skip-canonical.sh extended with --deploy-c, --deploy-cw, 651940 env validation; TOKENS_DEPLOYER_DEPLOYED_ON_OTHER_CHAINS §6 implemented; ENV_EXAMPLE_CONTENT c*/cW*/651940 vars. **2026-02-27:** Deployment-order doc, preflight script, and deployment safety added; todos synced.

**Verified (LAN/Operator) 2026-03-03:** Preflight ✅; Core RPC 192.168.11.211:8545 ✅; deployer-balance script ✅ (native ETH OK; WETH/cUSDT/cUSDC = 0 → add liquidity blocked until funded); on-chain contracts 38/38 ✅; clear-tx-pool, maintenance, and test-all-contracts scripts exist ✅; token-aggregation **build** ✅ (fixed 2026-03-03); token-aggregation /health 200 at the tested endpoint. **real-robinhood:** Changes committed and pushed to Gitea (dashboard, data, docs). **Next-steps run:** run-completable-tasks-from-anywhere.sh ✅; run-all-operator-tasks-from-lan.sh (Wave 0 NPMplus RPC fix running); run-all-next-steps-chain138.sh (preflight + verify) run on 2026-03-04. **On-chain verification 2026-03-04:** DODOPMMIntegration at `0x79cdbaFBaA0FdF9F55D26F360F54cddE5c743F7D` returns the canonical cUSDT/cUSDC; Explorer, mint script, and PMM aligned — [EXPLORER_TOKEN_LIST_CROSSCHECK](../11-references/EXPLORER_TOKEN_LIST_CROSSCHECK.md) §8.

**Operator copy-paste commands:** [OPERATOR_READY_CHECKLIST.md](OPERATOR_READY_CHECKLIST.md) — one page with exact commands for every pending todo.

---
## Remaining to complete (verified 2026-03-03)

| # | Task | Verified | Notes |
|---|------|----------|--------|
| V1 | **Token-aggregation build** | ✅ Fixed | Fixed 2026-03-03: token-mapping (createRequire + process.cwd), bridge route stub, cross-chain-bridges config, cross-chain-indexer types. `npm run build` passes. |
| V2 | **Add liquidity (Chain 138 PMM)** | ⏳ Blocked | Deployer WETH/cUSDT/cUSDC = 0. Mint/fund per [TOKENS_AND_NETWORKS_MINTABLE_TO_DEPLOYER](../11-references/TOKENS_AND_NETWORKS_MINTABLE_TO_DEPLOYER.md), then run AddLiquidityPMMPoolsChain138. |
| V3 | **RPC 2101 read-only** | ⚠️ If needed | Run `make-rpc-vmids-writable-via-ssh.sh` + `health-check-rpc-2101.sh` only when host I/O errors occur. |
| V4 | **Wemix / Gnosis / Celo CCIP bridges** | 🔄 Partial | Celo + **Gnosis** CCIP bridges deployed (2026-03-04). Gnosis: 0x4ab39b5B… (WETH9), 0xC15ACdBA… (WETH10); .env updated. Wemix blocked (needs 0.4 WEMIX). Cronos: set CRONOS_RPC and CCIP_ROUTER_CRONOS in .env, then run deploy-bridges-config-ready-chains.sh cronos. complete-config (138→chains) fails while Chain 138 RPC tx confirmation times out. |
| V5 | **LINK relay, E2E 502s, operator run** | ⏳ Pending | LINK support runbook; `run-all-maintenance-via-proxmox-ssh.sh --e2e`; `run-all-operator-tasks-from-lan.sh`. |
| V6 | **External (Ledger, Trust, CoinGecko/CMC, on-ramps)** | ⏳ Pending | Per REQUIRED_FIXES_GAPS_AND_DEPLOYMENTS_LIST §4–5. |

---
## First (before any Chain 138 deploy)

| # | Task | Owner | Ref |
| # | Task | Owner | Ref |
|---|------|--------|-----|
| 7 | **Blockscout verification:** `./scripts/run-all-operator-tasks-from-lan.sh` (loads dotenv) or `./scripts/verify/run-contract-verification-with-proxy.sh` | Operator/LAN | CONTRACT_DEPLOYMENT_RUNBOOK |
| 8 | **Fix E2E 502s (if needed):** `./scripts/maintenance/run-all-maintenance-via-proxmox-ssh.sh --e2e` or `address-all-remaining-502s.sh` | Operator/LAN | 502_DEEP_DIVE_ROOT_CAUSES_AND_FIXES |
| 9 | **Operator tasks script:** `./scripts/run-all-operator-tasks-from-lan.sh [--deploy] [--create-vms]` | Operator/LAN | STEPS_FROM_PROXMOX_OR_LAN_WITH_SECRETS |
| 10 | **sendCrossChain (real):** `bash scripts/bridge/run-send-cross-chain.sh 0.01` (when PRIVATE_KEY and LINK are ready) | Operator/LAN | NEXT_STEPS_OPERATOR W0-2 |
---

# What’s Left — Operator and External Only

**Last Updated:** 2026-03-04
**Purpose:** After completing the in-repo and on-chain tasks (preflight, PMM pools, DODOPMMProvider, operator script NPMplus/backup/verify, Wemix re-check), the remaining items require the **operator (LAN/Proxmox/credentials)** or **you (third party)**. **Short summary:** [REMAINING_SUMMARY.md](REMAINING_SUMMARY.md).

---
- **Docs:** PRE_DEPLOYMENT_CHECKLIST and LIQUIDITY_POOLS_MASTER_MAP updated with the new pool and provider addresses.
- **Dotenv:** `set-missing-dotenv-chain138.sh` run — DODO_PMM_PROVIDER_ADDRESS and POOL_* appended to `smom-dbis-138/.env`.
- **Repositories/PRs:** [REPOSITORIES_AND_PRS_CHAIN138.md](REPOSITORIES_AND_PRS_CHAIN138.md) — Ledger, Trust, Chainlist, Consensys, CoinGecko/CMC, on-ramps/off-ramps (forms submitted; awaiting feedback).
- **Bridges:** ENV_CONFIG_READY_CHAINS.example filled with the Gnosis/Celo/Wemix CCIP router, LINK, and WETH9/WETH10 (WXDAI, WCELO, WWEMIX). **Gnosis deployed 2026-03-04:** CCIPWETH9=0x4ab39b5BaB7b463435209A9039bd40Cf241F5a82, CCIPWETH10=0xC15ACdBAC59B3C7Cb4Ea4B3D58334A4b143B4b44; .env updated; add destinations via complete-config once the 138 RPC confirms. **Celo/Wemix:** Fund the deployer with CELO (~0.1) and WEMIX (~0.4), then run `deploy-bridges-config-ready-chains.sh celo` and `wemix`, then `complete-config-ready-chains.sh`.
- **PR-ready:** [04-configuration/pr-ready/](../04-configuration/pr-ready/) — eip155-138.json (Chainlist) and trust-wallet-registry-chain138.json (Trust Wallet); see the README for submission steps.
- **Maintenance:** `run-all-maintenance-via-proxmox-ssh.sh --e2e` was started via SSH; check `/tmp/proxmox-maintenance-out.log` for progress (steps 0–4 run; E2E runs at step 5).
---

**New file:** `docs/02-architecture/AI_AGENTS_57XX_MCP_ADDENDUM.md` (87 lines)
# AI Agents 57xx — MCP Addendum (Multi-Chain, Uniswap, Automation)

**Purpose:** Addendum to [AI_AGENTS_57XX_DEPLOYMENT_PLAN.md](AI_AGENTS_57XX_DEPLOYMENT_PLAN.md) and [AI_AGENTS_57XX_MCP_CONTRACTS_AND_CHAINS.md](AI_AGENTS_57XX_MCP_CONTRACTS_AND_CHAINS.md) covering multi-chain MCP, the Uniswap pool profile, dashboard/API alignment, and automation triggers. Supports the dedicated MCP/AI for Dodoex and Uniswap pool management per [MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md](../03-deployment/MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md).

---
## 1. Multi-chain MCP

**Option:** One MCP server that supports **multiple chains**, so a single AI can read and manage pools on Chain 138 and on public chains (cW* / HUB) without running one MCP instance per chain.

- **Allowlist shape:** Each pool entry includes a **chainId** (e.g. `"chainId": "138"`, `"chainId": "137"`). The MCP uses the appropriate RPC per chain when calling `get_pool_state`, `quote_add_liquidity`, etc.
- **RPC config:** Maintain a map `chainId → RPC_URL` (e.g. from env `RPC_URL_138`, `POLYGON_MAINNET_RPC`, or a config file). When a tool is invoked for a pool, the MCP looks up the chain and uses the corresponding RPC.
- **Implementation note:** If the current ai-mcp-pmm-controller is single-chain (`CHAIN`, `RPC_URL`), extend the allowlist schema and server to accept a `chainId` per pool and to select the RPC by `chainId`. Alternatively, run one MCP instance per chain and have the AI/orchestrator call the appropriate MCP by chain.
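The chainId → RPC lookup described above can be as small as a case table; a bash sketch (the env-variable names come from this doc, while the fallback URLs are examples, not canonical endpoints):

```shell
rpc_for_chain() {
  # Map a chainId to its RPC URL, preferring the environment over the fallback.
  case "$1" in
    138) echo "${RPC_URL_138:-http://192.168.11.211:8545}" ;;
    137) echo "${POLYGON_MAINNET_RPC:-https://polygon-rpc.com}" ;;
    *)   echo "no RPC configured for chainId $1" >&2; return 1 ;;
  esac
}
rpc_for_chain 138
```

A tool call for a pool would first resolve `rpc_for_chain "$chainId"` and then issue its `eth_call`s against that URL; an unconfigured chain fails fast instead of silently using the wrong RPC.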
**Reference:** [POOL_ACCESS_DASHBOARD_API_MCP.md](../11-references/POOL_ACCESS_DASHBOARD_API_MCP.md); allowlist generation from [generate-mcp-allowlist-from-deployment-status.sh](../../scripts/generate-mcp-allowlist-from-deployment-status.sh) (outputs per-chain fragments that can be merged into one multi-chain allowlist).
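For illustration, a merged multi-chain allowlist entry might look like the following; the field names beyond `chainId` are assumptions based on the schema sketch above, and the addresses are placeholders:

```json
{
  "pools": [
    { "chainId": "138", "address": "0x…", "profile": "dodo_pmm" },
    { "chainId": "137", "address": "0x…", "profile": "uniswap_v2_pair" }
  ]
}
```

The exact shape should follow `ai-mcp-pmm-controller/config/allowlist.json` once the schema is extended.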
---
## 2. Uniswap pool profile

To allow the MCP to read **Uniswap V2** and **Uniswap V3** pool state (reserves, price, liquidity), so the same AI can manage both DODO and Uniswap pools:

- **Profile ID:** `uniswap_v2_pair` and/or `uniswap_v3_pool`.
- **Expected view methods (Uniswap V2 pair):**
  - `getReserves() → (uint112 reserve0, uint112 reserve1, uint32 blockTimestampLast)`
  - `token0() → address`
  - `token1() → address`
- **Expected view methods (Uniswap V3 pool):**
  - `slot0() → (uint160 sqrtPriceX96, int24 tick, uint16 observationIndex, ...)`
  - `liquidity() → uint128`
  - `token0() → address`, `token1() → address`
- **MCP behavior:** For allowlisted pools with profile `uniswap_v2_pair` or `uniswap_v3_pool`, the MCP calls the corresponding view methods on the pool contract and returns normalized state (e.g. reserves, derived price), so the AI can reason about liquidity and rebalancing the same way as for DODO pools.
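For the "derived price" normalization, the V3 case needs one conversion from `slot0`: the token1-per-token0 price (ignoring token decimals) is `(sqrtPriceX96 / 2^96)^2`. A sketch of that arithmetic, using exactly 2^96 as the example value, i.e. a 1:1 price:

```shell
sqrtPriceX96=79228162514264337593543950336   # example value: exactly 2^96
awk -v s="$sqrtPriceX96" 'BEGIN { p = (s / 2^96) ^ 2; printf "price (token1/token0): %.6f\n", p }'
```

A production normalizer would additionally scale by `10^(decimals0 - decimals1)` and may need big-number arithmetic for extreme ticks; `awk` doubles are only a back-of-envelope check.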
**Config:** Add entries to `ai-mcp-pmm-controller/config/pool_profiles.json` (see below). Pools created on a chain that uses Uniswap (from the token-aggregation indexer or deployment-status) should be added to the allowlist with this profile.

**Example pool_profiles addition:**

```json
"uniswap_v2_pair": {
  "methods": {
    "get_reserves": "getReserves",
    "token0": "token0",
    "token1": "token1"
  }
},
"uniswap_v3_pool": {
  "methods": {
    "slot0": "slot0",
    "liquidity": "liquidity",
    "token0": "token0",
    "token1": "token1"
  }
}
```

---
## 3. Dashboard and API alignment

The **token-aggregation API** and the **MCP** should expose **the same set of pools** for a given chain (DODO + Uniswap, once indexed):

- **Source of truth:** (1) **Chain 138:** DODOPMMIntegration.getAllPools() + poolConfigs (drives both the indexer and the MCP allowlist via `generate-mcp-allowlist-from-chain138.sh`). (2) **Public chains:** deployment-status.json `pmmPools` (and any Uniswap pool list) drives both the indexer config (CHAIN_*_DODO_*, CHAIN_*_UNISWAP_*) and the MCP allowlist (via `generate-mcp-allowlist-from-deployment-status.sh`).
- **Practice:** After deploying or creating pools on a chain: (1) update deployment-status.json (for 138, the integration holds the pools on-chain); (2) run the appropriate allowlist-generator script; (3) ensure the token-aggregation indexer has the correct factory/integration env for that chain and has been run. The custom dashboard (using the API) and the MCP/AI (using the allowlist) then see the same pools.

---
## 4. Automation triggers

How the dedicated AI (pool manager) is **triggered** to read state and optionally execute rebalance/add/remove-liquidity actions:

| Trigger | Description |
|---------|-------------|
| **Scheduled** | A cron job or scheduler (e.g. every 5–15 min) calls the MCP/API to get pool state for all allowlisted pools; the AI (or a rule engine) decides whether to rebalance, add, or remove liquidity; if allowed by policy, the executor submits the tx. |
| **Event-driven** | The indexer or a chain watcher emits events (e.g. "reserve delta > X" or "price deviation > band"); this triggers the AI to fetch state via the MCP/API and decide on an action; the executor runs within cooldown and circuit-break rules. |
| **Manual** | The operator asks the AI (via MCP or chat) for a quote or recommendation (e.g. "quote add liquidity for pool X"); the AI returns a suggestion; the operator executes the tx manually. |

**Policy:** Document which triggers are enabled (scheduled vs event vs manual), the max trade size, cooldown, and circuit-break rules so the AI/executor stays within guardrails. See [MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md](../03-deployment/MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md) and the allowlist `limits` in allowlist-138.json.
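The cooldown part of such a policy is simple enough to sketch; a bash fragment an executor wrapper might run before acting (the variable names and the 900 s value are assumptions, not the allowlist's actual `limits` fields):

```shell
now=1000; last_run=400; cooldown=900    # example epoch-seconds; real code would use $(date +%s)
if [ $(( now - last_run )) -lt "$cooldown" ]; then
  echo "skip: still inside the cooldown window"
else
  echo "eligible: fetch pool state via MCP and evaluate the policy"
fi
```

The same gate pattern extends to max trade size and circuit-break counters: check every guardrail first, and only then let the executor sign a transaction.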
---
## 5. References

- [AI_AGENTS_57XX_DEPLOYMENT_PLAN.md](AI_AGENTS_57XX_DEPLOYMENT_PLAN.md)
- [AI_AGENTS_57XX_MCP_CONTRACTS_AND_CHAINS.md](AI_AGENTS_57XX_MCP_CONTRACTS_AND_CHAINS.md)
- [MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md](../03-deployment/MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md)
- [POOL_ACCESS_DASHBOARD_API_MCP.md](../11-references/POOL_ACCESS_DASHBOARD_API_MCP.md)
- [PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN.md](../03-deployment/PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN.md) §2.9
---

## 5. References

- [AI_AGENTS_57XX_MCP_ADDENDUM.md](AI_AGENTS_57XX_MCP_ADDENDUM.md) — Multi-chain MCP, Uniswap pool profile, dashboard/API alignment, automation triggers
- MCP allowlist shape: `ai-mcp-pmm-controller/config/allowlist.json`
- MCP pool profile (view methods): `ai-mcp-pmm-controller/config/pool_profiles.json`
- Chain 138 tokens: `docs/11-references/CHAIN138_TOKEN_ADDRESSES.md`
@@ -16,6 +16,8 @@ This document defines the target architecture for a **13-node Dell PowerEdge R63

**Scope:** All 13 R630s as Proxmox cluster nodes; optional separate management node (e.g. ml110) or integration of management on a subset of R630s. Design assumes **hyper-converged** (Proxmox + Ceph on the same nodes) for shared storage and true HA.

**Extended inventory:** The same site includes 3× Dell R750 servers, 2× Dell Precision 7920 workstations, and 2× UniFi Dream Machine Pro (gateways). See [HARDWARE_INVENTORY_MASTER.md](../11-references/HARDWARE_INVENTORY_MASTER.md), [13_NODE_NETWORK_AND_CABLING_CHECKLIST.md](../11-references/13_NODE_NETWORK_AND_CABLING_CHECKLIST.md), and [13_NODE_AND_ASSETS_BRING_ONLINE_CHECKLIST.md](../11-references/13_NODE_AND_ASSETS_BRING_ONLINE_CHECKLIST.md) for network topology, cabling, and bring-online order.

---

## 2. Cluster Design — 13 Nodes

@@ -45,6 +47,14 @@ This document defines the target architecture for a **13-node Dell PowerEdge R63

- **VLANs:** Same VLAN-aware bridge (e.g. vmbr0) on all nodes so VMs/containers keep their IPs when failed over.
- **IP plan for 13 R630s:** Reserve 13 consecutive IPs (e.g. 192.168.11.11–192.168.11.23 for r630-01 … r630-13). Document in `config/ip-addresses.conf` and DNS.

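The reservation above is mechanical enough to generate; a minimal sketch (hostnames and the 192.168.11.x range are from the plan; the helper itself is illustrative):

```python
def r630_ip_plan(base_prefix: str = "192.168.11.", first_host: int = 11,
                 count: int = 13) -> dict:
    """Map r630-01 … r630-13 to 13 consecutive IPs (192.168.11.11–.23)."""
    return {f"r630-{i + 1:02d}": f"{base_prefix}{first_host + i}"
            for i in range(count)}

plan = r630_ip_plan()
print(plan["r630-01"], plan["r630-13"])  # 192.168.11.11 192.168.11.23
```

Output in this shape can be pasted into `config/ip-addresses.conf` or a DNS zone fragment so the plan and the records cannot drift apart.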
### 2.4 Switching (10G backbone)

**Inventory:** 2 × UniFi XG 10G 16-port switches (see [HARDWARE_INVENTORY_MASTER.md](../11-references/HARDWARE_INVENTORY_MASTER.md)).

- Use for the **Ceph cluster network** and inter-node traffic; connect all 13 R630s via 10G for storage and replication.
- **Redundancy:** Two switches allow dual-attach per node (e.g. one link per switch or LACP) for HA.
- **Management:** Can stay on the existing 1G LAN, or use 10G for management if the NICs support it.

---

## 3. RAM Specifications — R630

@@ -66,6 +66,7 @@ Ensure the deployer has approved (or the script will approve) base/quote tokens

## 3. References

- [PMM_POOLS_FUNDING_PLAN.md](PMM_POOLS_FUNDING_PLAN.md) — Full funding plan (three pools, amounts, cast commands, checklist)
- [DODO_PMM_INTEGRATION.md](../../smom-dbis-138/docs/integration/DODO_PMM_INTEGRATION.md) — `addLiquidity(pool, baseAmount, quoteAmount)`
- [PRE_DEPLOYMENT_CHECKLIST](PRE_DEPLOYMENT_CHECKLIST.md) § Step 3
- [DEPLOYMENT_ORDER_OF_OPERATIONS](DEPLOYMENT_ORDER_OF_OPERATIONS.md) § Phase 3.1

@@ -9,6 +9,7 @@

## Chain 138 deployment requirements (learned 2026-02-12)

- **Gas price:** Chain 138 enforces a minimum gas price. Always use **`--with-gas-price 1000000000`** (1 gwei) for `forge script` and `forge create` when deploying to Chain 138; otherwise transactions fail with "Gas price below configured minimum gas price".
- **Gas / -32xxx errors when deploying:** If you see gas-related RPC errors (e.g. -32000, execution reverted, or out of gas), add **`--gas-estimate-multiplier 150`** (or 200) to `forge script ... --broadcast` so the broadcast uses a higher gas limit. See [RPC_ERRORS_32001_32602.md](../09-troubleshooting/RPC_ERRORS_32001_32602.md).
- **On-chain check:** After deployments, run `./scripts/verify/check-contracts-on-chain-138.sh` (uses `RPC_URL_138`; optional URL arg). The address list comes from `config/smart-contracts-master.json` when available. See [CONTRACT_ADDRESSES_REFERENCE](../11-references/CONTRACT_ADDRESSES_REFERENCE.md), [ADDRESS_MATRIX_AND_STATUS](../11-references/ADDRESS_MATRIX_AND_STATUS.md).
- **TransactionMirror:** The deploy script can hit a Forge broadcast constructor-args decode error. If so, deploy manually: `forge create contracts/mirror/TransactionMirror.sol:TransactionMirror --constructor-args <ADMIN_ADDRESS> --rpc-url $RPC_URL_138 --private-key $PRIVATE_KEY --gas-price 1000000000`.

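To avoid forgetting the Chain 138 gas flags, a small wrapper can append them to every `forge` invocation; a minimal sketch (the wrapper and its names are illustrative, only the flag values come from the notes above):

```python
CHAIN_138_GAS_FLAGS = ["--with-gas-price", "1000000000"]      # 1 gwei minimum on Chain 138
CHAIN_138_RETRY_FLAGS = ["--gas-estimate-multiplier", "150"]  # bump on -32xxx / out-of-gas

def forge_script_argv(script: str, rpc_url: str, retry: bool = False) -> list:
    """Build a `forge script` argv with the Chain 138 gas flags always present."""
    argv = ["forge", "script", script, "--rpc-url", rpc_url,
            "--broadcast", *CHAIN_138_GAS_FLAGS]
    if retry:
        argv += CHAIN_138_RETRY_FLAGS
    return argv

print(" ".join(forge_script_argv("script/Deploy.s.sol", "$RPC_URL_138", retry=True)))
```

The argv list can then be handed to `subprocess.run`; keeping the flags in one place means a redeploy never hits the minimum-gas-price failure by accident.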
@@ -291,6 +292,7 @@ BLOCKSCOUT_URL=http://192.168.11.140:4000 node forge-verification-proxy/server.j

| `No route to host` | Dev machine cannot reach RPC (RPC_URL_138, e.g. 192.168.11.211:8545) | Run from a machine on the LAN or VPN; or set RPC_URL_138=https://rpc-core.d-bis.org |
| `PRIVATE_KEY not set` | Missing in .env | Add deployer key to smom-dbis-138/.env |
| `Gas price below configured minimum gas price` | Chain 138 minimum gas not met | Use `--with-gas-price 1000000000` for all `forge script` / `forge create` on Chain 138 |
| RPC -32xxx / out of gas when deploying | Gas estimate too low or estimation failed | Use `--gas-estimate-multiplier 150` (or 200) with `forge script ... --broadcast`; ensure deployer has enough ETH. See [RPC_ERRORS_32001_32602.md](../09-troubleshooting/RPC_ERRORS_32001_32602.md). |
| `Failed to decode constructor arguments` (TransactionMirror) | Forge broadcast decode bug | Deploy via `forge create ... --constructor-args <ADMIN> --gas-price 1000000000` |
| `pam_chauthtok failed` (Blockscout) | Container PAM restriction | Use Proxmox Web UI: Container 5000 → Options → Password |
| `pvesm not found` (verify-storage) | Script must run ON the Proxmox host | `ssh root@r630-01` then run the script |

docs/03-deployment/DEFI_AGGREGATOR_DEX_ROUTING_FLOWS_DIAGRAM.md (new file, 170 lines)
@@ -0,0 +1,170 @@

# DeFi Aggregator and DEX Routing Flows — Visual Reference

**Purpose:** Single diagram of all DeFi aggregator and DEX routing flows for swaps (Chain 138, public chains, bridge, MCP/AI, and bots).

---

## Full flow diagram (Mermaid)

```mermaid
flowchart TB
    subgraph Users["Entry points"]
        U1["User / Frontend / dApp"]
        U2["Swap–bridge–swap orchestrator"]
        U3["MCP / AI (pool management)"]
        U4["Deviation bot (cW* peg)"]
    end

    subgraph Aggregators["Aggregator & quote layer"]
        direction TB
        TA["Token-aggregation API<br/>GET /quote, /tokens, /tokens/:addr/pools<br/>Single-hop quote; indexes DODO + Uniswap"]
        BQ["Bridge quote API<br/>POST /api/bridge/quote<br/>sourceSwapQuote + bridge + destinationSwapQuote"]
        EXT["External aggregators<br/>1inch, 0x, ParaSwap<br/>(Chain 138 not supported until they add it)"]
        BA["Bridge aggregator (explorer backend)<br/>Li.Fi, Socket, Squid, Symbiosis, Relay, Stargate<br/>Bridge routes only"]
    end

    subgraph Chain138["Chain 138 (SMOM-DBIS-138)"]
        direction TB
        INT["DODOPMMIntegration<br/>createPool, addLiquidity, swapExactIn<br/>swapCUSDTForUSDC, swapCUSDTForUSDT, ..."]
        PROV["DODOPMMProvider<br/>getQuote, executeSwap<br/>registerPool; routes to integration"]
        MESH["Full mesh: 66 c* vs c* pools<br/>+ c* vs official USDT/USDC<br/>All routable via swapExactIn"]
        POOLS138["Pools: cUSDT/cUSDC, cUSDT/USDT, cUSDC/USDC<br/>+ full mesh when create-pmm-full-mesh-chain138.sh run"]
        INT --> POOLS138
        PROV --> INT
        MESH --> INT
    end

    subgraph PublicChains["Public chains (1, 56, 137, 10, 100, 25, 42161, 8453, 43114, 42220, 1111)"]
        direction TB
        CW["cW* / HUB single-sided pools<br/>cWUSDT/USDC, cWUSDC/USDC, ...<br/>Per pool-matrix.json"]
        DEX["Native DEXs<br/>Uniswap V2/V3, DODO (if official)<br/>Used for destination swap & aggregator routing"]
        CW --> DEX
    end

    subgraph Bridge["Bridge layer"]
        direction LR
        CCIP["CCIP (WETH9/WETH10)"]
        LIFI["Li.Fi / Socket / Squid<br/>Symbiosis / Relay / Stargate"]
        CCIP --> LIFI
    end

    subgraph Optional["Optional (when deployed)"]
        ESR["EnhancedSwapRouter<br/>Size/slippage-based: Dodoex, Uniswap, Balancer, Curve<br/>Multi-provider for Chain 138"]
        COORD["SwapBridgeSwapCoordinator<br/>Swap source → bridge → swap dest"]
        ESR --> INT
    end

    subgraph MCP_AI["MCP & AI (pool management)"]
        direction TB
        ALLOW["Allowlist (per chain)<br/>pool_address, base_token, quote_token, profile"]
        TOOLS["MCP tools: get_pool_state, identify_pool_interface<br/>quote_add_liquidity, add_liquidity, remove_liquidity"]
        ALLOW --> TOOLS
        TOOLS --> INT
        TOOLS --> CW
        TOOLS --> DEX
    end

    subgraph Bot["Bot (cross-chain-pmm-lps)"]
        direction TB
        WATCH["Deviation watcher<br/>IDLE / ABOVE_BAND / BELOW_BAND"]
        ACT["Actions: buy/sell T, inventory adjust<br/>Cooldown, circuit break"]
        WATCH --> ACT
        ACT --> CW
    end

    %% User flows
    U1 --> TA
    U1 --> BQ
    U1 --> EXT
    U1 --> BA
    TA --> PROV
    TA --> POOLS138
    BQ --> INT
    BQ --> Bridge
    BQ --> DEX
    U2 --> BQ
    Bridge --> PublicChains
    U3 --> ALLOW
    U4 --> WATCH
    U4 --> TA
```

---

## Swap routing paths (sequence view)

```mermaid
sequenceDiagram
    participant U as User / dApp
    participant API as Token-aggregation API
    participant Prov as DODOPMMProvider
    participant Int as DODOPMMIntegration
    participant Pool as PMM Pool (138)
    participant Br as Bridge / Quote API
    participant Dest as Destination DEX / cW* pool

    Note over U,Dest: Flow A: Same-chain swap (Chain 138)
    U->>API: GET /quote?chainId=138&tokenIn&tokenOut&amountIn
    API->>Prov: (indexed pool)
    API-->>U: amountOut, poolAddress
    U->>Int: approve(tokenIn); swapExactIn(pool, tokenIn, amountIn, minOut)
    Int->>Pool: sellBase or sellQuote
    Pool-->>Int: amountOut
    Int-->>U: transfer tokenOut

    Note over U,Dest: Flow B: Swap–bridge–swap
    U->>Br: POST /api/bridge/quote (source=138, dest, token, amount)
    Br->>API: source quote (138)
    Br->>Dest: destination quote (public chain)
    Br-->>U: sourceSwapQuote, bridgeRoute, destinationSwapQuote
    U->>Int: swap on 138 (optional)
    U->>Br: bridge tx (CCIP / Li.Fi / …)
    U->>Dest: swap on destination (optional)

    Note over U,Dest: Flow C: MCP / AI pool management
    participant MCP as MCP (allowlist)
    U->>MCP: get_pool_state(pool_address)
    MCP->>Int: RPC getPoolConfig, getPoolReserves, getMidPrice
    Int-->>MCP: config, reserves, price
    MCP-->>U: state (for rebalance / add liquidity decision)
```

---

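The quote step in Flow A ultimately reduces to picking the best single-hop `amountOut` over the indexed pools; a minimal sketch (the quote dict shape is illustrative, not the token-aggregation API's actual response format):

```python
def best_single_hop(quotes: list):
    """Pick the indexed pool quote with the highest amountOut (single hop only)."""
    usable = [q for q in quotes if q.get("amountOut", 0) > 0]
    return max(usable, key=lambda q: q["amountOut"], default=None)

quotes = [
    {"pool": "0xPoolA", "source": "dodo_pmm", "amountOut": 997_000},
    {"pool": "0xPoolB", "source": "uniswap_v2", "amountOut": 995_500},
]
print(best_single_hop(quotes)["pool"])  # 0xPoolA
```

Multi-hop routing (the external aggregators' job) is deliberately out of scope here; the API stays single-hop per the flow descriptions below.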
## Flow descriptions (key paths)

| Flow | Path | Notes |
|------|------|-------|
| **Same-chain swap (138)** | User → Token-aggregation API (GET /quote) → DODOPMMProvider.getQuote / executeSwap → DODOPMMIntegration (swapExactIn or legacy swap) → Pool | Single-hop; full mesh supported via swapExactIn. |
| **Same-chain swap (public)** | User → External aggregator (1inch, 0x, ParaSwap) or token-aggregation (if chain indexed) → Native DEX (Uniswap/DODO) or cW* / HUB pool | cW* pools used when deployed and indexed. |
| **Swap–bridge–swap** | User → POST /api/bridge/quote → sourceSwapQuote (138: DODOPMMIntegration) → Bridge (CCIP / Li.Fi / …) → destinationSwapQuote (public DEX or cW* pool) | Optional SwapBridgeSwapCoordinator for one-tx. |
| **Bridge only** | User → GET /api/v1/bridge/routes, token-mapping → Bridge (CCIP, etc.) | No DEX swap on 138 or destination. |
| **MCP reads pool state** | AI/MCP → Allowlist → get_pool_state (RPC) → DODOPMMIntegration (138) or cW* pool or Uniswap (public) | One MCP per chain or multi-chain allowlist. |
| **MCP / AI maintenance** | AI → MCP quote_add_liquidity / add_liquidity / remove_liquidity → DODOPMMIntegration or public DEX | Dedicated MCP/AI for Dodoex + Uniswap pool management. |
| **Bot peg maintenance** | Bot → Deviation watcher (pool vs oracle) → buy/sell cW* or inventory adjust → cW* / HUB pools on public chain | State machine: IDLE, ABOVE_BAND, BELOW_BAND, COOLDOWN, CIRCUIT_BREAK. |
| **Multi-provider (future)** | User / Contract → EnhancedSwapRouter → DODOPMMProvider + Uniswap + Balancer + Curve (by size/slippage) → Pools on 138 | When EnhancedSwapRouter deployed and pools exist. |

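The bot's state machine from the peg-maintenance row can be sketched as a pure transition function; a minimal sketch, assuming a symmetric peg band in basis points (the state names are from the design; the band/circuit thresholds and transition details are illustrative):

```python
IDLE, ABOVE, BELOW, COOLDOWN, CIRCUIT = (
    "IDLE", "ABOVE_BAND", "BELOW_BAND", "COOLDOWN", "CIRCUIT_BREAK")

def next_state(state: str, deviation_bps: float, band_bps: float = 50,
               circuit_bps: float = 500, cooldown_done: bool = True) -> str:
    """Transition on observed pool-vs-oracle deviation (positive = pool above peg)."""
    if state == CIRCUIT:
        return CIRCUIT                      # manual reset only
    if abs(deviation_bps) >= circuit_bps:
        return CIRCUIT                      # deviation too large: halt trading
    if state == COOLDOWN and not cooldown_done:
        return COOLDOWN                     # wait out the cooldown window
    if deviation_bps > band_bps:
        return ABOVE                        # e.g. sell cW* / adjust inventory
    if deviation_bps < -band_bps:
        return BELOW                        # e.g. buy cW* / adjust inventory
    return IDLE

print(next_state(IDLE, 120))   # ABOVE_BAND
print(next_state(IDLE, -10))   # IDLE
print(next_state(IDLE, 900))   # CIRCUIT_BREAK
```

In the real bot, entering COOLDOWN after an executed action (and the actual buy/sell sizing) would live in the executor, not in this transition function.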
---

## Component summary

| Component | Role in routing |
|-----------|-----------------|
| **Token-aggregation API** | Single-hop quote aggregator over indexed DODO (and Uniswap) on 138 and configured public chains. |
| **DODOPMMIntegration** | Creates pools, adds liquidity, executes swaps (legacy pairs + swapExactIn for the full mesh). |
| **DODOPMMProvider** | Routing front: getQuote, executeSwap; registers pools; uses the integration for execution. |
| **Bridge quote API** | Orchestrates source swap + bridge + destination swap; uses token-aggregation or the destination DEX for quotes. |
| **External aggregators** | 1inch, 0x, ParaSwap: multi-DEX routing on supported chains; 138 not supported until they add it. |
| **Bridge aggregator** | Explorer backend: Li.Fi, Socket, etc., for bridge routes only. |
| **MCP** | Reads (and optionally executes) pool state and liquidity ops; allowlist per chain or multi-chain. |
| **Bot** | Maintains the cW* peg on public chains via single-sided cW* / HUB pools; deviation and inventory. |
| **EnhancedSwapRouter** | (Optional) Multi-provider router on 138 when Uniswap/Balancer/Curve pools exist. |

---

## References

- [DEX_AND_AGGREGATORS_CHAIN138_EXPLAINER.md](../04-configuration/DEX_AND_AGGREGATORS_CHAIN138_EXPLAINER.md)
- [PMM_DEX_ROUTING_STATUS.md](../11-references/PMM_DEX_ROUTING_STATUS.md)
- [PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN.md](PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN.md)
- [MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md](MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md)
docs/03-deployment/EXPLORER_FRONTEND_404_FIX_RUNBOOK.md (new file, 144 lines)
@@ -0,0 +1,144 @@

# Explorer frontend 404 fix — runbook

**Date:** 2026-03-02
**Issue:** Root path (`/`) at https://explorer.d-bis.org returns 404 "Page not found".
**Cause:** Nginx proxies `/` to Blockscout :4000; Blockscout is API-only and has no route for GET `/`.

**Status:** ✅ **FIXED** (2026-03-02). Nginx now serves the custom frontend from `/var/www/html` for `/` and SPA paths; `/api/v1/` and `/api/config/*` preserved. All endpoints verified 200.

---

## 1. How the UI is supposed to be served

This deployment uses a **custom frontend** (SolaceScanScout), not the built-in Blockscout web UI:

- **Static frontend:** Built files under `/var/www/html/` on VMID 5000:
  - `index.html` (main SPA shell, contains "SolaceScanScout")
  - `explorer-spa.js`, `favicon.ico`, `apple-touch-icon.png`, `/snap/`, etc.
- **Blockscout (port 4000):** API only. The Phoenix router has no route for `GET /`; it serves `/api/*` and returns 404 for `/`.
- **Nginx:** Should serve the static frontend for `/` and SPA paths, and proxy only `/api/`, `/api/v1/`, `/api/config/*`, `/health` to the appropriate backends (Blockscout :4000, token-aggregation :3001, or static config files).

**Relevant docs/config:**

- `explorer-monorepo/scripts/fix-nginx-serve-custom-frontend.sh` — nginx config that serves `/var/www/html` for `/` and SPA paths.
- `explorer-monorepo/scripts/fix-nginx-conflicts-vmid5000.sh` — current “conflicts” config: proxies `location /` to :4000 (no static root).
- `explorer-monorepo/scripts/deploy-frontend-to-vmid5000.sh` — deploys the frontend files and can apply the custom-frontend nginx config.
- `docs/archive/fixes/BLOCKSCOUT_WEB_INTERFACE_404_FIX.md` — historical 404 investigation.
- `explorer-monorepo/docs/BLOCKSCOUT_START_AND_BUILD.md` — Blockscout container/assets; the UI in this setup is the custom frontend, not Blockscout’s own UI.

---

## 2. What we confirmed on VMID 5000

- **Custom frontend present:** `/var/www/html/index.html` exists (~60KB), contains "SolaceScanScout"; `explorer-spa.js`, favicon, `/snap/`, etc. are present.
- **Blockscout logs:** For `GET /` to :4000, Blockscout logs `Phoenix.Router.NoRouteError`, "no route found for GET / (BlockScoutWeb.Router)". So a 404 for `/` is expected when nginx sends `/` to Blockscout.
- **Live nginx:** The HTTPS server block has `location / { proxy_pass http://127.0.0.1:4000; }` with **no** `root` / `try_files` for the frontend. So every request to `/` is proxied to Blockscout and returns 404.

Conclusion: the frontend files are in place; the **nginx config** is wrong (proxy-only for `/` instead of serving static files).

---

## 3. Fix: make nginx serve the custom frontend for `/`

Apply a config that, for the HTTPS (and optionally HTTP) server block:

1. Serves **`/`** from `/var/www/html` (e.g. `location = /` with `root /var/www/html` and `try_files /index.html =404`).
2. Serves **SPA paths** (e.g. `/address`, `/tx`, `/blocks`, …) from the same root with `try_files $uri $uri/ /index.html`.
3. Keeps **`/api/`**, **`/api/v1/`**, **`/api/config/*`**, **`/snap/`**, **`/health`** as they are (proxy or alias).

**Option A — Apply the full custom-frontend script (recommended)**

From the repo root, from a host that can SSH to the Proxmox node for VMID 5000 (e.g. r630-02):

```bash
# Set Proxmox host (r630-02)
export PROXMOX_R630_02=192.168.11.12  # or PROXMOX_HOST_R630_02

# Apply nginx config that serves / and SPA from /var/www/html
cd /home/intlc/projects/proxmox/explorer-monorepo
# Copy script into VM and run (requires pct exec)
EXPLORER_VM_HOST=root@192.168.11.12 bash scripts/apply-nginx-explorer-fix.sh
```

Or run the fix script **inside** VMID 5000 (e.g. after copying it in):

```bash
# From Proxmox host
pct exec 5000 -- bash /path/to/fix-nginx-serve-custom-frontend.sh
```

**Option B — Manual nginx change (HTTPS server block only)**

On VMID 5000, edit `/etc/nginx/sites-enabled/blockscout`. In the `server { listen 443 ... }` block, **replace** the single:

```nginx
location / {
    proxy_pass http://127.0.0.1:4000;
    ...
}
```

with something equivalent to:

```nginx
# Serve custom frontend for root
location = / {
    root /var/www/html;
    add_header Cache-Control "no-store, no-cache, must-revalidate";
    try_files /index.html =404;
}

# SPA paths — serve index.html for client-side routing
location ~ ^/(address|tx|block|token|tokens|blocks|transactions|bridge|weth|watchlist|nft|home|analytics|operator)(/|$) {
    root /var/www/html;
    try_files /index.html =404;
    add_header Cache-Control "no-store, no-cache, must-revalidate";
}

# All other non-API paths — static files and SPA fallback
location / {
    root /var/www/html;
    try_files $uri $uri/ /index.html;
}
```

Keep all existing `location /api/`, `location /api/v1/`, `location /api/config/`, `location /snap/`, `location /health` blocks unchanged and **before** the catch-all `location /` (so API and config still proxy correctly).

Then:

```bash
nginx -t && systemctl reload nginx
```

---

## 4. Verify

- From LAN: `curl -sk -H "Host: explorer.d-bis.org" https://192.168.11.140:443/` should return **200** with HTML containing "SolaceScanScout" (or similar), not "Page not found".
- Public: `https://explorer.d-bis.org/` should show the explorer UI.
- API unchanged: `curl -s http://192.168.11.140:4000/api/v2/stats` and `https://explorer.d-bis.org/api/v2/stats` should still return JSON.

---

## 5. Summary

| Item | Status |
|------|--------|
| How UI is served | Custom static frontend in `/var/www/html/` (index.html + SPA); Blockscout :4000 is API-only. |
| Frontend files on VMID 5000 | Present; `index.html` contains SolaceScanScout. |
| Blockscout logs for GET `/` | NoRouteError for GET `/` — expected when nginx proxies `/` to :4000. |
| Nginx fix | Serve `/` and SPA paths from `root /var/www/html` with `try_files`; proxy only `/api/` (and specific locations) to :4000. |
| Script to apply | `fix-nginx-serve-custom-frontend.sh` or `apply-nginx-explorer-fix.sh`; or apply the manual snippet above. |

---

## 6. Completion (2026-03-02)

- **Applied:** `apply-nginx-explorer-fix.sh` (via `EXPLORER_VM_HOST=root@192.168.11.12`).
- **Script updated:** `fix-nginx-serve-custom-frontend.sh` now includes `location /api/v1/` (token-aggregation :3001) and `location = /api/config/token-list` / `location = /api/config/networks` (static JSON) so config and token-aggregation are not lost on re-apply.
- **Verification:** From LAN, all return 200: `/` (frontend HTML), `/api/config/token-list`, `/api/config/networks`, `/api/v2/stats`, `/api/v1/chains`.
docs/03-deployment/MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md (new file, 126 lines)
@@ -0,0 +1,126 @@

# MCP and AI Plan Upgrades — Dodoex and Uniswap Liquidity Pool Management

**Purpose:** Upgrades to the PMM full-mesh and single-sided LP plans so that **MCPs and AIs dedicated to managing and maintaining** all Dodoex (DODO PMM) and Uniswap liquidity pools are explicitly covered and operational.

**Status:** All planned upgrades (§2) and all additional recommendations (§5) are **implemented**. See §4 and §5.1 for implementation details. Optional tasks index: [OPTIONAL_TASKS_CHECKLIST.md](../00-meta/OPTIONAL_TASKS_CHECKLIST.md).

**Related:** [PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN.md](PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN.md), [AI_AGENTS_57XX_MCP_CONTRACTS_AND_CHAINS.md](../02-architecture/AI_AGENTS_57XX_MCP_CONTRACTS_AND_CHAINS.md), [POOL_ACCESS_DASHBOARD_API_MCP.md](../11-references/POOL_ACCESS_DASHBOARD_API_MCP.md).

---

## 1. Current state (what the plans already assume)

| Component | Role | Plan coverage today |
|-----------|------|---------------------|
| **ai-mcp-pmm-controller** | Read pool state (getMidPrice, reserves, k, fee); optional quote/add/remove liquidity | Allowlist per chain; one chain per MCP instance; profile `dodo_pmm_v2_like` for Chain 138 Mock DVM |
| **Token-aggregation API** | Index DODO (and Uniswap when env set); single-hop quote; tokens/pools discovery | Chain 138 + optional public chains via `CHAIN_*_DODO_PMM_INTEGRATION`, `CHAIN_*_RPC_URL` |
| **cross-chain-pmm-lps bot** | Deviation watcher; buy/sell to compress δ; inventory adjust; circuit break | Design only; not wired to MCP or API for automation |
| **EnhancedSwapRouter** | Multi-provider routing (Dodoex, Uniswap, Balancer, Curve) | Not deployed; no MCP/API integration specified |

---
## 2. Upgrades to the plans

### 2.1 Full-mesh and single-sided plans (PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN)

| Upgrade | Description |
|---------|-------------|
| **Allowlist sync with mesh** | After running `create-pmm-full-mesh-chain138.sh`, **automatically or semi-automatically** update the MCP allowlist (e.g. `ai-mcp-pmm-controller/config/allowlist-138.json`) with every new pool (pool_address, base_token, quote_token, profile). Document a script or MCP tool that: (1) reads `DODOPMMIntegration.getAllPools()` and `getPoolConfig(pool)` on Chain 138, (2) writes the allowlist so the MCP can read state for all mesh pools without manual entry. |
| **Per-chain allowlist for public cW* pools** | When single-sided cW* / HUB pools are deployed on a public chain, extend the plan to: (1) add a **per-chain MCP allowlist** (or multi-chain allowlist) so the dedicated MCP/AI can read pool state on that chain; (2) document the mapping from `cross-chain-pmm-lps/config/deployment-status.json` (and pool-matrix) to the MCP config so one source of truth drives both deployment and MCP visibility. |
| **Uniswap pool indexing and maintenance** | Where the plan says "DODO PMM or Uniswap V2/V3 per chain": (1) add an explicit **Uniswap pool creation and indexing** path: set `CHAIN_*_UNISWAP_V2_FACTORY` / `CHAIN_*_UNISWAP_V3_FACTORY` and run the token-aggregation indexer so Uniswap pools appear in the API; (2) add a **maintenance** subsection: who (or which AI/MCP) is responsible for adding liquidity, rebalancing, or pausing on Uniswap pools on each chain; (3) if an AI/MCP is dedicated to Uniswap pools, define its **inputs** (API quote, pool state from indexer or RPC) and **allowed actions** (e.g. quote only vs. submit tx). |
| **Bot–MCP–API integration** | In the **single-sided** plan (and cross-chain-pmm-lps): (1) specify that the **deviation bot** (v1/v2) can consume **pool state from the MCP** (e.g. `dodo.get_pool_state` for each allowlisted pool) or from the **token-aggregation API** (e.g. `/api/v1/tokens/:address/pools`, reserve/price from indexer); (2) specify that the bot’s **actions** (e.g. trigger buy/sell to compress δ) are either executed by the same AI that uses the MCP or by a separate executor that receives signals from the MCP/AI; (3) add **circuit-break and cooldown** to the MCP/API so the AI can read "pool in CIRCUIT_BREAK" or "cooldown until block X" and avoid sending trades. |
| **Dedicated “pool manager” MCP/AI scope** | Add a short subsection: **Dedicated MCP/AI for Dodoex and Uniswap pool management.** Scope: (1) **Dodoex (Chain 138 + public cW*):** MCP tools for read state, quote add/remove liquidity; allowlist kept in sync with full mesh and single-sided deployments; (2) **Uniswap (per chain where used):** Same idea—allowlist or indexer-driven list of Uniswap V2/V3 pools; MCP or API to read pool state and optionally quote; (3) **Maintenance tasks:** Document that this MCP/AI is the **designated** reader (and optionally executor) for rebalancing, add/remove liquidity, and responding to deviation alerts within policy (slippage, size, circuit break). |

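The allowlist-sync idea in the first row is a pure transformation from on-chain pool data to the allowlist shape; a minimal sketch (field names follow the allowlist shape described here; the input list stands in for `getAllPools()` / `getPoolConfig()` results read via RPC):

```python
import json

def allowlist_fragment(chain_id: int, pools: list,
                       profile: str = "dodo_pmm_v2_like") -> dict:
    """Turn (pool, base, quote) tuples read from DODOPMMIntegration into an MCP allowlist."""
    return {
        "chainId": chain_id,
        "pools": [
            {
                "pool_address": p["pool"],
                "base_token": p["base"],
                "quote_token": p["quote"],
                "profile": profile,
            }
            for p in pools
        ],
    }

pools = [{"pool": "0xPool1", "base": "0xCUSDT", "quote": "0xCUSDC"}]
print(json.dumps(allowlist_fragment(138, pools), indent=2))
```

Writing the result to `allowlist-138.json` after each mesh run keeps the MCP's view and the deployed pools from drifting apart, which is exactly what the sync scripts in §4 automate.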
### 2.2 RUNBOOK and script upgrades (SINGLE_SIDED_LPS_PUBLIC_NETWORKS_RUNBOOK)

| Upgrade | Description |
|---------|-------------|
| **Post-deploy: update MCP allowlist** | In the runbook checklist, add a step: "After deploying cW* / HUB pools on chain X, update the MCP allowlist for chain X (or the multi-chain config) with pool_address, base_token, quote_token, and profile so the dedicated MCP/AI can read and manage those pools." |
| **Post-deploy: update deployment-status.json** | Already implied; make it explicit that `deployment-status.json` is the **source for MCP allowlist generation** (script: given chainId, read deployment-status and output allowlist fragment). |
| **List Uniswap pools per chain** | If Uniswap is used on a chain, add a step to list Uniswap V2/V3 pools (from factory events or indexer) and add them to the same MCP/API visibility (allowlist or indexer config) so one MCP/AI can "see" both DODO and Uniswap pools. |

### 2.3 AI_AGENTS_57XX and POOL_ACCESS_DASHBOARD_API_MCP

| Upgrade | Description |
|---------|-------------|
| **Multi-chain MCP** | Document an option for **one MCP server** that supports **multiple chains**: e.g. allowlist contains `chainId` per pool, and the MCP uses the appropriate RPC per chain when calling `get_pool_state` or `quote_add_liquidity`. This reduces the need to run one MCP instance per chain for Dodoex + Uniswap. |
| **Uniswap pool profile** | Add a **pool profile** (e.g. `uniswap_v2_pair` or `uniswap_v3_pool`) to the MCP: expected view methods (getReserves, token0, token1, or slot0, liquidity, etc.) so the MCP can read Uniswap pool state and expose it to the same AI that manages DODO pools. |
| **Dashboard and API alignment** | State that the **token-aggregation API** and the **MCP** should expose **the same set of pools** for a given chain (DODO + Uniswap once indexed): so the "custom dashboard" and the MCP/AI use one source of truth (allowlist + indexer config) and stay in sync. |
| **Automation triggers** | Document how the dedicated AI is **triggered**: (1) **Scheduled:** cron or scheduler calls MCP/API to get state, then decides rebalance/add/remove; (2) **Event-driven:** indexer or chain watcher emits "reserve delta" or "price deviation" and triggers the AI; (3) **Manual:** operator asks the AI (via MCP) for a quote or recommendation, then executes manually. |

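The multi-chain MCP option in the first row amounts to selecting the RPC endpoint by the pool's `chainId`; a minimal sketch (the config-dict shape and the Ethereum URL are illustrative; `rpc-core.d-bis.org` is the Chain 138 endpoint mentioned elsewhere in these docs):

```python
RPC_BY_CHAIN = {
    138: "https://rpc-core.d-bis.org",    # Chain 138 public RPC
    1: "https://eth.example-rpc.org",     # placeholder; real values come from env/config
}

def rpc_for_pool(pool: dict) -> str:
    """Pick the RPC URL for an allowlisted pool based on its chainId."""
    chain_id = pool["chainId"]
    try:
        return RPC_BY_CHAIN[chain_id]
    except KeyError:
        raise ValueError(f"no RPC configured for chain {chain_id}") from None

print(rpc_for_pool({"chainId": 138, "pool_address": "0xPool1"}))
```

Failing loudly on an unconfigured chain is deliberate: a multi-chain MCP should never silently fall back to the wrong network.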
---

## 3. Summary: what to add to the “above” plans

- **PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN:** Allowlist sync with full mesh; per-chain allowlist for cW*; Uniswap indexing and maintenance; bot–MCP–API integration; dedicated MCP/AI scope for Dodoex + Uniswap.
- **SINGLE_SIDED_LPS_PUBLIC_NETWORKS_RUNBOOK:** Post-deploy MCP allowlist update; deployment-status as source for allowlist; Uniswap pool listing where applicable.
- **AI_AGENTS_57XX / POOL_ACCESS:** Multi-chain MCP option; Uniswap pool profile; dashboard/API alignment; automation triggers (scheduled, event-driven, manual).

---

## 4. Completed upgrades (implemented)

| Upgrade | Status | Implementation |
|---------|--------|----------------|
| **Allowlist sync with mesh** | Done | Script `scripts/generate-mcp-allowlist-from-chain138.sh`: reads `allPools(uint256)` and `poolConfigs(pool)` via RPC; outputs allowlist JSON. Use `-o ai-mcp-pmm-controller/config/allowlist-138.json` to write. Documented in PMM plan §1.5. |
| **Per-chain allowlist from deployment-status** | Done | Script `scripts/generate-mcp-allowlist-from-deployment-status.sh <chain_id> [-o path]`: reads `deployment-status.json` pmmPools for that chain; outputs allowlist fragment with limits. Documented in SINGLE_SIDED runbook §5a, §6. |
| **Post-deploy MCP allowlist step** | Done | Runbook §5a: run generate-mcp-allowlist-from-deployment-status.sh after deploying cW* / HUB pools. |
| **deployment-status as source for allowlist** | Done | Runbook §6: explicit that deployment-status.json is the source of truth; script generates allowlist from it. |
| **Uniswap pool listing / indexing step** | Done | Runbook §6a: set CHAIN_*_UNISWAP_* env, run indexer; add Uniswap pools to MCP allowlist with profile when available. |
| **Uniswap pool profile** | Done | Added `uniswap_v2_pair` and `uniswap_v3_pool` to `ai-mcp-pmm-controller/config/pool_profiles.json` with expected methods. MCP server must implement reading these methods when the profile is used. |
| **PMM plan: Uniswap, Bot–MCP, Pool Manager** | Done | Plan §2.7 (Uniswap indexing and maintenance), §2.8 (Bot–MCP–API integration), §2.9 (Dedicated MCP/AI scope); §1.5 (Allowlist sync with mesh). |
| **Multi-chain MCP, dashboard alignment, automation** | Done | [AI_AGENTS_57XX_MCP_ADDENDUM.md](../02-architecture/AI_AGENTS_57XX_MCP_ADDENDUM.md): multi-chain allowlist/RPC, Uniswap profile reference, dashboard/API alignment, automation triggers (scheduled, event-driven, manual). |

---

## 5. Additional recommendations

| # | Recommendation | Priority | Notes |
|---|-----------------|----------|--------|
| 1 | **Implement multi-chain allowlist in MCP server** | High | Extend ai-mcp-pmm-controller to accept `chainId` per pool and select RPC by chain so one server can serve Chain 138 and all public cW* chains. |
| 2 | **Wire MCP get_pool_state to Uniswap profiles** | High | In the MCP tool implementation, when the profile is `uniswap_v2_pair` or `uniswap_v3_pool`, call getReserves/slot0/liquidity and return normalized state (reserves, derived price) for the AI. |
| 3 | **Expose circuit-break and cooldown in API or MCP** | Medium | Add an endpoint or MCP tool that returns bot state (IDLE, ABOVE_BAND, BELOW_BAND, COOLDOWN, CIRCUIT_BREAK) and cooldown-until block/time so the AI does not submit trades during cooldown or circuit-break. Source: cross-chain-pmm-lps peg-bands and bot state. |
| 4 | **Event-driven trigger for bot/AI** | Medium | When the token-aggregation indexer or a chain watcher detects a reserve delta or price deviation beyond a threshold, emit an event or call a webhook that triggers the dedicated AI to fetch state (via MCP/API) and decide rebalance; keeps reaction time low without polling. |
| 5 | **Single allowlist file for multi-chain** | Low | Allow one JSON file to contain pools for multiple chains (array of { chainId, pools }) so the MCP can load one file and serve all chains; merge output of generate-mcp-allowlist-from-chain138.sh and generate-mcp-allowlist-from-deployment-status.sh per chain into one manifest. |
| 6 | **Rate limits and gas caps in MCP** | Medium | Enforce allowlist `limits` (max_slippage_bps, max_single_tx_notional_usd, cooldown_seconds, gas_cap_gwei) in the MCP server when the AI requests quote or execute; reject or cap out-of-policy requests. |
| 7 | **Audit trail for AI-driven txs** | Medium | Log all MCP tool calls (get_pool_state, quote_add_liquidity, add_liquidity, etc.) and any executed txs (tx hash, pool, amount, chain) for audit and incident review. |
| 8 | **EnhancedSwapRouter integration with MCP** | Low | When EnhancedSwapRouter is deployed on Chain 138, add it to the MCP/API so the AI can reason about multi-provider routing (Dodoex vs Uniswap vs Balancer) and optionally trigger swaps through the router. |

### 5.1 Implementation status (all completed)

| # | Implementation |
|---|----------------|
| 1 | **Multi-chain allowlist:** `config/server.py` supports allowlist format `chains: [ { chainId, pools } ]` and per-pool `chain_id`. RPC per chain via env `RPC_138`, `RPC_137`, etc. or `RPC_BY_CHAIN_PATH` (JSON file). `_get_web3(chain_id)` caches Web3 per chain. |
| 2 | **Uniswap get_pool_state:** In `dodo_get_pool_state`, profiles `uniswap_v2_pair` and `uniswap_v3_pool` use getReserves/slot0/liquidity; return normalized state (reserves, mid_price, liquidity_base/quote). |
| 3 | **Circuit-break and cooldown:** `GET /bot_state` and MCP tool `dodo.get_bot_state` return bot state from `BOT_STATE_PATH` (JSON). Example: `config/bot_state.example.json`. Optional `params.pool` for per-pool state. |
| 4 | **Event-driven trigger:** `POST /webhook/trigger` accepts JSON body `{ "reason", "chain_id", "pool" }`; returns 202 and logs. Wire indexer/watcher to POST here; AI can poll MCP or react to webhook. |
| 5 | **Single multi-chain allowlist:** Allowlist format supports `chains: [ { chainId, pools } ]`. Script `scripts/merge-mcp-allowlist-multichain.sh -o path` merges Chain 138 and other chains into one file. |
| 6 | **Rate limits and gas caps:** `_check_limits_and_cooldown()` enforces notional, cooldown_seconds (via `COOLDOWN_STATE_PATH`), gas_cap_gwei, max_slippage_bps. Used in `dodo_simulate_action`; use `_record_trade_ts(pool)` after writes. |
| 7 | **Audit trail:** Every MCP tool response goes through `_audit_and_return()`; logs to `AUDIT_LOG_PATH` (JSONL) and logger. |
| 8 | **EnhancedSwapRouter stub:** MCP tool `dodo.get_router_quote` returns `configured: true/false` from `ENHANCED_SWAP_ROUTER_ADDRESS`. Add contract calls when router is deployed. |
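As an illustration of the multi-chain allowlist format described in rows 1 and 5, a minimal file might look like the fragment below. The `chains` / `chainId` / `pools` structure, the `dodo_pmm_v2_like` profile name, and the `limits` field names come from this document; the `address` key, the nesting of `limits` under each pool, and the numeric values are assumptions for illustration (the pool address is the cUSDT/cUSDC Pool 1 on Chain 138):

```json
{
  "chains": [
    {
      "chainId": 138,
      "pools": [
        {
          "address": "0x9fcB06Aa1FD5215DC0E91Fd098aeff4B62fEa5C8",
          "profile": "dodo_pmm_v2_like",
          "limits": {
            "max_slippage_bps": 50,
            "max_single_tx_notional_usd": 10000,
            "cooldown_seconds": 300,
            "gas_cap_gwei": 100
          }
        }
      ]
    }
  ]
}
```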

---

## 6. Next steps (operator / runtime)

After implementation, operators can:

1. **Multi-chain MCP:** Set `ALLOWLIST_PATH` to a multi-chain file (from `scripts/merge-mcp-allowlist-multichain.sh -o path`); set `RPC_138`, `RPC_137`, etc. or `RPC_BY_CHAIN_PATH`.
2. **Bot state:** Set `BOT_STATE_PATH` to a JSON file (see `ai-mcp-pmm-controller/config/bot_state.example.json`); update it from your peg-bands/bot or leave the default.
3. **Audit / cooldown:** Set `AUDIT_LOG_PATH` and `COOLDOWN_STATE_PATH` if you want a persistent audit log and cooldown ledger.
4. **Webhook:** Wire your indexer or chain watcher to `POST /webhook/trigger` with `{ "reason", "chain_id", "pool" }` when reserve or price deviation exceeds a threshold.
5. **EnhancedSwapRouter:** When the router is deployed on Chain 138, set `ENHANCED_SWAP_ROUTER_ADDRESS` and extend `dodo.get_router_quote` in the MCP server to call the router contract.
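The webhook wiring in step 4 can be smoke-tested from the shell. The sketch below builds the JSON body with the field names given above and prints the curl command rather than sending it, since the MCP host and port (`localhost:8080` here) are deployment-specific assumptions:

```shell
# Build the /webhook/trigger body (field names from this doc; values illustrative)
PAYLOAD='{"reason":"price_deviation","chain_id":137,"pool":"0x0000000000000000000000000000000000000000"}'

# Print the command to run against your MCP server (host:port is a placeholder)
echo "curl -s -X POST http://localhost:8080/webhook/trigger" \
     "-H 'Content-Type: application/json' -d '$PAYLOAD'"
```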

---

## 7. References

- [PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN.md](PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN.md)
- [SINGLE_SIDED_LPS_PUBLIC_NETWORKS_RUNBOOK.md](SINGLE_SIDED_LPS_PUBLIC_NETWORKS_RUNBOOK.md)
- [AI_AGENTS_57XX_MCP_CONTRACTS_AND_CHAINS.md](../02-architecture/AI_AGENTS_57XX_MCP_CONTRACTS_AND_CHAINS.md)
- [AI_AGENTS_57XX_DEPLOYMENT_PLAN.md](../02-architecture/AI_AGENTS_57XX_DEPLOYMENT_PLAN.md)
- [POOL_ACCESS_DASHBOARD_API_MCP.md](../11-references/POOL_ACCESS_DASHBOARD_API_MCP.md)
- [DEX_AND_AGGREGATORS_CHAIN138_EXPLAINER.md](../04-configuration/DEX_AND_AGGREGATORS_CHAIN138_EXPLAINER.md)
- [DEFI_AGGREGATOR_DEX_ROUTING_FLOWS_DIAGRAM.md](DEFI_AGGREGATOR_DEX_ROUTING_FLOWS_DIAGRAM.md) — Mermaid diagram of all DeFi aggregator and DEX routing flows for swaps
- [AI_AGENTS_57XX_MCP_ADDENDUM.md](../02-architecture/AI_AGENTS_57XX_MCP_ADDENDUM.md) — Multi-chain MCP, Uniswap profile, automation triggers
- [OPTIONAL_TASKS_CHECKLIST.md](../00-meta/OPTIONAL_TASKS_CHECKLIST.md) — Consolidated optional tasks (Done / Pending)

37
docs/03-deployment/PHASE_C_CW_AND_EDGE_POOLS_RUNBOOK.md
Normal file
@@ -0,0 +1,37 @@
# Phase C — cW* Tokens and Edge Pools Runbook

**Last Updated:** 2026-03-04
**Purpose:** Execute Phase C of REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE: deploy or bridge cW* tokens and create/fund PMM edge pools on public chains for full SBS and arbitrage.

**Prerequisites:** Phase A (hub liquidity on 138) and Phase B (CCIP bridges + LINK) should be done or in progress.

---

## C.1 Deploy or bridge cW* tokens per chain

Chains: 1, 56, 137, 10, 42161, 8453, 43114, 100, 25, 42220, 1111.
Tokens: cWUSDT, cWUSDC, cWAUSDT, cWEURC, cWEURT, cWUSDW (per pool-matrix).

**Steps:** (1) Use cross-chain-pmm-lps config/chains.json and pool-matrix.json. (2) Deploy CompliantWrappedToken (cW*) per chain or use a bridge; set addresses in deployment-status.json and smom-dbis-138/.env. (3) Ref: TOKEN_CONTRACT_DEPLOYMENTS_REMAINING §3, CW_DEPLOY_AND_WIRE_RUNBOOK.

---

## C.2 Create and fund PMM edge pools per chain

**Steps:** (1) From pool-matrix poolsFirst (e.g. cWUSDT/USDC), create DODO PMM or Uniswap pools per chain. (2) Add initial liquidity. (3) Record pool addresses in deployment-status.json chains[chainId].pmmPools. (4) Ensure token-aggregation/heatmap use deployment-status.

**Ref:** LIQUIDITY_POOLS_MASTER_MAP § Public-chain cW*, pool-matrix.json.

---

## C.3 (Optional) Stabilization bot and peg bands

Run the deviation watcher and peg-band config from cross-chain-pmm-lps when cW* and edge pools are live.

---

## Quick ref

- Pool matrix: cross-chain-pmm-lps/config/pool-matrix.json
- Deployment status: cross-chain-pmm-lps/config/deployment-status.json
- Recipe: cross-chain-pmm-lps/docs/06-deployment-recipe.md

50
docs/03-deployment/PHASE_D_OPTIONAL_CHECKLIST.md
Normal file
@@ -0,0 +1,50 @@
# Phase D — Optional Extended Coverage Checklist

**Last Updated:** 2026-03-04
**Purpose:** Checklist for Phase D of [REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE.md](REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE.md): XAU, vaults, ALL Mainnet, and trustless stack.

---

## D.1 XAU token + XAU-anchored pools (Chain 138)

| Step | Action | Ref |
|------|--------|-----|
| D.1.1 | Deploy XAU token on Chain 138 (or use oracle-backed representation). | [LIQUIDITY_POOLS_MASTER_MAP](../11-references/LIQUIDITY_POOLS_MASTER_MAP.md) §2, §5 |
| D.1.2 | Create cUSDT/XAU, cUSDC/XAU, cEURT/XAU PMM pools (public and/or private stabilization). | [VAULT_SYSTEM_MASTER_TECHNICAL_PLAN](../02-architecture/VAULT_SYSTEM_MASTER_TECHNICAL_PLAN.md) |
| D.1.3 | Register private stabilization pools in PrivatePoolRegistry; configure Stabilizer. | CONTRACT_DEPLOYMENT_RUNBOOK § Private stabilization pools |

**Env:** `XAU_ADDRESS_138`, `cEURT_ADDRESS_138`, `DODOPMM_INTEGRATION_ADDRESS`.

---

## D.2 Vault ac* / vdc* / sdc* for new base tokens

| Step | Action | Ref |
|------|--------|-----|
| D.2.1 | Deploy vault contracts for cEURC, cEURT, etc. (ac*, vdc*, sdc*). | [TOKEN_CONTRACT_DEPLOYMENTS_REMAINING](../11-references/TOKEN_CONTRACT_DEPLOYMENTS_REMAINING.md) §5 |
| D.2.2 | Wire vaults to reserve/backing and oracle feeds. | VAULT_SYSTEM_MASTER_TECHNICAL_PLAN |

---

## D.3 ALL Mainnet (651940)

| Step | Action | Ref |
|------|--------|-----|
| D.3.1 | ACADT/ACADC when Alltra adds CAD token. | TOKEN_CONTRACT_DEPLOYMENTS_REMAINING §2, §4 |
| D.3.2 | D-WIN W tokens on 138/651940 if desired. | Same |

---

## D.4 Mainnet trustless stack

| Step | Action | Ref |
|------|--------|-----|
| D.4.1 | Deploy LiquidityPoolETH, InboxETH, BondManager on mainnet for trustless bridge liquidity. | [OPTIONAL_DEPLOYMENTS_START_HERE](../07-ccip/OPTIONAL_DEPLOYMENTS_START_HERE.md) §C |

---

## References

- [REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE.md](REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE.md) Phase D
- [LIQUIDITY_POOLS_MASTER_MAP](../11-references/LIQUIDITY_POOLS_MASTER_MAP.md)
- [VAULT_SYSTEM_MASTER_TECHNICAL_PLAN](../02-architecture/VAULT_SYSTEM_MASTER_TECHNICAL_PLAN.md)

159
docs/03-deployment/PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN.md
Normal file
@@ -0,0 +1,159 @@
# PMM Full Mesh (Chain 138) and Single-Sided LPs (Public Networks) — Plan

**Purpose:** Define and run the full PMM pool mesh on Chain 138 and the single-sided LP deployment on public networks for aggregator and DEX routing.

---

## Part 1 — Chain 138: Full PMM mesh

### 1.1 Scope

- **c* vs c* mesh:** All pairwise pools between the 12 compliant tokens: cUSDT, cUSDC, cEURC, cEURT, cGBPC, cGBPT, cAUDC, cJPYC, cCHFC, cCADC, cXAUC, cXAUT.
  - Number of pairs: **66** (12 choose 2).
- **c* vs official (optional):** Each c* vs official USDT and vs official USDC on Chain 138 (if addresses are set).
  - Adds up to **24** pools (12×2) when both official tokens are configured.
- **Already created:** The three legacy pools (cUSDT/cUSDC, cUSDT/USDT, cUSDC/USDC) are created separately; the script skips any pair that already has a pool.
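The pair count can be sanity-checked with a small shell loop (token list copied from the scope above; this only enumerates pair names, it is not the mesh script itself):

```shell
# Enumerate all unordered c*-vs-c* pairs; 12 tokens -> 12*11/2 = 66 pairs
TOKENS=(cUSDT cUSDC cEURC cEURT cGBPC cGBPT cAUDC cJPYC cCHFC cCADC cXAUC cXAUT)
n=0
for ((i = 0; i < ${#TOKENS[@]}; i++)); do
  for ((j = i + 1; j < ${#TOKENS[@]}; j++)); do
    echo "${TOKENS[i]}/${TOKENS[j]}"
    n=$((n + 1))
  done
done
echo "total: $n pairs"   # 66
```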

### 1.2 Contracts and roles

| Contract | Address (Chain 138) | Role |
|----------|---------------------|------|
| DODOPMMIntegration | `0x79cdbaFBaA0FdF9F55D26F360F54cddE5c743F7D` | `createPool(base, quote, ...)`; `swapExactIn(pool, tokenIn, amountIn, minAmountOut)` for generic routing |
| DODOPMMProvider | `0x8EF6657D2a86c569F6ffc337EE6b4260Bd2e59d0` | `registerPool(tokenIn, tokenOut, pool)`; `executeSwap` uses `swapExactIn` for any registered pool |

- The deployer (or any account holding **POOL_MANAGER_ROLE** on both the integration and the provider) must run pool creation and registration.
- **Generic routing:** `DODOPMMIntegration.swapExactIn` allows any registered pool to be used for swaps; `DODOPMMProvider.executeSwap` routes through it when the pair is not one of the six legacy pairs.

### 1.3 How to create the full mesh

From repo root (or from `smom-dbis-138/`):

```bash
# Ensure .env has: PRIVATE_KEY, RPC_URL_138, DODO_PMM_INTEGRATION_ADDRESS, DODO_PMM_PROVIDER_ADDRESS

# Create all c* vs c* pools and register with provider (and optionally c* vs official USDT/USDC)
./scripts/create-pmm-full-mesh-chain138.sh

# Only c* vs c* (no official USDT/USDC pairs)
MESH_ONLY_C_STAR=1 ./scripts/create-pmm-full-mesh-chain138.sh

# Preview only (no transactions)
DRY_RUN=1 ./scripts/create-pmm-full-mesh-chain138.sh
```

- The script uses `DODOPMMIntegration.createPool(base, quote, lpFeeRate, initialPrice, k, isOpenTWAP)` with defaults: `lpFeeRate=3`, `initialPrice=1e18`, `k=0.5e18`, `isOpenTWAP=false`.
- After each pool is created, it calls `DODOPMMProvider.registerPool(base, quote, pool)` so the pool is used for quotes and execution.

### 1.4 Funding the mesh

- Pools are created empty. Add liquidity per pool via `DODOPMMIntegration.addLiquidity(pool, baseAmount, quoteAmount)`.
- See [PMM_POOLS_FUNDING_PLAN.md](PMM_POOLS_FUNDING_PLAN.md) and [ADD_LIQUIDITY_PMM_CHAIN138_RUNBOOK.md](ADD_LIQUIDITY_PMM_CHAIN138_RUNBOOK.md).
- For the full mesh, prioritize funding the most-used pairs (e.g. cUSDT/cUSDC, cUSDT/USDT, cUSDC/USDC, then other c* vs c* and c* vs official).

### 1.5 Allowlist sync with mesh (MCP/AI)

After running `create-pmm-full-mesh-chain138.sh`, update the MCP allowlist so the dedicated MCP/AI can read state for all mesh pools without manual entry:

- **Script:** `./scripts/generate-mcp-allowlist-from-chain138.sh` reads `DODOPMMIntegration.allPools(uint256)` and `poolConfigs(pool)` via RPC and outputs allowlist JSON (chain 138, profile `dodo_pmm_v2_like`).
- **Write to MCP config:** `./scripts/generate-mcp-allowlist-from-chain138.sh -o ai-mcp-pmm-controller/config/allowlist-138.json`
- **Requires:** `RPC_URL_138`, `DODO_PMM_INTEGRATION_ADDRESS` (or source `smom-dbis-138/.env`). See [MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md](MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md).

---

## Part 2 — Public networks: Single-sided LPs for aggregator and DEX routing

### 2.1 Goal

On **each public chain** (Ethereum, BSC, Polygon, Arbitrum, Base, Optimism, Gnosis, Avalanche, Cronos, Celo, Wemix), deploy **single-sided PMM (or equivalent) pools** of the form **cW* / HUB**, where:

- **cW*** = bridged compliant wrapped tokens (cWUSDT, cWUSDC, cWEURC, cWEURT, etc.).
- **HUB** = the chain’s main stable (e.g. USDC or USDT per chain).

These pools are used for:

- **Aggregator and DEX routing:** So 1inch, 0x, ParaSwap, and other DEX aggregators can route through cW* / HUB for swap and bridge flows.
- **Peg stabilization:** Single-sided liquidity on the cW* side; the other side is filled by market/arbitrage.

### 2.2 Topology and config

- **Pool topology:** One pool per **cW*** per chain against the chain’s **hub stable** (and optionally extra stables). See [cross-chain-pmm-lps/docs/02-pool-topology.md](../../cross-chain-pmm-lps/docs/02-pool-topology.md).
- **Matrix:** [cross-chain-pmm-lps/config/pool-matrix.json](../../cross-chain-pmm-lps/config/pool-matrix.json) defines per chain:
  - `hubStable` (USDC or USDT),
  - `poolsFirst`: cW* / HUB pools to deploy first,
  - `poolsOptional`: extra quote stables (e.g. USDT, DAI, BUSD) if needed for routing.

### 2.3 Chains and hub stables (from pool-matrix)

| ChainId | Network | Hub stable | Priority pools |
|---------|---------|------------|----------------|
| 1 | Ethereum Mainnet | USDC | cWUSDT/USDC, cWUSDC/USDC, cWEURC/USDC, cWEURT/USDC, cWUSDW/USDC, cWAUSDT/USDC |
| 56 | BSC | USDT | cWUSDT/USDT, cWUSDC/USDT, … |
| 137 | Polygon | USDC | cWUSDT/USDC, cWUSDC/USDC, … |
| 10 | Optimism | USDC | same pattern |
| 100 | Gnosis | USDC | same + optional mUSD |
| 25 | Cronos | USDT | cW* / USDT |
| 42161 | Arbitrum One | USDC | cW* / USDC |
| 8453 | Base | USDC | cW* / USDC |
| 43114 | Avalanche | USDC | cW* / USDC |
| 42220 | Celo | USDC | cW* / USDC |
| 1111 | Wemix | USDT | cW* / USDT |

### 2.4 Deployment steps (per chain)

1. **Deploy or confirm cW* tokens** on that chain (e.g. via DeployCWTokens or existing addresses).
2. **Resolve the HUB stable address** (USDC or USDT) on that chain from your config or chain list.
3. **Create the cW* / HUB pool** on the chain’s DEX:
   - If the chain uses a **DODO-style PMM**: deploy or use a DVM/factory and create a pool (base = cW*, quote = HUB) with a single-sided deposit on the cW* side.
   - If the chain uses **Uniswap V2/V3** or another AMM: create a pair/pool and add single-sided liquidity on the cW* side (or use a vault that supports single-sided deposits).
4. **Register the pool** in your indexer/aggregator config (token-aggregation, explorer, or external aggregator) so routes use the new pool.
5. **Optional:** Run the **bot** (deviation watcher, rebalancing) as in [cross-chain-pmm-lps/docs/06-deployment-recipe.md](../../cross-chain-pmm-lps/docs/06-deployment-recipe.md).
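The per-chain pool list in step 3 can be derived mechanically from the matrix. A minimal sketch (the hub stable and cW* list are hardcoded samples here; in practice both come from `pool-matrix.json`):

```shell
# Sample values for one chain (Polygon-style: hub stable USDC);
# the real hubStable and cW* list come from pool-matrix.json.
HUB=USDC
CW_TOKENS="cWUSDT cWUSDC cWEURC cWEURT"

POOLS=""
for t in $CW_TOKENS; do
  POOLS="$POOLS $t/$HUB"   # one cW*/HUB pool per wrapped token
done
echo "poolsFirst:$POOLS"
```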

### 2.5 Deployment status and script stub

- **Status:** Per-chain cW* addresses and PMM pool addresses are tracked in [cross-chain-pmm-lps/config/deployment-status.json](../../cross-chain-pmm-lps/config/deployment-status.json). Fill `cwTokens`, `anchorAddresses` (HUB), and `pmmPools` as you deploy.
- **Script stub:** Use [SINGLE_SIDED_LPS_PUBLIC_NETWORKS_RUNBOOK.md](SINGLE_SIDED_LPS_PUBLIC_NETWORKS_RUNBOOK.md) for a step-by-step runbook and a shell stub that outputs the list of pools to create per chain from `pool-matrix.json`.

### 2.6 Aggregator and DEX routing

- Once cW* / HUB pools exist and are indexed, aggregators (1inch, 0x, ParaSwap, etc.) can include them in routing when they support that chain.
- For **swap–bridge–swap** and **quote APIs**, point the destination-chain quote to the chain’s DEX that hosts the cW* / HUB pools so that routes use these pools for cW* ↔ HUB.

### 2.7 Uniswap pool indexing and maintenance

Where the plan uses **Uniswap V2/V3** on a chain (instead of or in addition to DODO PMM):

- **Indexing:** Set `CHAIN_*_UNISWAP_V2_FACTORY` / `CHAIN_*_UNISWAP_V3_FACTORY` (and optional `_ROUTER`, `_START_BLOCK`) in the token-aggregation service env and run the indexer so Uniswap pools appear in the API (`/api/v1/tokens`, `/api/v1/quote`, `/api/v1/tokens/:address/pools`).
- **Maintenance:** Designate who (or which AI/MCP) is responsible for adding liquidity, rebalancing, or pausing Uniswap pools on each chain. The dedicated MCP/AI (see §2.9) can use the same allowlist or indexer-driven list for Uniswap pools; define **inputs** (API quote, pool state from indexer or RPC) and **allowed actions** (quote only vs. submit tx) per policy. See [MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md](MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md) §2.1.
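A hypothetical env fragment for one chain might look like the following. The variable names expand the `CHAIN_*_UNISWAP_*` pattern above for chain 137 and are assumptions (check the token-aggregation service for the exact names); the addresses and block number are placeholders, not real deployments:

```shell
# token-aggregation indexer env for Polygon (137) — placeholder values
export CHAIN_137_UNISWAP_V2_FACTORY=0x0000000000000000000000000000000000000000
export CHAIN_137_UNISWAP_V3_FACTORY=0x0000000000000000000000000000000000000000
export CHAIN_137_UNISWAP_V3_ROUTER=0x0000000000000000000000000000000000000000
export CHAIN_137_UNISWAP_START_BLOCK=0
```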

### 2.8 Bot–MCP–API integration

For the **deviation bot** (cross-chain-pmm-lps v1/v2) that maintains the cW* peg on public chains:

- **Pool state source:** The bot can consume pool state from (1) the **MCP** (e.g. `dodo.get_pool_state` for each allowlisted pool) or (2) the **token-aggregation API** (e.g. `GET /api/v1/tokens/:address/pools`, reserve/price from indexer). Use one source of truth per chain so the bot and the MCP/AI stay aligned.
- **Actions:** Bot actions (buy/sell to compress δ, inventory adjust) can be executed by the same AI that uses the MCP or by a **separate executor** that receives signals from the MCP/AI. Document which component is authorized to submit txs (and under which circuit-break/cooldown rules).
- **Circuit-break and cooldown:** Expose "pool in CIRCUIT_BREAK" or "cooldown until block X" in the MCP or API (e.g. from peg-bands config or bot state) so the AI can avoid sending trades when the pool is in cooldown or circuit-break. See [cross-chain-pmm-lps/spec/bot-state-machine.md](../../cross-chain-pmm-lps/spec/bot-state-machine.md).

### 2.9 Dedicated MCP/AI for Dodoex and Uniswap pool management

A single **dedicated MCP/AI** is the designated reader (and optionally executor) for managing and maintaining all Dodoex and Uniswap liquidity pools:

- **Dodoex (Chain 138 + public cW*):** MCP tools for reading state (`get_pool_state`) and quoting add/remove liquidity; allowlist kept in sync with the full mesh (script §1.5) and single-sided deployments (script `generate-mcp-allowlist-from-deployment-status.sh`). One MCP instance per chain, or a multi-chain allowlist with `chainId` per pool.
- **Uniswap (per chain where used):** Allowlist or indexer-driven list of Uniswap V2/V3 pools; MCP or API to read pool state (profile `uniswap_v2_pair` / `uniswap_v3_pool` when implemented) and optionally quote. The same MCP/AI can manage both DODO and Uniswap pools for that chain.
- **Maintenance tasks:** Rebalancing, add/remove liquidity, and responding to deviation alerts within policy (slippage, size, circuit break). See [MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md](MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md) and [AI_AGENTS_57XX_MCP_ADDENDUM.md](../02-architecture/AI_AGENTS_57XX_MCP_ADDENDUM.md).

---

## References

| Document | Purpose |
|----------|---------|
| [PMM_POOLS_FUNDING_PLAN.md](PMM_POOLS_FUNDING_PLAN.md) | Funding Chain 138 PMM pools (amounts, cast commands) |
| [ADD_LIQUIDITY_PMM_CHAIN138_RUNBOOK.md](ADD_LIQUIDITY_PMM_CHAIN138_RUNBOOK.md) | Add liquidity to DODO PMM on Chain 138 |
| [SINGLE_SIDED_LPS_PUBLIC_NETWORKS_RUNBOOK.md](SINGLE_SIDED_LPS_PUBLIC_NETWORKS_RUNBOOK.md) | Single-sided LPs on public chains (runbook + script stub) |
| [MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md](MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md) | MCP and AI upgrades for Dodoex and Uniswap pool management |
| [DEFI_AGGREGATOR_DEX_ROUTING_FLOWS_DIAGRAM.md](DEFI_AGGREGATOR_DEX_ROUTING_FLOWS_DIAGRAM.md) | Mermaid: all DeFi aggregator and DEX routing flows for swaps |
| [LIQUIDITY_POOLS_MASTER_MAP.md](../11-references/LIQUIDITY_POOLS_MASTER_MAP.md) | Pool addresses and status |
| [cross-chain-pmm-lps/README.md](../../cross-chain-pmm-lps/README.md) | cW* single-sided PMM mesh design |
| [cross-chain-pmm-lps/docs/02-pool-topology.md](../../cross-chain-pmm-lps/docs/02-pool-topology.md) | Pool topology (cW* / HUB) |
| [cross-chain-pmm-lps/docs/06-deployment-recipe.md](../../cross-chain-pmm-lps/docs/06-deployment-recipe.md) | Deployment recipe and bot |
| [PMM_DEX_ROUTING_STATUS.md](../11-references/PMM_DEX_ROUTING_STATUS.md) | PMM/DEX routing status |

154
docs/03-deployment/PMM_POOLS_FUNDING_PLAN.md
Normal file
@@ -0,0 +1,154 @@
# PMM Pools Funding Plan - Chain 138

**Purpose:** Step-by-step plan to fund the three DODO PMM liquidity pools on Chain 138.
**Deployer:** `0x4A666F96fC8764181194447A7dFdb7d471b301C8`
**Integration:** `DODOPMMIntegration` at `0x79cdbaFBaA0FdF9F55D26F360F54cddE5c743F7D`

---

## 1. The three pools

| Pool | Base token | Quote token | Pool address | Fund when |
|------|------------|-------------|--------------|-----------|
| **1. cUSDT/cUSDC** | cUSDT | cUSDC | `0x9fcB06Aa1FD5215DC0E91Fd098aeff4B62fEa5C8` | Deployer has cUSDT + cUSDC (mintable) |
| **2. cUSDT/USDT** | cUSDT | USDT (official) | `0xa3Ee6091696B28e5497b6F491fA1e99047250c59` | Deployer has cUSDT + official USDT |
| **3. cUSDC/USDC** | cUSDC | USDC (official) | `0x90bd9Bf18Daa26Af3e814ea224032d015db58Ea5` | Deployer has cUSDC + official USDC |

- **Pool 1** uses only c* tokens; you can mint both on Chain 138 and fund fully.
- **Pools 2 and 3** need "official" USDT/USDC on 138 (set in DODOPMMIntegration at deploy time). If those are deployer-owned mocks, mint them too; otherwise fund only from existing balance.

---

## 2. Token addresses (Chain 138)

| Token | Address | Mintable by deployer? |
|-------|---------|------------------------|
| cUSDT | `0x93E66202A11B1772E55407B32B44e5Cd8eda7f22` | Yes (owner) |
| cUSDC | `0xf22258f57794CC8E06237084b353Ab30fFfa640b` | Yes (owner) |
| Official USDT | From integration: `cast call $INT "officialUSDT()(address)"` | Depends (check owner) |
| Official USDC | From integration: `cast call $INT "officialUSDC()(address)"` | Depends (check owner) |

---

## 3. Current state (as of 2026-03-02)

- **Pool 1 (cUSDT/cUSDC):** Already funded with 500,000 cUSDT and 500,000 cUSDC (single addLiquidity tx).
- **Pools 2 and 3:** Not funded yet (require deployer balance of official USDT/USDC on 138).
- **Deployer c* supply:** 1M+ of each c* minted (including cUSDT, cUSDC) via mint-all-c-star-138.sh and earlier mints.

---

## 4. Funding plan options

### Plan A - Fund only Pool 1 (cUSDT/cUSDC) - recommended first

Use only cUSDT and cUSDC; no official USDT/USDC needed.

| Step | Action | Command / notes |
|------|--------|------------------|
| 1 | Ensure deployer has enough cUSDT and cUSDC | Mint if needed: `./scripts/mint-for-liquidity.sh` or `./scripts/mint-all-c-star-138.sh [amount]` |
| 2 | Decide amount per side (base units, 6 decimals) | e.g. 1M each = 1000000000000 (1e12) |
| 3 | Approve integration to spend cUSDT and cUSDC | See section 5 below (cast) or run the add-liquidity script |
| 4 | Add liquidity to Pool 1 | `addLiquidity(POOL_CUSDTCUSDC, baseAmount, quoteAmount)` via cast or Forge script |

### Plan B - Fund all three pools

Requires the deployer to hold official USDT and USDC on Chain 138 (in addition to cUSDT/cUSDC).

| Step | Action | Command / notes |
|------|--------|------------------|
| 1 | Check deployer balances | `./scripts/deployment/check-deployer-balance-chain138-and-funding-plan.sh` |
| 2 | Mint cUSDT/cUSDC (and official USDT/USDC if deployer is owner) | `./scripts/mint-for-liquidity.sh`; mint official tokens if the contracts allow |
| 3 | Set .env amounts | `ADD_LIQUIDITY_BASE_AMOUNT`, `ADD_LIQUIDITY_QUOTE_AMOUNT`; optionally per-pool overrides |
| 4 | Add liquidity to all three pools | Forge script (if it compiles) or three separate cast `addLiquidity` calls |

### Plan C - "Half of balance" rule (from existing doc)

Use 50% of deployer cUSDT and cUSDC for liquidity; keep the rest for gas/other use.

1. Run: `RPC_URL_138=<rpc> ./scripts/deployment/check-deployer-balance-chain138-and-funding-plan.sh`
2. Copy the printed `ADD_LIQUIDITY_BASE_AMOUNT` and `ADD_LIQUIDITY_QUOTE_AMOUNT` (half of current balances) into smom-dbis-138/.env
3. Add liquidity (Pool 1 only if you do not have official USDT/USDC) per section 5.

---

## 5. Commands to add liquidity

**Prereqs:** smom-dbis-138/.env with `PRIVATE_KEY`, `RPC_URL_138`. The deployer must hold at least the amounts you add.

### Option 1 - Cast (reliable; use if the Forge script fails)

From repo root, with smom-dbis-138/.env sourced:

```bash
cd smom-dbis-138 && source .env

INT=0x79cdbaFBaA0FdF9F55D26F360F54cddE5c743F7D
POOL1=0x9fcB06Aa1FD5215DC0E91Fd098aeff4B62fEa5C8
CUSDT=0x93E66202A11B1772E55407B32B44e5Cd8eda7f22
CUSDC=0xf22258f57794CC8E06237084b353Ab30fFfa640b
RPC="$RPC_URL_138"
MAX=0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff

# Approve (once per token)
cast send "$CUSDT" "approve(address,uint256)" "$INT" "$MAX" --rpc-url "$RPC" --private-key "$PRIVATE_KEY" --legacy --gas-limit 100000
cast send "$CUSDC" "approve(address,uint256)" "$INT" "$MAX" --rpc-url "$RPC" --private-key "$PRIVATE_KEY" --legacy --gas-limit 100000

# Add liquidity to Pool 1 (amounts in base units, 6 decimals; e.g. 1M = 1000000000000)
BASE_AMOUNT=1000000000000
QUOTE_AMOUNT=1000000000000
cast send "$INT" "addLiquidity(address,uint256,uint256)" "$POOL1" "$BASE_AMOUNT" "$QUOTE_AMOUNT" --rpc-url "$RPC" --private-key "$PRIVATE_KEY" --legacy --gas-limit 500000
```

### Option 2 - Forge script (if it compiles)

```bash
cd smom-dbis-138 && source .env
export POOL_CUSDTCUSDC=0x9fcB06Aa1FD5215DC0E91Fd098aeff4B62fEa5C8
export ADD_LIQUIDITY_BASE_AMOUNT=1000000000000
export ADD_LIQUIDITY_QUOTE_AMOUNT=1000000000000

forge script script/dex/AddLiquidityPMMPoolsChain138.s.sol:AddLiquidityPMMPoolsChain138 \
  --rpc-url "$RPC_URL_138" --broadcast --private-key "$PRIVATE_KEY" --with-gas-price 1000000000 --legacy
```

### Option 3 - Mint and add in one go (Pool 1 only)

```bash
./scripts/mint-for-liquidity.sh --add-liquidity
```

Uses half of the minted amounts for Pool 1; requires `DODO_PMM_INTEGRATION` and pool addresses set in smom-dbis-138/.env.

---

## 6. Suggested amounts (Pool 1)

| Goal | Base (cUSDT) | Quote (cUSDC) | Base units each (6 decimals) |
|------|----------------|----------------|------------------------------|
| Light | 100,000 | 100,000 | 100000000000 |
| Medium | 500,000 | 500,000 | 500000000000 |
| Heavy | 1,000,000 | 1,000,000 | 1000000000000 |
| Already added | 500,000 | 500,000 | (done) |

You can run the add-liquidity step multiple times to add more (e.g. another 500k/500k for Pool 1).
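The base-unit column is just the human amount times 10^6, since the c* tokens use 6 decimals. A tiny helper for checking your own amounts:

```shell
# Convert a human-readable token amount to 6-decimal base units
human_to_base() { echo $(( $1 * 1000000 )); }

human_to_base 100000     # Light:  100000000000
human_to_base 500000     # Medium: 500000000000
human_to_base 1000000    # Heavy:  1000000000000
```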

---

## 7. Checklist summary

- [ ] 1. Run check-deployer-balance-chain138-and-funding-plan.sh and note the half-balance amounts.
- [ ] 2. Mint cUSDT/cUSDC if needed: `./scripts/mint-for-liquidity.sh` or `./scripts/mint-all-c-star-138.sh`
- [ ] 3. (Optional) If funding Pools 2 and 3: obtain or mint official USDT/USDC on 138; approve the integration.
- [ ] 4. Approve cUSDT and cUSDC to DODOPMMIntegration (see section 5, Option 1).
- [ ] 5. Add liquidity to Pool 1 (and optionally Pools 2 and 3) via cast or the Forge script.
- [ ] 6. Verify on the explorer: pool balances or LP tokens for the deployer.
|
||||
|
||||
---
|
||||
|
||||
## 8. References
|
||||
|
||||
- ADD_LIQUIDITY_PMM_CHAIN138_RUNBOOK.md - Add-liquidity runbook
|
||||
- docs/11-references/DEPLOYER_WALLET_FUNDING_PLAN_PMM_POOLS.md - Deployer balances and 50% rule
|
||||
- docs/11-references/LIQUIDITY_POOLS_MASTER_MAP.md - Pool addresses and status
|
||||
- docs/11-references/TOKENS_AND_NETWORKS_MINTABLE_TO_DEPLOYER.md - How to mint c* on 138
|
||||
@@ -0,0 +1,146 @@
# Remaining Deployments for Full Network Coverage

**Last Updated:** 2026-03-04
**Purpose:** Ordered list of remaining deployments to achieve **maximum effective execution across all networks** (13-chain hub model: Chain 138 + 12 edge/alt). Use after [REQUIRED_FIXES_GAPS_AND_DEPLOYMENTS_LIST](../00-meta/REQUIRED_FIXES_GAPS_AND_DEPLOYMENTS_LIST.md) and [DEPLOYMENT_ORDER_OF_OPERATIONS](DEPLOYMENT_ORDER_OF_OPERATIONS.md).

**Routing context:** [routing-matrix-13x13.json](../../smom-dbis-138/real-robinhood/data/routing-matrix-13x13.json) — 138↔Celo (42220) **B/SBS** (CCIP bridges deployed 2026-03-04); 138↔Wemix (1111) **TBD** (deployer needs 0.4 WEMIX). Full coverage = all 13 chains with bridge + liquidity where designed.

---

## Phase execution status (2026-03-04)

| Phase | Step | Status | Notes |
|-------|------|--------|-------|
| A | A.1 Mint cUSDT/cUSDC (138) | ⚠️ Blocked | Mint script now uses `GAS_PRICE_138`; at 500 gwei the first tx is accepted, but confirmation times out while block production is stalled. When blocks advance, run `mint-for-liquidity.sh` (use `GAS_PRICE_138=500000000000` if “Replacement transaction underpriced”). |
| A | A.2 Add liquidity PMM (138) | ⏳ Pending | After A.1 succeeds; run `mint-for-liquidity.sh --add-liquidity` or AddLiquidityPMMPoolsChain138. |
| B | B.1 Celo CCIP bridges | ✅ Done | Deployed; 0xD3AD6831aacB5386B8A25BB8D8176a6C8a026f04 (WETH9), 0xa4B9DD039565AeD9641D45b57061f99d9cA6Df08 (WETH10); .env updated; complete-config Celo→138 OK. |
| B | B.2 Wemix CCIP bridges | ⏳ Blocked | Deployer has 0 WEMIX; fund with 0.4 WEMIX, then run `deploy-bridges-config-ready-chains.sh wemix`. |
| B | **Gnosis CCIP bridges** | ✅ Done (2026-03-04) | Deployed: WETH9 `0x4ab39b5BaB7b463435209A9039bd40Cf241F5a82`, WETH10 `0xC15ACdBAC59B3C7Cb4Ea4B3D58334A4b143B4b44`; .env updated. |
| B | B.3 Fund CCIP with LINK | ⏳ Ready | Run `fund-ccip-bridges-with-link.sh` (dry-run done). |
| C | C.1–C.2 cW* + edge pools | 📋 Runbook | [PHASE_C_CW_AND_EDGE_POOLS_RUNBOOK.md](PHASE_C_CW_AND_EDGE_POOLS_RUNBOOK.md). |
| D | D.1–D.4 Optional XAU/vaults/trustless | 📋 Checklist | [PHASE_D_OPTIONAL_CHECKLIST.md](PHASE_D_OPTIONAL_CHECKLIST.md). |

**Latest run (same session):** A.1 mint retry timed out again (Chain 138 RPC). complete-config: Steps A/B still fail (138 tx timeout, or destination already set). Gnosis bridges deployed ✅. fund-ccip failed (Chain 138 “Invalid params”; other chains: insufficient LINK or gas). Cronos deploy skipped (set CRONOS_RPC and CCIP_ROUTER_CRONOS in .env).

---

## Status to continue (run these before Phase A mint/deploy)

| Item | Status | Action |
|------|--------|--------|
| **Core RPC 2101** | ✅ Healthy (container, besu-rpc, port 8545, chain 138, DB writable) | None. Use `RPC_URL_138=http://192.168.11.211:8545`. |
| **Tx pool** | May repopulate after clear | Run `./scripts/clear-all-transaction-pools.sh`; if mint fails with “Replacement transaction underpriced”, use `GAS_PRICE_138=500000000000` (500 gwei) when running mint. |
| **Validators** | 1000–1004 active (1004 restarted 2026-03-04) | If 1004 fails again: `ssh root@192.168.11.10 'pct exec 1004 -- systemctl restart besu-validator'`. |
| **Block production** | Stalled (blocks not advancing) | **Blocker for confirmations.** Run `./scripts/monitoring/monitor-blockchain-health.sh`; when blocks advance, mint/add-liquidity txs will confirm. |

**Next steps in order:** (1) Ensure blocks are advancing (all 5 validators active, wait for sync). (2) `cd smom-dbis-138 && ./scripts/mint-for-liquidity.sh` (optionally `GAS_PRICE_138=500000000000` if the pool has a stuck tx). (3) After mint confirms, optionally `--add-liquidity`. See [CORE_RPC_2101_2102_TXPOOL_ADMIN_STATUS.md](../04-configuration/CORE_RPC_2101_2102_TXPOOL_ADMIN_STATUS.md) §7–8.

---

## Current state (verified)

| Area | Status |
|------|--------|
| Chain 138 core + PMM | 38/38 contracts; DODOPMMIntegration + 3 pools (cUSDT/cUSDC, cUSDT/USDT, cUSDC/USDC) created; DODOPMMProvider deployed. |
| Chain 138 liquidity | **0** in pools — deployer WETH/cUSDT/cUSDC = 0; add liquidity blocked until mint/fund. |
| CCIP 138 → 1, 56, 137, 10, 42161, 43114, 8453, 100, 25, **42220 (Celo)** | Configured (B/SBS). Celo CCIP bridges deployed 2026-03-04; Gnosis, Cronos config-ready; Wemix (1111) **TBD** (need 0.4 WEMIX). |
| Alltra 138 ↔ 651940 | ALT path live. |
| cW* on public chains | Addresses in .env / design; **deployment-status.json empty** — no cW* pool addresses. |
| LINK for CCIP | Fund bridges per lane so cross-chain messages execute. |

---

## Phase A — Hub liquidity (Chain 138)

**Goal:** Enable swap execution on Chain 138 (cUSDT↔cUSDC and future pairs).

| Step | Action | Ref |
|------|--------|-----|
| A.1 | **Mint cUSDT and cUSDC to deployer** (owner mint). | [TOKENS_AND_NETWORKS_MINTABLE_TO_DEPLOYER](../11-references/TOKENS_AND_NETWORKS_MINTABLE_TO_DEPLOYER.md) §1. Use `./scripts/mint-for-liquidity.sh` in smom-dbis-138 (or `mint-to-750m.sh`). |
| A.2 | **Add liquidity to PMM pools** (cUSDT/cUSDC first; then cUSDT/USDT, cUSDC/USDC if official tokens exist on 138). Set `ADD_LIQUIDITY_*` in smom-dbis-138/.env; run AddLiquidityPMMPoolsChain138 or `mint-for-liquidity.sh --add-liquidity`. | [ADD_LIQUIDITY_PMM_CHAIN138_RUNBOOK](ADD_LIQUIDITY_PMM_CHAIN138_RUNBOOK.md) |
| A.3 | **(Optional)** Mint other c* (cEURC, cEURT, cGBP*, etc.) for future pools / bridge; extend PMM mesh if desired. | [TOKEN_CONTRACT_DEPLOYMENTS_REMAINING](../11-references/TOKEN_CONTRACT_DEPLOYMENTS_REMAINING.md) §1; DeployCompliantFiatTokens already run (10 tokens). |

**Pre-checks:** `./scripts/deployment/preflight-chain138-deploy.sh`; `RPC_URL_138=http://192.168.11.211:8545 ./scripts/deployment/check-deployer-balance-chain138-and-funding-plan.sh`.

---

## Phase B — Bridge coverage (all 13 chains)

**Goal:** Turn **TBD** into **B/SBS** for Celo and Wemix; ensure LINK-funded lanes so routes execute.

| Step | Action | Ref |
|------|--------|-----|
| B.1 | **Celo (42220):** Deploy CCIP WETH9/WETH10 bridges on Celo; add 138↔Celo destinations on both sides; fund bridges with LINK. | [CONFIG_READY_CHAINS_COMPLETION_RUNBOOK](../07-ccip/CONFIG_READY_CHAINS_COMPLETION_RUNBOOK.md). Preflight: `./scripts/deployment/preflight-config-ready-chains.sh celo`. Deploy: `./scripts/deployment/deploy-bridges-config-ready-chains.sh celo`; then `complete-config-ready-chains.sh`. |
| B.2 | **Wemix (1111):** Same as B.1 for Wemix. Confirm WETH/USDT/USDC addresses on scan.wemix.com; set in token-mapping and .env. | Same runbook; `deploy-bridges-config-ready-chains.sh wemix`. [REQUIRED_FIXES_GAPS_AND_DEPLOYMENTS_LIST](../00-meta/REQUIRED_FIXES_GAPS_AND_DEPLOYMENTS_LIST.md) §2.2 (Wemix tokens). |
| B.3 | **Fund all CCIP bridges with LINK** (138 and each destination). Run `./scripts/deployment/fund-ccip-bridges-with-link.sh` (dry-run first). | [CCIP_BRIDGE_DESTINATIONS_AND_LINK_FUNDING](../../smom-dbis-138/docs/deployment/CCIP_BRIDGE_DESTINATIONS_AND_LINK_FUNDING.md) |
| B.4 | **(Optional)** LINK support on Mainnet relay for LINK transfers. | [RELAY_BRIDGE_ADD_LINK_SUPPORT_RUNBOOK](../07-ccip/RELAY_BRIDGE_ADD_LINK_SUPPORT_RUNBOOK.md) |

**Outcome:** 138↔1, 56, 137, 10, 42161, 43114, 8453, 100, 25, **42220**, **1111** all B/SBS; 138↔651940 remains ALT. Routing matrix TBD cells removed.

---

## Phase C — Public-chain cW* and edge pools

**Goal:** Enable swap-bridge-swap and arbitrage on **public chains** (cW* tokens + DODO/Uniswap edge pools per pool-matrix).

| Step | Action | Ref |
|------|--------|-----|
| C.1 | **Deploy or bridge cW* tokens** per chain (1, 56, 137, 10, 42161, 8453, 43114, 100, 25, 42220, 1111). Use cross-chain-pmm-lps token-map and deployment recipe; record addresses in deployment-status.json and .env. | [PHASE_C_CW_AND_EDGE_POOLS_RUNBOOK](PHASE_C_CW_AND_EDGE_POOLS_RUNBOOK.md), [TOKEN_CONTRACT_DEPLOYMENTS_REMAINING](../11-references/TOKEN_CONTRACT_DEPLOYMENTS_REMAINING.md) §3 |
| C.2 | **Create and fund PMM edge pools** (cW*/USDC, cW*/USDT, etc.) per [pool-matrix.json](../../cross-chain-pmm-lps/config/pool-matrix.json). Populate deployment-status.json with pool addresses. | [PHASE_C_CW_AND_EDGE_POOLS_RUNBOOK](PHASE_C_CW_AND_EDGE_POOLS_RUNBOOK.md), [LIQUIDITY_POOLS_MASTER_MAP](../11-references/LIQUIDITY_POOLS_MASTER_MAP.md) § Public-chain cW* |
| C.3 | **Stabilization bot / peg bands** (optional): Run bot and peg-band config from cross-chain-pmm-lps for cW* peg maintenance. | [cross-chain-pmm-lps/README.md](../../cross-chain-pmm-lps/README.md) |

**Outcome:** Each public chain has cW* and edge pools so SBS and arbitrage can execute on both 138 and edge.

---

## Phase D — Optional (extended coverage)

| Step | Action | Ref |
|------|--------|-----|
| D.1 | **XAU token + XAU-anchored pools (138):** Deploy XAU; create cUSDT/XAU, cUSDC/XAU, cEURT/XAU PMM pools and private stabilization pools. | [PHASE_D_OPTIONAL_CHECKLIST](PHASE_D_OPTIONAL_CHECKLIST.md), [LIQUIDITY_POOLS_MASTER_MAP](../11-references/LIQUIDITY_POOLS_MASTER_MAP.md) §2, §5 |
| D.2 | **Vault ac* / vdc* / sdc*** for new base tokens (cEURC, cEURT, etc.). | [PHASE_D_OPTIONAL_CHECKLIST](PHASE_D_OPTIONAL_CHECKLIST.md), [TOKEN_CONTRACT_DEPLOYMENTS_REMAINING](../11-references/TOKEN_CONTRACT_DEPLOYMENTS_REMAINING.md) §5 |
| D.3 | **ALL Mainnet (651940):** ACADT/ACADC when Alltra adds CAD; D-WIN W on 138/651940 if desired. | [PHASE_D_OPTIONAL_CHECKLIST](PHASE_D_OPTIONAL_CHECKLIST.md) |
| D.4 | **Mainnet trustless stack:** LiquidityPoolETH, InboxETH, BondManager on mainnet for trustless bridge liquidity. | [PHASE_D_OPTIONAL_CHECKLIST](PHASE_D_OPTIONAL_CHECKLIST.md), [OPTIONAL_DEPLOYMENTS_START_HERE](../07-ccip/OPTIONAL_DEPLOYMENTS_START_HERE.md) §C |

---

## Execution order (recommended)

1. **A.1 → A.2** (mint + add liquidity on 138) so hub has executable liquidity.
2. **B.1 → B.2 → B.3** (Celo + Wemix CCIP + LINK fund) so all 13 chains are routable and bridges can execute.
3. **C.1 → C.2** (cW* + edge pools) so public chains have full SBS and arbitrage.
4. **D.** as needed for XAU, vaults, and optional chains/tokens.

---

## Quick command reference

| Task | Command / script |
|------|------------------|
| Preflight (138) | `./scripts/deployment/preflight-chain138-deploy.sh` |
| Mint cUSDT/cUSDC (138) | `cd smom-dbis-138 && ./scripts/mint-for-liquidity.sh` |
| Mint + add liquidity | `./scripts/mint-for-liquidity.sh --add-liquidity` |
| Preflight (config-ready chains) | `cd smom-dbis-138 && ./scripts/deployment/preflight-config-ready-chains.sh [celo\|wemix\|all]` |
| Deploy bridges (Celo/Wemix) | `./scripts/deployment/deploy-bridges-config-ready-chains.sh [celo\|wemix\|all]` |
| Complete destinations | `./scripts/deployment/complete-config-ready-chains.sh` |
| Fund CCIP with LINK | `./scripts/deployment/fund-ccip-bridges-with-link.sh` |
| Add liquidity runbook | [ADD_LIQUIDITY_PMM_CHAIN138_RUNBOOK](ADD_LIQUIDITY_PMM_CHAIN138_RUNBOOK.md) |

---

## Phase runbooks

- **Phase C (cW* + edge pools):** [PHASE_C_CW_AND_EDGE_POOLS_RUNBOOK.md](PHASE_C_CW_AND_EDGE_POOLS_RUNBOOK.md)
- **Phase D (optional XAU, vaults, trustless):** [PHASE_D_OPTIONAL_CHECKLIST.md](PHASE_D_OPTIONAL_CHECKLIST.md)

---

## References

- [REQUIRED_FIXES_GAPS_AND_DEPLOYMENTS_LIST](../00-meta/REQUIRED_FIXES_GAPS_AND_DEPLOYMENTS_LIST.md)
- [DEPLOYMENT_ORDER_OF_OPERATIONS](DEPLOYMENT_ORDER_OF_OPERATIONS.md)
- [TOKENS_AND_NETWORKS_MINTABLE_TO_DEPLOYER](../11-references/TOKENS_AND_NETWORKS_MINTABLE_TO_DEPLOYER.md)
- [DEPLOYED_TOKENS_BRIDGES_LPS_AND_ROUTING_STATUS](../11-references/DEPLOYED_TOKENS_BRIDGES_LPS_AND_ROUTING_STATUS.md)
- [CONFIG_READY_CHAINS_COMPLETION_RUNBOOK](../07-ccip/CONFIG_READY_CHAINS_COMPLETION_RUNBOOK.md)
- [LIQUIDITY_POOLS_MASTER_MAP](../11-references/LIQUIDITY_POOLS_MASTER_MAP.md)
- [TODOS_CONSOLIDATED](../00-meta/TODOS_CONSOLIDATED.md)
@@ -0,0 +1,86 @@
# Single-Sided LPs on Public Networks — Runbook (Aggregator and DEX Routing)

**Purpose:** Deploy **cW* / HUB** single-sided PMM (or AMM) pools on each public chain so aggregators and DEX routing can use them.

---

## 1. What to deploy

On each public chain:

- **Pool type:** One pool per **cW*** token against the chain’s **hub stable** (USDC or USDT).
- **Single-sided:** Liquidity is provided on the **cW*** side; the other side is filled by market/arbitrage (and optionally by a bot).
- **Use case:** Aggregator and DEX routing (1inch, 0x, ParaSwap, swap–bridge–swap, etc.) so that cW* ↔ USDC/USDT is routable on each chain.

---

## 2. Per-chain config (pool-matrix)

The source of truth is **cross-chain-pmm-lps/config/pool-matrix.json**:

- **chains[chainId].hubStable:** USDC or USDT for that chain.
- **chains[chainId].poolsFirst:** List of pools to deploy first (e.g. `cWUSDT/USDC`, `cWUSDC/USDC`, …).
- **chains[chainId].poolsOptional:** Optional extra quote stables (e.g. `cWUSDT/USDT`, `cWUSDT/DAI`).

---

## 3. Prerequisites per chain

1. **cW* tokens deployed** on that chain (addresses in `.env` or in `cross-chain-pmm-lps/config/deployment-status.json`).
2. **Hub stable address** (USDC or USDT) on that chain (from chain list or explorer).
3. **DEX/factory** on that chain:
   - DODO-style: DVM or PMM factory.
   - Uniswap V2: factory + router.
   - Uniswap V3: factory + NonfungiblePositionManager (or equivalent).
4. **RPC URL** and **deployer key** (or LP provider key) for that chain.

---

## 4. Steps per chain (checklist)

For each chain (e.g. 1, 56, 137, 10, 100, 25, 42161, 8453, 43114, 42220, 1111):

- [ ] 1. Set `CW*_<CHAIN>` and `*_RPC` (e.g. `CWUSDT_MAINNET`, `ETHEREUM_MAINNET_RPC`) in `.env`.
- [ ] 2. Resolve hub stable address (e.g. USDC on Ethereum: `0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48`).
- [ ] 3. Choose DEX: DODO PMM vs Uniswap V2 vs Uniswap V3 (per chain).
- [ ] 4. For each pair in `poolsFirst` (e.g. cWUSDT/USDC):
  - [ ] Create pool (factory.createPair or DVM.createDVM or V3 factory.createPool).
  - [ ] Add single-sided liquidity on the cW* side (or both sides with a 1:1 target).
- [ ] 5. Record pool address in `cross-chain-pmm-lps/config/deployment-status.json` under `chains[chainId].pmmPools` (each entry: `base`, `quote`, `poolAddress` or `base_token`, `quote_token`, `pool_address`).
- [ ] 5a. **Update MCP allowlist for this chain:** Run `./scripts/generate-mcp-allowlist-from-deployment-status.sh <chain_id> -o ai-mcp-pmm-controller/config/allowlist-<chain_id>.json` (or merge into a multi-chain allowlist) so the dedicated MCP/AI can read and manage these pools. See [MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md](MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md).
- [ ] 6. Register pool in the token-aggregation API / indexer so quote and routing use it (set `CHAIN_*_DODO_PMM_INTEGRATION` or the Uniswap factory env for that chain and run the indexer).
- [ ] 6a. **Uniswap (if used on this chain):** List Uniswap V2/V3 pools (from factory events or the indexer) and add them to the same MCP/API visibility: set `CHAIN_*_UNISWAP_V2_FACTORY` / `CHAIN_*_UNISWAP_V3_FACTORY` and run the indexer; add Uniswap pools to the MCP allowlist with profile `uniswap_v2_pair` or `uniswap_v3_pool` when that profile is available. See [MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md](MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md) §2.3.
- [ ] 7. (Optional) Enable the bot for that chain (deviation watcher, rebalance). The bot can consume pool state from MCP or the token-aggregation API; see plan upgrades § Bot–MCP–API integration.

---

## 5. Script stub: list pools to create

From repo root, run:

```bash
./scripts/list-single-sided-pools-by-chain.sh
```

This script reads **cross-chain-pmm-lps/config/pool-matrix.json** and prints, per chain, the list of **cW* / HUB** pools to create (and optional pools). Use the output to drive deployment (manual or via a deploy script that calls the appropriate factory on each chain).

---

## 6. Deployment status and MCP allowlist source

- **Config:** [cross-chain-pmm-lps/config/deployment-status.json](../../cross-chain-pmm-lps/config/deployment-status.json)
  - Fill for each chain:
    - **cwTokens:** e.g. `{ "cWUSDT": "0x...", "cWUSDC": "0x..." }`
    - **anchorAddresses:** e.g. `{ "USDC": "0x...", "USDT": "0x..." }`
    - **pmmPools:** array of `{ "base", "quote", "poolAddress", "feeBps", "k", ... }` (or `base_token`, `quote_token`, `pool_address`)
- **MCP allowlist generation:** `deployment-status.json` is the **source of truth** for generating the MCP allowlist per chain. Run `./scripts/generate-mcp-allowlist-from-deployment-status.sh <chain_id> [-o path]` to produce an allowlist fragment (pool_address, base_token, quote_token, profile) so the MCP/AI can read pool state on that chain.
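For a first fill of `deployment-status.json`, the entry shape described above can be emitted from shell (sketch; the helper name and addresses are placeholders, the field names follow the list above):

```bash
# Emit one pmmPools entry in the documented shape (placeholder addresses)
emit_pool_entry() {
  local base=$1 quote=$2 pool=$3
  printf '{ "base": "%s", "quote": "%s", "poolAddress": "%s" }\n' "$base" "$quote" "$pool"
}

emit_pool_entry cWUSDT USDC 0x0000000000000000000000000000000000000001
```

Pipe the output into the chain's `pmmPools` array (or adapt the keys to the `base_token`/`quote_token`/`pool_address` variant).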

---

## 7. References

- [PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN.md](PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN.md) — Overall plan (Chain 138 mesh + public single-sided).
- [MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md](MCP_AI_POOL_MANAGEMENT_PLAN_UPGRADES.md) — MCP/AI upgrades, allowlist sync, Uniswap, bot integration.
- [cross-chain-pmm-lps/docs/02-pool-topology.md](../../cross-chain-pmm-lps/docs/02-pool-topology.md) — Pool topology.
- [cross-chain-pmm-lps/docs/06-deployment-recipe.md](../../cross-chain-pmm-lps/docs/06-deployment-recipe.md) — Recipe and bot.
- [PMM_DEX_ROUTING_STATUS.md](../11-references/PMM_DEX_ROUTING_STATUS.md) — Routing status.
docs/04-configuration/ACTIVITY_FEED_SPEC.md
@@ -0,0 +1,91 @@
# Activity Feed — Event Schema and Ingestion

**Purpose:** Canonical event model and ingestion spec for the normalized activity feed: transfers, app events, and bridge stitching. Table: `activity_events` (migration 0014).

**References:** [indexer-architecture.md](../../explorer-monorepo/docs/specs/indexing/indexer-architecture.md), [heatmap-chains.ts](../../smom-dbis-138/services/token-aggregation/src/config/heatmap-chains.ts) (ALT vs B/SBS), [cross-chain-bridges.ts](../../smom-dbis-138/services/token-aggregation/src/config/cross-chain-bridges.ts) (`getRouteFromRegistry`).

---

## 1. Table: `activity_events`

| Column | Type | Description |
|--------|------|-------------|
| `id` | uuid | Primary key (default gen_random_uuid()) |
| `chain_id` | integer | 138 or 651940 (and others when indexed) |
| `transaction_hash` | varchar(66) | Tx hash |
| `log_index` | integer | Log index (0 for tx-level) |
| `block_number` | bigint | Block number |
| `block_timestamp` | timestamptz | Block time |
| `actor` | varchar(42) | Wallet that initiated the action |
| `subject` | varchar(42) | Optional: user/account/tokenId/resource |
| `event_type` | varchar(32) | TRANSFER, APP_ACTION, CLAIM, BRIDGE_OUT, BRIDGE_IN |
| `contract_address` | varchar(42) | Contract that emitted or was called |
| `data` | jsonb | Parsed event fields |
| `routing` | jsonb | `{ "path": "ALT" \| "CCIP", "fromChain", "toChain", "bridgeTxHash"?: string }` |
| `created_at` | timestamptz | Insert time |

**Unique:** `(chain_id, transaction_hash, log_index)`.

---

## 2. Ingestion

### 2.1 Transfers

- **Source:** Existing `token_transfers` (and ERC-721/1155 logs when indexed).
- **Mapping:** For each row: insert into `activity_events` with `event_type = 'TRANSFER'`, `actor = from_address`, `subject = to_address` (or token id for NFT), `data = { from, to, value, tokenContract }`, `contract_address = token_contract`. `routing` = NULL for same-chain transfers.
- **Backfill:** One-time or periodic job: `INSERT INTO activity_events (...) SELECT ... FROM token_transfers WHERE NOT EXISTS (...)`.

### 2.2 App events

- **Source:** Application-lifecycle events (create, complete, settle, redeem, etc.) from your contracts.
- **Registry:** Maintain a mapping (event signature → `event_type` + parser). Example: `0x...` → `APP_ACTION`, parse `data` from log topics/data.
- **Insert:** Decode log; set `event_type`, `actor` (e.g. tx from), `subject` (e.g. orderId), `data` (decoded fields), `contract_address`.
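The registry can start as a plain map from topic0 to `event_type`. The shell sketch below uses the well-known ERC-20 Transfer topic; the second entry is a placeholder for one of your app events, not a real signature:

```bash
# topic0 -> event_type registry (sketch; the APP_ACTION topic is a placeholder)
declare -A EVENT_TYPE_BY_TOPIC=(
  # keccak256("Transfer(address,address,uint256)")
  ["0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"]="TRANSFER"
  # placeholder: replace with your app-event topic0
  ["0x0000000000000000000000000000000000000000000000000000000000000000"]="APP_ACTION"
)

# Classify a log by its topic0; unknown topics are skipped by the ingester
classify_log() {
  local topic0=$1
  echo "${EVENT_TYPE_BY_TOPIC[$topic0]:-UNKNOWN}"
}
```

In the real ingester the value would carry a parser reference alongside the `event_type`.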

### 2.3 Bridge stitching

- **Source:** Bridge contracts (AlltraAdapter, CCIP WETH9/WETH10); events such as lock/burn on source, mint/release on destination.
- **Routing:** Use [getRouteFromRegistry](../../smom-dbis-138/services/token-aggregation/src/config/cross-chain-bridges.ts) or [config/routing-registry.json](../../config/routing-registry.json): 138↔651940 → `path: "ALT"`, 138↔others → `path: "CCIP"`.
- **Insert:** For each bridge event, set `routing = { path: "ALT"|"CCIP", fromChain, toChain, bridgeTxHash }`. Optionally correlate "bridge out" and "bridge in" with a shared `data.correlationId` so the API can return one stitched feed item per cross-chain move.
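The ALT-vs-CCIP decision above reduces to a single symmetric rule; a shell sketch (chain IDs taken from this document, helper name hypothetical):

```bash
# Pick the bridge path between two chains: ALT for 138<->651940, CCIP otherwise
route_path() {
  local from=$1 to=$2
  if { [ "$from" = 138 ] && [ "$to" = 651940 ]; } || \
     { [ "$from" = 651940 ] && [ "$to" = 138 ]; }; then
    echo "ALT"
  else
    echo "CCIP"
  fi
}
```

Production code should read `routing-registry.json` (or call `getRouteFromRegistry`) rather than hardcode the pair.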

---

## 3. Activity feed API

**Queries:**

- **By user:** `WHERE actor = $address OR subject = $address` (paginated).
- **By token/NFT:** `WHERE subject = $tokenId` or `WHERE contract_address = $token AND data->>'tokenId' = $tokenId` (paginated).
- **Global:** `WHERE event_type IN (...)` with pagination by `(block_timestamp DESC, id DESC)`.

**Pagination:** Cursor-based using `(block_timestamp, id)`; limit e.g. 50 per page.

**Example (by user, cursor `($2, $3)` = `(block_timestamp, id)` of the last row on the previous page):**

```sql
SELECT * FROM activity_events
WHERE (actor = $1 OR subject = $1)
  AND (block_timestamp, id) < ($2, $3)
ORDER BY block_timestamp DESC, id DESC
LIMIT 50;
```

---

## 4. Event type enum (logical)

| event_type | Description |
|------------|-------------|
| TRANSFER | ERC-20/721/1155 transfer |
| APP_ACTION | App-lifecycle (create, complete, settle, etc.) |
| CLAIM | Claim/mint from drop or contract |
| BRIDGE_OUT | Lock/burn on source chain |
| BRIDGE_IN | Mint/release on destination chain |

---

## 5. Migration

- **Up:** [0014_activity_events.up.sql](../../explorer-monorepo/backend/database/migrations/0014_activity_events.up.sql)
- **Down:** `0014_activity_events.down.sql`

Run with your existing migration runner (e.g. golang-migrate, node-pg-migrate) against the explorer/backend DB.
docs/04-configuration/ALLTRA_SPONSORSHIP_POLICY_MATRIX.md
@@ -0,0 +1,105 @@
# Alltra (651940) Gas Sponsorship — Policy Matrix and Method Allowlist

**Purpose:** Define the sponsorship policy for Alltra-native gas (ERC-4337 paymaster on chain 651940): three-tier policy, method allowlist, and anti-abuse controls. Use with thirdweb Engine or an ERC-4337 paymaster contract on 651940.

**References:** [thirdweb Gas Sponsorship](https://portal.thirdweb.com/wallets/sponsor-gas), [ERC-4337 Paymasters](https://docs.erc4337.io/paymasters/index.html), [THIRDWEB_ENGINE_CHAIN_OVERRIDES.md](THIRDWEB_ENGINE_CHAIN_OVERRIDES.md).

---

## 1. Policy groups

### Policy Group 1 — Always sponsor (low risk, onboarding)

| Category | Contract | Allowed methods | Notes |
|----------|----------|-----------------|-------|
| Smart account init | AA factory / account | `createAccount`, `initialize` | Required for first use |
| Session / auth proofs | Auth/Session contract (if onchain) | `registerKey`, `rotateKey` | If keys stored onchain |
| First app action | CoreApp contract (TBD) | 1–2 core functions | Keep small initially |

### Policy Group 2 — Sponsor with caps (medium risk)

| Category | Contract | Allowed methods | Caps |
|----------|----------|-----------------|------|
| App events writes | CoreApp / Modules | Selected write funcs | Per-user/day tx limit + per-user/day gas limit |
| Claims / mints | Token/NFT drop | `claim`, `mintTo` | Restrict to allowlisted drops only |

### Policy Group 3 — Do not sponsor (high risk)

- Arbitrary `approve()` to unknown spenders
- Arbitrary ERC-20 `transfer` / `transferFrom`
- Swaps and bridge calls (user pays gas)

---

## 2. Anti-abuse controls (minimum viable)

- **Per-user daily max sponsored gas** — e.g. 500k gas/day per wallet.
- **Per-IP / per-device burst limits** — e.g. max N requests per minute from the same IP.
- **Contract allowlist only** — only contracts in the allowlist can be called in sponsored userOps.
- **Method allowlist only** — only method selectors in the allowlist (see below) are sponsored.
- **Optional:** After first N sponsored tx, require the user to hold a small amount of native gas token before further sponsorship.
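The per-user daily gas cap reduces to a counter keyed by (wallet, day). An in-memory shell sketch of the check (illustrative only — a real paymaster would persist the counter and enforce it at userOp validation):

```bash
# Track sponsored gas per (user, day); reject once the daily cap would be exceeded
declare -A SPONSORED_GAS
DAILY_GAS_CAP=500000

sponsor_gas() {
  local user=$1 gas=$2 day=$3
  local key="${user}:${day}"
  local used=${SPONSORED_GAS[$key]:-0}
  if (( used + gas > DAILY_GAS_CAP )); then
    echo "REJECT"
  else
    SPONSORED_GAS[$key]=$(( used + gas ))
    echo "SPONSOR"
  fi
}
```

The same shape extends to the per-user tx count and per-IP burst limits from the list above.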

---

## 3. Method allowlist (production)

Configure the paymaster with a **method allowlist** keyed by `(chainId, contract, method selector)`.

**Chain:** 651940 (Alltra).

**Contract + method selectors:** To be filled when CoreApp (and optional AA factory, session contract) addresses and method names are known. Example shape:

| Contract (address) | Method | Selector (4 bytes) | Policy group |
|--------------------|--------|---------------------|--------------|
| TBD (CoreApp) | `doAction` | `0x...` | 1 or 2 |
| TBD (AA factory) | `createAccount` | `0x...` | 1 |
| TBD (AA factory) | `initialize` | `0x...` | 1 |

**How to add selectors:** For each method, compute `keccak256(methodSignature).slice(0, 10)` (e.g. `doAction(uint256)` → selector). Paste into the Engine paymaster policy or into your paymaster contract’s allowlist.
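With Foundry installed, `cast sig '<signature>'` computes the selector directly (for example, `cast sig "transfer(address,uint256)"` prints the well-known `0xa9059cbb`). Before pasting values into the policy, a shape check catches truncated or malformed selectors (sketch, pure shell):

```bash
# Validate that a value looks like a 4-byte selector: "0x" + 8 hex chars
is_selector() {
  [[ "$1" =~ ^0x[0-9a-fA-F]{8}$ ]]
}

is_selector 0xa9059cbb && echo "valid"
```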

**Placeholder JSON (allowlist):** When you have contract addresses and method names, add a file e.g. `config/alltra-sponsorship-allowlist.json`:

```json
{
  "chainId": 651940,
  "contracts": [
    {
      "address": "0x...",
      "label": "CoreApp",
      "methods": [
        { "name": "doAction", "selector": "0x..." }
      ]
    }
  ]
}
```

---

## 4. Per-user / per-day caps (recommended values)

| Limit | Suggested value | Notes |
|-------|------------------|-------|
| Sponsored gas per user per day | 500_000 | Tune for your app |
| Sponsored tx count per user per day | 10 | For Group 2 |
| Burst (per IP) | 20 req/min | Rate limit |

---

## 5. Implementation checklist

- [ ] Add chain 651940 to Engine (see [THIRDWEB_ENGINE_CHAIN_OVERRIDES.md](THIRDWEB_ENGINE_CHAIN_OVERRIDES.md)).
- [ ] Create or configure paymaster on 651940 (thirdweb Engine or custom contract).
- [ ] Set Policy Group 1 contracts and method selectors (AA init, optional session).
- [ ] Set Policy Group 2 contracts and method selectors (CoreApp, claims) with per-user/day caps.
- [ ] Enforce contract + method allowlist; reject all other calls.
- [ ] Add per-user daily gas and tx limits; optional per-IP burst limit.

---

## 6. Separation from x402

- **Sponsorship:** Pays for **gas** of the user’s app actions (onchain writes) on 651940.
- **x402:** User pays **USDC** for API/service access (offchain response gated by onchain payment proof).

They are independent: x402 payment is a user-funded USDC transfer; sponsored txs are paymaster-funded gas.
docs/04-configuration/ALLTRA_X402_OPERATOR_GUIDE.md
@@ -0,0 +1,44 @@
|
||||
# Alltra + x402 Operator Guide
|
||||
|
||||
**Purpose:** Short operator reference for Alltra (651940) and x402: server wallet usage, chain config, and where to look for runbooks.
|
||||
|
||||
---
|
||||
|
||||
## Server wallet usage
|
||||
|
||||
- Use the **server wallet** (e.g. `SERVER_WALLET_ADDRESS` in x402-api) only for:
|
||||
- Contract admin (roles, pausing, upgrades)
|
||||
- Allowlist/signature minting
|
||||
- Indexer repair jobs
|
||||
- Operational controls (key rotation, emergency)
|
||||
- **Do not** use it in user flows; keep keys in KMS/HSM/custody.
|
||||
- Full policy: [THIRDWEB_WALLETS_INTEGRATION.md](THIRDWEB_WALLETS_INTEGRATION.md) §3.1.
|
||||
|
||||
---
|
||||
|
||||
## Chains
|
||||
|
||||
- **138:** Hub (DeFi Oracle Meta Mainnet); RPC and Engine overrides: [THIRDWEB_ENGINE_CHAIN_OVERRIDES.md](THIRDWEB_ENGINE_CHAIN_OVERRIDES.md).
|
||||
- **651940:** Alltra (ALL Mainnet); sponsorship and x402 USDC on this chain.
|
||||
|
||||
---
|
||||
|
||||
## x402 (Alltra-native)
|
||||
|
||||
- **Env:** `X402_USE_ALLTRA=true`, `SERVER_WALLET_ADDRESS`, optional `CHAIN_651940_RPC_URL`. When Alltra is used, local verification does not require `THIRDWEB_SECRET_KEY`.
|
||||
- **Spec:** [X402_ALLTRA_ENDPOINT_SPEC.md](X402_ALLTRA_ENDPOINT_SPEC.md) — 402 challenge, PAYMENT-SIGNATURE, local verification on 651940 USDC.
|
||||
- **API:** x402-api returns 402 + `PAYMENT-REQUIRED` when unpaid; accepts `PAYMENT-SIGNATURE` with `txHash` and verifies settlement on 651940.
|
||||
|
||||
---

## Sponsorship (paymaster on 651940)

- **Policy:** [ALLTRA_SPONSORSHIP_POLICY_MATRIX.md](ALLTRA_SPONSORSHIP_POLICY_MATRIX.md) — three-tier policy, method allowlist, anti-abuse caps.
- **Engine:** Add chain 651940 per [THIRDWEB_ENGINE_CHAIN_OVERRIDES.md](THIRDWEB_ENGINE_CHAIN_OVERRIDES.md) so paymaster and backend wallets work.

---

## Routing and activity feed

- **Routing registry:** [config/routing-registry.json](../../config/routing-registry.json); ALT for 138↔651940, CCIP for 138↔others. Helper: `getRouteFromRegistry()` in token-aggregation.
- **Activity feed:** [ACTIVITY_FEED_SPEC.md](ACTIVITY_FEED_SPEC.md) — `activity_events` table, ingestion, feed API.
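The documented routing rule (ALT for 138↔651940, CCIP for 138↔others) can be sketched as a shell function. This is only an illustration of the rule; `getRouteFromRegistry()` in token-aggregation is the real helper, driven by `config/routing-registry.json`:

```shell
# Mirror of the documented routing rule, for illustration only:
# ALT for 138<->651940, CCIP for everything else.
route_for() {  # usage: route_for <srcChain> <dstChain>
  if { [ "$1" = 138 ] && [ "$2" = 651940 ]; } || { [ "$1" = 651940 ] && [ "$2" = 138 ]; }; then
    echo ALT
  else
    echo CCIP
  fi
}
route_for 138 651940   # ALT
route_for 138 1        # CCIP
```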
129  docs/04-configuration/CORE_RPC_2101_2102_TXPOOL_ADMIN_STATUS.md  Normal file
@@ -0,0 +1,129 @@
# Core RPC 2101 & 2102 — TXPOOL and ADMIN Status

**Last Updated:** 2026-03-04
**Purpose:** Status of Core RPC nodes 2101 and 2102 for tx pool and admin APIs, and whether `txpool_besuClear`, `txpool_clear`, and `admin_removeTransaction` can be supported.

---

## 1. Current status (verified 2026-03-04)

| Node | IP:Port | Container | rpc_modules (exposed) | Block |
|------|---------|-----------|-----------------------|-------|
| **2101** | 192.168.11.211:8545 | Running | admin, eth, net, txpool, web3 | 2,547,803 |
| **2102** | 192.168.11.212:8545 | Running | admin, eth, net, txpool, web3 | — |

- **2101** uses `/etc/besu/config-rpc-core.toml` with `rpc-http-api=["ETH","NET","WEB3","ADMIN","DEBUG","TXPOOL"]`.
- **2102** uses `/etc/besu/config-rpc.toml` with `rpc-http-api=["ETH","NET","WEB3","ADMIN","DEBUG","TXPOOL"]`.

Both nodes already expose the **TXPOOL** and **ADMIN** API groups. No extra config is required to enable these groups.
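The `rpc_modules` column above can be re-verified from the LAN. The live call is shown commented; the check below runs against a canned sample response so the logic is readable offline:

```shell
# Confirm which API groups a node exposes via rpc_modules.
# Live call (from LAN) against 2101:
# curl -s -X POST -H "Content-Type: application/json" \
#   -d '{"jsonrpc":"2.0","method":"rpc_modules","params":[],"id":1}' \
#   http://192.168.11.211:8545
SAMPLE='{"jsonrpc":"2.0","id":1,"result":{"admin":"1.0","eth":"1.0","net":"1.0","txpool":"1.0","web3":"1.0"}}'
for mod in admin txpool; do
  case "$SAMPLE" in
    *"\"$mod\""*) echo "$mod: exposed" ;;
    *)            echo "$mod: MISSING" ;;
  esac
done
```

Repeat with `http://192.168.11.212:8545` for 2102.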
---

## 2. Requested methods: not implemented in Besu

You want Core RPC nodes 2101 and 2102 to support:

- `txpool_besuClear`
- `txpool_clear`
- `admin_removeTransaction`

**Conclusion: Hyperledger Besu does not implement these JSON-RPC methods.**

- **txpool_besuClear** and **txpool_clear** are not part of Besu's JSON-RPC API. Besu only provides:
  - `txpool_besuPendingTransactions`
  - `txpool_besuStatistics`
  - `txpool_besuTransactions`

  and does not document or ship a "clear pool" RPC method.
- **admin_removeTransaction** is not documented or implemented in Besu. Admin methods that do exist include `admin_peers`, `admin_nodeInfo`, etc., but not transaction removal.

So **no configuration or version change on 2101/2102 can add these three methods**; they are not available in Besu.
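Calling one of these unsupported methods returns the standard JSON-RPC "Method not found" error (code -32601), which is easy to detect in scripts. The sample response below is shaped like a typical JSON-RPC error envelope:

```shell
# What calling an unsupported method looks like: JSON-RPC error -32601
# ("Method not found"). Canned sample response for illustration.
RESP='{"jsonrpc":"2.0","id":1,"error":{"code":-32601,"message":"Method not found"}}'
case "$RESP" in
  *'"code":-32601'*) echo "unsupported method (expected for txpool_clear on Besu)" ;;
  *) echo "method exists" ;;
esac
```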
---

## 3. What 2101 and 2102 do support

- **TXPOOL:** `txpool_besuTransactions`, `txpool_besuStatistics`, `txpool_besuPendingTransactions` (and any other TXPOOL methods Besu implements). These work today when the TXPOOL API group is enabled (as on 2101 and 2102).
- **ADMIN:** All Besu admin methods (e.g. `admin_peers`, `admin_nodeInfo`) are available; only `admin_removeTransaction` does not exist in Besu.

Config for Core RPC should keep **TXPOOL** and **ADMIN** (and DEBUG if desired) in `rpc-http-api` and `rpc-ws-api` so that all supported txpool and admin methods remain available. The repo's canonical config for Core RPC is:

- **Path:** `smom-dbis-138/config/config-rpc-core.toml`
- **Relevant lines:**
  `rpc-http-api=["ETH","NET","WEB3","TXPOOL","QBFT","ADMIN","DEBUG","TRACE"]`
  and the matching `rpc-ws-api`.

Ensuring 2101 and 2102 use this (or an equivalent) config gives the maximum supported TXPOOL/ADMIN surface; it does not add the three unsupported methods above.
---

## 4. How to clear stuck transactions (operational workaround)

Because Besu does not expose a "clear pool" or "remove transaction" RPC:

1. **Preferred:** Run `./scripts/clear-all-transaction-pools.sh` (clears the pool on validators and RPC nodes 2101/2201 by restarting Besu and clearing pool data). Then wait 30–60 s before sending new transactions.
2. **Alternative:** Use replacement transactions (same nonce, higher gas price) so the new tx replaces the stuck one; see `./scripts/cancel-pending-transactions.sh` if available.
3. **Resolve script:** `./scripts/resolve-stuck-transaction-besu-qbft.sh` tries `txpool_besuClear` / `txpool_clear` / `admin_removeTransaction`; on Besu these return "Method not found". The script is still useful to inspect the nonce and suggest the operational workarounds above.
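A stuck transaction shows up as a gap between the "latest" and "pending" nonce of the sending address. A minimal sketch of that check, using canned responses for illustration (live calls would POST `eth_getTransactionCount` with `["<address>","latest"]` and `["<address>","pending"]` to `http://192.168.11.211:8545`):

```shell
# Detect a stuck tx by comparing latest vs pending nonce.
# Canned responses for illustration; see lead-in for the live calls.
LATEST='{"jsonrpc":"2.0","id":1,"result":"0x10"}'
PENDING='{"jsonrpc":"2.0","id":2,"result":"0x11"}'
hexval() { printf '%d' "$(printf '%s' "$1" | sed -n 's/.*"result":"\(0x[0-9a-fA-F]*\)".*/\1/p')"; }
L=$(hexval "$LATEST"); P=$(hexval "$PENDING")
if [ "$P" -gt "$L" ]; then
  echo "tx pending at nonce $L: replace with same nonce, higher gas price"
fi
```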
---

## 5. Can we "continue where we left off"?

- **If "continue" means** deploying contracts, minting, adding liquidity, or running other scripts that send transactions via 2101/2102:
  - **Yes**, as long as:
    - No stuck transaction is blocking the deployer nonce. If there is, run `./scripts/clear-all-transaction-pools.sh` (and optionally wait for validators to sync), then retry.
    - Block production is progressing (run `./scripts/monitoring/monitor-blockchain-health.sh`); if it is stalled, see `docs/06-besu/CRITICAL_ISSUE_BLOCK_PRODUCTION_STOPPED.md`.
- **If "continue" means** having 2101/2102 support `txpool_besuClear`, `txpool_clear`, and `admin_removeTransaction`:
  - **No.** Those methods are not implemented in Besu; no further "fixes" or config can add them. Use the operational workarounds in §4 instead.
---

## 6. Ensuring 2101 and 2102 use canonical config

To keep 2101 and 2102 in sync with the repo and with maximum TXPOOL/ADMIN support (without adding the three unsupported methods):

- Run:
  `./scripts/maintenance/ensure-core-rpc-config-2101-2102.sh`
  Options: `--dry-run` (no changes), `--2101-only`, `--2102-only`.
  The script sets `rpc-http-api` and `rpc-ws-api` on the node to include ETH, NET, WEB3, TXPOOL, QBFT, ADMIN, DEBUG, TRACE (HTTP) and ETH, NET, WEB3, TXPOOL, QBFT, ADMIN (WS), then restarts Besu.
- After any config change, verify with:
  - `./scripts/maintenance/health-check-rpc-2101.sh`
  - For 2102:
    `curl -s -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' http://192.168.11.212:8545`
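The `eth_chainId` verification returns a hex string, so it helps to convert it to decimal to confirm the expected chain. The canned response below assumes the node serves chain 138 (0x8a), the hub chain:

```shell
# eth_chainId returns hex; convert to decimal to confirm the expected chain.
# Canned response for illustration (0x8a = 138, the hub chain).
RESP='{"jsonrpc":"2.0","id":1,"result":"0x8a"}'
HEX=$(printf '%s' "$RESP" | sed -n 's/.*"result":"\(0x[0-9a-fA-F]*\)".*/\1/p')
DEC=$(printf '%d' "$HEX")
echo "chain id: $DEC"
```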
---

## 7. Status to continue (run these before mint/deploy)

| Check | Command / target | Last result | Action if fail |
|-------|------------------|-------------|----------------|
| Core RPC 2101 healthy | `./scripts/maintenance/health-check-rpc-2101.sh` | All passed | Run `./scripts/maintenance/fix-core-rpc-2101.sh` |
| Tx pool empty | `txpool_besuTransactions` → expect 0 | 1 tx (stuck) | Run `./scripts/clear-all-transaction-pools.sh`; wait 30–60 s |
| All 5 validators active | `systemctl is-active besu-validator` on 1000–1004 | 1000–1003 active; **1004 failed** | On ML110: `ssh root@192.168.11.10 'pct exec 1004 -- systemctl restart besu-validator'` |
| Block production | `./scripts/monitoring/monitor-blockchain-health.sh` | Stalled | Ensure all 5 validators active; wait for sync |
| RPC for mint | `RPC_URL_138=http://192.168.11.211:8545` in smom-dbis-138/.env | Set | Use Core RPC only |

**Order to continue:** (1) Restart validator 1004. (2) Clear tx pool. (3) Re-check tx pool = 0 and validators 5/5. (4) Run monitor-blockchain-health until blocks advance. (5) `cd smom-dbis-138 && ./scripts/mint-for-liquidity.sh`, then optionally `--add-liquidity`.
---

## 8. Continue run (2026-03-04)

- ensure-core-rpc-config: 2101 and 2102 updated and restarted. health-check-rpc-2101 passed.
- Stuck tx: 1 in pool. Clear via `clear-all-transaction-pools.sh`. Validator 1004: **failed** — restart on ML110. Block production stalled until 1004 is up and synced; then retry mint.

**Continue run (same day, after "Update the Status"):**

- Validator 1004 restarted on ML110.
- `clear-all-transaction-pools.sh` run to completion (validators 1000–1004, RPC 2101 and 2201 cleared and restarted).
- Tx pool still showed 1 tx after the clear (re-broadcast from peers, or the RPC 2101 pool repopulated).
- Block production still stalled (monitor: "no new blocks in 5s").
- Mint failed with "Replacement transaction underpriced" until the mint script was updated to pass `--gas-price` (uses `GAS_PRICE_138`, default 1 gwei). With `GAS_PRICE_138=500000000000` (500 gwei), the first mint tx was **accepted** but **timed out** waiting for confirmation (blocks not advancing).
- **Next:** When blocks advance, run `cd smom-dbis-138 && GAS_PRICE_138=500000000000 ./scripts/mint-for-liquidity.sh` (or re-run the clear, then mint with default gas). Optional: `--add-liquidity` after the mint confirms.
---

## 9. References

- Besu transaction pool concepts: https://besu.hyperledger.org/stable/public-networks/concepts/transactions/pool
- Resolve stuck tx (workarounds): `./scripts/resolve-stuck-transaction-besu-qbft.sh`
- Clear all tx pools: `./scripts/clear-all-transaction-pools.sh`
- Health: `./scripts/maintenance/health-check-rpc-2101.sh`, `./scripts/monitoring/monitor-blockchain-health.sh`
@@ -166,6 +166,20 @@
---

## 11a. Interpreting verification HTTP codes (301, 404, 000)

When running `verify-backend-vms.sh`, `verify-all-systems.sh`, or NPMplus checks, the following responses are **often expected** and do not necessarily indicate a failure:

| Code | Meaning | Typical cause |
|------|---------|---------------|
| **301** | Redirect | HTTPS redirect (e.g. nginx on :80 redirecting to HTTPS). Service is up. |
| **404** | Not found | Wrong port or path used in the check, or NPMplus/proxy returns 404 for a bare path. Service may still be healthy. |
| **000** | No response | Connection failed from the host running the script: wrong host (e.g. checking NPMplus admin from off-LAN), firewall, or service bound to localhost only (e.g. NPMplus admin on :81 inside the CT). |

**Summary:** 301 = HTTPS redirect (normal). 404 = incorrect port/path or NPMplus behaviour. 000 = connectivity/context (host, TLS, or port). Treat these as failures only when the intended endpoint and client context match.
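The table above can be applied mechanically when scripting checks. A small classifier over curl's `%{http_code}` output:

```shell
# Turn a curl %{http_code} result into the interpretation from the table above.
classify() {
  case "$1" in
    301) echo "redirect (usually fine: HTTP to HTTPS)" ;;
    404) echo "check port/path (service may still be healthy)" ;;
    000) echo "no connection (host, firewall, or bind context)" ;;
    2??) echo "ok" ;;
    *)   echo "investigate ($1)" ;;
  esac
}
# Typical live usage against a backend:
# CODE=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 http://192.168.11.140:80)
classify 301
classify 000
```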
---

## 12. Remaining Operator Actions (Requires Proxmox/Server Access)

1. **Apply nginx fix and deploy config on VMID 5000:** Run `./scripts/apply-remaining-operator-fixes.sh` from the repo root (LAN/operator). **Applied 2026-03-02:** nginx fix and explorer config deploy completed successfully.
@@ -67,6 +67,22 @@ If the NPMplus UI shows **ApiError** with **code: 400** and an empty or vague me
If the 400 persists, check the NPMplus container logs (e.g. from the Proxmox host: `pct exec 10233 -- tail -100 /data/logs/*.log`, or the path your NPMplus uses) for the actual validation or backend error.

**ApiError 400 on dashboard load (already logged in)**

If you see repeated **ApiError code 400** in the console as soon as the NPMplus UI loads (e.g. "Welcome to NPMplus", "You are logged in as Administrator"), the frontend is calling one or more API endpoints that return 400. Common causes:

1. **Find the failing request:** In the browser, open **Developer Tools** → **Network** tab → reload the NPMplus page. Filter by "Fetch/XHR". Find any request with status **400** and note the **Request URL** and **Response** body. Typical endpoints the dashboard calls: `/api/nginx/proxy-hosts`, `/api/nginx/certificates`, `/api/nginx/access-lists`, `/api/settings`, etc.
2. **Test the API from the command line** (from a host that can reach NPMplus):
   ```bash
   # From project root, with NPM_PASSWORD and NPM_EMAIL in .env
   NPM_URL="https://192.168.11.167:81" bash scripts/verify/export-npmplus-config.sh
   ```
   If the export script succeeds, the same GET endpoints work from curl; the 400 may be limited to a specific endpoint or to the browser (e.g. session, or a different endpoint the UI calls). If the script fails with 400, note which step fails (login vs proxy-hosts vs certificates).
3. **Browser:** Try an **incognito/private** window or another browser; clear the cache and log in again.
4. **Backend data:** Sometimes one proxy-host or certificate record has invalid or unexpected data and the API returns 400 when returning the list. Check the NPMplus container logs (see above). If you have a recent backup, you can compare or restore.

See also: [NPMPLUS_UI_APIERROR_400_RUNBOOK.md](NPMPLUS_UI_APIERROR_400_RUNBOOK.md) for a short runbook and API test commands.

---

## Fixes Applied (2026-01-31)
78  docs/04-configuration/EXPLORER_WALLET_LINK_QUICK_WIN.md  Normal file
@@ -0,0 +1,78 @@
# Explorer "Wallet" Link — Quick Win Runbook

**Purpose:** Add a Wallet link to the Blockscout/explorer navbar so users can reach the wallet page (e.g. https://explorer.d-bis.org/wallet).
**Effort:** ~15 minutes.
**Prerequisite:** SSH access to the explorer VM (e.g. VMID 5000).

---

## Option A: Blockscout frontend (recommended)

If the explorer uses a Blockscout frontend with a configurable nav:

1. SSH to the explorer host:
   ```bash
   ssh user@explorer-host  # or pct exec if it is a container
   ```
2. Locate the frontend config or template that defines the navbar (e.g. an env var such as `NAV_LINKS`, or a template under the Blockscout app).
3. Add a Wallet entry. Example (env-style):
   ```bash
   # If NAV_LINKS or similar is JSON:
   # add {"label":"Wallet","href":"/wallet"} to the links array
   ```
4. Restart the frontend service if required; reload the site.
---

## Option B: Static HTML / proxy landing page

If the explorer is served by Nginx with a static landing page (e.g. `/var/www/html/index.html`):

1. SSH to the host that serves the explorer (e.g. VMID 5000 or the NPMplus/proxy host).
2. Find the main HTML file. Common paths:
   - `/var/www/html/index.html`
   - the Nginx root for the explorer vhost
3. Open the file and locate the navigation section (e.g. `<nav>`, `<header>`, or a div with nav links).
4. Add the Wallet link next to the existing nav items:
   ```html
   <a href="/wallet">Wallet</a>
   ```
   Example in context:
   ```html
   <nav>
     <a href="/">Explorer</a>
     <a href="/wallet">Wallet</a>
     <!-- other links -->
   </nav>
   ```
5. Save and reload https://explorer.d-bis.org — the Wallet link should appear.
---

## Option C: One-liner (when you know the exact file)

If the nav is in a single file and you can edit it:

```bash
# After SSH to the explorer host — back up, then append or sed (adjust path and nav structure)
sudo cp /var/www/html/index.html /var/www/html/index.html.bak
# Then edit manually, or use sed only if the structure is known and stable:
# sudo sed -i 's|</nav>|  <a href="/wallet">Wallet</a>\n</nav>|' /var/www/html/index.html
```

Prefer a manual edit when the HTML structure is not uniform.
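Before running the sed against the live file, it can be rehearsed on a scratch copy to confirm the pattern does what you expect (GNU sed assumed; on BSD/macOS `sed -i` needs an argument):

```shell
# Rehearse the Option C sed on a scratch copy before touching the live file.
TMP=$(mktemp)
printf '<nav>\n  <a href="/">Explorer</a>\n</nav>\n' > "$TMP"
sed -i 's|</nav>|  <a href="/wallet">Wallet</a>\n</nav>|' "$TMP"   # GNU sed
COUNT=$(grep -c 'href="/wallet"' "$TMP")
echo "wallet links inserted: $COUNT"
rm -f "$TMP"
```

If the count is 1, the same sed is safe to run against the real `index.html` (after the backup).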
---

## Verify

- Open https://explorer.d-bis.org (or your explorer URL).
- Confirm "Wallet" appears in the navbar.
- Click it and confirm the wallet page loads (e.g. MetaMask chain-add / token list).

---

## References

- [REMAINING_TASKS.md](../REMAINING_TASKS.md) § Quick Wins
- [OPTIONAL_RECOMMENDATIONS_INDEX.md](../OPTIONAL_RECOMMENDATIONS_INDEX.md) § Quick win: Explorer "Wallet" link
@@ -429,6 +429,14 @@ wscat -c wss://rpc-ws-pub.d-bis.org
- Verify the certificate in NPMplus: check the certificate list in the API export
- Renew the certificate if expired: NPMplus UI → SSL Certificates → Renew

### Public URL Timeout (000) — DNS OK but explorer.d-bis.org unreachable

**Symptoms**: `curl https://explorer.d-bis.org` times out; `dig explorer.d-bis.org` returns 76.53.10.36.

**Cause**: Often NAT hairpin (client on LAN; the router does not loop 76.53.10.36 back to NPMplus), or a firewall blocking 443.

**Solutions**: See [EXPLORER_PUBLIC_URL_UNREACHABLE_FIX.md](../05-network/EXPLORER_PUBLIC_URL_UNREACHABLE_FIX.md): enable hairpin NAT on the UDM Pro, or use a hosts entry `192.168.11.167 explorer.d-bis.org` for LAN; verify the port forward; test from an external network.

### Internal Connectivity Fails

**Symptoms**: Cannot connect to NPMplus or backend VMs
50  docs/04-configuration/NPMPLUS_CUSTOM_NGINX_CONFIG.md  Normal file
@@ -0,0 +1,50 @@
# NPMplus custom Nginx configuration

**Purpose:** Reference for editing proxy hosts in NPMplus when adding security headers or custom directives.
**Important:** Adding a `location '/'` block in the custom config **overwrites** the proxy; use headers only, or a custom `'/'` location, as needed.

---

## Proxy details as Nginx variables

In **Custom Nginx Configuration** for a proxy host, these variables are available:

| Variable | Meaning |
|----------|---------|
| `$server` | Backend domain or IP (e.g. `192.168.11.140`) |
| `$port` | Backend port (e.g. `80`) |
| `$forward_scheme` | Scheme to the backend: `http` or `https` |
| `$forward_path` | Optional path forwarded to the backend |

Use them if you need to reference the proxy target in custom blocks.
---

## Safe custom config (headers only)

To add **security headers** (including a CSP with `'unsafe-eval'` for ethers.js v5) **without** replacing the proxy, paste the following in **Custom Nginx Configuration**. Do **not** add a `location '/'` block here, or it will overwrite the proxy to the backend.

```nginx
# Security headers (unsafe-eval needed for ethers.js v5)
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:; style-src 'self' 'unsafe-inline' https: data:; font-src 'self' https: data:; img-src 'self' data: https: blob:; connect-src 'self' https: wss: ws:; media-src 'self' https: data:; object-src 'none'; base-uri 'self'; form-action 'self' https:; frame-ancestors 'none'; upgrade-insecure-requests" always;
```

These directives apply in the context where NPMplus injects them (typically the proxy location). If your NPMplus version supports **more_set_headers** (from the headers-more module), you can use it instead of `add_header` for more control.
---

## Caveats (from NPMplus)

- **Adding `location '/'`** in the custom config **overwrites** the proxy configuration for that host. Requests will no longer be forwarded to `$server:$port`.
- If you need directives **inside** the `'/'` location, create a **custom location** for `'/'` in the UI ("Custom locations" → add location path `/`) instead of putting `location / { ... }` in the custom Nginx snippet.
- For **headers only**, prefer the snippet above (or **more_set_headers** if available); no `location` block is needed.

---

## Example use

- **Explorer (explorer.d-bis.org):** Proxy target `http://192.168.11.140:80`. Pasting the security-headers block above into "Custom Nginx Configuration" adds the CSP and other headers without changing the proxy. The backend (VMID 5000) still serves the custom frontend and APIs.
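After saving the proxy host, you can confirm the CSP is actually being served. The live check is shown commented; the detection logic runs against a canned response-header block for illustration:

```shell
# Confirm the CSP header is served after saving the proxy host.
# Live check (from LAN):
# curl -skI https://explorer.d-bis.org | grep -i content-security-policy
HDRS="HTTP/2 200
content-type: text/html
content-security-policy: default-src 'self'; object-src 'none'"
if printf '%s\n' "$HDRS" | grep -qi '^content-security-policy:'; then
  echo "CSP present"
else
  echo "CSP missing"
fi
```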
@@ -0,0 +1,86 @@
# NPMplus Proxy Hosts — Snapshot (March 2026)

**Source:** NPMplus UI (main instance, VMID 10233).
**Snapshot date:** 2026-03-02.
**Purpose:** Reference of current proxy destinations and their VMID/service mapping.

---

## Unique backends (destination IP:port)

Deduplicated by destination. Multiple proxy hosts (domains) can point to the same backend.

| Destination | TLS | Status | VMID / Service |
|-------------|-----|--------|----------------|
| http://192.168.11.140:80 | Certbot | Online | **5000** blockscout-1 (Explorer) |
| http://192.168.11.211:80 | Certbot | Online | **2101** besu-rpc-core-1 (or legacy web?) |
| http://192.168.11.211:8545 | Certbot | Online | **2101** besu-rpc-core-1 |
| http://192.168.11.211:8546 | Certbot | Online | **2101** besu-rpc-core-1 (WS) |
| http://192.168.11.221:8545 | Certbot | Online | **2201** besu-rpc-public-1 |
| http://192.168.11.221:8546 | Certbot | Online | **2201** besu-rpc-public-1 (WS) |
| http://192.168.11.232:8545 | Certbot | Online | **2301** besu-rpc-private-1 |
| http://192.168.11.232:8546 | Certbot | Online | **2301** besu-rpc-private-1 (WS) |
| http://192.168.11.240:443 | Certbot | Online | **2400** thirdweb-rpc-1 (HTTPS) |
| http://192.168.11.246:8545 | Certbot | Online | **2503** besu-rpc-hybx-1 |
| http://192.168.11.247:8545 | Certbot | Online | **2504** besu-rpc-hybx-2 |
| http://192.168.11.248:8545 | Certbot | Online | **2505** besu-rpc-hybx-3 |
| http://192.168.11.172:8545 | Certbot | Online | **2500** besu-rpc-alltra-1 |
| http://192.168.11.173:8545 | Certbot | Online | **2501** besu-rpc-alltra-2 |
| http://192.168.11.174:8545 | Certbot | Online | **2502** besu-rpc-alltra-3 |
| http://192.168.11.177:80 | Certbot | Online | **5201** cacti-alltra-1 |
| http://192.168.11.251:80 | Certbot | Online | Legacy / verify (2501 destroyed; 2201 → .221) |
| http://192.168.11.58:80 | Certbot | Online | **5801** dapp-smom |
| http://192.168.11.130:80 | Certbot | Online | **10130** dbis-frontend |
| http://192.168.11.155:3000 | Certbot | Online | **10150** dbis-api-primary |
| http://192.168.11.156:3000 | Certbot | Online | **10151** dbis-api-secondary |
| http://192.168.11.54:3001 | Certbot | Public | **7804** Gov Portals (dbis.xom-dev.phoenix.sankofa.nexus) |
| http://192.168.11.54:3002 | HTTP only | **Unknown** | **7804** Gov Portals (iccc.xom-dev) |
| http://192.168.11.54:3003 | HTTP only | **Unknown** | **7804** Gov Portals (omnl.xom-dev) |
| http://192.168.11.54:3004 | HTTP only | **Unknown** | **7804** Gov Portals (xom.xom-dev) |
| http://192.168.11.60:3000 | Certbot | Online | **3000** or **5700** (ML / Dev VM — confirm which has .60) |
| http://192.168.11.37:80 | Certbot | Online | **7810** mim-web-1 (MIM4U) |
| http://192.168.11.36:80 | Certbot | Online | **7811** mim-api-1 |
| http://192.168.11.50:4000 | Certbot | Online | **7800** sankofa-api-1 (Phoenix API) |
| http://192.168.11.51:3000 | Certbot | Online | **7801** sankofa-portal-1 (Sankofa Portal) |
| http://192.168.11.72:8000 | Certbot | Online | **7805** sankofa-studio |
| http://192.168.11.10:8006 | Certbot | Online | Proxmox ml110 API |
| http://192.168.11.11:8006 | Certbot | Online | Proxmox r630-01 API |
| http://192.168.11.12:8006 | Certbot | Online | Proxmox r630-02 API |
---

## Not in this NPMplus instance

- **192.168.11.85** (Mifos, VMID 5800): Proxied by **NPMplus Mifos (VMID 10237)** at 192.168.11.171, not by the main NPMplus (10233). The target is **https://192.168.11.85:443** (Mifos serves HTTPS only on this VM).

---

## Proper ports (from health checks)

| VMID | Hostname | Use this port for health/NPMplus |
|------|----------|----------------------------------|
| 5000 | blockscout-1 | **80** (redirect), **443**, **4000** (API) |
| 2400 | thirdweb-rpc-1 | **443** (HTTPS proxy) or **8545** (direct RPC) |
| 5800 | mifos | **443** (HTTPS only; no :80 listener) — on NPMplus 10237 |
| 10130 | dbis-frontend | **80** |
| 10150 | dbis-api-primary | **3000** |
| 10151 | dbis-api-secondary | **3000** |
---

## Status notes

- **Online:** NPMplus reports the backend as reachable.
- **Unknown:** NPMplus reports Unknown (e.g. 192.168.11.54:3002, :3003, :3004) — may need a TLS or backend check.
- **192.168.11.251:80:** Likely legacy; VMID 2501 (besu-rpc-2) was destroyed; core/public RPC are .211 and .221. Confirm or remove.
- **Duplicates in the UI:** The same destination can appear multiple times (different domains), e.g. 192.168.11.140:80, 192.168.11.37:80, 192.168.11.221:8545/8546.

---

## Related docs

- [NPMPLUS_CUSTOM_NGINX_CONFIG.md](NPMPLUS_CUSTOM_NGINX_CONFIG.md) — proxy variables (`$server`, `$port`, `$forward_scheme`, `$forward_path`), safe custom config (headers only), and the caveat: do not add `location '/'` or it overwrites the proxy.
- [ALL_VMIDS_ENDPOINTS.md](ALL_VMIDS_ENDPOINTS.md) — canonical VMID ↔ IP:port
- [RPC_ENDPOINTS_MASTER.md](RPC_ENDPOINTS_MASTER.md) — domain → backend
- [NPMPLUS_SERVICE_MAPPING_COMPLETE.md](NPMPLUS_SERVICE_MAPPING_COMPLETE.md) — NPMplus configuration reference
- [DETAILED_GAPS_AND_ISSUES_LIST.md](DETAILED_GAPS_AND_ISSUES_LIST.md) §11a — interpreting 301/404/000
103  docs/04-configuration/NPMPLUS_UI_APIERROR_400_RUNBOOK.md  Normal file
@@ -0,0 +1,103 @@
# NPMplus UI — ApiError 400 runbook

**Symptom:** NPMplus at https://192.168.11.167:81 shows "Welcome to NPMplus" and "You are logged in as Administrator", but the browser console shows repeated **ApiError** with **code: 400** and an empty or vague **message**.

**Meaning:** The UI is logged in, but one or more API calls (e.g. loading proxy hosts, certificates, or settings) return HTTP 400 Bad Request. The frontend (main.bundle.js) surfaces that as ApiError.

---

## 1. Identify which request returns 400

1. Open NPMplus in the browser: `https://192.168.11.167:81`
2. Open **Developer Tools** (F12) → **Network** tab
3. Enable "Preserve log" if available
4. Reload the page (or navigate to the tab that triggers the errors)
5. In the Network list, filter by **Fetch/XHR** (or look for requests to `/api/`)
6. Find any request with status **400** (red). Click it and check:
   - **Request URL** (e.g. `https://192.168.11.167:81/api/nginx/proxy-hosts`)
   - **Response** body (often JSON with an error message or validation detail)

Note the exact URL and response; that tells you which backend endpoint is failing.
---

## 2. Test the same endpoint from the command line

From a machine that can reach NPMplus (e.g. on the same LAN), with `NPM_EMAIL` and `NPM_PASSWORD` set (e.g. from `.env`):

```bash
cd /path/to/proxmox
source .env 2>/dev/null || true
NPM_URL="${NPM_URL:-https://192.168.11.167:81}"

# Login and get a token
TOKEN_RESPONSE=$(curl -s -k -X POST "$NPM_URL/api/tokens" \
  -H "Content-Type: application/json" \
  -d "{\"identity\":\"$NPM_EMAIL\",\"secret\":\"$NPM_PASSWORD\"}")
TOKEN=$(echo "$TOKEN_RESPONSE" | jq -r '.token // empty')

if [ -z "$TOKEN" ]; then
  echo "Login failed: $TOKEN_RESPONSE"
  exit 1
fi
echo "Login OK"

# Test the endpoints the dashboard typically calls
for path in "/api/nginx/proxy-hosts" "/api/nginx/certificates" "/api/nginx/access-lists"; do
  CODE=$(curl -s -k -o /tmp/npm_test_body -w "%{http_code}" -X GET "$NPM_URL$path" -H "Authorization: Bearer $TOKEN")
  echo "$path -> HTTP $CODE"
  [ "$CODE" != "200" ] && echo "  Body: $(head -c 500 /tmp/npm_test_body)"
done
```

- If **login** fails with 400: the credentials or request body may be wrong.
- If **proxy-hosts** or **certificates** returns 400: the backend may be rejecting the request or returning bad data (e.g. an invalid record in the DB). Check the NPMplus logs.
---

## 3. NPMplus container logs

From the Proxmox host that runs NPMplus (VMID 10233, typically 192.168.11.11):

```bash
ssh root@192.168.11.11 "pct exec 10233 -- docker logs npmplus --tail 200 2>&1"
```

Or, if NPMplus runs without Docker inside the container:

```bash
ssh root@192.168.11.11 "pct exec 10233 -- tail -200 /data/logs/*.log 2>/dev/null"
```

Look for lines containing "400", "Bad Request", or validation errors around the time you load the UI.
---

## 4. Quick fixes to try

| Action | When it helps |
|--------|---------------|
| **Hard refresh / clear cache** | Cached frontend or a bad session |
| **Incognito window** | Extensions or cache affecting requests |
| **Different browser** | Browser-specific behaviour |
| **Re-login** | Session or token-format issue |
| **Use .166 instead of .167** | If NPMplus is bound to .166 and .167 is a VIP, try `https://192.168.11.166:81` |
---

## 5. If one endpoint always returns 400

- **GET /api/nginx/proxy-hosts** or **/api/nginx/certificates** returning 400 can mean the backend has a record that fails validation when serialized. Options: restore from a backup, or (if you have DB access) inspect and fix or remove the offending row. See [NPMPLUS_BACKUP_RESTORE.md](NPMPLUS_BACKUP_RESTORE.md).
- **NPMplus version:** You are on 2.12.3+0a85402. Check the release notes or issues for that version for known 400s on list endpoints.
---

## 6. Export config (full API test)

Running the full export script exercises login + proxy-hosts + certificates:

```bash
NPM_URL="https://192.168.11.167:81" bash scripts/verify/export-npmplus-config.sh
```

If this completes without error, the main GET APIs work from curl; the UI 400 may be a different endpoint or browser-specific. If it fails, the script output shows which step returned the error.
@@ -0,0 +1,65 @@
# Adding a third and/or fourth R630 before migration — decision guide

**Context:** You are about to balance load by migrating containers from r630-01 to r630-02 (and optionally ml110). You asked whether it makes sense to add a **third** and/or **fourth** R630 to Proxmox **before** starting that migration.

---

## 1. You may already have a third and fourth R630

The repo documents **r630-03** (192.168.11.13) and **r630-04** (192.168.11.14):

- **Status:** Powered off; **not currently in the Proxmox cluster** (only ml110, r630-01, r630-02 are active).
- **Hardware (per report):** Dell R630, 512 GB RAM each, 2×600 GB boot, 6×250 GB SSD.
- **Issues when last used:** Not in cluster, SSL/certificate issues, and others — all with documented fixes.

**If these servers are still available and you are willing to power them on and fix them:**

- **Add them to the cluster first** (power on → fix SSL/join cluster per [reports/R630_03_04_POWER_ON_ISSUES_AND_FIXES.md](../../reports/R630_03_04_POWER_ON_ISSUES_AND_FIXES.md)).
- Then you have **five** Proxmox nodes (ml110 + r630-01, -02, -03, -04), or **four** (three R630s + ml110) if you bring back only one of them. Migration can then spread workload to r630-03 and r630-04 as well, instead of only to r630-02 and ml110.
- That gives more headroom and better HA (see below) **without** buying new hardware.

**If r630-03/04 are decommissioned or unavailable:** Treat this as “add new R630(s)” below.

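Before committing to either path, a quick reachability check can settle whether the two nodes are even powered on. A minimal sketch, assuming the documented IPs and that SSH is enabled on the Proxmox hosts; it only probes port 22 with bash's `/dev/tcp`, so no extra tools are needed:

```shell
# check_node <ip>: probe the node's SSH port with bash's built-in /dev/tcp.
check_node() {
  local ip="$1"
  if timeout 2 bash -c ">/dev/tcp/${ip}/22" 2>/dev/null; then
    echo "${ip}: SSH port open (node is up)"
  else
    echo "${ip}: no response (powered off or unreachable)"
  fi
}

check_node 192.168.11.13   # r630-03
check_node 192.168.11.14   # r630-04
```

A "no response" here usually means the server is powered off, which is the expected state per the report above.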
---

## 2. Does it make sense to add a third/fourth R630 (or bring r630-03/04 online) before migration?

**Yes, it can make sense**, depending on goals.

| Goal | Add 3rd/4th R630 before migration? | Notes |
|------|-------------------------------------|-------|
| **Reduce load on r630-01 quickly** | Optional | Migration to **existing** r630-02 (and ml110) already helps. You can migrate first and add nodes later. |
| **More headroom long term** | Yes | With 3–4 R630s (+ ml110), workload is spread across more nodes; no single node is as hot as r630-01 today. |
| **Proxmox HA + Ceph** | Yes (3 min, 4 better) | Per [PROXMOX_HA_CLUSTER_ROADMAP.md](../02-architecture/PROXMOX_HA_CLUSTER_ROADMAP.md): **3 R630s** minimum for HA + Ceph; **4 R630s** better for Ceph recovery. You currently have 2 R630s + ml110; adding a 3rd (and 4th) R630 aligns with that. |
| **Avoid “just moving the problem”** | Yes | If you only move workload to r630-02, r630-02 may become the new bottleneck. Adding nodes gives more capacity so migration actually balances. |
| **Cost / complexity** | Your call | New hardware = cost and setup. Bringing r630-03/04 back = no new purchase, but time to power on, fix, and join the cluster. |

**Practical recommendation:**

1. **If r630-03 and/or r630-04 exist and are usable:**
   **Power them on and add them to the cluster first**, then run the migration. You get a 4- (or 5-) node cluster and can move workload to r630-03 and r630-04 as well as r630-02. Use [reports/R630_03_04_POWER_ON_ISSUES_AND_FIXES.md](../../reports/R630_03_04_POWER_ON_ISSUES_AND_FIXES.md) for the fix sequence.

2. **If you do not have extra R630s (or they’re gone):**
   **Migration first** is still valid: move candidates from r630-01 to r630-02 (and optionally ml110) per [PROXMOX_LOAD_BALANCING_RUNBOOK.md](PROXMOX_LOAD_BALANCING_RUNBOOK.md). That reduces r630-01 load with no new hardware. If after that you still want more capacity or HA, **then** add a 3rd (and 4th) R630.

3. **If you are buying new R630s:**
   For HA + Ceph, the docs recommend **at least 3 R630s** (4 is better). So adding a **third** R630 is the minimum for that path; a **fourth** improves Ceph recovery and spread. You can add them before or after the current migration; adding before gives more migration targets.

---

## 3. Order of operations (suggested)

| Scenario | Order |
|----------|-------|
| **r630-03 / r630-04 exist and you will use them** | 1) Power on r630-03 (and -04). 2) Fix and join cluster. 3) Run load-balance migration (including to r630-03 / -04 if desired). |
| **No extra R630s yet; migration only** | 1) Run migration r630-01 → r630-02 (and optionally ml110). 2) Re-check load. 3) If needed, plan 3rd/4th R630. |
| **Buying new 3rd/4th R630** | 1) Install Proxmox and join the cluster. 2) Run migration so the new nodes take part of the workload. |

---

## 4. References

- **r630-03/04 issues and fixes:** [reports/R630_03_04_POWER_ON_ISSUES_AND_FIXES.md](../../reports/R630_03_04_POWER_ON_ISSUES_AND_FIXES.md)
- **HA and how many R630s:** [PROXMOX_HA_CLUSTER_ROADMAP.md](../02-architecture/PROXMOX_HA_CLUSTER_ROADMAP.md) — “At least 3 R630s for full HA with Ceph; 4 is better.”
- **Load-balance migration:** [PROXMOX_LOAD_BALANCING_RUNBOOK.md](PROXMOX_LOAD_BALANCING_RUNBOOK.md)
- **13-node long-term plan:** [R630_13_NODE_DOD_HA_MASTER_PLAN.md](../02-architecture/R630_13_NODE_DOD_HA_MASTER_PLAN.md)

116
docs/04-configuration/PROXMOX_LOAD_BALANCING_RUNBOOK.md
Normal file
@@ -0,0 +1,116 @@
# Proxmox load balancing runbook

**Purpose:** Reduce load on the busiest node (r630-01) by migrating selected LXC containers to r630-02. Also frees space on r630-01 when moving to another host. **Note:** ml110 is being repurposed to OPNsense/pfSense (WAN aggregator); migrate workloads *off* ml110 to r630-01/r630-02 before the repurpose — see [ML110_OPNSENSE_PFSENSE_WAN_AGGREGATOR.md](../11-references/ML110_OPNSENSE_PFSENSE_WAN_AGGREGATOR.md).

**Before you start:** If you are considering adding a **third or fourth R630** to the cluster first, see [PROXMOX_ADD_THIRD_FOURTH_R630_DECISION.md](PROXMOX_ADD_THIRD_FOURTH_R630_DECISION.md) — including whether you already have r630-03/r630-04 (powered off) to bring online.

**Current imbalance (typical):**

| Node | IP | LXC count | Load (1/5/15) | Notes |
|---------|---------------|-----------|--------------|-------|
| r630-01 | 192.168.11.11 | 58 | 56 / 81 / 92 | Heavily loaded |
| r630-02 | 192.168.11.12 | 23 | ~4 / 4 / 4 | Light |
| ml110 | 192.168.11.10 | 18 | ~7 / 7 / 9 | **Repurposing to OPNsense/pfSense** — migrate workloads off to r630-01/r630-02 |

**Ways to balance:**

1. **Cross-host migration (r630-01 → r630-02)** — Moves workload off r630-01. The IP stays the same if the container uses a static IP; only the Proxmox host changes. (ml110 is no longer a migration target; migrate containers *off* ml110 first.)
2. **Same-host storage migration (r630-01 data → thin1)** — Frees space on the `data` pool and can improve I/O; does not reduce CPU load by much. See [MIGRATION_PLAN_R630_01_DATA.md](MIGRATION_PLAN_R630_01_DATA.md).

---

## 1. Check cluster (live migrate vs backup/restore)

If all nodes are in the **same Proxmox cluster**, you can try **live migration** (faster, less downtime):

```bash
ssh root@192.168.11.11 "pvecm status"
ssh root@192.168.11.12 "pvecm status"
```

- If both show the **same cluster name** and list each other: use `pct migrate <VMID> <target_node> --restart` from any cluster node (run on r630-01 or from a host that SSHs to r630-01).
- If nodes are **not** in a cluster (or migrate fails due to storage): use **backup → copy → restore** with the script below.

---

## 2. Cross-host migration (r630-01 → r630-02)

**Script (backup/restore; works without shared storage):**

```bash
cd /path/to/proxmox

# One container (replace VMID and target storage)
./scripts/maintenance/migrate-ct-r630-01-to-r630-02.sh <VMID> [target_storage] [--destroy-source]

# Examples
./scripts/maintenance/migrate-ct-r630-01-to-r630-02.sh 3501 thin1 --dry-run
./scripts/maintenance/migrate-ct-r630-01-to-r630-02.sh 3501 thin1 --destroy-source
```

**Target storage on r630-02:** Check with `ssh root@192.168.11.12 "pvesm status"`. Common: `thin1`, `thin2`, `thin5`, `thin6`.

**If cluster works (live migrate):**

```bash
ssh root@192.168.11.11 "pct migrate <VMID> r630-02 --storage thin1 --restart"
# pct migrate moves the CT (no source copy remains); destroying the source
# (pct destroy <VMID> --purge 1) only applies to the backup/restore path.
```

---

## 3. Good candidates to move (r630-01 → r630-02)

Containers that **reduce load** and are **safe to move** (no critical chain/consensus; IP can stay static). Prefer moving several smaller ones rather than one critical RPC.

| VMID | Name / role | Notes |
|--------|------------------------|-------|
| 3500 | oracle-publisher-1 | Oracle publisher |
| 3501 | ccip-monitor-1 | CCIP monitor |
| 7804 | gov-portals-dev | Gov portals (already migrated in the past; verify current host) |
| 8640 | vault-phoenix-1 | Vault (if not critical path) |
| 8642 | vault-phoenix-3 | Vault |
| 10232 | CT10232 | Small service |
| 10235 | npmplus-alltra-hybx | NPMplus instance (has its own NPM; update UDM port forward if needed) |
| 10236 | npmplus-fourth | NPMplus instance |
| 10030–10092 | order-* (identity, intake, finance, etc.) | Order stack; move as a group if desired |
| 10200–10210 | order-prometheus, grafana, opensearch, haproxy | Monitoring/HA; move with order-* or after |

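When staging a batch from this table, it can help to print the per-container command first and eyeball it before running anything. A sketch (`preview_migrations` is a hypothetical helper; the VMIDs and `thin1` target are examples from the table, confirm each on your cluster):

```shell
# Print the migration command for each candidate VMID without running it.
preview_migrations() {
  local vmid
  for vmid in "$@"; do
    echo "./scripts/maintenance/migrate-ct-r630-01-to-r630-02.sh ${vmid} thin1 --dry-run"
  done
}

preview_migrations 3500 3501 10232 10236
```

Pipe the output to a file, review it, then execute line by line (dry-run first, as in section 2).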
**Do not move (keep on r630-01 for now):**

- **10233** — npmplus (main NPMplus; 76.53.10.36 → .167)
- **2101** — besu-rpc-core-1 (core RPC for deploy/admin)
- **2500–2505** — RPC alltra/hybx (critical RPCs)
- **1000–1002, 1500–1502** — validators and sentries (consensus)
- **10130, 10150, 10151** — dbis-frontend, dbis-api (core apps; move only with a plan)
- **100, 101, 102, 103, 104, 105** — mail, datacenter, cloudflared, omada, gitea (infra)

---

## 4. Migrating workloads *off* ml110 (before OPNsense/pfSense repurpose)

ml110 (192.168.11.10) is being **repurposed to OPNsense/pfSense** (WAN aggregator between 6–10 cable modems and the UDM Pros). All containers/VMs on ml110 must be **migrated to r630-01 or r630-02** before the repurpose.

- **If cluster:** `ssh root@192.168.11.10 "pct migrate <VMID> r630-01 --storage <storage> --restart"` or `... r630-02 ...`
- **If no cluster:** Use backup on ml110, copy to r630-01 or r630-02, and restore there (see [MIGRATE_CT_R630_01_TO_R630_02.md](../03-deployment/MIGRATE_CT_R630_01_TO_R630_02.md) and adapt for source=ml110, target=r630-01 or r630-02).

After all workloads are off ml110, remove ml110 from the cluster (or reinstall the node with OPNsense/pfSense). See [ML110_OPNSENSE_PFSENSE_WAN_AGGREGATOR.md](../11-references/ML110_OPNSENSE_PFSENSE_WAN_AGGREGATOR.md).

---

## 5. After migration

- **IP:** Containers keep the same IP if they use a static IP in the CT config; no change is needed for NPM/DNS entries that point by IP.
- **Docs:** Update any runbooks or configs that assume “VMID X is on r630-01” (e.g. `config/ip-addresses.conf` comments, backup scripts).
- **Verify:** Re-run `bash scripts/check-all-proxmox-hosts.sh` and confirm load and container counts.

---

## 6. Quick reference

| Goal | Command / doc |
|------|---------------|
| Check current load | `bash scripts/check-all-proxmox-hosts.sh` |
| Migrate one CT (r630-01 → r630-02) | `./scripts/maintenance/migrate-ct-r630-01-to-r630-02.sh <VMID> thin1 [--destroy-source]` |
| Same-host (data → thin1) | [MIGRATION_PLAN_R630_01_DATA.md](MIGRATION_PLAN_R630_01_DATA.md), `migrate-ct-r630-01-data-to-thin1.sh` |
| Full migration doc | [MIGRATE_CT_R630_01_TO_R630_02.md](../03-deployment/MIGRATE_CT_R630_01_TO_R630_02.md) |

@@ -21,6 +21,11 @@ This directory contains setup and configuration guides.
- **[cloudflare/](cloudflare)** ⭐⭐⭐ - Cloudflare configuration documentation
- **[CLOUDFLARE_CREDENTIALS_BOTH_METHODS.md](CLOUDFLARE_CREDENTIALS_BOTH_METHODS.md)** ⭐⭐ - API token vs email+key; Certbot one method per file
- **[NPMPLUS_CERTBOT_CLOUDNS_CREDENTIALS.md](NPMPLUS_CERTBOT_CLOUDNS_CREDENTIALS.md)** ⭐ - ClouDNS credentials from .env for NPMplus Certbot DNS challenge
- **[NPMPLUS_PROXY_HOSTS_SNAPSHOT_2026-03.md](NPMPLUS_PROXY_HOSTS_SNAPSHOT_2026-03.md)** - Snapshot of NPMplus proxy destinations (IP:port) and VMID mapping (March 2026)
- **[NPMPLUS_CUSTOM_NGINX_CONFIG.md](NPMPLUS_CUSTOM_NGINX_CONFIG.md)** - NPMplus custom config: proxy variables, security headers (CSP with unsafe-eval for ethers.js), and caveat (do not add `location '/'`)
- **[NPMPLUS_UI_APIERROR_400_RUNBOOK.md](NPMPLUS_UI_APIERROR_400_RUNBOOK.md)** - NPMplus UI ApiError 400 on dashboard load: find the failing request, test the API with curl, logs, fixes
- **[PROXMOX_LOAD_BALANCING_RUNBOOK.md](PROXMOX_LOAD_BALANCING_RUNBOOK.md)** - Balance Proxmox load: migrate containers from r630-01 to r630-02/ml110; candidates, script, cluster vs backup/restore
- **[PROXMOX_ADD_THIRD_FOURTH_R630_DECISION.md](PROXMOX_ADD_THIRD_FOURTH_R630_DECISION.md)** - Add 3rd/4th R630 before migration? r630-03/04 status, HA/Ceph (3–4 nodes), order of operations
- **[ER605_ROUTER_CONFIGURATION.md](ER605_ROUTER_CONFIGURATION.md)** ⭐⭐ - ER605 router configuration
- **[OMADA_API_SETUP.md](OMADA_API_SETUP.md)** ⭐⭐ - Omada API integration setup
- **[OMADA_HARDWARE_CONFIGURATION_REVIEW.md](OMADA_HARDWARE_CONFIGURATION_REVIEW.md)** ⭐⭐⭐ - Comprehensive Omada hardware and configuration review

@@ -96,6 +101,7 @@ This directory contains setup and configuration guides.
**Explorer (explorer.d-bis.org):**
- **[EXPLORER_FUNCTIONALITY_REVIEW.md](EXPLORER_FUNCTIONALITY_REVIEW.md)** - Routes, API URLs, contract verification, Snap send HTTPS.
- **[EXPLORER_GAPS_AND_RECOMMENDATIONS.md](EXPLORER_GAPS_AND_RECOMMENDATIONS.md)** - Loading on all pages, bridge/lanes, **Verify & Publish** (UI) and batch verification (Forge + proxy), user/API key issuance, operator checklist.
- **[EXPLORER_WALLET_LINK_QUICK_WIN.md](EXPLORER_WALLET_LINK_QUICK_WIN.md)** — Add Wallet link to explorer navbar (quick win runbook)
- **[EXPLORER_TROUBLESHOOTING.md](EXPLORER_TROUBLESHOOTING.md)** - SSL, NPMplus, 502/verification failures, common errors.
- **Contract verification (Forge + Blockscout):** [../08-monitoring/BLOCKSCOUT_VERIFICATION_GUIDE.md](../08-monitoring/BLOCKSCOUT_VERIFICATION_GUIDE.md) — proxy, manual UI, 502/HTML troubleshooting.

110
docs/04-configuration/THIRDWEB_ENGINE_CHAIN_OVERRIDES.md
Normal file
110
docs/04-configuration/THIRDWEB_ENGINE_CHAIN_OVERRIDES.md
Normal file
@@ -0,0 +1,110 @@
# thirdweb Engine — Custom Chain Overrides

**Purpose:** Document chain overrides for thirdweb Engine so it can resolve RPC and metadata for Chain 138 and ALL Mainnet (651940). Required for AA (account abstraction), paymaster, and backend wallet flows on these chains.

**Reference:** [Custom Chains \| thirdweb Engine](https://portal.thirdweb.com/engine/v2/features/custom-chains).

---

## Why override

Engine needs to know RPC URLs and chain metadata for every chain your app uses. Public chain lists may not include 138 or 651940; adding overrides prevents "unknown chain" errors and keeps AA + paymaster stable.

---

## Chain override shape

Per chain, configure at least:

| Field | Type | Description |
|-------|------|-------------|
| `chainId` | number | 138 or 651940 |
| `rpc` | string[] | Primary RPC first; fallback URLs optional |
| `nativeCurrency` | object | `{ name, symbol, decimals }` |
| `blockExplorers` | array | `[{ name, url }]` (optional but recommended) |
| `name` | string | Human-readable name |
| `slug` | string | Optional; used in logs/APIs |

## Chain 138 (DeFi Oracle Meta Mainnet)

```json
{
  "chainId": 138,
  "name": "DeFi Oracle Meta Mainnet",
  "slug": "chain-138",
  "rpc": [
    "https://rpc-http-pub.d-bis.org",
    "https://rpc.d-bis.org",
    "https://rpc.defi-oracle.io"
  ],
  "nativeCurrency": {
    "name": "Ether",
    "symbol": "ETH",
    "decimals": 18
  },
  "blockExplorers": [
    {
      "name": "Explorer",
      "url": "https://explorer.d-bis.org"
    }
  ]
}
```

**Admin/deployment RPC:** Set via `RPC_URL_138` (e.g. `http://192.168.11.211:8545`) when running from LAN; use the public RPC in Engine for external clients.

---

## Chain 651940 (ALL Mainnet / Alltra)

```json
{
  "chainId": 651940,
  "name": "ALL Mainnet",
  "slug": "alltra",
  "rpc": [
    "https://mainnet-rpc.alltra.global"
  ],
  "nativeCurrency": {
    "name": "Ether",
    "symbol": "ETH",
    "decimals": 18
  },
  "blockExplorers": [
    {
      "name": "Alltra",
      "url": "https://alltra.global"
    }
  ]
}
```

**Usage:** Alltra-native sponsorship and x402 USDC payments use this chain. Add a fallback RPC in `rpc[]` if Alltra provides one.

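Before saving either override, it is worth confirming each listed RPC actually reports the expected chain id. A sketch with curl and grep (`check_rpc` is an illustrative helper; `0x8a` is 138 in hex and `0x9f2a4` is 651940):

```shell
# Query eth_chainId on an RPC endpoint and print it next to the expected id.
check_rpc() {
  local url="$1" expect="$2" got
  got=$(curl -s --max-time 5 -X POST "$url" \
    -H 'Content-Type: application/json' \
    --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
    | grep -o '"result":"[^"]*"' | cut -d'"' -f4)
  echo "${url}: got ${got:-no-response}, expected ${expect}"
}

check_rpc https://rpc-http-pub.d-bis.org 0x8a       # Chain 138
check_rpc https://mainnet-rpc.alltra.global 0x9f2a4 # Chain 651940
```

A mismatch (or `no-response`) means Engine would also fail against that URL, so fix the endpoint before adding the override.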
---

## Where to configure

- **Engine dashboard:** Add custom chains in the Engine project settings (Custom Chains / Chain Overrides).
- **Config file:** If your Engine deployment uses a config file, add the above objects to the chain overrides section per the [Engine Custom Chains docs](https://portal.thirdweb.com/engine/v2/features/custom-chains).

---

## Checklist

- [ ] Add chain **138** with the production RPC (and a fallback if available).
- [ ] Add chain **651940** with the production RPC so the paymaster and backend wallets work on Alltra.
- [ ] Ensure `nativeCurrency` and `blockExplorers` are set so fee display and explorer links work.

---

## Single source of truth

RPC and explorer URLs are aligned with:

- [smom-dbis-138/services/token-aggregation/src/config/chains.ts](../../smom-dbis-138/services/token-aggregation/src/config/chains.ts) — `CHAIN_CONFIGS[138]`, `CHAIN_CONFIGS[651940]`
- [metamask-integration/provider/config/DUAL_CHAIN_NETWORKS.json](../../metamask-integration/provider/config/DUAL_CHAIN_NETWORKS.json)

Update this doc if you add new RPC endpoints or explorers.

@@ -56,7 +56,26 @@ Relevant for backend or headless flows:
**Secrets / env:**

- **frontend-dapp:** `VITE_THIRDWEB_CLIENT_ID`, `VITE_WALLETCONNECT_PROJECT_ID` (see [MASTER_SECRETS.md](MASTER_SECRETS.md), [DAPP_LXC_DEPLOYMENT.md](../03-deployment/DAPP_LXC_DEPLOYMENT.md)).
- **x402-api:** `THIRDWEB_SECRET_KEY` (backend only), `SERVER_WALLET_ADDRESS` (treasury for x402). When `X402_USE_ALLTRA=true`, local verification does not require `THIRDWEB_SECRET_KEY`.

---

## 3.1 Server wallet (admin signer) — usage policy

Use the **server wallet** (e.g. the key backing `SERVER_WALLET_ADDRESS` or an Engine backend wallet) only for:

- **Contract admin actions:** roles, pausing, upgrades.
- **Allowlist / signature minting** (if your contracts support it).
- **Indexer repair jobs** (rare, e.g. backfill or reconciliation).
- **Operational controls:** key rotation, emergency ops.

**Do not** use it for user flows (no user impersonation). Keep keys in **KMS, HSM, or secure custody**. See [ALLTRA_X402_OPERATOR_GUIDE.md](ALLTRA_X402_OPERATOR_GUIDE.md) for Alltra/x402 operator context.

**User wallets vs server wallet:**

- **External connect:** power users (MetaMask, WalletConnect, etc.).
- **Embedded:** email/social/passkeys for smooth onboarding; both are user-controlled.
- **Server wallet:** backend-only; never exposed to or used on behalf of end users.

---

116
docs/04-configuration/X402_ALLTRA_ENDPOINT_SPEC.md
Normal file
@@ -0,0 +1,116 @@
# x402 Endpoint Contract — Alltra (651940) + USDC

**Purpose:** Spec for Alltra-native x402 paid endpoints: the 402 challenge, retry with PAYMENT-SIGNATURE, and local verification on chain 651940 with USDC. Settlement is on Alltra; no dependency on Base or an external facilitator.

**References:** [coinbase/x402](https://github.com/coinbase/x402), [HTTP 402 — x402](https://docs.x402.org/core-concepts/http-402), [ADDRESS_MATRIX_AND_STATUS.md](../11-references/ADDRESS_MATRIX_AND_STATUS.md) §2.3 (Alltra USDC).

---

## 1. Overview

- **Chain:** `eip155:651940` (ALL Mainnet / Alltra)
- **Payment token:** USDC at `0xa95EeD79f84E6A0151eaEb9d441F9Ffd50e8e881`
- **Recipient:** Server treasury (e.g. `SERVER_WALLET_ADDRESS`)
- **Verification:** Local (recommended): the server verifies signature, intent, and on-chain settlement; an optional facilitator-like `/verify` can come later.

---

## 2. Step 1 — Client calls paid endpoint (unpaid)

**Request:** `GET /api/resource` (or any paid route)

**Response when unpaid:** `402 Payment Required`

**Headers:**

- `PAYMENT-REQUIRED: <base64 PaymentRequired>`

**PaymentRequired (JSON, then base64-encoded):**

| Field | Type | Description |
|-------|------|-------------|
| `network` | string | `eip155:651940` |
| `asset` | string | USDC contract address (`0xa95EeD79f84E6A0151eaEb9d441F9Ffd50e8e881`) |
| `amount` | string | Price in base units (e.g. "10000" for 0.01 USDC at 6 decimals) |
| `recipient` | string | Treasury address |
| `nonce` | string | Unique per request (e.g. UUID) |
| `expiresAt` | string | ISO 8601 (e.g. now + 5 minutes) |
| `resourceId` | string | Identifies the resource (e.g. URL or hash) so the payment is bound to the request |

**Example (decoded):**

```json
{
  "network": "eip155:651940",
  "asset": "0xa95EeD79f84E6A0151eaEb9d441F9Ffd50e8e881",
  "amount": "10000",
  "recipient": "0x...",
  "nonce": "550e8400-e29b-41d4-a716-446655440000",
  "expiresAt": "2026-03-04T12:05:00.000Z",
  "resourceId": "GET /api/premium"
}
```

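On the wire, the JSON above travels base64-encoded in the header. A minimal shell sketch of producing the header value (a subset of the example fields, with a round-trip check):

```shell
# Serialize a compact PaymentRequired and base64-encode it for the header.
payment_required='{"network":"eip155:651940","amount":"10000","nonce":"550e8400-e29b-41d4-a716-446655440000","resourceId":"GET /api/premium"}'
header_value=$(printf '%s' "$payment_required" | base64 | tr -d '\n')
echo "PAYMENT-REQUIRED: ${header_value}"

# Round-trip check: decoding returns the original JSON.
printf '%s' "$header_value" | base64 -d
```

The `tr -d '\n'` matters: `base64` wraps long output, and the header value must be a single line.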
---

## 3. Step 2 — Client pays and retries

The client performs a USDC transfer (or authorization) on chain 651940 to `recipient` for `amount`, then retries the same request with:

**Headers:**

- `PAYMENT-SIGNATURE: <base64 PaymentPayload>`

**PaymentPayload (JSON, then base64-encoded):**

| Field | Type | Description |
|-------|------|-------------|
| `payer` | string | Payer wallet address |
| `signature` | string | Signature over the payment intent (e.g. EIP-191 or EIP-712 of PaymentRequired or its hash) |
| `paymentRequired` | object | Copy of PaymentRequired so the server can verify the match |
| `txHash` | string (optional) | Transaction hash on 651940 proving the transfer (for on-chain verification) |

The server uses `txHash` to verify settlement via `eth_getTransactionReceipt` on 651940 when doing local verification.

---

## 4. Step 3 — Server verification (Alltra-native local)

1. **Decode** PaymentPayload from base64.
2. **Verify signature** — the signature belongs to `payer` (recover the signer from the signature over the payment intent).
3. **Verify intent** — `PaymentPayload.paymentRequired` matches the server-issued PaymentRequired (same amount, asset, chain, recipient, resourceId); `expiresAt` is in the future.
4. **Verify settlement:**
   - If `txHash` is present: call 651940 RPC `eth_getTransactionReceipt(txHash)`; confirm success and that the transfer is to `recipient` for `amount` (USDC) from `payer`.
   - If authorization-based: verify the authorization and that a transfer occurred (per your scheme).
5. **Replay:** Mark `(payer, resourceId, nonce)` as consumed (store in DB or cache with TTL); reject if already consumed.
6. **Respond:** Return 200 with the resource body; optionally set a `PAYMENT-RESPONSE` header (per x402) with the settlement response.

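Steps 1 and 3 can be sketched in shell (decode plus a field match against the server's issued values); signature recovery and the receipt check need an EVM library and are out of scope here. `verify_intent` is an illustrative name, and the grep stands in for a full field-by-field comparison:

```shell
# Decode a PAYMENT-SIGNATURE value and confirm the echoed intent carries
# the nonce the server issued for this request.
verify_intent() {
  local b64="$1" expected_nonce="$2" decoded
  decoded=$(printf '%s' "$b64" | base64 -d 2>/dev/null) || { echo reject; return; }
  if printf '%s' "$decoded" | grep -q "\"nonce\":\"${expected_nonce}\""; then
    echo ok
  else
    echo reject
  fi
}

payload='{"payer":"0x...","paymentRequired":{"nonce":"550e8400-e29b-41d4-a716-446655440000"}}'
verify_intent "$(printf '%s' "$payload" | base64 | tr -d '\n')" \
  "550e8400-e29b-41d4-a716-446655440000"   # prints: ok
```

In production, parse the JSON properly and compare every field of `paymentRequired`, not just the nonce.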
---

## 5. Replay protection

- Key: `(payer, resourceId, nonce)`.
- Store consumed keys with an expiry ≥ `expiresAt` so the same nonce cannot be reused.
- Production: use Redis or a DB; development: an in-memory Map with TTL is acceptable.

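The development-grade variant of that store can be as small as a keyed set. Here as a shell sketch (bash 4+ associative array; no TTL shown, and `consume_once` is an illustrative name):

```shell
# Reject the second use of the same (payer, resourceId, nonce) key.
declare -A consumed
consume_once() {
  local key="$1|$2|$3"   # payer | resourceId | nonce
  if [[ -n "${consumed[$key]:-}" ]]; then
    echo replay          # already seen: reject the request
  else
    consumed[$key]=1
    echo ok
  fi
}

consume_once 0xabc "GET /api/premium" n1   # prints: ok
consume_once 0xabc "GET /api/premium" n1   # prints: replay
```

The same key shape maps directly onto a Redis `SET key 1 NX EX <ttl>` in production.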
---

## 6. PAYMENT-RESPONSE (optional)

Per [docs.x402.org](https://docs.x402.org/core-concepts/http-402), the server may return a `PAYMENT-RESPONSE` header with settlement confirmation (e.g. txHash, status). Optional for a minimal implementation.

---

## 7. Separation from sponsorship

- **Sponsorship (paymaster):** Covers gas for app actions (e.g. CoreApp writes) on 651940.
- **x402:** User-paid USDC for API/service access; validated by this flow.

The two are independent: the x402 payment tx is user-funded; sponsored txs are paymaster-funded.

---

## 8. Implementation

- **x402-api:** When `X402_USE_ALLTRA=true`, the server can use this local verification path: return 402 + PAYMENT-REQUIRED when unpaid; on PAYMENT-SIGNATURE, run steps 1–6 and serve the resource on success.
- **USDC decimals:** 6 for Alltra USDC; `amount` in PaymentRequired is in base units (e.g. 10000 = 0.01 USDC).

@@ -0,0 +1,307 @@
# OMNL HYBX Operational Run Book

## Office Onboarding – ADF Asian Pacific Holding Singapore PTE LTD

---

## 1. Run Book Overview

This run book defines the **step-by-step operational procedure** to onboard a new corporate office into the **OMNL HYBX financial infrastructure** running on **Apache Fineract (Mifos X)**.

The procedure ensures:

- Consistent office creation
- CIS (Client Information Sheet) verification
- KYC/KYB validation
- Wallet provisioning
- ISO 20022 transaction readiness
- Audit trail compliance

**Instance:** [omnl.hybxfinance.io](https://omnl.hybxfinance.io/) (or omnl.hybx.global). Set `OMNL_FINERACT_BASE_URL` in `.env` accordingly.

---

## 2. System Environment

| Component | System |
| --------------------- | -------------------- |
| Core Banking | Apache Fineract |
| Interface | Mifos X |
| Treasury | OMNL HYBX |
| Messaging | ISO 20022 |
| Wallet Infrastructure | HYBX Treasury Wallet |
| Audit Logs | OMNL Ledger |

---

## 3. Office Identity

| Field | Value |
| ----------------- | ------------------------------------------- |
| Office Name | ADF ASIAN PACIFIC HOLDING SINGAPORE PTE LTD |
| Company Number | 202328126M |
| Representative | MR. ANG KOK YONG |
| Title | CEO |
| Jurisdiction | Singapore |
| Parent Office | OMNL |
| Parent Office ID | 1 |

---

## 4. Roles & Responsibilities

| Role | Responsibility |
| ---------------------- | ---------------------------- |
| Platform Administrator | Creates Office in Fineract |
| Compliance Officer | Verifies CIS and KYB |
| Treasury Operator | Creates institutional wallet |
| DevOps | Configures ISO-20022 node |
| Risk & Audit | Reviews onboarding log |

---

## 5. Required Documents

The following documents must be verified prior to office creation:

1. Client Information Sheet (CIS)
2. Company Registration
3. Director Identification
4. Corporate Address
5. Banking Coordinates
6. Compliance Verification

Documents are archived in:

```
HYBX/KYC/ADF_APAC_SINGAPORE/
```

---

## 6. Office Creation Procedure

### Step 1 — Authenticate to OMNL HYBX

Ensure API access: load credentials from `omnl-fineract/.env` or the repo root `.env` (`OMNL_FINERACT_BASE_URL`, `OMNL_FINERACT_USER`, `OMNL_FINERACT_PASSWORD`, `OMNL_FINERACT_TENANT`). Fineract uses **Basic auth** on each request (no separate token endpoint). Verify access with:

```bash
curl -s -u "${OMNL_FINERACT_USER}:${OMNL_FINERACT_PASSWORD}" \
  -H "Fineract-Platform-TenantId: ${OMNL_FINERACT_TENANT:-omnl}" \
  "${OMNL_FINERACT_BASE_URL}/offices"
```

Expected: HTTP 200 and a JSON array of offices.

### Step 2 — Create Office

**Endpoint:** `POST /fineract-provider/api/v1/offices`

**Payload:**

```json
{
  "name": "ADF ASIAN PACIFIC HOLDING SINGAPORE PTE LTD",
  "parentId": 1,
  "openingDate": "2023-07-11",
  "dateFormat": "yyyy-MM-dd",
  "locale": "en",
  "externalId": "202328126M"
}
```

**cURL (from repo root with env loaded):**

```bash
curl -X POST "${OMNL_FINERACT_BASE_URL}/offices" \
  -u "${OMNL_FINERACT_USER}:${OMNL_FINERACT_PASSWORD}" \
  -H "Fineract-Platform-TenantId: ${OMNL_FINERACT_TENANT:-omnl}" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "ADF ASIAN PACIFIC HOLDING SINGAPORE PTE LTD",
    "parentId": 1,
    "openingDate": "2023-07-11",
    "dateFormat": "yyyy-MM-dd",
    "locale": "en",
    "externalId": "202328126M"
  }'
```

**Script (recommended; idempotent by externalId):**

```bash
DRY_RUN=1 bash scripts/omnl/omnl-office-create-adf-singapore.sh  # preview
bash scripts/omnl/omnl-office-create-adf-singapore.sh            # create
```

Optional overrides: `OPENING_DATE`, `ADF_SINGAPORE_EXTERNAL_ID`, `ADF_SINGAPORE_OFFICE_NAME`. The script outputs `OFFICE_ID_ADF_SINGAPORE=<id>` on success.

### Step 3 — Verify Office Creation

Confirm via `GET ${OMNL_FINERACT_BASE_URL}/offices`:

```bash
curl -s -u "${OMNL_FINERACT_USER}:${OMNL_FINERACT_PASSWORD}" \
  -H "Fineract-Platform-TenantId: ${OMNL_FINERACT_TENANT:-omnl}" \
  "${OMNL_FINERACT_BASE_URL}/offices"
```

**Expected result:**

- Office ID: <auto-generated>
- Parent Office: OMNL (id: 1)
- externalId: 202328126M
- Status: Active

A log entry is created in **HYBX_LEDGER / OFFICE_REGISTRY**.

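To script this check, the new office's id can be pulled out of the `/offices` response by `externalId`. A sketch using only coreutils and assuming a compact JSON body (`extract_office_id` is a hypothetical helper; prefer `jq` if it is available):

```shell
# Find the "id" of the office whose externalId matches in a compact JSON body.
extract_office_id() {
  local json="$1" external_id="$2"
  printf '%s' "$json" | tr '}' '\n' \
    | grep "\"externalId\":\"${external_id}\"" \
    | grep -o '"id":[0-9]*' | head -n1 | cut -d: -f2
}

sample='[{"id":1,"name":"OMNL"},{"id":7,"name":"ADF","externalId":"202328126M"}]'
extract_office_id "$sample" 202328126M   # prints: 7
```

The extracted id is what later steps (datatable row, rollback) refer to as `OFFICE_ID`.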
---

## 7. Corporate Profile Attachment

Create additional corporate data using a **Fineract Datatable**.

**Datatable name:** `office_corporate_profile`

| Field | Value |
| --------------------- | ---------------- |
| representative_name | MR. ANG KOK YONG |
| representative_title | CEO |
| jurisdiction | Singapore |
| entity_type | Private Limited |

Create the datatable in Fineract (Administration → Register Datatables), link it to the **Office** entity, then populate a row for this office after creation.

---

## 8. Treasury Wallet Creation

After office creation, create the institutional treasury wallet.

| Parameter | Value |
| ---------------- | ------------------------- |
| Wallet type | Corporate Treasury Wallet |
| Wallet ID format | HYBX-SG-ADF-001 |
| Currency | Multi-Currency |
| Vault | HYBX Treasury |
| Liquidity Access | Enabled |
| Settlement Mode | ISO 20022 |

*(Wallet creation steps and API are defined in the Institutional Client Onboarding Run Book.)*

---

## 9. ISO-20022 Messaging Enablement

Configure the messaging endpoint for the office.

**Required channels:**

| Channel | Purpose |
| -------- | -------------------- |
| pacs.008 | Credit transfer |
| pacs.009 | Interbank settlement |
| camt.053 | Statement reporting |
| camt.056 | Payment recall |

**Node registration:** `OMNL-HYBX-NODE-SG-ADF`

---
|
||||
|
||||
## 10. Compliance Verification
|
||||
|
||||
Compliance officer confirms:
|
||||
|
||||
- ✔ CIS verified
|
||||
- ✔ Corporate registration validated
|
||||
- ✔ Representative identity verified
|
||||
- ✔ Sanctions screening completed
|
||||
|
||||
Compliance approval logged in **HYBX_COMPLIANCE_LEDGER**.
|
||||
|
||||
---
|
||||
|
||||
## 11. Operational Activation
|
||||
|
||||
Once all steps are completed:
|
||||
|
||||
**System status:** `OFFICE_STATUS = ACTIVE`
|
||||
|
||||
**Operational capabilities enabled:**
|
||||
|
||||
- Wallet transactions
|
||||
- Treasury participation
|
||||
- Liquidity routing
|
||||
- ISO-20022 transfers
|
||||
|
||||
---
|
||||
|
||||
## 12. Audit Trail
|
||||
|
||||
All steps recorded in:
|
||||
|
||||
- **HYBX_LEDGER**
|
||||
- **HYBX_AUDIT_LOG**
|
||||
|
||||
Audit data includes:
|
||||
|
||||
- Timestamp
|
||||
- Operator ID
|
||||
- API request hash
|
||||
- System response
|
||||
|
||||
---
|
||||
|
||||
## 13. Disaster Recovery
|
||||
|
||||
If onboarding fails:
|
||||
|
||||
1. Rollback office creation (if created).
|
||||
2. Archive CIS and failure details in `HYBX/KYC/ADF_APAC_SINGAPORE/`.
|
||||
3. Generate failure log.
|
||||
|
||||
**Rollback (delete office):**
|
||||
|
||||
```bash
# OFFICE_ID is the id returned at creation; auth and tenant header as in section 1.
curl -s -u "..." -H "Fineract-Platform-TenantId: omnl" \
  -X DELETE "${OMNL_FINERACT_BASE_URL}/offices/${OFFICE_ID}"
```

*Note:* Deleting an office may be restricted if the office has dependent data (clients, accounts). Resolve dependencies in the UI/API first, or contact the platform admin.

---

## 14. Final Validation Checklist

| Validation | Status |
| ----------------------- | ------ |
| Office created | ☐ |
| Corporate data attached | ☐ |
| Wallet created | ☐ |
| ISO-20022 enabled | ☐ |
| Compliance approved | ☐ |
| Audit logged | ☐ |

---

## 15. Office Hierarchy

```
OMNL (Head Office, ID: 1)
│
└── ADF ASIAN PACIFIC HOLDING SINGAPORE PTE LTD
```

---

## 16. Run Book Completion

Run book execution is **complete when all validation checks pass and the treasury wallet is active.**

---

## 17. Related Documentation

- [OMNL_FINERACT_CONFIGURATION.md](../OMNL_FINERACT_CONFIGURATION.md) — Base URL, tenant, credentials.
- [PELICAN_MOTORS_OFFICE_RUNBOOK.md](PELICAN_MOTORS_OFFICE_RUNBOOK.md) — Same POST /offices pattern.
- [OMNL_OFFICES_POPULATE.md](OMNL_OFFICES_POPULATE.md) — Bulk office creation from entity master data.
- **Institutional Client Onboarding Run Book** (when available) — Client creation, wallet provisioning, KYB automation, and ISO-20022 routing under this office.

@@ -0,0 +1,140 @@

# Besu Nodes Health, Block Production & Transaction Pool Check

**Date:** 2026-03-04
**Scope:** Block production, transaction pools (stuck txs), health of all Besu VM nodes, storage.

---

## 1. Block production

| Check | Result |
|-------|--------|
| **Core RPC (192.168.11.211:8545)** | Reachable; Chain ID 138 |
| **Latest block** | 2,547,803 (0x26e05b) |
| **Block advance (5s window)** | No new blocks (monitor: stalled) |
| **Block advance (12s window)** | No new blocks (diff=0) |

**Conclusion:** Block production is currently **stalled**. All validators are active and the RPC has 24 peers; the likely cause is validators still syncing or consensus not producing (see docs/06-besu/CRITICAL_ISSUE_BLOCK_PRODUCTION_STOPPED.md).

---

## 2. Transaction pools / stuck transactions

| Source | Result |
|--------|--------|
| **txpool_status** | Method not found (Besu uses different APIs) |
| **txpool_besuTransactions** | **1 transaction** in pool |
| **Tx hash** | `0x1c206b659cbb00cbe45557eda3fb3acbab86231820ed0f9ea41e836d7e07f591` |
| **Added to pool** | 2026-03-04T07:44:45.584Z |
| **Deployer pending (monitor)** | 1 pending (nonce 13486) |

**Conclusion:** One stuck transaction in the RPC node pool. Clearing options:

- Run `./scripts/clear-all-transaction-pools.sh` (clears the pool DB on all nodes; requires a restart).
- Or try an RPC-side clear first: `./scripts/resolve-stuck-transaction-besu-qbft.sh` (uses `txpool_besuClear` / `txpool_clear` if available on this RPC).

---

## 3. RPC VMID 2101 (Core) health

| Check | Status |
|-------|--------|
| Container 2101 | Running |
| besu-rpc.service | active |
| Port 8545 | Listening |
| RPC eth_chainId | 0x8a (Chain 138) |
| RPC eth_blockNumber | 0x26e05b |
| Database path /data/besu/database | **Writable** |

All checks passed (script: `./scripts/maintenance/health-check-rpc-2101.sh`).

---

## 4. All RPC node VMs (health script)

Run: `bash ./scripts/health/check-rpc-vms-health.sh`

| VMID | Host | Status | Block |
|------|------|--------|-------|
| 2101 | 192.168.11.11 | running, besu-rpc active | 2547803 |
| 2201 | 192.168.11.12 | running, besu-rpc active | 2547803 |
| 2301 | 192.168.11.10 | running, besu-rpc active | 2547803 |
| 2303 | 192.168.11.12 | running, besu-rpc active | 2547803 |
| 2304–2307, 2400 | 192.168.11.10 / .12 | running, besu-rpc active | 2547803 |
| **2308** | 192.168.11.10 | running, besu-rpc active | **2372719** (behind) |

**Note:** VMID 2308 is ~175k blocks behind; it may need a resync or investigation.

---

## 5. Validator status (1000–1004)

| VMID | Host | Service status |
|------|------|----------------|
| 1000, 1001, 1002 | 192.168.11.11 (R630-01) | besu-validator active |
| 1003, 1004 | 192.168.11.10 (ML110) | besu-validator active |

All 5 validators reported **active** by `./scripts/monitoring/monitor-blockchain-health.sh`.

---

## 6. Storage (Besu nodes)

| Node | VMID | Host | Root disk | /data/besu size |
|------|------|------|-----------|-----------------|
| Validator | 1000 | 192.168.11.11 | 98G, 6% used | 3.3G |
| Validator | 1001 | 192.168.11.11 | 98G, 6% used | 3.3G |
| Validator | 1002 | 192.168.11.11 | 98G, 6% used | 3.3G |
| Validator | 1003 | 192.168.11.10 | 98G, 10% used | 3.2G |
| Validator | 1004 | 192.168.11.10 | 98G, 10% used | 3.2G |
| RPC Core | 2101 | 192.168.11.11 | 196G, 4% used | 3.2G |
| RPC Public | 2201 | 192.168.11.12 | 196G, 5% used | 3.1G |

No storage issues observed; all nodes have ample free space.

---

## 7. Summary and next steps

| Item | Status |
|------|--------|
| Block production | Stalled (no new blocks in 5s and 12s checks) |
| Stuck tx in pool | 1 tx (hash 0x1c20…f591; nonce 13486) |
| RPC 2101 health | All passed, storage writable |
| RPC VMs (2101, 2201, 230x, 2400) | Running; 2308 behind |
| Validators 1000–1004 | All active |
| Storage (all Besu nodes) | Healthy, sufficient free space |

**Recommended next steps:**

1. **Unblock chain:** If block production does not resume, check validator logs (e.g. `journalctl -u besu-validator` on 1000–1004) and docs/06-besu/CRITICAL_ISSUE_BLOCK_PRODUCTION_STOPPED.md.
2. **Clear stuck tx:** Run `./scripts/clear-all-transaction-pools.sh`, then wait 30–60s; or try `./scripts/resolve-stuck-transaction-besu-qbft.sh` first (ensure RPC_URL_138 in smom-dbis-138/.env points to the Core RPC).
3. **RPC 2308:** Investigate why it is ~175k blocks behind (sync or connectivity).

---

## 8. Recommended steps executed (2026-03-04)

| Step | Action | Result |
|------|--------|--------|
| **Stuck tx** | Ran `resolve-stuck-transaction-besu-qbft.sh` with RPC_URL_138 = Core RPC | TXPOOL + ADMIN enabled; `txpool_besuClear` and `txpool_clear` returned **Method not found** on this RPC; `admin_removeTransaction` also not found. Nonce 13485 (latest). |
| **Stuck tx** | Ran `clear-all-transaction-pools.sh` | Validators 1000–1004 cleared and restarted ✅. The RPC 2101 and 2201 clears run after the validators (the script was clearing 2101 when checked). Wait 30–60s, then re-check `txpool_besuTransactions`. |
| **Validator logs** | Checked VMID 1000 and 1003 | 1000 logged "QBFT mining coordinator not starting while initial sync in progress", then "Starting QBFT mining coordinator following initial sync" and "Starting full sync"; it was then stopped by the pool clear. The **block production stall** is consistent with validators in/after full sync (or restarted and syncing again). See CRITICAL_ISSUE_BLOCK_PRODUCTION_STOPPED.md. |
| **resolve script** | SOURCE_PROJECT | Updated the script to use `PROJECT_ROOT_SCRIPT/smom-dbis-138` when present so it runs from the proxmox repo. |

**RPC 2308 (behind ~175k blocks):** No automatic fix run. Options: (1) let it sync (if it is catching up), (2) check logs on VMID 2308 for errors, (3) restart the container if sync is stuck. See `bash ./scripts/health/check-rpc-vms-health.sh` for the current block per node.

---

## 9. Status to continue (updated)

| Check | Result | Action |
|-------|--------|--------|
| RPC 2101 | Healthy | — |
| Tx pool (2101) | May repopulate | If mint fails with "Replacement transaction underpriced", run mint with `GAS_PRICE_138=500000000000` (500 gwei). |
| Validators 1000–1004 | 1004 **restarted** (2026-03-04) | All 5 active after restart; if 1004 fails again, restart it on ML110. |
| Block production | **Stalled** (blocker) | Mint tx accepted at 500 gwei, but confirmation times out until blocks advance. Run `./scripts/monitoring/monitor-blockchain-health.sh`; when blocks advance, re-run mint. |

**Continue run (2026-03-04):** Validator 1004 restarted; `clear-all-transaction-pools.sh` completed (1000–1004, 2101, 2201). The mint script was updated to use `GAS_PRICE_138`; at 500 gwei the first mint tx was accepted by the RPC but timed out waiting for confirmation (blocks not advancing). **Next:** When blocks advance, run `cd smom-dbis-138 && ./scripts/mint-for-liquidity.sh` (optionally with `GAS_PRICE_138=500000000000`).

**Continue with:** [CORE_RPC_2101_2102_TXPOOL_ADMIN_STATUS.md](../CORE_RPC_2101_2102_TXPOOL_ADMIN_STATUS.md) §7–8 and [REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE.md](../../03-deployment/REMAINING_DEPLOYMENTS_FOR_FULL_NETWORK_COVERAGE.md) "Status to continue".

@@ -0,0 +1,47 @@

# UDM Pro check — 2026-03-03

**Checked from:** ASERET (192.168.11.23), LAN.

---

## Summary

| Check | Result |
|-------|--------|
| **Gateway** | 192.168.11.1 reachable (ping OK) |
| **UDM Pro management** | https://192.168.11.1:443 → **HTTP 200** (UniFi controller) |
| **Public IP from LAN** | https://76.53.10.36:443 → **timeout (000)** — expected without NAT hairpin |
| **NPMplus internal** | 192.168.11.166 / 192.168.11.167:80,443 — not reachable from this host (timeout) |

---

## Expected port forwarding (manual verification in UniFi UI)

In **UniFi Network** → **Settings** → **Firewall & Security** → **Port Forwarding**, confirm:

| Rule | Destination IP | Dest Port | Forward to IP | Forward to Port | Protocol |
|------|-----------------|-----------|---------------|-----------------|----------|
| NPMplus HTTPS | 76.53.10.36 | 443 | 192.168.11.167 | 443 | TCP |
| NPMplus HTTP | 76.53.10.36 | 80 | 192.168.11.167 | 80 | TCP |

**Verified 2026-03-03 (screenshot):** The UI shows **Nginx HTTP** and **Nginx HTTPS** on 76.53.10.36 → 192.168.11.167:80 and :443. Also present: 76.53.10.38→.169 (Alltra/HYBX), 76.53.10.40→.170/.60 (Dev), 76.53.10.41→.171 (Mifos). Full table: [UDM_PRO_PORT_FORWARDING_SNAPSHOT_20260303.md](UDM_PRO_PORT_FORWARDING_SNAPSHOT_20260303.md).

---

## Interpretation

- **UDM Pro device:** Online and responding; management at https://192.168.11.1 works.
- **Public URL from LAN:** Traffic to 76.53.10.36 from 192.168.11.23 times out — typical when **NAT hairpin (loopback)** is disabled. Enable it in UniFi if explorer.d-bis.org should work from the LAN without a hosts entry.
- **External access:** Test from a device off the LAN (e.g. a phone on cellular): if https://explorer.d-bis.org works there, the port forward and NPMplus are correct and the issue is LAN-only (hairpin).
- **Prior run (2026-02-07):** From another host, internal and public tests all passed — so the port forward and NPMplus were working from that segment.

---

## Manual steps

1. Open **https://192.168.11.1** in a browser (on the LAN).
2. Go to **Settings** → **Firewall & Security** → **Port Forwarding**.
3. Confirm the two rules above exist and are enabled.
4. (Optional) Look for **NAT loopback** / **Hairpin NAT** and enable it so LAN clients can reach 76.53.10.36.

Script: `bash scripts/verify/verify-udm-pro-port-forwarding.sh` (runs connectivity tests and writes evidence to `verification-evidence/udm-pro-verification-*`).
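The connectivity tests above reduce to checking the HTTP status code with a timeout; this minimal helper shows the pattern (the verify script's exact flags may differ, and `000` is curl's code when no HTTP response arrives at all):

```shell
# Print "<code> <url>"; 000 means no HTTP response (timeout, refused, no route).
check() { printf '%s %s\n' "$(curl -kso /dev/null --max-time 5 -w '%{http_code}' "$1")" "$1"; }

# check https://192.168.11.1    # expect 200 (UniFi controller)
# check https://76.53.10.36     # 000 from LAN while hairpin NAT is off
check https://example.invalid   # .invalid never resolves, so this prints 000
```
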
@@ -0,0 +1,35 @@

# UDM Pro port forwarding — verified snapshot 2026-03-03

**Source:** Screenshot from UniFi Network → Settings → Firewall & Security → Port Forwarding.
**Interface:** Internet 1. Protocol: TCP/UDP. Source: Any.

---

## Rules (as shown in UI)

| Name | External WAN IP | Ext Port | Forward to (Internal) | Int Port |
|------|------------------|----------|------------------------|----------|
| Nginx HTTP | 76.53.10.36 | 80 | 192.168.11.167 | 80 |
| Nginx HTTPS | 76.53.10.36 | 443 | 192.168.11.167 | 443 |
| NPMplus Alltra/HYBX HTTP | 76.53.10.38 | 80 | 192.168.11.169 | 80 |
| NPMplus Alltra/HYBX HTTPS | 76.53.10.38 | 443 | 192.168.11.169 | 443 |
| NPMplus Alltra/HYBX Admin | 76.53.10.38 | 81 | 192.168.11.169 | 81 |
| NPMplus Dev (HTTP/HTTPS/Admin) | 76.53.10.40 | 80, 443, 81 | 192.168.11.170 | 80, 443, 81 |
| NPMplus Dev (SSH) | 76.53.10.40 | 22 | 192.168.11.60 | 22 |
| NPMplus Dev (port 3000) | 76.53.10.40 | 3000 | 192.168.11.60 | 3000 |
| NPMplus Mifos HTTP | 76.53.10.41 | 80 | 192.168.11.171 | 80 |
| NPMplus Mifos HTTPS | 76.53.10.41 | 443 | 192.168.11.171 | 443 |
| NPMplus Mifos Admin | 76.53.10.41 | 81 | 192.168.11.171 | 81 |

---

## Verification vs docs

| Item | Doc expectation | Snapshot | Status |
|------|-----------------|----------|--------|
| Explorer / Nginx | 76.53.10.36:80/443 → 192.168.11.167 | Nginx HTTP/HTTPS → .167:80, .167:443 | ✅ Match |
| Alltra/HYBX | 76.53.10.38 → 192.168.11.169 (80, 81, 443) | NPMplus Alltra/HYBX → .169:80, .169:443, .169:81 | ✅ Match |
| Mifos | 76.53.10.41 → 192.168.11.171 | NPMplus Mifos → .171:80, .171:443, .171:81 | ✅ Match |
| Dev | 76.53.10.40 → .170 (and .60 for SSH/3000) | NPMplus Dev → .170 (80, 443, 81), .60 (22, 3000) | ✅ Match |

**Conclusion:** Port forwarding for the explorer (76.53.10.36 → 192.168.11.167) and for Alltra/HYBX, Mifos, and Dev is correctly configured. The explorer timeout from the LAN does not indicate a broken forward; enable NAT loopback on the UDM Pro if LAN clients should reach 76.53.10.36 without a hosts entry.

@@ -0,0 +1,380 @@
[
  {
    "vmid": 2101,
    "hostname": "besu-rpc-core-1",
    "host": "r630-01",
    "host_ip": "192.168.11.11",
    "expected_ip": "192.168.11.211",
    "actual_ip": "192.168.11.211",
    "status": "running",
    "has_nginx": false,
    "service_type": "besu",
    "config_path": "8545,8546",
    "public_domains": ["rpc-http-prv.d-bis.org", "rpc-ws-prv.d-bis.org"],
    "services": [{"name": "besu-rpc", "type": "direct", "status": "running"}],
    "listening_ports": [
      {"port": 8545, "protocol": "tcp", "process": "besu"},
      {"port": 8546, "protocol": "tcp", "process": "besu"}
    ],
    "health_endpoints": [
      {"path": "http://192.168.11.211:8545", "expected_code": 200, "actual_code": 200, "status": "pass"}
    ],
    "verified_at": "2026-03-02T14:21:42-08:00"
  },
  {
    "vmid": 7810,
    "hostname": "mim-web-1",
    "host": "r630-02",
    "host_ip": "192.168.11.12",
    "expected_ip": "192.168.11.37",
    "actual_ip": "192.168.11.37",
    "status": "running",
    "has_nginx": true,
    "service_type": "nginx",
    "config_path": "/etc/nginx/sites-available/mim4u",
    "public_domains": ["mim4u.org", "www.mim4u.org", "secure.mim4u.org", "training.mim4u.org"],
    "services": [{"name": "nginx", "type": "systemd", "status": "active"}],
    "listening_ports": [],
    "health_endpoints": [
      {"path": "http://192.168.11.37:80", "expected_code": 200, "actual_code": 200, "status": "pass"}
    ],
    "verified_at": "2026-03-02T14:21:51-08:00"
  },
  {
    "vmid": 10150,
    "hostname": "dbis-api-primary",
    "host": "r630-01",
    "host_ip": "192.168.11.11",
    "expected_ip": "192.168.11.155",
    "actual_ip": "192.168.11.155",
    "status": "running",
    "has_nginx": false,
    "service_type": "nodejs",
    "config_path": "3000",
    "public_domains": ["dbis-api.d-bis.org"],
    "services": [{"name": "nodejs-api", "type": "systemd", "status": "running"}],
    "listening_ports": [{"port": 3000, "protocol": "tcp", "process": "nodejs"}],
    "health_endpoints": [
      {"path": "http://192.168.11.155:3000", "expected_code": 200, "actual_code": 0, "status": "fail"}
    ],
    "verified_at": "2026-03-02T14:22:03-08:00"
  },
  {
    "vmid": 10151,
    "hostname": "dbis-api-secondary",
    "host": "r630-01",
    "host_ip": "192.168.11.11",
    "expected_ip": "192.168.11.156",
    "actual_ip": "192.168.11.156",
    "status": "running",
    "has_nginx": false,
    "service_type": "nodejs",
    "config_path": "3000",
    "public_domains": ["dbis-api-2.d-bis.org"],
    "services": [{"name": "nodejs-api", "type": "systemd", "status": "running"}],
    "listening_ports": [{"port": 3000, "protocol": "tcp", "process": "nodejs"}],
    "health_endpoints": [
      {"path": "http://192.168.11.156:3000", "expected_code": 200, "actual_code": 0, "status": "fail"}
    ],
    "verified_at": "2026-03-02T14:22:13-08:00"
  },
  {
    "vmid": 2201,
    "hostname": "besu-rpc-public-1",
    "host": "r630-02",
    "host_ip": "192.168.11.12",
    "expected_ip": "192.168.11.221",
    "actual_ip": "192.168.11.221",
    "status": "running",
    "has_nginx": false,
    "service_type": "besu",
    "config_path": "8545,8546",
    "public_domains": ["rpc-http-pub.d-bis.org", "rpc-ws-pub.d-bis.org"],
    "services": [{"name": "besu-rpc", "type": "direct", "status": "running"}],
    "listening_ports": [
      {"port": 8545, "protocol": "tcp", "process": "besu"},
      {"port": 8546, "protocol": "tcp", "process": "besu"}
    ],
    "health_endpoints": [
      {"path": "http://192.168.11.221:8545", "expected_code": 200, "actual_code": 200, "status": "pass"}
    ],
    "verified_at": "2026-03-02T14:22:22-08:00"
  },
  {
    "vmid": 2400,
    "hostname": "thirdweb-rpc-1",
    "host": "ml110",
    "host_ip": "192.168.11.10",
    "expected_ip": "192.168.11.240",
    "actual_ip": "192.168.11.240",
    "status": "running",
    "has_nginx": true,
    "service_type": "nginx",
    "config_path": "/etc/nginx/sites-available/rpc-thirdweb",
    "public_domains": ["rpc.public-0138.defi-oracle.io"],
    "services": [{"name": "nginx", "type": "systemd", "status": "active"}],
    "listening_ports": [],
    "health_endpoints": [
      {"path": "http://192.168.11.240:80", "expected_code": 200, "actual_code": 404, "status": "fail"}
    ],
    "verified_at": "2026-03-02T14:22:34-08:00"
  },
  {
    "vmid": 5800,
    "hostname": "mifos",
    "host": "r630-02",
    "host_ip": "192.168.11.12",
    "expected_ip": "192.168.11.85",
    "actual_ip": "192.168.11.85",
    "status": "running",
    "has_nginx": false,
    "service_type": "web",
    "config_path": "-",
    "public_domains": ["mifos.d-bis.org"],
    "services": [{"name": "http", "type": "direct", "status": "running"}],
    "listening_ports": [{"port": 80, "protocol": "tcp", "process": "http"}],
    "health_endpoints": [
      {"path": "http://192.168.11.85:80", "expected_code": 200, "actual_code": 0, "status": "fail"}
    ],
    "verified_at": "2026-03-02T14:22:40-08:00"
  },
  {
    "vmid": 5801,
    "hostname": "dapp-smom",
    "host": "r630-02",
    "host_ip": "192.168.11.12",
    "expected_ip": "192.168.11.58",
    "actual_ip": "192.168.11.58",
    "status": "running",
    "has_nginx": false,
    "service_type": "web",
    "config_path": "-",
    "public_domains": ["dapp.d-bis.org"],
    "services": [{"name": "http", "type": "direct", "status": "running"}],
    "listening_ports": [{"port": 80, "protocol": "tcp", "process": "http"}],
    "health_endpoints": [
      {"path": "http://192.168.11.58:80", "expected_code": 200, "actual_code": 200, "status": "pass"}
    ],
    "verified_at": "2026-03-02T14:22:46-08:00"
  },
  {
    "vmid": 10130,
    "hostname": "dbis-frontend",
    "host": "r630-01",
    "host_ip": "192.168.11.11",
    "expected_ip": "192.168.11.130",
    "actual_ip": "192.168.11.130",
    "status": "running",
    "has_nginx": false,
    "service_type": "web",
    "config_path": "/etc/nginx/sites-available/dbis-frontend",
    "public_domains": ["dbis-admin.d-bis.org", "secure.d-bis.org"],
    "services": [{"name": "http", "type": "direct", "status": "running"}],
    "listening_ports": [{"port": 80, "protocol": "tcp", "process": "http"}],
    "health_endpoints": [
      {"path": "http://192.168.11.130:80", "expected_code": 200, "actual_code": 0, "status": "fail"}
    ],
    "verified_at": "2026-03-02T14:22:56-08:00"
  },
  {
    "vmid": 5000,
    "hostname": "blockscout-1",
    "host": "r630-02",
    "host_ip": "192.168.11.12",
    "expected_ip": "192.168.11.140",
    "actual_ip": "192.168.11.140",
    "status": "running",
    "has_nginx": true,
    "service_type": "nginx",
    "config_path": "/etc/nginx/sites-available/blockscout",
    "public_domains": ["explorer.d-bis.org"],
    "services": [{"name": "nginx", "type": "systemd", "status": "active"}],
    "listening_ports": [],
    "health_endpoints": [
      {"path": "http://192.168.11.140:80", "expected_code": 200, "actual_code": 301, "status": "pass"}
    ],
    "verified_at": "2026-03-02T14:23:06-08:00"
  }
]
@@ -0,0 +1,95 @@

# Backend VMs Verification Report

**Date**: 2026-03-02T14:23:06-08:00
**Verifier**: intlc

## Summary

Total VMs verified: 10

## VM Verification Results

### VMID 2101: besu-rpc-core-1
- Status: running
- Expected IP: 192.168.11.211
- Actual IP: 192.168.11.211
- Has Nginx: false
- Details: See `vmid_2101_verification.json`

### VMID 7810: mim-web-1
- Status: running
- Expected IP: 192.168.11.37
- Actual IP: 192.168.11.37
- Has Nginx: true
- Details: See `vmid_7810_verification.json`

### VMID 10150: dbis-api-primary
- Status: running
- Expected IP: 192.168.11.155
- Actual IP: 192.168.11.155
- Has Nginx: false
- Details: See `vmid_10150_verification.json`

### VMID 10151: dbis-api-secondary
- Status: running
- Expected IP: 192.168.11.156
- Actual IP: 192.168.11.156
- Has Nginx: false
- Details: See `vmid_10151_verification.json`

### VMID 2201: besu-rpc-public-1
- Status: running
- Expected IP: 192.168.11.221
- Actual IP: 192.168.11.221
- Has Nginx: false
- Details: See `vmid_2201_verification.json`

### VMID 2400: thirdweb-rpc-1
- Status: running
- Expected IP: 192.168.11.240
- Actual IP: 192.168.11.240
- Has Nginx: true
- Details: See `vmid_2400_verification.json`

### VMID 5800: mifos
- Status: running
- Expected IP: 192.168.11.85
- Actual IP: 192.168.11.85
- Has Nginx: false
- Details: See `vmid_5800_verification.json`

### VMID 5801: dapp-smom
- Status: running
- Expected IP: 192.168.11.58
- Actual IP: 192.168.11.58
- Has Nginx: false
- Details: See `vmid_5801_verification.json`

### VMID 10130: dbis-frontend
- Status: running
- Expected IP: 192.168.11.130
- Actual IP: 192.168.11.130
- Has Nginx: false
- Details: See `vmid_10130_verification.json`

### VMID 5000: blockscout-1
- Status: running
- Expected IP: 192.168.11.140
- Actual IP: 192.168.11.140
- Has Nginx: true
- Details: See `vmid_5000_verification.json`

## Files Generated

- `all_vms_verification.json` - Complete VM verification results
- `vmid_*_verification.json` - Individual VM verification details
- `vmid_*_listening_ports.txt` - Listening ports output per VM
- `verification_report.md` - This report

## Next Steps

1. Review verification results for each VM
2. Investigate any VMs with mismatched IPs or failed health checks
3. Document any missing nginx config paths
4. Update source-of-truth JSON after verification
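Step 2 can be automated by filtering the generated results for failed health checks. A sketch assuming `jq` is installed, demonstrated here against an inline two-VM sample (the real run would read `all_vms_verification.json`):

```shell
# List VMID and hostname for every VM with a failing health endpoint.
list_failing() {
  jq -r '.[] | select(any(.health_endpoints[]; .status == "fail"))
             | "\(.vmid)\t\(.hostname)"'
}

# Real run: list_failing < all_vms_verification.json
list_failing <<'EOF'
[{"vmid": 10150, "hostname": "dbis-api-primary",
  "health_endpoints": [{"status": "fail"}]},
 {"vmid": 2101, "hostname": "besu-rpc-core-1",
  "health_endpoints": [{"status": "pass"}]}]
EOF
```

Against the results above this would flag 10150, 10151, 2400, 5800, and 10130.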
@@ -0,0 +1,2 @@
LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:* users:(("systemd-resolve",pid=130,fd=14))
LISTEN 0 5 0.0.0.0:80 0.0.0.0:* users:(("python3",pid=1006,fd=3))
@@ -0,0 +1,17 @@
{
  "vmid": 10130,
  "hostname": "dbis-frontend",
  "host": "r630-01",
  "host_ip": "192.168.11.11",
  "expected_ip": "192.168.11.130",
  "actual_ip": "192.168.11.130",
  "status": "running",
  "has_nginx": false,
  "service_type": "web",
  "config_path": "/etc/nginx/sites-available/dbis-frontend",
  "public_domains": ["dbis-admin.d-bis.org","secure.d-bis.org"],
  "services": [{"name":"http","type":"direct","status":"running"}],
  "listening_ports": [{"port":80,"protocol":"tcp","process":"http"}],
  "health_endpoints": [{"path":"http://192.168.11.130:80","expected_code":200,"actual_code":0,"status":"fail"}],
  "verified_at": "2026-03-02T14:22:56-08:00"
}
@@ -0,0 +1,13 @@
LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=203,fd=10),("nginx",pid=202,fd=10),("nginx",pid=201,fd=10),("nginx",pid=198,fd=10),("nginx",pid=197,fd=10),("nginx",pid=196,fd=10),("nginx",pid=195,fd=10),("nginx",pid=194,fd=10),("nginx",pid=193,fd=10),("nginx",pid=192,fd=10),("nginx",pid=191,fd=10),("nginx",pid=189,fd=10),("nginx",pid=188,fd=10),("nginx",pid=187,fd=10),("nginx",pid=186,fd=10),("nginx",pid=185,fd=10),("nginx",pid=184,fd=10),("nginx",pid=183,fd=10),("nginx",pid=182,fd=10),("nginx",pid=181,fd=10),("nginx",pid=180,fd=10),("nginx",pid=179,fd=10),("nginx",pid=178,fd=10),("nginx",pid=177,fd=10),("nginx",pid=176,fd=10),("nginx",pid=175,fd=10),("nginx",pid=174,fd=10),("nginx",pid=173,fd=10),("nginx",pid=172,fd=10),("nginx",pid=171,fd=10),("nginx",pid=170,fd=10),("nginx",pid=169,fd=10),("nginx",pid=168,fd=10),("nginx",pid=167,fd=10),("nginx",pid=166,fd=10),("nginx",pid=165,fd=10),("nginx",pid=164,fd=10),("nginx",pid=163,fd=10),("nginx",pid=162,fd=10),("nginx",pid=161,fd=10),("nginx",pid=160,fd=10),("nginx",pid=159,fd=10),("nginx",pid=158,fd=10),("nginx",pid=157,fd=10),("nginx",pid=156,fd=10),("nginx",pid=154,fd=10),("nginx",pid=153,fd=10),("nginx",pid=152,fd=10),("nginx",pid=151,fd=10),("nginx",pid=150,fd=10),("nginx",pid=149,fd=10),("nginx",pid=148,fd=10),("nginx",pid=147,fd=10),("nginx",pid=146,fd=10),("nginx",pid=144,fd=10),("nginx",pid=143,fd=10),("nginx",pid=142,fd=10))
LISTEN 0 511 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=203,fd=12),("nginx",pid=202,fd=12),("nginx",pid=201,fd=12),("nginx",pid=198,fd=12),("nginx",pid=197,fd=12),("nginx",pid=196,fd=12),("nginx",pid=195,fd=12),("nginx",pid=194,fd=12),("nginx",pid=193,fd=12),("nginx",pid=192,fd=12),("nginx",pid=191,fd=12),("nginx",pid=189,fd=12),("nginx",pid=188,fd=12),("nginx",pid=187,fd=12),("nginx",pid=186,fd=12),("nginx",pid=185,fd=12),("nginx",pid=184,fd=12),("nginx",pid=183,fd=12),("nginx",pid=182,fd=12),("nginx",pid=181,fd=12),("nginx",pid=180,fd=12),("nginx",pid=179,fd=12),("nginx",pid=178,fd=12),("nginx",pid=177,fd=12),("nginx",pid=176,fd=12),("nginx",pid=175,fd=12),("nginx",pid=174,fd=12),("nginx",pid=173,fd=12),("nginx",pid=172,fd=12),("nginx",pid=171,fd=12),("nginx",pid=170,fd=12),("nginx",pid=169,fd=12),("nginx",pid=168,fd=12),("nginx",pid=167,fd=12),("nginx",pid=166,fd=12),("nginx",pid=165,fd=12),("nginx",pid=164,fd=12),("nginx",pid=163,fd=12),("nginx",pid=162,fd=12),("nginx",pid=161,fd=12),("nginx",pid=160,fd=12),("nginx",pid=159,fd=12),("nginx",pid=158,fd=12),("nginx",pid=157,fd=12),("nginx",pid=156,fd=12),("nginx",pid=154,fd=12),("nginx",pid=153,fd=12),("nginx",pid=152,fd=12),("nginx",pid=151,fd=12),("nginx",pid=150,fd=12),("nginx",pid=149,fd=12),("nginx",pid=148,fd=12),("nginx",pid=147,fd=12),("nginx",pid=146,fd=12),("nginx",pid=144,fd=12),("nginx",pid=143,fd=12),("nginx",pid=142,fd=12))
LISTEN 0 5 127.0.0.1:8888 0.0.0.0:* users:(("python3",pid=108,fd=3))
LISTEN 0 100 127.0.0.1:25 0.0.0.0:* users:(("master",pid=370,fd=13))
LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:* users:(("systemd-resolve",pid=97,fd=14))
|
||||
LISTEN 0 4096 *:30303 *:* users:(("java",pid=14903,fd=359))
|
||||
LISTEN 0 4096 *:9545 *:* users:(("java",pid=14903,fd=356))
|
||||
LISTEN 0 4096 *:8545 *:* users:(("java",pid=14903,fd=357))
|
||||
LISTEN 0 4096 *:8546 *:* users:(("java",pid=14903,fd=358))
|
||||
LISTEN 0 100 [::1]:25 [::]:* users:(("master",pid=370,fd=14))
|
||||
LISTEN 0 511 [::]:80 [::]:* users:(("nginx",pid=203,fd=11),("nginx",pid=202,fd=11),("nginx",pid=201,fd=11),("nginx",pid=198,fd=11),("nginx",pid=197,fd=11),("nginx",pid=196,fd=11),("nginx",pid=195,fd=11),("nginx",pid=194,fd=11),("nginx",pid=193,fd=11),("nginx",pid=192,fd=11),("nginx",pid=191,fd=11),("nginx",pid=189,fd=11),("nginx",pid=188,fd=11),("nginx",pid=187,fd=11),("nginx",pid=186,fd=11),("nginx",pid=185,fd=11),("nginx",pid=184,fd=11),("nginx",pid=183,fd=11),("nginx",pid=182,fd=11),("nginx",pid=181,fd=11),("nginx",pid=180,fd=11),("nginx",pid=179,fd=11),("nginx",pid=178,fd=11),("nginx",pid=177,fd=11),("nginx",pid=176,fd=11),("nginx",pid=175,fd=11),("nginx",pid=174,fd=11),("nginx",pid=173,fd=11),("nginx",pid=172,fd=11),("nginx",pid=171,fd=11),("nginx",pid=170,fd=11),("nginx",pid=169,fd=11),("nginx",pid=168,fd=11),("nginx",pid=167,fd=11),("nginx",pid=166,fd=11),("nginx",pid=165,fd=11),("nginx",pid=164,fd=11),("nginx",pid=163,fd=11),("nginx",pid=162,fd=11),("nginx",pid=161,fd=11),("nginx",pid=160,fd=11),("nginx",pid=159,fd=11),("nginx",pid=158,fd=11),("nginx",pid=157,fd=11),("nginx",pid=156,fd=11),("nginx",pid=154,fd=11),("nginx",pid=153,fd=11),("nginx",pid=152,fd=11),("nginx",pid=151,fd=11),("nginx",pid=150,fd=11),("nginx",pid=149,fd=11),("nginx",pid=148,fd=11),("nginx",pid=147,fd=11),("nginx",pid=146,fd=11),("nginx",pid=144,fd=11),("nginx",pid=143,fd=11),("nginx",pid=142,fd=11))
LISTEN 0 4096 *:22 *:* users:(("systemd",pid=1,fd=41))
LISTEN 0 511 [::]:443 [::]:* users:(("nginx",pid=203,fd=13),("nginx",pid=202,fd=13),("nginx",pid=201,fd=13),("nginx",pid=198,fd=13),("nginx",pid=197,fd=13),("nginx",pid=196,fd=13),("nginx",pid=195,fd=13),("nginx",pid=194,fd=13),("nginx",pid=193,fd=13),("nginx",pid=192,fd=13),("nginx",pid=191,fd=13),("nginx",pid=189,fd=13),("nginx",pid=188,fd=13),("nginx",pid=187,fd=13),("nginx",pid=186,fd=13),("nginx",pid=185,fd=13),("nginx",pid=184,fd=13),("nginx",pid=183,fd=13),("nginx",pid=182,fd=13),("nginx",pid=181,fd=13),("nginx",pid=180,fd=13),("nginx",pid=179,fd=13),("nginx",pid=178,fd=13),("nginx",pid=177,fd=13),("nginx",pid=176,fd=13),("nginx",pid=175,fd=13),("nginx",pid=174,fd=13),("nginx",pid=173,fd=13),("nginx",pid=172,fd=13),("nginx",pid=171,fd=13),("nginx",pid=170,fd=13),("nginx",pid=169,fd=13),("nginx",pid=168,fd=13),("nginx",pid=167,fd=13),("nginx",pid=166,fd=13),("nginx",pid=165,fd=13),("nginx",pid=164,fd=13),("nginx",pid=163,fd=13),("nginx",pid=162,fd=13),("nginx",pid=161,fd=13),("nginx",pid=160,fd=13),("nginx",pid=159,fd=13),("nginx",pid=158,fd=13),("nginx",pid=157,fd=13),("nginx",pid=156,fd=13),("nginx",pid=154,fd=13),("nginx",pid=153,fd=13),("nginx",pid=152,fd=13),("nginx",pid=151,fd=13),("nginx",pid=150,fd=13),("nginx",pid=149,fd=13),("nginx",pid=148,fd=13),("nginx",pid=147,fd=13),("nginx",pid=146,fd=13),("nginx",pid=144,fd=13),("nginx",pid=143,fd=13),("nginx",pid=142,fd=13))
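The listing above (and the similar blocks below) looks like `ss -tlnp` output. As an illustrative sketch, not part of the commit, one such LISTEN line can be pulled apart into the port and the owning process:

```python
import re

# One LISTEN line in the format shown above, shortened to a single nginx worker.
line = 'LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=203,fd=10))'

# Column 4 is the local address; the owning processes follow the users:((...)) marker.
cols = line.split()
port = int(cols[3].rsplit(":", 1)[1])
proc = re.search(r'users:\(\("([^"]+)",pid=(\d+)', line)
print(port, proc.group(1), proc.group(2))  # 80 nginx 203
```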
@@ -0,0 +1,17 @@
{
  "vmid": 2201,
  "hostname": "besu-rpc-public-1",
  "host": "r630-02",
  "host_ip": "192.168.11.12",
  "expected_ip": "192.168.11.221",
  "actual_ip": "192.168.11.221",
  "status": "running",
  "has_nginx": false,
  "service_type": "besu",
  "config_path": "8545,8546",
  "public_domains": ["rpc-http-pub.d-bis.org","rpc-ws-pub.d-bis.org"],
  "services": [{"name":"besu-rpc","type":"direct","status":"running"}],
  "listening_ports": [{"port":8545,"protocol":"tcp","process":"besu"},{"port":8546,"protocol":"tcp","process":"besu"}],
  "health_endpoints": [{"path":"http://192.168.11.221:8545","expected_code":200,"actual_code":200,"status":"pass"}],
  "verified_at": "2026-03-02T14:22:22-08:00"
}
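Verification records like the one above mark a health endpoint `pass` or `fail` by comparing `expected_code` with the observed `actual_code`. A minimal sketch of that check (illustrative only — the helper name is hypothetical; the 3xx allowance mirrors the blockscout-1 record later in this diff, where a 301 counts as a pass):

```python
import json

# A record in the shape of the verification file above (abbreviated).
record = json.loads('''
{
  "hostname": "besu-rpc-public-1",
  "health_endpoints": [
    {"path": "http://192.168.11.221:8545", "expected_code": 200, "actual_code": 200}
  ]
}
''')

def endpoint_status(ep):
    # Hypothetical helper: an exact match passes; a 3xx redirect is also
    # treated as reachable.
    actual, expected = ep["actual_code"], ep["expected_code"]
    return "pass" if actual == expected or 300 <= actual < 400 else "fail"

statuses = [endpoint_status(ep) for ep in record["health_endpoints"]]
print(statuses)  # ['pass']
```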
@@ -0,0 +1,14 @@
LISTEN 0 4096 127.0.0.1:20241 0.0.0.0:* users:(("cloudflared",pid=543700,fd=3))
LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=186,fd=7),("nginx",pid=185,fd=7),("nginx",pid=184,fd=7),("nginx",pid=183,fd=7),("nginx",pid=182,fd=7))
LISTEN 0 511 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=186,fd=9),("nginx",pid=185,fd=9),("nginx",pid=184,fd=9),("nginx",pid=183,fd=9),("nginx",pid=182,fd=9))
LISTEN 0 100 127.0.0.1:25 0.0.0.0:* users:(("master",pid=334,fd=13))
LISTEN 0 511 *:9645 *:* users:(("node",pid=170005,fd=21))
LISTEN 0 511 *:9646 *:* users:(("node",pid=170005,fd=20))
LISTEN 0 4096 *:9547 *:* users:(("java",pid=118,fd=350))
LISTEN 0 4096 *:8545 *:* users:(("java",pid=118,fd=351))
LISTEN 0 4096 *:8546 *:* users:(("java",pid=118,fd=352))
LISTEN 0 100 [::1]:25 [::]:* users:(("master",pid=334,fd=14))
LISTEN 0 511 [::]:80 [::]:* users:(("nginx",pid=186,fd=8),("nginx",pid=185,fd=8),("nginx",pid=184,fd=8),("nginx",pid=183,fd=8),("nginx",pid=182,fd=8))
LISTEN 0 4096 *:22 *:* users:(("sshd",pid=194,fd=3),("systemd",pid=1,fd=42))
LISTEN 0 511 [::]:443 [::]:* users:(("nginx",pid=186,fd=10),("nginx",pid=185,fd=10),("nginx",pid=184,fd=10),("nginx",pid=183,fd=10),("nginx",pid=182,fd=10))
LISTEN 0 4096 *:30303 *:* users:(("java",pid=118,fd=353))
@@ -0,0 +1,17 @@
{
  "vmid": 2400,
  "hostname": "thirdweb-rpc-1",
  "host": "ml110",
  "host_ip": "192.168.11.10",
  "expected_ip": "192.168.11.240",
  "actual_ip": "192.168.11.240",
  "status": "running",
  "has_nginx": true,
  "service_type": "nginx",
  "config_path": "/etc/nginx/sites-available/rpc-thirdweb",
  "public_domains": ["rpc.public-0138.defi-oracle.io"],
  "services": [{"name":"nginx","type":"systemd","status":"active"}],
  "listening_ports": [],
  "health_endpoints": [{"path":"http://192.168.11.240:80","expected_code":200,"actual_code":404,"status":"fail"}],
  "verified_at": "2026-03-02T14:22:34-08:00"
}
@@ -0,0 +1,13 @@
LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:* users:(("systemd-resolve",pid=98,fd=14))
LISTEN 0 4096 127.0.0.1:34981 0.0.0.0:* users:(("containerd",pid=122,fd=8))
LISTEN 0 100 127.0.0.1:25 0.0.0.0:* users:(("master",pid=470,fd=13))
LISTEN 0 4096 0.0.0.0:4000 0.0.0.0:* users:(("docker-proxy",pid=1640248,fd=7))
LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=1628657,fd=9),("nginx",pid=1628656,fd=9),("nginx",pid=1628655,fd=9),("nginx",pid=1628654,fd=9),("nginx",pid=1628653,fd=9))
LISTEN 0 511 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=1628657,fd=11),("nginx",pid=1628656,fd=11),("nginx",pid=1628655,fd=11),("nginx",pid=1628654,fd=11),("nginx",pid=1628653,fd=11))
LISTEN 0 100 [::1]:25 [::]:* users:(("master",pid=470,fd=14))
LISTEN 0 4096 [::]:4000 [::]:* users:(("docker-proxy",pid=1640254,fd=7))
LISTEN 0 511 *:3001 *:* users:(("node",pid=1676505,fd=18))
LISTEN 0 511 [::]:80 [::]:* users:(("nginx",pid=1628657,fd=10),("nginx",pid=1628656,fd=10),("nginx",pid=1628655,fd=10),("nginx",pid=1628654,fd=10),("nginx",pid=1628653,fd=10))
LISTEN 0 4096 *:22 *:* users:(("systemd",pid=1,fd=39))
LISTEN 0 511 [::]:443 [::]:* users:(("nginx",pid=1628657,fd=12),("nginx",pid=1628656,fd=12),("nginx",pid=1628655,fd=12),("nginx",pid=1628654,fd=12),("nginx",pid=1628653,fd=12))
LISTEN 0 4096 *:8081 *:* users:(("explorer-config",pid=116,fd=5))
@@ -0,0 +1,17 @@
{
  "vmid": 5000,
  "hostname": "blockscout-1",
  "host": "r630-02",
  "host_ip": "192.168.11.12",
  "expected_ip": "192.168.11.140",
  "actual_ip": "192.168.11.140",
  "status": "running",
  "has_nginx": true,
  "service_type": "nginx",
  "config_path": "/etc/nginx/sites-available/blockscout",
  "public_domains": ["explorer.d-bis.org"],
  "services": [{"name":"nginx","type":"systemd","status":"active"}],
  "listening_ports": [],
  "health_endpoints": [{"path":"http://192.168.11.140:80","expected_code":200,"actual_code":301,"status":"pass"}],
  "verified_at": "2026-03-02T14:23:06-08:00"
}
@@ -0,0 +1,8 @@
LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:* users:(("systemd-resolve",pid=80,fd=14))
LISTEN 0 511 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=171,fd=6),("nginx",pid=170,fd=6),("nginx",pid=169,fd=6),("nginx",pid=168,fd=6),("nginx",pid=167,fd=6),("nginx",pid=166,fd=6),("nginx",pid=165,fd=6),("nginx",pid=164,fd=6),("nginx",pid=163,fd=6),("nginx",pid=162,fd=6),("nginx",pid=161,fd=6),("nginx",pid=160,fd=6),("nginx",pid=159,fd=6),("nginx",pid=158,fd=6),("nginx",pid=157,fd=6),("nginx",pid=156,fd=6),("nginx",pid=155,fd=6),("nginx",pid=154,fd=6),("nginx",pid=153,fd=6),("nginx",pid=152,fd=6),("nginx",pid=151,fd=6),("nginx",pid=150,fd=6),("nginx",pid=149,fd=6),("nginx",pid=148,fd=6),("nginx",pid=147,fd=6),("nginx",pid=146,fd=6),("nginx",pid=145,fd=6),("nginx",pid=144,fd=6),("nginx",pid=143,fd=6),("nginx",pid=142,fd=6),("nginx",pid=141,fd=6),("nginx",pid=140,fd=6),("nginx",pid=139,fd=6),("nginx",pid=138,fd=6),("nginx",pid=137,fd=6),("nginx",pid=136,fd=6),("nginx",pid=135,fd=6),("nginx",pid=134,fd=6),("nginx",pid=133,fd=6),("nginx",pid=132,fd=6),("nginx",pid=131,fd=6),("nginx",pid=130,fd=6),("nginx",pid=129,fd=6),("nginx",pid=128,fd=6),("nginx",pid=127,fd=6),("nginx",pid=126,fd=6),("nginx",pid=125,fd=6),("nginx",pid=123,fd=6),("nginx",pid=122,fd=6),("nginx",pid=121,fd=6),("nginx",pid=120,fd=6),("nginx",pid=119,fd=6),("nginx",pid=118,fd=6),("nginx",pid=117,fd=6),("nginx",pid=116,fd=6),("nginx",pid=115,fd=6),("nginx",pid=114,fd=6))
LISTEN 0 4096 0.0.0.0:3308 0.0.0.0:* users:(("docker-proxy",pid=859,fd=8))
LISTEN 0 100 127.0.0.1:25 0.0.0.0:* users:(("master",pid=357,fd=13))
LISTEN 0 4096 127.0.0.1:20241 0.0.0.0:* users:(("cloudflared",pid=187,fd=3))
LISTEN 0 4096 *:22 *:* users:(("systemd",pid=1,fd=48))
LISTEN 0 4096 [::]:3308 [::]:* users:(("docker-proxy",pid=868,fd=8))
LISTEN 0 100 [::1]:25 [::]:* users:(("master",pid=357,fd=14))
@@ -0,0 +1,17 @@
{
  "vmid": 5800,
  "hostname": "mifos",
  "host": "r630-02",
  "host_ip": "192.168.11.12",
  "expected_ip": "192.168.11.85",
  "actual_ip": "192.168.11.85",
  "status": "running",
  "has_nginx": false,
  "service_type": "web",
  "config_path": "-",
  "public_domains": ["mifos.d-bis.org"],
  "services": [{"name":"http","type":"direct","status":"running"}],
  "listening_ports": [{"port":80,"protocol":"tcp","process":"http"}],
  "health_endpoints": [{"path":"http://192.168.11.85:80","expected_code":200,"actual_code":"000000","status":"fail"}],
  "verified_at": "2026-03-02T14:22:40-08:00"
}
@@ -0,0 +1,5 @@
LISTEN 0 100 127.0.0.1:25 0.0.0.0:* users:(("master",pid=300,fd=13))
LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=14267,fd=5),("nginx",pid=14266,fd=5),("nginx",pid=14265,fd=5),("nginx",pid=14264,fd=5),("nginx",pid=1727,fd=5))
LISTEN 0 100 [::1]:25 [::]:* users:(("master",pid=300,fd=14))
LISTEN 0 511 [::]:80 [::]:* users:(("nginx",pid=14267,fd=6),("nginx",pid=14266,fd=6),("nginx",pid=14265,fd=6),("nginx",pid=14264,fd=6),("nginx",pid=1727,fd=6))
LISTEN 0 4096 *:22 *:* users:(("sshd",pid=121,fd=3),("systemd",pid=1,fd=53))
@@ -0,0 +1,17 @@
{
  "vmid": 5801,
  "hostname": "dapp-smom",
  "host": "r630-02",
  "host_ip": "192.168.11.12",
  "expected_ip": "192.168.11.58",
  "actual_ip": "192.168.11.58",
  "status": "running",
  "has_nginx": false,
  "service_type": "web",
  "config_path": "-",
  "public_domains": ["dapp.d-bis.org"],
  "services": [{"name":"http","type":"direct","status":"running"}],
  "listening_ports": [{"port":80,"protocol":"tcp","process":"http"}],
  "health_endpoints": [{"path":"http://192.168.11.58:80","expected_code":200,"actual_code":200,"status":"pass"}],
  "verified_at": "2026-03-02T14:22:46-08:00"
}
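The array in the next file is a per-domain scan log. As a hedged sketch (not part of the commit; only the field names are taken from the records below), such records can be tallied into a pass/warn/fail summary:

```python
import json
from collections import Counter

# Two records in the shape of the scan log below (abbreviated); the real
# entries also carry dns and ssl sub-tests.
results = json.loads('''
[
  {"domain": "rpc-http-pub.d-bis.org", "tests": {"rpc_http": {"status": "pass"}}},
  {"domain": "mifos.d-bis.org", "tests": {"https": {"status": "warn", "http_code": 502}}}
]
''')

# Reduce each record to its worst sub-test status: fail > warn > pass.
rank = {"pass": 0, "warn": 1, "fail": 2}
summary = Counter(
    max((t["status"] for t in r["tests"].values()), key=rank.__getitem__)
    for r in results
)
print(dict(summary))  # {'pass': 1, 'warn': 1}
```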
@@ -0,0 +1,984 @@
[
  { "domain": "ws.rpc-fireblocks.d-bis.org", "domain_type": "rpc-ws", "timestamp": "2026-03-04T01:12:49-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "ws.rpc-fireblocks.d-bis.org", "issuer": "E8", "expires": "May 22 21:48:21 2026 GMT" }, "websocket": { "status": "pass", "http_code": "400", "full_test": true } } },
  { "domain": "dbis-admin.d-bis.org", "domain_type": "web", "timestamp": "2026-03-04T01:12:52-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "dbis-admin.d-bis.org", "issuer": "E8", "expires": "Apr 16 20:56:11 2026 GMT" }, "https": { "status": "warn", "http_code": 502, "response_time_seconds": 3.126302 } } },
  { "domain": "rpc-alltra-3.d-bis.org", "domain_type": "rpc-http", "timestamp": "2026-03-04T01:12:55-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "172.67.220.49", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "rpc_http": { "status": "fail", "http_code": "502", "error": "error code: 502" } } },
  { "domain": "mifos.d-bis.org", "domain_type": "web", "timestamp": "2026-03-04T01:12:56-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "172.67.220.49", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "https": { "status": "warn", "http_code": 502, "response_time_seconds": 0.125696 } } },
  { "domain": "rpc-hybx-2.d-bis.org", "domain_type": "rpc-http", "timestamp": "2026-03-04T01:12:56-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "172.67.220.49", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "rpc_http": { "status": "fail", "http_code": "502", "error": "error code: 502" } } },
  { "domain": "cacti-hybx.d-bis.org", "domain_type": "web", "timestamp": "2026-03-04T01:12:57-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "172.67.220.49", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "https": { "status": "warn", "http_code": 502, "response_time_seconds": 0.093134 } } },
  { "domain": "sankofa.nexus", "domain_type": "web", "timestamp": "2026-03-04T01:12:57-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "sankofa.nexus", "issuer": "E7", "expires": "Apr 16 20:58:17 2026 GMT" }, "https": { "status": "pass", "http_code": 200, "response_time_seconds": 0.089084, "has_hsts": true, "has_csp": true, "has_xfo": true } } },
  { "domain": "rpc-alltra.d-bis.org", "domain_type": "rpc-http", "timestamp": "2026-03-04T01:12:57-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "172.67.220.49", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "rpc_http": { "status": "fail", "http_code": "502", "error": "error code: 502" } } },
  { "domain": "rpc-http-pub.d-bis.org", "domain_type": "rpc-http", "timestamp": "2026-03-04T01:12:58-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "172.67.220.49", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "rpc_http": { "status": "pass", "chain_id": "0x8a" } } },
  { "domain": "rpc.public-0138.defi-oracle.io", "domain_type": "rpc-http", "timestamp": "2026-03-04T01:12:58-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "104.21.91.43", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "defi-oracle.io", "issuer": "Cloudflare TLS Issuing ECC CA 3", "expires": "Jun 2 08:38:04 2026 GMT" }, "rpc_http": { "status": "pass", "chain_id": "0x8a" } } },
  { "domain": "studio.sankofa.nexus", "domain_type": "web", "timestamp": "2026-03-04T01:12:58-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "104.21.39.10", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "sankofa.nexus", "issuer": "WE1", "expires": "May 6 03:30:54 2026 GMT" }, "https": { "status": "warn", "http_code": 404, "response_time_seconds": 0.130726 } } },
  { "domain": "dbis-api.d-bis.org", "domain_type": "api", "timestamp": "2026-03-04T01:12:59-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "dbis-api.d-bis.org", "issuer": "E8", "expires": "Apr 16 20:56:33 2026 GMT" }, "https": { "status": "warn", "http_code": 502, "response_time_seconds": 3.126570 } } },
  { "domain": "rpc-hybx-3.d-bis.org", "domain_type": "rpc-http", "timestamp": "2026-03-04T01:13:03-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "104.21.86.131", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "rpc_http": { "status": "fail", "http_code": "502", "error": "error code: 502" } } },
  { "domain": "rpc.d-bis.org", "domain_type": "rpc-http", "timestamp": "2026-03-04T01:13:04-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "104.21.86.131", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "rpc_http": { "status": "pass", "chain_id": "0x8a" } } },
  { "domain": "dapp.d-bis.org", "domain_type": "web", "timestamp": "2026-03-04T01:13:04-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "104.21.86.131", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "https": { "status": "pass", "http_code": 200, "response_time_seconds": 0.119344, "has_hsts": true, "has_csp": true, "has_xfo": false } } },
  { "domain": "www.sankofa.nexus", "domain_type": "web", "timestamp": "2026-03-04T01:13:04-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "www.sankofa.nexus", "issuer": "E7", "expires": "Apr 16 20:59:41 2026 GMT" }, "https": { "status": "pass", "http_code": 200, "response_time_seconds": 0.075866, "has_hsts": true, "has_csp": true, "has_xfo": true } } },
  { "domain": "mim4u.org", "domain_type": "web", "timestamp": "2026-03-04T01:13:05-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "mim4u.org", "issuer": "E7", "expires": "Apr 16 20:57:01 2026 GMT" }, "https": { "status": "pass", "http_code": 200, "response_time_seconds": 0.054379, "has_hsts": true, "has_csp": true, "has_xfo": true } } },
  { "domain": "ws.rpc.d-bis.org", "domain_type": "rpc-ws", "timestamp": "2026-03-04T01:13:06-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "ws.rpc.d-bis.org", "issuer": "E8", "expires": "Apr 30 03:43:05 2026 GMT" }, "websocket": { "status": "pass", "http_code": "400", "full_test": true } } },
  { "domain": "phoenix.sankofa.nexus", "domain_type": "web", "timestamp": "2026-03-04T01:13:08-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "phoenix.sankofa.nexus", "issuer": "E8", "expires": "Apr 16 20:57:08 2026 GMT" }, "https": { "status": "pass", "http_code": 200, "response_time_seconds": 0.053991, "has_hsts": true, "has_csp": true, "has_xfo": true } } },
  { "domain": "www.mim4u.org", "domain_type": "web", "timestamp": "2026-03-04T01:13:09-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "www.mim4u.org", "issuer": "E7", "expires": "Apr 16 20:59:17 2026 GMT" }, "https": { "status": "pass", "http_code": 200, "response_time_seconds": 0.033360, "has_hsts": true, "has_csp": true, "has_xfo": true } } },
  { "domain": "wss.defi-oracle.io", "domain_type": "rpc-ws", "timestamp": "2026-03-04T01:13:09-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "wss.defi-oracle.io", "issuer": "E8", "expires": "Apr 30 03:44:57 2026 GMT" }, "websocket": { "status": "pass", "http_code": "400", "full_test": true } } },
  { "domain": "the-order.sankofa.nexus", "domain_type": "web", "timestamp": "2026-03-04T01:13:11-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "the-order.sankofa.nexus", "issuer": "E8", "expires": "Apr 16 20:58:53 2026 GMT" }, "https": { "status": "pass", "http_code": 200, "response_time_seconds": 0.031307, "has_hsts": true, "has_csp": true, "has_xfo": true } } },
  { "domain": "rpc2.d-bis.org", "domain_type": "rpc-http", "timestamp": "2026-03-04T01:13:12-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "104.21.86.131", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "rpc_http": { "status": "pass", "chain_id": "0x8a" } } },
  { "domain": "rpc-ws-pub.d-bis.org", "domain_type": "rpc-ws", "timestamp": "2026-03-04T01:13:12-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "rpc-ws-pub.d-bis.org", "issuer": "E7", "expires": "Apr 16 20:57:51 2026 GMT" }, "websocket": { "status": "pass", "http_code": "400", "full_test": true } } },
  { "domain": "dev.d-bis.org", "domain_type": "web", "timestamp": "2026-03-04T01:13:14-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "104.21.86.131", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "https": { "status": "pass", "http_code": 200, "response_time_seconds": 0.091475, "has_hsts": true, "has_csp": false, "has_xfo": true } } },
  { "domain": "rpc-alltra-2.d-bis.org", "domain_type": "rpc-http", "timestamp": "2026-03-04T01:13:15-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "172.67.220.49", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "rpc_http": { "status": "fail", "http_code": "502", "error": "error code: 502" } } },
  { "domain": "rpc-http-prv.d-bis.org", "domain_type": "rpc-http", "timestamp": "2026-03-04T01:13:15-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "104.21.86.131", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "rpc_http": { "status": "pass", "chain_id": "0x8a" } } },
  { "domain": "www.phoenix.sankofa.nexus", "domain_type": "web", "timestamp": "2026-03-04T01:13:15-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "www.phoenix.sankofa.nexus", "issuer": "E8", "expires": "Apr 16 20:59:28 2026 GMT" }, "https": { "status": "pass", "http_code": 200, "response_time_seconds": 0.025857, "has_hsts": true, "has_csp": true, "has_xfo": true } } },
  { "domain": "gitea.d-bis.org", "domain_type": "web", "timestamp": "2026-03-04T01:13:16-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "104.21.86.131", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "https": { "status": "pass", "http_code": 200, "response_time_seconds": 0.116253, "has_hsts": true, "has_csp": false, "has_xfo": true } } },
  { "domain": "secure.mim4u.org", "domain_type": "web", "timestamp": "2026-03-04T01:13:16-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "secure.mim4u.org", "issuer": "E8", "expires": "Apr 16 20:58:40 2026 GMT" }, "https": { "status": "pass", "http_code": 200, "response_time_seconds": 0.042557, "has_hsts": true, "has_csp": true, "has_xfo": true } } },
  { "domain": "rpc-fireblocks.d-bis.org", "domain_type": "rpc-http", "timestamp": "2026-03-04T01:13:16-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "rpc-fireblocks.d-bis.org", "issuer": "E8", "expires": "May 22 21:47:15 2026 GMT" }, "rpc_http": { "status": "pass", "chain_id": "0x8a" } } },
  { "domain": "training.mim4u.org", "domain_type": "web", "timestamp": "2026-03-04T01:13:16-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "training.mim4u.org", "issuer": "E7", "expires": "Apr 16 20:59:06 2026 GMT" }, "https": { "status": "pass", "http_code": 200, "response_time_seconds": 0.035898, "has_hsts": true, "has_csp": true, "has_xfo": true } } },
  { "domain": "explorer.d-bis.org", "domain_type": "web", "timestamp": "2026-03-04T01:13:17-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "explorer.d-bis.org", "issuer": "R13", "expires": "Mar 23 20:48:12 2026 GMT" }, "https": { "status": "pass", "http_code": 200, "response_time_seconds": 0.010479, "has_hsts": false, "has_csp": false, "has_xfo": false }, "blockscout_api": { "status": "pass", "http_code": 200 } } },
  { "domain": "dbis-api-2.d-bis.org", "domain_type": "api", "timestamp": "2026-03-04T01:13:17-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "dbis-api-2.d-bis.org", "issuer": "E8", "expires": "Apr 16 20:56:22 2026 GMT" }, "https": { "status": "warn", "http_code": 502, "response_time_seconds": 3.094985 } } },
  { "domain": "secure.d-bis.org", "domain_type": "web", "timestamp": "2026-03-04T01:13:20-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "76.53.10.36", "expected_ip": "76.53.10.36" }, "ssl": { "status": "pass", "cn": "secure.d-bis.org", "issuer": "E7", "expires": "Apr 16 20:58:28 2026 GMT" }, "https": { "status": "warn", "http_code": 502, "response_time_seconds": 3.091145 } } },
  { "domain": "rpc-hybx.d-bis.org", "domain_type": "rpc-http", "timestamp": "2026-03-04T01:13:24-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "104.21.86.131", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "rpc_http": { "status": "fail", "http_code": "502", "error": "error code: 502" } } },
  { "domain": "codespaces.d-bis.org", "domain_type": "web", "timestamp": "2026-03-04T01:13:24-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "172.67.220.49", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "d-bis.org", "issuer": "WE1", "expires": "May 27 07:40:56 2026 GMT" }, "https": { "status": "pass", "http_code": 200, "response_time_seconds": 0.106187, "has_hsts": true, "has_csp": false, "has_xfo": true } } },
  { "domain": "rpc.defi-oracle.io", "domain_type": "rpc-http", "timestamp": "2026-03-04T01:13:24-08:00", "tests": { "dns": { "status": "pass", "resolved_ip": "104.21.91.43", "expected_ip": "any" }, "ssl": { "status": "pass", "cn": "rpc.defi-oracle.io", "issuer": "Cloudflare TLS Issuing ECC CA 3",
|
||||
"expires": "May 7 09:51:23 2026 GMT"
|
||||
},
|
||||
"rpc_http": {
|
||||
"status": "pass",
|
||||
"chain_id": "0x8a"
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"domain": "rpc-ws-prv.d-bis.org",
|
||||
"domain_type": "rpc-ws",
|
||||
"timestamp": "2026-03-04T01:13:25-08:00",
|
||||
"tests": {
|
||||
"dns": {
|
||||
"status": "pass",
|
||||
"resolved_ip": "76.53.10.36",
|
||||
"expected_ip": "76.53.10.36"
|
||||
},
|
||||
"ssl": {
|
||||
"status": "pass",
|
||||
"cn": "rpc-ws-prv.d-bis.org",
|
||||
"issuer": "E7",
|
||||
"expires": "Apr 16 20:57:38 2026 GMT"
|
||||
},
|
||||
"websocket": {
|
||||
"status": "pass",
|
||||
"http_code": "400",
|
||||
"full_test": true
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"domain": "cacti-alltra.d-bis.org",
|
||||
"domain_type": "web",
|
||||
"timestamp": "2026-03-04T01:13:27-08:00",
|
||||
"tests": {
|
||||
"dns": {
|
||||
"status": "pass",
|
||||
"resolved_ip": "104.21.86.131",
|
||||
"expected_ip": "any"
|
||||
},
|
||||
"ssl": {
|
||||
"status": "pass",
|
||||
"cn": "d-bis.org",
|
||||
"issuer": "WE1",
|
||||
"expires": "May 27 07:40:56 2026 GMT"
|
||||
},
|
||||
"https": {
|
||||
"status": "warn",
|
||||
"http_code": 502,
|
||||
"response_time_seconds": 0.105093
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"domain": "ws.rpc2.d-bis.org",
|
||||
"domain_type": "rpc-ws",
|
||||
"timestamp": "2026-03-04T01:13:28-08:00",
|
||||
"tests": {
|
||||
"dns": {
|
||||
"status": "pass",
|
||||
"resolved_ip": "76.53.10.36",
|
||||
"expected_ip": "76.53.10.36"
|
||||
},
|
||||
"ssl": {
|
||||
"status": "pass",
|
||||
"cn": "ws.rpc2.d-bis.org",
|
||||
"issuer": "E7",
|
||||
"expires": "Apr 30 03:43:58 2026 GMT"
|
||||
},
|
||||
"websocket": {
|
||||
"status": "pass",
|
||||
"http_code": "400",
|
||||
"full_test": true
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
@@ -0,0 +1,14 @@
HTTP/2 502
date: Wed, 04 Mar 2026 09:13:28 GMT
content-type: text/plain; charset=UTF-8
content-length: 15
cache-control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
expires: Thu, 01 Jan 1970 00:00:01 GMT
referrer-policy: same-origin
x-frame-options: SAMEORIGIN
server: cloudflare
cf-ray: 9d6fc3de2eb76d31-LAX
alt-svc: h3=":443"; ma=86400


0.105093
@@ -0,0 +1,14 @@
HTTP/2 502
date: Wed, 04 Mar 2026 09:12:57 GMT
content-type: text/plain; charset=UTF-8
content-length: 15
cache-control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
expires: Thu, 01 Jan 1970 00:00:01 GMT
referrer-policy: same-origin
x-frame-options: SAMEORIGIN
server: cloudflare
cf-ray: 9d6fc31de8ff5126-LAX
alt-svc: h3=":443"; ma=86400


0.093134
@@ -0,0 +1,17 @@
HTTP/2 200
date: Wed, 04 Mar 2026 09:13:24 GMT
referrer-policy: strict-origin-when-cross-origin
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: SAMEORIGIN
x-permitted-cross-domain-policies: none
cf-cache-status: DYNAMIC
strict-transport-security: max-age=31536000; includeSubDomains
report-to: {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=qcGNKSl%2B1%2FhpZwhct9EaLDagq7IWtDHkEu2oS6Lo%2FUWgaiwE51zIt6Yezia8u7P6opUyzaluK8AprwkuF%2FL0XERSyVu6l3AduDyZS9JCIZYTxBP0"}]}
nel: {"report_to":"cf-nel","success_fraction":0.0,"max_age":604800}
server: cloudflare
cf-ray: 9d6fc3c95e0e2ec6-LAX
alt-svc: h3=":443"; ma=86400


0.106187
@@ -0,0 +1,17 @@
HTTP/2 200
date: Wed, 04 Mar 2026 09:13:04 GMT
content-type: text/html
nel: {"report_to":"cf-nel","success_fraction":0.0,"max_age":604800}
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; font-src 'self' https://fonts.gstatic.com; img-src 'self' data: https:; connect-src 'self' https: wss: http://192.168.11.221:8545 ws://192.168.11.221:8546 https://rpc-http-pub.d-bis.org wss://rpc-ws-pub.d-bis.org; frame-src 'self' https:; frame-ancestors 'self';
report-to: {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=KxOr8L%2F%2FUwRNWcWqJJf%2BIFs9ARU6eNty3NSDyo4YVAImfqWN7FF0zQMBLNqcJ1LzrWPqzIzCOxDVYK30Ue3GAZKlJ3nXiUG3ypTGHNe9"}]}
last-modified: Sun, 22 Feb 2026 04:25:15 GMT
vary: Accept-Encoding
cf-cache-status: DYNAMIC
strict-transport-security: max-age=31536000; includeSubDomains
x-content-type-options: nosniff
server: cloudflare
cf-ray: 9d6fc34cde8b19db-LAX
alt-svc: h3=":443"; ma=86400


0.119344
@@ -0,0 +1,18 @@
HTTP/2 502
date: Wed, 04 Mar 2026 09:12:55 GMT
content-type: text/html
content-length: 122
alt-svc: h3=":443"; ma=86400
x-xss-protection: 0
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
content-security-policy: upgrade-insecure-requests
strict-transport-security: max-age=63072000; includeSubDomains; preload
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
referrer-policy: strict-origin-when-cross-origin
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:; style-src 'self' 'unsafe-inline' https: data:; font-src 'self' https: data:; img-src 'self' data: https: blob:; connect-src 'self' https: wss: ws:; media-src 'self' https: data:; object-src 'none'; base-uri 'self'; form-action 'self' https:; frame-ancestors 'none'; upgrade-insecure-requests


3.126302
@@ -0,0 +1,18 @@
HTTP/2 502
date: Wed, 04 Mar 2026 09:13:20 GMT
content-type: text/html
content-length: 122
alt-svc: h3=":443"; ma=86400
x-xss-protection: 0
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
content-security-policy: upgrade-insecure-requests
strict-transport-security: max-age=63072000; includeSubDomains; preload
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
referrer-policy: strict-origin-when-cross-origin
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:; style-src 'self' 'unsafe-inline' https: data:; font-src 'self' https: data:; img-src 'self' data: https: blob:; connect-src 'self' https: wss: ws:; media-src 'self' https: data:; object-src 'none'; base-uri 'self'; form-action 'self' https:; frame-ancestors 'none'; upgrade-insecure-requests


3.094985
@@ -0,0 +1,18 @@
HTTP/2 502
date: Wed, 04 Mar 2026 09:13:03 GMT
content-type: text/html
content-length: 122
alt-svc: h3=":443"; ma=86400
x-xss-protection: 0
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
content-security-policy: upgrade-insecure-requests
strict-transport-security: max-age=63072000; includeSubDomains; preload
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
referrer-policy: strict-origin-when-cross-origin
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:; style-src 'self' 'unsafe-inline' https: data:; font-src 'self' https: data:; img-src 'self' data: https: blob:; connect-src 'self' https: wss: ws:; media-src 'self' https: data:; object-src 'none'; base-uri 'self'; form-action 'self' https:; frame-ancestors 'none'; upgrade-insecure-requests


3.126570
@@ -0,0 +1,17 @@
HTTP/2 200
date: Wed, 04 Mar 2026 09:13:15 GMT
referrer-policy: strict-origin-when-cross-origin
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: SAMEORIGIN
x-permitted-cross-domain-policies: none
cf-cache-status: DYNAMIC
strict-transport-security: max-age=31536000; includeSubDomains
report-to: {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=jtptiHBbOF1Y6MMarNhpKn6Yn%2FX9IISgmgj2aYgKE5E8fCTMepKblTf5HzA%2BcphjwRiCiPpmKW%2FwLbT%2Bk2IntjQP0jvzbA8HLELeUrw%3D"}]}
nel: {"report_to":"cf-nel","success_fraction":0.0,"max_age":604800}
server: cloudflare
cf-ray: 9d6fc38d6a81b8d4-LAX
alt-svc: h3=":443"; ma=86400


0.091475
@@ -0,0 +1 @@
{"average_block_time":2.0e3,"coin_image":"https://coin-images.coingecko.com/coins/images/39140/small/ETH.png?1720706783","coin_price":"1969.73","coin_price_change_percentage":-2.89,"gas_price_updated_at":"2026-03-04T09:13:08.999454Z","gas_prices":{"slow":0.01,"average":0.01,"fast":0.01},"gas_prices_update_in":23065,"gas_used_today":"680967","market_cap":"0.000","network_utilization_percentage":0.0,"secondary_coin_image":null,"secondary_coin_price":null,"static_gas_price":null,"total_addresses":"233","total_blocks":"2547804","total_gas_used":"0","total_transactions":"13578","transactions_today":"16","tvl":null}
@@ -0,0 +1,12 @@
HTTP/2 200
server: nginx/1.18.0 (Ubuntu)
date: Wed, 04 Mar 2026 09:13:17 GMT
content-type: text/html
content-length: 60718
last-modified: Sun, 01 Mar 2026 19:27:01 GMT
etag: "69a49305-ed2e"
cache-control: no-store, no-cache, must-revalidate
accept-ranges: bytes


0.010479
@@ -0,0 +1,17 @@
HTTP/2 200
date: Wed, 04 Mar 2026 09:13:16 GMT
referrer-policy: strict-origin-when-cross-origin
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: SAMEORIGIN
x-permitted-cross-domain-policies: none
cf-cache-status: DYNAMIC
strict-transport-security: max-age=31536000; includeSubDomains
report-to: {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=DADqu2sx18WR1tYPUg6vJ2ATks9BkJV6XPNvZZ7Ejbd3%2FnSxv5KJ9tZq4sEqcEOKtfRZLk29CKzK6R7mXSSV75EUbrm6gAMO8v4eBrmGqQ%3D%3D"}]}
nel: {"report_to":"cf-nel","success_fraction":0.0,"max_age":604800}
server: cloudflare
cf-ray: 9d6fc394fd149dfc-LAX
alt-svc: h3=":443"; ma=86400


0.116253
@@ -0,0 +1,14 @@
HTTP/2 502
date: Wed, 04 Mar 2026 09:12:56 GMT
content-type: text/plain; charset=UTF-8
content-length: 15
cache-control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
expires: Thu, 01 Jan 1970 00:00:01 GMT
referrer-policy: same-origin
x-frame-options: SAMEORIGIN
server: cloudflare
cf-ray: 9d6fc31959f108d0-LAX
alt-svc: h3=":443"; ma=86400


0.125696
@@ -0,0 +1,20 @@
HTTP/2 200
date: Wed, 04 Mar 2026 09:13:06 GMT
content-type: text/html
content-length: 2710
vary: Accept-Encoding
last-modified: Fri, 27 Feb 2026 06:24:33 GMT
etag: "69a138a1-a96"
accept-ranges: bytes
alt-svc: h3=":443"; ma=86400
x-xss-protection: 0
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
content-security-policy: upgrade-insecure-requests
strict-transport-security: max-age=63072000; includeSubDomains; preload
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
referrer-policy: strict-origin-when-cross-origin
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:; style-src 'self' 'unsafe-inline' https: data:; font-src 'self' https: data:; img-src 'self' data: https: blob:; connect-src 'self' https: wss: ws:; media-src 'self' https: data:; object-src 'none'; base-uri 'self'; form-action 'self' https:; frame-ancestors 'none'; upgrade-insecure-requests

@@ -0,0 +1,20 @@
HTTP/2 200
date: Wed, 04 Mar 2026 09:13:09 GMT
content-type: text/html
content-length: 60718
vary: Accept-Encoding
last-modified: Sun, 01 Mar 2026 19:27:01 GMT
etag: "69a49305-ed2e"
cache-control: no-store, no-cache, must-revalidate
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://cdn.jsdelivr.net https://unpkg.com https://cdnjs.cloudflare.com; style-src 'self' 'unsafe-inline' https://cdnjs.cloudflare.com; img-src 'self' data: https:; font-src 'self' https://cdnjs.cloudflare.com; connect-src 'self' https://explorer.d-bis.org wss://explorer.d-bis.org https://rpc-http-pub.d-bis.org wss://rpc-ws-pub.d-bis.org http://192.168.11.221:8545 ws://192.168.11.221:8546;
accept-ranges: bytes
alt-svc: h3=":443"; ma=86400
x-xss-protection: 0
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
strict-transport-security: max-age=63072000; includeSubDomains; preload
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
referrer-policy: strict-origin-when-cross-origin
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:; style-src 'self' 'unsafe-inline' https: data:; font-src 'self' https: data:; img-src 'self' data: https: blob:; connect-src 'self' https: wss: ws:; media-src 'self' https: data:; object-src 'none'; base-uri 'self'; form-action 'self' https:; frame-ancestors 'none'; upgrade-insecure-requests
@@ -0,0 +1 @@
error code: 502
@@ -0,0 +1 @@
error code: 502
@@ -0,0 +1 @@
error code: 502
@@ -0,0 +1 @@
{"jsonrpc":"2.0","id":1,"result":"0x8a"}
@@ -0,0 +1 @@
{"jsonrpc":"2.0","id":1,"result":"0x8a"}
@@ -0,0 +1 @@
{"jsonrpc":"2.0","id":1,"result":"0x8a"}
@@ -0,0 +1 @@
error code: 502
@@ -0,0 +1 @@
error code: 502
@@ -0,0 +1 @@
error code: 502
@@ -0,0 +1 @@
{"jsonrpc":"2.0","id":1,"result":"0x8a"}
@@ -0,0 +1 @@
{"jsonrpc":"2.0","id":1,"result":"0x8a"}
@@ -0,0 +1 @@
{"jsonrpc":"2.0","id":1,"result":"0x8a"}
@@ -0,0 +1 @@
{"jsonrpc":"2.0","result":"0x8a","id":1}
@@ -0,0 +1,19 @@
HTTP/2 200
date: Wed, 04 Mar 2026 09:12:57 GMT
content-type: text/html
content-length: 60718
vary: Accept-Encoding
last-modified: Sun, 01 Mar 2026 19:27:01 GMT
etag: "69a49305-ed2e"
cache-control: no-store, no-cache, must-revalidate
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://cdn.jsdelivr.net https://unpkg.com https://cdnjs.cloudflare.com; style-src 'self' 'unsafe-inline' https://cdnjs.cloudflare.com; img-src 'self' data: https:; font-src 'self' https://cdnjs.cloudflare.com; connect-src 'self' https://explorer.d-bis.org wss://explorer.d-bis.org https://rpc-http-pub.d-bis.org wss://rpc-ws-pub.d-bis.org http://192.168.11.221:8545 ws://192.168.11.221:8546;
accept-ranges: bytes
alt-svc: h3=":443"; ma=86400
x-xss-protection: 0
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
strict-transport-security: max-age=63072000; includeSubDomains; preload
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:; style-src 'self' 'unsafe-inline' https: data:; font-src 'self' https: data:; img-src 'self' data: https: blob:; connect-src 'self' https: wss: ws:; media-src 'self' https: data:; object-src 'none'; base-uri 'self'; form-action 'self' https:; frame-ancestors 'none'; upgrade-insecure-requests


0.089084
@@ -0,0 +1,18 @@
HTTP/2 502
date: Wed, 04 Mar 2026 09:13:24 GMT
content-type: text/html
content-length: 122
alt-svc: h3=":443"; ma=86400
x-xss-protection: 0
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
content-security-policy: upgrade-insecure-requests
strict-transport-security: max-age=63072000; includeSubDomains; preload
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
referrer-policy: strict-origin-when-cross-origin
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:; style-src 'self' 'unsafe-inline' https: data:; font-src 'self' https: data:; img-src 'self' data: https: blob:; connect-src 'self' https: wss: ws:; media-src 'self' https: data:; object-src 'none'; base-uri 'self'; form-action 'self' https:; frame-ancestors 'none'; upgrade-insecure-requests


3.091145
@@ -0,0 +1,20 @@
HTTP/2 200
date: Wed, 04 Mar 2026 09:13:16 GMT
content-type: text/html
content-length: 2710
vary: Accept-Encoding
last-modified: Fri, 27 Feb 2026 06:24:33 GMT
etag: "69a138a1-a96"
accept-ranges: bytes
alt-svc: h3=":443"; ma=86400
x-xss-protection: 0
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
content-security-policy: upgrade-insecure-requests
strict-transport-security: max-age=63072000; includeSubDomains; preload
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
referrer-policy: strict-origin-when-cross-origin
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:; style-src 'self' 'unsafe-inline' https: data:; font-src 'self' https: data:; img-src 'self' data: https: blob:; connect-src 'self' https: wss: ws:; media-src 'self' https: data:; object-src 'none'; base-uri 'self'; form-action 'self' https:; frame-ancestors 'none'; upgrade-insecure-requests

@@ -0,0 +1,13 @@
HTTP/2 404
date: Wed, 04 Mar 2026 09:12:59 GMT
content-type: application/json
vary: Accept-Encoding
nel: {"report_to":"cf-nel","success_fraction":0.0,"max_age":604800}
cf-cache-status: DYNAMIC
report-to: {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=uPpiXv3dqzJldasZIWQPpjLwK3Ld%2BicomTPz0JrKuQUXahX98AGj%2BkcuPLqauyotqEl0z2ia8%2F%2BWAb9pMEvr2IwBgY%2FHlvGbvN6%2Be9gJ%2F6MxSeZC"}]}
server: cloudflare
cf-ray: 9d6fc32abb29d4d9-LAX
alt-svc: h3=":443"; ma=86400


0.130726
@@ -0,0 +1,20 @@
HTTP/2 200
date: Wed, 04 Mar 2026 09:13:12 GMT
content-type: text/html
content-length: 60718
vary: Accept-Encoding
last-modified: Sun, 01 Mar 2026 19:27:01 GMT
etag: "69a49305-ed2e"
cache-control: no-store, no-cache, must-revalidate
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://cdn.jsdelivr.net https://unpkg.com https://cdnjs.cloudflare.com; style-src 'self' 'unsafe-inline' https://cdnjs.cloudflare.com; img-src 'self' data: https:; font-src 'self' https://cdnjs.cloudflare.com; connect-src 'self' https://explorer.d-bis.org wss://explorer.d-bis.org https://rpc-http-pub.d-bis.org wss://rpc-ws-pub.d-bis.org http://192.168.11.221:8545 ws://192.168.11.221:8546;
accept-ranges: bytes
alt-svc: h3=":443"; ma=86400
x-xss-protection: 0
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
strict-transport-security: max-age=63072000; includeSubDomains; preload
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
referrer-policy: strict-origin-when-cross-origin
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:; style-src 'self' 'unsafe-inline' https: data:; font-src 'self' https: data:; img-src 'self' data: https: blob:; connect-src 'self' https: wss: ws:; media-src 'self' https: data:; object-src 'none'; base-uri 'self'; form-action 'self' https:; frame-ancestors 'none'; upgrade-insecure-requests
@@ -0,0 +1,20 @@
HTTP/2 200
date: Wed, 04 Mar 2026 09:13:17 GMT
content-type: text/html
content-length: 2710
vary: Accept-Encoding
last-modified: Fri, 27 Feb 2026 06:24:33 GMT
etag: "69a138a1-a96"
accept-ranges: bytes
alt-svc: h3=":443"; ma=86400
x-xss-protection: 0
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
content-security-policy: upgrade-insecure-requests
strict-transport-security: max-age=63072000; includeSubDomains; preload
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
referrer-policy: strict-origin-when-cross-origin
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:; style-src 'self' 'unsafe-inline' https: data:; font-src 'self' https: data:; img-src 'self' data: https: blob:; connect-src 'self' https: wss: ws:; media-src 'self' https: data:; object-src 'none'; base-uri 'self'; form-action 'self' https:; frame-ancestors 'none'; upgrade-insecure-requests

@@ -0,0 +1,321 @@
|
||||
# End-to-End Routing Verification Report
|
||||
|
||||
**Date**: 2026-03-04T01:13:30-08:00
|
||||
**Public IP**: 76.53.10.36
|
||||
**Verifier**: intlc
|
||||
|
||||
## Summary
|
||||
|
||||
- **Total domains tested**: 41
|
||||
- **DNS tests passed**: 41
|
||||
- **HTTPS tests passed**: 14
|
||||
- **Failed tests**: 6
|
||||
- **Skipped / optional (not configured or unreachable)**: 0
|
||||
- **Average response time**: 0.6263494545454547s
|
||||
|
||||
## Test Results by Domain
|
||||
|
||||
|
||||
### ws.rpc-fireblocks.d-bis.org
|
||||
- Type: rpc-ws
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### dbis-admin.d-bis.org
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: warn
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc-alltra-3.d-bis.org
|
||||
- Type: rpc-http
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- RPC: fail
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### mifos.d-bis.org
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: warn
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc-hybx-2.d-bis.org
|
||||
- Type: rpc-http
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- RPC: fail
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### cacti-hybx.d-bis.org
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: warn
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### sankofa.nexus
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc-alltra.d-bis.org
|
||||
- Type: rpc-http
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- RPC: fail
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc-http-pub.d-bis.org
|
||||
- Type: rpc-http
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- RPC: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc.public-0138.defi-oracle.io
|
||||
- Type: rpc-http
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- RPC: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### studio.sankofa.nexus
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: warn
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### dbis-api.d-bis.org
|
||||
- Type: api
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: warn
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc-hybx-3.d-bis.org
|
||||
- Type: rpc-http
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- RPC: fail
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc.d-bis.org
|
||||
- Type: rpc-http
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- RPC: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### dapp.d-bis.org
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### www.sankofa.nexus
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### mim4u.org
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### ws.rpc.d-bis.org
|
||||
- Type: rpc-ws
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### phoenix.sankofa.nexus
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### www.mim4u.org
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### wss.defi-oracle.io
|
||||
- Type: rpc-ws
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### the-order.sankofa.nexus
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc2.d-bis.org
|
||||
- Type: rpc-http
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- RPC: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc-ws-pub.d-bis.org
|
||||
- Type: rpc-ws
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### dev.d-bis.org
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc-alltra-2.d-bis.org
|
||||
- Type: rpc-http
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- RPC: fail
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc-http-prv.d-bis.org
|
||||
- Type: rpc-http
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- RPC: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### www.phoenix.sankofa.nexus
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### gitea.d-bis.org
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### secure.mim4u.org
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc-fireblocks.d-bis.org
|
||||
- Type: rpc-http
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- RPC: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### training.mim4u.org
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### explorer.d-bis.org
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: pass
|
||||
- Blockscout API: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### dbis-api-2.d-bis.org
|
||||
- Type: api
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: warn
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### secure.d-bis.org
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: warn
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc-hybx.d-bis.org
|
||||
- Type: rpc-http
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- RPC: fail
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### codespaces.d-bis.org
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc.defi-oracle.io
|
||||
- Type: rpc-http
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- RPC: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### rpc-ws-prv.d-bis.org
|
||||
- Type: rpc-ws
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### cacti-alltra.d-bis.org
|
||||
- Type: web
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- HTTPS: warn
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
### ws.rpc2.d-bis.org
|
||||
- Type: rpc-ws
|
||||
- DNS: pass
|
||||
- SSL: pass
|
||||
- Details: See `all_e2e_results.json`
|
||||
|
||||
## Files Generated
|
||||
|
||||
- `all_e2e_results.json` - Complete E2E test results
|
||||
- `*_https_headers.txt` - HTTP response headers per domain
|
||||
- `*_rpc_response.txt` - RPC response per domain
|
||||
- `verification_report.md` - This report
|
||||
|
||||
## Notes
|
||||
|
||||
- **Optional domains:** Domains in `E2E_OPTIONAL_WHEN_FAIL` (default: many d-bis.org/sankofa/mim4u/rpc) have any fail treated as skip so the run passes when off-LAN or services unreachable. Set `E2E_OPTIONAL_WHEN_FAIL=` (empty) for strict mode.
|
||||
- WebSocket tests require `wscat` tool: `npm install -g wscat`
|
||||
- Internal connectivity tests require access to NPMplus container
|
||||
- Explorer (explorer.d-bis.org): optional Blockscout API check; use `SKIP_BLOCKSCOUT_API=1` to skip when backend is unreachable (e.g. off-LAN). Fix runbook: docs/03-deployment/BLOCKSCOUT_FIX_RUNBOOK.md
|
||||
|
||||
## Next Steps

1. Review test results for each domain
2. Investigate any failed tests
3. Test WebSocket connections for RPC WS domains (if wscat available)
4. Test internal connectivity from NPMplus container
5. Update source-of-truth JSON after verification
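For step 2, an RPC-HTTP domain such as rpc.defi-oracle.io can be re-checked by hand with a standard JSON-RPC `eth_blockNumber` call. A minimal sketch, using only the Python standard library; the helper names are illustrative and not part of the test suite:

```python
# Minimal JSON-RPC health probe: POST eth_blockNumber, decode the hex result.
import json
import urllib.request

def build_payload(method: str = "eth_blockNumber") -> bytes:
    """Standard JSON-RPC 2.0 request body."""
    return json.dumps(
        {"jsonrpc": "2.0", "method": method, "params": [], "id": 1}
    ).encode()

def parse_block_number(body: bytes) -> int:
    """Decode the hex-encoded result field, e.g. "0x10" -> 16."""
    return int(json.loads(body)["result"], 16)

def check_rpc(url: str) -> int:
    """POST the payload and return the current block height."""
    req = urllib.request.Request(
        url, data=build_payload(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return parse_block_number(resp.read())
```

Usage (requires network access to the endpoint): `check_rpc("https://rpc.defi-oracle.io")` should return a positive integer when the RPC check passes.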
@@ -0,0 +1,20 @@
HTTP/2 200
date: Wed, 04 Mar 2026 09:13:09 GMT
content-type: text/html
content-length: 2710
vary: Accept-Encoding
last-modified: Fri, 27 Feb 2026 06:24:33 GMT
etag: "69a138a1-a96"
accept-ranges: bytes
alt-svc: h3=":443"; ma=86400
x-xss-protection: 0
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
content-security-policy: upgrade-insecure-requests
strict-transport-security: max-age=63072000; includeSubDomains; preload
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
referrer-policy: strict-origin-when-cross-origin
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:; style-src 'self' 'unsafe-inline' https: data:; font-src 'self' https: data:; img-src 'self' data: https: blob:; connect-src 'self' https: wss: ws:; media-src 'self' https: data:; object-src 'none'; base-uri 'self'; form-action 'self' https:; frame-ancestors 'none'; upgrade-insecure-requests
@@ -0,0 +1,20 @@
HTTP/2 200
date: Wed, 04 Mar 2026 09:13:16 GMT
content-type: text/html
content-length: 60718
vary: Accept-Encoding
last-modified: Sun, 01 Mar 2026 19:27:01 GMT
etag: "69a49305-ed2e"
cache-control: no-store, no-cache, must-revalidate
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://cdn.jsdelivr.net https://unpkg.com https://cdnjs.cloudflare.com; style-src 'self' 'unsafe-inline' https://cdnjs.cloudflare.com; img-src 'self' data: https:; font-src 'self' https://cdnjs.cloudflare.com; connect-src 'self' https://explorer.d-bis.org wss://explorer.d-bis.org https://rpc-http-pub.d-bis.org wss://rpc-ws-pub.d-bis.org http://192.168.11.221:8545 ws://192.168.11.221:8546;
accept-ranges: bytes
alt-svc: h3=":443"; ma=86400
x-xss-protection: 0
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
strict-transport-security: max-age=63072000; includeSubDomains; preload
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
referrer-policy: strict-origin-when-cross-origin
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https: data:; style-src 'self' 'unsafe-inline' https: data:; font-src 'self' https: data:; img-src 'self' data: https: blob:; connect-src 'self' https: wss: ws:; media-src 'self' https: data:; object-src 'none'; base-uri 'self'; form-action 'self' https:; frame-ancestors 'none'; upgrade-insecure-requests
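Both header dumps above show some security headers (`x-content-type-options`, `x-frame-options`, `x-xss-protection`, `content-security-policy`) appearing twice, likely set once by the upstream service and once by the proxy in front of it. A small sketch for flagging such duplicates in a `*_https_headers.txt` dump; the parsing is illustrative:

```python
# Flag header names that occur more than once in a raw header dump.
from collections import Counter

def duplicate_headers(dump: str) -> list:
    """Return header names appearing more than once (case-insensitive)."""
    names = [line.split(":", 1)[0].strip().lower()
             for line in dump.splitlines()
             if ":" in line and not line.startswith("HTTP/")]
    return sorted(name for name, count in Counter(names).items() if count > 1)
```

Running this over the dumps above would list the doubled security headers, which is worth resolving so that only one (consistent) value of each is sent.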