- Marked submodules ai-mcp-pmm-controller, explorer-monorepo, and smom-dbis-138 as dirty to reflect recent changes. - Updated documentation to clarify operator script usage, including dotenv loading and task execution instructions. - Enhanced the README and various index files to provide clearer navigation and task completion guidance. Made-with: Cursor
8.7 KiB
Detailed Gaps and Issues List
Date: 2026-03-02
Purpose: Consolidated list of items requiring fixes, deployment, or operator action.
1. Explorer API (VMID 5000) — ✅ FIXED (2026-02-28)
| Issue | Status | Fix |
|---|---|---|
/api/config/token-list returns Blockscout error (400) |
✅ Fixed | Nginx patched; config files deployed |
/api/config/networks returns Blockscout error |
✅ Fixed |
Applied: scripts/patch-nginx-explorer-config.sh added locations to HTTP and HTTPS blocks. Config deployed via pct push.
Original cause: location = /api/config/token-list and location = /api/config/networks in fix-nginx-conflicts-vmid5000.sh are not in the live nginx config. Requests fall through to location /api/ and get proxied to Blockscout.
Steps:
- SSH to Proxmox host or enter VMID 5000.
- Run inside VMID 5000:
# From repo root, copy script into container and run: pct exec 5000 -- bash -c "cd /tmp && [your-fix-nginx-script-content]" # Or: scp fix-nginx-conflicts-vmid5000.sh root@<container-ip>:/tmp/ && ssh root@<container-ip> bash /tmp/fix-nginx-conflicts-vmid5000.sh - Deploy config files:
# From repo root (with pct or SSH): ./explorer-monorepo/scripts/deploy-explorer-config-to-vmid5000.sh # Or EXEC_MODE=ssh: EXEC_MODE=ssh EXPLORER_IP=192.168.11.140 ./explorer-monorepo/scripts/deploy-explorer-config-to-vmid5000.sh - Verify:
curl -s https://explorer.d-bis.org/api/config/token-list | jq '.tokens | length' # → 22 curl -s https://explorer.d-bis.org/api/config/networks | jq '.chains | length' # → 4
2. Token-Aggregation Service — ✅ FIXED (2026-02-28)
| Issue | Status | Fix |
|---|---|---|
/health returns {"status":"unhealthy","error":"database \"token_aggregation\" does not exist"} |
✅ Fixed | DB created; migrations run; service restarted |
Applied: Created token_aggregation DB; ran migrations; restarted service. Health now returns "status":"healthy".
Original cause: The deployed token-aggregation service (port 3001) uses DATABASE_URL pointing to a database named token_aggregation, but that database does not exist or migrations were not run.
Steps:
- On VMID 5000 (or wherever PostgreSQL runs):
# Create database if using separate DB: createdb -U postgres token_aggregation # Or ensure DATABASE_URL uses explorer_db (migrations create tables there) - Run migrations:
cd smom-dbis-138/services/token-aggregation DATABASE_URL=postgresql://user:pass@host:5432/token_aggregation bash scripts/run-migrations.sh # Or with explorer_db: DATABASE_URL=postgresql://user:pass@host:5432/explorer_db - Restart token-aggregation:
systemctl restart token-aggregation - Verify:
curl -s http://192.168.11.140:3001/health | jq . # Should return "status":"healthy"
Reference: docs/04-configuration/TOKEN_AGGREGATION_REPORT_API_RUNBOOK.md, smom-dbis-138/services/token-aggregation/scripts/run-migrations.sh
3. Nginx Proxy Order — Token-Aggregation vs Blockscout
| Issue | Status | Fix |
|---|---|---|
/api/v1/ may proxy to Blockscout instead of token-aggregation |
✅ Addressed | fix-nginx-conflicts-vmid5000.sh defines location /api/v1/ before location /api/ (lines 132–142 before 159). When applying config on VMID 5000, use this script to preserve order. |
Reference: explorer-monorepo/scripts/fix-nginx-conflicts-vmid5000.sh — correct order is in repo; operator should use this script when (re)applying nginx config.
4. Explorer Homepage / Wallet Page Tests — Intermittent
| Issue | Status | Fix |
|---|---|---|
verify-all-systems.sh "Explorer homepage" or "Wallet page" may fail |
✅ Improved | Timeout for Explorer homepage and Wallet page increased from 15s to 25s in scripts/verify-all-systems.sh to reduce failures on slow networks. |
Note: Homepage uses SolaceScanScout|Blockscout|blockscout|<!DOCTYPE; wallet uses Chain 138|ChainID 138|Add Chain. test_endpoint already captures curl output to a variable before grep.
5. Wallet Page — Grep Pattern
| Issue | Status | Fix |
|---|---|---|
| Wallet page test looked for "Add Chain 138" | ✅ Fixed | Updated to `Chain 138 |
6. Token-Aggregation Health Test — Resilience
| Issue | Status | Fix |
|---|---|---|
Health test expected healthy only; service returns unhealthy when DB missing |
✅ Fixed | Test now accepts healthy, "status", or unhealthy (service reachable) |
7. Token List Validation — CI
| Issue | Status | Fix |
|---|---|---|
| Token lists not validated in CI | ✅ Fixed | validate-config-files.sh now validates dbis-138, cronos, all-mainnet, DUAL_CHAIN |
| Workflow not triggered on token list changes | ✅ Fixed | validate-config.yml paths include token-lists/** and explorer-monorepo/backend/api/rest/config/metamask/** |
8. DUAL_CHAIN Config Sync
| Issue | Status | Fix |
|---|---|---|
| DUAL_CHAIN files in 3 locations could diverge | ✅ Fixed | scripts/sync-dual-chain-configs.sh syncs from explorer-monorepo/backend/api/rest/config/metamask/ to docs and metamask-integration |
Usage: After editing DUAL_CHAIN files, run ./scripts/sync-dual-chain-configs.sh from repo root.
9. Optional / Future Improvements
| Item | Priority | Notes |
|---|---|---|
| Shellcheck in CI | Low | run-shellcheck.sh --optional — install shellcheck if desired |
| Token-aggregation DB naming | Medium | Align DATABASE_URL: use explorer_db or create token_aggregation and document |
| Explorer homepage test timeout | Done (2026-03-02) | Increased to 25s for Explorer homepage and Wallet page in verify-all-systems.sh |
| All-mainnet token logos (HYDX, HYBX, CHT, AUDA) | Done | Placeholder IPFS logo added |
10. Quick Reference — Scripts
| Script | Purpose |
|---|---|
scripts/verify-all-systems.sh |
Full system verification (explorer, APIs, RPC, token-aggregation) |
scripts/validation/validate-config-files.sh |
Config and token list validation |
scripts/sync-dual-chain-configs.sh |
Sync DUAL_CHAIN configs to all locations |
explorer-monorepo/scripts/fix-nginx-conflicts-vmid5000.sh |
Fix nginx config (run inside VMID 5000) |
explorer-monorepo/scripts/deploy-explorer-config-to-vmid5000.sh |
Deploy token list and networks to VMID 5000 |
smom-dbis-138/services/token-aggregation/scripts/run-migrations.sh |
Run token-aggregation DB migrations |
11. Summary — Fixes Applied in This Session
- verify-all-systems.sh: Wallet page pattern, token-aggregation health test resilience
- validate-config-files.sh: Token list validation (dbis-138, cronos, all-mainnet, DUAL_CHAIN)
- validate-config.yml: Trigger on token list and config changes
- sync-dual-chain-configs.sh: New script to keep DUAL_CHAIN in sync
- DUAL_CHAIN configs: Synced to docs and metamask-integration
11a. Interpreting verification HTTP codes (301, 404, 000)
When running verify-backend-vms.sh, verify-all-systems.sh, or NPMplus checks, the following responses are often expected and do not necessarily indicate a failure:
| Code | Meaning | Typical cause |
|---|---|---|
| 301 | Redirect | HTTPS redirect (e.g. nginx on :80 redirecting to HTTPS). Service is up. |
| 404 | Not found | Wrong port or path used in the check; or NPMplus/proxy returns 404 for a bare path. Service may still be healthy. |
| 000 | No response | Connection failed from the host running the script: wrong host (e.g. checking NPMplus admin from off-LAN), firewall, or service bound to localhost only (e.g. NPMplus admin on :81 inside CT). |
Summary: 301 = HTTPS redirect (normal). 404 = incorrect port/path or NPMplus behaviour. 000 = connectivity/context (host, TLS, or port). Treat as failures only when the intended endpoint and client context match.
12. Remaining Operator Actions (Requires Proxmox/Server Access)
- Apply nginx fix and deploy config on VMID 5000: Run
./scripts/apply-remaining-operator-fixes.shfrom repo root (LAN/operator). Applied 2026-03-02: nginx fix and explorer config deploy completed successfully. - Token-aggregation DB: Run
./scripts/apply-token-aggregation-fix.shto create DB and run migrations inside VMID 5000. If the container has nopostgresuser, runcreatedb/migrations on the host where PostgreSQL runs, or point token-aggregationDATABASE_URLtoexplorer_dband run migrations there (see §2). - Restart token-aggregation after DB fix (script does this when postgres user exists).