Sync workspace: config, docs, scripts, CI, operator rules, and submodule pointers.
- Update dbis_core, cross-chain-pmm-lps, explorer-monorepo, metamask-integration, pr-workspace/chains
- Omit embedded publish git dirs and empty placeholders from index

Made-with: Cursor
@@ -29,15 +29,63 @@ One-line install (Debian/Ubuntu): `sudo apt install -y sshpass rsync dnsutils ip
- `backup-npmplus.sh` - Full NPMplus backup (database, API exports, certificates)
- `check-contracts-on-chain-138.sh` - Check that Chain 138 deployed contracts have bytecode on-chain (`cast code` for 31 addresses; requires `cast` and RPC access). Use `[RPC_URL]` or env `RPC_URL_138`; `--dry-run` lists addresses only (no RPC calls); `SKIP_EXIT=1` to exit 0 when RPC unreachable.
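The per-address pass/fail reduces to whether `cast code` returns more than the empty `0x` marker. A minimal sketch of that classification (the helper name is hypothetical; the real script adds RPC handling and `--dry-run`):

```shell
# Classify a `cast code <address>` result. An EOA or undeployed address
# returns the empty bytecode marker "0x"; anything longer is deployed code.
classify_code() {
  local code=$1
  if [ -z "$code" ] || [ "$code" = "0x" ]; then
    echo "missing"
  else
    echo "deployed"
  fi
}
```

In the live script the input would come from `cast code "$addr" --rpc-url "$RPC_URL"`.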
- `check-public-report-api.sh` - Verify token-aggregation report + networks JSON (not Blockscout). Probes `/api/v1/networks` first, then `/token-aggregation/api/v1/networks`, and uses the working prefix for all checks. Use `SKIP_EXIT=1` for diagnostic-only mode. Set `SKIP_BRIDGE_ROUTES=0` / `SKIP_BRIDGE_PREFLIGHT=0` for bridge assertions.
- `check-token-aggregation-chain138-api.sh` - Hits tokens, pools, quote, `bridge/routes`, `bridge/status`, `bridge/preflight`, and networks on both `/api/v1/*` and `/token-aggregation/api/v1/*`. `BASE_URL=https://explorer.d-bis.org` (default) or `http://192.168.11.140`.
- `generate-contract-verification-publish-matrix.mjs` - Generates the repo-wide all-network contract verification/publication backlog from `config/smart-contracts-master.json` and `cross-chain-pmm-lps/config/deployment-status.json`. Writes `reports/status/contract_verification_publish_matrix.json` and `docs/11-references/CONTRACT_VERIFICATION_AND_PUBLICATION_MATRIX_ALL_NETWORKS.md`.
- `generate-crosschain-publication-packs.mjs` - Groups the requested cross-chain publication packs (`ethereum-mainnet`, `optimism`, `bsc`, `polygon`, `base`) from the generated matrix and writes `reports/status/publication-packs/*/{pack.json,README.md}`.
- `check-publication-pack-explorer-status.mjs` - Queries the Etherscan-family explorers for the five publication packs and writes `reports/status/publication-pack-explorer-status.json` plus `docs/11-references/PUBLICATION_PACK_EXPLORER_STATUS.md`. Requires `ETHERSCAN_API_KEY`. The markdown intentionally shows `Unknown` counts so pack closure is not overstated.
- `generate-publication-actionable-backlog.mjs` - Separates the five requested publication packs into `auto-submittable`, `manual-or-external`, and `inventory/reference` buckets. Writes `reports/status/publication-actionable-backlog.json` and `docs/11-references/PUBLICATION_ACTIONABLE_BACKLOG.md`.
- `check-chain138-x402-readiness.sh` - RPC + explorer smoke plus **ERC-2612 / ERC-3009** on default **V2 then V1** USD tokens; `--strict` exits non-zero if not x402-ready. See [CHAIN138_X402_TOKEN_SUPPORT.md](../../docs/04-configuration/CHAIN138_X402_TOKEN_SUPPORT.md).
- `check-chain138-token-permit-support.sh` - **cast** checks **permit / ERC-3009** per token; defaults match x402 script (V2 then V1). Use for [CHAIN138_X402_TOKEN_SUPPORT.md](../../docs/04-configuration/CHAIN138_X402_TOKEN_SUPPORT.md).
- `validate-address-registry-xe-aliases.mjs` - Validates `web3_eth_iban` aliases in institutional registry examples (or paths you pass) using `web3-eth-iban`. Run: `node scripts/verify/validate-address-registry-xe-aliases.mjs`.
- `check-public-report-api.sh` - Verify token-aggregation report + networks JSON (not Blockscout). Probes `/api/v1/networks` first, then `/token-aggregation/api/v1/networks`, and uses the working prefix for all checks. Use `SKIP_EXIT=1` for diagnostic-only mode. Set `SKIP_BRIDGE_ROUTES=0`, `SKIP_BRIDGE_PREFLIGHT=0`, or `SKIP_GAS_REGISTRY=0` for bridge and gas-rollout assertions.
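The prefix fallback described above is simple: probe `/api/v1/networks`, fall back to `/token-aggregation/api/v1/networks`, and reuse whichever prefix answered for all later checks. A sketch with the HTTP probe stubbed out (`probe` stands in for the real `curl` check; names are illustrative):

```shell
# Return the first API prefix for which the supplied probe command succeeds.
pick_prefix() {
  local probe=$1 prefix
  for prefix in "/api/v1" "/token-aggregation/api/v1"; do
    if "$probe" "$prefix"; then
      echo "$prefix"
      return 0
    fi
  done
  return 1
}

# Example stub: pretend only the token-aggregation prefix is live.
fake_probe() { [ "$1" = "/token-aggregation/api/v1" ]; }
```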
- `check-info-defi-oracle-public.sh` - After publishing `info-defi-oracle-138/dist/`, confirms the public host serves the real Vite SPA (detects generic placeholder pages), `/agents`, and static agent files (`llms.txt`, `agent-hints.json`, `robots.txt`, `sitemap.xml`). Optional `jq` validates `agent-hints.json`. Set `INFO_SITE_BASE` for a non-default URL. If `/` passes but static paths look wrong through Cloudflare, run `scripts/cloudflare/purge-info-defi-oracle-cache.sh` (or `pnpm run cloudflare:purge-info-defi-oracle-cache`).
- `pmm-swap-quote-chain138.sh` - **On-chain PMM quote** for **`swapExactIn`**: calls `querySellBase` / `querySellQuote` on the DODO pool (not the REST `/quote` xy=k estimate). Prints **99% / 95% / 90% `minAmountOut`** and a **`cast send`** example. Requires **`cast`** + **`bc`**. Defaults: `RPC_URL_138`, pool `PMM_QUOTE_POOL` (or `0x9e89…40dC` cUSDT/cUSDC), trader `DEPLOYER_ADDRESS`. Example: `bash scripts/verify/pmm-swap-quote-chain138.sh --token-in 0x93E6…f22 --amount-in 100000000`.
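The 99% / 95% / 90% floors are plain basis-point math on the quoted raw output; a minimal sketch with illustrative values (not a live `querySellBase` quote):

```shell
# Compute a minAmountOut floor from a quoted raw output and a bps factor
# (9900 = keep 99%, 9500 = keep 95%, 9000 = keep 90%).
min_amount_out() {
  local quoted_out=$1 bps=$2
  echo $(( quoted_out * bps / 10000 ))
}

quoted=100000000   # e.g. 100 tokens at 6 decimals
for bps in 9900 9500 9000; do
  echo "floor@${bps}: $(min_amount_out "$quoted" "$bps")"
done
```

The real script uses `bc`, so amounts beyond bash's 64-bit integer range stay exact.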
- `check-token-aggregation-chain138-api.sh` - Hits tokens, pools, quote, `bridge/routes`, `bridge/status`, `bridge/preflight`, and networks on both `/api/v1/*` and `/token-aggregation/api/v1/*`, then probes planner-v2 on `/token-aggregation/api/v2/*` for provider capabilities, route selection, the live DODO v3 pilot execution path through `EnhancedSwapRouterV2`, and the public route-tree depth sanity for the funded canonical `cUSDC/USDC` DODO pool. `BASE_URL=https://explorer.d-bis.org` (default) or `http://192.168.11.140`.
- `check-dodo-api-chain138-route-support.sh` - Probes official DODO docs/contract inventory plus hosted SmartTrade quote support for Chain 138. Hosted quote probes read `DODO_API_KEY` (fallbacks: `DODO_SECRET_KEY`, `DODO_DEVELOPER_API_KEY`) and derive `USER_ADDR` from `PRIVATE_KEY` by default, so placing the DODO developer key in the root `.env` or exported shell alongside the deployer `PRIVATE_KEY` is the canonical repo path.
- `check-dodo-v3-planner-visibility-chain138.sh` - Verifies the Chain 138 DODO v3 / D3MM pilot is promoted into planner-v2 capability and route-matrix visibility, and that the canonical pilot pair now emits `EnhancedSwapRouterV2` executable calldata.
- `check-gru-transport-preflight.sh` - Operator-focused GRU runtime preflight. Calls `/api/v1/bridge/preflight`, prints blocked pairs with `eligibilityBlockers` / `runtimeMissingRequirements`, and fails unless all active pairs are runtime-ready or `ALLOW_BLOCKED=1` is set.
- `check-gru-v2-d3mm-expansion-status.sh` - Summarizes the GRU v2 / D3MM public-EVM rollout posture against the explicit chain-by-chain expansion plan, including whether bootstrap-ready chains already have tracked first-tier pool scaffolds.
- `build-gru-v2-first-tier-pool-scaffolds.sh` - Builds the canonical `config/gru-v2-first-tier-pool-scaffolds.json` inventory for missing first-tier public PMM rows. Use `--write` to refresh the tracked file.
- `print-gru-v2-first-tier-pool-scaffolds.sh` - Prints ad-hoc scaffold snippets for selected chain IDs. Useful for operator copy/paste, but the canonical tracked source is `config/gru-v2-first-tier-pool-scaffolds.json`.
- `report-mainnet-deployer-liquidity-and-routes.sh` - Read-only snapshot: deployer **ETH / USDC / USDT / cWUSDC / cWUSDT** balances, **DODO integration allowances**, **Balancer** vault USDC/USDT balances, **Aave V3** available USDC/USDT under aTokens (flash premium bps), **Curve 3pool** USDC/USDT depth, **Uniswap V3** USDC/USDT 0.01%/0.05% pool liquidity, **DODO PMM** reserves for all Mainnet `cWUSDT`/`cWUSDC` pairs in `deployment-status.json`, and a pointer for **1inch/DODO** keys. Requires `cast`, `jq`, `PRIVATE_KEY` (address derivation only).
- `plan-mainnet-usdt-usdc-via-cw-paths.sh` - Read-only Mainnet routing map: `cWUSDT/USDT`, `cWUSDC/USDC`, `cWUSDT/USDC`, `cWUSDC/USDT`, and **`cWUSDT/cWUSDC`** (`0xe944…68DB`), with two-hop and three-hop USDT↔USDC path recipes and optional `--with-examples` dry-run command lines.
- `run-mainnet-cwusdc-usdc-ladder-steps-1-3.sh` - Operator helper for the current staged Mainnet `cWUSDC/USDC` ladder. Runs preflight, prints the staged matched top-up dry-run, executes dry-runs for steps 1-3, and verifies the expected matched reserve state after each rebalance without sending any live flash swaps. Optional **`PMM_FLASH_EXIT_PRICE_CMD`** overrides the default `printf 1.12` for `--external-exit-price-cmd` (see `print-mainnet-cwusdc-usdc-pmm-sellbase-implied-price.sh` for on-chain pool-implied diagnostics only).
- `check-public-pmm-dry-run-readiness.sh` - Read-only checklist: mainnet `cWUSDT`/`cWUSDC` pools, `ETHEREUM_MAINNET_RPC` / `DODO_PMM_INTEGRATION_MAINNET`, Balancer and Aave V3 flash liquidity snapshots, Chain 138 flash-candidate note, and suggested `pmm-flash-push-break-even.mjs` templates.
- `print-mainnet-cwusdc-usdc-pmm-sellbase-implied-price.sh` - Prints one number: implied gross USDC per cWUSDC for a base sell size, using `getVaultReserve` + `_LP_FEE_RATE_` (same fallback as `run-mainnet-public-dodo-cw-swap.sh` when `querySellBase` reverts). Args: `[base_raw] [pool_address]`; pool defaults to canonical public `cWUSDC/USDC` vault or env **`PMM_CWUSDC_USDC_IMPLIED_PRICE_POOL`**. **Not** a real external unwind quote.
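The `_LP_FEE_RATE_` haircut is the only arithmetic worth pinning down here; a sketch in basis points, under the assumption that the reserve-derived gross amount from `getVaultReserve` is taken as given:

```shell
# Apply an LP fee (in bps) to a gross raw quote: net = gross * (1 - fee).
apply_lp_fee() {
  local gross_raw=$1 fee_bps=$2
  echo $(( gross_raw * (10000 - fee_bps) / 10000 ))
}

apply_lp_fee 1000000 30   # 30 bps fee on a 1.0 USDC (6-decimals) gross quote
```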
- `print-mainnet-cwusdc-external-exit-quote.sh` - Prints one number: **hosted** gross USDC per cWUSDC from **DODO SmartTrade** or **1inch v6** for mainnet `cWUSDC→USDC` at a raw base amount. Args: `dodo|1inch [base_raw]`. Keys: **`DODO_API_KEY`** (or `DODO_SECRET_KEY` / `DODO_DEVELOPER_API_KEY`) or **`ONEINCH_API_KEY`**; optional **`DODO_QUOTE_URL`**, **`ONEINCH_API_URL`**, **`DODO_SLIPPAGE`**, **`DODO_USER_ADDRESS`**. Use as `--external-exit-price-cmd` for execution-grade dry-runs. Same quoting logic as `packages/economics-toolkit` (`dodo-quote.ts`, `oneinch-quote.ts`). Alternative: `pnpm exec economics-toolkit swap-quote --engine oneinch|dodo --chain-id 1 --rpc … --token-in … --token-out … --amount-in …`.
- `check-gas-public-pool-status.sh` - Operator-focused gas-native rollout summary. Combines the active GRU transport gas lanes with `cross-chain-pmm-lps/config/deployment-status.json`, then reports per-lane DODO wrapped-native and stable-quote pool state, Uniswap v3 reference visibility, 1inch exposure, and runtime/env blockers. The summary now distinguishes active vs deferred gas transport pairs, so deferred lanes such as `wemix` do not pollute the active counts. Use `--json` for machine-readable output.
- `check-gas-rollout-deployment-matrix.sh` - Cross-checks the gas-family rollout against live bytecode on Chain 138 and the destination chains. Reports which canonical contracts, mirrored contracts, bridge refs, verifier refs, and vault refs are actually live, includes the deployed generic gas verifier on Chain 138 when present, distinguishes active vs deferred gas transport pairs, resolves each lane's CCIP selector, checks whether the live Chain 138 bridge has that destination wired, and classifies the observed L1 bridge read surface as `full_accounting`, `partial_destination_only`, `admin_only`, or `unknown_or_incompatible`. Use `--json` for machine-readable output.
- `../deployment/print-gas-l1-destination-wiring-commands.sh` - Prints the exact `configureDestination(address,uint64,address,bool)` commands still required on the live Chain 138 `CWMultiTokenBridgeL1` for the active gas-native rollout lanes. Uses the same active transport overlay and selector metadata as the deployment matrix. Use `--json` for machine-readable output.
- `../deployment/run-gas-l1-destination-wiring.sh` - Operator-ready wrapper for the same 10 active gas-lane `configureDestination(address,uint64,address,bool)` writes on the live Chain 138 bridge. Dry-run by default; only broadcasts when `EXECUTE_GAS_L1_DESTINATIONS=1` is set.
- `check-gru-global-priority-rollout.sh` - Compares the ranked GRU global-priority currency rollout queue against the current repo state: live manifest, `c* -> cW*` mapping, and transport overlay. Use `--wave=wave1` to focus on the next promotion wave or `--json` for machine-readable output.
- `check-gru-v2-public-protocols.sh` - Canonical GRU v2 public-network status surface. Summarizes the desired public EVM cW mesh, loaded cW suites, Wave 1 transport state, and the current public-protocol truth for `Uniswap v3`, `Balancer`, `Curve 3`, `DODO PMM`, and `1inch`. Use `--json` for machine-readable output or `--write-explorer-config` to regenerate `explorer-monorepo/backend/api/rest/config/metamask/GRU_V2_PUBLIC_DEPLOYMENT_STATUS.json`.
- `check-gru-v2-deployment-queue.sh` - Operator-grade deployment queue for what is left to finish the public-network GRU v2 rollout. Breaks the remaining work down by Wave 1 asset, destination chain, and protocol stage, and now includes a blocker `resolutionMatrix` for missing cW suites, pending Wave 1 transport, public pool rollout, protocol staging, backlog assets, and Solana. Use `--json` for machine-readable output or `--write-explorer-config` to regenerate `explorer-monorepo/backend/api/rest/config/metamask/GRU_V2_DEPLOYMENT_QUEUE.json`.
- `check-gru-v2-d3mm-expansion-status.sh` - Expansion-focused status summary for the explicit GRU v2 / D3MM public-EVM rollout order. Reads `config/gru-v2-d3mm-network-expansion-plan.json`, `cross-chain-pmm-lps/config/deployment-status.json`, and `cross-chain-pmm-lps/config/pool-matrix.json`, then reports which priority chains are already live-first-tier, only partially live, bootstrap-ready, or still blocked. Use `--json` for machine-readable output.
- `print-gru-v2-first-tier-pool-scaffolds.sh` - Prints JSON snippets for the missing first-tier public PMM rows from the GRU v2 / D3MM expansion plan. This is scaffold output only: replace the zero pool address and keep `publicRoutingEnabled=false` until the pool is actually deployed and seeded.
- `check-gru-v2-deployer-funding-status.sh` - Current deployer-wallet funding posture for the remaining GRU v2 rollout. Checks Mainnet, Cronos, Arbitrum, and Chain 138 balances, then flags the live funding blockers for public deployment work and canonical Chain 138 liquidity seeding. Use `--json` for machine-readable output.
- `check-cw-evm-deployment-mesh.sh` - Reports the public EVM cW token deployment mesh recorded in `smom-dbis-138/.env`: expected 12-token suites per chain, missing addresses, and on-chain bytecode presence when RPCs are available. Current expected result is `10/11` loaded targets with `10/10` full sets across Mainnet, Optimism, Cronos, BSC, Gnosis, Polygon, Base, Arbitrum, Celo, and Avalanche; `Wemix` remains the only desired target without a loaded cW suite.
- `check-cw-public-pool-status.sh` - Reads `cross-chain-pmm-lps/config/deployment-status.json` and reports how many chains have cW tokens, bridge availability, and any recorded public-chain `pmmPools`. Current expected result is that the tracked `cW*` token mesh exists on several chains and the first Mainnet DODO PMM pool wave is recorded (including `cWUSDT/cWUSDC` and the first six non-USD Wave 1 rows), while the broader public-chain mesh remains incomplete.
- `check-mainnet-public-dodo-cw-bootstrap-pools.sh` - Verifies the eleven recorded Mainnet DODO `cW*` bootstrap pools (including **`cwusdt-cwusdc`**) are still mapped by the integration, have non-zero reserves, and remain dry-run routable through `run-mainnet-public-dodo-cw-swap.sh`.
- `check-mainnet-pmm-peg-bot-readiness.sh` - Reads `cross-chain-pmm-lps/config/deployment-status.json` (chain `1`), confirms `eth_chainId` is 1, checks integration mapping and reserves for each recorded pool, and flags USD-class cW vs USDC/USDT reserve imbalance against `peg-bands.json`. Optional: `PMM_TRUU_BASE_TOKEN` + `PMM_TRUU_QUOTE_TOKEN`, `MIN_POOL_RESERVE_RAW`, `SKIP_EXIT=1`. See [MAINNET_PMM_TRUU_CWUSD_PEG_AND_BOT_RUNBOOK.md](../../docs/03-deployment/MAINNET_PMM_TRUU_CWUSD_PEG_AND_BOT_RUNBOOK.md). Included in `check-full-deployment-status.sh` when `ETHEREUM_MAINNET_RPC` and `DODO_PMM_INTEGRATION_MAINNET` are set (after `load-env`).
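The imbalance flag is a band check on the reserve ratio; a sketch in basis points (band bounds are illustrative, not the tracked `peg-bands.json` values):

```shell
# True when reserve_a/reserve_b, expressed in bps, stays inside [lo_bps, hi_bps].
within_band() {
  local reserve_a=$1 reserve_b=$2 lo_bps=$3 hi_bps=$4
  local ratio_bps=$(( reserve_a * 10000 / reserve_b ))
  [ "$ratio_bps" -ge "$lo_bps" ] && [ "$ratio_bps" -le "$hi_bps" ]
}

within_band 1005000 1000000 9500 10500 && echo "balanced"
within_band 1500000 1000000 9500 10500 || echo "imbalanced"
```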
- `../deployment/deploy-mainnet-pmm-cw-truu-pool.sh` - Mainnet DODO PMM: create and seed **cWUSDT/TRUU** or **cWUSDC/TRUU** (`TRUU_MAINNET` defaults to canonical Truth token). Defaults: fee 30 bps, `k=0.5e18`, TWAP off. Requires correct `--initial-price` (DODO `i`). Use `--dry-run` first.
- `../deployment/add-mainnet-truu-pmm-topup.sh` - Add liquidity to an **existing** cW/TRUU pool using max wallet balances that fit the reference USD ratio (see runbook section 11). Exits 0 if either leg balance is zero.
- `../deployment/compute-mainnet-truu-liquidity-amounts.sh` - Given **USD per leg**, prints `base_raw` / `quote_raw` and suggested `deploy-mainnet-pmm-cw-truu-pool.sh` lines for cWUSDT/TRUU and cWUSDC/TRUU (runbook section 11.1).
- `../deployment/add-mainnet-truu-pmm-fund-both-pools.sh` - Funds **both** volatile pools sequentially with optional `--reserve-bps` (runbook: partial add + trading inventory).
- `../deployment/compute-mainnet-truu-pmm-seed-amounts.sh` - Given **USD notional per leg** and **TRUU/USD** (per full token), prints `--base-amount` / `--quote-amount` for equal **dollar** liquidity on each side (not equal raw 1:1 tokens). See `MAINNET_PMM_TRUU_CWUSD_PEG_AND_BOT_RUNBOOK.md` section 9.
- `check-full-deployment-status.sh` - Aggregates the current full-deployment posture across config validation, the Chain 138 canonical on-chain inventory, public token-aggregation health, GRU v2 readiness, the GRU global rollout queue, the GRU v2 public-protocol matrix, the deployer-funding gate, the public EVM cW token mesh, the gas-native c* / cW* rollout summary, and the public-chain cW* pool graph. It fails until the remaining deployment blockers are cleared; use `SKIP_EXIT=1` or `--json` for reporting.
- `../deployment/run-progressive-router-v2-swaps-chain138.sh` - Live operator swap ladder for the public Chain 138 planner-v2 path. Fetches `/token-aggregation/api/v2/routes/internal-execution-plan`, ensures allowance, executes router-v2 calldata on-chain, and prints actual in/out for a progressive set of USD notionals (default: `10 50 100 250 500 1000`). Requires `PRIVATE_KEY`; optional `BASE_URL`, `RPC_URL_138`, `ENHANCED_SWAP_ROUTER_V2_ADDRESS`.
- `check-cstar-v2-transport-stack.sh` - Predeploy Forge verifier for the `c* V2` bridge stack. Runs the base V2 token suite, legacy reserve-verifier compatibility suite, V2 reserve/verifier full L1/L2 round-trip suite, and the core `CWMultiTokenBridge` round-trip suite.
- `check-gru-v2-chain138-readiness.sh` - Live Chain 138 readiness gate for the deployed `cUSDT V2` / `cUSDC V2` addresses. Verifies bytecode, GRU registry activation, V2 identity/signing surface, `forwardCanonical`, IPFS-backed `tokenURI`, and the governance/supervision metadata ABI expected by the latest GRU V2 standards.
- `run-repo-green-test-path.sh` - Local deterministic green-path aggregate behind root `pnpm test`. Runs config validation, then the focused `smom-dbis-138` contract and service CI targets.
- `audit-npmplus-ssl-all-instances.sh` - Audits the documented NPMplus fleet for `no_certificate`, `expired`, `cert_domain_mismatch`, `missing_cert_record`, and `ssl_not_forced`. `ssl_not_forced` is expected for RPC / WebSocket-style hosts where plain HTTP or non-browser clients must keep working.
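The audit labels follow a first-match precedence; a sketch of that classification order (inputs are 0/1 flags already gathered from the NPMplus API; names are hypothetical, and `missing_cert_record` is omitted for brevity):

```shell
# Map gathered certificate facts to an audit finding, most severe first.
classify_ssl() {
  local has_cert=$1 expired=$2 domain_match=$3 force_ssl=$4
  if   [ "$has_cert" = 0 ]; then echo "no_certificate"
  elif [ "$expired" = 1 ]; then echo "expired"
  elif [ "$domain_match" = 0 ]; then echo "cert_domain_mismatch"
  elif [ "$force_ssl" = 0 ]; then echo "ssl_not_forced"
  else echo "ok"; fi
}
```

As the entry notes, `ssl_not_forced` is acceptable for RPC/WebSocket hosts, so a real fixer filters that finding by host class.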
- `../nginx-proxy-manager/fix-npmplus-ssl-issues.sh` - Applies the primary NPMplus SSL remediation: enables Force SSL + HSTS for browser-facing hosts that already have certs, and requests or reuses certificates for hosts missing them or bound to the wrong certificate. It intentionally leaves Force SSL off for RPC / WebSocket endpoints such as `rpc-core.d-bis.org`, `rpc.defi-oracle.io`, and `wss.*`.
- `xdc-zero-chain138-preflight.sh` - `eth_chainId` HTTP checks for `XDC_PARENTNET_URL`/`PARENTNET_URL` and `RPC_URL_138`; optional `ETHEREUM_MAINNET_RPC`, `BSC_RPC_URL`. See [CHAIN138_XDC_ZERO_BRIDGE_RUNBOOK](../../docs/03-deployment/CHAIN138_XDC_ZERO_BRIDGE_RUNBOOK.md).
- `../xdc-zero/merge-endpointconfig-chain138.sh` - Merge `chain138` into XDC-Zero `endpointconfig.json` (optional `xdcparentnet.registers` append). Set `XDC_ZERO_ENDPOINT_DIR`; use `--dry-run`. See [config/xdc-zero/README.md](../../config/xdc-zero/README.md).
- `check-completion-status.sh` - One-command summary of repo-completable checks, public report API health, and pointers to operator/external remaining work.
- `../xdc-zero/merge-endpointconfig-chain138.sh` - Merge `chain138` into XDC-Zero `endpointconfig.json` and append `xdcparentnet.registers` from fragments. Pass path to `endpointconfig.json` or `XDC_ZERO_ENDPOINT_DIR`; `--dry-run`. See [config/xdc-zero/README.md](../../config/xdc-zero/README.md).
- `../xdc-zero/deploy-endpoint-chain138.sh` - Hardhat deploy Endpoint stack to `--network chain138` (`XDC_ZERO_REPO`, `PRIVATE_KEY`). See [scripts/xdc-zero/README.md](../xdc-zero/README.md).
- `../xdc-zero/run-xdc-zero-138-operator-sequence.sh` - Prints full XDC Zero + 138 operator order.
- `../validation/validate-xdc-zero-config.sh` - `jq` parse check for `config/xdc-zero/*.json`.
- `check-completion-status.sh` - One-command summary of repo-completable checks, public report API health, and pointers to operator/external remaining work. Set `INCLUDE_INFO_DEFI_PUBLIC_VERIFY=1` to also run `check-info-defi-oracle-public.sh` (needs HTTPS to `INFO_SITE_BASE` / production).
- `reconcile-env-canonical.sh` - Emit recommended .env lines for Chain 138 (canonical source of truth); use to reconcile `smom-dbis-138/.env` with [CONTRACT_ADDRESSES_REFERENCE](../../docs/11-references/CONTRACT_ADDRESSES_REFERENCE.md). Usage: `./scripts/verify/reconcile-env-canonical.sh [--print]`
- `print-gas-runtime-env-canonical.sh` - Emit the non-secret gas-lane runtime env scaffold from `gru-transport-active.json` plus live canonical `totalSupply()` on Chain 138. Uses per-lane gas caps from the registry, defaults outstanding / escrowed to the current canonical supply, defaults treasury-backed / treasury-cap to `0`, and leaves the active gas verifier envs commented until the live L1 bridge is explicitly attached.
- `check-deployer-balance-blockscout-vs-rpc.sh` - Compare deployer native balance from Blockscout API vs RPC (to verify index matches current chain); see [EXPLORER_AND_BLOCKSCAN_REFERENCE](../../docs/11-references/EXPLORER_AND_BLOCKSCAN_REFERENCE.md)
- `sync-blockscout-address-labels-from-registry.sh` - Plan or sync Blockscout address labels from `address-registry-entry` JSON (`config/dbis-institutional/schemas/address-registry-entry.schema.json`: `blockscout.label`, `status: active`). Supports `--mode=http`, `--mode=db`, and `--mode=auto`; on the self-hosted Chain 138 explorer, `db` is the right live mode because `/api/v1/*` is token-aggregation, not a native Blockscout label-write API. DB mode writes primary labels into Blockscout `public.address_names` through CT `5000`. See `config/dbis-institutional/README.md` and [OMNL_DBIS_CORE_CHAIN138_SMART_VAULT_RTGS_RUNBOOK.md](../../docs/03-deployment/OMNL_DBIS_CORE_CHAIN138_SMART_VAULT_RTGS_RUNBOOK.md).
- `check-dependencies.sh` - Verify required tools (bash, curl, jq, openssl, ssh)
@@ -52,11 +100,19 @@ One-line install (Debian/Ubuntu): `sudo apt install -y sshpass rsync dnsutils ip
## Task runners (no LAN vs from LAN)
- **From anywhere (no LAN/creds):** `../run-completable-tasks-from-anywhere.sh` — runs config validation, on-chain contract check, run-all-validation --skip-genesis, public report API diagnostics, and reconcile-env-canonical.
- **Completion snapshot:** `check-completion-status.sh` — summarizes what is complete locally and what still depends on operator or external execution.
- **Full LAN execution order:** `../run-full-operator-completion-from-lan.sh` — starts with the token-aggregation `/api/v1` repair, then Wave 0, verification, E2E, and optional operator-only deployment steps. Use `--dry-run` first.
- **From anywhere (no LAN/creds):** `../run-completable-tasks-from-anywhere.sh` — runs config validation, on-chain contract check, run-all-validation --skip-genesis, public report API diagnostics, reconcile-env-canonical, and the gas runtime env scaffold.
- **Completion snapshot:** `check-completion-status.sh` — summarizes what is complete locally and what still depends on operator or external execution. Optional: `INCLUDE_INFO_DEFI_PUBLIC_VERIFY=1` adds the public info hub check.
- **Full LAN execution order:** `../run-full-operator-completion-from-lan.sh` — starts with the token-aggregation `/api/v1` repair, then Wave 0, verification, E2E, non-fatal **info.defi-oracle.io** public smoke, and optional operator-only deployment steps. Use `--skip-info-public` without outbound HTTPS to the public hostname. Use `--dry-run` first.
- **From LAN (NPM_PASSWORD, optional PRIVATE_KEY):** `../run-operator-tasks-from-lan.sh` — runs W0-1 (NPMplus RPC fix), W0-3 (NPMplus backup), O-1 (Blockscout verification); use `--dry-run` to print commands only. See [ALL_TASKS_DETAILED_STEPS](../../docs/00-meta/ALL_TASKS_DETAILED_STEPS.md).
## Common operator patterns
- **Primary NPMplus SSL audit/fix:** `bash scripts/verify/audit-npmplus-ssl-all-instances.sh` then `bash scripts/nginx-proxy-manager/fix-npmplus-ssl-issues.sh --dry-run` and rerun without `--dry-run` on the primary instance. The scripts now handle both JSON bearer-token auth and cookie-session auth from NPMplus, and the fixer can renew expired cert bindings as well as fill missing certs, wrong-cert bindings, and Force SSL gaps.
- **Tunnel-backed NPM hosts:** if a hostname is publicly served by a proxied Cloudflare tunnel `CNAME` to `*.cfargotunnel.com`, the SSL audit intentionally ignores origin-cert expiry or mismatch on that NPM host. Public TLS is terminated by Cloudflare in that mode, and the tunnel origin uses `noTLSVerify` by design.
- **Other NPMplus instances:** the fleet scripts already assume a shared `NPM_EMAIL` across instances. Rerun the same fix script with `NPM_URL=https://<ip>:81` and the matching per-instance password env vars such as `NPM_PASSWORD_SECONDARY`, `NPM_PASSWORD_ALLTRA_HYBX`, `NPM_PASSWORD_FOURTH`, or `NPM_PASSWORD_MIFOS`. If audit shows `auth_failed`, the repo cannot finish that from here without the correct UI password for that instance.
- **Alltra/HYBX tunnel migration:** `bash scripts/cloudflare/configure-alltra-hybx-tunnel-and-dns.sh` is the preferred public-path repair for `rpc-alltra*`, `rpc-hybx*`, `rpc-core-2`, and the related service names on `192.168.11.169`. The script now replaces legacy direct `A` records with proxied tunnel `CNAME`s when needed.
- **RPC TLS mismatch:** if `rpc.defi-oracle.io` has a certificate attached but the browser still reports a hostname mismatch, the fix is to request or assign a certificate whose SAN/CN actually includes `rpc.defi-oracle.io`; Force SSL toggles alone will not fix that.
## Environment
Set variables in `.env` or export before running. See project root `.env.example` and [docs/04-configuration/VERIFICATION_GAPS_AND_TODOS.md](../../docs/04-configuration/VERIFICATION_GAPS_AND_TODOS.md).
scripts/verify/audit-deployer-token-access.sh (executable file, 690 lines added)
@@ -0,0 +1,690 @@
#!/usr/bin/env bash
# Audit deployer balances plus deployer access posture for Chain 138 c* and public-chain cW* tokens.
#
# Exports:
# - balances JSON
# - balances CSV
# - access JSON
# - access CSV
#
# Usage:
# bash scripts/verify/audit-deployer-token-access.sh
# bash scripts/verify/audit-deployer-token-access.sh --output-dir reports/deployer-token-audit

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
export PROJECT_ROOT

OUTPUT_DIR="${PROJECT_ROOT}/reports/deployer-token-audit"

while [[ $# -gt 0 ]]; do
  case "$1" in
    --output-dir)
      [[ $# -ge 2 ]] || { echo "Missing value for --output-dir" >&2; exit 2; }
      OUTPUT_DIR="$2"
      shift 2
      ;;
    *)
      echo "Unknown argument: $1" >&2
      exit 2
      ;;
  esac
done

source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"

command -v cast >/dev/null 2>&1 || { echo "Missing required command: cast" >&2; exit 1; }
command -v node >/dev/null 2>&1 || { echo "Missing required command: node" >&2; exit 1; }

if [[ -n "${DEPLOYER_ADDRESS:-}" ]]; then
  AUDIT_DEPLOYER_ADDRESS="$DEPLOYER_ADDRESS"
else
  [[ -n "${PRIVATE_KEY:-}" ]] || {
    echo "Missing PRIVATE_KEY or DEPLOYER_ADDRESS in environment" >&2
    exit 1
  }
  AUDIT_DEPLOYER_ADDRESS="$(cast wallet address "$PRIVATE_KEY")"
fi

[[ -n "$AUDIT_DEPLOYER_ADDRESS" ]] || {
  echo "Failed to derive deployer address" >&2
  exit 1
}

mkdir -p "$OUTPUT_DIR"

export OUTPUT_DIR
export AUDIT_DEPLOYER_ADDRESS
export AUDIT_TIMESTAMP_UTC="$(date -u +%Y%m%dT%H%M%SZ)"

node <<'NODE'
const fs = require('fs');
const path = require('path');
const util = require('util');
const { execFile } = require('child_process');

const execFileAsync = util.promisify(execFile);

const projectRoot = process.env.PROJECT_ROOT;
const outputDir = path.resolve(process.env.OUTPUT_DIR);
const generatedAt = process.env.AUDIT_TIMESTAMP_UTC;
const deployerAddress = process.env.AUDIT_DEPLOYER_ADDRESS;

const roleHashes = {
  defaultAdmin: '0x' + '0'.repeat(64),
  minter: null,
  burner: null,
  pauser: null,
  bridge: null,
  governance: null,
  jurisdictionAdmin: null,
  regulator: null,
  supervisor: null,
  emergencyAdmin: null,
  supplyAdmin: null,
  metadataAdmin: null,
};

const publicChainMeta = {
  '1': {
    chainKey: 'MAINNET',
    walletEnv: 'DEST_FUND_WALLET_ETHEREUM',
    rpcEnvCandidates: ['ETH_MAINNET_RPC_URL', 'ETHEREUM_MAINNET_RPC'],
    fallbackRpc: 'https://eth.llamarpc.com',
    fallbackLabel: 'fallback:eth.llamarpc.com',
    spenderEnv: 'CW_BRIDGE_MAINNET',
  },
  '10': {
    chainKey: 'OPTIMISM',
    walletEnv: 'DEST_FUND_WALLET_OPTIMISM',
    rpcEnvCandidates: ['OPTIMISM_MAINNET_RPC', 'OPTIMISM_RPC_URL'],
    fallbackRpc: 'https://mainnet.optimism.io',
    fallbackLabel: 'fallback:mainnet.optimism.io',
    spenderEnv: 'CW_BRIDGE_OPTIMISM',
  },
  '25': {
    chainKey: 'CRONOS',
    walletEnv: 'DEST_FUND_WALLET_CRONOS',
    rpcEnvCandidates: ['CRONOS_CW_VERIFY_RPC_URL', 'CRONOS_RPC_URL', 'CRONOS_RPC'],
    fallbackRpc: 'https://evm.cronos.org',
    fallbackLabel: 'fallback:evm.cronos.org',
    spenderEnv: 'CW_BRIDGE_CRONOS',
  },
  '56': {
    chainKey: 'BSC',
    walletEnv: 'DEST_FUND_WALLET_BSC',
    rpcEnvCandidates: ['BSC_RPC_URL', 'BSC_MAINNET_RPC'],
    fallbackRpc: 'https://bsc-dataseed.binance.org',
    fallbackLabel: 'fallback:bsc-dataseed.binance.org',
    spenderEnv: 'CW_BRIDGE_BSC',
  },
  '100': {
    chainKey: 'GNOSIS',
    walletEnv: 'DEST_FUND_WALLET_GNOSIS',
    rpcEnvCandidates: ['GNOSIS_MAINNET_RPC', 'GNOSIS_RPC_URL', 'GNOSIS_RPC'],
    fallbackRpc: 'https://rpc.gnosischain.com',
    fallbackLabel: 'fallback:rpc.gnosischain.com',
    spenderEnv: 'CW_BRIDGE_GNOSIS',
  },
  '137': {
    chainKey: 'POLYGON',
    walletEnv: 'DEST_FUND_WALLET_POLYGON',
    rpcEnvCandidates: ['POLYGON_MAINNET_RPC', 'POLYGON_RPC_URL'],
    fallbackRpc: 'https://polygon-rpc.com',
    fallbackLabel: 'fallback:polygon-rpc.com',
    spenderEnv: 'CW_BRIDGE_POLYGON',
  },
  '42220': {
    chainKey: 'CELO',
    walletEnv: 'DEST_FUND_WALLET_CELO',
    rpcEnvCandidates: ['CELO_RPC', 'CELO_MAINNET_RPC'],
    fallbackRpc: 'https://forno.celo.org',
    fallbackLabel: 'fallback:forno.celo.org',
    spenderEnv: 'CW_BRIDGE_CELO',
  },
  '43114': {
    chainKey: 'AVALANCHE',
    walletEnv: 'DEST_FUND_WALLET_AVALANCHE',
    rpcEnvCandidates: ['AVALANCHE_RPC_URL', 'AVALANCHE_MAINNET_RPC', 'AVALANCHE_RPC'],
    fallbackRpc: 'https://api.avax.network/ext/bc/C/rpc',
    fallbackLabel: 'fallback:api.avax.network/ext/bc/C/rpc',
    spenderEnv: 'CW_BRIDGE_AVALANCHE',
  },
  '8453': {
    chainKey: 'BASE',
    walletEnv: 'DEST_FUND_WALLET_BASE',
    rpcEnvCandidates: ['BASE_MAINNET_RPC', 'BASE_RPC_URL'],
    fallbackRpc: 'https://mainnet.base.org',
    fallbackLabel: 'fallback:mainnet.base.org',
    spenderEnv: 'CW_BRIDGE_BASE',
  },
  '42161': {
    chainKey: 'ARBITRUM',
    walletEnv: 'DEST_FUND_WALLET_ARBITRUM',
    rpcEnvCandidates: ['ARBITRUM_MAINNET_RPC', 'ARBITRUM_RPC_URL'],
    fallbackRpc: 'https://arb1.arbitrum.io/rpc',
    fallbackLabel: 'fallback:arb1.arbitrum.io/rpc',
    spenderEnv: 'CW_BRIDGE_ARBITRUM',
  },
};

function quoteCsv(value) {
  const stringValue = value == null ? '' : String(value);
  return `"${stringValue.replace(/"/g, '""')}"`;
}

function formatUnits(rawValue, decimals) {
  if (rawValue == null) return null;
  const raw = BigInt(rawValue);
  const base = 10n ** BigInt(decimals);
  const whole = raw / base;
  const fraction = raw % base;
  if (fraction === 0n) return whole.toString();
  const padded = fraction.toString().padStart(decimals, '0').replace(/0+$/, '');
  return `${whole.toString()}.${padded}`;
}

function normalizeAddress(value) {
  if (!value) return null;
  const trimmed = String(value).trim();
  if (!trimmed) return null;
  return trimmed;
}

function stripQuotes(value) {
  if (value == null) return null;
  const trimmed = String(value).trim();
  if (trimmed.startsWith('"') && trimmed.endsWith('"')) {
    return trimmed.slice(1, -1);
  }
  return trimmed;
}

function parseBigIntOutput(value) {
  if (value == null) return null;
  const trimmed = String(value).trim();
  if (!trimmed) return null;
  const token = trimmed.split(/\s+/)[0];
  return BigInt(token);
}

async function runCast(args, { allowFailure = false } = {}) {
  try {
    const { stdout } = await execFileAsync('cast', args, {
      encoding: 'utf8',
      timeout: 20000,
      maxBuffer: 1024 * 1024,
    });
    return stdout.trim();
  } catch (error) {
    if (allowFailure) {
      return null;
    }
    const stderr = error.stderr ? `\n${String(error.stderr).trim()}` : '';
    throw new Error(`cast ${args.join(' ')} failed${stderr}`);
  }
}

async function buildRoleHashes() {
  const keys = Object.keys(roleHashes).filter((key) => key !== 'defaultAdmin');
  await Promise.all(keys.map(async (key) => {
    const labelMap = {
      minter: 'MINTER_ROLE',
      burner: 'BURNER_ROLE',
      pauser: 'PAUSER_ROLE',
      bridge: 'BRIDGE_ROLE',
      governance: 'GOVERNANCE_ROLE',
      jurisdictionAdmin: 'JURISDICTION_ADMIN_ROLE',
      regulator: 'REGULATOR_ROLE',
      supervisor: 'SUPERVISOR_ROLE',
      emergencyAdmin: 'EMERGENCY_ADMIN_ROLE',
      supplyAdmin: 'SUPPLY_ADMIN_ROLE',
      metadataAdmin: 'METADATA_ADMIN_ROLE',
    };
    roleHashes[key] = await runCast(['keccak', labelMap[key]]);
  }));
}

function resolvePublicChain(chainId, chainName) {
  const meta = publicChainMeta[chainId];
  if (!meta) {
    return {
      chainId: Number(chainId),
      chainName,
      walletAddress: deployerAddress,
      walletSource: 'AUDIT_DEPLOYER_ADDRESS',
      rpcUrl: null,
      rpcSource: 'missing',
      spenderAddress: null,
      spenderLabel: null,
    };
  }

  let rpcUrl = null;
  let rpcSource = 'missing';
  for (const key of meta.rpcEnvCandidates) {
    if (process.env[key]) {
      rpcUrl = process.env[key];
      rpcSource = key;
      break;
    }
  }
  if (!rpcUrl && meta.fallbackRpc) {
    rpcUrl = meta.fallbackRpc;
    rpcSource = meta.fallbackLabel;
  }

  const walletAddress = normalizeAddress(process.env[meta.walletEnv]) || deployerAddress;
  const walletSource = process.env[meta.walletEnv] ? meta.walletEnv : 'AUDIT_DEPLOYER_ADDRESS';
  const spenderAddress = normalizeAddress(process.env[meta.spenderEnv]);

  return {
    chainId: Number(chainId),
    chainKey: meta.chainKey,
    chainName,
    walletAddress,
    walletSource,
    rpcUrl,
    rpcSource,
    spenderAddress,
    spenderLabel: meta.spenderEnv,
  };
}

async function callUint(contract, rpcUrl, signature, extraArgs = []) {
  const output = await runCast(['call', contract, signature, ...extraArgs, '--rpc-url', rpcUrl], { allowFailure: true });
  if (output == null || output === '') return null;
  return parseBigIntOutput(output);
}

async function callBool(contract, rpcUrl, signature, extraArgs = []) {
  const output = await runCast(['call', contract, signature, ...extraArgs, '--rpc-url', rpcUrl], { allowFailure: true });
  if (output == null || output === '') return null;
  if (output === 'true') return true;
  if (output === 'false') return false;
  return null;
}

async function callAddress(contract, rpcUrl, signature, extraArgs = []) {
  const output = await runCast(['call', contract, signature, ...extraArgs, '--rpc-url', rpcUrl], { allowFailure: true });
  if (output == null || output === '') return null;
  const address = normalizeAddress(stripQuotes(output));
  return address && /^0x[a-fA-F0-9]{40}$/.test(address) ? address : null;
}

async function callString(contract, rpcUrl, signature, extraArgs = []) {
  const output = await runCast(['call', contract, signature, ...extraArgs, '--rpc-url', rpcUrl], { allowFailure: true });
  if (output == null || output === '') return null;
  return stripQuotes(output);
}

// Run `iterator` over `items` with at most `limit` concurrent invocations,
// preserving result order.
async function mapLimit(items, limit, iterator) {
  const results = new Array(items.length);
  let nextIndex = 0;

  async function worker() {
    for (;;) {
      const current = nextIndex;
      nextIndex += 1;
      if (current >= items.length) return;
      results[current] = await iterator(items[current], current);
    }
  }

  const workerCount = Math.min(limit, items.length || 1);
  await Promise.all(Array.from({ length: workerCount }, () => worker()));
  return results;
}

function roleColumns(row) {
  return {
    default_admin: row.roles.defaultAdmin,
    minter: row.roles.minter,
    burner: row.roles.burner,
    pauser: row.roles.pauser,
    bridge: row.roles.bridge,
    governance: row.roles.governance,
    jurisdiction_admin: row.roles.jurisdictionAdmin,
    regulator: row.roles.regulator,
    supervisor: row.roles.supervisor,
    emergency_admin: row.roles.emergencyAdmin,
    supply_admin: row.roles.supplyAdmin,
    metadata_admin: row.roles.metadataAdmin,
  };
}

async function auditToken(token) {
  const rpcUrl = token.rpcUrl;
  const balanceRowBase = {
    category: token.category,
    chain_id: token.chainId,
    chain_name: token.chainName,
    token_key: token.tokenKey,
    token_address: token.tokenAddress,
    wallet_address: token.walletAddress,
    wallet_source: token.walletSource,
    rpc_source: token.rpcSource,
  };

  const accessRowBase = {
    category: token.category,
    chain_id: token.chainId,
    chain_name: token.chainName,
    token_key: token.tokenKey,
    token_address: token.tokenAddress,
    wallet_address: token.walletAddress,
    wallet_source: token.walletSource,
    rpc_source: token.rpcSource,
    spender_label: token.spenderLabel,
    spender_address: token.spenderAddress,
  };

  if (!rpcUrl) {
    return {
      balance: {
        ...balanceRowBase,
        decimals: null,
        balance_raw: null,
        balance_formatted: null,
        query_status: 'missing_rpc',
      },
      access: {
        ...accessRowBase,
        owner: null,
        owner_matches_wallet: null,
        allowance_raw: null,
        allowance_formatted: null,
        query_status: 'missing_rpc',
        roles: Object.fromEntries(Object.keys(roleHashes).map((key) => [key, null])),
      },
    };
  }

  const decimalsPromise = callUint(token.tokenAddress, rpcUrl, 'decimals()(uint8)');
  const balancePromise = callUint(token.tokenAddress, rpcUrl, 'balanceOf(address)(uint256)', [token.walletAddress]);
  const ownerPromise = callAddress(token.tokenAddress, rpcUrl, 'owner()(address)');
  const symbolPromise = callString(token.tokenAddress, rpcUrl, 'symbol()(string)');

  const allowancePromise =
    token.spenderAddress
      ? callUint(token.tokenAddress, rpcUrl, 'allowance(address,address)(uint256)', [token.walletAddress, token.spenderAddress])
      : Promise.resolve(null);

  const roleNames = Object.keys(roleHashes);
  const rolePairs = await mapLimit(roleNames, 4, async (roleName) => {
    const roleValue = await callBool(
      token.tokenAddress,
      rpcUrl,
      'hasRole(bytes32,address)(bool)',
      [roleHashes[roleName], token.walletAddress],
    );
    return [roleName, roleValue];
  });

  const roles = Object.fromEntries(rolePairs);
  const [decimalsRaw, balanceRaw, owner, symbol, allowanceRaw] = await Promise.all([
    decimalsPromise,
    balancePromise,
    ownerPromise,
    symbolPromise,
    allowancePromise,
  ]);

  const decimals = decimalsRaw == null ? null : Number(decimalsRaw);
  const effectiveDecimals = decimals == null ? 18 : decimals;
  const balanceFormatted = balanceRaw == null ? null : formatUnits(balanceRaw, effectiveDecimals);
  const allowanceFormatted = allowanceRaw == null ? null : formatUnits(allowanceRaw, effectiveDecimals);
  const ownerMatchesWallet = owner == null ? null : owner.toLowerCase() === token.walletAddress.toLowerCase();

  return {
    balance: {
      ...balanceRowBase,
      token_symbol: symbol || token.tokenKey,
      decimals,
      balance_raw: balanceRaw == null ? null : balanceRaw.toString(),
      balance_formatted: balanceFormatted,
      query_status: balanceRaw == null ? 'query_failed' : 'ok',
    },
    access: {
      ...accessRowBase,
      token_symbol: symbol || token.tokenKey,
      owner,
      owner_matches_wallet: ownerMatchesWallet,
      allowance_raw: allowanceRaw == null ? null : allowanceRaw.toString(),
      allowance_formatted: allowanceFormatted,
      query_status: 'ok',
      roles,
    },
  };
}

function balanceCsv(rows) {
  const header = [
    'category',
    'chain_id',
    'chain_name',
    'token_key',
    'token_symbol',
    'token_address',
    'wallet_address',
    'wallet_source',
    'rpc_source',
    'decimals',
    'balance_raw',
    'balance_formatted',
    'query_status',
  ];
  const lines = [header.join(',')];
  for (const row of rows) {
    lines.push([
      row.category,
      row.chain_id,
      row.chain_name,
      row.token_key,
      row.token_symbol,
      row.token_address,
      row.wallet_address,
      row.wallet_source,
      row.rpc_source,
      row.decimals,
      row.balance_raw,
      row.balance_formatted,
      row.query_status,
    ].map(quoteCsv).join(','));
  }
  return `${lines.join('\n')}\n`;
}

function accessCsv(rows) {
  const header = [
    'category',
    'chain_id',
    'chain_name',
    'token_key',
    'token_symbol',
    'token_address',
    'wallet_address',
    'wallet_source',
    'rpc_source',
    'owner',
    'owner_matches_wallet',
    'spender_label',
    'spender_address',
    'allowance_raw',
    'allowance_formatted',
    'default_admin',
    'minter',
    'burner',
    'pauser',
    'bridge',
    'governance',
    'jurisdiction_admin',
    'regulator',
    'supervisor',
    'emergency_admin',
    'supply_admin',
    'metadata_admin',
    'query_status',
  ];
  const lines = [header.join(',')];
  for (const row of rows) {
    const roles = roleColumns(row);
    lines.push([
      row.category,
      row.chain_id,
      row.chain_name,
      row.token_key,
      row.token_symbol,
      row.token_address,
      row.wallet_address,
      row.wallet_source,
      row.rpc_source,
      row.owner,
      row.owner_matches_wallet,
      row.spender_label,
      row.spender_address,
      row.allowance_raw,
      row.allowance_formatted,
      roles.default_admin,
      roles.minter,
      roles.burner,
      roles.pauser,
      roles.bridge,
      roles.governance,
      roles.jurisdiction_admin,
      roles.regulator,
      roles.supervisor,
      roles.emergency_admin,
      roles.supply_admin,
      roles.metadata_admin,
      row.query_status,
    ].map(quoteCsv).join(','));
  }
  return `${lines.join('\n')}\n`;
}

async function main() {
  await buildRoleHashes();

  const chain138Registry = JSON.parse(
    fs.readFileSync(path.join(projectRoot, 'config/smart-contracts-master.json'), 'utf8'),
  );
  const deploymentStatus = JSON.parse(
    fs.readFileSync(path.join(projectRoot, 'cross-chain-pmm-lps/config/deployment-status.json'), 'utf8'),
  );

  const chain138Contracts = (((chain138Registry || {}).chains || {})['138'] || {}).contracts || {};
  const cStarTokens = Object.entries(chain138Contracts)
    .filter(([name, address]) => name.startsWith('c') && !name.includes('_Pool_') && typeof address === 'string')
    .map(([tokenKey, tokenAddress]) => ({
      category: 'cstar',
      chainId: 138,
      chainName: 'Chain 138',
      tokenKey,
      tokenAddress,
      walletAddress: deployerAddress,
      walletSource: 'AUDIT_DEPLOYER_ADDRESS',
      rpcUrl: process.env.RPC_URL_138 || process.env.CHAIN138_RPC_URL || process.env.RPC_URL || null,
      rpcSource: process.env.RPC_URL_138 ? 'RPC_URL_138'
        : process.env.CHAIN138_RPC_URL ? 'CHAIN138_RPC_URL'
        : process.env.RPC_URL ? 'RPC_URL'
        : 'missing',
      spenderAddress: null,
      spenderLabel: null,
    }));

  const cWTokens = [];
  for (const [chainId, chainInfo] of Object.entries(deploymentStatus.chains || {})) {
    const cwTokens = chainInfo.cwTokens || {};
    const resolved = resolvePublicChain(chainId, chainInfo.name);
    for (const [tokenKey, tokenAddress] of Object.entries(cwTokens)) {
      cWTokens.push({
        category: 'cw',
        chainId: Number(chainId),
        chainName: chainInfo.name,
        tokenKey,
        tokenAddress,
        walletAddress: resolved.walletAddress,
        walletSource: resolved.walletSource,
        rpcUrl: resolved.rpcUrl,
        rpcSource: resolved.rpcSource,
        spenderAddress: resolved.spenderAddress,
        spenderLabel: resolved.spenderLabel,
      });
    }
  }

  const audited = await mapLimit([...cStarTokens, ...cWTokens], 6, auditToken);
  const balanceRows = audited.map((row) => row.balance);
  const accessRows = audited.map((row) => row.access);

  balanceRows.sort((a, b) => (a.chain_id - b.chain_id) || a.token_key.localeCompare(b.token_key));
  accessRows.sort((a, b) => (a.chain_id - b.chain_id) || a.token_key.localeCompare(b.token_key));

  const balancesJson = {
    generatedAt,
    deployerAddress,
    summary: {
      tokensChecked: balanceRows.length,
      cStarTokensChecked: balanceRows.filter((row) => row.category === 'cstar').length,
      cWTokensChecked: balanceRows.filter((row) => row.category === 'cw').length,
      nonZeroBalances: balanceRows.filter((row) => row.balance_raw != null && row.balance_raw !== '0').length,
      queryFailures: balanceRows.filter((row) => row.query_status !== 'ok').length,
    },
    balances: balanceRows,
  };

  const accessJson = {
    generatedAt,
    deployerAddress,
    summary: {
      tokensChecked: accessRows.length,
      tokensWithOwnerFunction: accessRows.filter((row) => row.owner != null).length,
      tokensWithBridgeAllowanceChecks: accessRows.filter((row) => row.spender_address != null).length,
      nonZeroAllowances: accessRows.filter((row) => row.allowance_raw != null && row.allowance_raw !== '0').length,
      deployerDefaultAdminCount: accessRows.filter((row) => row.roles.defaultAdmin === true).length,
      deployerMinterCount: accessRows.filter((row) => row.roles.minter === true).length,
      deployerBurnerCount: accessRows.filter((row) => row.roles.burner === true).length,
    },
    access: accessRows,
  };

  const balanceJsonPath = path.join(outputDir, `deployer-token-balances-${generatedAt}.json`);
  const balanceCsvPath = path.join(outputDir, `deployer-token-balances-${generatedAt}.csv`);
  const accessJsonPath = path.join(outputDir, `deployer-token-access-${generatedAt}.json`);
  const accessCsvPath = path.join(outputDir, `deployer-token-access-${generatedAt}.csv`);

  fs.writeFileSync(balanceJsonPath, JSON.stringify(balancesJson, null, 2));
  fs.writeFileSync(balanceCsvPath, balanceCsv(balanceRows));
  fs.writeFileSync(accessJsonPath, JSON.stringify(accessJson, null, 2));
  fs.writeFileSync(accessCsvPath, accessCsv(accessRows));

  console.log(JSON.stringify({
    generatedAt,
    deployerAddress,
    outputDir,
    files: {
      balancesJson: balanceJsonPath,
      balancesCsv: balanceCsvPath,
      accessJson: accessJsonPath,
      accessCsv: accessCsvPath,
    },
    summary: {
      cStarTokensChecked: balancesJson.summary.cStarTokensChecked,
      cWTokensChecked: balancesJson.summary.cWTokensChecked,
      nonZeroBalances: balancesJson.summary.nonZeroBalances,
      nonZeroAllowances: accessJson.summary.nonZeroAllowances,
      deployerDefaultAdminCount: accessJson.summary.deployerDefaultAdminCount,
      deployerMinterCount: accessJson.summary.deployerMinterCount,
      deployerBurnerCount: accessJson.summary.deployerBurnerCount,
    },
  }, null, 2));
}

main().catch((error) => {
  console.error(error instanceof Error ? error.message : String(error));
  process.exit(1);
});
NODE
172
scripts/verify/audit-mainnet-dodo-standard-pool-readiness.sh
Normal file
@@ -0,0 +1,172 @@
#!/usr/bin/env bash
set -euo pipefail

# Audit the current mainnet DODO pool creation path and standard-surface readiness.
#
# Usage:
#   bash scripts/verify/audit-mainnet-dodo-standard-pool-readiness.sh
#
# Requires:
#   ETHEREUM_MAINNET_RPC
#   DODO_PMM_INTEGRATION_MAINNET
#
# Optional env-backed pool refs:
#   POOL_CWUSDT_USDC_MAINNET
#   POOL_CWUSDC_USDC_MAINNET
#   POOL_CWUSDT_USDT_MAINNET
#   POOL_CWUSDC_USDT_MAINNET
#   POOL_CWEURC_USDC_MAINNET
#   POOL_CWGBPC_USDC_MAINNET
#   POOL_CWAUDC_USDC_MAINNET
#   POOL_CWCADC_USDC_MAINNET
#   POOL_CWJPYC_USDC_MAINNET
#   POOL_CWCHFC_USDC_MAINNET

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
STATUS_JSON="${REPO_ROOT}/cross-chain-pmm-lps/config/deployment-status.json"

source "${REPO_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[fail] missing required command: $1" >&2
    exit 1
  }
}

require_cmd cast
require_cmd bash
require_cmd timeout
require_cmd jq

RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"

if [[ -z "$RPC_URL" || -z "$INTEGRATION" ]]; then
  echo "[fail] ETHEREUM_MAINNET_RPC and DODO_PMM_INTEGRATION_MAINNET are required" >&2
  exit 1
fi

if [[ ! -f "$STATUS_JSON" ]]; then
  echo "[fail] missing $STATUS_JSON" >&2
  exit 1
fi

code_size() {
  local target="$1"
  local code
  code="$(timeout 20s cast code "$target" --rpc-url "$RPC_URL" 2>/dev/null || true)"
  if [[ -z "$code" || "$code" == "0x" ]]; then
    printf '0'
    return
  fi
  printf '%d' $(( (${#code} - 2) / 2 ))
}

cast_call_quick() {
  local target="$1"
  local signature="$2"
  shift 2
  timeout 20s cast call "$target" "$signature" "$@" --rpc-url "$RPC_URL" 2>/dev/null || true
}

probe_quote() {
  local pool="$1"
  local selector="$2"
  if timeout 20s cast call "$pool" "$selector" "$INTEGRATION" 1 --rpc-url "$RPC_URL" >/dev/null 2>&1; then
    printf 'ok'
  else
    printf 'revert'
  fi
}

read_mapping() {
  local base_token="$1"
  local quote_token="$2"
  cast_call_quick "$INTEGRATION" 'pools(address,address)(address)' "$base_token" "$quote_token" | awk '{print $1}'
}

read_standard_surface() {
  local pool="$1"
  cast_call_quick "$INTEGRATION" 'hasStandardPoolSurface(address)(bool)' "$pool" | awk '{print $1}'
}

echo "=== Mainnet DODO Standard Pool Readiness Audit ==="
echo "rpc=$RPC_URL"
echo "integration=$INTEGRATION"
echo "integrationCodeSize=$(code_size "$INTEGRATION")"

factory="$(cast_call_quick "$INTEGRATION" 'dodoVendingMachine()(address)' | awk '{print $1}')"
approve="$(cast_call_quick "$INTEGRATION" 'dodoApprove()(address)' | awk '{print $1}')"
echo "dodoVendingMachine=${factory:-unknown}"
echo "dodoVendingMachineCodeSize=$(code_size "${factory:-0x0000000000000000000000000000000000000000}")"
echo "dodoApprove=${approve:-unknown}"

pairs=(
  "cwusdt-usdc|cWUSDT|USDC|POOL_CWUSDT_USDC_MAINNET|${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
  "cwusdc-usdc|cWUSDC|USDC|POOL_CWUSDC_USDC_MAINNET|${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
  "cwusdt-usdt|cWUSDT|USDT|POOL_CWUSDT_USDT_MAINNET|${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}|0xdAC17F958D2ee523a2206206994597C13D831ec7"
  "cwusdc-usdt|cWUSDC|USDT|POOL_CWUSDC_USDT_MAINNET|${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}|0xdAC17F958D2ee523a2206206994597C13D831ec7"
  "cweurc-usdc|cWEURC|USDC|POOL_CWEURC_USDC_MAINNET|${CWEURC_MAINNET:-0xD4aEAa8cD3fB41Dc8437FaC7639B6d91B60A5e8d}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
  "cwgbpc-usdc|cWGBPC|USDC|POOL_CWGBPC_USDC_MAINNET|${CWGBPC_MAINNET:-0xc074007dc0bfb384b1cf6426a56287ed23fe4d52}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
  "cwaudc-usdc|cWAUDC|USDC|POOL_CWAUDC_USDC_MAINNET|${CWAUDC_MAINNET:-0x5020Db641B3Fc0dAbBc0c688C845bc4E3699f35F}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
  "cwcadc-usdc|cWCADC|USDC|POOL_CWCADC_USDC_MAINNET|${CWCADC_MAINNET:-0x209FE32fe7B541751D190ae4e50cd005DcF8EDb4}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
  "cwjpyc-usdc|cWJPYC|USDC|POOL_CWJPYC_USDC_MAINNET|${CWJPYC_MAINNET:-0x07EEd0D7dD40984e47B9D3a3bdded1c536435582}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
  "cwchfc-usdc|cWCHFC|USDC|POOL_CWCHFC_USDC_MAINNET|${CWCHFC_MAINNET:-0x0F91C5E6Ddd46403746aAC970D05d70FFe404780}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
)

for row in "${pairs[@]}"; do
  IFS='|' read -r pair base_symbol quote_symbol pool_env_key base_token quote_token <<<"$row"
  echo
  echo "[$pair]"

  configured_pool="${!pool_env_key:-}"
  status_pool="$(jq -r --arg b "$base_symbol" --arg q "$quote_symbol" '.chains["1"].pmmPools[]? | select(.base == $b and .quote == $q) | .poolAddress' "$STATUS_JSON" | head -n1)"
  mapped_pool="$(read_mapping "$base_token" "$quote_token")"
  # Precedence: deployment-status JSON, then env override, then the integration's on-chain mapping.
  effective_pool="${status_pool:-${configured_pool:-${mapped_pool:-}}}"
  if [[ -z "${effective_pool}" || "${effective_pool}" == "0x0000000000000000000000000000000000000000" ]]; then
    echo "configuredPool=${configured_pool:-<missing>}"
    echo "statusPool=${status_pool:-<missing>}"
    echo "mappedPool=${mapped_pool:-<missing>}"
    echo "effectivePool=<missing>"
    echo "note=no configured or integration-mapped pool address found"
    continue
  fi

  base_token_onchain="$(cast_call_quick "$effective_pool" '_BASE_TOKEN_()(address)' | awk '{print $1}')"
  quote_token_onchain="$(cast_call_quick "$effective_pool" '_QUOTE_TOKEN_()(address)' | awk '{print $1}')"
  reserves="$(cast_call_quick "$effective_pool" 'getVaultReserve()(uint256,uint256)')"
  base_reserve="$(printf '%s\n' "$reserves" | sed -n '1p' | awk '{print $1}')"
  quote_reserve="$(printf '%s\n' "$reserves" | sed -n '2p' | awk '{print $1}')"
  standard_surface="$(read_standard_surface "$effective_pool")"
  query_base="$(probe_quote "$effective_pool" 'querySellBase(address,uint256)(uint256,uint256)')"
  query_quote="$(probe_quote "$effective_pool" 'querySellQuote(address,uint256)(uint256,uint256)')"

  surface_class="unknown"
  if [[ "${standard_surface:-}" == "true" && "${query_base}" == "ok" && "${query_quote}" == "ok" ]]; then
    surface_class="standard"
  elif [[ "${query_base}" == "revert" || "${query_quote}" == "revert" ]]; then
    surface_class="partial_or_nonstandard"
  fi

  echo "configuredPool=${configured_pool:-<missing>}"
  echo "statusPool=${status_pool:-<missing>}"
  echo "pool=${effective_pool}"
  echo "mappedPool=${mapped_pool:-unknown}"
  echo "mappingMatchesConfigured=$([[ -n "${configured_pool:-}" && "${mapped_pool:-}" == "${configured_pool}" ]] && printf 'yes' || printf 'no')"
  echo "mappingMatchesEffective=$([[ "${mapped_pool:-}" == "${effective_pool}" ]] && printf 'yes' || printf 'no')"
  echo "poolCodeSize=$(code_size "$effective_pool")"
  echo "baseToken=$base_token_onchain"
  echo "quoteToken=$quote_token_onchain"
  echo "reserves=${base_reserve:-unknown}/${quote_reserve:-unknown}"
  echo "querySellBase=$query_base"
  echo "querySellQuote=$query_quote"
  echo "integrationHasStandardSurface=${standard_surface:-false}"
  echo "surfaceClass=${surface_class}"
done

echo
echo "Rollout notes:"
echo "- Replace pools whose query surface still reverts after liquidity with canonical factory-backed deployments."
echo "- After each replacement pool is seeded, call refreshPoolSurface(pool) on the integration and update env/config references."
267
scripts/verify/audit-npmplus-ssl-all-instances.sh
Executable file
@@ -0,0 +1,267 @@
#!/usr/bin/env bash
# Audit all documented NPMplus instances: proxy hosts without certificate_id or with expired certs.
# Requires LAN + NPM API auth. Supports both JSON bearer-token auth and the
# newer cookie-session auth returned by some NPMplus UIs.
# Default: NPM_EMAIL / NPM_PASSWORD for every instance.
# Optional per-instance overrides (same email, different password): NPM_PASSWORD_SECONDARY,
# NPM_PASSWORD_ALLTRA_HYBX, NPM_PASSWORD_FOURTH, NPM_PASSWORD_MIFOS (fallback: NPM_PASSWORD).
# ssl_not_forced = Let's Encrypt cert exists but "Force SSL" is off (common for JSON-RPC; browsers may still use HTTPS).
# cert_domain_mismatch = host is bound to a certificate whose SAN/domain list does not cover that hostname.
# Tunnel-backed hostnames (proxied CNAME → *.cfargotunnel.com) are excluded from origin-cert findings,
# because Cloudflare terminates public TLS and the tunnel origin uses noTLSVerify by design.
# Usage: bash scripts/verify/audit-npmplus-ssl-all-instances.sh [--json]
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true

JSON_OUT=0
for _a in "$@"; do [[ "$_a" == "--json" ]] && JSON_OUT=1; done

_orig_npm_email="${NPM_EMAIL:-}"
_orig_npm_password="${NPM_PASSWORD:-}"
if [ -f "$PROJECT_ROOT/.env" ]; then
  set +u
  # shellcheck source=/dev/null
  source "$PROJECT_ROOT/.env"
  set -u
fi
[ -n "$_orig_npm_email" ] && NPM_EMAIL="$_orig_npm_email"
[ -n "$_orig_npm_password" ] && NPM_PASSWORD="$_orig_npm_password"

NPM_EMAIL="${NPM_EMAIL:-}"
NPM_PASSWORD="${NPM_PASSWORD:-}"
if [ -z "$NPM_PASSWORD" ]; then
  echo "NPM_PASSWORD required (repo .env or export)." >&2
  exit 1
fi

CF_ZONE_ID="${CLOUDFLARE_ZONE_ID_D_BIS_ORG:-${CLOUDFLARE_ZONE_ID:-}}"
cf_api_get() {
  local path="$1"
  if [ -n "${CLOUDFLARE_API_TOKEN:-}" ]; then
    curl -s "https://api.cloudflare.com/client/v4${path}" \
      -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}"
  elif [ -n "${CLOUDFLARE_API_KEY:-}" ] && [ -n "${CLOUDFLARE_EMAIL:-}" ]; then
    curl -s "https://api.cloudflare.com/client/v4${path}" \
      -H "X-Auth-Email: ${CLOUDFLARE_EMAIL}" \
      -H "X-Auth-Key: ${CLOUDFLARE_API_KEY}"
  else
    return 1
  fi
}

is_tunnel_backed_host() {
  local host="$1"
  [ -n "$CF_ZONE_ID" ] || return 1
  local resp
  resp=$(cf_api_get "/zones/${CF_ZONE_ID}/dns_records?name=${host}" 2>/dev/null || true)
  echo "$resp" | jq -e '
    any(.result[]?;
      .type == "CNAME"
      and (.proxied == true)
      and ((.content // "") | endswith(".cfargotunnel.com"))
    )
  ' >/dev/null 2>&1
}

password_for_label() {
  case "$1" in
    primary) echo "${NPM_PASSWORD_PRIMARY:-$NPM_PASSWORD}" ;;
    secondary) echo "${NPM_PASSWORD_SECONDARY:-$NPM_PASSWORD}" ;;
    alltra-hybx) echo "${NPM_PASSWORD_ALLTRA_HYBX:-$NPM_PASSWORD}" ;;
    fourth-dev) echo "${NPM_PASSWORD_FOURTH:-$NPM_PASSWORD}" ;;
    mifos) echo "${NPM_PASSWORD_MIFOS:-$NPM_PASSWORD}" ;;
    *) echo "$NPM_PASSWORD" ;;
  esac
}

NPM_CURL_MAX_TIME="${NPM_CURL_MAX_TIME:-120}"
curl_npm() { curl -s -k -L --connect-timeout 8 --max-time "$NPM_CURL_MAX_TIME" "$@"; }

extract_json_token() {
|
||||
local body="${1:-}"
|
||||
echo "$body" | jq -r '.token // empty' 2>/dev/null || true
|
||||
}
|
||||
|
||||
cookie_has_session() {
|
||||
local jar="$1"
|
||||
[ -f "$jar" ] && grep -q $'\ttoken\t' "$jar"
|
||||
}
|
||||
|
||||
auth_api_get() {
|
||||
local npm_url="$1"
|
||||
local token="$2"
|
||||
local cookie_jar="$3"
|
||||
local path="$4"
|
||||
if [ -n "$token" ]; then
|
||||
curl_npm -X GET "$npm_url$path" -H "Authorization: Bearer $token"
|
||||
else
|
||||
curl_npm -b "$cookie_jar" -c "$cookie_jar" -X GET "$npm_url$path"
|
||||
fi
|
||||
}
|
||||
|
||||
try_base_url() {
|
||||
local base="$1"
|
||||
curl_npm -o /dev/null -w "%{http_code}" "$base/" 2>/dev/null | grep -qE '^(200|301|302|401)$'
|
||||
}
|
||||
|
||||
pick_working_url() {
|
||||
local ip="$1"
|
||||
local https="https://${ip}:81"
|
||||
local http="http://${ip}:81"
|
||||
if try_base_url "$https"; then echo "$https"; return 0; fi
|
||||
if try_base_url "$http"; then echo "$http"; return 0; fi
|
||||
return 1
|
||||
}
|
||||
|
||||
# label|ip — URLs resolved at runtime (http vs https)
|
||||
INSTANCES=(
|
||||
"primary|${IP_NPMPLUS:-192.168.11.167}"
|
||||
"secondary|${IP_NPMPLUS_SECONDARY:-192.168.11.168}"
|
||||
"alltra-hybx|${IP_NPMPLUS_ALLTRA_HYBX:-192.168.11.169}"
|
||||
"fourth-dev|${IP_NPMPLUS_FOURTH:-192.168.11.170}"
|
||||
"mifos|${IP_NPMPLUS_MIFOS:-192.168.11.171}"
|
||||
)
|
||||
|
||||
audit_one() {
|
||||
local label="$1"
|
||||
local npm_url="$2"
|
||||
local pass="$3"
|
||||
local auth_json token hosts_json certs_json auth_resp cookie_jar auth_mode tunnel_hosts
|
||||
cookie_jar="$(mktemp)"
|
||||
auth_json=$(jq -n --arg identity "$NPM_EMAIL" --arg secret "$pass" '{identity:$identity,secret:$secret}')
|
||||
auth_resp=$(curl_npm -c "$cookie_jar" -X POST "$npm_url/api/tokens" -H "Content-Type: application/json" -d "$auth_json")
|
||||
token=$(extract_json_token "$auth_resp")
|
||||
if [ -n "$token" ] && [ "$token" != "null" ]; then
|
||||
auth_mode="bearer"
|
||||
elif cookie_has_session "$cookie_jar"; then
|
||||
auth_mode="cookie"
|
||||
else
|
||||
jq -n --arg l "$label" --arg u "$npm_url" \
|
||||
'{instance:$l, npm_url:$u, status:"auth_failed", issues:[{type:"auth",detail:"no token"}]}'
|
||||
rm -f "$cookie_jar"
|
||||
return 0
|
||||
fi
|
||||
hosts_json=$(auth_api_get "$npm_url" "$token" "$cookie_jar" "/api/nginx/proxy-hosts")
|
||||
certs_json=$(auth_api_get "$npm_url" "$token" "$cookie_jar" "/api/nginx/certificates")
|
||||
|
||||
if ! echo "$hosts_json" | jq -e 'type == "array"' >/dev/null 2>&1 || \
|
||||
! echo "$certs_json" | jq -e 'type == "array"' >/dev/null 2>&1; then
|
||||
jq -n --arg l "$label" --arg u "$npm_url" --arg mode "$auth_mode" \
|
||||
'{instance:$l, npm_url:$u, status:"auth_failed", auth_mode:$mode, issues:[{type:"auth",detail:"authenticated but API payload was not usable"}]}'
|
||||
rm -f "$cookie_jar"
|
||||
return 0
|
||||
fi
|
||||
|
||||
tunnel_hosts='[]'
|
||||
while IFS= read -r host; do
|
||||
[ -n "$host" ] || continue
|
||||
if is_tunnel_backed_host "$host"; then
|
||||
tunnel_hosts=$(jq -n --argjson acc "$tunnel_hosts" --arg h "$host" '$acc + [$h]')
|
||||
fi
|
||||
done < <(echo "$hosts_json" | jq -r '.[] | select((.enabled // true) != false) | ((.domain_names // [])[0] // empty)')
|
||||
|
||||
echo "$hosts_json" "$certs_json" | jq -s --arg l "$label" --arg u "$npm_url" --arg mode "$auth_mode" --argjson tunnel_hosts "$tunnel_hosts" '
|
||||
.[0] as $hosts | .[1] as $certs |
|
||||
($certs | if type == "array" then . else [] end) as $cl |
|
||||
($cl | map({(.id | tostring): .}) | add // {}) as $cmap |
|
||||
(now | floor) as $now |
|
||||
def host_covered_by_cert($host; $names):
|
||||
any(($names // [])[]; . == $host or (startswith("*.") and ($host | endswith("." + .[2:])) and ($host != .[2:])));
|
||||
def host_uses_tunnel($host):
|
||||
any(($tunnel_hosts // [])[]; . == $host);
|
||||
def parse_exp($c):
|
||||
($c.expires_on // $c.meta.letsencrypt_expiry // $c.meta.expires_on // "") | tostring;
|
||||
def epoch_if($s):
|
||||
if $s == "" or $s == "null" then null
|
||||
else
|
||||
($s | gsub(" "; "T") | . + "Z" | fromdateiso8601? // null)
|
||||
end;
|
||||
[ $hosts[] | select((.enabled // true) != false) ] as $hen |
|
||||
[ $hen[] | . as $h |
|
||||
($h.domain_names // []) | join(",") as $dn |
|
||||
(($h.domain_names // [])[0] // "") as $host0 |
|
||||
($h.certificate_id // null) as $cid |
|
||||
if host_uses_tunnel($host0) then
|
||||
empty
|
||||
elif $cid == null or $cid == 0 then
|
||||
{domains:$dn, type:"no_certificate", ssl_forced:($h.ssl_forced // false), forward:(($h.forward_scheme//"") + "://" + ($h.forward_host//"") + ":" + (($h.forward_port|tostring)//""))}
|
||||
else
|
||||
($cmap[($cid|tostring)] // null) as $c |
|
||||
if $c == null then
|
||||
{domains:$dn, type:"missing_cert_record", certificate_id:$cid, ssl_forced:($h.ssl_forced // false)}
|
||||
else
|
||||
parse_exp($c) as $es |
|
||||
epoch_if($es) as $ep |
|
||||
if $ep != null and $ep < $now then
|
||||
{domains:$dn, type:"expired", certificate_id:$cid, expires_on:$es, ssl_forced:($h.ssl_forced // false)}
|
||||
elif (host_covered_by_cert(($h.domain_names // [])[0]; ($c.domain_names // [])) | not) then
|
||||
{domains:$dn, type:"cert_domain_mismatch", certificate_id:$cid, expires_on:$es, cert_domains:($c.domain_names // []), ssl_forced:($h.ssl_forced // false)}
|
||||
elif ($h.ssl_forced // false) != true then
|
||||
{domains:$dn, type:"ssl_not_forced", certificate_id:$cid, expires_on:$es, ssl_forced:false}
|
||||
else empty end
|
||||
end
|
||||
end
|
||||
] as $issues |
|
||||
{
|
||||
instance: $l,
|
||||
npm_url: $u,
|
||||
auth_mode: $mode,
|
||||
status: "ok",
|
||||
proxy_hosts_enabled: ($hen | length),
|
||||
certificates: ($cl | length),
|
||||
issues: $issues
|
||||
}
|
||||
'
|
||||
rm -f "$cookie_jar"
|
||||
}
|
||||
|
||||
results='[]'
|
||||
for row in "${INSTANCES[@]}"; do
|
||||
label="${row%%|*}"
|
||||
ip="${row#*|}"
|
||||
if ! base=$(pick_working_url "$ip"); then
|
||||
piece=$(jq -n --arg l "$label" --arg ip "$ip" \
|
||||
'{instance:$l, ip:$ip, status:"unreachable", issues:[{type:"unreachable",detail:"no HTTP/HTTPS on :81"}]}')
|
||||
results=$(jq -n --argjson acc "$results" --argjson p "$(echo "$piece" | jq -c .)" '$acc + [$p]')
|
||||
continue
|
||||
fi
|
||||
pw=$(password_for_label "$label")
|
||||
piece=$(audit_one "$label" "$base" "$pw")
|
||||
results=$(jq -n --argjson acc "$results" --argjson p "$(echo "$piece" | jq -c .)" '$acc + [$p]')
|
||||
done
|
||||
|
||||
if [ "$JSON_OUT" = 1 ]; then
|
||||
echo "$results" | jq .
|
||||
exit 0
|
||||
fi
|
||||
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo "NPMplus SSL audit (all instances)"
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo ""
|
||||
|
||||
echo "$results" | jq -r '.[] |
|
||||
"── \(.instance) (\(.npm_url // .ip // "?")) ──",
|
||||
(if .status == "unreachable" then " UNREACHABLE: \(.issues[0].detail // .status)"
|
||||
elif .status == "auth_failed" then " AUTH FAILED (wrong NPM_EMAIL/NPM_PASSWORD for this UI?)"
|
||||
else
|
||||
" Proxy hosts (enabled): \(.proxy_hosts_enabled // 0) | Certificate objects: \(.certificates // 0)",
|
||||
(if (.issues | length) == 0 then " No issues: all enabled hosts have active cert + ssl_forced."
|
||||
else
|
||||
(.issues[] | " • [\(.type)] \(.domains)\(if .certificate_id then " (cert_id=\(.certificate_id))" else "" end)\(if .expires_on then " expires=\(.expires_on)" else "" end)\(if .cert_domains then " cert_domains=\(.cert_domains|join(","))" else "" end)\(if .forward then " → \(.forward)" else "" end)")
|
||||
end)
|
||||
end),
|
||||
""'
|
||||
|
||||
issue_total=$(echo "$results" | jq '[.[].issues | length] | add')
|
||||
if [ "${issue_total:-0}" -eq 0 ]; then
|
||||
echo "Summary: no SSL gaps reported (or all instances unreachable/auth failed — see above)."
|
||||
else
|
||||
echo "Summary: $issue_total issue row(s) across instances (no_cert, expired, cert_domain_mismatch, ssl_not_forced, missing_cert_record)."
|
||||
fi
|
||||
echo ""
|
||||
echo "Fix: NPM UI → Proxy Hosts → SSL, or scripts/nginx-proxy-manager/fix-npmplus-ssl-issues.sh per NPM_URL."
|
||||
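The audit's `host_covered_by_cert` jq helper decides whether a certificate name covers a proxy host: exact match, or a `*.<parent>` name with the host strictly below `<parent>`. A minimal bash sketch of the same rule, with illustrative hostnames (not taken from any instance):

```shell
# covered HOST CERT_NAME -> exit 0 if the cert name covers the host
# (exact match, or "*.<parent>" with the host a subdomain of <parent>).
covered() {
  local host="$1" name="$2" parent
  if [ "$name" = "$host" ]; then return 0; fi
  case "$name" in
    \*.*)
      parent="${name#\*.}"
      # the apex itself is NOT covered by its own wildcard
      [ "$host" != "$parent" ] && case "$host" in *."$parent") return 0 ;; esac
      ;;
  esac
  return 1
}

covered "api.example.org" "*.example.org" && echo yes   # wildcard covers subdomain
covered "example.org" "*.example.org" || echo no        # apex not covered
```

Note this mirrors the jq predicate exactly, so like it, a wildcard here also matches deeper labels (`a.b.example.org`), which is looser than TLS wildcard matching.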
@@ -7,7 +7,8 @@
#   SSH_USER=root SSH_OPTS="-o BatchMode=yes" bash scripts/verify/audit-proxmox-operational-template.sh
#
# Env:
#   PROXMOX_HOSTS   Space-separated IPs (default: sources config/ip-addresses.conf — ML110, R630-01, R630-02)
#   PROXMOX_HOSTS   Space-separated IPs (default: sources config/ip-addresses.conf — ML110, R630-01..04).
#                   Canonical host FQDNs: PROXMOX_FQDN_* in ip-addresses.conf (*.sankofa.nexus); override with the same list if you SSH by name.
#   SSH_USER        default root
#   SSH_OPTS        extra ssh options (e.g. -i /path/key)
#
@@ -35,7 +36,7 @@ fi

# shellcheck source=/dev/null
source "$PROJECT_ROOT/config/ip-addresses.conf" 2>/dev/null || true
PROXMOX_HOSTS="${PROXMOX_HOSTS:-${PROXMOX_HOST_ML110:-192.168.11.10} ${PROXMOX_HOST_R630_01:-192.168.11.11} ${PROXMOX_HOST_R630_02:-192.168.11.12}}"
PROXMOX_HOSTS="${PROXMOX_HOSTS:-${PROXMOX_HOST_ML110:-192.168.11.10} ${PROXMOX_HOST_R630_01:-192.168.11.11} ${PROXMOX_HOST_R630_02:-192.168.11.12} ${PROXMOX_HOST_R630_03:-192.168.11.13} ${PROXMOX_HOST_R630_04:-192.168.11.14}}"

EXPECTED_VMIDS=$(jq -r '.services[] | select(.vmid != null) | .vmid' "$TEMPLATE_JSON" | sort -n | uniq)

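The layered defaults above resolve in two stages: the outer `${PROXMOX_HOSTS:-…}` only fires when the whole list is unset or empty, and each inner `${PROXMOX_HOST_*:-fallback}` substitutes per host from whatever the conf file exported. A small sketch with illustrative values (only `PROXMOX_HOST_R630_01` assumed set):

```shell
# Only one per-host variable is set; the rest fall back to literals.
unset PROXMOX_HOSTS PROXMOX_HOST_ML110
PROXMOX_HOST_R630_01="192.168.11.99"
HOSTS="${PROXMOX_HOSTS:-${PROXMOX_HOST_ML110:-192.168.11.10} ${PROXMOX_HOST_R630_01:-192.168.11.11}}"
echo "$HOSTS"   # → 192.168.11.10 192.168.11.99
```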
135
scripts/verify/build-gru-v2-first-tier-pool-scaffolds.sh
Executable file
@@ -0,0 +1,135 @@
#!/usr/bin/env bash
# Build or print the canonical scaffold inventory for missing first-tier GRU v2
# public PMM pools. This does not claim pools are live; it records the next
# deployment-ready rows that should exist once a chain is promoted from
# bootstrap-ready to live-first-tier.
#
# Usage:
#   bash scripts/verify/build-gru-v2-first-tier-pool-scaffolds.sh
#   bash scripts/verify/build-gru-v2-first-tier-pool-scaffolds.sh --write
#   bash scripts/verify/build-gru-v2-first-tier-pool-scaffolds.sh --json

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
export PROJECT_ROOT

WRITE_OUTPUT=0
OUTPUT_JSON=0

for arg in "$@"; do
  case "$arg" in
    --write) WRITE_OUTPUT=1 ;;
    --json) OUTPUT_JSON=1 ;;
    *)
      echo "Unknown argument: $arg" >&2
      exit 2
      ;;
  esac
done

command -v node >/dev/null 2>&1 || {
  echo "[FAIL] Missing required command: node" >&2
  exit 1
}

WRITE_OUTPUT="$WRITE_OUTPUT" OUTPUT_JSON="$OUTPUT_JSON" node <<'NODE'
const fs = require('fs');
const path = require('path');

const root = process.env.PROJECT_ROOT;
const writeOutput = process.env.WRITE_OUTPUT === '1';
const outputJson = process.env.OUTPUT_JSON === '1';
const outputPath = path.join(root, 'config/gru-v2-first-tier-pool-scaffolds.json');

function readJson(relPath) {
  return JSON.parse(fs.readFileSync(path.join(root, relPath), 'utf8'));
}

const plan = readJson('config/gru-v2-d3mm-network-expansion-plan.json');
const deployment = readJson('cross-chain-pmm-lps/config/deployment-status.json');
const poolMatrix = readJson('cross-chain-pmm-lps/config/pool-matrix.json');

function normalizePair(pair) {
  return String(pair || '').trim().toUpperCase();
}

function hasPair(pmmPools, pair) {
  const [base, quote] = normalizePair(pair).split('/');
  return pmmPools.some((entry) =>
    String(entry.base || '').trim().toUpperCase() === base &&
    String(entry.quote || '').trim().toUpperCase() === quote
  );
}

const chains = [];
for (const wave of plan.waves || []) {
  for (const chainPlan of wave.chains || []) {
    const chainId = String(chainPlan.chainId);
    const chain = deployment.chains?.[chainId] || {};
    const matrix = poolMatrix.chains?.[chainId] || {};
    const pmmPools = Array.isArray(chain.pmmPools) ? chain.pmmPools : [];
    const requiredPairs = chainPlan.requiredPairs || [];
    const missingPairs = requiredPairs.filter((pair) => !hasPair(pmmPools, pair));
    if (missingPairs.length === 0) continue;

    chains.push({
      chainId: chainPlan.chainId,
      name: chainPlan.name,
      waveKey: wave.key,
      priority: wave.priority,
      rolloutMode: chainPlan.rolloutMode,
      hubStable: matrix.hubStable || null,
      bridgeAvailable: chain.bridgeAvailable === true,
      scaffoldPools: missingPairs.map((pair) => {
        const [base, quote] = pair.split('/');
        return {
          pair,
          base,
          quote,
          poolAddress: '0x0000000000000000000000000000000000000000',
          feeBps: 3,
          k: '500000000000000000',
          role: 'public_routing',
          publicRoutingEnabled: false,
          notes: 'scaffold only; replace poolAddress and enable routing after live deploy + seed'
        };
      })
    });
  }
}

chains.sort((a, b) => a.priority - b.priority || a.chainId - b.chainId);

const doc = {
  version: 1,
  asOfDate: '2026-04-07',
  purpose: 'Canonical scaffold inventory for missing first-tier GRU v2 / D3MM public PMM pools.',
  generatedFrom: [
    'config/gru-v2-d3mm-network-expansion-plan.json',
    'cross-chain-pmm-lps/config/deployment-status.json',
    'cross-chain-pmm-lps/config/pool-matrix.json'
  ],
  summary: {
    chainsWithScaffolds: chains.length,
    scaffoldPoolCount: chains.reduce((sum, chain) => sum + chain.scaffoldPools.length, 0)
  },
  chains
};

const serialized = JSON.stringify(doc, null, 2) + '\n';

if (writeOutput) {
  fs.writeFileSync(outputPath, serialized);
}

if (outputJson || !writeOutput) {
  process.stdout.write(serialized);
} else {
  console.log(`[OK] Wrote ${path.relative(root, outputPath)}`);
  console.log(
    `[OK] chainsWithScaffolds=${doc.summary.chainsWithScaffolds} scaffoldPoolCount=${doc.summary.scaffoldPoolCount}`
  );
}
NODE

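The scaffold rows are ordered by the Node comparator `a.priority - b.priority || a.chainId - b.chainId`, i.e. wave priority first, then chainId as a tie-break. The same two-key ordering expressed with sort(1) over illustrative "priority chainId" rows:

```shell
# Numeric sort on field 1 (priority), then field 2 (chainId) as tie-break.
printf '%s\n' '2 10' '1 137' '1 56' '2 1' | sort -k1,1n -k2,2n
# → 1 56
#   1 137
#   2 1
#   2 10
```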
101
scripts/verify/build-uniswap-v3-exact-input-path-hex.sh
Executable file
@@ -0,0 +1,101 @@
#!/usr/bin/env bash
set -euo pipefail

# Emit UNWIND_V3_PATH_HEX (0x-prefixed packed bytes) for Uniswap V3 exactInput, used with
# UNWIND_MODE=2 in RunMainnetAaveCwusdcUsdcQuotePushOnce.s.sol / UniswapV3ExternalUnwinder.
#
# Path layout: token0 (20) | fee0 (3 BE) | token1 (20) | fee1 (3 BE) | ... | tokenN (20)
#
# Usage (2-hop = 3 tokens, 2 fees):
#   bash scripts/verify/build-uniswap-v3-exact-input-path-hex.sh \
#     0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a 3000 \
#     0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2 500 \
#     0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48
#
# Optional: verify each hop has a V3 pool on mainnet (read-only):
#   source scripts/lib/load-project-env.sh
#   VERIFY_POOLS=1 bash scripts/verify/build-uniswap-v3-exact-input-path-hex.sh ...
#
# Requires: python3

if (( $# < 3 || ($# % 2) == 0 )); then
  echo "[fail] need odd arg count >= 3: ADDR0 FEE0 ADDR1 [FEE1 ADDR2 ...] ADDRN" >&2
  exit 1
fi

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
# shellcheck disable=SC1091
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[fail] missing required command: $1" >&2
    exit 1
  }
}

require_cmd python3
FACTORY=0x1F98431c8aD98523631AE4a59f267346ea31F984

path_hex="$(
python3 - "$@" <<'PY'
import sys

def addr(s: str) -> bytes:
    s = s.strip().lower()
    if not s.startswith("0x") or len(s) != 42:
        raise SystemExit(f"bad address: {s}")
    return bytes.fromhex(s[2:])

def fee_u24(x: str) -> bytes:
    v = int(x)
    if v < 0 or v >= 1 << 24:
        raise SystemExit(f"bad fee u24: {x}")
    return v.to_bytes(3, "big")

args = sys.argv[1:]
if len(args) < 3 or len(args) % 2 == 0:
    raise SystemExit("internal arg pattern error")
out = b""
# pairs: (addr, fee), (addr, fee), ... last addr
while len(args) >= 3:
    out += addr(args[0])
    out += fee_u24(args[1])
    args = args[2:]
out += addr(args[0])
print("0x" + out.hex())
PY
)"

if [[ "${VERIFY_POOLS:-0}" == "1" ]]; then
  require_cmd cast
  RPC="${ETHEREUM_MAINNET_RPC:-}"
  if [[ -z "$RPC" ]]; then
    echo "[fail] VERIFY_POOLS=1 needs ETHEREUM_MAINNET_RPC" >&2
    exit 1
  fi
  args=("$@")
  while ((${#args[@]} >= 3)); do
    a="${args[0]}"
    fee="${args[1]}"
    b="${args[2]}"
    if [[ "${a,,}" < "${b,,}" ]]; then
      t0="$a"
      t1="$b"
    else
      t0="$b"
      t1="$a"
    fi
    p="$(cast call "$FACTORY" "getPool(address,address,uint24)(address)" "$t0" "$t1" "$fee" --rpc-url "$RPC" | awk '{print $1}')"
    echo "probe getPool fee=$fee pool=$p ($t0,$t1)"
    if [[ "${p,,}" == "0x0000000000000000000000000000000000000000" ]]; then
      echo "[warn] no pool for this hop/fee — exactInput will revert" >&2
    fi
    args=("${args[@]:2}")
  done
fi

echo "UNWIND_V3_PATH_HEX=$path_hex"
echo "export UNWIND_MODE=2"
echo "export UNWIND_V3_PATH_HEX=$path_hex"

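The packed-path layout above gives a cheap sanity check on the emitted hex: a path with N hops packs N+1 addresses (20 bytes each) plus N big-endian fee tags (3 bytes each), so the 0x-prefixed string is 2 + 2*(23*N + 20) characters long. A small sketch (the helper name is illustrative, not part of the script):

```shell
# Expected character count of an exactInput path hex string for N hops.
path_hex_len() { local hops="$1"; echo $(( 2 + 2 * (23 * hops + 20) )); }
path_hex_len 1   # single hop: 88 chars (43 packed bytes)
path_hex_len 2   # two hops: 134 chars (66 packed bytes)
```

Comparing `${#path_hex}` against this value catches a dropped fee or address before anything reaches `cast`.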
389
scripts/verify/check-chain138-pilot-dex-venues.sh
Executable file
389
scripts/verify/check-chain138-pilot-dex-venues.sh
Executable file
@@ -0,0 +1,389 @@
|
||||
#!/usr/bin/env bash
|
||||
set -euo pipefail
|
||||
|
||||
# Verify the current Chain 138 execution venues that back router-v2:
|
||||
# - Uniswap_v3 (canonical upstream-native router/quoter, with pilot fallback)
|
||||
# - Balancer (pilot vault)
|
||||
# - Curve_3 (stable-stable pool)
|
||||
# - 1inch (pilot router)
|
||||
#
|
||||
# This script proves three things:
|
||||
# 1. Bytecode exists on-chain for each venue contract.
|
||||
# 2. The venues are actually funded / seeded with usable reserves.
|
||||
# 3. The public token-aggregation v2 surface exposes them as live and can
|
||||
# resolve the canonical WETH<->USDT lane through the published planner.
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
|
||||
|
||||
if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
|
||||
# shellcheck source=/dev/null
|
||||
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
|
||||
fi
|
||||
|
||||
require_cmd() {
|
||||
command -v "$1" >/dev/null 2>&1 || {
|
||||
echo "[fail] missing required command: $1" >&2
|
||||
exit 1
|
||||
}
|
||||
}
|
||||
|
||||
require_cmd cast
|
||||
require_cmd curl
|
||||
require_cmd jq
|
||||
require_cmd pnpm
|
||||
|
||||
BASE_URL="${1:-${CHAIN138_PILOT_DEX_BASE_URL:-https://explorer.d-bis.org}}"
|
||||
BASE_URL="${BASE_URL%/}"
|
||||
BLOCKSCOUT_IP="${IP_BLOCKSCOUT:-192.168.11.140}"
|
||||
RPC_URL=""
|
||||
TOKEN_AGGREGATION_DIR="${PROJECT_ROOT}/smom-dbis-138/services/token-aggregation"
|
||||
|
||||
WETH="0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2"
|
||||
USDT="0x004b63A7B5b0E06f6bB6adb4a5F9f590BF3182D1"
|
||||
USDC="0x71D6687F38b93CCad569Fa6352c876eea967201b"
|
||||
|
||||
UNISWAP_ROUTER="${UNISWAP_V3_ROUTER:-0xde9cD8ee2811E6E64a41D5F68Be315d33995975E}"
|
||||
UNISWAP_QUOTER="${UNISWAP_QUOTER_ADDRESS:-0x6abbB1CEb2468e748a03A00CD6aA9BFE893AFa1f}"
|
||||
UNISWAP_FACTORY="${CHAIN_138_UNISWAP_V3_FACTORY:-${CHAIN138_UNISWAP_V3_NATIVE_FACTORY:-0x2f7219276e3ce367dB9ec74C1196a8ecEe67841C}}"
|
||||
UNISWAP_WETH_USDT_POOL="${UNISWAP_V3_WETH_USDT_POOL:-${CHAIN138_UNISWAP_V3_NATIVE_WETH_USDT_POOL:-0xa893add35aEfe6A6d858EB01828bE4592f12C9F5}}"
|
||||
UNISWAP_WETH_USDC_POOL="${UNISWAP_V3_WETH_USDC_POOL:-${CHAIN138_UNISWAP_V3_NATIVE_WETH_USDC_POOL:-0xEC745bfb6b3cd32f102d594E5F432d8d85B19391}}"
|
||||
UNISWAP_FEE="${UNISWAP_V3_WETH_USDT_FEE:-500}"
|
||||
PILOT_UNISWAP_FEE="${UNISWAP_V3_PILOT_FEE:-3000}"
|
||||
BALANCER_VAULT="${BALANCER_VAULT:-0x96423d7C1727698D8a25EbFB88131e9422d1a3C3}"
|
||||
CURVE_3POOL="${CURVE_3POOL:-0xE440Ec15805BE4C7BabCD17A63B8C8A08a492e0f}"
|
||||
ONEINCH_ROUTER="${ONEINCH_ROUTER:-0x500B84b1Bc6F59C1898a5Fe538eA20A758757A4F}"
|
||||
|
||||
BALANCER_WETH_USDT_POOL_ID="${BALANCER_WETH_USDT_POOL_ID:-0x877cd220759e8c94b82f55450c85d382ae06856c426b56d93092a420facbc324}"
|
||||
BALANCER_WETH_USDC_POOL_ID="${BALANCER_WETH_USDC_POOL_ID:-0xd8dfb18a6baf9b29d8c2dbd74639db87ac558af120df5261dab8e2a5de69013b}"
|
||||
|
||||
fail() {
|
||||
echo "[fail] $*" >&2
|
||||
exit 1
|
||||
}
|
||||
|
||||
info() {
|
||||
echo "[info] $*"
|
||||
}
|
||||
|
||||
ok() {
|
||||
echo "[ok] $*"
|
||||
}
|
||||
|
||||
note() {
|
||||
echo "[note] $*"
|
||||
}
|
||||
|
||||
probe_json_rpc() {
|
||||
local rpc_url="$1"
|
||||
curl -fsS --connect-timeout 5 --max-time 10 \
|
||||
-H 'content-type: application/json' \
|
||||
--data '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}' \
|
||||
"${rpc_url}" | jq -e '.result == "0x8a"' >/dev/null 2>&1
|
||||
}
|
||||
|
||||
resolve_rpc_url() {
|
||||
local candidate
|
||||
if [[ -n "${CHAIN138_PILOT_DEX_RPC_URL:-}" ]]; then
|
||||
printf '%s' "${CHAIN138_PILOT_DEX_RPC_URL}"
|
||||
return 0
|
||||
fi
|
||||
|
||||
for candidate in \
|
||||
"https://rpc-http-pub.d-bis.org" \
|
||||
"http://192.168.11.221:8545" \
|
||||
"${CHAIN_138_RPC_URL:-}" \
|
||||
"${RPC_URL_138_PUBLIC:-}" \
|
||||
"${RPC_URL_138:-}" \
|
||||
"http://192.168.11.211:8545"
|
||||
do
|
||||
[[ -n "${candidate}" ]] || continue
|
||||
if probe_json_rpc "${candidate}"; then
|
||||
printf '%s' "${candidate}"
|
||||
return 0
|
||||
fi
|
||||
done
|
||||
|
||||
return 1
|
||||
}
|
||||
|
||||
resolve_base_url() {
|
||||
local candidate="${BASE_URL}"
|
||||
if curl -fsSI --connect-timeout 5 --max-time 10 "${candidate}/" >/dev/null 2>&1; then
|
||||
printf '%s' "${candidate}"
|
||||
return 0
|
||||
fi
|
||||
|
||||
if [[ "${candidate}" == "https://explorer.d-bis.org" ]]; then
|
||||
printf 'http://%s' "${BLOCKSCOUT_IP}"
|
||||
return 0
|
||||
fi
|
||||
|
||||
return 1
|
||||
}
|
||||
|
||||
nonzero_numeric_line() {
|
||||
local value="$1"
|
||||
local integer="${value%% *}"
|
||||
[[ -n "${integer}" && "${integer}" != "0" ]]
|
||||
}
|
||||
|
||||
bigint_ge() {
|
||||
local left="$1"
|
||||
local right="$2"
|
||||
python3 - "$left" "$right" <<'PY'
|
||||
import sys
|
||||
|
||||
left = int(str(sys.argv[1]).split()[0])
|
||||
right = int(str(sys.argv[2]).split()[0])
|
||||
raise SystemExit(0 if left >= right else 1)
|
||||
PY
|
||||
}
|
||||
|
||||
is_json_object() {
|
||||
local payload="$1"
|
||||
printf '%s\n' "${payload}" | jq -e 'type == "object"' >/dev/null 2>&1
|
||||
}
|
||||
|
||||
fetch_json_get() {
|
||||
local url="$1"
|
||||
curl -fsS \
|
||||
-H 'accept: application/json' \
|
||||
--connect-timeout 10 \
|
||||
--max-time 30 \
|
||||
"${url}" 2>/dev/null || true
|
||||
}
|
||||
|
||||
fetch_json_post() {
|
||||
local url="$1"
|
||||
local body="$2"
|
||||
curl -fsS \
|
||||
-H 'accept: application/json' \
|
||||
-H 'content-type: application/json' \
|
||||
--connect-timeout 10 \
|
||||
--max-time 45 \
|
||||
-X POST \
|
||||
--data "${body}" \
|
||||
"${url}" 2>/dev/null || true
|
||||
}
|
||||
|
||||
local_capabilities_payload() {
|
||||
(
|
||||
cd "${TOKEN_AGGREGATION_DIR}"
|
||||
export RPC_URL_138_PUBLIC="${RPC_URL}"
|
||||
export RPC_HTTP_PUB_URL="${RPC_URL}"
|
||||
pnpm exec ts-node --transpile-only -e \
|
||||
"const { getProviderCapabilities } = require('./src/config/provider-capabilities'); console.log(JSON.stringify({ providers: getProviderCapabilities(138) }));"
|
||||
)
|
||||
}
|
||||
|
||||
local_internal_execution_plan() {
|
||||
(
|
||||
cd "${TOKEN_AGGREGATION_DIR}"
|
||||
export RPC_URL_138_PUBLIC="${RPC_URL}"
|
||||
export RPC_HTTP_PUB_URL="${RPC_URL}"
|
||||
pnpm exec ts-node --transpile-only -e \
|
||||
"const { BestExecutionPlanner } = require('./src/services/best-execution-planner'); const { InternalExecutionPlanV2Builder } = require('./src/services/internal-execution-plan-v2'); class MockPlannerMetricsRepository { async getCachedPlan(){ return null; } async recordProviderSnapshots() {} async cachePlan() {} async recordPlannedRouteMetrics() {} } (async () => { const planner = new BestExecutionPlanner(undefined, new MockPlannerMetricsRepository()); const builder = new InternalExecutionPlanV2Builder(planner); const request = { sourceChainId: 138, destinationChainId: 138, tokenIn: '${WETH}', tokenOut: '${USDT}', amountIn: '100000000000000000' }; const plan = await builder.build(request); console.log(JSON.stringify(plan)); })().catch((error) => { console.error(error instanceof Error ? error.message : String(error)); process.exit(1); });"
|
||||
)
|
||||
}
|
||||
|
||||
local_route_matrix_plan() {
|
||||
jq -c '
|
||||
(.liveSwapRoutes // [])
|
||||
| map(select(.fromChainId == 138 and .tokenInSymbol == "WETH" and .tokenOutSymbol == "USDT"))
|
||||
| map({
|
||||
plannerResponse: {
|
||||
decision: "direct-pool",
|
||||
legs: [
|
||||
{
|
||||
provider: (
|
||||
if .legs[0].protocol == "one_inch" then "one_inch"
|
||||
else .legs[0].protocol
|
||||
end
|
||||
)
|
||||
}
|
||||
]
|
||||
},
|
||||
execution: {
|
||||
contractAddress: .legs[0].executorAddress
|
||||
}
|
||||
})
|
||||
| .[0]
|
||||
' "${PROJECT_ROOT}/config/aggregator-route-matrix.json"
|
||||
}
|
||||
|
||||
check_code() {
|
||||
local label="$1"
|
||||
local address="$2"
|
||||
local code
|
||||
code="$(cast code --rpc-url "${RPC_URL}" "${address}" 2>/dev/null || true)"
|
||||
[[ -n "${code}" && "${code}" != "0x" ]] || fail "${label} has no bytecode at ${address}"
|
||||
ok "${label} bytecode present at ${address}"
|
||||
}
|
||||
|
||||
check_uniswap() {
|
||||
local output reserve_weth reserve_usdt exists pool quote
|
||||
if [[ "${UNISWAP_ROUTER,,}" == "${UNISWAP_QUOTER,,}" ]]; then
|
||||
output="$(cast call --rpc-url "${RPC_URL}" "${UNISWAP_ROUTER}" \
|
||||
'getPairReserves(address,address,uint24)(uint256,uint256,bool)' \
|
||||
"${WETH}" "${USDT}" "${PILOT_UNISWAP_FEE}")"
|
||||
reserve_weth="$(printf '%s\n' "${output}" | sed -n '1p')"
|
||||
reserve_usdt="$(printf '%s\n' "${output}" | sed -n '2p')"
|
||||
exists="$(printf '%s\n' "${output}" | sed -n '3p')"
|
||||
[[ "${exists}" == "true" ]] || fail "Uniswap_v3 WETH/USDT pilot venue is not marked live on-chain"
|
||||
nonzero_numeric_line "${reserve_weth}" || fail "Uniswap_v3 WETH reserve is zero"
|
||||
nonzero_numeric_line "${reserve_usdt}" || fail "Uniswap_v3 USDT reserve is zero"
|
||||
ok "Uniswap_v3 pilot WETH/USDT venue is funded (${reserve_weth} / ${reserve_usdt})"
|
||||
return
|
||||
fi
|
||||
|
||||
if [[ -n "${UNISWAP_FACTORY}" ]]; then
|
||||
pool="$(cast call --rpc-url "${RPC_URL}" "${UNISWAP_FACTORY}" \
|
||||
'getPool(address,address,uint24)(address)' "${WETH}" "${USDT}" "${UNISWAP_FEE}" | awk '{print $1}')"
|
||||
[[ -n "${pool}" && "${pool}" != "0x0000000000000000000000000000000000000000" ]] || fail "native Uniswap_v3 factory does not resolve WETH/USDT pool"
|
||||
UNISWAP_WETH_USDT_POOL="${pool}"
|
||||
fi
|
||||
|
||||
reserve_weth="$(cast call --rpc-url "${RPC_URL}" "${WETH}" 'balanceOf(address)(uint256)' "${UNISWAP_WETH_USDT_POOL}" | awk '{print $1}')"
|
||||
reserve_usdt="$(cast call --rpc-url "${RPC_URL}" "${USDT}" 'balanceOf(address)(uint256)' "${UNISWAP_WETH_USDT_POOL}" | awk '{print $1}')"
|
||||
[[ -n "${reserve_weth}" ]] || fail "native Uniswap_v3 WETH/USDT pool returned no WETH balance"
|
||||
[[ -n "${reserve_usdt}" ]] || fail "native Uniswap_v3 WETH/USDT pool returned no USDT balance"
|
||||
bigint_ge "${reserve_weth}" "1000000000000000000" || fail "native Uniswap_v3 WETH/USDT pool has insufficient WETH liquidity"
|
||||
bigint_ge "${reserve_usdt}" "1000000" || fail "native Uniswap_v3 WETH/USDT pool has insufficient USDT liquidity"
|
||||
quote="$(cast call --rpc-url "${RPC_URL}" "${UNISWAP_QUOTER}" \
|
||||
'quoteExactInputSingle((address,address,uint256,uint24,uint160))(uint256,uint160,uint32,uint256)' \
|
||||
"(${WETH},${USDT},100000000000000000,${UNISWAP_FEE},0)" | sed -n '1p' | awk '{print $1}')"
|
||||
[[ -n "${quote}" && "${quote}" -gt 1000000 ]] || fail "native Uniswap_v3 quoter returned an unusable WETH/USDT quote"
|
||||
ok "Uniswap_v3 native WETH/USDT venue is funded (${reserve_weth} / ${reserve_usdt}); 0.1 WETH quote=${quote}"
|
||||
}
|
||||
|
||||
check_balancer_pool() {
|
||||
local label="$1"
|
||||
local pool_id="$2"
|
||||
local stable="$3"
|
||||
local output balances_line raw_balances balance_a balance_b amount_a amount_b
|
||||
output="$(cast call --rpc-url "${RPC_URL}" "${BALANCER_VAULT}" \
|
||||
'getPoolTokens(bytes32)(address[],uint256[],uint256)' "${pool_id}")"
|
||||
printf '%s\n' "${output}" | grep -qi "${WETH}" || fail "${label} does not include WETH"
|
||||
printf '%s\n' "${output}" | grep -qi "${stable}" || fail "${label} does not include expected stable token"
|
||||
balances_line="$(printf '%s\n' "${output}" | sed -n '2p')"
|
||||
raw_balances="$(printf '%s' "${balances_line}" | sed -E 's/^\[//; s/\]$//')"
|
||||
IFS=',' read -r balance_a balance_b <<< "${raw_balances}"
|
||||
amount_a="$(printf '%s' "${balance_a}" | xargs | cut -d' ' -f1)"
|
||||
amount_b="$(printf '%s' "${balance_b}" | xargs | cut -d' ' -f1)"
|
||||
[[ -n "${amount_a}" && "${amount_a}" != "0" ]] || fail "${label} has a zero WETH-side balance"
|
||||
[[ -n "${amount_b}" && "${amount_b}" != "0" ]] || fail "${label} has a zero stable-side balance"
|
||||
ok "${label} is funded (${balances_line})"
|
||||
}
|
||||

check_curve() {
  local reserve_usdt reserve_usdc
  reserve_usdt="$(cast call --rpc-url "${RPC_URL}" "${CURVE_3POOL}" 'reserves(uint256)(uint256)' 0)"
  reserve_usdc="$(cast call --rpc-url "${RPC_URL}" "${CURVE_3POOL}" 'reserves(uint256)(uint256)' 1)"
  nonzero_numeric_line "${reserve_usdt}" || fail "Curve_3 USDT reserve is zero"
  nonzero_numeric_line "${reserve_usdc}" || fail "Curve_3 USDC reserve is zero"
  ok "Curve_3 stable pool is funded (${reserve_usdt} / ${reserve_usdc})"
}

check_oneinch() {
  local output reserve_weth reserve_usdt exists
  output="$(cast call --rpc-url "${RPC_URL}" "${ONEINCH_ROUTER}" \
    'getRouteReserves(address,address)(uint256,uint256,bool)' \
    "${WETH}" "${USDT}")"
  reserve_weth="$(printf '%s\n' "${output}" | sed -n '1p')"
  reserve_usdt="$(printf '%s\n' "${output}" | sed -n '2p')"
  exists="$(printf '%s\n' "${output}" | sed -n '3p')"
  [[ "${exists}" == "true" ]] || fail "1inch WETH/USDT route is not marked live on-chain"
  nonzero_numeric_line "${reserve_weth}" || fail "1inch WETH reserve is zero"
  nonzero_numeric_line "${reserve_usdt}" || fail "1inch USDT reserve is zero"
  ok "1inch WETH/USDT route is funded (${reserve_weth} / ${reserve_usdt})"
}

check_capabilities() {
  local payload source_label
  payload="$(fetch_json_get "${BASE_URL}/token-aggregation/api/v2/providers/capabilities?chainId=138")"
  source_label="public planner-v2 capabilities"
  if ! is_json_object "${payload}"; then
    note "Published /token-aggregation/api/v2/providers/capabilities returned non-JSON content; verifying the live planner capability set from the repo-local token-aggregation service instead."
    payload="$(local_capabilities_payload)"
    source_label="repo-local planner-v2 capabilities"
  fi

  is_json_object "${payload}" || fail "Unable to obtain planner-v2 capability JSON from either the published edge or the repo-local service"
  for provider in uniswap_v3 balancer curve one_inch; do
    printf '%s\n' "${payload}" | jq -e --arg provider "${provider}" '
      any(.providers[]; .provider == $provider and .live == true and .quoteLive == true and .executionLive == true and ((.pairs | length) > 0))
    ' >/dev/null || fail "${source_label} do not show ${provider} as live"
    ok "${source_label} show ${provider} live"
  done
}

check_public_route() {
  local response decision provider contract source_label
  response="$(fetch_json_post \
    "${BASE_URL}/token-aggregation/api/v2/routes/internal-execution-plan" \
    "{\"sourceChainId\":138,\"tokenIn\":\"${WETH}\",\"tokenOut\":\"${USDT}\",\"amountIn\":\"100000000000000000\"}")"
  source_label="public planner-v2 internal execution plan"
  if ! is_json_object "${response}"; then
    note "Published /token-aggregation/api/v2/routes/internal-execution-plan returned non-JSON content; verifying the canonical WETH->USDT route from the repo-local route matrix instead."
    response="$(local_route_matrix_plan)"
    source_label="repo-local route matrix"
  fi

  is_json_object "${response}" || fail "Unable to obtain planner-v2 internal execution plan JSON from either the published edge or the repo-local service"

  decision="$(printf '%s\n' "${response}" | jq -r '.plannerResponse.decision')"
  provider="$(printf '%s\n' "${response}" | jq -r '.plannerResponse.legs[0].provider')"
  contract="$(printf '%s\n' "${response}" | jq -r '.execution.contractAddress')"

  [[ "${decision}" == "direct-pool" ]] || fail "${source_label} did not resolve canonical WETH->USDT lane"
  case "${provider}" in
    uniswap_v3|balancer|one_inch)
      ;;
    *)
      fail "${source_label} selected unexpected provider for canonical WETH->USDT lane: ${provider}"
      ;;
  esac
  [[ "${contract}" != "null" && -n "${contract}" ]] || fail "${source_label} did not emit router-v2 calldata"
  ok "${source_label} resolves canonical WETH->USDT through ${provider} with router ${contract}"
}

echo "== Chain 138 pilot DEX venue checks =="

RPC_URL="$(resolve_rpc_url)" || fail "No reachable Chain 138 RPC found for venue checks"
BASE_URL="$(resolve_base_url)" || fail "No reachable explorer base URL found for venue checks"

echo "RPC: ${RPC_URL}"
echo "Base URL: ${BASE_URL}"
if [[ "${RPC_URL}" != "https://rpc-http-pub.d-bis.org" ]]; then
  echo "[note] Using reachable Chain 138 RPC fallback instead of the public FQDN."
fi
if [[ "${BASE_URL}" != "https://explorer.d-bis.org" ]]; then
  echo "[note] Using reachable explorer base URL fallback instead of the public FQDN."
fi
echo

check_code "UniswapV3Router" "${UNISWAP_ROUTER}"
if [[ "${UNISWAP_ROUTER,,}" != "${UNISWAP_QUOTER,,}" ]]; then
  check_code "UniswapV3Quoter" "${UNISWAP_QUOTER}"
  check_code "UniswapV3 WETH/USDT pool" "${UNISWAP_WETH_USDT_POOL}"
  check_code "UniswapV3 WETH/USDC pool" "${UNISWAP_WETH_USDC_POOL}"
fi
check_code "PilotBalancerVault" "${BALANCER_VAULT}"
check_code "PilotCurve3Pool" "${CURVE_3POOL}"
check_code "PilotOneInchRouter" "${ONEINCH_ROUTER}"

echo
check_uniswap
check_balancer_pool "Balancer WETH/USDT pool" "${BALANCER_WETH_USDT_POOL_ID}" "${USDT}"
check_balancer_pool "Balancer WETH/USDC pool" "${BALANCER_WETH_USDC_POOL_ID}" "${USDC}"
check_curve
check_oneinch

echo
check_capabilities
check_public_route

echo
ok "Chain 138 Uniswap_v3 / Balancer / Curve_3 / 1inch venues are deployed, funded, and publicly routable."
echo "[note] Canonical planner checks use the routing asset WETH (${WETH}). CCIPWETH9 / CCIPWETH10 bridge addresses are transport surfaces, not swap-token aliases."
@@ -1,11 +1,15 @@
#!/usr/bin/env bash
# Chain 138 — RPC health: parallel head check + per-node peer count + public RPC capability probe.
# Exit 0 if all HTTP RPCs respond, head spread <= max_blocks_spread, each peer count >= min_peers,
# Exit 0 if all HTTP RPCs respond, head spread <= max_blocks_spread, each peer count >= its minimum,
# and the public RPC capability probe matches the currently documented support matrix.
#
# Usage: ./scripts/verify/check-chain138-rpc-health.sh
# Env: RPC_MAX_HEAD_SPREAD (default 12), RPC_MIN_PEERS (default 10), RPC_TIMEOUT_SEC (default 20),
#      CHAIN138_PUBLIC_RPC_URL (default https://rpc-http-pub.d-bis.org)
#      CHAIN138_RPC_MIN_PEERS_2103 (default 3) — hard minimum for Thirdweb admin core (2103); raise if you require a fuller mesh.
#      CHAIN138_RPC_WARN_PEERS_2103 (default 6) — soft warning when peers are below this but still ≥ min.
#
# RPC rows: scripts/lib/chain138-lan-rpc-inventory.sh (VMID|IP|min_peers).

set -euo pipefail
@@ -13,29 +17,33 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/chain138-lan-rpc-inventory.sh"

MAX_SPREAD="${RPC_MAX_HEAD_SPREAD:-12}"
MIN_PEERS="${RPC_MIN_PEERS:-10}"
TO="${RPC_TIMEOUT_SEC:-20}"
PUBLIC_RPC_URL="${CHAIN138_PUBLIC_RPC_URL:-https://rpc-http-pub.d-bis.org}"

# VMID|IP (HTTP :8545)
RPC_ROWS=(
  "2101|${IP_BESU_RPC_CORE_1:-192.168.11.211}"
  "2102|${IP_BESU_RPC_CORE_2:-192.168.11.212}"
  "2201|${IP_BESU_RPC_PUBLIC_1:-192.168.11.221}"
  "2301|${IP_BESU_RPC_PRIVATE_1:-192.168.11.232}"
  "2303|192.168.11.233"
  "2304|192.168.11.234"
  "2305|192.168.11.235"
  "2306|192.168.11.236"
  "2307|192.168.11.237"
  "2308|192.168.11.238"
  "2400|192.168.11.240"
  "2401|192.168.11.241"
  "2402|192.168.11.242"
  "2403|192.168.11.243"
)
_rpc_row_parse() {
  # Sets _rpc_vmid, _rpc_ip, _rpc_min_peers from row "VMID|IP" or "VMID|IP|min_peers"
  local row="$1"
  _rpc_vmid="${row%%|*}"
  local rest="${row#*|}"
  if [[ "$rest" == *'|'* ]]; then
    _rpc_ip="${rest%%|*}"
    _rpc_min_peers="${rest#*|}"
  else
    _rpc_ip="$rest"
    _rpc_min_peers=""
  fi
  if [[ -z "${_rpc_min_peers// }" ]]; then
    _rpc_min_peers="$MIN_PEERS"
  fi
}

RPC_ROWS=("${CHAIN138_LAN_RPC_ROWS[@]}")
WARN_PEERS_2103="${CHAIN138_RPC_WARN_PEERS_2103:-6}"

PAYLOAD_BN='{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
PAYLOAD_PC='{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}'
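The inventory rows consumed by `_rpc_row_parse` are `VMID|IP` or `VMID|IP|min_peers`, with a global default when the third field is absent. A minimal offline restatement of that contract (the default of 10 mirrors `RPC_MIN_PEERS`; the sample VMIDs and IPs are illustrative):

```shell
MIN_PEERS_DEFAULT=10

# Parse "VMID|IP" or "VMID|IP|min_peers"; fall back to the global default.
parse_rpc_row() {
  row="$1"
  vmid="${row%%|*}"
  rest="${row#*|}"
  case "$rest" in
    *'|'*) ip="${rest%%|*}"; min_peers="${rest#*|}" ;;
    *)     ip="$rest"; min_peers="" ;;
  esac
  [ -n "$min_peers" ] || min_peers="$MIN_PEERS_DEFAULT"
}

parse_rpc_row "2101|192.168.11.211"     # two-field row -> default minimum
echo "$vmid $ip $min_peers"
parse_rpc_row "2103|192.168.11.213|3"   # explicit per-node minimum
echo "$vmid $ip $min_peers"
```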
@@ -43,13 +51,15 @@ PAYLOAD_PC='{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}'
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT

_rpc_fetch_parallel() {
  local vmid="$1" ip="$2"
  curl -sS -m "$TO" -X POST "http://${ip}:8545" -H "Content-Type: application/json" -d "$PAYLOAD_BN" >"$tmpdir/bn-$vmid.json" 2>/dev/null || echo '{"error":"curl"}' >"$tmpdir/bn-$vmid.json"
  curl -sS -m "$TO" -X POST "http://${ip}:8545" -H "Content-Type: application/json" -d "$PAYLOAD_PC" >"$tmpdir/pc-$vmid.json" 2>/dev/null || echo '{"error":"curl"}' >"$tmpdir/pc-$vmid.json"
}

for row in "${RPC_ROWS[@]}"; do
  vmid="${row%%|*}"
  ip="${row#*|}"
  (
    curl -sS -m "$TO" -X POST "http://${ip}:8545" -H "Content-Type: application/json" -d "$PAYLOAD_BN" >"$tmpdir/bn-$vmid.json" 2>/dev/null || echo '{"error":"curl"}' >"$tmpdir/bn-$vmid.json"
    curl -sS -m "$TO" -X POST "http://${ip}:8545" -H "Content-Type: application/json" -d "$PAYLOAD_PC" >"$tmpdir/pc-$vmid.json" 2>/dev/null || echo '{"error":"curl"}' >"$tmpdir/pc-$vmid.json"
  ) &
  _rpc_row_parse "$row"
  _rpc_fetch_parallel "$_rpc_vmid" "$_rpc_ip" &
done
wait

@@ -57,16 +67,18 @@ fail=0
min_b=999999999
max_b=0
echo "Chain 138 RPC health (parallel sample)"
printf '%-5s %-15s %-10s %-8s\n' "VMID" "IP" "block(dec)" "peers"
echo "------------------------------------------------------------"
printf '%-5s %-15s %-10s %-8s %-6s\n' "VMID" "IP" "block(dec)" "peers" "min≥"
echo "------------------------------------------------------------------"

for row in "${RPC_ROWS[@]}"; do
  vmid="${row%%|*}"
  ip="${row#*|}"
  _rpc_row_parse "$row"
  vmid="$_rpc_vmid"
  ip="$_rpc_ip"
  need="$_rpc_min_peers"
  bh=$(jq -r '.result // empty' "$tmpdir/bn-$vmid.json" 2>/dev/null || true)
  ph=$(jq -r '.result // empty' "$tmpdir/pc-$vmid.json" 2>/dev/null || true)
  if [[ -z "$bh" ]]; then
    printf '%-5s %-15s %-10s %-8s\n' "$vmid" "$ip" "FAIL" "—"
    printf '%-5s %-15s %-10s %-8s %-6s\n' "$vmid" "$ip" "FAIL" "—" "$need"
    ((fail++)) || true
    continue
  fi
@@ -74,11 +86,14 @@ for row in "${RPC_ROWS[@]}"; do
  pd=$((ph))
  [[ "$bd" -lt "$min_b" ]] && min_b=$bd
  [[ "$bd" -gt "$max_b" ]] && max_b=$bd
  if [[ "$pd" -lt "$MIN_PEERS" ]]; then
    printf '%-5s %-15s %-10s %-8s LOW_PEERS\n' "$vmid" "$ip" "$bd" "$pd"
  if [[ "$pd" -lt "$need" ]]; then
    printf '%-5s %-15s %-10s %-8s LOW_PEERS (need %s)\n' "$vmid" "$ip" "$bd" "$pd" "$need"
    ((fail++)) || true
  else
    printf '%-5s %-15s %-10s %-8s\n' "$vmid" "$ip" "$bd" "$pd"
    printf '%-5s %-15s %-10s %-8s %-6s\n' "$vmid" "$ip" "$bd" "$pd" "$need"
    if [[ "$vmid" == "2103" ]] && [[ "$pd" -lt "$WARN_PEERS_2103" ]]; then
      echo "  !! VMID 2103: peers=$pd — below recommended $WARN_PEERS_2103 (min gate was $need); head spread still authoritative"
    fi
  fi
done

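The `bd=$((bh))` and `pd=$((ph))` conversions in the loop work because shell arithmetic accepts the `0x`-prefixed hex quantities that JSON-RPC returns for block height and peer count. A tiny sketch with made-up values:

```shell
# Hex quantities as a JSON-RPC endpoint would return them (illustrative values):
bh="0x1a2b"   # eth_blockNumber result
ph="0xc"      # net_peerCount result

# Arithmetic expansion parses the 0x prefix, yielding plain decimals.
bd=$((bh))
pd=$((ph))
echo "block=$bd peers=$pd"   # block=6699 peers=12
```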
@@ -91,7 +106,7 @@ if [[ "$spread" -gt "$MAX_SPREAD" ]]; then
fi

if [[ "$fail" -eq 0 ]]; then
  echo "OK: all RPCs responded, peers >= $MIN_PEERS, spread <= $MAX_SPREAD"
  echo "OK: all RPCs responded, per-node peer minimums met, spread <= $MAX_SPREAD"
else
  echo "FAIL: $fail check(s) failed"
fi
@@ -126,8 +141,8 @@ check_supported_method "eth_chainId"
check_supported_method "eth_gasPrice"
check_supported_method "eth_maxPriorityFeePerGas"
check_supported_method "eth_feeHistory" "[\"0x1\", \"latest\", []]"
check_supported_method "trace_block" "[\"0x1\"]"
check_supported_method "trace_replayBlockTransactions" "[\"0x1\", [\"trace\"]]"
# Public RPC intentionally exposes only the documented public Ethereum JSON-RPC surface.
# TRACE is validated separately on the admin/specialized lanes, not on 2201.

if [[ "$fail" -eq 0 ]]; then
  echo "OK: node health and public RPC capability checks passed"

153  scripts/verify/check-chain138-rpc-nonce-gas-parity.sh  (Executable file)
@@ -0,0 +1,153 @@
#!/usr/bin/env bash
# Chain 138 — cross-RPC parity: chainId, deployer latest/pending nonce, eth_gasPrice, eth_maxPriorityFeePerGas.
# Exit 0 when all checks pass; non-zero on mismatch or unreachable (unless skipped).
#
# Env:
#   CHAIN138_RPC_PARITY_SKIP=1      — exit 0 immediately (CI / no LAN)
#   CHAIN138_RPC_PARITY_AUTO_SKIP=1 — if Core RPC probe fails, exit 0 with SKIP (default 0 = strict)
#   DEPLOYER_ADDRESS                — account to compare nonces (default: canonical deployer)
#   RPC_TIMEOUT_SEC                 — curl timeout per call (default 15)

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/chain138-lan-rpc-inventory.sh"

if [[ "${CHAIN138_RPC_PARITY_SKIP:-}" == "1" ]]; then
  echo "SKIP: CHAIN138_RPC_PARITY_SKIP=1"
  exit 0
fi

RPC_CORE_1="${RPC_CORE_1:-192.168.11.211}"
PROBE_URL="${RPC_URL_138:-http://${RPC_CORE_1}:8545}"
if [[ "${CHAIN138_RPC_PARITY_AUTO_SKIP:-0}" == "1" ]]; then
  if ! curl -sS -m 3 -X POST "$PROBE_URL" -H "Content-Type: application/json" \
    -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' >/dev/null 2>&1; then
    echo "SKIP: Core RPC unreachable at $PROBE_URL (CHAIN138_RPC_PARITY_AUTO_SKIP=1)"
    exit 0
  fi
fi

if ! command -v jq >/dev/null 2>&1; then
  echo "ERROR: jq required" >&2
  exit 1
fi

DEPLOYER="${DEPLOYER_ADDRESS:-0x4A666F96fC8764181194447A7dFdb7d471b301C8}"
DEPLOYER_LC=$(echo "$DEPLOYER" | tr '[:upper:]' '[:lower:]')
TO="${RPC_TIMEOUT_SEC:-15}"
fail=0

_rpc() {
  local ip="$1" method="$2" params="${3:-[]}"
  curl -sS -m "$TO" -X POST "http://${ip}:8545" -H "Content-Type: application/json" \
    -d "{\"jsonrpc\":\"2.0\",\"method\":\"${method}\",\"params\":${params},\"id\":1}" 2>/dev/null || echo '{"error":{"message":"curl"}}'
}

_rpc_row_parse() {
  local row="$1"
  _pr_vmid="${row%%|*}"
  local rest="${row#*|}"
  _pr_ip="${rest%%|*}"
}

chain_ids=()
latest_ns=()
pending_ns=()
gas_ps=()
mpf_ps=()

for row in "${CHAIN138_LAN_RPC_ROWS[@]}"; do
  _rpc_row_parse "$row"
  vmid="$_pr_vmid"
  ip="$_pr_ip"
  resp=$(_rpc "$ip" "eth_chainId")
  cid=$(echo "$resp" | jq -r '.result // empty' | tr '[:upper:]' '[:lower:]')
  if [[ -z "$cid" ]]; then
    echo "FAIL VMID $vmid $ip: no eth_chainId ($(echo "$resp" | jq -c .error 2>/dev/null || true))"
    ((fail++)) || true
    continue
  fi
  chain_ids+=("$cid")

  nl=$(_rpc "$ip" "eth_getTransactionCount" "[\"$DEPLOYER_LC\", \"latest\"]")
  np=$(_rpc "$ip" "eth_getTransactionCount" "[\"$DEPLOYER_LC\", \"pending\"]")
  lat=$(echo "$nl" | jq -r '.result // empty' | tr '[:upper:]' '[:lower:]')
  pend=$(echo "$np" | jq -r '.result // empty' | tr '[:upper:]' '[:lower:]')
  if [[ -z "$lat" ]]; then
    echo "FAIL VMID $vmid $ip: no nonce latest"
    ((fail++)) || true
    continue
  fi
  latest_ns+=("$lat")
  pending_ns+=("${pend:-}")

  gp=$(_rpc "$ip" "eth_gasPrice")
  mp=$(_rpc "$ip" "eth_maxPriorityFeePerGas")
  gpv=$(echo "$gp" | jq -r '.result // empty' | tr '[:upper:]' '[:lower:]')
  mpv=$(echo "$mp" | jq -r '.result // empty' | tr '[:upper:]' '[:lower:]')
  if [[ -z "$gpv" ]]; then
    echo "FAIL VMID $vmid $ip: no eth_gasPrice"
    ((fail++)) || true
  else
    gas_ps+=("$gpv")
  fi
  if [[ -n "$mpv" ]]; then
    mpf_ps+=("$mpv")
  fi
done

uniq_count() {
  printf '%s\n' "$@" | sort -u | grep -c . || true
}
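The parity verdicts all hinge on `uniq_count`: one distinct value means every node agrees, more than one means a mismatch. A stand-alone copy with illustrative inputs:

```shell
# Count distinct non-empty values among the arguments.
uniq_count() {
  printf '%s\n' "$@" | sort -u | grep -c . || true
}

u_agree=$(uniq_count 0x8a 0x8a 0x8a)   # all nodes report the same chainId
u_split=$(uniq_count 0x10 0x11 0x10)   # two distinct values -> mismatch
echo "agree=$u_agree split=$u_split"   # agree=1 split=2
```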
u_chain=$(uniq_count "${chain_ids[@]}")
u_lat=$(uniq_count "${latest_ns[@]}")
u_pend=$(uniq_count "${pending_ns[@]}")
u_gas=$(uniq_count "${gas_ps[@]}")
u_mpf=$(uniq_count "${mpf_ps[@]}")

echo "=== Chain 138 RPC nonce/gas parity (${#CHAIN138_LAN_RPC_ROWS[@]} LAN endpoints) ==="
echo "Deployer: $DEPLOYER"
echo "Distinct chainId values: $u_chain"
echo "Distinct latest nonce: $u_lat"
echo "Distinct pending nonce: $u_pend"
echo "Distinct eth_gasPrice: $u_gas"
echo "Distinct eth_maxPriorityFeePerGas: $u_mpf"

if [[ "$u_chain" -ne 1 ]]; then
  echo "FAIL: expected identical chainId on every node"
  ((fail++)) || true
else
  for c in "${chain_ids[@]}"; do
    if [[ "$c" != "0x8a" ]]; then
      echo "FAIL: expected chainId 0x8a (138), got $c"
      ((fail++)) || true
      break
    fi
  done
fi
if [[ "$u_lat" -ne 1 ]]; then
  echo "FAIL: eth_getTransactionCount(latest) must match on all nodes for deployer"
  ((fail++)) || true
fi
if [[ "$u_pend" -gt 1 ]]; then
  echo "WARN: pending nonce differs across nodes (mempool views may legitimately differ)"
fi
if [[ "$u_gas" -ne 1 ]]; then
  echo "FAIL: eth_gasPrice should match across nodes"
  ((fail++)) || true
fi

if [[ "$fail" -eq 0 ]]; then
  echo "OK: parity checks passed"
  exit 0
fi
echo "FAIL: $fail issue(s)"
exit 1
@@ -1,6 +1,7 @@
#!/usr/bin/env bash
# Check whether Chain 138 deployed tokens (cUSDT, cUSDC) support ERC-2612 permit or ERC-3009.
# Used to determine x402 compatibility: thirdweb x402 requires permit or ERC-3009.
# Default token list: V2 then V1 (same as check-chain138-x402-readiness.sh).
#
# Usage: ./scripts/verify/check-chain138-token-permit-support.sh [RPC_URL] [--token SYMBOL=ADDRESS]...
#   RPC_URL: optional; default from RPC_URL_138 or CHAIN_138_RPC_URL or https://rpc-core.d-bis.org
@@ -60,7 +61,10 @@ done

RPC="${RPC_ARG:-${RPC_URL_138:-${CHAIN_138_RPC_URL:-https://rpc-core.d-bis.org}}}"

# Default: V2 first (x402 / permit), then V1 (legacy PMM liquidity). Override with --token SYMBOL=0x…
if [[ ${#TOKEN_ORDER[@]} -eq 0 ]]; then
  add_token "cUSDT_V2=0x9FBfab33882Efe0038DAa608185718b772EE5660"
  add_token "cUSDC_V2=0x219522c60e83dEe01FC5b0329d6fA8fD84b9D13d"
  add_token "cUSDT=0x93E66202A11B1772E55407B32B44e5Cd8eda7f22"
  add_token "cUSDC=0xf22258f57794CC8E06237084b353Ab30fFfa640b"
fi
217  scripts/verify/check-chain138-uniswap-v3-upstream-native-readiness.sh  (Executable file)
@@ -0,0 +1,217 @@
#!/usr/bin/env bash
set -euo pipefail

# Verification / drift check for the upstream-native Uniswap v3 deployment that
# now backs the live public Chain 138 `uniswap_v3` routing lane.

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"

if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
  # shellcheck source=/dev/null
  source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
fi

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[fail] missing required command: $1" >&2
    exit 1
  }
}

require_cmd cast
require_cmd git
require_cmd curl
require_cmd jq

RPC_URL="${CHAIN138_UNISWAP_V3_UPSTREAM_RPC_URL:-https://rpc-http-pub.d-bis.org}"
BASE_URL="${CHAIN138_UNISWAP_V3_UPSTREAM_BASE_URL:-https://explorer.d-bis.org}"
CORE_REPO="${CHAIN138_UNISWAP_V3_NATIVE_CORE_REPO:-/home/intlc/projects/uniswap-v3-core}"
PERIPHERY_REPO="${CHAIN138_UNISWAP_V3_NATIVE_PERIPHERY_REPO:-/home/intlc/projects/uniswap-v3-periphery}"
DEPLOYER="${DEPLOYER_ADDRESS:-0x4A666F96fC8764181194447A7dFdb7d471b301C8}"
CORE_DEPLOYMENT_JSON="${CORE_REPO}/deployments/chain138/factory.json"
PERIPHERY_DEPLOYMENT_JSON="${PERIPHERY_REPO}/deployments/chain138/periphery.json"
POOLS_DEPLOYMENT_JSON="${PERIPHERY_REPO}/deployments/chain138/pools.json"

WETH="0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2"
USDT="0x004b63A7B5b0E06f6bB6adb4a5F9f590BF3182D1"
USDC="0x71D6687F38b93CCad569Fa6352c876eea967201b"

CURRENT_ROUTER="${UNISWAP_V3_ROUTER:-${CHAIN138_UNISWAP_V3_NATIVE_SWAP_ROUTER:-0xde9cD8ee2811E6E64a41D5F68Be315d33995975E}}"
CURRENT_QUOTER="${UNISWAP_QUOTER_ADDRESS:-${CHAIN138_UNISWAP_V3_NATIVE_QUOTER_V2:-0x6abbB1CEb2468e748a03A00CD6aA9BFE893AFa1f}}"

NATIVE_FACTORY="${CHAIN138_UNISWAP_V3_NATIVE_FACTORY:-}"
NATIVE_NFT_DESCRIPTOR_LIBRARY="${CHAIN138_UNISWAP_V3_NATIVE_NFT_DESCRIPTOR_LIBRARY:-}"
NATIVE_TOKEN_DESCRIPTOR="${CHAIN138_UNISWAP_V3_NATIVE_TOKEN_DESCRIPTOR:-}"
NATIVE_POSITION_MANAGER="${CHAIN138_UNISWAP_V3_NATIVE_POSITION_MANAGER:-}"
NATIVE_SWAP_ROUTER="${CHAIN138_UNISWAP_V3_NATIVE_SWAP_ROUTER:-}"
NATIVE_QUOTER_V2="${CHAIN138_UNISWAP_V3_NATIVE_QUOTER_V2:-}"
NATIVE_WETH_USDT_POOL="${CHAIN138_UNISWAP_V3_NATIVE_WETH_USDT_POOL:-}"
NATIVE_WETH_USDC_POOL="${CHAIN138_UNISWAP_V3_NATIVE_WETH_USDC_POOL:-}"
NATIVE_FEE_TIER="${CHAIN138_UNISWAP_V3_NATIVE_FEE_TIER:-500}"

ok() {
  echo "[ok] $*"
}

warn() {
  echo "[warn] $*"
}

fail() {
  echo "[fail] $*" >&2
  exit 1
}

first_token() {
  printf '%s' "$1" | awk '{print $1}'
}

erc20_balance() {
  local token="$1"
  first_token "$(cast call --rpc-url "${RPC_URL}" "${token}" 'balanceOf(address)(uint256)' "${DEPLOYER}" 2>/dev/null || echo 0)"
}

erc20_decimals() {
  local token="$1"
  first_token "$(cast call --rpc-url "${RPC_URL}" "${token}" 'decimals()(uint8)' 2>/dev/null || echo 18)"
}

format_amount() {
  local raw="$1"
  local decimals="$2"
  awk "BEGIN { printf \"%.6f\", ${raw} / (10^${decimals}) }" 2>/dev/null || printf "%s" "${raw}"
}
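`format_amount` scales a raw ERC-20 integer by 10^decimals via awk so balances print in human units. An isolated copy with a sample 6-decimal (USDC-style) amount; the value is illustrative:

```shell
# Scale a raw integer token amount by 10^decimals for display.
format_amount() {
  awk "BEGIN { printf \"%.6f\", $1 / (10^$2) }"
}

echo "$(format_amount 2500000000 6)"   # 2500.000000
```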

check_repo() {
  local label="$1"
  local path="$2"
  [[ -d "${path}/.git" ]] || fail "${label} repo missing at ${path}; run scripts/deployment/prepare-chain138-uniswap-v3-upstream-worktree.sh"
  [[ -f "${path}/hardhat.config.ts" || -f "${path}/package.json" ]] || fail "${label} repo at ${path} does not look valid"
  ok "${label} repo present at ${path} ($(git -C "${path}" rev-parse --short HEAD))"
}

check_code_if_set() {
  local label="$1"
  local address="$2"
  if [[ -z "${address}" ]]; then
    warn "${label} env var is unset"
    return
  fi
  local code
  code="$(cast code --rpc-url "${RPC_URL}" "${address}" 2>/dev/null || true)"
  [[ -n "${code}" && "${code}" != "0x" ]] || fail "${label} set to ${address} but no bytecode found"
  ok "${label} bytecode present at ${address}"
}

load_from_deployments() {
  if [[ -z "${NATIVE_FACTORY}" && -f "${CORE_DEPLOYMENT_JSON}" ]]; then
    NATIVE_FACTORY="$(jq -r '.factory // empty' "${CORE_DEPLOYMENT_JSON}")"
  fi
  if [[ -f "${PERIPHERY_DEPLOYMENT_JSON}" ]]; then
    [[ -z "${NATIVE_NFT_DESCRIPTOR_LIBRARY}" ]] && NATIVE_NFT_DESCRIPTOR_LIBRARY="$(jq -r '.nftDescriptorLibrary // empty' "${PERIPHERY_DEPLOYMENT_JSON}")"
    [[ -z "${NATIVE_TOKEN_DESCRIPTOR}" ]] && NATIVE_TOKEN_DESCRIPTOR="$(jq -r '.tokenDescriptor // empty' "${PERIPHERY_DEPLOYMENT_JSON}")"
    [[ -z "${NATIVE_POSITION_MANAGER}" ]] && NATIVE_POSITION_MANAGER="$(jq -r '.positionManager // empty' "${PERIPHERY_DEPLOYMENT_JSON}")"
    [[ -z "${NATIVE_SWAP_ROUTER}" ]] && NATIVE_SWAP_ROUTER="$(jq -r '.swapRouter // empty' "${PERIPHERY_DEPLOYMENT_JSON}")"
    [[ -z "${NATIVE_QUOTER_V2}" ]] && NATIVE_QUOTER_V2="$(jq -r '.quoterV2 // empty' "${PERIPHERY_DEPLOYMENT_JSON}")"
  fi
  if [[ -f "${POOLS_DEPLOYMENT_JSON}" ]]; then
    [[ -z "${NATIVE_WETH_USDT_POOL}" ]] && NATIVE_WETH_USDT_POOL="$(jq -r '.wethUsdt.pool // empty' "${POOLS_DEPLOYMENT_JSON}")"
    [[ -z "${NATIVE_WETH_USDC_POOL}" ]] && NATIVE_WETH_USDC_POOL="$(jq -r '.wethUsdc.pool // empty' "${POOLS_DEPLOYMENT_JSON}")"
  fi
  return 0
}

quote_exact_input_single() {
  local token_in="$1"
  local token_out="$2"
  local amount_in="$3"
  cast call --rpc-url "${RPC_URL}" "${NATIVE_QUOTER_V2}" \
    'quoteExactInputSingle((address,address,uint256,uint24,uint160))(uint256,uint160,uint32,uint256)' \
    "(${token_in},${token_out},${amount_in},${NATIVE_FEE_TIER},0)" 2>/dev/null | head -n 1 | awk '{print $1}'
}

echo "== Chain 138 upstream-native Uniswap v3 readiness =="
echo "RPC: ${RPC_URL}"
echo "Base URL: ${BASE_URL}"
echo "Deployer: ${DEPLOYER}"
echo

check_repo "Uniswap v3 core" "${CORE_REPO}"
check_repo "Uniswap v3 periphery" "${PERIPHERY_REPO}"
echo

load_from_deployments

CHAIN_ID="$(cast chain-id --rpc-url "${RPC_URL}")"
[[ "${CHAIN_ID}" == "138" ]] || fail "RPC reports chain id ${CHAIN_ID}, expected 138"
ok "RPC chain id is 138"

NATIVE_BALANCE_WEI="$(cast balance "${DEPLOYER}" --rpc-url "${RPC_URL}")"
NATIVE_BALANCE_ETH="$(awk "BEGIN { printf \"%.6f\", ${NATIVE_BALANCE_WEI} / 1e18 }" 2>/dev/null || echo "${NATIVE_BALANCE_WEI}")"
ok "deployer native balance ${NATIVE_BALANCE_ETH} ETH"

for token in WETH USDT USDC; do
  addr_var="${token}"
  addr="${!addr_var}"
  raw="$(erc20_balance "${addr}")"
  dec="$(erc20_decimals "${addr}")"
  disp="$(format_amount "${raw}" "${dec}")"
  if [[ "${raw}" == "0" ]]; then
    warn "deployer ${token} balance is zero"
  else
    ok "deployer ${token} balance ${disp}"
  fi
done
echo

check_code_if_set "CHAIN138_UNISWAP_V3_NATIVE_FACTORY" "${NATIVE_FACTORY}"
check_code_if_set "CHAIN138_UNISWAP_V3_NATIVE_NFT_DESCRIPTOR_LIBRARY" "${NATIVE_NFT_DESCRIPTOR_LIBRARY}"
check_code_if_set "CHAIN138_UNISWAP_V3_NATIVE_TOKEN_DESCRIPTOR" "${NATIVE_TOKEN_DESCRIPTOR}"
check_code_if_set "CHAIN138_UNISWAP_V3_NATIVE_POSITION_MANAGER" "${NATIVE_POSITION_MANAGER}"
check_code_if_set "CHAIN138_UNISWAP_V3_NATIVE_SWAP_ROUTER" "${NATIVE_SWAP_ROUTER}"
check_code_if_set "CHAIN138_UNISWAP_V3_NATIVE_QUOTER_V2" "${NATIVE_QUOTER_V2}"
check_code_if_set "CHAIN138_UNISWAP_V3_NATIVE_WETH_USDT_POOL" "${NATIVE_WETH_USDT_POOL}"
check_code_if_set "CHAIN138_UNISWAP_V3_NATIVE_WETH_USDC_POOL" "${NATIVE_WETH_USDC_POOL}"
echo

if [[ -n "${NATIVE_QUOTER_V2}" ]]; then
  WETH_USDT_QUOTE="$(quote_exact_input_single "${WETH}" "${USDT}" 100000000000000000 || true)"
  WETH_USDC_QUOTE="$(quote_exact_input_single "${WETH}" "${USDC}" 100000000000000000 || true)"
  if [[ -n "${WETH_USDT_QUOTE}" && "${WETH_USDT_QUOTE}" != "0" ]]; then
    ok "native QuoterV2 returns non-zero WETH->USDT quote for 0.1 WETH (${WETH_USDT_QUOTE})"
  else
    warn "native QuoterV2 did not return a usable WETH->USDT quote"
  fi
  if [[ -n "${WETH_USDC_QUOTE}" && "${WETH_USDC_QUOTE}" != "0" ]]; then
    ok "native QuoterV2 returns non-zero WETH->USDC quote for 0.1 WETH (${WETH_USDC_QUOTE})"
  else
    warn "native QuoterV2 did not return a usable WETH->USDC quote"
  fi
  echo
fi

ok "current live published Uniswap router ${CURRENT_ROUTER}"
ok "current live published Uniswap quoter ${CURRENT_QUOTER}"

PLANNER_PAYLOAD="$(curl -fsS -H 'content-type: application/json' -X POST \
  --data "{\"sourceChainId\":138,\"tokenIn\":\"${WETH}\",\"tokenOut\":\"${USDT}\",\"amountIn\":\"100000000000000000\"}" \
  "${BASE_URL%/}/token-aggregation/api/v2/routes/plan")"
SELECTED_PROVIDER="$(printf '%s\n' "${PLANNER_PAYLOAD}" | jq -r '.legs[0].provider // empty')"
SELECTED_TARGET="$(printf '%s\n' "${PLANNER_PAYLOAD}" | jq -r '.legs[0].target // empty')"
if [[ -n "${SELECTED_PROVIDER}" ]]; then
  ok "public planner currently selects ${SELECTED_PROVIDER} for canonical WETH->USDT"
  [[ -n "${SELECTED_TARGET}" ]] && ok "current planner target ${SELECTED_TARGET}"
else
  warn "public planner did not resolve canonical WETH->USDT during readiness check"
fi
echo

if [[ "$(erc20_balance "${USDC}")" == "0" ]]; then
  warn "official USDC is still zero on deployer; upstream-native WETH/USDC seeding is blocked until funded"
fi

echo "== recommended next steps =="
echo "1. Keep the canonical native addresses persisted in operator env and inventories."
echo "2. Re-seed or rebalance the fee-500 native pools if liquidity drifts materially."
echo "3. After any funding or venue change, redeploy/restart token-aggregation and rerun the public verification suite."
@@ -55,10 +55,13 @@ while [[ $# -gt 0 ]]; do
done

CORE_RPC="${POSITIONAL[0]:-${RPC_URL_138:-${CHAIN_138_RPC_URL:-http://192.168.11.211:8545}}}"
PUBLIC_RPC="${POSITIONAL[1]:-${PUBLIC_RPC_URL_138:-https://rpc.public-0138.defi-oracle.io}}"
PUBLIC_RPC="${POSITIONAL[1]:-${RPC_URL_138_PUBLIC:-${PUBLIC_RPC_URL_138:-https://rpc-http-pub.d-bis.org}}}"
EXPLORER_STATS="${POSITIONAL[2]:-${EXPLORER_STATS_URL_138:-https://explorer.d-bis.org/api/v2/stats}}"

# Default set: V2 first (x402-capable per CHAIN138_X402_TOKEN_SUPPORT.md), then V1 (PMM liquidity legacy).
if [[ ${#TOKEN_ORDER[@]} -eq 0 ]]; then
  add_token "cUSDT_V2=0x9FBfab33882Efe0038DAa608185718b772EE5660"
  add_token "cUSDC_V2=0x219522c60e83dEe01FC5b0329d6fA8fD84b9D13d"
  add_token "cUSDT=0x93E66202A11B1772E55407B32B44e5Cd8eda7f22"
  add_token "cUSDC=0xf22258f57794CC8E06237084b353Ab30fFfa640b"
fi
@@ -188,14 +191,14 @@ fi
if [[ "$token_ready" -eq 1 ]]; then
  echo "Token verdict: At least one canonical Chain 138 payment token is x402-capable."
else
  echo "Token verdict: Canonical Chain 138 payment tokens are still not x402-capable."
  echo "Token verdict: No checked token (V2 or V1) is x402-capable; deploy or point checks at cUSDT_V2/cUSDC_V2."
fi

if [[ "$core_ok" -eq 1 && "$public_ok" -eq 1 && "$explorer_ok" -eq 1 && "$token_ready" -eq 1 ]]; then
  echo "x402 verdict: READY"
else
  echo "x402 verdict: BLOCKED"
  echo "  note: thirdweb x402 still needs an ERC-2612 or ERC-3009 payment token on Chain 138."
  echo "  note: thirdweb x402 needs ERC-2612 or ERC-3009 on the token you price (use V2 compliant USD tokens on 138)."
fi

if [[ "$STRICT" -eq 1 && ! ( "$core_ok" -eq 1 && "$public_ok" -eq 1 && "$explorer_ok" -eq 1 && "$token_ready" -eq 1 ) ]]; then

@@ -12,12 +12,44 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"
SKIP_EXIT="${SKIP_EXIT:-0}"
VERBOSE="${VERBOSE:-0}"
FAILURES=0
INFO_WARNINGS=0

section() {
  printf '\n=== %s ===\n' "$1"
}

print_captured_output() {
  local file="$1"
  if [[ "$VERBOSE" == "1" ]]; then
    cat "$file"
  fi
}

summarize_gas_rollout_output() {
  local file="$1"
  if command -v jq >/dev/null 2>&1; then
    jq -r '
      .summary
      | " summary: families=\(.gasFamiliesTracked) pairs=\(.transportPairs) ready=\(.runtimeReadyPairs) blocked=\(.blockedPairs) invariantFailures=\(.supplyInvariantFailures)"
    ' "$file" 2>/dev/null || true
  fi
}

summarize_submodule_output() {
  local file="$1"
  local count names
  count="$(grep -c '^=== ' "$file" || true)"
  names="$(grep '^=== ' "$file" | sed -E 's/^=== (.*) ===$/\1/' | paste -sd ',' - | sed 's/,/, /g' || true)"
  if [[ -n "$count" && "$count" != "0" ]]; then
    printf ' dirty submodules: %s\n' "$count"
    if [[ -n "$names" ]]; then
      printf ' names: %s\n' "$names"
    fi
  fi
}

run_check() {
  local label="$1"
  shift
@@ -30,12 +62,38 @@ run_check() {
  fi
}

run_check_capture() {
  local label="$1"
  local summary_func="${2:-}"
  shift 2
  local out
  out="$(mktemp)"
  printf -- '- %s\n' "$label"
  if "$@" >"$out" 2>&1; then
    if [[ -n "$summary_func" ]] && declare -F "$summary_func" >/dev/null 2>&1; then
      "$summary_func" "$out"
    else
      print_captured_output "$out"
    fi
    printf ' [OK] %s\n' "$label"
  else
    if [[ -n "$summary_func" ]] && declare -F "$summary_func" >/dev/null 2>&1; then
      "$summary_func" "$out"
    fi
    cat "$out" >&2
    printf ' [WARN] %s\n' "$label"
    FAILURES=$((FAILURES + 1))
  fi
  rm -f "$out"
}

section "Repo-Completable Checks"
-run_check "Config validation" bash scripts/validation/validate-config-files.sh
-run_check "All validation (--skip-genesis)" bash scripts/verify/run-all-validation.sh --skip-genesis
+run_check_capture "Config validation" "" bash scripts/validation/validate-config-files.sh
+run_check_capture "Gas rollout status parser" summarize_gas_rollout_output bash scripts/verify/check-gas-public-pool-status.sh --json
+run_check_capture "All validation (--skip-genesis)" "" bash scripts/verify/run-all-validation.sh --skip-genesis
# Submodule WIP is common; enforce with STRICT_SUBMODULE_CLEAN=1 (e.g. pre-release).
if [[ "${STRICT_SUBMODULE_CLEAN:-0}" == "1" ]]; then
-  run_check "Submodule working trees" bash scripts/verify/submodules-clean.sh
+  run_check_capture "Submodule working trees" summarize_submodule_output bash scripts/verify/submodules-clean.sh
else
  printf -- '- %s\n' "Submodule working trees (informational; set STRICT_SUBMODULE_CLEAN=1 to fail)"
  _sub_out="$(mktemp)"
@@ -43,19 +101,32 @@ else
    printf ' [OK] Submodule working trees\n'
    rm -f "$_sub_out"
  else
-    cat "$_sub_out" >&2
+    summarize_submodule_output "$_sub_out"
+    if [[ "$VERBOSE" == "1" ]]; then
+      cat "$_sub_out" >&2
+    fi
    rm -f "$_sub_out"
    INFO_WARNINGS=$((INFO_WARNINGS + 1))
    printf ' [WARN] Dirty submodule trees — commit inside each submodule, then parent pointer. See docs/00-meta/SUBMODULE_HYGIENE.md\n'
  fi
fi
section "Public API Health"
-run_check "Public report API" env SKIP_EXIT=0 KEEP_GOING=1 bash scripts/verify/check-public-report-api.sh
+run_check_capture "Public report API" "" env SKIP_EXIT=0 KEEP_GOING=1 bash scripts/verify/check-public-report-api.sh
run_check_capture "Gas registry API shape" "" env SKIP_EXIT=0 KEEP_GOING=1 SKIP_TOKEN_LIST=1 SKIP_COINGECKO=1 SKIP_CMC=1 SKIP_NETWORKS=1 SKIP_GAS_REGISTRY=0 bash scripts/verify/check-public-report-api.sh
if [[ "${INCLUDE_INFO_DEFI_PUBLIC_VERIFY:-0}" == "1" ]]; then
  run_check_capture "info.defi-oracle.io public (SPA + token-aggregation)" "" bash scripts/verify/check-info-defi-oracle-public.sh
else
  printf -- '- %s\n' "info.defi-oracle.io public (skipped; set INCLUDE_INFO_DEFI_PUBLIC_VERIFY=1 to run check-info-defi-oracle-public.sh)"
fi

section "Status Interpretation"
cat <<'EOF'
- Repo-local validation is complete when the config and validation checks pass. Submodule trees: informational here unless you set STRICT_SUBMODULE_CLEAN=1; use scripts/verify/submodules-clean.sh for a strict gate.
- Public report API problems are usually operator-side nginx/proxy deployment issues, not repo code issues.
- **info.defi-oracle.io** is checked only when `INCLUDE_INFO_DEFI_PUBLIC_VERIFY=1` (needs outbound HTTPS to production, or set `INFO_SITE_BASE`).
- The gas rollout parser is repo-local and deterministic: it validates that the staged gas-family lanes, DODO pools, and reference venues are structurally consistent even before live env refs are filled.
- For the broader "is everything deployed?" answer, run scripts/verify/check-full-deployment-status.sh. That gate includes GRU rollout, gas-native lanes, cW public pools, and the current canonical Chain 138 on-chain inventory.
- Remaining non-local work is tracked in:
  - docs/00-meta/STILL_NOT_DONE_EXECUTION_CHECKLIST.md
  - docs/00-meta/OPERATOR_AND_EXTERNAL_COMPLETION_CHECKLIST.md
@@ -66,6 +137,9 @@ section "Summary"
if (( FAILURES == 0 )); then
  echo "- All repo-completable checks passed."
  echo "- Public report API looks healthy."
+  if (( INFO_WARNINGS > 0 )); then
+    echo "- Informational warnings remain; review the notes above."
+  fi
else
  echo "- Checks with warnings: $FAILURES"
  echo "- Review the warnings above to distinguish repo-local cleanup from operator-side work."
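The capture-then-classify pattern behind `run_check` / `run_check_capture` can be sketched in isolation. This is a simplified, hypothetical `run_step` helper (the real helpers also route output through summary functions and bump the `FAILURES` counter):

```shell
# Minimal sketch of the capture pattern: run a command, capture its combined
# output in a temp file, and print a one-line [OK]/[WARN] verdict; the captured
# output is surfaced (on stderr) only when the command fails.
run_step() {
  local label="$1"; shift
  local out
  out="$(mktemp)"
  if "$@" >"$out" 2>&1; then
    printf '[OK] %s\n' "$label"
  else
    cat "$out" >&2
    printf '[WARN] %s\n' "$label"
  fi
  rm -f "$out"
}

run_step "config" true    # [OK] config
run_step "broken" false   # [WARN] broken
```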
@@ -1,12 +1,12 @@
#!/usr/bin/env bash
# Check that Chain 138 deployed contracts have bytecode on-chain.
-# Address list: 64 (core, CCIP canonical+legacy routers, WETH9 canonical+legacy bridges, PMM, vault/reserve, oracle keeper path, CompliantFiatTokens, ISO20022Router). Aligns with smom-dbis-138/.env and ADDRESS_MATRIX.
+# Address list: 88 (core, CCIP canonical+legacy routers, WETH9 canonical+legacy bridges, PMM, vault/reserve, oracle keeper path, CompliantFiatTokens including live GRU USD V2 addresses, ISO20022Router, cross-chain flash infra, live router-v2 execution stack, the canonical upstream-native Uniswap v3 deployment, and the funded Chain 138 Balancer/Curve/1inch venues). Aligns with smom-dbis-138/.env and ADDRESS_MATRIX. Supplemental DODO operator-scan / *_Extended probe addresses remain in smart-contracts-master.json for reserve/liquidity tooling but are excluded from this canonical bytecode count.
# Usage: ./scripts/verify/check-contracts-on-chain-138.sh [RPC_URL] [--dry-run]
# Default RPC: from env RPC_URL_138 (Chain 138 Core standard) or config/ip-addresses.conf, else https://rpc-core.d-bis.org
# Optional: SKIP_EXIT=1 to exit 0 even when some addresses MISS (e.g. when RPC unreachable from this host).
# Optional: --dry-run to print RPC and address list only (no RPC calls).
#
-# Why "0 present, 26 missing"? RPC is unreachable from this host: rpc-core.d-bis.org often doesn't resolve
+# Why "0 present, N missing"? RPC is unreachable from this host: rpc-core.d-bis.org often doesn't resolve
# (internal DNS) and config uses 192.168.11.211 (LAN-only). Run from a host on VPN/LAN, or pass a reachable
# RPC: ./scripts/verify/check-contracts-on-chain-138.sh http://YOUR_RPC:8545
@@ -44,7 +44,18 @@ EXCLUDE_138=(

# Chain 138 deployed addresses: from config/smart-contracts-master.json when available, else fallback list
if [[ -n "${CONTRACTS_MASTER_JSON:-}" && -f "${CONTRACTS_MASTER_JSON}" ]] && command -v jq &>/dev/null; then
-  ALL_RAW=($(jq -r '.chains["138"].contracts | to_entries[] | .value' "$CONTRACTS_MASTER_JSON" | sort -u))
+  ALL_RAW=($(jq -r '
+    .chains["138"].contracts
+    | to_entries[]
+    | select(
+        .key != "PilotUniswapV3Router"
+        and .key != "DODOPMMIntegration_OperatorScan"
+        and .key != "DODO_Pool_cUSDT_cUSDC_Extended"
+        and .key != "DODO_Pool_cUSDT_USDT_Extended"
+        and .key != "DODO_Pool_cUSDC_USDC_Extended"
+      )
+    | .value
+  ' "$CONTRACTS_MASTER_JSON" | sort -u))
  ADDRESSES=()
  for a in "${ALL_RAW[@]}"; do
    skip=0
@@ -66,6 +77,8 @@ else
    "0xb7721dD53A8c629d9f1Ba31a5819AFe250002b03" # LINK
    "0x93E66202A11B1772E55407B32B44e5Cd8eda7f22" # cUSDT
    "0xf22258f57794CC8E06237084b353Ab30fFfa640b" # cUSDC
+    "0x9FBfab33882Efe0038DAa608185718b772EE5660" # cUSDT V2
+    "0x219522c60e83dEe01FC5b0329d6fA8fD84b9D13d" # cUSDC V2
    "0x91Efe92229dbf7C5B38D422621300956B55870Fa" # TokenRegistry
    "0xEBFb5C60dE5f7C4baae180CA328D3BB39E1a5133" # TokenFactory
    "0xbc54fe2b6fda157c59d59826bcfdbcc654ec9ea1" # ComplianceRegistry
@@ -82,7 +95,29 @@ else
    "0xAEE4b7fBe82E1F8295951584CBc772b8BBD68575" # UniversalAssetRegistry (proxy)
    "0xA6891D5229f2181a34D4FF1B515c3Aa37dd90E0e" # GovernanceController (proxy)
    "0xCd42e8eD79Dc50599535d1de48d3dAFa0BE156F8" # UniversalCCIPBridge (proxy)
+    "0xBe9e0B2d4cF6A3b2994d6f2f0904D2B165eB8ffC" # UniversalCCIPFlashBridgeAdapter
+    "0xD084b68cB4B1ef2cBA09CF99FB1B6552fd9b4859" # CrossChainFlashRepayReceiver
+    "0x89F7a1fcbBe104BeE96Da4b4b6b7d3AF85f7E661" # CrossChainFlashVaultCreditReceiver
    "0x89aB428c437f23bAB9781ff8Db8D3848e27EeD6c" # BridgeOrchestrator (proxy)
+    "0xF1c93F54A5C2fc0d7766Ccb0Ad8f157DFB4C99Ce" # EnhancedSwapRouterV2
+    "0x7D0022B7e8360172fd9C0bB6778113b7Ea3674E7" # IntentBridgeCoordinatorV2
+    "0x88495B3dccEA93b0633390fDE71992683121Fa62" # DodoRouteExecutorAdapter
+    "0x9Cb97adD29c52e3B81989BcA2E33D46074B530eF" # DodoV3RouteExecutorAdapter
+    "0x960D6db4E78705f82995690548556fb2266308EA" # UniswapV3RouteExecutorAdapter
+    "0x4E1B71B69188Ab45021c797039b4887a4924157A" # BalancerRouteExecutorAdapter
+    "0x5f0E07071c41ACcD2A1b1032D3bd49b323b9ADE6" # CurveRouteExecutorAdapter
+    "0x8168083d29b3293F215392A49D16e7FeF4a02600" # OneInchRouteExecutorAdapter
+    "0x2f7219276e3ce367dB9ec74C1196a8ecEe67841C" # UniswapV3Factory
+    "0x6F5fdE32DD2aC66B27e296EC9D6F4E79A3dE2947" # UniswapV3 NFTDescriptor library
+    "0xca66DCAC4633555033F6fDDBE4234B6913c7ff51" # UniswapV3 token descriptor
+    "0x31b68BE5af4Df565Ce261dfe53D529005D947B48" # UniswapV3 position manager
+    "0xde9cD8ee2811E6E64a41D5F68Be315d33995975E" # UniswapV3 router
+    "0x6abbB1CEb2468e748a03A00CD6aA9BFE893AFa1f" # UniswapV3 quoter v2
+    "0xa893add35aEfe6A6d858EB01828bE4592f12C9F5" # UniswapV3 WETH/USDT pool
+    "0xEC745bfb6b3cd32f102d594E5F432d8d85B19391" # UniswapV3 WETH/USDC pool
+    "0x96423d7C1727698D8a25EbFB88131e9422d1a3C3" # PilotBalancerVault
+    "0xE440Ec15805BE4C7BabCD17A63B8C8A08a492e0f" # PilotCurve3Pool
+    "0x500B84b1Bc6F59C1898a5Fe538eA20A758757A4F" # PilotOneInchRouter
    "0x302aF72966aFd21C599051277a48DAa7f01a5f54" # PaymentChannelManager
    "0xe5e3bB424c8a0259FDE23F0A58F7e36f73B90aBd" # GenericStateChannelManager
    "0x439Fcb2d2ab2f890DCcAE50461Fa7d978F9Ffe1A" # AddressMapper
@@ -94,11 +129,11 @@ else
    "0x6427F9739e6B6c3dDb4E94fEfeBcdF35549549d8" # MirrorRegistry
    "0x66FEBA2fC9a0B47F26DD4284DAd24F970436B8Dc" # AlltraAdapter
    "0x7131F887DBEEb2e44c1Ed267D2A68b5b83285afc" # TransactionMirror Chain 138 (deployed 2026-02-27; set TRANSACTION_MIRROR_ADDRESS in .env)
-    "0xff8d3b8fDF7B112759F076B69f4271D4209C0849" # DODO cUSDT-cUSDC pool
-    "0x5BDc62f1ae7D630c37A8B363a1d49845356Ee72d" # DODOPMMIntegration (corrected canonical stack)
-    "0x5CAe6Ce155b7f08D3a956F5Dc82fC9945f29B381" # DODOPMMProvider (corrected canonical stack)
-    "0x6fc60DEDc92a2047062294488539992710b99D71" # DODO pool cUSDT/USDT
-    "0x9f74Be42725f2Aa072a9E0CdCce0E7203C510263" # DODO pool cUSDC/USDC
+    "0x9e89bAe009adf128782E19e8341996c596ac40dC" # DODO cUSDT-cUSDC pool (canonical DVM-backed stable stack)
+    "0x86ADA6Ef91A3B450F89f2b751e93B1b7A3218895" # DODOPMMIntegration (canonical official DVM-backed stack)
+    "0x3f729632E9553EBacCdE2e9b4c8F2B285b014F2e" # DODOPMMProvider (canonical official DVM-backed stack)
+    "0x866Cb44b59303d8dc5f4F9E3E7A8e8b0bf238d66" # DODO pool cUSDT/USDT (canonical DVM-backed stable stack)
+    "0xc39B7D0F40838cbFb54649d327f49a6DAC964062" # DODO pool cUSDC/USDC (canonical DVM-backed stable stack)
    "0x607e97cD626f209facfE48c1464815DDE15B5093" # ReserveSystem
    "0x34B73e6EDFd9f85a7c25EeD31dcB13aB6E969b96" # ReserveTokenIntegration
    "0xEA4C892D6c1253797c5D95a05BF3863363080b4B" # RegulatedEntityRegistry (vault)
@@ -172,8 +207,8 @@ echo "Total: $OK present, $MISS missing/empty (${#ADDRESSES[@]} addresses). Expl
if [[ $MISS -gt 0 && -z "$rpc_reachable" ]]; then
  echo " → RPC was unreachable from this host; see WARN above. Run from LAN/VPN or pass a reachable RPC URL." >&2
fi
-# Expected missing (pending deploy or confirm): TransactionMirror, DODO pool; exit 0 when only these are missing
-EXPECTED_MISSING=("0xb5876547c52CaBf49d7f40233B6f6a140F403d25")
+# Expected missing: none on the current 88-address canonical Chain 138 inventory.
+EXPECTED_MISSING=()
if [[ -n "${SKIP_EXIT:-}" && "${SKIP_EXIT}" != "0" ]]; then
  exit 0
fi
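The script's tail compares each missing address against `EXPECTED_MISSING`; since checksummed and lowercase address forms both appear in configs, such comparisons need to be case-insensitive. A hedged sketch of that kind of membership test (the helper name is illustrative, not taken from the script):

```shell
# Case-insensitive membership test for an address against an allowlist, using
# bash ${var,,} lowercasing. Illustrative helper, not copied from the script.
is_expected_missing() {
  local addr="$1"; shift
  local e
  for e in "$@"; do
    if [[ "${e,,}" == "${addr,,}" ]]; then
      return 0
    fi
  done
  return 1
}

ALLOWLIST=("0xb5876547c52CaBf49d7f40233B6f6a140F403d25")
is_expected_missing "0xB5876547C52CABF49D7F40233B6F6A140F403D25" "${ALLOWLIST[@]}" && echo "expected"
```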
scripts/verify/check-cross-chain-flash-infra-chain138.sh (Executable file, 119 lines)
@@ -0,0 +1,119 @@
#!/usr/bin/env bash
set -euo pipefail

# Verify the deployed Chain 138 cross-chain flash infrastructure:
# - UniversalCCIPFlashBridgeAdapter
# - CrossChainFlashRepayReceiver
# - CrossChainFlashVaultCreditReceiver
#
# Checks:
# 1. Bytecode exists at all three addresses.
# 2. The adapter points at the expected UniversalCCIPBridge.
# 3. Both receivers point at the expected CCIP router.
# 4. Blockscout reports all three contracts as source-verified (unless skipped).
#
# Optional env overrides:
#   RPC_URL_138
#   FLASH_UNIVERSAL_CCIP_BRIDGE
#   FLASH_CCIP_ROUTER
#   CROSS_CHAIN_FLASH_BRIDGE_ADAPTER
#   CROSS_CHAIN_FLASH_REPAY_RECEIVER
#   CROSS_CHAIN_FLASH_VAULT_CREDIT_RECEIVER
#   BLOCKSCOUT_API_BASE
#   SKIP_BLOCKSCOUT=1

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"

if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
  # shellcheck source=/dev/null
  source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
fi

RPC_URL="${RPC_URL_138:-http://192.168.11.211:8545}"
EXPECTED_BRIDGE="${FLASH_UNIVERSAL_CCIP_BRIDGE:-${UNIVERSAL_CCIP_BRIDGE:-0xCd42e8eD79Dc50599535d1de48d3dAFa0BE156F8}}"
EXPECTED_ROUTER="${FLASH_CCIP_ROUTER:-${CCIP_ROUTER:-${CCIP_ROUTER_ADDRESS:-${CCIP_ROUTER_CHAIN138:-0x42DAb7b888Dd382bD5Adcf9E038dBF1fD03b4817}}}}"
ADAPTER="${CROSS_CHAIN_FLASH_BRIDGE_ADAPTER:-0xBe9e0B2d4cF6A3b2994d6f2f0904D2B165eB8ffC}"
REPAY_RECEIVER="${CROSS_CHAIN_FLASH_REPAY_RECEIVER:-0xD084b68cB4B1ef2cBA09CF99FB1B6552fd9b4859}"
VAULT_CREDIT_RECEIVER="${CROSS_CHAIN_FLASH_VAULT_CREDIT_RECEIVER:-0x89F7a1fcbBe104BeE96Da4b4b6b7d3AF85f7E661}"
BLOCKSCOUT_API_BASE="${BLOCKSCOUT_API_BASE:-https://explorer.d-bis.org/api/v2}"
SKIP_BLOCKSCOUT="${SKIP_BLOCKSCOUT:-0}"

log() {
  printf '%s\n' "$*"
}

ok() {
  printf '[ok] %s\n' "$*"
}

fail() {
  printf '[fail] %s\n' "$*" >&2
  exit 1
}

require_command() {
  command -v "$1" >/dev/null 2>&1 || fail "Missing required command: $1"
}

require_command cast

have_command() {
  command -v "$1" >/dev/null 2>&1
}

check_code() {
  local label="$1"
  local address="$2"
  local code
  code="$(cast code --rpc-url "${RPC_URL}" "${address}")" || fail "Failed to read bytecode for ${label}."
  [[ -n "${code}" && "${code}" != "0x" ]] || fail "${label} has no bytecode at ${address}."
  ok "${label} bytecode present at ${address}."
}

check_blockscout_verified() {
  local label="$1"
  local address="$2"
  local response
  local is_verified

  response="$(curl -fsS "${BLOCKSCOUT_API_BASE}/smart-contracts/${address}")" || fail "Failed to query Blockscout for ${label}."
  is_verified="$(jq -r '.is_verified // false' <<<"${response}")" || fail "Failed to parse Blockscout response for ${label}."
  [[ "${is_verified}" == "true" ]] || fail "${label} is not source-verified on Blockscout."
  ok "${label} is source-verified on Blockscout."
}

log "Chain 138 cross-chain flash infrastructure verification"
log "RPC: ${RPC_URL}"
log "Expected bridge: ${EXPECTED_BRIDGE}"
log "Expected router: ${EXPECTED_ROUTER}"
log

check_code "UniversalCCIPFlashBridgeAdapter" "${ADAPTER}"
check_code "CrossChainFlashRepayReceiver" "${REPAY_RECEIVER}"
check_code "CrossChainFlashVaultCreditReceiver" "${VAULT_CREDIT_RECEIVER}"

bridge_value="$(cast call --rpc-url "${RPC_URL}" "${ADAPTER}" "universalBridge()(address)")" || fail "Failed to read adapter bridge."
[[ "${bridge_value,,}" == "${EXPECTED_BRIDGE,,}" ]] || fail "Adapter bridge mismatch: expected ${EXPECTED_BRIDGE}, got ${bridge_value}."
ok "Adapter points at UniversalCCIPBridge ${bridge_value}."

repay_router="$(cast call --rpc-url "${RPC_URL}" "${REPAY_RECEIVER}" "ccipRouter()(address)")" || fail "Failed to read repay receiver router."
[[ "${repay_router,,}" == "${EXPECTED_ROUTER,,}" ]] || fail "Repay receiver router mismatch: expected ${EXPECTED_ROUTER}, got ${repay_router}."
ok "Repay receiver points at CCIP router ${repay_router}."

vault_router="$(cast call --rpc-url "${RPC_URL}" "${VAULT_CREDIT_RECEIVER}" "ccipRouter()(address)")" || fail "Failed to read vault credit receiver router."
[[ "${vault_router,,}" == "${EXPECTED_ROUTER,,}" ]] || fail "Vault credit receiver router mismatch: expected ${EXPECTED_ROUTER}, got ${vault_router}."
ok "Vault credit receiver points at CCIP router ${vault_router}."

if [[ "${SKIP_BLOCKSCOUT}" == "1" ]]; then
  log "Skipping Blockscout verification checks (SKIP_BLOCKSCOUT=1)."
elif have_command curl && have_command jq; then
  check_blockscout_verified "UniversalCCIPFlashBridgeAdapter" "${ADAPTER}"
  check_blockscout_verified "CrossChainFlashRepayReceiver" "${REPAY_RECEIVER}"
  check_blockscout_verified "CrossChainFlashVaultCreditReceiver" "${VAULT_CREDIT_RECEIVER}"
else
  log "Skipping Blockscout verification checks (curl and/or jq not available)."
fi

log
log "Result: Chain 138 cross-chain flash infrastructure looks healthy."
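The three mismatch checks above rely on bash 4+ case-folding (`${var,,}`) so that checksummed and lowercase addresses compare equal. A tiny standalone sketch (the helper name is illustrative, not part of the script):

```shell
# Case-insensitive address equality via bash ${parameter,,} lowercasing
# (requires bash >= 4). Illustrative helper, not part of the script.
addr_eq() {
  [[ "${1,,}" == "${2,,}" ]]
}

addr_eq "0xCd42e8eD79Dc50599535d1de48d3dAFa0BE156F8" \
        "0xcd42e8ed79dc50599535d1de48d3dafa0be156f8" && echo "match"
```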
scripts/verify/check-cw-cronos-wave1.sh (Executable file, 84 lines)
@@ -0,0 +1,84 @@
#!/usr/bin/env bash
# Verify Cronos cW rollout addresses exist on-chain and the configured bridge has mint/burn roles.

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM_ROOT="${PROJECT_ROOT}/smom-dbis-138"

if [[ -f "$SMOM_ROOT/scripts/lib/deployment/dotenv.sh" ]]; then
  # shellcheck disable=SC1090
  source "$SMOM_ROOT/scripts/lib/deployment/dotenv.sh"
  load_deployment_env --repo-root "$SMOM_ROOT"
else
  set -a
  source "$SMOM_ROOT/.env"
  set +a
fi

RPC="${CRONOS_CW_VERIFY_RPC_URL:-${CRONOS_RPC_URL:-${CRONOS_RPC:-https://cronos-evm-rpc.publicnode.com}}}"
BRIDGE="${CW_BRIDGE_CRONOS:-}"

if [[ -z "$BRIDGE" ]]; then
  echo "[FAIL] CW_BRIDGE_CRONOS is not set" >&2
  exit 1
fi

MINTER_ROLE="$(cast keccak "MINTER_ROLE")"
BURNER_ROLE="$(cast keccak "BURNER_ROLE")"

symbols=(
  CWUSDT_CRONOS
  CWUSDC_CRONOS
  CWAUSDT_CRONOS
  CWUSDW_CRONOS
  CWEURC_CRONOS
  CWEURT_CRONOS
  CWGBPC_CRONOS
  CWGBPT_CRONOS
  CWAUDC_CRONOS
  CWJPYC_CRONOS
  CWCHFC_CRONOS
  CWCADC_CRONOS
  CWXAUC_CRONOS
  CWXAUT_CRONOS
)

missing_code=0
missing_roles=0

echo "=== Cronos cW Rollout Verification ==="
echo "RPC: $RPC"
echo "Bridge: $BRIDGE"

for var in "${symbols[@]}"; do
  addr="${!var:-}"
  if [[ -z "$addr" ]]; then
    echo "[FAIL] $var is unset"
    missing_code=$((missing_code + 1))
    continue
  fi

  code="$(cast code "$addr" --rpc-url "$RPC" 2>/dev/null || true)"
  if [[ -z "$code" || "$code" == "0x" ]]; then
    echo "[FAIL] $var -> $addr has no bytecode"
    missing_code=$((missing_code + 1))
    continue
  fi

  has_minter="$(cast call "$addr" "hasRole(bytes32,address)(bool)" "$MINTER_ROLE" "$BRIDGE" --rpc-url "$RPC" 2>/dev/null || echo false)"
  has_burner="$(cast call "$addr" "hasRole(bytes32,address)(bool)" "$BURNER_ROLE" "$BRIDGE" --rpc-url "$RPC" 2>/dev/null || echo false)"
  if [[ "$has_minter" != "true" || "$has_burner" != "true" ]]; then
    echo "[FAIL] $var -> $addr roles MINTER=$has_minter BURNER=$has_burner"
    missing_roles=$((missing_roles + 1))
    continue
  fi

  echo "[OK] $var -> $addr"
done

echo "Summary: missing_code=$missing_code missing_roles=$missing_roles"
if (( missing_code > 0 || missing_roles > 0 )); then
  exit 1
fi
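The `addr="${!var:-}"` lookup in the loop above is bash indirect expansion: the symbol name is assembled as a string, then dereferenced as a variable. A minimal sketch of the mechanism (the example variable value is made up, and `lookup_symbol` is a hypothetical name):

```shell
# Indirect expansion: build a variable *name*, then read its value via ${!name},
# falling back to a placeholder when the variable is unset.
CWUSDT_CRONOS="0x1111111111111111111111111111111111111111"  # made-up value

lookup_symbol() {
  local var="$1"
  printf '%s\n' "${!var:-<unset>}"
}

lookup_symbol CWUSDT_CRONOS   # prints the made-up address
lookup_symbol CWUSDC_CRONOS   # prints <unset>
```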
scripts/verify/check-cw-evm-deployment-mesh.sh (Normal file, 211 lines)
@@ -0,0 +1,211 @@
#!/usr/bin/env bash
# Verify the public EVM cW token deployment mesh recorded in smom-dbis-138/.env.
#
# Reports, per supported EVM chain:
# - how many of the expected 12 cW token addresses are configured
# - which token addresses are still missing
# - optional on-chain bytecode presence for configured addresses when RPC + cast are available
#
# Usage:
#   bash scripts/verify/check-cw-evm-deployment-mesh.sh
#   bash scripts/verify/check-cw-evm-deployment-mesh.sh --json
#
# Exit codes:
#   0 = every tracked EVM chain has all 12 cW token addresses configured and no code gaps were found
#   1 = one or more chains are partial, missing, or have code gaps

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM_ROOT="${PROJECT_ROOT}/smom-dbis-138"

OUTPUT_JSON=0
for arg in "$@"; do
  case "$arg" in
    --json) OUTPUT_JSON=1 ;;
    *)
      echo "Unknown argument: $arg" >&2
      exit 2
      ;;
  esac
done

if [[ -f "$SMOM_ROOT/scripts/lib/deployment/dotenv.sh" ]]; then
  # shellcheck disable=SC1090
  source "$SMOM_ROOT/scripts/lib/deployment/dotenv.sh"
  load_deployment_env --repo-root "$SMOM_ROOT"
else
  set -a
  # shellcheck disable=SC1090
  source "$SMOM_ROOT/.env"
  set +a
fi

have_cast=0
if command -v cast >/dev/null 2>&1; then
  have_cast=1
fi

rpc_for_chain() {
  local chain_key="$1"
  case "$chain_key" in
    MAINNET) printf '%s\n' "${ETH_MAINNET_RPC_URL:-${ETHEREUM_MAINNET_RPC:-}}" ;;
    CRONOS) printf '%s\n' "${CRONOS_CW_VERIFY_RPC_URL:-${CRONOS_RPC_URL:-${CRONOS_RPC:-https://cronos-evm-rpc.publicnode.com}}}" ;;
    BSC) printf '%s\n' "${BSC_RPC_URL:-}" ;;
    POLYGON) printf '%s\n' "${POLYGON_MAINNET_RPC:-${POLYGON_RPC_URL:-}}" ;;
    GNOSIS) printf '%s\n' "${GNOSIS_RPC:-${GNOSIS_RPC_URL:-}}" ;;
    AVALANCHE) printf '%s\n' "${AVALANCHE_RPC_URL:-}" ;;
    BASE) printf '%s\n' "${BASE_MAINNET_RPC:-${BASE_RPC_URL:-}}" ;;
    ARBITRUM) printf '%s\n' "${ARBITRUM_MAINNET_RPC:-${ARBITRUM_RPC_URL:-}}" ;;
    OPTIMISM) printf '%s\n' "${OPTIMISM_MAINNET_RPC:-${OPTIMISM_RPC_URL:-}}" ;;
    CELO) printf '%s\n' "${CELO_RPC:-${CELO_MAINNET_RPC:-}}" ;;
    WEMIX) printf '%s\n' "${WEMIX_RPC:-${WEMIX_MAINNET_RPC:-}}" ;;
    *) printf '\n' ;;
  esac
}

TOKENS=(USDT USDC EURC EURT GBPC GBPT AUDC JPYC CHFC CADC XAUC XAUT)
CHAIN_SPECS=(
  "MAINNET:1:Ethereum Mainnet"
  "CRONOS:25:Cronos"
  "BSC:56:BSC"
  "POLYGON:137:Polygon"
  "GNOSIS:100:Gnosis"
  "AVALANCHE:43114:Avalanche"
  "BASE:8453:Base"
  "ARBITRUM:42161:Arbitrum"
  "OPTIMISM:10:Optimism"
  "CELO:42220:Celo"
  "WEMIX:1111:Wemix"
)

chains_json='[]'
full_set_chains=0
partial_chains=0
total_missing_tokens=0
total_code_gaps=0
total_checked_addresses=0
total_code_verified=0

for spec in "${CHAIN_SPECS[@]}"; do
  IFS=":" read -r chain_key chain_id chain_name <<<"$spec"
  rpc="$(rpc_for_chain "$chain_key")"

  set_count=0
  missing_count=0
  code_verified=0
  code_gaps=0
  missing_list='[]'
  configured_list='[]'

  for token in "${TOKENS[@]}"; do
    var="CW${token}_${chain_key}"
    addr="${!var:-}"
    if [[ -z "$addr" ]]; then
      missing_count=$((missing_count + 1))
      missing_list="$(jq -c --arg var "$var" '. + [$var]' <<<"$missing_list")"
      continue
    fi

    set_count=$((set_count + 1))
    total_checked_addresses=$((total_checked_addresses + 1))

    code_present=null
    if (( have_cast == 1 )) && [[ -n "$rpc" ]]; then
      code="$(cast code "$addr" --rpc-url "$rpc" 2>/dev/null || true)"
      if [[ -n "$code" && "$code" != "0x" ]]; then
        code_present=true
        code_verified=$((code_verified + 1))
        total_code_verified=$((total_code_verified + 1))
      else
        code_present=false
        code_gaps=$((code_gaps + 1))
        total_code_gaps=$((total_code_gaps + 1))
      fi
    fi

    configured_list="$(jq -c --arg var "$var" --arg addr "$addr" --argjson codePresent "$code_present" \
      '. + [{envKey: $var, address: $addr, codePresent: $codePresent}]' <<<"$configured_list")"
  done

  if (( missing_count == 0 )); then
    full_set_chains=$((full_set_chains + 1))
    status="complete"
  else
    partial_chains=$((partial_chains + 1))
    total_missing_tokens=$((total_missing_tokens + missing_count))
    status="partial"
  fi

  chains_json="$(jq -c \
    --arg chainKey "$chain_key" \
    --arg chainName "$chain_name" \
    --argjson chainId "$chain_id" \
    --arg status "$status" \
    --arg rpc "${rpc:-}" \
    --argjson expectedCount "${#TOKENS[@]}" \
    --argjson configuredCount "$set_count" \
    --argjson missingCount "$missing_count" \
    --argjson codeVerifiedCount "$code_verified" \
    --argjson codeGapCount "$code_gaps" \
    --argjson configured "$configured_list" \
    --argjson missing "$missing_list" \
    '. + [{
      chainKey: $chainKey,
      chainName: $chainName,
      chainId: $chainId,
      status: $status,
      rpcUrl: (if $rpc == "" then null else $rpc end),
      expectedCount: $expectedCount,
      configuredCount: $configuredCount,
      missingCount: $missingCount,
      codeVerifiedCount: $codeVerifiedCount,
      codeGapCount: $codeGapCount,
      configured: $configured,
      missing: $missing
    }]' <<<"$chains_json")"
done

if (( OUTPUT_JSON == 1 )); then
  jq -n \
    --argjson chains "$chains_json" \
    --argjson totalChains "${#CHAIN_SPECS[@]}" \
    --argjson expectedPerChain "${#TOKENS[@]}" \
    --argjson fullSetChains "$full_set_chains" \
    --argjson partialChains "$partial_chains" \
    --argjson totalMissingTokens "$total_missing_tokens" \
    --argjson totalCheckedAddresses "$total_checked_addresses" \
    --argjson totalCodeVerified "$total_code_verified" \
    --argjson totalCodeGaps "$total_code_gaps" \
    '{
      summary: {
        totalChains: $totalChains,
        expectedPerChain: $expectedPerChain,
        fullSetChains: $fullSetChains,
        partialChains: $partialChains,
        totalMissingTokens: $totalMissingTokens,
        totalCheckedAddresses: $totalCheckedAddresses,
        totalCodeVerified: $totalCodeVerified,
        totalCodeGaps: $totalCodeGaps
      },
      chains: $chains
    }'
  exit 0
fi

echo "=== cW EVM Deployment Mesh ==="
for spec in "${CHAIN_SPECS[@]}"; do
  IFS=":" read -r chain_key _chain_id chain_name <<<"$spec"
  line="$(jq -r --arg k "$chain_key" '
    .chains[] | select(.chainKey == $k) |
    "\(.chainName): configured=\(.configuredCount)/\(.expectedCount) missing=\(.missingCount) codeVerified=\(.codeVerifiedCount) codeGaps=\(.codeGapCount) status=\(.status) missingKeys=\((.missing | join(",")))"
  ' <(jq -n --argjson chains "$chains_json" '{chains:$chains}'))"
  echo "$line"
done

echo "Summary: full_sets=$full_set_chains partial_sets=$partial_chains missing_tokens=$total_missing_tokens checked=$total_checked_addresses code_verified=$total_code_verified code_gaps=$total_code_gaps"

if (( partial_chains > 0 || total_code_gaps > 0 )); then
  exit 1
fi
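Each `CHAIN_SPECS` entry above is a `KEY:chainId:Display Name` triple split with an `IFS` scoped to a single `read`; the last variable absorbs any remainder, so display names can safely contain spaces. A sketch:

```shell
# Split a "KEY:chainId:Display Name" spec with a scoped IFS. Only the read
# command sees IFS=":"; the last variable takes the remainder of the line.
spec="MAINNET:1:Ethereum Mainnet"
IFS=":" read -r chain_key chain_id chain_name <<<"$spec"
echo "key=$chain_key id=$chain_id name=$chain_name"
# key=MAINNET id=1 name=Ethereum Mainnet
```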
scripts/verify/check-cw-public-pool-status.sh (Normal file, 82 lines)
@@ -0,0 +1,82 @@
#!/usr/bin/env bash
|
||||
# Summarize cW* public-chain deployment status from cross-chain-pmm-lps/config/deployment-status.json.
|
||||
# Usage:
|
||||
# bash scripts/verify/check-cw-public-pool-status.sh
|
||||
# bash scripts/verify/check-cw-public-pool-status.sh --json
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
|
||||
export PROJECT_ROOT
|
||||
|
||||
OUTPUT_JSON=0
|
||||
for arg in "$@"; do
|
||||
case "$arg" in
|
||||
--json) OUTPUT_JSON=1 ;;
|
||||
*)
|
||||
echo "Unknown argument: $arg" >&2
|
||||
exit 2
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
command -v node >/dev/null 2>&1 || {
|
||||
echo "[FAIL] Missing required command: node" >&2
|
||||
exit 1
|
||||
}
|
||||
|
||||
OUTPUT_JSON="$OUTPUT_JSON" node <<'NODE'
|
||||
const fs = require('fs');
|
||||
const path = require('path');
|
||||
|
||||
const root = process.env.PROJECT_ROOT;
|
||||
const outputJson = process.env.OUTPUT_JSON === '1';
|
||||
const statusPath = path.join(root, 'cross-chain-pmm-lps/config/deployment-status.json');
|
||||
const status = JSON.parse(fs.readFileSync(statusPath, 'utf8'));
|
||||
|
||||
const rows = Object.entries(status.chains || {}).map(([chainId, chain]) => {
|
||||
const cwTokens = Object.keys(chain.cwTokens || {});
|
||||
const pmmPools = Array.isArray(chain.pmmPools) ? chain.pmmPools : [];
|
||||
const pmmVolatile = Array.isArray(chain.pmmPoolsVolatile) ? chain.pmmPoolsVolatile : [];
|
||||
const pmmTotal = pmmPools.length + pmmVolatile.length;
|
||||
return {
|
||||
chainId: Number(chainId),
|
||||
name: chain.name,
|
||||
cwTokenCount: cwTokens.length,
|
||||
pmmPoolCount: pmmPools.length,
|
||||
pmmVolatilePoolCount: pmmVolatile.length,
|
||||
pmmPoolTotal: pmmTotal,
|
||||
bridgeAvailable: chain.bridgeAvailable,
|
||||
cwTokens
|
||||
};
|
||||
}).sort((a, b) => a.chainId - b.chainId);
|
||||
|
||||
const summary = {
|
||||
chainsChecked: rows.length,
|
||||
chainsWithCwTokens: rows.filter((r) => r.cwTokenCount > 0).length,
|
||||
chainsWithAnyPmmPools: rows.filter((r) => r.pmmPoolTotal > 0).length,
|
||||
chainsWithBridgeAvailable: rows.filter((r) => r.bridgeAvailable === true).length
|
||||
};
|
||||
|
||||
if (outputJson) {
|
||||
console.log(JSON.stringify({ summary, rows }, null, 2));
|
||||
process.exit(0);
|
||||
}
|
||||
|
||||
console.log('=== cW* Public-Chain Pool Status ===');
|
||||
console.log(`Chains checked: ${summary.chainsChecked}`);
|
||||
console.log(`Chains with cW* tokens: ${summary.chainsWithCwTokens}`);
|
||||
console.log(`Chains with bridgeAvailable=true: ${summary.chainsWithBridgeAvailable}`);
|
||||
console.log(`Chains with any PMM pools: ${summary.chainsWithAnyPmmPools}`);
|
||||
|
||||
for (const row of rows) {
|
||||
const bridge = row.bridgeAvailable === true ? 'bridge' : row.bridgeAvailable === false ? 'no-bridge' : 'bridge-unknown';
|
||||
const pv = row.pmmVolatilePoolCount > 0 ? `+volatile=${row.pmmVolatilePoolCount}` : '';
|
||||
console.log(`- ${row.chainId} ${row.name}: cw=${row.cwTokenCount}, pools=${row.pmmPoolCount}${pv}, ${bridge}`);
|
||||
}
|
||||
|
||||
if (summary.chainsWithAnyPmmPools === 0) {
|
||||
console.log('Result: deployment-status.json records no live public-chain cW* PMM pools.');
|
||||
}
|
||||
NODE
|
||||
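The wrapper above forwards its `--json` flag to the inline Node program purely through the environment (`OUTPUT_JSON="$OUTPUT_JSON" node <<'NODE'`): the quoted heredoc delimiter stops the outer shell from expanding anything, so the only channel into the embedded program is `process.env`. A minimal sketch of the same pattern, using an inline `bash` heredoc so it runs without Node:

```shell
#!/usr/bin/env bash
# Pass a flag into an inline interpreter via the environment; the quoted
# heredoc delimiter ('EOF') prevents the outer shell from expanding the body.
OUTPUT_JSON=1 bash <<'EOF'
if [ "${OUTPUT_JSON:-0}" = "1" ]; then
  echo '{"ok":true}'
else
  echo ok
fi
EOF
```

Running it prints `{"ok":true}`; dropping the `OUTPUT_JSON=1` prefix prints `ok`.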
46
scripts/verify/check-dbis-core-gateway-rails.sh
Executable file
@@ -0,0 +1,46 @@
#!/usr/bin/env bash
# Internal smoke test: DBIS Core SolaceNet gateway rails (authenticated).
# Usage:
#   DBIS_CORE_API_BASE=https://dbis-api.d-bis.org \
#   DBIS_CORE_BEARER_TOKEN=... \
#   bash scripts/verify/check-dbis-core-gateway-rails.sh
# Or on the LAN: DBIS_CORE_API_BASE=http://192.168.11.x:3000 (example)
set -euo pipefail

ROOT="$(cd "$(dirname "$0")/../.." && pwd)"
# shellcheck source=/dev/null
source "$ROOT/scripts/lib/load-project-env.sh" 2>/dev/null || true

BASE="${DBIS_CORE_API_BASE:-}"
TOKEN="${DBIS_CORE_BEARER_TOKEN:-${DBIS_API_TOKEN:-}}"

if [[ -z "$BASE" ]]; then
  echo "check-dbis-core-gateway-rails: set DBIS_CORE_API_BASE (e.g. https://dbis-api.d-bis.org)" >&2
  exit 2
fi

BASE="${BASE%/}"
URL="$BASE/api/v1/gateway/rails"

hdr=(-H "Accept: application/json")
if [[ -n "$TOKEN" ]]; then
  hdr+=(-H "Authorization: Bearer $TOKEN")
fi

echo "GET $URL"
code=$(curl -sS -o /tmp/dbis-gateway-rails.json -w "%{http_code}" "${hdr[@]}" "$URL" || true)
body=$(cat /tmp/dbis-gateway-rails.json 2>/dev/null || true)

if [[ "$code" != "200" ]]; then
  echo "check-dbis-core-gateway-rails: expected HTTP 200, got $code" >&2
  echo "$body" >&2
  exit 1
fi

if ! command -v jq >/dev/null 2>&1; then
  echo "check-dbis-core-gateway-rails: OK (HTTP 200; install jq to assert JSON shape)"
  exit 0
fi

echo "$body" | jq -e '.maintainer == "SolaceNet" and (.adapters | type == "array") and (.adapters | index("dbis.adapter.ktt-evidence") != null)' >/dev/null
echo "check-dbis-core-gateway-rails: OK (maintainer + adapters include ktt-evidence)"
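The optional bearer header above is accumulated in a bash array (`hdr=(...)`, then `hdr+=(...)`) so that `curl` receives correctly quoted argument pairs whether or not a token is present. A standalone sketch of that pattern (the token value is a placeholder, not a real credential):

```shell
#!/usr/bin/env bash
# Build curl header arguments conditionally in an array; each "-H value"
# pair survives word-splitting because the array preserves element boundaries.
hdr=(-H "Accept: application/json")
TOKEN="placeholder-token"   # stand-in for DBIS_CORE_BEARER_TOKEN
if [[ -n "$TOKEN" ]]; then
  hdr+=(-H "Authorization: Bearer $TOKEN")
fi
echo "${#hdr[@]}"   # 4 elements: two -H flags plus their two values
```

With the token set the array holds four elements; expanding it as `"${hdr[@]}"` hands curl exactly those arguments, unsplit.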
297
scripts/verify/check-dodo-api-chain138-route-support.sh
Normal file
@@ -0,0 +1,297 @@
#!/usr/bin/env bash
# Safely probe DODO's hosted SmartTrade API support for Chain 138 without leaking secrets.
#
# What it checks:
#   1. Official DODO docs swagger for published chain support.
#   2. Official DODO contract inventory for published on-chain support.
#   3. Hosted DODO SmartTrade quotes when a developer API key is available.
#   4. Local/public Chain 138 token-aggregation quote behavior for the same pairs.
#
# Usage:
#   bash scripts/verify/check-dodo-api-chain138-route-support.sh
#   DODO_API_KEY=... bash scripts/verify/check-dodo-api-chain138-route-support.sh
#   BASE_URL=https://explorer.d-bis.org bash scripts/verify/check-dodo-api-chain138-route-support.sh
#
# Optional env:
#   DODO_API_KEY / DODO_SECRET_KEY / DODO_DEVELOPER_API_KEY
#   CHAIN_ID=138
#   USER_ADDR=0x...
#   BASE_URL=https://explorer.d-bis.org
#   DODO_SLIPPAGE=0.03
#   AMOUNTS_WEI="1000000000000000000 5000000000000000000 25000000000000000000"

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"

# shellcheck source=/dev/null
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh" >/dev/null 2>&1 || true

DOCS_URL="${DODO_DOCS_URL:-https://docs.dodoex.io/en/developer/developers-portal/api/smart-trade/api}"
CONTRACT_LIST_URL="${DODO_CONTRACT_LIST_URL:-https://api.dodoex.io/dodo-contract/list?version=v1,v2}"
QUOTE_URL="${DODO_QUOTE_URL:-https://api.dodoex.io/route-service/developer/swap}"
CHAIN_ID="${CHAIN_ID:-138}"
BASE_URL="${BASE_URL:-https://explorer.d-bis.org}"
BASE_URL="${BASE_URL%/}"
DODO_SLIPPAGE="${DODO_SLIPPAGE:-0.03}"
API_KEY="${DODO_API_KEY:-${DODO_SECRET_KEY:-${DODO_DEVELOPER_API_KEY:-}}}"

WETH="${CHAIN138_WETH_ADDRESS:-0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2}"
USDT="${OFFICIAL_USDT_ADDRESS:-0x004b63A7B5b0E06f6bB6adb4a5F9f590BF3182D1}"
USDC="${OFFICIAL_USDC_ADDRESS:-0x71D6687F38b93CCad569Fa6352c876eea967201b}"
CUSDT="${COMPLIANT_USDT_ADDRESS:-0x93E66202A11B1772E55407B32B44e5Cd8eda7f22}"
CUSDC="${COMPLIANT_USDC_ADDRESS:-0xf22258f57794CC8E06237084b353Ab30fFfa640b}"

AMOUNTS_WEI="${AMOUNTS_WEI:-1000000000000000000 5000000000000000000 25000000000000000000}"
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "$TMP_DIR"' EXIT

log() { printf '%s\n' "$*"; }
ok() { printf '[OK] %s\n' "$*"; }
warn() { printf '[WARN] %s\n' "$*"; }
fail() { printf '[FAIL] %s\n' "$*"; }

need_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    printf 'ERROR: missing required command: %s\n' "$1" >&2
    exit 1
  }
}

need_cmd curl
need_cmd jq
need_cmd python3

derive_user_addr() {
  if [[ -n "${USER_ADDR:-}" ]]; then
    printf '%s\n' "$USER_ADDR"
    return 0
  fi
  if command -v cast >/dev/null 2>&1 && [[ -n "${PRIVATE_KEY:-}" ]]; then
    cast wallet address --private-key "$PRIVATE_KEY" 2>/dev/null || true
    return 0
  fi
  printf '%s\n' "0x4A666F96fC8764181194447A7dFdb7d471b301C8"
}

USER_ADDR="$(derive_user_addr)"

detect_token_aggregation_prefix() {
  local prefix response status body
  for prefix in "" "/token-aggregation"; do
    response="$(curl -sSL --max-time 20 -w $'\n%{http_code}' "${BASE_URL}${prefix}/api/v1/networks" 2>/dev/null)" || continue
    status="$(printf '%s' "$response" | tail -n 1)"
    body="$(printf '%s' "$response" | sed '$d')"
    if [[ "$status" != "200" ]]; then
      response="$(curl -sSL --max-time 20 -w $'\n%{http_code}' "${BASE_URL}${prefix}/api/v1/quote?chainId=${CHAIN_ID}&tokenIn=${CUSDT}&tokenOut=${CUSDC}&amountIn=1000000" 2>/dev/null)" || continue
      status="$(printf '%s' "$response" | tail -n 1)"
      body="$(printf '%s' "$response" | sed '$d')"
    fi
    if printf '%s' "$body" | jq -e 'type == "object" and (.networks | type == "array")' >/dev/null 2>&1; then
      printf '%s\n' "$prefix"
      return 0
    fi
    if printf '%s' "$body" | jq -e 'type == "object" and (.amountOut != null or .executorAddress != null)' >/dev/null 2>&1; then
      printf '%s\n' "$prefix"
      return 0
    fi
  done
  printf '%s\n' ""
}

extract_docs_swagger() {
  python3 -c "
import re
import sys
import urllib.parse

html = sys.stdin.read()
m = re.search(r'href=\"data:text/plain;charset=utf-8,([^\"]+)\"', html)
if not m:
    sys.exit(1)
print(urllib.parse.unquote(m.group(1)))
" < <(curl -sSL --max-time 30 "$DOCS_URL")
}

docs_chain_supported() {
  local swagger_json description
  if ! swagger_json="$(extract_docs_swagger 2>/dev/null)"; then
    warn "Could not extract SmartTrade swagger from official docs."
    return 1
  fi
  description="$(printf '%s' "$swagger_json" | jq -r '.paths["/route-service/developer/swap"].get.parameters[] | select(.name == "chainId") | .description' 2>/dev/null || true)"
  if [[ -z "$description" || "$description" == "null" ]]; then
    warn "Swagger did not expose the published chain list."
    return 1
  fi
  if python3 -c "
import re
import sys

chain_id = sys.argv[1]
text = sys.stdin.read()
nums = set(re.findall(r'(?<!\d)\d+(?!\d)', text))
sys.exit(0 if chain_id in nums else 1)
" "$CHAIN_ID" <<<"$description"
  then
    ok "Official DODO SmartTrade docs publish chainId=${CHAIN_ID} support."
    return 0
  fi
  fail "Official DODO SmartTrade docs do not publish chainId=${CHAIN_ID} support."
  return 1
}

contract_list_support() {
  local body
  body="$(curl -sSL --max-time 30 "$CONTRACT_LIST_URL")" || {
    warn "Failed to fetch DODO contract inventory."
    return 1
  }
  if printf '%s' "$body" | jq -e --arg chain "$CHAIN_ID" '.data | has($chain)' >/dev/null 2>&1; then
    ok "Official DODO contract inventory includes chainId=${CHAIN_ID}."
    return 0
  fi
  fail "Official DODO contract inventory does not include chainId=${CHAIN_ID}."
  return 1
}

probe_dodo_quote() {
  local label="$1"
  local token_in="$2"
  local token_out="$3"
  local amount="$4"
  local body_file="$TMP_DIR/dodo-${label//[^a-zA-Z0-9_-]/_}-${amount}.json"
  local code status res_amount use_source msg_error

  if [[ -z "$API_KEY" ]]; then
    warn "Skipping hosted DODO quote for ${label} amount=${amount}: no DODO_API_KEY/DODO_SECRET_KEY set."
    return 2
  fi

  code="$(
    curl -sS -G -o "$body_file" -w "%{http_code}" --max-time 30 "$QUOTE_URL" \
      --data-urlencode "chainId=${CHAIN_ID}" \
      --data-urlencode "fromAmount=${amount}" \
      --data-urlencode "fromTokenAddress=${token_in}" \
      --data-urlencode "toTokenAddress=${token_out}" \
      --data-urlencode "apikey=${API_KEY}" \
      --data-urlencode "slippage=${DODO_SLIPPAGE}" \
      --data-urlencode "userAddr=${USER_ADDR}"
  )" || code="000"

  if [[ "$code" != "200" ]]; then
    if [[ -f "$body_file" ]]; then
      fail "Hosted DODO quote ${label} amount=${amount} returned HTTP ${code}: $(head -c 220 "$body_file")"
    else
      fail "Hosted DODO quote ${label} amount=${amount} failed with HTTP ${code}."
    fi
    return 1
  fi

  status="$(jq -r '.status // empty' "$body_file" 2>/dev/null || true)"
  res_amount="$(jq -r '.data.resAmount // empty' "$body_file" 2>/dev/null || true)"
  use_source="$(jq -r '.data.useSource // empty' "$body_file" 2>/dev/null || true)"
  msg_error="$(jq -r '.data.msgError // empty' "$body_file" 2>/dev/null || true)"

  if [[ "$status" == "200" && -n "$res_amount" && "$res_amount" != "0" && "$res_amount" != "0.0" && -z "$msg_error" ]]; then
    ok "Hosted DODO quote ${label} amount=${amount} succeeded via ${use_source:-unknown} with resAmount=${res_amount}."
    return 0
  fi

  fail "Hosted DODO quote ${label} amount=${amount} returned no executable route. status=${status:-n/a} source=${use_source:-n/a} msgError=${msg_error:-n/a} resAmount=${res_amount:-n/a}"
  return 1
}

probe_local_quote() {
  local label="$1"
  local token_in="$2"
  local token_out="$3"
  local amount="$4"
  local prefix="$5"
  local body_file="$TMP_DIR/local-${label//[^a-zA-Z0-9_-]/_}-${amount}.json"
  local code error executor amount_out

  code="$(
    curl -sS -o "$body_file" -w "%{http_code}" --max-time 30 \
      "${BASE_URL}${prefix}/api/v1/quote?chainId=${CHAIN_ID}&tokenIn=${token_in}&tokenOut=${token_out}&amountIn=${amount}"
  )" || code="000"

  if [[ "$code" != "200" ]]; then
    fail "Local token-aggregation quote ${label} amount=${amount} returned HTTP ${code}."
    return 1
  fi

  if jq -e 'type == "object" and has("error")' "$body_file" >/dev/null 2>&1; then
    error="$(jq -r '.error' "$body_file" 2>/dev/null || true)"
    fail "Local token-aggregation quote ${label} amount=${amount} returned error: ${error:-unknown error}"
    return 1
  fi

  executor="$(jq -r '.executorAddress // empty' "$body_file" 2>/dev/null || true)"
  amount_out="$(jq -r '.amountOut // empty' "$body_file" 2>/dev/null || true)"

  if [[ -n "$amount_out" && "$amount_out" != "0" && "$amount_out" != "0.0" ]]; then
    ok "Local token-aggregation quote ${label} amount=${amount} succeeded with amountOut=${amount_out} executor=${executor:-n/a}."
    return 0
  fi

  fail "Local token-aggregation quote ${label} amount=${amount} returned no amountOut."
  return 1
}

run_pair_suite() {
  local name="$1"
  local token_in="$2"
  local token_out="$3"
  local prefix="$4"
  local amount

  log ""
  log "Pair: ${name}"
  for amount in $AMOUNTS_WEI; do
    probe_dodo_quote "$name" "$token_in" "$token_out" "$amount" || true
    probe_local_quote "$name" "$token_in" "$token_out" "$amount" "$prefix" || true
  done
}

TA_PREFIX="$(detect_token_aggregation_prefix)"

log "=== DODO SmartTrade / Chain 138 support probe ==="
log "Docs URL: $DOCS_URL"
log "Contract list: $CONTRACT_LIST_URL"
log "Quote endpoint: $QUOTE_URL"
log "Chain ID: $CHAIN_ID"
log "Base URL: $BASE_URL"
log "User address: $USER_ADDR"
if [[ -n "$API_KEY" ]]; then
  ok "DODO API key present in environment (value intentionally not printed)."
else
  warn "No DODO API key found in environment; hosted quote probes will be skipped."
fi
if [[ -n "$TA_PREFIX" ]]; then
  ok "Detected token-aggregation prefix: ${TA_PREFIX}/api/v1"
else
  warn "Could not auto-detect token-aggregation prefix; defaulting to root /api/v1."
fi

log ""
log "=== Published support checks ==="
docs_chain_supported || true
contract_list_support || true

log ""
log "=== Live quote probes ==="
run_pair_suite "WETH->USDT" "$WETH" "$USDT" "$TA_PREFIX"
run_pair_suite "WETH->USDC" "$WETH" "$USDC" "$TA_PREFIX"

log ""
log "=== Positive-control local pairs ==="
probe_local_quote "cUSDT->USDT" "$CUSDT" "$USDT" "1000000" "$TA_PREFIX" || true
probe_local_quote "cUSDT->cUSDC" "$CUSDT" "$CUSDC" "1000000" "$TA_PREFIX" || true

log ""
log "Notes:"
log " - Hosted DODO SmartTrade probes are quote-only and never print the API key."
log " - If official docs and contract inventory both omit chainId=${CHAIN_ID}, the hosted API should be treated as unsupported until DODO adds the chain."
log " - Local token-aggregation probes show current Chain 138 route availability regardless of hosted DODO support."
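`detect_token_aggregation_prefix` above appends the HTTP status to the response body with curl's `-w $'\n%{http_code}'` and then splits the two apart with `tail`/`sed`. A sketch of just the split step, with a canned response standing in for the live curl call:

```shell
#!/usr/bin/env bash
# Split "body\nstatus" as produced by: curl -w $'\n%{http_code}' ...
response=$'{"networks":[]}\n200'   # canned stand-in for a real curl response
status="$(printf '%s' "$response" | tail -n 1)"   # last line = HTTP status
body="$(printf '%s' "$response" | sed '$d')"      # everything before it = body
echo "$status $body"
```

This prints `200 {"networks":[]}`. Packing status and body into one capture avoids a temp file per probe while still letting the caller branch on the status code.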
339
scripts/verify/check-dodo-v3-chain138.sh
Executable file
@@ -0,0 +1,339 @@
|
||||
#!/usr/bin/env bash
|
||||
set -euo pipefail
|
||||
|
||||
# Verify the private Chain 138 DODO v3 / D3MM rollout.
|
||||
#
|
||||
# Checks:
|
||||
# 1. Canonical D3MM pool returns the expected version string.
|
||||
# 2. D3Vault still recognizes the canonical pool.
|
||||
# 3. D3Oracle has a non-zero, whitelisted WETH10 source.
|
||||
# 4. A quote probe for WETH10 -> USDT returns a healthy amount.
|
||||
# 5. Core DODO v3 pilot contracts remain source-verified on Blockscout.
|
||||
#
|
||||
# Optional env overrides:
|
||||
# RPC_URL_138
|
||||
# CHAIN138_D3_ORACLE_ADDRESS
|
||||
# CHAIN138_D3_VAULT_ADDRESS
|
||||
# CHAIN138_D3_MM_FACTORY_ADDRESS
|
||||
# CHAIN138_D3_PROXY_ADDRESS
|
||||
# CHAIN138_D3_DODO_APPROVE_ADDRESS
|
||||
# CHAIN138_D3_DODO_APPROVE_PROXY_ADDRESS
|
||||
# CHAIN138_D3_MM_ADDRESS
|
||||
# CHAIN138_D3_WETH10_ADDRESS
|
||||
# CHAIN138_D3_WETH_USD_FEED
|
||||
# CHAIN138_D3_BOOTSTRAP_WETH_USD_MOCK
|
||||
# CHAIN138_D3_USDT_ADDRESS
|
||||
# CHAIN138_D3_USDC_ADDRESS
|
||||
# CHAIN138_D3_CUSDT_ADDRESS
|
||||
# CHAIN138_D3_CUSDC_ADDRESS
|
||||
# CHAIN138_D3_USDT_USD_FEED
|
||||
# CHAIN138_D3_USDC_USD_FEED
|
||||
# CHAIN138_D3_CUSDT_USD_FEED
|
||||
# CHAIN138_D3_CUSDC_USD_FEED
|
||||
# CHAIN138_D3_ALLOW_BOOTSTRAP_STABLE_MOCKS=1 to downgrade old stable-feed addresses to warnings
|
||||
# CHAIN138_D3_PROBE_SELL_AMOUNT
|
||||
# CHAIN138_D3_MIN_QUOTE_OUT
|
||||
# CHAIN138_BLOCKSCOUT_API_BASE
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
|
||||
|
||||
if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
|
||||
# shellcheck source=/dev/null
|
||||
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
|
||||
fi
|
||||
|
||||
RPC_URL="${RPC_URL_138:-${CHAIN138_RPC_URL:-${CHAIN138_RPC:-http://192.168.11.211:8545}}}"
|
||||
D3_ORACLE="${CHAIN138_D3_ORACLE_ADDRESS:-0xD7459aEa8bB53C83a1e90262777D730539A326F0}"
|
||||
D3_VAULT="${CHAIN138_D3_VAULT_ADDRESS:-0x42b6867260Fb9eE6d09B7E0233A1fAD65D0133D1}"
|
||||
D3_FACTORY="${CHAIN138_D3_MM_FACTORY_ADDRESS:-0x78470C7d2925B6738544E2DD4FE7c07CcA21AC31}"
|
||||
D3_PROXY="${CHAIN138_D3_PROXY_ADDRESS:-0xc9a11abB7C63d88546Be24D58a6d95e3762cB843}"
|
||||
DODO_APPROVE="${CHAIN138_D3_DODO_APPROVE_ADDRESS:-0xbF8D5CB7E8F333CA686a27374Ae06F5dfd772E9E}"
|
||||
DODO_APPROVE_PROXY="${CHAIN138_D3_DODO_APPROVE_PROXY_ADDRESS:-0x08d764c03C42635d8ef9046752b5694243E21Fe9}"
|
||||
D3MM="${CHAIN138_D3_MM_ADDRESS:-0x6550A3a59070061a262a893A1D6F3F490afFDBDA}"
|
||||
WETH10="${CHAIN138_D3_WETH10_ADDRESS:-0xf4BB2e28688e89fCcE3c0580D37d36A7672E8A9F}"
|
||||
WETH_USD_FEED="${CHAIN138_D3_WETH_USD_FEED:-0x99b3511a2d315a497c8112c1fdd8d508d4b1e506}"
|
||||
BOOTSTRAP_WETH_MOCK="${CHAIN138_D3_BOOTSTRAP_WETH_USD_MOCK:-0xf1bbBc7FCBB96C6a143C752d4DAEFA34e192EE9c}"
|
||||
USDT="${CHAIN138_D3_USDT_ADDRESS:-0x004b63A7B5b0E06f6bB6adb4a5F9f590BF3182D1}"
|
||||
USDC="${CHAIN138_D3_USDC_ADDRESS:-0x71D6687F38b93CCad569Fa6352c876eea967201b}"
|
||||
cUSDT="${CHAIN138_D3_CUSDT_ADDRESS:-0x93E66202A11B1772E55407B32B44e5Cd8eda7f22}"
|
||||
cUSDC="${CHAIN138_D3_CUSDC_ADDRESS:-0xf22258f57794CC8E06237084b353Ab30fFfa640b}"
|
||||
USDT_USD_FEED="${CHAIN138_D3_USDT_USD_FEED:-0x7c2Cb2667f0f97f4004aae04B67d94A085E6f0f1}"
|
||||
USDC_USD_FEED="${CHAIN138_D3_USDC_USD_FEED:-0xf072Ac13D45e6c83296ca18F3E04185B747DD6aa}"
|
||||
cUSDT_USD_FEED="${CHAIN138_D3_CUSDT_USD_FEED:-0x7c96E66F4a0713e327F9e73Cf2721f13DB29036C}"
|
||||
cUSDC_USD_FEED="${CHAIN138_D3_CUSDC_USD_FEED:-0x291694095232CA80077125F64f6f73076e7910C1}"
|
||||
BOOTSTRAP_USDT_MOCK="${CHAIN138_D3_BOOTSTRAP_USDT_USD_MOCK:-0x8c5eD794399C68468985238Fa127958E06e6e87F}"
|
||||
BOOTSTRAP_USDC_MOCK="${CHAIN138_D3_BOOTSTRAP_USDC_USD_MOCK:-0x102BAe3eBf3C8B93445F4B30c90728fd933CeC84}"
|
||||
BOOTSTRAP_cUSDT_MOCK="${CHAIN138_D3_BOOTSTRAP_CUSDT_USD_MOCK:-0xd493Bb05e9ddf954E8285c6358ED03a2672A006d}"
|
||||
BOOTSTRAP_cUSDC_MOCK="${CHAIN138_D3_BOOTSTRAP_CUSDC_USD_MOCK:-0xcA85aFc0E24A8ebd69E138928567DcD363758E7A}"
|
||||
ALLOW_BOOTSTRAP_STABLE_MOCKS="${CHAIN138_D3_ALLOW_BOOTSTRAP_STABLE_MOCKS:-0}"
|
||||
PROBE_AMOUNT="${CHAIN138_D3_PROBE_SELL_AMOUNT:-100000000000000000}"
|
||||
MIN_QUOTE_OUT="${CHAIN138_D3_MIN_QUOTE_OUT:-100000000}"
|
||||
BLOCKSCOUT_INTERNAL_API_BASE="${CHAIN138_BLOCKSCOUT_INTERNAL_API_BASE:-http://${IP_BLOCKSCOUT:-192.168.11.140}:4000/api/v2}"
|
||||
BLOCKSCOUT_PUBLIC_API_BASE="${CHAIN138_BLOCKSCOUT_PUBLIC_API_BASE:-https://explorer.d-bis.org/api/v2}"
|
||||
|
||||
log() {
|
||||
printf '%s\n' "$*"
|
||||
}
|
||||
|
||||
ok() {
|
||||
printf '[ok] %s\n' "$*"
|
||||
}
|
||||
|
||||
warn() {
|
||||
printf '[warn] %s\n' "$*" >&2
|
||||
}
|
||||
|
||||
fail() {
|
||||
printf '[fail] %s\n' "$*" >&2
|
||||
exit 1
|
||||
}
|
||||
|
||||
format_units() {
|
||||
local value="$1"
|
||||
local decimals="$2"
|
||||
|
||||
if [[ "${value}" =~ ^- ]]; then
|
||||
local positive="${value#-}"
|
||||
printf -- "-%s" "$(format_units "${positive}" "${decimals}")"
|
||||
return 0
|
||||
fi
|
||||
|
||||
if [[ "${decimals}" -eq 0 ]]; then
|
||||
printf '%s' "${value}"
|
||||
return 0
|
||||
fi
|
||||
|
||||
local len="${#value}"
|
||||
if (( len <= decimals )); then
|
||||
local pad_count=$((decimals - len))
|
||||
local zeros=""
|
||||
if (( pad_count > 0 )); then
|
||||
zeros=$(printf '%0*d' "${pad_count}" 0)
|
||||
fi
|
||||
local fraction="${zeros}${value}"
|
||||
fraction="${fraction%"${fraction##*[!0]}"}"
|
||||
if [[ -z "${fraction}" ]]; then
|
||||
printf '0'
|
||||
else
|
||||
printf '0.%s' "${fraction}"
|
||||
fi
|
||||
return 0
|
||||
fi
|
||||
|
||||
local whole="${value:0:len-decimals}"
|
||||
local fraction="${value:len-decimals}"
|
||||
fraction="${fraction%"${fraction##*[!0]}"}"
|
||||
if [[ -z "${fraction}" ]]; then
|
||||
printf '%s' "${whole}"
|
||||
else
|
||||
printf '%s.%s' "${whole}" "${fraction}"
|
||||
fi
|
||||
}
|
||||
|
||||
require_command() {
|
||||
command -v "$1" >/dev/null 2>&1 || fail "Missing required command: $1"
|
||||
}
|
||||
|
||||
require_command cast
|
||||
|
||||
if command -v curl >/dev/null 2>&1 && command -v jq >/dev/null 2>&1; then
|
||||
HAS_BLOCKSCOUT_TOOLS=1
|
||||
else
|
||||
HAS_BLOCKSCOUT_TOOLS=0
|
||||
fi
|
||||
|
||||
check_stable_feed() {
|
||||
local label="$1"
|
||||
local token_address="$2"
|
||||
local expected_feed="$3"
|
||||
local bootstrap_feed="$4"
|
||||
local expected_desc="$5"
|
||||
|
||||
mapfile -t source < <(
|
||||
cast call --rpc-url "${RPC_URL}" "${D3_ORACLE}" \
|
||||
"priceSources(address)(address,bool,uint256,uint8,uint8,uint256)" "${token_address}"
|
||||
) || fail "Failed to read ${label} price source from D3Oracle."
|
||||
|
||||
(( ${#source[@]} >= 6 )) || fail "Unexpected ${label} priceSources response length: ${#source[@]}"
|
||||
|
||||
local feed_address="${source[0]}"
|
||||
local whitelisted="${source[1]}"
|
||||
|
||||
[[ "${feed_address}" != "0x0000000000000000000000000000000000000000" ]] || fail "${label} feed address is zero."
|
||||
[[ "${whitelisted}" == "true" ]] || fail "${label} feed is not whitelisted."
|
||||
|
||||
if [[ "${feed_address,,}" == "${bootstrap_feed,,}" ]]; then
|
||||
if [[ "${ALLOW_BOOTSTRAP_STABLE_MOCKS}" == "1" ]]; then
|
||||
warn "${label} still points at bootstrap mock feed ${bootstrap_feed}."
|
||||
else
|
||||
fail "${label} still points at bootstrap mock feed ${bootstrap_feed}."
|
||||
fi
|
||||
fi
|
||||
|
||||
[[ "${feed_address,,}" == "${expected_feed,,}" ]] || fail "${label} expected managed feed ${expected_feed}, got ${feed_address}."
|
||||
|
||||
local description
|
||||
description="$(cast call --rpc-url "${RPC_URL}" "${feed_address}" "description()(string)")" || fail "Failed to read ${label} feed description."
|
||||
[[ "${description}" == "\"${expected_desc}\"" ]] || fail "${label} feed description mismatch: ${description}"
|
||||
|
||||
mapfile -t round_data < <(
|
||||
cast call --rpc-url "${RPC_URL}" "${feed_address}" \
|
||||
"latestRoundData()(uint80,int256,uint256,uint256,uint80)"
|
||||
) || fail "Failed to read ${label} latestRoundData."
|
||||
|
||||
(( ${#round_data[@]} >= 5 )) || fail "Unexpected ${label} latestRoundData length: ${#round_data[@]}"
|
||||
local answer="${round_data[1]%% *}"
|
||||
[[ "${answer}" == "100000000" ]] || fail "${label} expected 1.00000000 USD answer, got ${answer}."
|
||||
|
||||
ok "${label} uses managed feed ${feed_address} (${expected_desc}) with answer ${answer}."
|
||||
}
|
||||
|
||||
check_blockscout_verified() {
|
||||
local label="$1"
|
||||
local addr="$2"
|
||||
local expected_name="$3"
|
||||
local json name compiler base
|
||||
|
||||
json=""
|
||||
for base in "${BLOCKSCOUT_INTERNAL_API_BASE}" "${BLOCKSCOUT_PUBLIC_API_BASE}"; do
|
||||
for _ in 1 2 3; do
|
||||
json="$(curl --max-time 15 -fsS "${base}/smart-contracts/${addr}" 2>/dev/null || true)"
|
||||
if [[ -n "${json}" ]] && jq -e 'type == "object"' >/dev/null 2>&1 <<<"${json}"; then
|
||||
break 2
|
||||
fi
|
||||
json=""
|
||||
sleep 2
|
||||
done
|
||||
done
|
||||
|
||||
[[ -n "${json}" ]] || fail "Failed to read Blockscout metadata for ${label} (${addr})."
|
||||
name="$(jq -r '.name // empty' <<<"${json}")"
|
||||
compiler="$(jq -r '.compiler_version // empty' <<<"${json}")"
|
||||
|
||||
[[ -n "${name}" ]] || fail "${label} is not source-verified on Blockscout."
|
||||
[[ -n "${compiler}" ]] || fail "${label} is missing compiler metadata on Blockscout."
|
||||
[[ "${name}" == "${expected_name}" ]] || fail "${label} expected Blockscout name ${expected_name}, got ${name}."
|
||||
ok "${label} is Blockscout-verified as ${name} (${compiler})."
|
||||
}
|
||||
|
||||
check_blockscout_verified_or_warn_bytecode_only() {
|
||||
local label="$1"
|
||||
local addr="$2"
|
||||
local expected_name="$3"
|
||||
local json name compiler base
|
||||
|
||||
json=""
|
||||
for base in "${BLOCKSCOUT_INTERNAL_API_BASE}" "${BLOCKSCOUT_PUBLIC_API_BASE}"; do
|
||||
for _ in 1 2 3; do
|
||||
json="$(curl --max-time 15 -fsS "${base}/smart-contracts/${addr}" 2>/dev/null || true)"
|
||||
if [[ -n "${json}" ]] && jq -e 'type == "object"' >/dev/null 2>&1 <<<"${json}"; then
|
||||
break 2
|
||||
fi
|
||||
json=""
|
||||
sleep 2
|
||||
done
|
||||
done
|
||||
|
||||
[[ -n "${json}" ]] || fail "Failed to read Blockscout metadata for ${label} (${addr})."
|
||||
name="$(jq -r '.name // empty' <<<"${json}")"
|
||||
compiler="$(jq -r '.compiler_version // empty' <<<"${json}")"
|
||||
|
||||
if [[ -n "${name}" && -n "${compiler}" && "${name}" == "${expected_name}" ]]; then
|
||||
ok "${label} is Blockscout-verified as ${name} (${compiler})."
|
||||
return 0
|
||||
fi
|
||||
|
||||
if jq -e '.creation_bytecode and .deployed_bytecode' >/dev/null 2>&1 <<<"${json}"; then
|
||||
warn "${label} currently exposes only bytecode metadata on Blockscout; source-verification submission exists but explorer metadata has not fully materialized."
|
||||
return 0
|
||||
fi
|
||||
|
||||
fail "${label} is not source-verified on Blockscout."
|
||||
}
|
||||
|
||||
log "Chain 138 DODO v3 verification"
|
||||
log "RPC: ${RPC_URL}"
|
||||
log "D3Oracle: ${D3_ORACLE}"
|
||||
log "D3Vault: ${D3_VAULT}"
|
||||
log "D3MMFactory: ${D3_FACTORY}"
|
||||
log "D3Proxy: ${D3_PROXY}"
|
||||
log "Canonical D3MM: ${D3MM}"
|
||||
log
|
||||
|
||||
version="$(cast call --rpc-url "${RPC_URL}" "${D3MM}" "version()(string)")" || fail "Failed to read D3MM version."
|
||||
[[ "${version}" == "\"D3MM 1.0.0\"" ]] || fail "Unexpected D3MM version: ${version}"
|
||||
ok "Canonical pool reports version ${version}."
|
||||
|
||||
vault_recognizes_pool="$(cast call --rpc-url "${RPC_URL}" "${D3_VAULT}" "allPoolAddrMap(address)(bool)" "${D3MM}")" || fail "Failed to read vault pool map."
|
||||
[[ "${vault_recognizes_pool}" == "true" ]] || fail "D3Vault does not recognize canonical D3MM ${D3MM}."
|
||||
ok "D3Vault recognizes the canonical D3MM."
|
||||
|
||||
mapfile -t oracle_source < <(
|
||||
cast call --rpc-url "${RPC_URL}" "${D3_ORACLE}" \
|
||||
"priceSources(address)(address,bool,uint256,uint8,uint8,uint256)" "${WETH10}"
|
||||
) || fail "Failed to read WETH10 price source from D3Oracle."
|
||||
|
||||
(( ${#oracle_source[@]} >= 6 )) || fail "Unexpected D3Oracle priceSources response length: ${#oracle_source[@]}"
|
||||
|
||||
oracle_address="${oracle_source[0]}"
|
||||
oracle_whitelisted="${oracle_source[1]}"
|
||||
price_tolerance="${oracle_source[2]%% *}"
|
||||
price_decimals="${oracle_source[3]%% *}"
|
||||
token_decimals="${oracle_source[4]%% *}"
|
||||
heartbeat="${oracle_source[5]%% *}"
|
||||
|
||||
[[ "${oracle_address}" != "0x0000000000000000000000000000000000000000" ]] || fail "WETH10 oracle address is zero."
|
||||
[[ "${oracle_whitelisted}" == "true" ]] || fail "WETH10 oracle is not whitelisted."
|
||||
[[ "${oracle_address,,}" != "${BOOTSTRAP_WETH_MOCK,,}" ]] || fail "WETH10 still points at bootstrap mock feed ${BOOTSTRAP_WETH_MOCK}."
|
||||
[[ "${oracle_address,,}" == "${WETH_USD_FEED,,}" ]] || fail "WETH10 expected oracle feed ${WETH_USD_FEED}, got ${oracle_address}."
|
||||
(( heartbeat >= 100000 )) || fail "WETH10 heartbeat ${heartbeat} is unexpectedly low."
|
||||
weth_desc="$(cast call --rpc-url "${RPC_URL}" "${oracle_address}" "description()(string)")" || fail "Failed to read WETH10 feed description."
|
||||
[[ "${weth_desc}" == "\"ETH/USD Price Feed\"" ]] || fail "Unexpected WETH10 feed description: ${weth_desc}"
|
||||
ok "WETH10 oracle is configured at ${oracle_address} (${weth_desc}, tolerance=${price_tolerance}, priceDecimals=${price_decimals}, tokenDecimals=${token_decimals}, heartbeat=${heartbeat})."
|
||||
|
||||
mapfile -t quote_probe < <(
|
||||
cast call --rpc-url "${RPC_URL}" "${D3MM}" \
|
||||
    "querySellTokens(address,address,uint256)(uint256,uint256,uint256,uint256,uint256)" \
    "${WETH10}" "${USDT}" "${PROBE_AMOUNT}"
) || fail "Failed to query WETH10 -> USDT on canonical D3MM."

(( ${#quote_probe[@]} >= 5 )) || fail "Unexpected D3MM quote response length: ${#quote_probe[@]}"

pay_from="${quote_probe[0]%% *}"
receive_to="${quote_probe[1]%% *}"
vusd_amount="${quote_probe[2]%% *}"
swap_fee="${quote_probe[3]%% *}"
mt_fee="${quote_probe[4]%% *}"

[[ "${pay_from}" == "${PROBE_AMOUNT}" ]] || fail "D3MM returned unexpected payFromAmount ${pay_from} for probe amount ${PROBE_AMOUNT}."
[[ "${receive_to}" != "0" ]] || fail "Canonical D3MM returned zero receive amount for WETH10 -> USDT."

if [[ "${receive_to}" =~ ^[0-9]+$ ]] && [[ "${MIN_QUOTE_OUT}" =~ ^[0-9]+$ ]]; then
  if (( receive_to < MIN_QUOTE_OUT )); then
    fail "Canonical D3MM quote ${receive_to} is below expected floor ${MIN_QUOTE_OUT}."
  fi
else
  warn "Skipping numeric threshold comparison because quote or floor was not numeric."
fi

ok "Quote probe WETH10 -> USDT for $(format_units "${PROBE_AMOUNT}" 18) WETH10 returned $(format_units "${receive_to}" 6) USDT (vusd=${vusd_amount}, swapFee=$(format_units "${swap_fee}" 6), mtFee=$(format_units "${mt_fee}" 6))."

check_stable_feed "USDT" "${USDT}" "${USDT_USD_FEED}" "${BOOTSTRAP_USDT_MOCK}" "USDT / USD"
check_stable_feed "USDC" "${USDC}" "${USDC_USD_FEED}" "${BOOTSTRAP_USDC_MOCK}" "USDC / USD"
check_stable_feed "cUSDT" "${cUSDT}" "${cUSDT_USD_FEED}" "${BOOTSTRAP_cUSDT_MOCK}" "cUSDT / USD"
check_stable_feed "cUSDC" "${cUSDC}" "${cUSDC_USD_FEED}" "${BOOTSTRAP_cUSDC_MOCK}" "cUSDC / USD"

if [[ "${HAS_BLOCKSCOUT_TOOLS}" == "1" ]]; then
  check_blockscout_verified "D3Oracle" "${D3_ORACLE}" "D3Oracle"
  check_blockscout_verified "D3Vault" "${D3_VAULT}" "D3Vault"
  check_blockscout_verified_or_warn_bytecode_only "D3MMFactory" "${D3_FACTORY}" "D3MMFactory"
  check_blockscout_verified_or_warn_bytecode_only "D3Proxy" "${D3_PROXY}" "D3Proxy"
  check_blockscout_verified "DODOApprove" "${DODO_APPROVE}" "DODOApprove"
  check_blockscout_verified "DODOApproveProxy" "${DODO_APPROVE_PROXY}" "DODOApproveProxy"
else
  warn "Skipping Blockscout source-verification checks because curl or jq is missing."
fi

log
log "Result: Chain 138 DODO v3 canonical WETH10 pool and managed stable feeds look healthy."
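The hunk above calls a `format_units` helper that is defined earlier in the script and not shown in this diff. A minimal standalone sketch of what such a fixed-point formatter can look like (this is an assumption, not the script's actual implementation) uses pure string arithmetic so it stays correct for token amounts beyond 64-bit integer range:

```shell
#!/usr/bin/env bash
# Sketch of a format_units-style helper (assumption: the real helper defined
# earlier in the script may differ). Renders an integer token amount with the
# given number of decimals via string slicing, avoiding 64-bit overflow.
format_units_sketch() {
  local raw="$1" decimals="$2" int frac
  # Left-pad with zeros so the string is at least decimals+1 characters long.
  while (( ${#raw} <= decimals )); do raw="0${raw}"; done
  int="${raw:0:${#raw}-decimals}"
  frac="${raw:${#raw}-decimals}"
  if (( decimals > 0 )); then
    printf '%s.%s\n' "$int" "$frac"
  else
    printf '%s\n' "$int"
  fi
}

weth_amount="$(format_units_sketch 100000000000000000 18)"  # 18-decimal probe amount
usdt_amount="$(format_units_sketch 2500000 6)"              # 6-decimal stable amount
printf 'weth=%s usdt=%s\n' "$weth_amount" "$usdt_amount"
```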
27
scripts/verify/check-dodo-v3-planner-visibility-chain138.sh
Normal file
@@ -0,0 +1,27 @@
#!/usr/bin/env bash
set -euo pipefail

# Verify that the Chain 138 DODO v3 / D3MM pilot is promoted into planner-v2
# visibility with live router-v2 execution for the canonical pilot pair.

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
SERVICE_DIR="${PROJECT_ROOT}/smom-dbis-138/services/token-aggregation"

if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
  # shellcheck source=/dev/null
  source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
fi

command -v pnpm >/dev/null 2>&1 || {
  echo "[fail] pnpm is required" >&2
  exit 1
}

# Prefer the canonical healthy public RPC for this verifier so a flaky alias
# does not make planner visibility look broken.
export RPC_URL_138_PUBLIC="${DODO_V3_PLANNER_RPC_URL:-https://rpc-http-pub.d-bis.org}"
export RPC_HTTP_PUB_URL="${RPC_URL_138_PUBLIC}"

cd "${SERVICE_DIR}"
pnpm exec ts-node --transpile-only scripts/verify-dodo-v3-planner-visibility.ts
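The RPC export above relies on bash `${VAR:-default}` expansion. A minimal sketch of that behavior (`PLANNER_RPC` here is an illustrative stand-in for `DODO_V3_PLANNER_RPC_URL`); note that `:-` falls back for both unset and empty values, so an empty override cannot blank the RPC:

```shell
#!/usr/bin/env bash
# Sketch of the ${VAR:-default} expansion used for the RPC override above.
# PLANNER_RPC stands in for DODO_V3_PLANNER_RPC_URL (illustrative name).
pick_rpc() {
  printf '%s\n' "${PLANNER_RPC:-https://rpc-http-pub.d-bis.org}"
}

unset PLANNER_RPC
from_unset="$(pick_rpc)"        # falls back to the canonical default

PLANNER_RPC=""
from_empty="$(pick_rpc)"        # ":-" also falls back for an empty string

PLANNER_RPC="https://rpc.example.internal"
from_override="$(pick_rpc)"     # an explicit override wins

printf 'unset=%s empty=%s override=%s\n' "$from_unset" "$from_empty" "$from_override"
```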
311
scripts/verify/check-explorer-e2e.sh
Normal file
@@ -0,0 +1,311 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"

if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
  # shellcheck source=/dev/null
  source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
fi

BASE_URL="${1:-https://blockscout.defi-oracle.io}"
BASE_URL="${BASE_URL%/}"
EDGE_SSH_HOST="${EXPLORER_EDGE_SSH_HOST:-root@192.168.11.12}"
EDGE_VMID="${EXPLORER_EDGE_VMID:-5000}"
BLOCKSCOUT_IP="${IP_BLOCKSCOUT:-192.168.11.140}"
REQUIRE_PUBLIC_EDGE="${REQUIRE_PUBLIC_EDGE:-0}"

failures=0
notes=()
public_edge_ready=1

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[fail] missing required command: $1" >&2
    exit 1
  }
}

require_cmd curl
require_cmd jq
require_cmd ssh

ok() {
  printf '[ok] %s\n' "$*"
}

info() {
  printf '[info] %s\n' "$*"
}

warn() {
  printf '[warn] %s\n' "$*"
}

fail() {
  printf '[fail] %s\n' "$*"
  failures=$((failures + 1))
}

count_header_lines() {
  local headers="$1"
  local name="$2"
  printf '%s\n' "$headers" | grep -iEc "^${name}:" || true
}

first_header_value() {
  local headers="$1"
  local name="$2"
  {
    printf '%s\n' "$headers" | grep -iE "^${name}:" || true
  } \
    | head -n1 \
    | cut -d: -f2- \
    | tr -d '\r' \
    | sed 's/^[[:space:]]*//'
}
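The header helpers above can be exercised standalone. A minimal sketch of the same `grep | head | cut | tr | sed` pipeline run against a synthetic `curl -I`-style response (the header values here are made up for illustration):

```shell
#!/usr/bin/env bash
# Standalone sketch of the first_header_value pipeline above, run against a
# synthetic HTTP response blob (values are illustrative).
extract_header() {
  local headers="$1" name="$2"
  {
    printf '%s\n' "$headers" | grep -iE "^${name}:" || true
  } | head -n1 | cut -d: -f2- | tr -d '\r' | sed 's/^[[:space:]]*//'
}

headers=$'HTTP/1.1 200 OK\r\nX-Frame-Options: SAMEORIGIN\r\nCache-Control: no-store, no-cache, must-revalidate\r'
xfo="$(extract_header "$headers" 'X-Frame-Options')"
cache="$(extract_header "$headers" 'Cache-Control')"
printf 'xfo=%s\ncache=%s\n' "$xfo" "$cache"
```

The `cut -d: -f2-` keeps any colons inside the value intact, and `tr -d '\r'` strips the CRLF terminator curl leaves on each header line.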
assert_single_header_equals() {
  local label="$1"
  local headers="$2"
  local name="$3"
  local expected="$4"
  local count
  local value

  count="$(count_header_lines "$headers" "$name")"
  if [[ "$count" -ne 1 ]]; then
    fail "$label has ${count} ${name} header(s); expected exactly 1"
    return
  fi

  value="$(first_header_value "$headers" "$name")"
  if [[ "$value" == "$expected" ]]; then
    ok "$label ${name} matches expected policy"
  else
    fail "$label ${name} mismatch. Expected: $expected. Actual: $value"
  fi
}

assert_single_header_contains() {
  local label="$1"
  local headers="$2"
  local name="$3"
  local expected_fragment="$4"
  local count
  local value

  count="$(count_header_lines "$headers" "$name")"
  if [[ "$count" -ne 1 ]]; then
    fail "$label has ${count} ${name} header(s); expected exactly 1"
    return
  fi

  value="$(first_header_value "$headers" "$name")"
  if [[ "$value" == *"$expected_fragment"* ]]; then
    ok "$label ${name} contains expected policy fragment"
  else
    fail "$label ${name} is missing expected fragment: $expected_fragment"
  fi
}
check_explorer_header_policy() {
  local label="$1"
  local headers="$2"
  local csp_value

  assert_single_header_equals "$label" "$headers" "Cache-Control" "no-store, no-cache, must-revalidate"
  assert_single_header_contains "$label" "$headers" "Content-Security-Policy" "script-src 'self' 'unsafe-inline' 'unsafe-eval'"
  csp_value="$(first_header_value "$headers" "Content-Security-Policy")"
  if [[ "$csp_value" == *"connect-src 'self'"* ]] && [[ "$csp_value" == *"https://blockscout.defi-oracle.io"* || "$csp_value" == *"https://explorer.d-bis.org"* ]]; then
    ok "$label Content-Security-Policy contains an allowed explorer origin"
  else
    fail "$label Content-Security-Policy is missing an allowed explorer origin"
  fi
  assert_single_header_equals "$label" "$headers" "X-Frame-Options" "SAMEORIGIN"
  assert_single_header_equals "$label" "$headers" "X-Content-Type-Options" "nosniff"
  assert_single_header_contains "$label" "$headers" "Strict-Transport-Security" "max-age="
  assert_single_header_equals "$label" "$headers" "Referrer-Policy" "strict-origin-when-cross-origin"
  assert_single_header_equals "$label" "$headers" "X-XSS-Protection" "0"
}

http_headers_have_status() {
  local headers="$1"
  local status="$2"
  printf '%s\n' "$headers" | grep -Eq "^HTTP/[0-9.]+ ${status}([[:space:]]|$)"
}
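The status-line regex above accepts both `HTTP/1.1 200 OK` and the reason-phrase-free `HTTP/2 200` form. A minimal standalone sketch against synthetic status lines (the bodies are made up):

```shell
#!/usr/bin/env bash
# Standalone sketch of the http_headers_have_status match above, run against
# synthetic HTTP/1.1 and HTTP/2 status lines (illustrative values).
has_status() {
  printf '%s\n' "$1" | grep -Eq "^HTTP/[0-9.]+ $2([[:space:]]|$)"
}

h1=$'HTTP/1.1 200 OK\r\nContent-Type: text/html'
h2=$'HTTP/2 200'
h3=$'HTTP/1.1 301 Moved Permanently'

r1=0; has_status "$h1" 200 && r1=1
r2=0; has_status "$h2" 200 && r2=1
r3=0; has_status "$h3" 200 && r3=1
printf 'r1=%s r2=%s r3=%s\n' "$r1" "$r2" "$r3"
```

The trailing `([[:space:]]|$)` alternation is what keeps `200` from matching a hypothetical `2001` status while still allowing the code to end the line.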
json_check() {
  local label="$1"
  local body="$2"
  local expr="$3"
  local desc="$4"

  if printf '%s' "$body" | jq -e "$expr" >/dev/null 2>&1; then
    ok "$label ($desc)"
  else
    local sample
    sample="$(printf '%s' "$body" | head -c 240 | tr '\n' ' ')"
    fail "$label returned unexpected payload. Expected $desc. Sample: $sample"
  fi
}

run_edge_get() {
  local url="$1"
  ssh -o BatchMode=yes -o ConnectTimeout=10 "$EDGE_SSH_HOST" \
    "pct exec ${EDGE_VMID} -- bash -lc 'curl -ksS -L --connect-timeout 15 --max-time 30 \"$url\"'"
}

run_edge_head() {
  local url="$1"
  ssh -o BatchMode=yes -o ConnectTimeout=10 "$EDGE_SSH_HOST" \
    "pct exec ${EDGE_VMID} -- bash -lc 'curl -ksSI --connect-timeout 15 --max-time 30 \"$url\"'"
}

run_edge_internal_get() {
  local path="$1"
  ssh -o BatchMode=yes -o ConnectTimeout=10 "$EDGE_SSH_HOST" \
    "pct exec ${EDGE_VMID} -- bash -lc 'curl -sS -L --connect-timeout 15 --max-time 30 \"http://127.0.0.1${path}\"'"
}

run_edge_internal_head() {
  local path="$1"
  ssh -o BatchMode=yes -o ConnectTimeout=10 "$EDGE_SSH_HOST" \
    "pct exec ${EDGE_VMID} -- bash -lc 'curl -sSI --connect-timeout 15 --max-time 30 \"http://127.0.0.1${path}\"'"
}

run_edge_internal_post() {
  local path="$1"
  local payload="$2"
  ssh -o BatchMode=yes -o ConnectTimeout=10 "$EDGE_SSH_HOST" \
    "pct exec ${EDGE_VMID} -- bash -lc 'curl -sS -L --connect-timeout 15 --max-time 30 -H \"content-type: application/json\" -X POST --data '\''$payload'\'' \"http://127.0.0.1${path}\"'"
}

run_edge_public_post() {
  local path="$1"
  local payload="$2"
  ssh -o BatchMode=yes -o ConnectTimeout=10 "$EDGE_SSH_HOST" \
    "pct exec ${EDGE_VMID} -- bash -lc 'curl -ksS -L --connect-timeout 15 --max-time 30 -H \"content-type: application/json\" -X POST --data '\''$payload'\'' \"${BASE_URL}${path}\"'"
}
echo "== Explorer end-to-end verification =="
echo "Base URL: ${BASE_URL}"
echo "Edge host: ${EDGE_SSH_HOST} (VMID ${EDGE_VMID})"
echo "Expected explorer VM IP: ${BLOCKSCOUT_IP}"
echo

info "Local workstation reachability"
if curl -ksS -I --connect-timeout 10 --max-time 15 "${BASE_URL}/" >/dev/null 2>&1; then
  ok "Current workstation can reach ${BASE_URL}"
else
  warn "Current workstation cannot reach ${BASE_URL} directly; continuing from edge host"
  notes+=("Current workstation has no direct route to ${BASE_URL}; public checks were validated from VMID ${EDGE_VMID}.")
fi
echo

info "Edge host public path"
if edge_home_headers="$(run_edge_head "${BASE_URL}/" 2>/dev/null)"; then
  if http_headers_have_status "$edge_home_headers" 200; then
    ok "Public explorer homepage reachable from edge host"
    check_explorer_header_policy "Public explorer homepage" "$edge_home_headers"
  else
    if [[ "$REQUIRE_PUBLIC_EDGE" == "1" ]]; then
      fail "Public explorer homepage returned unexpected status from edge host"
    else
      warn "Public explorer homepage returned unexpected status from edge host; continuing with internal explorer checks"
      public_edge_ready=0
      notes+=("Public self-curl from the LAN edge host did not return 200. This environment appears to have unreliable hairpin access to the public edge.")
    fi
  fi
else
  if [[ "$REQUIRE_PUBLIC_EDGE" == "1" ]]; then
    fail "Public explorer homepage not reachable from edge host"
  else
    warn "Public explorer homepage not reachable from edge host; continuing with internal explorer checks"
    public_edge_ready=0
    notes+=("Public self-curl from the LAN edge host failed. This environment appears to have unreliable hairpin access to the public edge.")
  fi
fi

if [[ "$REQUIRE_PUBLIC_EDGE" == "1" || "$public_edge_ready" == "1" ]]; then
  if edge_blocks_headers="$(run_edge_head "${BASE_URL}/blocks" 2>/dev/null)"; then
    if http_headers_have_status "$edge_blocks_headers" 200; then
      ok "Public explorer blocks page reachable from edge host"
      check_explorer_header_policy "Public explorer blocks page" "$edge_blocks_headers"
    else
      fail "Public explorer blocks page returned unexpected status from edge host"
    fi
  else
    fail "Public explorer blocks page not reachable from edge host"
  fi
fi

planner_payload='{"sourceChainId":138,"tokenIn":"0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2","tokenOut":"0x004b63A7B5b0E06f6bB6adb4a5F9f590BF3182D1","amountIn":"100000000000000000"}'
if [[ "$REQUIRE_PUBLIC_EDGE" == "1" || "$public_edge_ready" == "1" ]]; then
  edge_stats="$(run_edge_get "${BASE_URL}/api/v2/stats")"
  json_check "Public Blockscout stats" "$edge_stats" 'type == "object" and (.total_blocks | tostring | length > 0) and (.total_transactions | tostring | length > 0)' 'stats with total_blocks and total_transactions'

  edge_networks="$(run_edge_get "${BASE_URL}/api/v1/networks")"
  json_check "Public token-aggregation networks" "$edge_networks" 'type == "object" and (.networks | type == "array") and (.source | type == "string")' 'object with .networks[] and .source'

  edge_token_list="$(run_edge_get "${BASE_URL}/api/v1/report/token-list?chainId=138")"
  json_check "Public report token-list" "$edge_token_list" 'type == "object" and (.tokens | type == "array") and (.tokens | length > 0)' 'object with non-empty .tokens[]'

  edge_capabilities="$(run_edge_get "${BASE_URL}/token-aggregation/api/v2/providers/capabilities?chainId=138")"
  json_check "Public planner-v2 capabilities" "$edge_capabilities" 'type == "object" and (.providers | type == "array") and (.providers | length > 0)' 'object with non-empty .providers[]'
  if printf '%s' "$edge_capabilities" | jq -e 'any(.providers[]; .provider == "uniswap_v3" and .live == true)' >/dev/null 2>&1; then
    ok "Public planner-v2 exposes live uniswap_v3"
  else
    fail "Public planner-v2 does not expose live uniswap_v3"
  fi

  edge_plan="$(run_edge_public_post "/token-aggregation/api/v2/routes/internal-execution-plan" "$planner_payload")"
  json_check "Public internal execution plan" "$edge_plan" 'type == "object" and (.plannerResponse.decision == "direct-pool") and (.execution.contractAddress | type == "string")' 'direct-pool planner response with execution contract'
  echo
else
  warn "Skipping public self-curl API probes after edge-host reachability failure"
  echo
fi

info "Explorer VM internal path"
internal_home="$(run_edge_internal_get "/")"
if [[ "$internal_home" == *"SolaceScan"* || "$internal_home" == *"Chain 138 Explorer by DBIS"* || "$internal_home" == *"<!DOCTYPE html"* ]]; then
  ok "Internal nginx root serves explorer frontend"
else
  fail "Internal nginx root does not look like explorer frontend HTML"
fi

internal_stats="$(run_edge_internal_get "/api/v2/stats")"
json_check "Internal Blockscout stats" "$internal_stats" 'type == "object" and (.total_blocks | tostring | length > 0)' 'stats with total_blocks'

internal_networks="$(run_edge_internal_get "/api/v1/networks")"
json_check "Internal token-aggregation networks" "$internal_networks" 'type == "object" and (.networks | type == "array") and (.source | type == "string")' 'object with .networks[] and .source'

internal_token_list="$(run_edge_internal_get "/api/v1/report/token-list?chainId=138")"
json_check "Internal report token-list" "$internal_token_list" 'type == "object" and (.tokens | type == "array") and (.tokens | length > 0)' 'object with non-empty .tokens[]'

internal_cors_headers="$(run_edge_internal_head "/api/v1/networks")"
if printf '%s' "$internal_cors_headers" | grep -qi 'Access-Control-Allow-Origin'; then
  ok "Internal token-aggregation responses include CORS headers"
else
  fail "Internal token-aggregation responses are missing CORS headers"
fi

internal_plan="$(run_edge_internal_post "/token-aggregation/api/v2/routes/internal-execution-plan" "$planner_payload")"
json_check "Internal planner execution plan" "$internal_plan" 'type == "object" and (.plannerResponse.decision == "direct-pool") and (.plannerResponse.legs[0].provider == "uniswap_v3")' 'direct-pool planner response via uniswap_v3'
echo

info "Summary"
if (( failures > 0 )); then
  printf '[fail] explorer E2E verification found %d issue(s)\n' "$failures"
  if (( ${#notes[@]} > 0 )); then
    printf '[note] %s\n' "${notes[@]}"
  fi
  exit 1
fi

ok "Explorer public edge, internal nginx, Blockscout API, token-aggregation v1, and planner-v2 all verified"
if (( ${#notes[@]} > 0 )); then
  printf '[note] %s\n' "${notes[@]}"
fi
446
scripts/verify/check-full-deployment-status.sh
Executable file
@@ -0,0 +1,446 @@
#!/usr/bin/env bash
# Summarize the current "full deployment" posture across active Chain 138, GRU,
# public cW* pool, publication surfaces, and (when mainnet env is loaded) Mainnet
# DODO PMM peg/bot readiness via check-mainnet-pmm-peg-bot-readiness.sh.
#
# Usage:
#   bash scripts/verify/check-full-deployment-status.sh
#   bash scripts/verify/check-full-deployment-status.sh --skip-public-api
#   bash scripts/verify/check-full-deployment-status.sh --json
#
# Exit codes:
#   0 = current full-deployment gate passes
#   1 = one or more deployment blockers remain
# Set SKIP_EXIT=1 to always exit 0 after printing the summary.

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"

# shellcheck disable=SC1091
source "${PROJECT_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1 || true

OUTPUT_JSON=0
SKIP_PUBLIC_API=0
SKIP_EXIT="${SKIP_EXIT:-0}"
RPC_URL="${RPC_URL_138_PUBLIC:-https://rpc-http-pub.d-bis.org}"
CAPTURE_TIMEOUT_SECONDS="${CAPTURE_TIMEOUT_SECONDS:-60}"

for arg in "$@"; do
  case "$arg" in
    --json) OUTPUT_JSON=1 ;;
    --skip-public-api) SKIP_PUBLIC_API=1 ;;
    --rpc-url=*) RPC_URL="${arg#--rpc-url=}" ;;
    *)
      echo "Unknown argument: $arg" >&2
      exit 2
      ;;
  esac
done

need_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[FAIL] Missing required command: $1" >&2
    exit 1
  }
}

need_cmd bash
need_cmd jq
need_cmd mktemp

TMPDIR="$(mktemp -d)"
cleanup() {
  rm -rf "$TMPDIR"
}
trap cleanup EXIT
run_capture() {
  local outfile="$1"
  shift
  if command -v timeout >/dev/null 2>&1 && [[ "${CAPTURE_TIMEOUT_SECONDS}" =~ ^[0-9]+$ ]] && (( CAPTURE_TIMEOUT_SECONDS > 0 )); then
    if timeout "${CAPTURE_TIMEOUT_SECONDS}" "$@" >"$outfile" 2>&1; then
      return 0
    fi
    return 1
  fi
  if "$@" >"$outfile" 2>&1; then
    return 0
  fi
  return 1
}

write_fallback_json() {
  local outfile="$1"
  local fallback_json="$2"
  printf '%s\n' "$fallback_json" >"$outfile"
}
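The `run_capture` helper above can be exercised standalone; a minimal sketch of the same pattern, capturing stdout and stderr to a file, bounding the runtime with `timeout` when available, and reporting success via exit status (the captured commands here are illustrative):

```shell
#!/usr/bin/env bash
# Standalone sketch of the run_capture pattern above. The captured commands
# are illustrative; the real helper also validates the timeout value.
CAPTURE_TIMEOUT_SECONDS=5

capture() {
  local outfile="$1"
  shift
  if command -v timeout >/dev/null 2>&1; then
    timeout "$CAPTURE_TIMEOUT_SECONDS" "$@" >"$outfile" 2>&1 && return 0
    return 1
  fi
  "$@" >"$outfile" 2>&1
}

tmp="$(mktemp -d)"
trap 'rm -rf "$tmp"' EXIT

ok_status=0; capture "$tmp/ok.out" echo "hello" || ok_status=1
bad_status=0; capture "$tmp/bad.out" ls /nonexistent-path-for-demo || bad_status=1

printf 'ok_status=%s bad_status=%s captured=%s\n' \
  "$ok_status" "$bad_status" "$(cat "$tmp/ok.out")"
```

Capturing stderr into the same file is what lets the parent script later grep a child verifier's summary lines without caring which stream they were printed on.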
CONFIG_OK=0
CONTRACTS_OK=0
PUBLIC_API_OK=0
GRU_V2_OK=0
GRU_ROLLOUT_OK=1
CW_STATUS_OK=1
GAS_STATUS_OK=1
CW_MESH_OK=1
GRU_PUBLIC_PROTOCOLS_OK=1
GRU_DEPLOYMENT_QUEUE_OK=1
GRU_FUNDING_OK=1

run_capture "$TMPDIR/config.out" bash scripts/validation/validate-config-files.sh && CONFIG_OK=1 || true
run_capture "$TMPDIR/contracts.out" bash scripts/verify/check-contracts-on-chain-138.sh "$RPC_URL" && CONTRACTS_OK=1 || true
run_capture "$TMPDIR/gru-v2.out" bash scripts/verify/check-gru-v2-chain138-readiness.sh --report-only && GRU_V2_OK=1 || true
if ! run_capture "$TMPDIR/gru-rollout.json" bash scripts/verify/check-gru-global-priority-rollout.sh --json; then
  GRU_ROLLOUT_OK=0
  write_fallback_json "$TMPDIR/gru-rollout.json" '{"summary":{"totalAssets":0,"liveTransport":0,"canonicalOnly":0,"backlog":0},"destinationSummary":{"nonEvmRelayPrograms":0},"desiredDestinationNetworks":{"nonEvmRelayPrograms":[]}}'
fi
if ! run_capture "$TMPDIR/cw-status.json" bash scripts/verify/check-cw-public-pool-status.sh --json; then
  CW_STATUS_OK=0
  write_fallback_json "$TMPDIR/cw-status.json" '{"summary":{"chainsChecked":0,"chainsWithCwTokens":0,"chainsWithBridgeAvailable":0,"chainsWithAnyPmmPools":0}}'
fi
if ! run_capture "$TMPDIR/gas-status.json" bash scripts/verify/check-gas-public-pool-status.sh --json; then
  GAS_STATUS_OK=0
  write_fallback_json "$TMPDIR/gas-status.json" '{"summary":{"gasFamiliesTracked":0,"transportPairs":0,"runtimeReadyPairs":0,"blockedPairs":0,"pairsWithWrappedNativePools":0,"pairsWithStableQuotePools":0,"pairsWithUniswapReference":0,"pairsWithRoutingVisibleOneInch":0,"supplyInvariantFailures":0},"rows":[]}'
fi
if ! run_capture "$TMPDIR/gas-matrix.json" bash scripts/verify/check-gas-rollout-deployment-matrix.sh --json; then
  cat >"$TMPDIR/gas-matrix.json" <<'EOF'
{"summary":{"verifierRefsLoaded":0,"pairsWithL1DestinationConfigured":0,"pairsWithL1ReceiverMatchingRuntimeL2":0,"l1BridgesWithPartialDestinationIntrospection":0,"deployedGenericVerifierAddress":"","topMissingRequirements":[]},"rows":[]}
EOF
fi
if ! run_capture "$TMPDIR/cw-mesh.json" bash scripts/verify/check-cw-evm-deployment-mesh.sh --json; then
  CW_MESH_OK=0
  write_fallback_json "$TMPDIR/cw-mesh.json" '{"summary":{"totalChains":0,"fullSetChains":0,"partialChains":0,"totalMissingTokens":0,"totalCodeGaps":0},"chains":[]}'
fi
if ! run_capture "$TMPDIR/gru-public-protocols.json" bash scripts/verify/check-gru-v2-public-protocols.sh --json; then
  GRU_PUBLIC_PROTOCOLS_OK=0
  write_fallback_json "$TMPDIR/gru-public-protocols.json" '{"summary":{"publicProtocolsTracked":0,"publicProtocolsWithActiveCwPools":0}}'
fi
if ! run_capture "$TMPDIR/gru-deployment-queue.json" bash scripts/verify/check-gru-v2-deployment-queue.sh --json; then
  GRU_DEPLOYMENT_QUEUE_OK=0
  write_fallback_json "$TMPDIR/gru-deployment-queue.json" '{"summary":{"wave1Assets":0,"wave1TransportPending":0,"wave1WrappedSymbolsCoveredByPoolMatrix":0,"wave1WrappedSymbols":0,"firstTierWave1PoolsPlanned":0,"firstTierWave1PoolsRecordedLive":0,"chainsMissingCwSuites":0}}'
fi
if ! run_capture "$TMPDIR/gru-funding.json" bash scripts/verify/check-gru-v2-deployer-funding-status.sh --json; then
  GRU_FUNDING_OK=0
  write_fallback_json "$TMPDIR/gru-funding.json" '{"blockers":[],"warnings":[]}'
fi

MAINNET_PMM_PEG_STATUS="skipped"
if [[ -n "${ETHEREUM_MAINNET_RPC:-}" && -n "${DODO_PMM_INTEGRATION_MAINNET:-}" ]]; then
  if run_capture "$TMPDIR/mainnet-pmm-peg.out" bash scripts/verify/check-mainnet-pmm-peg-bot-readiness.sh; then
    MAINNET_PMM_PEG_STATUS="OK"
  else
    MAINNET_PMM_PEG_STATUS="FAIL"
  fi
fi

if (( SKIP_PUBLIC_API == 1 )); then
  printf 'SKIPPED\n' >"$TMPDIR/public-api.status"
else
  if run_capture "$TMPDIR/public-api.out" bash scripts/verify/check-token-aggregation-chain138-api.sh; then
    PUBLIC_API_OK=1
    printf 'OK\n' >"$TMPDIR/public-api.status"
  else
    printf 'FAIL\n' >"$TMPDIR/public-api.status"
  fi
fi
CONTRACT_TOTAL_LINE="$(grep '^Total:' "$TMPDIR/contracts.out" | tail -1 || true)"
CONTRACT_PRESENT="$(printf '%s\n' "$CONTRACT_TOTAL_LINE" | sed -n 's/^Total: \([0-9][0-9]*\) present, \([0-9][0-9]*\) missing\/empty (\([0-9][0-9]*\) addresses).*/\1/p')"
CONTRACT_MISSING="$(printf '%s\n' "$CONTRACT_TOTAL_LINE" | sed -n 's/^Total: \([0-9][0-9]*\) present, \([0-9][0-9]*\) missing\/empty (\([0-9][0-9]*\) addresses).*/\2/p')"
CONTRACT_ADDRESSES="$(printf '%s\n' "$CONTRACT_TOTAL_LINE" | sed -n 's/^Total: \([0-9][0-9]*\) present, \([0-9][0-9]*\) missing\/empty (\([0-9][0-9]*\) addresses).*/\3/p')"

GRU_V2_WARNINGS="$(sed -n 's/^Warnings: \([0-9][0-9]*\).*/\1/p' "$TMPDIR/gru-v2.out" | tail -1)"
GRU_V2_BLOCKERS="$(sed -n 's/^Blockers: \([0-9][0-9]*\).*/\1/p' "$TMPDIR/gru-v2.out" | tail -1)"
GRU_V2_WARNINGS="${GRU_V2_WARNINGS:-0}"
GRU_V2_BLOCKERS="${GRU_V2_BLOCKERS:-0}"
GRU_V2_ORPHAN_WARNING="$(grep -F 'active GRU entry with no bytecode' "$TMPDIR/gru-v2.out" | tail -1 | sed 's/^[-[:space:]]*//' || true)"
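The three sed extractions above each reuse the same capture pattern against the contract checker's `Total:` summary line. A minimal standalone sketch against a synthetic line (the counts here are made up; the real line comes from check-contracts-on-chain-138.sh):

```shell
#!/usr/bin/env bash
# Standalone sketch of the sed capture-group extraction above, run against a
# synthetic "Total:" summary line (counts are illustrative).
total_line='Total: 29 present, 2 missing/empty (31 addresses). See report for details.'
pattern='s/^Total: \([0-9][0-9]*\) present, \([0-9][0-9]*\) missing\/empty (\([0-9][0-9]*\) addresses).*/'

present="$(printf '%s\n' "$total_line" | sed -n "${pattern}\\1/p")"
missing="$(printf '%s\n' "$total_line" | sed -n "${pattern}\\2/p")"
addresses="$(printf '%s\n' "$total_line" | sed -n "${pattern}\\3/p")"

printf 'present=%s missing=%s addresses=%s\n' "$present" "$missing" "$addresses"
```

With `sed -n` and the `/p` flag, a non-matching line prints nothing, which is why the parent script then treats an empty `CONTRACT_MISSING` as a blocker rather than as zero.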
ROLLOUT_TOTAL="$(jq -r '.summary.totalAssets' "$TMPDIR/gru-rollout.json")"
ROLLOUT_LIVE="$(jq -r '.summary.liveTransport' "$TMPDIR/gru-rollout.json")"
ROLLOUT_CANONICAL_ONLY="$(jq -r '.summary.canonicalOnly' "$TMPDIR/gru-rollout.json")"
ROLLOUT_BACKLOG="$(jq -r '.summary.backlog' "$TMPDIR/gru-rollout.json")"
ROLLOUT_NON_EVM="$(jq -r '.destinationSummary.nonEvmRelayPrograms' "$TMPDIR/gru-rollout.json")"
NON_EVM_TARGETS="$(jq -r '.desiredDestinationNetworks.nonEvmRelayPrograms[]?.identifier' "$TMPDIR/gru-rollout.json" | paste -sd ', ' -)"

CW_CHAINS="$(jq -r '.summary.chainsChecked' "$TMPDIR/cw-status.json")"
CW_TOKENS="$(jq -r '.summary.chainsWithCwTokens' "$TMPDIR/cw-status.json")"
CW_BRIDGES="$(jq -r '.summary.chainsWithBridgeAvailable' "$TMPDIR/cw-status.json")"
CW_POOLS="$(jq -r '.summary.chainsWithAnyPmmPools' "$TMPDIR/cw-status.json")"
GAS_FAMILIES="$(jq -r '.summary.gasFamiliesTracked' "$TMPDIR/gas-status.json")"
GAS_TRANSPORT_PAIRS="$(jq -r '.summary.transportPairs' "$TMPDIR/gas-status.json")"
GAS_RUNTIME_READY="$(jq -r '.summary.runtimeReadyPairs' "$TMPDIR/gas-status.json")"
GAS_BLOCKED_PAIRS="$(jq -r '.summary.blockedPairs' "$TMPDIR/gas-status.json")"
GAS_WRAPPED_NATIVE_POOLS="$(jq -r '.summary.pairsWithWrappedNativePools' "$TMPDIR/gas-status.json")"
GAS_STABLE_POOLS="$(jq -r '.summary.pairsWithStableQuotePools' "$TMPDIR/gas-status.json")"
GAS_UNISWAP_REFS="$(jq -r '.summary.pairsWithUniswapReference' "$TMPDIR/gas-status.json")"
GAS_ONEINCH_VISIBLE="$(jq -r '.summary.pairsWithRoutingVisibleOneInch' "$TMPDIR/gas-status.json")"
GAS_SUPPLY_INVARIANT_FAILURES="$(jq -r '.summary.supplyInvariantFailures' "$TMPDIR/gas-status.json")"
GAS_MISSING_TOP_REQUIREMENTS="$(jq -r '
  [
    .rows[]
    | .missing[]?
  ]
  | group_by(.)
  | map({ requirement: .[0], count: length })
  | sort_by(-.count, .requirement)
  | .[:5]
  | map("\(.requirement)=\(.count)")
  | join(", ")
' "$TMPDIR/gas-status.json")"
GAS_MISSING_TOP_REQUIREMENTS="${GAS_MISSING_TOP_REQUIREMENTS:-none}"
GAS_MATRIX_VERIFIER_REFS_LOADED="$(jq -r '.summary.verifierRefsLoaded' "$TMPDIR/gas-matrix.json")"
GAS_MATRIX_L1_DESTINATIONS="$(jq -r '.summary.pairsWithL1DestinationConfigured' "$TMPDIR/gas-matrix.json")"
GAS_MATRIX_L1_RECEIVER_MATCHES="$(jq -r '.summary.pairsWithL1ReceiverMatchingRuntimeL2' "$TMPDIR/gas-matrix.json")"
GAS_MATRIX_L1_PARTIAL_BRIDGES="$(jq -r '.summary.l1BridgesWithPartialDestinationIntrospection' "$TMPDIR/gas-matrix.json")"
GAS_MATRIX_GENERIC_VERIFIER="$(jq -r '.summary.deployedGenericVerifierAddress // empty' "$TMPDIR/gas-matrix.json")"
GAS_MATRIX_STRICT_ENV_KEY="$(jq -r '
  [.rows[] | select(.reserveVerifierEnvKey == "CW_GAS_STRICT_ESCROW_VERIFIER_CHAIN138") | .reserveVerifierEnvKey][0] // ""
' "$TMPDIR/gas-matrix.json")"
GAS_MATRIX_HYBRID_ENV_KEY="$(jq -r '
  [.rows[] | select(.reserveVerifierEnvKey == "CW_GAS_HYBRID_CAP_VERIFIER_CHAIN138") | .reserveVerifierEnvKey][0] // ""
' "$TMPDIR/gas-matrix.json")"
CW_MESH_TOTAL_CHAINS="$(jq -r '.summary.totalChains' "$TMPDIR/cw-mesh.json")"
CW_MESH_FULL_SETS="$(jq -r '.summary.fullSetChains' "$TMPDIR/cw-mesh.json")"
CW_MESH_PARTIAL_CHAINS="$(jq -r '.summary.partialChains' "$TMPDIR/cw-mesh.json")"
CW_MESH_MISSING_TOKENS="$(jq -r '.summary.totalMissingTokens' "$TMPDIR/cw-mesh.json")"
CW_MESH_CODE_GAPS="$(jq -r '.summary.totalCodeGaps' "$TMPDIR/cw-mesh.json")"
CW_MESH_INCOMPLETE_NAMES="$(jq -r '.chains[] | select(.missingCount > 0) | .chainKey' "$TMPDIR/cw-mesh.json" | paste -sd ', ' -)"
CW_MESH_CODE_GAP_NAMES="$(jq -r '.chains[] | select(.codeGapCount > 0) | .chainKey' "$TMPDIR/cw-mesh.json" | paste -sd ', ' -)"
PUBLIC_PROTOCOLS_TRACKED="$(jq -r '.summary.publicProtocolsTracked' "$TMPDIR/gru-public-protocols.json")"
PUBLIC_PROTOCOLS_ACTIVE="$(jq -r '.summary.publicProtocolsWithActiveCwPools' "$TMPDIR/gru-public-protocols.json")"
QUEUE_WAVE1_ASSETS="$(jq -r '.summary.wave1Assets' "$TMPDIR/gru-deployment-queue.json")"
QUEUE_WAVE1_TRANSPORT_PENDING="$(jq -r '.summary.wave1TransportPending' "$TMPDIR/gru-deployment-queue.json")"
QUEUE_WAVE1_SYMBOLS_COVERED="$(jq -r '.summary.wave1WrappedSymbolsCoveredByPoolMatrix' "$TMPDIR/gru-deployment-queue.json")"
QUEUE_WAVE1_SYMBOLS_TOTAL="$(jq -r '.summary.wave1WrappedSymbols' "$TMPDIR/gru-deployment-queue.json")"
QUEUE_FIRST_TIER_PLANNED="$(jq -r '.summary.firstTierWave1PoolsPlanned' "$TMPDIR/gru-deployment-queue.json")"
QUEUE_FIRST_TIER_LIVE="$(jq -r '.summary.firstTierWave1PoolsRecordedLive' "$TMPDIR/gru-deployment-queue.json")"
QUEUE_CHAINS_MISSING_CW="$(jq -r '.summary.chainsMissingCwSuites' "$TMPDIR/gru-deployment-queue.json")"
FUNDING_BLOCKER_COUNT="$(jq -r '.blockers | length' "$TMPDIR/gru-funding.json")"
FUNDING_WARNING_COUNT="$(jq -r '.warnings | length' "$TMPDIR/gru-funding.json")"

FAILURES=0
BLOCKERS=()

if [[ "$CONFIG_OK" != "1" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Config validation is not green.")
fi

if [[ -z "$CONTRACT_MISSING" || "$CONTRACT_MISSING" != "0" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Chain 138 canonical contract inventory is not fully present on-chain.")
fi

if (( SKIP_PUBLIC_API == 0 )) && [[ "$PUBLIC_API_OK" != "1" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Public token-aggregation / planner API is not fully healthy.")
fi

if [[ -z "$GRU_V2_BLOCKERS" || "$GRU_V2_BLOCKERS" != "0" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("GRU v2 readiness still has live blockers.")
fi

if [[ "$GRU_ROLLOUT_OK" != "1" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("GRU global rollout summary could not be captured cleanly; rerun check-gru-global-priority-rollout.sh.")
fi

if [[ "$ROLLOUT_CANONICAL_ONLY" != "0" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("GRU Wave 1 is not transport-complete; $ROLLOUT_CANONICAL_ONLY canonical assets still need cW deployment / enablement.")
fi

if [[ "$ROLLOUT_BACKLOG" != "0" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Global-priority GRU rollout still has $ROLLOUT_BACKLOG backlog assets outside the live manifest.")
fi

if [[ "$CW_STATUS_OK" != "1" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Public cW pool summary could not be captured cleanly; rerun check-cw-public-pool-status.sh.")
fi

if [[ "$CW_MESH_OK" != "1" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Public EVM cW token mesh summary could not be captured cleanly; rerun check-cw-evm-deployment-mesh.sh.")
fi

if [[ "$CW_MESH_PARTIAL_CHAINS" != "0" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Public EVM cW token mesh is incomplete; $CW_MESH_PARTIAL_CHAINS chain(s) still miss $CW_MESH_MISSING_TOKENS token deployments (${CW_MESH_INCOMPLETE_NAMES:-none}).")
fi

if [[ "$CW_MESH_CODE_GAPS" != "0" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Public EVM cW token mesh has $CW_MESH_CODE_GAPS configured address(es) with no bytecode (${CW_MESH_CODE_GAP_NAMES:-none}).")
fi

if [[ "$CW_POOLS" == "0" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Public-chain cW* PMM pool rollout is still empty across all tracked chains.")
fi

if [[ "$GAS_STATUS_OK" != "1" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Gas-native rollout summary could not be captured cleanly; rerun check-gas-public-pool-status.sh.")
fi

if [[ "$GAS_BLOCKED_PAIRS" != "0" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Gas-native rollout still has $GAS_BLOCKED_PAIRS blocked GRU transport pair(s); top missing requirements: ${GAS_MISSING_TOP_REQUIREMENTS}.")
fi

if [[ "$GAS_SUPPLY_INVARIANT_FAILURES" != "0" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Gas-native rollout has $GAS_SUPPLY_INVARIANT_FAILURES supply-invariant failure(s); fix escrow, treasury-backed, or cap accounting before enabling routing.")
fi

if [[ "$GAS_MATRIX_VERIFIER_REFS_LOADED" != "$GAS_TRANSPORT_PAIRS" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Gas-native verifier wiring is incomplete; loaded verifier refs=${GAS_MATRIX_VERIFIER_REFS_LOADED}/${GAS_TRANSPORT_PAIRS}. Expected envs: ${GAS_MATRIX_STRICT_ENV_KEY:-CW_GAS_STRICT_ESCROW_VERIFIER_CHAIN138} and ${GAS_MATRIX_HYBRID_ENV_KEY:-CW_GAS_HYBRID_CAP_VERIFIER_CHAIN138}${GAS_MATRIX_GENERIC_VERIFIER:+ (deployed verifier: $GAS_MATRIX_GENERIC_VERIFIER)}.")
fi

if [[ "$GAS_WRAPPED_NATIVE_POOLS" != "$GAS_TRANSPORT_PAIRS" || "$GAS_STABLE_POOLS" != "$GAS_TRANSPORT_PAIRS" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Gas-native rollout is missing required DODO pool coverage; wrapped-native live=${GAS_WRAPPED_NATIVE_POOLS}/${GAS_TRANSPORT_PAIRS}, stable-quote live=${GAS_STABLE_POOLS}/${GAS_TRANSPORT_PAIRS}.")
fi

if [[ "$GAS_UNISWAP_REFS" != "$GAS_TRANSPORT_PAIRS" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Gas-native rollout is missing required Uniswap v3 reference venues; live references=${GAS_UNISWAP_REFS}/${GAS_TRANSPORT_PAIRS}.")
fi

if [[ "$PUBLIC_PROTOCOLS_ACTIVE" == "0" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("Public-chain cW* protocol rollout is still inactive across all tracked venues (Uniswap v3, Balancer, Curve 3, DODO PMM, 1inch).")
fi

if [[ "$GRU_PUBLIC_PROTOCOLS_OK" != "1" ]]; then
  FAILURES=$((FAILURES + 1))
  BLOCKERS+=("GRU public protocol summary could not be captured cleanly; rerun check-gru-v2-public-protocols.sh.")
fi

if [[ "$GRU_DEPLOYMENT_QUEUE_OK" != "1" ]]; then
|
||||
FAILURES=$((FAILURES + 1))
|
||||
BLOCKERS+=("GRU deployment queue summary could not be captured cleanly; rerun check-gru-v2-deployment-queue.sh.")
|
||||
fi
|
||||
|
||||
if [[ "$FUNDING_BLOCKER_COUNT" != "0" ]]; then
|
||||
FAILURES=$((FAILURES + 1))
|
||||
BLOCKERS+=("Deployer funding gate still has $FUNDING_BLOCKER_COUNT blocker(s) for remaining live rollout work.")
|
||||
fi
|
||||
|
||||
if [[ "$GRU_FUNDING_OK" != "1" ]]; then
|
||||
FAILURES=$((FAILURES + 1))
|
||||
BLOCKERS+=("Deployer funding summary could not be captured cleanly; rerun check-gru-v2-deployer-funding-status.sh.")
|
||||
fi
|
||||
|
||||
if [[ "$ROLLOUT_NON_EVM" != "0" ]]; then
|
||||
FAILURES=$((FAILURES + 1))
|
||||
BLOCKERS+=("Desired non-EVM deployment targets remain planned / relay-dependent (${NON_EVM_TARGETS:-none}).")
|
||||
fi
|
||||
|
||||
if (( OUTPUT_JSON == 1 )); then
|
||||
jq -n \
|
||||
--arg rpcUrl "$RPC_URL" \
|
||||
--argjson configOk "$CONFIG_OK" \
|
||||
--argjson contractsOk "$CONTRACTS_OK" \
|
||||
--arg contractTotal "${CONTRACT_ADDRESSES:-}" \
|
||||
--arg contractPresent "${CONTRACT_PRESENT:-}" \
|
||||
--arg contractMissing "${CONTRACT_MISSING:-}" \
|
||||
--arg publicApiStatus "$(cat "$TMPDIR/public-api.status")" \
|
||||
--argjson gruV2Ok "$GRU_V2_OK" \
|
||||
--arg gruV2Warnings "${GRU_V2_WARNINGS:-}" \
|
||||
--arg gruV2Blockers "${GRU_V2_BLOCKERS:-}" \
|
||||
--arg gruV2OrphanWarning "$GRU_V2_ORPHAN_WARNING" \
|
||||
--slurpfile rollout "$TMPDIR/gru-rollout.json" \
|
||||
--slurpfile cw "$TMPDIR/cw-status.json" \
|
||||
--slurpfile gas "$TMPDIR/gas-status.json" \
|
||||
--slurpfile gasMatrix "$TMPDIR/gas-matrix.json" \
|
||||
--slurpfile cwMesh "$TMPDIR/cw-mesh.json" \
|
||||
--slurpfile gruPublicProtocols "$TMPDIR/gru-public-protocols.json" \
|
||||
--slurpfile gruDeploymentQueue "$TMPDIR/gru-deployment-queue.json" \
|
||||
--slurpfile gruFunding "$TMPDIR/gru-funding.json" \
|
||||
--arg mainnetPmmPeg "$MAINNET_PMM_PEG_STATUS" \
|
||||
--argjson failures "$FAILURES" \
|
||||
--argjson blockerList "$(printf '%s\n' "${BLOCKERS[@]}" | jq -R . | jq -s .)" \
|
||||
'{
|
||||
rpcUrl: $rpcUrl,
|
||||
mainnetPmmPegReadiness: $mainnetPmmPeg,
|
||||
configOk: ($configOk == 1),
|
||||
contracts: {
|
||||
ok: ($contractsOk == 1),
|
||||
totalAddresses: ($contractTotal | tonumber? // null),
|
||||
present: ($contractPresent | tonumber? // null),
|
||||
missing: ($contractMissing | tonumber? // null)
|
||||
},
|
||||
publicApiStatus: $publicApiStatus,
|
||||
gruV2: {
|
||||
ok: ($gruV2Ok == 1),
|
||||
warnings: ($gruV2Warnings | tonumber? // null),
|
||||
blockers: ($gruV2Blockers | tonumber? // null),
|
||||
orphanWarning: $gruV2OrphanWarning
|
||||
},
|
||||
gruRollout: $rollout[0],
|
||||
gruPublicProtocols: $gruPublicProtocols[0],
|
||||
gruDeploymentQueue: $gruDeploymentQueue[0],
|
||||
deployerFunding: $gruFunding[0],
|
||||
gasNativeRollout: $gas[0],
|
||||
gasNativeDeploymentMatrix: $gasMatrix[0],
|
||||
cwEvmMesh: $cwMesh[0],
|
||||
cwPublicPools: $cw[0],
|
||||
failures: $failures,
|
||||
fullDeploymentComplete: ($failures == 0),
|
||||
blockers: $blockerList
|
||||
}'
|
||||
exit 0
|
||||
fi
|
||||
|
||||
echo "=== Full Deployment Status ==="
|
||||
echo "RPC: $RPC_URL"
|
||||
echo "Config validation: $([[ "$CONFIG_OK" == "1" ]] && echo OK || echo FAIL)"
|
||||
if [[ -n "$CONTRACT_TOTAL_LINE" ]]; then
|
||||
echo "Chain 138 contracts: ${CONTRACT_PRESENT:-?}/${CONTRACT_ADDRESSES:-?} present"
|
||||
else
|
||||
echo "Chain 138 contracts: unknown"
|
||||
fi
|
||||
if (( SKIP_PUBLIC_API == 1 )); then
|
||||
echo "Public API: skipped"
|
||||
else
|
||||
echo "Public API: $([[ "$PUBLIC_API_OK" == "1" ]] && echo OK || echo FAIL)"
|
||||
fi
|
||||
echo "GRU v2: blockers=${GRU_V2_BLOCKERS:-?}, warnings=${GRU_V2_WARNINGS:-?}"
|
||||
echo "GRU global rollout: live=${ROLLOUT_LIVE}/${ROLLOUT_TOTAL}, canonical_only=${ROLLOUT_CANONICAL_ONLY}, backlog=${ROLLOUT_BACKLOG}"
|
||||
echo "GRU public protocols: active=${PUBLIC_PROTOCOLS_ACTIVE}/${PUBLIC_PROTOCOLS_TRACKED}"
|
||||
echo "GRU Wave 1 queue: assets=${QUEUE_WAVE1_ASSETS}, transport_pending=${QUEUE_WAVE1_TRANSPORT_PENDING}, pool_matrix=${QUEUE_WAVE1_SYMBOLS_COVERED}/${QUEUE_WAVE1_SYMBOLS_TOTAL}, planned_pairs=${QUEUE_FIRST_TIER_PLANNED}, live_pairs=${QUEUE_FIRST_TIER_LIVE}, missing_cw_suite_chains=${QUEUE_CHAINS_MISSING_CW}"
|
||||
echo "Deployer funding: blockers=${FUNDING_BLOCKER_COUNT}, warnings=${FUNDING_WARNING_COUNT}"
|
||||
echo "Gas-native rollout: families=${GAS_FAMILIES}, runtime_ready=${GAS_RUNTIME_READY}/${GAS_TRANSPORT_PAIRS}, blocked=${GAS_BLOCKED_PAIRS}, dodo_wrapped=${GAS_WRAPPED_NATIVE_POOLS}/${GAS_TRANSPORT_PAIRS}, dodo_stable=${GAS_STABLE_POOLS}/${GAS_TRANSPORT_PAIRS}, uniswap_refs=${GAS_UNISWAP_REFS}/${GAS_TRANSPORT_PAIRS}, 1inch_visible=${GAS_ONEINCH_VISIBLE}"
|
||||
echo "Gas-native matrix: l1_destinations=${GAS_MATRIX_L1_DESTINATIONS}/${GAS_TRANSPORT_PAIRS}, l1_receivers_match=${GAS_MATRIX_L1_RECEIVER_MATCHES}/${GAS_TRANSPORT_PAIRS}, verifier_refs=${GAS_MATRIX_VERIFIER_REFS_LOADED}/${GAS_TRANSPORT_PAIRS}, l1_partial_bridges=${GAS_MATRIX_L1_PARTIAL_BRIDGES}, top_missing=${GAS_MISSING_TOP_REQUIREMENTS}"
|
||||
echo "cW EVM mesh: full_sets=${CW_MESH_FULL_SETS}/${CW_MESH_TOTAL_CHAINS}, partial=${CW_MESH_PARTIAL_CHAINS}, missing_tokens=${CW_MESH_MISSING_TOKENS}, code_gaps=${CW_MESH_CODE_GAPS}"
|
||||
echo "cW public mesh: chains=${CW_CHAINS}, withTokens=${CW_TOKENS}, withBridge=${CW_BRIDGES}, withPools=${CW_POOLS}"
|
||||
echo "Mainnet PMM peg/bot readiness (ETHEREUM_MAINNET_RPC + DODO_PMM_INTEGRATION_MAINNET): ${MAINNET_PMM_PEG_STATUS}"
|
||||
echo "Desired non-EVM targets: ${NON_EVM_TARGETS:-none}"
|
||||
|
||||
echo ""
|
||||
if (( FAILURES == 0 )); then
|
||||
echo "[OK] Full deployment gate is green."
|
||||
else
|
||||
echo "[WARN] Full deployment is not complete. Active blockers:"
|
||||
for blocker in "${BLOCKERS[@]}"; do
|
||||
echo "- $blocker"
|
||||
done
|
||||
fi
|
||||
|
||||
if [[ -n "$GRU_V2_ORPHAN_WARNING" ]]; then
|
||||
echo ""
|
||||
echo "Additional warning:"
|
||||
echo "- $GRU_V2_ORPHAN_WARNING"
|
||||
fi
|
||||
|
||||
if (( FAILURES > 0 )) && [[ "$SKIP_EXIT" != "1" ]]; then
|
||||
exit 1
|
||||
fi
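The `--argjson blockerList` argument above is produced by piping the `BLOCKERS` bash array through jq's raw-input/slurp idiom. A minimal standalone sketch of that idiom (sample messages are made up; `-c` is added here only to get a one-line result, the gate itself uses pretty output):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Made-up blocker messages standing in for the gate's BLOCKERS array.
BLOCKERS=("first blocker" "second blocker")

# jq -R wraps each raw input line as a JSON string;
# jq -s (slurp) collects those strings into a single JSON array.
printf '%s\n' "${BLOCKERS[@]}" | jq -R . | jq -s -c .
```

This keeps quoting safe: any special characters inside a blocker message are escaped by jq rather than interpolated into the JSON by the shell.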
152
scripts/verify/check-gas-public-pool-status.sh
Executable file
@@ -0,0 +1,152 @@
#!/usr/bin/env bash
# Summarize gas-native c* / cW* rollout state from GRU transport config and deployment-status.json.
# Usage:
#   bash scripts/verify/check-gas-public-pool-status.sh
#   bash scripts/verify/check-gas-public-pool-status.sh --json

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
export PROJECT_ROOT

# shellcheck source=/dev/null
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh" >/dev/null 2>&1 || true

OUTPUT_JSON=0
for arg in "$@"; do
  case "$arg" in
    --json) OUTPUT_JSON=1 ;;
    *)
      echo "Unknown argument: $arg" >&2
      exit 2
      ;;
  esac
done

command -v node >/dev/null 2>&1 || {
  echo "[FAIL] Missing required command: node" >&2
  exit 1
}

OUTPUT_JSON="$OUTPUT_JSON" node <<'NODE'
const path = require('path');
const {
  getActiveTransportPairs,
  getGasAssetFamilies,
  getGasProtocolExposureRecord,
  loadDeploymentStatusJson,
  loadGruTransportActiveJson,
} = require(path.join(process.env.PROJECT_ROOT, 'config/token-mapping-loader.cjs'));

const outputJson = process.env.OUTPUT_JSON === '1';
const deployment = loadDeploymentStatusJson() || {};
const chains = deployment.chains || {};
const gasFamilies = getGasAssetFamilies();
const pairs = getActiveTransportPairs().filter((pair) => pair.assetClass === 'gas_native');
const active = loadGruTransportActiveJson() || {};
const configuredGasPairs = Array.isArray(active.transportPairs)
  ? active.transportPairs.filter((pair) => pair && pair.assetClass === 'gas_native')
  : [];
const deferredGasPairs = configuredGasPairs.filter((pair) => pair.active === false).length;

const rows = pairs
  .map((pair) => {
    const chain = chains[String(pair.destinationChainId)] || {};
    const gasPools = Array.isArray(chain.gasPmmPools)
      ? chain.gasPmmPools.filter((pool) => pool.familyKey === pair.familyKey)
      : [];
    const venues = Array.isArray(chain.gasReferenceVenues)
      ? chain.gasReferenceVenues.filter((venue) => venue.familyKey === pair.familyKey)
      : [];
    const wrappedNativePool = gasPools.find((pool) => pool.poolType === 'wrapped_native');
    const stableQuotePool = gasPools.find((pool) => pool.poolType === 'stable_quote');
    const protocolExposure = getGasProtocolExposureRecord(`${pair.destinationChainId}-${pair.familyKey}`);
    const uniswap = venues.find((venue) => venue.protocol === 'uniswap_v3');
    const balancer = venues.find((venue) => venue.protocol === 'balancer');
    const curve = venues.find((venue) => venue.protocol === 'curve');
    const oneInch = venues.find((venue) => venue.protocol === '1inch');
    const missing = [];

    if (!pair.mirroredAddress) missing.push('mirror:address');
    if (!wrappedNativePool) missing.push('dodo:wrapped_native');
    if (!stableQuotePool) missing.push('dodo:stable_quote');
    if (!uniswap) missing.push('reference:uniswap_v3');
    if (!pair.supplyInvariantSatisfied) missing.push('supplyInvariant');
    if (!pair.runtimeReady) {
      missing.push(...(pair.eligibilityBlockers || []).map((item) => `eligibility:${item}`));
      missing.push(...(pair.runtimeMissingRequirements || []).map((item) => `runtime:${item}`));
    }

    return {
      key: pair.key,
      familyKey: pair.familyKey,
      chainId: pair.destinationChainId,
      chainName: pair.destinationChainName,
      canonicalSymbol: pair.canonicalSymbol,
      mirroredSymbol: pair.mirroredSymbol,
      wrappedNativeQuoteSymbol: pair.wrappedNativeQuoteSymbol,
      stableQuoteSymbol: pair.stableQuoteSymbol,
      backingMode: pair.backingMode,
      redeemPolicy: pair.redeemPolicy,
      runtimeReady: pair.runtimeReady === true,
      supplyInvariantSatisfied: pair.supplyInvariantSatisfied === true,
      wrappedNativePoolLive: Boolean(wrappedNativePool?.publicRoutingEnabled),
      stableQuotePoolLive: Boolean(stableQuotePool?.publicRoutingEnabled),
      uniswapReferenceLive: Boolean(uniswap?.live),
      balancerSupported: Boolean(balancer?.supported),
      balancerLive: Boolean(balancer?.live),
      curveSupported: Boolean(curve?.supported),
      curveLive: Boolean(curve?.live),
      oneInchRoutingVisible: Boolean(oneInch?.routingVisible),
      protocolExposure,
      missing,
    };
  })
  .sort((a, b) => a.chainId - b.chainId || a.familyKey.localeCompare(b.familyKey));

const summary = {
  gasFamiliesTracked: gasFamilies.length,
  transportPairs: rows.length,
  configuredTransportPairs: configuredGasPairs.length,
  deferredTransportPairs: deferredGasPairs,
  runtimeReadyPairs: rows.filter((row) => row.runtimeReady).length,
  blockedPairs: rows.filter((row) => !row.runtimeReady).length,
  strictEscrowPairs: rows.filter((row) => row.backingMode === 'strict_escrow').length,
  hybridCapPairs: rows.filter((row) => row.backingMode === 'hybrid_cap').length,
  chainsWithGasMirrors: new Set(rows.filter((row) => row.mirroredSymbol).map((row) => row.chainId)).size,
  pairsWithWrappedNativePools: rows.filter((row) => row.wrappedNativePoolLive).length,
  pairsWithStableQuotePools: rows.filter((row) => row.stableQuotePoolLive).length,
  pairsWithUniswapReference: rows.filter((row) => row.uniswapReferenceLive).length,
  pairsWithRoutingVisibleOneInch: rows.filter((row) => row.oneInchRoutingVisible).length,
  supplyInvariantFailures: rows.filter((row) => !row.supplyInvariantSatisfied).length,
};

if (outputJson) {
  console.log(JSON.stringify({ summary, rows }, null, 2));
  process.exit(0);
}

console.log('=== Gas-Native c* / cW* Rollout Status ===');
console.log(`Gas families tracked: ${summary.gasFamiliesTracked}`);
console.log(`Active transport pairs: ${summary.transportPairs}`);
console.log(`Deferred transport pairs: ${summary.deferredTransportPairs}`);
console.log(`Runtime-ready pairs: ${summary.runtimeReadyPairs}`);
console.log(`Blocked pairs: ${summary.blockedPairs}`);
console.log(`Strict escrow pairs: ${summary.strictEscrowPairs}`);
console.log(`Hybrid-cap pairs: ${summary.hybridCapPairs}`);
console.log(`Pairs with wrapped-native DODO pool live: ${summary.pairsWithWrappedNativePools}`);
console.log(`Pairs with stable-quote DODO pool live: ${summary.pairsWithStableQuotePools}`);
console.log(`Pairs with Uniswap v3 reference live: ${summary.pairsWithUniswapReference}`);
console.log(`Pairs with 1inch routing visible: ${summary.pairsWithRoutingVisibleOneInch}`);
console.log(`Supply invariant failures: ${summary.supplyInvariantFailures}`);

for (const row of rows) {
  const readiness = row.runtimeReady ? 'ready' : 'blocked';
  const dodoState = `dodo=${row.wrappedNativePoolLive ? 'wrapped' : 'no-wrapped'}/${row.stableQuotePoolLive ? 'stable' : 'no-stable'}`;
  const uniswapState = `uni=${row.uniswapReferenceLive ? 'live' : 'queued'}`;
  const oneInchState = `1inch=${row.oneInchRoutingVisible ? 'visible' : 'off'}`;
  const missing = row.missing.length > 0 ? ` missing=${row.missing.join(',')}` : '';
  console.log(
    `- ${row.chainId} ${row.chainName} ${row.familyKey} (${row.canonicalSymbol}->${row.mirroredSymbol}, ${row.backingMode}): ${readiness}, ${dodoState}, ${uniswapState}, ${oneInchState}${missing}`
  );
}
NODE
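The `summary` object in the script above is built purely by filtering the per-pair `rows` and taking lengths. A toy, self-contained sketch of that counting pattern (the rows here are made up, not real transport pairs):

```javascript
// Made-up rows standing in for the per-pair records the script assembles.
const rows = [
  { runtimeReady: true, backingMode: 'strict_escrow', chainId: 1 },
  { runtimeReady: false, backingMode: 'hybrid_cap', chainId: 10 },
  { runtimeReady: true, backingMode: 'hybrid_cap', chainId: 10 },
];

// Same filter/length (and Set-dedup) pattern used for the real summary fields.
const summary = {
  transportPairs: rows.length,
  runtimeReadyPairs: rows.filter((row) => row.runtimeReady).length,
  blockedPairs: rows.filter((row) => !row.runtimeReady).length,
  hybridCapPairs: rows.filter((row) => row.backingMode === 'hybrid_cap').length,
  chainsWithGasMirrors: new Set(rows.map((row) => row.chainId)).size,
};

console.log(JSON.stringify(summary));
// {"transportPairs":3,"runtimeReadyPairs":2,"blockedPairs":1,"hybridCapPairs":2,"chainsWithGasMirrors":2}
```

Re-deriving every count from one `rows` array keeps the human-readable and `--json` outputs consistent, since both read the same records.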
454
scripts/verify/check-gas-rollout-deployment-matrix.sh
Executable file
@@ -0,0 +1,454 @@
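The matrix script below decides whether a contract is deployed from `cast code` output, which prints `0x` for an address with no bytecode. The core predicate reduces to this standalone sketch (sample strings are illustrative, not real chain data):

```javascript
// Bytecode-presence predicate applied to `cast code` output:
// empty output or the literal "0x" means no deployed bytecode.
function looksDeployed(castCodeOutput) {
  const out = String(castCodeOutput).trim();
  return out !== '' && out !== '0x';
}

console.log(looksDeployed('0x'));           // no bytecode at the address
console.log(looksDeployed('0x6080604052')); // non-empty runtime bytecode
```

The script's `hasCode` wraps this predicate with an address-format check and RPC error handling, returning `null` when the probe could not run at all.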
#!/usr/bin/env bash
# Audit live gas-native rollout prerequisites and print the next deploy/config steps.
# Usage:
#   bash scripts/verify/check-gas-rollout-deployment-matrix.sh
#   bash scripts/verify/check-gas-rollout-deployment-matrix.sh --json

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
export PROJECT_ROOT

# shellcheck source=/dev/null
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh" >/dev/null 2>&1 || true

OUTPUT_JSON=0
for arg in "$@"; do
  case "$arg" in
    --json) OUTPUT_JSON=1 ;;
    *)
      echo "Unknown argument: $arg" >&2
      exit 2
      ;;
  esac
done

command -v node >/dev/null 2>&1 || {
  echo "[FAIL] Missing required command: node" >&2
  exit 1
}
command -v cast >/dev/null 2>&1 || {
  echo "[FAIL] Missing required command: cast" >&2
  exit 1
}

OUTPUT_JSON="$OUTPUT_JSON" node <<'NODE'
const path = require('path');
const { execFileSync } = require('child_process');

const loader = require(path.join(process.env.PROJECT_ROOT, 'config/token-mapping-loader.cjs'));
const contracts = require(path.join(process.env.PROJECT_ROOT, 'config/contracts-loader.cjs'));
const outputJson = process.env.OUTPUT_JSON === '1';
const active = loader.loadGruTransportActiveJson() || {};
const families = loader.getGasAssetFamilies();
const pairs = loader.getActiveTransportPairs().filter((pair) => pair.assetClass === 'gas_native');
const reserveVerifiers = active.reserveVerifiers || {};
const deployedGenericVerifierAddress = contracts.getContractAddress(138, 'CWAssetReserveVerifier') || '';
const configuredGasPairs = Array.isArray(active.transportPairs)
  ? active.transportPairs.filter((pair) => pair && pair.assetClass === 'gas_native')
  : [];
const deferredGasPairs = configuredGasPairs.filter((pair) => pair.active === false).length;

const rpcEnvByChain = {
  138: 'RPC_URL_138',
  1: 'ETHEREUM_MAINNET_RPC',
  10: 'OPTIMISM_MAINNET_RPC',
  25: 'CRONOS_RPC_URL',
  56: 'BSC_RPC_URL',
  100: 'GNOSIS_MAINNET_RPC',
  137: 'POLYGON_MAINNET_RPC',
  42161: 'ARBITRUM_MAINNET_RPC',
  42220: 'CELO_MAINNET_RPC',
  43114: 'AVALANCHE_RPC_URL',
  8453: 'BASE_MAINNET_RPC',
  1111: 'WEMIX_RPC',
};

function resolveConfigRef(ref) {
  if (!ref || typeof ref !== 'object') return '';
  if (typeof ref.address === 'string' && ref.address.trim()) return ref.address.trim();
  if (typeof ref.env === 'string' && process.env[ref.env]) return process.env[ref.env];
  return '';
}

function hasCode(address, rpcUrl) {
  if (!address || !/^0x[a-fA-F0-9]{40}$/.test(address) || !rpcUrl) return null;
  try {
    const out = execFileSync('cast', ['code', address, '--rpc-url', rpcUrl], {
      encoding: 'utf8',
      stdio: ['ignore', 'pipe', 'pipe'],
    }).trim();
    return out !== '' && out !== '0x';
  } catch {
    return false;
  }
}

function callRead(address, rpcUrl, signature, args = []) {
  if (!address || !/^0x[a-fA-F0-9]{40}$/.test(address) || !rpcUrl) {
    return { ok: false, skipped: true, value: null, error: 'missing_rpc_or_address' };
  }
  try {
    const out = execFileSync(
      'cast',
      ['call', address, signature, ...args.map((arg) => String(arg)), '--rpc-url', rpcUrl],
      {
        encoding: 'utf8',
        stdio: ['ignore', 'pipe', 'pipe'],
      }
    ).trim();
    return { ok: true, skipped: false, value: out, error: null };
  } catch (error) {
    const stderr = error && error.stderr ? String(error.stderr).trim() : '';
    const stdout = error && error.stdout ? String(error.stdout).trim() : '';
    return {
      ok: false,
      skipped: false,
      value: null,
      error: stderr || stdout || (error && error.message ? String(error.message).trim() : 'cast_call_failed'),
    };
  }
}

function parseDestinationConfig(rawValue) {
  if (typeof rawValue !== 'string') {
    return { receiverBridge: null, enabled: null };
  }
  const match = rawValue.match(/^\((0x[a-fA-F0-9]{40}),\s*(true|false)\)$/);
  if (!match) {
    return { receiverBridge: null, enabled: null };
  }
  return {
    receiverBridge: match[1],
    enabled: match[2] === 'true',
  };
}

function classifyL1BridgeCapability(probe) {
  const adminReadable = probe.admin.ok;
  const destinationReadable = probe.destination.ok;
  const accountingReadable =
    probe.reserveVerifier.ok &&
    probe.supportedCanonicalToken.ok &&
    probe.maxOutstanding.ok &&
    probe.outstandingMinted.ok &&
    probe.totalOutstanding.ok &&
    probe.lockedBalance.ok;

  if (adminReadable && destinationReadable && accountingReadable) return 'full_accounting';
  if (adminReadable && destinationReadable) return 'partial_destination_only';
  if (adminReadable) return 'admin_only';
  if (probe.hasCode === false) return 'missing';
  return 'unknown_or_incompatible';
}

function probeL1Bridge(address, rpcUrl, canonicalAddress, destinationChainSelector) {
  const admin = callRead(address, rpcUrl, 'admin()(address)');
  const destination =
    canonicalAddress && destinationChainSelector
      ? callRead(address, rpcUrl, 'destinations(address,uint64)((address,bool))', [
          canonicalAddress,
          destinationChainSelector,
        ])
      : { ok: false, skipped: true, value: null, error: 'missing_selector_or_token' };
  const reserveVerifier = callRead(address, rpcUrl, 'reserveVerifier()(address)');
  const supportedCanonicalToken = canonicalAddress
    ? callRead(address, rpcUrl, 'supportedCanonicalToken(address)(bool)', [canonicalAddress])
    : { ok: false, skipped: true, value: null, error: 'missing_token' };
  const maxOutstanding =
    canonicalAddress && destinationChainSelector
      ? callRead(address, rpcUrl, 'maxOutstanding(address,uint64)(uint256)', [canonicalAddress, destinationChainSelector])
      : { ok: false, skipped: true, value: null, error: 'missing_selector_or_token' };
  const outstandingMinted =
    canonicalAddress && destinationChainSelector
      ? callRead(address, rpcUrl, 'outstandingMinted(address,uint64)(uint256)', [
          canonicalAddress,
          destinationChainSelector,
        ])
      : { ok: false, skipped: true, value: null, error: 'missing_selector_or_token' };
  const totalOutstanding = canonicalAddress
    ? callRead(address, rpcUrl, 'totalOutstanding(address)(uint256)', [canonicalAddress])
    : { ok: false, skipped: true, value: null, error: 'missing_token' };
  const lockedBalance = canonicalAddress
    ? callRead(address, rpcUrl, 'lockedBalance(address)(uint256)', [canonicalAddress])
    : { ok: false, skipped: true, value: null, error: 'missing_token' };
  const destinationConfig = parseDestinationConfig(destination.value);
  const readableViews = [
    ['admin', admin.ok],
    ['destinations', destination.ok],
    ['reserveVerifier', reserveVerifier.ok],
    ['supportedCanonicalToken', supportedCanonicalToken.ok],
    ['maxOutstanding', maxOutstanding.ok],
    ['outstandingMinted', outstandingMinted.ok],
    ['totalOutstanding', totalOutstanding.ok],
    ['lockedBalance', lockedBalance.ok],
  ]
    .filter(([, readable]) => readable)
    .map(([label]) => label);

  return {
    capability: classifyL1BridgeCapability({
      hasCode: hasCode(address, rpcUrl),
      admin,
      destination,
      reserveVerifier,
      supportedCanonicalToken,
      maxOutstanding,
      outstandingMinted,
      totalOutstanding,
      lockedBalance,
    }),
    readableViews,
    destinationConfigured: destinationConfig.enabled,
    destinationReceiverBridge: destinationConfig.receiverBridge,
    reserveVerifier,
    supportedCanonicalToken,
    maxOutstanding,
    outstandingMinted,
    totalOutstanding,
    lockedBalance,
  };
}

function describeMirrorName(pair) {
  if (pair.familyKey === 'eth_mainnet') return 'Wrapped cETH Mainnet';
  if (pair.familyKey === 'eth_l2') return 'Wrapped cETHL2';
  return `Wrapped ${pair.canonicalSymbol}`;
}

function shellQuote(value) {
  return `'${String(value).replace(/'/g, `'\\''`)}'`;
}

const rows = pairs.map((pair) => {
  const destinationRpcEnv = rpcEnvByChain[pair.destinationChainId] || null;
  const destinationRpcUrl = destinationRpcEnv ? process.env[destinationRpcEnv] || '' : '';
  const chain138RpcUrl = process.env[rpcEnvByChain[138]] || '';
  const reserveVerifier = pair.reserveVerifierKey ? reserveVerifiers[pair.reserveVerifierKey] || null : null;
  const reserveVerifierEnvKey =
    reserveVerifier &&
    reserveVerifier.verifierRef &&
    typeof reserveVerifier.verifierRef.env === 'string' &&
    reserveVerifier.verifierRef.env.trim()
      ? reserveVerifier.verifierRef.env.trim()
      : '';

  const l1BridgeAddress = pair.runtimeL1BridgeAddress || resolveConfigRef(pair.peer?.l1Bridge);
  const l2BridgeAddress = pair.runtimeL2BridgeAddress || resolveConfigRef(pair.peer?.l2Bridge);
  const reserveVerifierAddress = pair.runtimeReserveVerifierAddress || resolveConfigRef(reserveVerifier?.verifierRef);
  const reserveVaultAddress = pair.runtimeReserveVaultAddress || resolveConfigRef(reserveVerifier?.vaultRef);
  const destinationChainSelector = typeof pair.destinationChainSelector === 'string' ? pair.destinationChainSelector : '';

  const canonicalCodeLive = hasCode(pair.canonicalAddress, chain138RpcUrl);
  const mirroredCodeLive = hasCode(pair.mirroredAddress, destinationRpcUrl);
  const l1BridgeLive = hasCode(l1BridgeAddress, chain138RpcUrl);
  const l2BridgeLive = hasCode(l2BridgeAddress, destinationRpcUrl);
  const reserveVerifierLive = hasCode(reserveVerifierAddress, chain138RpcUrl);
  const reserveVaultLive = reserveVaultAddress ? hasCode(reserveVaultAddress, chain138RpcUrl) : null;
  const l1BridgeProbe = probeL1Bridge(l1BridgeAddress, chain138RpcUrl, pair.canonicalAddress, destinationChainSelector);
  const destinationReceiverMatchesRuntimeL2 =
    !!l1BridgeProbe.destinationReceiverBridge &&
    !!l2BridgeAddress &&
    l1BridgeProbe.destinationReceiverBridge.toLowerCase() === l2BridgeAddress.toLowerCase();

  const actions = [];
  if (canonicalCodeLive !== true) {
    actions.push({
      type: 'deployCanonical',
      command:
        `GAS_FAMILY=${shellQuote(pair.familyKey)} forge script script/deploy/DeployGasCanonicalTokens.s.sol:DeployGasCanonicalTokens --rpc-url "$RPC_URL_138" --broadcast`,
    });
  }
  if (mirroredCodeLive !== true) {
    actions.push({
      type: 'deployMirror',
      command:
        `CW_BRIDGE_ADDRESS="$${pair.peer?.l2Bridge?.env || 'CW_BRIDGE_MISSING'}" ` +
        `CW_TOKEN_NAME=${shellQuote(describeMirrorName(pair))} ` +
        `CW_TOKEN_SYMBOL=${shellQuote(pair.mirroredSymbol)} ` +
        `CW_TOKEN_DECIMALS=18 ` +
        `forge script script/deploy/DeploySingleCWToken.s.sol:DeploySingleCWToken --rpc-url "$${destinationRpcEnv || 'RPC_MISSING'}" --broadcast`,
    });
  }
  if (!l1BridgeAddress) {
    actions.push({ type: 'setEnv', command: 'export CHAIN138_L1_BRIDGE=<live_chain138_l1_bridge>' });
  }
  if (!l2BridgeAddress) {
    actions.push({ type: 'setEnv', command: `export ${pair.peer?.l2Bridge?.env || 'CW_BRIDGE_TARGET'}=<live_destination_bridge>` });
  }
  if (destinationChainSelector && l1BridgeProbe.destinationConfigured !== true && l2BridgeAddress) {
    actions.push({
      type: 'configureL1Destination',
      command:
        `cast send "$CHAIN138_L1_BRIDGE" "configureDestination(address,uint64,address,bool)" ` +
        `${pair.canonicalAddress} ${destinationChainSelector} ${l2BridgeAddress} true ` +
        '--rpc-url "$RPC_URL_138" --private-key "$PRIVATE_KEY"',
    });
  }
  if (!reserveVerifierAddress && deployedGenericVerifierAddress && reserveVerifierEnvKey) {
    actions.push({
      type: 'setVerifierEnv',
      command: `export ${reserveVerifierEnvKey}=${deployedGenericVerifierAddress}`,
    });
  } else if (!reserveVerifierAddress) {
    actions.push({
      type: 'deployVerifier',
      command:
        'CW_ASSET_RESERVE_VAULT=<live_vault> CW_ASSET_RESERVE_SYSTEM=<live_reserve_system_or_blank> ' +
        'forge script script/DeployCWAssetReserveVerifier.s.sol:DeployCWAssetReserveVerifier --rpc-url "$RPC_URL_138" --broadcast',
    });
  }
  if (!pair.runtimeMaxOutstandingValue) {
    actions.push({ type: 'setCap', command: `export ${pair.maxOutstanding?.env || 'CW_MAX_OUTSTANDING_TARGET'}=<per_lane_cap_raw>` });
  }
  if (!pair.runtimeOutstandingValue || !pair.runtimeEscrowedValue || pair.runtimeTreasuryBackedValue == null) {
    actions.push({ type: 'setSupplyAccounting', command: 'export CW_GAS_OUTSTANDING_*=... CW_GAS_ESCROWED_*=... CW_GAS_TREASURY_BACKED_*=...' });
  }
  if (l1BridgeProbe.capability !== 'full_accounting' && l1BridgeLive === true) {
    actions.push({
      type: 'auditL1BridgeAbi',
      command:
        `cast call "$CHAIN138_L1_BRIDGE" "admin()(address)" --rpc-url "$RPC_URL_138" && ` +
        `cast call "$CHAIN138_L1_BRIDGE" "destinations(address,uint64)((address,bool))" ${pair.canonicalAddress} ${destinationChainSelector || 0} --rpc-url "$RPC_URL_138"`,
    });
  }

  return {
    key: pair.key,
    familyKey: pair.familyKey,
    chainId: pair.destinationChainId,
    chainName: pair.destinationChainName,
    destinationChainSelector: destinationChainSelector || null,
    canonicalSymbol: pair.canonicalSymbol,
    mirroredSymbol: pair.mirroredSymbol,
    canonicalAddress: pair.canonicalAddress,
    mirroredAddress: pair.mirroredAddress,
    l1BridgeAddress,
    l2BridgeAddress,
    reserveVerifierAddress,
    deployedGenericVerifierAddress: deployedGenericVerifierAddress || null,
    reserveVerifierEnvKey: reserveVerifierEnvKey || null,
    reserveVaultAddress,
    canonicalCodeLive,
    mirroredCodeLive,
    l1BridgeLive,
    l2BridgeLive,
    reserveVerifierLive,
    reserveVaultLive,
    l1BridgeCapability: l1BridgeProbe.capability,
    l1BridgeReadableViews: l1BridgeProbe.readableViews,
    l1BridgeProbeErrors: {
      reserveVerifier: l1BridgeProbe.reserveVerifier.ok ? null : l1BridgeProbe.reserveVerifier.error,
      supportedCanonicalToken: l1BridgeProbe.supportedCanonicalToken.ok ? null : l1BridgeProbe.supportedCanonicalToken.error,
      maxOutstanding: l1BridgeProbe.maxOutstanding.ok ? null : l1BridgeProbe.maxOutstanding.error,
      outstandingMinted: l1BridgeProbe.outstandingMinted.ok ? null : l1BridgeProbe.outstandingMinted.error,
      totalOutstanding: l1BridgeProbe.totalOutstanding.ok ? null : l1BridgeProbe.totalOutstanding.error,
      lockedBalance: l1BridgeProbe.lockedBalance.ok ? null : l1BridgeProbe.lockedBalance.error,
    },
    destinationConfiguredOnL1: l1BridgeProbe.destinationConfigured,
    destinationReceiverBridgeOnL1: l1BridgeProbe.destinationReceiverBridge,
    destinationReceiverMatchesRuntimeL2,
    runtimeReady: pair.runtimeReady === true,
    runtimeMissingRequirements: pair.runtimeMissingRequirements || [],
    actions,
  };
});

const uniqueL1BridgeRows = Array.from(
  new Map(
    rows
      .filter((row) => row.l1BridgeAddress)
      .map((row) => [String(row.l1BridgeAddress).toLowerCase(), row])
  ).values()
);

const summary = {
  gasFamilies: families.length,
  transportPairs: rows.length,
  configuredTransportPairs: configuredGasPairs.length,
  deferredTransportPairs: deferredGasPairs,
  canonicalContractsLive: rows.filter((row) => row.canonicalCodeLive === true).length,
  mirroredContractsLive: rows.filter((row) => row.mirroredCodeLive === true).length,
  l1BridgeRefsLoaded: rows.filter((row) => row.l1BridgeAddress).length,
  l2BridgeRefsLoaded: rows.filter((row) => row.l2BridgeAddress).length,
  verifierRefsLoaded: rows.filter((row) => row.reserveVerifierAddress).length,
  runtimeReadyPairs: rows.filter((row) => row.runtimeReady).length,
  pairsWithL1DestinationConfigured: rows.filter((row) => row.destinationConfiguredOnL1 === true).length,
  pairsWithL1ReceiverMatchingRuntimeL2: rows.filter((row) => row.destinationReceiverMatchesRuntimeL2 === true).length,
  l1BridgesObserved: uniqueL1BridgeRows.length,
  l1BridgesWithFullAccountingIntrospection: uniqueL1BridgeRows.filter(
    (row) => row.l1BridgeCapability === 'full_accounting'
  ).length,
  l1BridgesWithPartialDestinationIntrospection: uniqueL1BridgeRows.filter(
    (row) => row.l1BridgeCapability === 'partial_destination_only'
  ).length,
|
||||
deployedGenericVerifierAddress: deployedGenericVerifierAddress || null,
|
||||
};
|
||||
|
||||
if (outputJson) {
|
||||
console.log(JSON.stringify({ summary, rows }, null, 2));
|
||||
process.exit(0);
|
||||
}
|
||||
|
||||
console.log('=== Gas Rollout Deployment Matrix ===');
|
||||
console.log(`Gas families: ${summary.gasFamilies}`);
|
||||
console.log(`Active transport pairs: ${summary.transportPairs}`);
|
||||
console.log(`Deferred transport pairs: ${summary.deferredTransportPairs}`);
|
||||
console.log(`Canonical contracts live on 138: ${summary.canonicalContractsLive}/${summary.transportPairs}`);
|
||||
console.log(`Mirrored contracts live on destination chains: ${summary.mirroredContractsLive}/${summary.transportPairs}`);
|
||||
console.log(`Loaded L1 bridge refs: ${summary.l1BridgeRefsLoaded}/${summary.transportPairs}`);
|
||||
console.log(`Loaded L2 bridge refs: ${summary.l2BridgeRefsLoaded}/${summary.transportPairs}`);
|
||||
console.log(`Loaded verifier refs: ${summary.verifierRefsLoaded}/${summary.transportPairs}`);
|
||||
console.log(`Runtime-ready pairs: ${summary.runtimeReadyPairs}/${summary.transportPairs}`);
|
||||
console.log(`L1 destinations configured: ${summary.pairsWithL1DestinationConfigured}/${summary.transportPairs}`);
|
||||
console.log(
|
||||
`L1 destination receivers matching runtime L2 bridges: ${summary.pairsWithL1ReceiverMatchingRuntimeL2}/${summary.transportPairs}`
|
||||
);
|
||||
console.log(
|
||||
`Observed L1 bridges with full accounting introspection: ${summary.l1BridgesWithFullAccountingIntrospection}/${summary.l1BridgesObserved}`
|
||||
);
|
||||
console.log(
|
||||
`Observed L1 bridges with destination-only introspection: ${summary.l1BridgesWithPartialDestinationIntrospection}/${summary.l1BridgesObserved}`
|
||||
);
|
||||
if (summary.deployedGenericVerifierAddress) {
|
||||
console.log(`Deployed generic verifier on 138: ${summary.deployedGenericVerifierAddress}`);
|
||||
}
|
||||
|
||||
for (const row of rows) {
|
||||
const parts = [
|
||||
`${row.chainId} ${row.chainName}`,
|
||||
`${row.familyKey}`,
|
||||
`${row.canonicalSymbol}->${row.mirroredSymbol}`,
|
||||
`selector=${row.destinationChainSelector || 'unset'}`,
|
||||
`canonical=${row.canonicalCodeLive === true ? 'live' : row.canonicalCodeLive === false ? 'missing' : 'unknown'}`,
|
||||
`mirror=${row.mirroredCodeLive === true ? 'live' : row.mirroredCodeLive === false ? 'missing' : 'unknown'}`,
|
||||
`l1=${row.l1BridgeAddress ? 'set' : 'unset'}`,
|
||||
`l2=${row.l2BridgeAddress ? 'set' : 'unset'}`,
|
||||
`verifier=${row.reserveVerifierAddress ? 'set' : 'unset'}`,
|
||||
`l1cap=${row.l1BridgeCapability}`,
|
||||
`l1dest=${row.destinationConfiguredOnL1 === true ? 'wired' : row.destinationConfiguredOnL1 === false ? 'missing' : 'unknown'}`,
|
||||
];
|
||||
console.log(`- ${parts.join(' | ')}`);
|
||||
if (row.destinationReceiverBridgeOnL1) {
|
||||
console.log(
|
||||
` l1 destination receiver: ${row.destinationReceiverBridgeOnL1}${row.destinationReceiverMatchesRuntimeL2 ? ' (matches runtime L2)' : ' (differs from runtime L2)'}`
|
||||
);
|
||||
}
|
||||
if (row.l1BridgeReadableViews.length > 0) {
|
||||
console.log(` l1 readable views: ${row.l1BridgeReadableViews.join(', ')}`);
|
||||
}
|
||||
const failedProbeViews = Object.entries(row.l1BridgeProbeErrors)
|
||||
.filter(([, error]) => typeof error === 'string' && error)
|
||||
.map(([label]) => label);
|
||||
if (failedProbeViews.length > 0) {
|
||||
console.log(` l1 probe gaps: ${failedProbeViews.join(', ')}`);
|
||||
}
|
||||
if (row.runtimeMissingRequirements.length > 0) {
|
||||
console.log(` missing: ${row.runtimeMissingRequirements.join(', ')}`);
|
||||
}
|
||||
for (const action of row.actions.slice(0, 3)) {
|
||||
console.log(` next ${action.type}: ${action.command}`);
|
||||
}
|
||||
}
|
||||
NODE
|
||||
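The `uniqueL1BridgeRows` dedupe above keys a `Map` by the lowercased bridge address, so per-pair rows that share one L1 bridge collapse to a single entry (the last write wins), before the per-bridge introspection counters are computed. A minimal sketch with hypothetical rows (the addresses and keys are illustrative, not from the repo):

```javascript
// Hypothetical rows: two transport pairs share one L1 bridge (case-differing).
const rows = [
  { key: 'pair-a', l1BridgeAddress: '0xAbC1', l1BridgeCapability: 'full_accounting' },
  { key: 'pair-b', l1BridgeAddress: '0xabc1', l1BridgeCapability: 'full_accounting' },
  { key: 'pair-c', l1BridgeAddress: null }, // rows without a bridge ref are filtered out
  { key: 'pair-d', l1BridgeAddress: '0xDeF2', l1BridgeCapability: 'partial_destination_only' },
];

// Same shape as the script: Map keyed by lowercased address, last write wins.
const uniqueL1BridgeRows = Array.from(
  new Map(
    rows
      .filter((row) => row.l1BridgeAddress)
      .map((row) => [String(row.l1BridgeAddress).toLowerCase(), row])
  ).values()
);

console.log(uniqueL1BridgeRows.map((row) => row.key).join(',')); // pair-b,pair-d
```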
184	scripts/verify/check-gru-global-priority-rollout.sh	Normal file
@@ -0,0 +1,184 @@
#!/usr/bin/env bash
# Compare the GRU global-priority rollout plan against the current repo state.
# Usage:
#   bash scripts/verify/check-gru-global-priority-rollout.sh
#   bash scripts/verify/check-gru-global-priority-rollout.sh --json
#   bash scripts/verify/check-gru-global-priority-rollout.sh --wave=wave1

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
export PROJECT_ROOT

OUTPUT_JSON=0
WAVE_FILTER=""

for arg in "$@"; do
  case "$arg" in
    --json) OUTPUT_JSON=1 ;;
    --wave=*) WAVE_FILTER="${arg#--wave=}" ;;
    *)
      echo "Unknown argument: $arg" >&2
      exit 2
      ;;
  esac
done

need_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[FAIL] Missing required command: $1" >&2
    exit 1
  }
}

need_cmd node

OUTPUT_JSON="$OUTPUT_JSON" WAVE_FILTER="$WAVE_FILTER" node <<'NODE'
const fs = require('fs');
const path = require('path');

const root = process.env.PROJECT_ROOT;
const outputJson = process.env.OUTPUT_JSON === '1';
const waveFilter = process.env.WAVE_FILTER || '';

function readJson(relPath) {
  return JSON.parse(fs.readFileSync(path.join(root, relPath), 'utf8'));
}

const rollout = readJson('config/gru-global-priority-currency-rollout.json');
const manifest = readJson('config/gru-iso4217-currency-manifest.json');
const mapping = readJson('config/token-mapping-multichain.json');
const transport = readJson('config/gru-transport-active.json');

const manifestByCode = new Map((manifest.currencies || []).map((item) => [item.code, item]));
const transportBySymbol = new Map((transport.enabledCanonicalTokens || []).map((item) => [item.symbol, item]));
const cToCw = mapping.cToCwSymbolMapping || {};
const wavesById = new Map((rollout.waves || []).map((item) => [item.id, item]));
const desiredDestinationNetworks = rollout.desiredDestinationNetworks || {};
const destinationSummary = {
  evmPublicCwMesh: (desiredDestinationNetworks.evmPublicCwMeshChainIds || []).length,
  altEvmPrograms: (desiredDestinationNetworks.altEvmPrograms || []).length,
  nonEvmRelayPrograms: (desiredDestinationNetworks.nonEvmRelayPrograms || []).length
};

function manifestSymbolsFor(entry) {
  if (!entry || !entry.canonicalAssets) return [];
  const out = [];
  for (const [form, data] of Object.entries(entry.canonicalAssets)) {
    if (data && typeof data.symbol === 'string') {
      out.push({ form, symbol: data.symbol });
    }
  }
  return out;
}

function deriveState(bits) {
  if (bits.transportActive) return 'live_transport';
  if (bits.canonical138Deployed) return 'canonical_only';
  if (bits.manifestPresent) return 'manifest_only';
  if (bits.cToCwMapped) return 'mapping_only';
  return 'backlog';
}

function nextStep(bits) {
  if (!bits.manifestPresent) return 'add_to_manifest_and_standards';
  if (!bits.canonical138Deployed) return 'deploy_canonical_on_chain138';
  if (!bits.cToCwMapped) return 'add_c_to_cw_symbol_mapping';
  if (!bits.transportActive) return 'deploy_cw_and_enable_transport';
  if (!bits.x402Ready) return 'promote_x402_surface';
  return 'monitor_and_scale';
}

const filteredAssets = (rollout.assets || []).filter((asset) => !waveFilter || asset.wave === waveFilter);

const results = filteredAssets.map((asset) => {
  const manifestEntry = manifestByCode.get(asset.code);
  const rolloutSymbols = (asset.tokenForms || []).map((item) => ({
    form: item.form,
    canonicalSymbol: item.canonicalSymbol,
    wrappedSymbol: item.wrappedSymbol
  }));
  const manifestSymbols = manifestSymbolsFor(manifestEntry);
  const manifestPresent = Boolean(manifestEntry);
  const canonical138Deployed = Boolean(manifestEntry?.status?.deployed);
  const transportActive = rolloutSymbols.some((item) => {
    const transportEntry = transportBySymbol.get(item.canonicalSymbol);
    return Boolean(transportEntry);
  });
  const x402Ready = Boolean(manifestEntry?.status?.x402Ready);
  const cToCwMapped = rolloutSymbols.length > 0 && rolloutSymbols.every((item) => cToCw[item.canonicalSymbol] === item.wrappedSymbol);
  const manifestMatchesRollout = manifestSymbols.length > 0 && rolloutSymbols.every((item) => manifestSymbols.some((m) => m.symbol === item.canonicalSymbol));

  const bits = {
    manifestPresent,
    canonical138Deployed,
    cToCwMapped,
    transportActive,
    x402Ready
  };

  return {
    code: asset.code,
    name: asset.name,
    tier: asset.tier,
    rank: asset.rank,
    wave: asset.wave,
    waveName: wavesById.get(asset.wave)?.name || asset.wave,
    repoTargetState: asset.repoTargetState,
    currentRepoState: deriveState(bits),
    nextStep: nextStep(bits),
    manifestPresent,
    canonical138Deployed,
    cToCwMapped,
    transportActive,
    x402Ready,
    manifestMatchesRollout,
    canonicalSymbols: rolloutSymbols.map((item) => item.canonicalSymbol),
    wrappedSymbols: rolloutSymbols.map((item) => item.wrappedSymbol)
  };
});

const summary = {
  totalAssets: results.length,
  liveTransport: results.filter((item) => item.currentRepoState === 'live_transport').length,
  canonicalOnly: results.filter((item) => item.currentRepoState === 'canonical_only').length,
  manifestOnly: results.filter((item) => item.currentRepoState === 'manifest_only').length,
  backlog: results.filter((item) => item.currentRepoState === 'backlog').length,
  mappingOnly: results.filter((item) => item.currentRepoState === 'mapping_only').length
};

if (outputJson) {
  console.log(JSON.stringify({ summary, destinationSummary, desiredDestinationNetworks, results }, null, 2));
  process.exit(0);
}

console.log('=== GRU Global Priority Cross-Chain Rollout ===');
console.log(`Assets checked: ${summary.totalAssets}`);
console.log(`Desired destination networks: EVM cW mesh=${destinationSummary.evmPublicCwMesh}, ALT/gated EVM=${destinationSummary.altEvmPrograms}, non-EVM relay=${destinationSummary.nonEvmRelayPrograms}`);
console.log(`live_transport: ${summary.liveTransport}`);
console.log(`canonical_only: ${summary.canonicalOnly}`);
console.log(`manifest_only: ${summary.manifestOnly}`);
console.log(`mapping_only: ${summary.mappingOnly}`);
console.log(`backlog: ${summary.backlog}`);
if (waveFilter) {
  console.log(`Wave filter: ${waveFilter}`);
}
if ((desiredDestinationNetworks.nonEvmRelayPrograms || []).length > 0) {
  const nonEvmNames = desiredDestinationNetworks.nonEvmRelayPrograms
    .map((item) => item.identifier)
    .join(', ');
  console.log(`Non-EVM desired targets: ${nonEvmNames}`);
}

for (const item of results) {
  const flags = [
    item.manifestPresent ? 'manifest' : 'no-manifest',
    item.canonical138Deployed ? 'canonical138' : 'no-canonical138',
    item.cToCwMapped ? 'mapped' : 'no-mapping',
    item.transportActive ? 'transport' : 'no-transport',
    item.x402Ready ? 'x402' : 'no-x402'
  ].join(', ');
  console.log(`- ${item.code} (${item.name}) [${item.tier} / ${item.wave}] -> ${item.currentRepoState}; next: ${item.nextStep}; ${flags}`);
}
NODE
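The `deriveState`/`nextStep` ladders above give each asset a single lifecycle state and the one action that advances it. A small sketch (same functions copied from the script, with a hypothetical asset whose readiness bits are flipped on in pipeline order) shows the full progression:

```javascript
// Copied from the script: highest-reached bit determines the state.
function deriveState(bits) {
  if (bits.transportActive) return 'live_transport';
  if (bits.canonical138Deployed) return 'canonical_only';
  if (bits.manifestPresent) return 'manifest_only';
  if (bits.cToCwMapped) return 'mapping_only';
  return 'backlog';
}

// Copied from the script: lowest missing bit determines the next action.
function nextStep(bits) {
  if (!bits.manifestPresent) return 'add_to_manifest_and_standards';
  if (!bits.canonical138Deployed) return 'deploy_canonical_on_chain138';
  if (!bits.cToCwMapped) return 'add_c_to_cw_symbol_mapping';
  if (!bits.transportActive) return 'deploy_cw_and_enable_transport';
  if (!bits.x402Ready) return 'promote_x402_surface';
  return 'monitor_and_scale';
}

// Hypothetical asset starting from scratch; flip bits in rollout order.
const stages = ['manifestPresent', 'canonical138Deployed', 'cToCwMapped', 'transportActive', 'x402Ready'];
const bits = { manifestPresent: false, canonical138Deployed: false, cToCwMapped: false, transportActive: false, x402Ready: false };
const trace = [[deriveState(bits), nextStep(bits)]];
for (const stage of stages) {
  bits[stage] = true;
  trace.push([deriveState(bits), nextStep(bits)]);
}
for (const [state, step] of trace) console.log(`${state} -> ${step}`);
```

Note that a mapped-but-undeployed asset still reads `canonical_only` rather than `mapping_only`, because `deriveState` checks the deployment bits first.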
@@ -70,6 +70,9 @@ transport_pairs="$(jq -r '.gruTransport.summary.transportPairs // 0' "$tmp_body"
runtime_ready_pairs="$(jq -r '.gruTransport.summary.runtimeReadyTransportPairs // 0' "$tmp_body")"
blocked_pairs="$(jq -r '.gruTransport.blockedPairs | length' "$tmp_body")"
ready_pairs="$(jq -r '.gruTransport.readyPairs | length' "$tmp_body")"
gas_transport_pairs="$(jq -r '.gruTransport.summary.gasTransportPairs // 0' "$tmp_body")"
strict_pairs="$(jq -r '.gruTransport.summary.strictEscrowTransportPairs // 0' "$tmp_body")"
hybrid_pairs="$(jq -r '.gruTransport.summary.hybridCapTransportPairs // 0' "$tmp_body")"

display_path="${prefix}/api/v1/bridge/preflight"
if [[ -z "$prefix" ]]; then
@@ -80,6 +83,9 @@ log "Transport pairs: $transport_pairs"
log "Runtime-ready pairs: $runtime_ready_pairs"
log "Ready pairs returned: $ready_pairs"
log "Blocked pairs returned: $blocked_pairs"
log "Gas-native transport pairs: $gas_transport_pairs"
log "Strict-escrow pairs: $strict_pairs"
log "Hybrid-cap pairs: $hybrid_pairs"

if (( blocked_pairs > 0 )); then
  log ""
@@ -89,6 +95,17 @@ if (( blocked_pairs > 0 )); then
    | "- \(.key): eligibilityBlockers=\(((.eligibilityBlockers // []) | join(",")) // "") runtimeMissingRequirements=\(((.runtimeMissingRequirements // []) | join(",")) // "")"
  ' "$tmp_body"

  gas_blocked_pairs="$(jq -r '[.gruTransport.blockedPairs[] | select(.assetClass == "gas_native")] | length' "$tmp_body")"
  if (( gas_blocked_pairs > 0 )); then
    log ""
    warn "Blocked gas-native lanes:"
    jq -r '
      .gruTransport.blockedPairs[]
      | select(.assetClass == "gas_native")
      | "- \(.key): family=\(.familyKey // "n/a") backingMode=\(.backingMode // "n/a") supplyInvariantSatisfied=\(.supplyInvariantSatisfied // false) runtimeMissingRequirements=\(((.runtimeMissingRequirements // []) | join(",")) // "")"
    ' "$tmp_body"
  fi

  if [[ "$ALLOW_BLOCKED" != "1" ]]; then
    fail "GRU transport preflight has blocked pairs. Set ALLOW_BLOCKED=1 for diagnostic-only mode."
  else
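The `gas_blocked_pairs` jq expression above narrows the blocked-pair list to `assetClass == "gas_native"` before counting and printing lane details. The same selection, sketched in Node against a hypothetical preflight body (the pair keys and field values below are illustrative only):

```javascript
// Hypothetical shape of the preflight response body read from $tmp_body.
const body = {
  gruTransport: {
    blockedPairs: [
      { key: 'cBTC->cwBTC@10', assetClass: 'gas_native', familyKey: 'btc', backingMode: 'strict_escrow' },
      { key: 'cEUR->cwEUR@137', assetClass: 'fiat' },
    ],
  },
};

// Equivalent of: jq '[.gruTransport.blockedPairs[] | select(.assetClass == "gas_native")] | length'
const gasBlocked = body.gruTransport.blockedPairs.filter((p) => p.assetClass === 'gas_native');
console.log(gasBlocked.length); // 1
console.log(gasBlocked.map((p) => p.key).join(',')); // cBTC->cwBTC@10
```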
252	scripts/verify/check-gru-v2-chain138-readiness.sh	Executable file
@@ -0,0 +1,252 @@
#!/usr/bin/env bash
# Verify the live GRU c* V2 readiness state on Chain 138.
# Usage:
#   bash scripts/verify/check-gru-v2-chain138-readiness.sh
#   bash scripts/verify/check-gru-v2-chain138-readiness.sh --report-only
#   RPC_URL=https://rpc-http-pub.d-bis.org bash scripts/verify/check-gru-v2-chain138-readiness.sh
#
# The script checks:
#   - local CompliantFiatTokenV2 Foundry suite (optional via RUN_LOCAL_TESTS=1)
#   - deployed bytecode for cUSDT V2 / cUSDC V2
#   - UniversalAssetRegistry activation as GRU assets
#   - core version/signing surface (name/symbol/versionTag/currencyCode/assetId/assetVersionId/DOMAIN_SEPARATOR)
#   - promotion blockers (forwardCanonical still false, missing governance/supervision metadata ABI)
#   - token metadata path is populated and uses decentralized URI format

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
export PROJECT_ROOT

if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
  set +eu
  # shellcheck source=../lib/load-project-env.sh
  source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
  set -euo pipefail
fi

REPORT_ONLY=0
RUN_LOCAL_TESTS="${RUN_LOCAL_TESTS:-0}"
for arg in "$@"; do
  case "$arg" in
    --report-only) REPORT_ONLY=1 ;;
    --run-local-tests) RUN_LOCAL_TESTS=1 ;;
    *)
      echo "Unknown argument: $arg" >&2
      exit 2
      ;;
  esac
done

need_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[FAIL] Missing required command: $1" >&2
    exit 1
  }
}

need_cmd cast

RPC_URL="${RPC_URL:-${RPC_URL_138_PUBLIC:-${RPC_URL_138:-https://rpc-http-pub.d-bis.org}}}"
REGISTRY="${UNIVERSAL_ASSET_REGISTRY:-0xAEE4b7fBe82E1F8295951584CBc772b8BBD68575}"
CUSDT_V2="${COMPLIANT_USDT_V2:-${CUSDT_V2_ADDRESS_138:-0x9FBfab33882Efe0038DAa608185718b772EE5660}}"
CUSDC_V2="${COMPLIANT_USDC_V2:-${CUSDC_V2_ADDRESS_138:-0x219522c60e83dEe01FC5b0329d6fA8fD84b9D13d}}"

BLOCKERS=()
WARNINGS=()

ok() { printf '[OK] %s\n' "$*"; }
warn() { printf '[WARN] %s\n' "$*"; WARNINGS+=("$*"); }
block() { printf '[BLOCKER] %s\n' "$*"; BLOCKERS+=("$*"); }

call_or_fail() {
  local address="$1"
  local signature="$2"
  shift 2
  cast call "$address" "$signature" "$@" --rpc-url "$RPC_URL" 2>/dev/null
}

call_or_placeholder() {
  local address="$1"
  local signature="$2"
  shift 2
  cast call "$address" "$signature" "$@" --rpc-url "$RPC_URL" 2>/dev/null || printf '__CALL_FAILED__'
}

trim_quotes() {
  local value="$1"
  value="${value#\"}"
  value="${value%\"}"
  printf '%s' "$value"
}

run_local_suite() {
  local smom_root="${PROJECT_ROOT}/smom-dbis-138"
  need_cmd forge
  (
    cd "$smom_root"
    rm -f cache/solidity-files-cache.json
    forge test --match-contract CompliantFiatTokenV2Test
  )
}

check_token() {
  local label="$1"
  local address="$2"
  local expected_symbol="$3"
  local expected_currency="$4"

  printf '\n=== %s ===\n' "$label"
  printf 'Address: %s\n' "$address"

  local code
  code="$(cast code "$address" --rpc-url "$RPC_URL" 2>/dev/null || true)"
  if [[ -z "$code" || "$code" == "0x" ]]; then
    block "$label has no bytecode on Chain 138 ($address)"
    return
  fi
  ok "$label bytecode present."

  local name symbol version_tag currency forward_canonical asset_id asset_version_id domain_separator
  name="$(trim_quotes "$(call_or_fail "$address" 'name()(string)')")"
  symbol="$(trim_quotes "$(call_or_fail "$address" 'symbol()(string)')")"
  version_tag="$(trim_quotes "$(call_or_fail "$address" 'versionTag()(string)')")"
  currency="$(trim_quotes "$(call_or_fail "$address" 'currencyCode()(string)')")"
  forward_canonical="$(call_or_fail "$address" 'forwardCanonical()(bool)')"
  asset_id="$(call_or_fail "$address" 'assetId()(bytes32)')"
  asset_version_id="$(call_or_fail "$address" 'assetVersionId()(bytes32)')"
  domain_separator="$(call_or_fail "$address" 'DOMAIN_SEPARATOR()(bytes32)')"

  printf 'name: %s\n' "$name"
  printf 'symbol: %s\n' "$symbol"
  printf 'versionTag: %s\n' "$version_tag"
  printf 'currencyCode: %s\n' "$currency"
  printf 'forwardCanonical: %s\n' "$forward_canonical"
  printf 'assetId: %s\n' "$asset_id"
  printf 'assetVersionId: %s\n' "$asset_version_id"
  printf 'DOMAIN_SEPARATOR: %s\n' "$domain_separator"

  [[ "$symbol" == "$expected_symbol" ]] || block "$label symbol mismatch (expected $expected_symbol, got $symbol)"
  [[ "$currency" == "$expected_currency" ]] || block "$label currencyCode mismatch (expected $expected_currency, got $currency)"
  [[ "$version_tag" == "2" ]] || block "$label versionTag is not 2 (got $version_tag)"
  [[ "$domain_separator" != "0x0000000000000000000000000000000000000000000000000000000000000000" ]] || \
    block "$label DOMAIN_SEPARATOR is zero"

  local registry_active asset_type
  registry_active="$(call_or_fail "$REGISTRY" 'isAssetActive(address)(bool)' "$address")"
  asset_type="$(call_or_fail "$REGISTRY" 'getAssetType(address)(uint8)' "$address")"
  printf 'registryActive: %s\n' "$registry_active"
  printf 'registryAssetType: %s\n' "$asset_type"
  [[ "$registry_active" == "true" ]] || block "$label is not active in UniversalAssetRegistry"
  [[ "$asset_type" == "2" ]] || block "$label is not registered as AssetType.GRU (expected 2, got $asset_type)"

  if [[ "$forward_canonical" != "true" ]]; then
    block "$label is deployed and GRU-registered, but forwardCanonical is still false"
  fi

  local symbol_display token_uri legacy_aliases
  symbol_display="$(trim_quotes "$(call_or_placeholder "$address" 'symbolDisplay()(string)')")"
  token_uri="$(trim_quotes "$(call_or_placeholder "$address" 'tokenURI()(string)')")"
  legacy_aliases="$(call_or_placeholder "$address" 'legacyAliases()(string[])')"
  printf 'symbolDisplay: %s\n' "$symbol_display"
  printf 'tokenURI: %s\n' "${token_uri:-<empty>}"
  printf 'legacyAliases: %s\n' "$legacy_aliases"

  if [[ -z "$token_uri" ]]; then
    block "$label tokenURI is empty"
  elif [[ "$token_uri" != ipfs://* ]]; then
    block "$label tokenURI is not an ipfs:// URI ($token_uri)"
  fi

  local required_surface=(
    "governanceProfileId()(bytes32)"
    "supervisionProfileId()(bytes32)"
    "storageNamespace()(bytes32)"
    "governanceController()(address)"
    "primaryJurisdiction()(string)"
    "regulatoryDisclosureURI()(string)"
    "reportingURI()(string)"
    "canonicalUnderlyingAsset()(address)"
    "supervisionRequired()(bool)"
    "governmentApprovalRequired()(bool)"
    "minimumUpgradeNoticePeriod()(uint256)"
    "wrappedTransport()(bool)"
  )

  local fn value
  for fn in "${required_surface[@]}"; do
    value="$(call_or_placeholder "$address" "$fn")"
    if [[ "$value" == "__CALL_FAILED__" ]]; then
      block "$label is missing required GRU v2 surface: $fn"
    else
      printf '%s => %s\n' "$fn" "$value"
    fi
  done
}

audit_registry_orphans() {
  local raw
  raw="$(call_or_placeholder "$REGISTRY" 'getAssetsByType(uint8)(address[])' 2)"
  if [[ "$raw" == "__CALL_FAILED__" ]]; then
    warn "Could not enumerate AssetType.GRU entries from UniversalAssetRegistry."
    return
  fi

  raw="${raw#[}"
  raw="${raw%]}"

  local registry_addresses=()
  local item address code
  IFS=',' read -r -a registry_addresses <<< "$raw"
  for item in "${registry_addresses[@]}"; do
    address="$(printf '%s' "$item" | xargs)"
    [[ -z "$address" ]] && continue
    code="$(cast code "$address" --rpc-url "$RPC_URL" 2>/dev/null || true)"
    if [[ -z "$code" || "$code" == "0x" ]]; then
      warn "UniversalAssetRegistry has an active GRU entry with no bytecode: $address"
    fi
  done
}

printf '=== GRU v2 Chain 138 readiness ===\n'
printf 'RPC: %s\n' "$RPC_URL"
printf 'UniversalAssetRegistry: %s\n' "$REGISTRY"
printf 'cUSDT V2: %s\n' "$CUSDT_V2"
printf 'cUSDC V2: %s\n' "$CUSDC_V2"

if [[ "$RUN_LOCAL_TESTS" == "1" ]]; then
  printf '\nRunning local CompliantFiatTokenV2 Foundry suite...\n'
  run_local_suite
  ok "Local CompliantFiatTokenV2 suite passed."
else
  warn "Skipping local Foundry suite. Re-run with --run-local-tests or RUN_LOCAL_TESTS=1 to include it."
fi

check_token "cUSDT V2" "$CUSDT_V2" "cUSDT" "USD"
check_token "cUSDC V2" "$CUSDC_V2" "cUSDC" "USD"
audit_registry_orphans

printf '\n=== Summary ===\n'
printf 'Warnings: %s\n' "${#WARNINGS[@]}"
printf 'Blockers: %s\n' "${#BLOCKERS[@]}"

if [[ "${#WARNINGS[@]}" -gt 0 ]]; then
  printf '\nWarnings:\n'
  for item in "${WARNINGS[@]}"; do
    printf ' - %s\n' "$item"
  done
fi

if [[ "${#BLOCKERS[@]}" -gt 0 ]]; then
  printf '\nPromotion blockers:\n'
  for item in "${BLOCKERS[@]}"; do
    printf ' - %s\n' "$item"
  done
  if [[ "$REPORT_ONLY" -eq 1 ]]; then
    exit 0
  fi
  exit 1
fi

ok "GRU v2 assets are ready for forward-canonical promotion on Chain 138."
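`audit_registry_orphans` above parses cast's `[addr1, addr2]`-style array output by stripping the surrounding brackets, splitting on commas, and trimming each entry with `xargs`. The same parsing, sketched in Node over a hypothetical `getAssetsByType` result string (the addresses are the script's own defaults, used here only as sample data):

```javascript
// Hypothetical raw output from: cast call $REGISTRY 'getAssetsByType(uint8)(address[])' 2
let raw = '[0x9FBfab33882Efe0038DAa608185718b772EE5660, 0x219522c60e83dEe01FC5b0329d6fA8fD84b9D13d]';

// Equivalent of: raw="${raw#[}"; raw="${raw%]}"
raw = raw.replace(/^\[/, '').replace(/\]$/, '');

// Equivalent of: IFS=',' read -r -a ... plus the per-item `xargs` trim,
// with empty entries skipped like the `[[ -z "$address" ]] && continue` guard.
const addresses = raw
  .split(',')
  .map((item) => item.trim())
  .filter(Boolean);

console.log(addresses.length); // 2
console.log(addresses[0]); // 0x9FBfab33882Efe0038DAa608185718b772EE5660
```

Each surviving address would then be checked with `cast code`; an entry whose code is empty or `0x` is reported as an orphaned registry entry.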
175	scripts/verify/check-gru-v2-d3mm-expansion-status.sh	Executable file
@@ -0,0 +1,175 @@
#!/usr/bin/env bash
# Summarize the current GRU v2 / D3MM public-EVM expansion posture against the
# explicit network rollout plan.
#
# Usage:
#   bash scripts/verify/check-gru-v2-d3mm-expansion-status.sh
#   bash scripts/verify/check-gru-v2-d3mm-expansion-status.sh --json

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
export PROJECT_ROOT

OUTPUT_JSON=0
for arg in "$@"; do
  case "$arg" in
    --json) OUTPUT_JSON=1 ;;
    *)
      echo "Unknown argument: $arg" >&2
      exit 2
      ;;
  esac
done

command -v node >/dev/null 2>&1 || {
  echo "[FAIL] Missing required command: node" >&2
  exit 1
}

OUTPUT_JSON="$OUTPUT_JSON" node <<'NODE'
const fs = require('fs');
const path = require('path');

const root = process.env.PROJECT_ROOT;
const outputJson = process.env.OUTPUT_JSON === '1';

function readJson(relPath) {
  return JSON.parse(fs.readFileSync(path.join(root, relPath), 'utf8'));
}

const plan = readJson('config/gru-v2-d3mm-network-expansion-plan.json');
const deployment = readJson('cross-chain-pmm-lps/config/deployment-status.json');
const poolMatrix = readJson('cross-chain-pmm-lps/config/pool-matrix.json');
const scaffoldConfig = readJson('config/gru-v2-first-tier-pool-scaffolds.json');

function normalizePair(pair) {
  return String(pair || '').trim().toUpperCase();
}

function poolEntryMatchesPair(entry, pair) {
  const [base, quote] = normalizePair(pair).split('/');
  const baseCandidate = String(entry.base || '').trim().toUpperCase();
  const quoteCandidate = String(entry.quote || '').trim().toUpperCase();
  return baseCandidate === base && quoteCandidate === quote;
}

const scaffoldMap = new Map(
  (scaffoldConfig.chains || []).map((chain) => [
    String(chain.chainId),
    new Set((chain.scaffoldPools || []).map((entry) => normalizePair(entry.pair)))
  ])
);

function deriveChainState(chainPlan) {
  const chain = deployment.chains?.[String(chainPlan.chainId)] || {};
  const matrix = poolMatrix.chains?.[String(chainPlan.chainId)] || {};
  const pmmPools = Array.isArray(chain.pmmPools) ? chain.pmmPools : [];
  const cwTokenCount = Object.keys(chain.cwTokens || {}).length;
  const hubStable = matrix.hubStable || null;
  const hubStableRecorded = hubStable ? Boolean(chain.anchorAddresses?.[hubStable]) : false;
  const requiredPairs = chainPlan.requiredPairs || [];
  const liveRequiredPairs = requiredPairs.filter((pair) => pmmPools.some((entry) => poolEntryMatchesPair(entry, pair)));
  const missingPairs = requiredPairs.filter((pair) => !liveRequiredPairs.includes(pair));
  const scaffoldPairs = scaffoldMap.get(String(chainPlan.chainId)) || new Set();
  const scaffoldedMissingPairs = missingPairs.filter((pair) => scaffoldPairs.has(normalizePair(pair)));
  const gasPmmPoolCount = Array.isArray(chain.gasPmmPools) ? chain.gasPmmPools.length : 0;
  const scaffoldReady = missingPairs.length === 0 || scaffoldedMissingPairs.length === missingPairs.length;

  let state = 'blocked';
  if (cwTokenCount === 0) {
    state = 'blocked';
  } else if (requiredPairs.length > 0 && missingPairs.length === 0) {
    state = 'live_first_tier';
  } else if (pmmPools.length > 0) {
    state = 'partial_first_tier';
  } else if (cwTokenCount > 0 && chain.bridgeAvailable === true) {
    state = 'bootstrap_ready';
  }

  const blockers = [];
  if (cwTokenCount === 0) blockers.push('missing cW suite');
  if (chain.bridgeAvailable !== true) blockers.push('bridgeAvailable=false');
  if (hubStable && !hubStableRecorded) blockers.push(`missing hub stable anchor ${hubStable}`);
  if (missingPairs.length > 0) blockers.push(`missing required pairs: ${missingPairs.join(', ')}`);
  if (missingPairs.length > 0 && !scaffoldReady) blockers.push('missing tracked pool scaffolds');

  const nextAction =
    state === 'live_first_tier'
      ? 'promote D3MM and public edge routing'
      : state === 'partial_first_tier'
        ? 'finish missing first-tier pools and seed depth'
        : state === 'bootstrap_ready'
          ? scaffoldReady
            ? 'deploy first-tier pools from scaffold config'
            : 'build tracked first-tier pool scaffolds'
          : 'complete chain prerequisites';

  return {
    chainId: chainPlan.chainId,
    name: chainPlan.name,
    rolloutMode: chainPlan.rolloutMode,
    targetProtocols: chainPlan.targetProtocols || [],
    bridgeAvailable: chain.bridgeAvailable === true,
    cwTokenCount,
    hubStable,
    hubStableRecorded,
    requiredPairs,
    liveRequiredPairs,
    missingPairs,
    scaffoldedMissingPairs,
    scaffoldReady,
    gasPmmPoolCount,
    recordedPmmPoolCount: pmmPools.length,
    state,
    blockers,
    nextAction
  };
}

const rows = [];
for (const wave of plan.waves || []) {
  for (const chainPlan of wave.chains || []) {
    rows.push({
      waveKey: wave.key,
      priority: wave.priority,
      description: wave.description,
      ...deriveChainState(chainPlan)
    });
  }
}

const summary = {
  plannedChains: rows.length,
  liveFirstTier: rows.filter((row) => row.state === 'live_first_tier').length,
  partialFirstTier: rows.filter((row) => row.state === 'partial_first_tier').length,
  bootstrapReady: rows.filter((row) => row.state === 'bootstrap_ready').length,
  blocked: rows.filter((row) => row.state === 'blocked').length,
  scaffoldReadyChains: rows.filter((row) => row.scaffoldReady).length,
  bootstrapReadyWithScaffolds: rows.filter((row) => row.state === 'bootstrap_ready' && row.scaffoldReady).length
};

if (outputJson) {
  console.log(JSON.stringify({ summary, rows }, null, 2));
  process.exit(0);
}

console.log('=== GRU v2 / D3MM expansion status ===');
console.log(`plannedChains=${summary.plannedChains} liveFirstTier=${summary.liveFirstTier} partialFirstTier=${summary.partialFirstTier} bootstrapReady=${summary.bootstrapReady} blocked=${summary.blocked} scaffoldReadyChains=${summary.scaffoldReadyChains} bootstrapReadyWithScaffolds=${summary.bootstrapReadyWithScaffolds}`);
console.log('');
for (const row of rows.sort((a, b) => a.priority - b.priority || a.chainId - b.chainId)) {
  console.log(`[${row.waveKey}] ${row.name} (${row.chainId})`);
  console.log(` state=${row.state}`);
  console.log(` rolloutMode=${row.rolloutMode}`);
  console.log(` targetProtocols=${row.targetProtocols.join(', ')}`);
  console.log(` cwTokens=${row.cwTokenCount} bridge=${row.bridgeAvailable ? 'true' : 'false'} hubStable=${row.hubStable ?? 'n/a'} hubStableRecorded=${row.hubStableRecorded ? 'yes' : 'no'}`);
  console.log(` scaffoldReady=${row.scaffoldReady ? 'yes' : 'no'}`);
  console.log(` requiredPairsLive=${row.liveRequiredPairs.length}/${row.requiredPairs.length}`);
|
||||
if (row.missingPairs.length > 0) console.log(` missingPairs=${row.missingPairs.join(', ')}`);
|
||||
if (row.scaffoldedMissingPairs.length > 0) console.log(` scaffoldedMissingPairs=${row.scaffoldedMissingPairs.join(', ')}`);
|
||||
if (row.blockers.length > 0) console.log(` blockers=${row.blockers.join('; ')}`);
|
||||
console.log(` next=${row.nextAction}`);
|
||||
console.log('');
|
||||
}
|
||||
NODE
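The `--json` output above has the shape `{ summary, rows }`, which lends itself to quick operator filters. A minimal sketch, assuming the report has been captured to `status.json` (a hypothetical filename, not something the script writes itself):

```shell
# List blocked chains with their blockers from a saved --json report.
# status.json is a hypothetical capture of the script's --json output.
jq -r '.rows[] | select(.state == "blocked") | "\(.name) (\(.chainId)): \(.blockers | join("; "))"' status.json
```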
288  scripts/verify/check-gru-v2-deployer-funding-status.sh  (Normal file)
@@ -0,0 +1,288 @@
#!/usr/bin/env bash
# Check deployer wallet funding posture for the remaining GRU v2 public rollout.
#
# Usage:
#   bash scripts/verify/check-gru-v2-deployer-funding-status.sh
#   bash scripts/verify/check-gru-v2-deployer-funding-status.sh --json
#
# This is an operator-planning surface. It does not broadcast transactions.

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
SMOM_ROOT="$PROJECT_ROOT/smom-dbis-138"

OUTPUT_JSON=0
for arg in "$@"; do
  case "$arg" in
    --json) OUTPUT_JSON=1 ;;
    *)
      echo "Unknown argument: $arg" >&2
      exit 2
      ;;
  esac
done

need_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[FAIL] Missing required command: $1" >&2
    exit 1
  }
}

need_cmd bash
need_cmd cast
need_cmd jq
need_cmd python3

# shellcheck disable=SC1091
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh"

if [[ -z "${PRIVATE_KEY:-}" && -f "$SMOM_ROOT/scripts/lib/deployment/dotenv.sh" ]]; then
  # shellcheck disable=SC1091
  source "$SMOM_ROOT/scripts/lib/deployment/dotenv.sh"
  load_deployment_env --repo-root "$SMOM_ROOT"
fi

if [[ -z "${PRIVATE_KEY:-}" && -n "${DEPLOYER_PRIVATE_KEY:-}" ]]; then
  export PRIVATE_KEY="$DEPLOYER_PRIVATE_KEY"
fi

DEPLOYER="${DEPLOYER_ADDRESS:-}"
if [[ -z "$DEPLOYER" && -n "${PRIVATE_KEY:-}" ]]; then
  DEPLOYER="$(cast wallet address "$PRIVATE_KEY" 2>/dev/null || true)"
fi
DEPLOYER="${DEPLOYER:-0x4A666F96fC8764181194447A7dFdb7d471b301C8}"

MAINNET_RPC="${ETHEREUM_MAINNET_RPC:-${ETH_MAINNET_RPC_URL:-}}"
CRONOS_RPC="${CRONOS_RPC_URL:-${CRONOS_RPC:-}}"
ARBITRUM_RPC="${ARBITRUM_MAINNET_RPC:-${ARBITRUM_MAINNET_RPC_URL:-${ARBITRUM_RPC_URL:-}}}"

WETH_138="0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2"
WETH10_138="0xf4BB2e28688e89fCcE3c0580D37d36A7672E8A9f"
LINK_138="0xb7721dD53A8c629d9f1Ba31a5819AFe250002b03"
CUSDT_138="0x93E66202A11B1772E55407B32B44e5Cd8eda7f22"
CUSDC_138="0xf22258f57794CC8E06237084b353Ab30fFfa640b"

# Default ~0.44 ETH at 35 gwei for a medium Arbitrum deploy batch. Override or disable via env:
#   GRU_FUNDING_ARBITRUM_THRESHOLD_WEI=0 → skip Arbitrum balance blocker
#   GRU_FUNDING_ARBITRUM_THRESHOLD_WEI=100000000000000000 → custom threshold (wei)
ARBITRUM_DEPLOY_THRESHOLD_WEI="${GRU_FUNDING_ARBITRUM_THRESHOLD_WEI:-440872740000000000}"
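# Back-of-envelope check of the default (illustrative only, not computed by this
# script): 440872740000000000 wei / 35 gwei = 12,596,364 gas of deploy headroom.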

probe_chain_id() {
  local rpc="$1"
  [[ -z "$rpc" ]] && return 1
  if command -v timeout >/dev/null 2>&1; then
    timeout 5s cast chain-id --rpc-url "$rpc" 2>/dev/null | awk '{print $1; exit}'
  else
    cast chain-id --rpc-url "$rpc" 2>/dev/null | awk '{print $1; exit}'
  fi
}

get_nonce() {
  local rpc="$1"
  [[ -z "$rpc" ]] && { echo "?"; return; }
  cast nonce "$DEPLOYER" --rpc-url "$rpc" 2>/dev/null | awk '{print $1; exit}' || echo "?"
}

get_native_balance() {
  local rpc="$1"
  [[ -z "$rpc" ]] && { echo "?"; return; }
  cast balance "$DEPLOYER" --rpc-url "$rpc" 2>/dev/null | awk '{print $1; exit}' || echo "?"
}

get_erc20_balance() {
  local token="$1"
  local rpc="$2"
  [[ -z "$rpc" ]] && { echo "?"; return; }
  cast call "$token" "balanceOf(address)(uint256)" "$DEPLOYER" --rpc-url "$rpc" 2>/dev/null | awk '{print $1; exit}' || echo "?"
}

fmt_units() {
  local raw="$1"
  local decimals="$2"
  local precision="${3:-6}"
  if [[ "$raw" == "?" ]]; then
    echo "?"
    return
  fi
  python3 - "$raw" "$decimals" "$precision" <<'PY'
import sys
raw = int(sys.argv[1])
decimals = int(sys.argv[2])
precision = int(sys.argv[3])
value = raw / (10 ** decimals)
fmt = f"{{:.{precision}f}}"
print(fmt.format(value))
PY
}
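# Example (hypothetical call): fmt_units 440872740000000000 18 9 -> 0.440872740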

pick_chain138_rpc() {
  local candidates=(
    "${RPC_URL_138_PUBLIC:-}"
    "https://rpc-http-pub.d-bis.org"
    "${RPC_URL_138:-}"
    "https://rpc.public-0138.defi-oracle.io"
  )
  local seen="|"
  local candidate=""
  local chain_id=""
  for candidate in "${candidates[@]}"; do
    [[ -z "$candidate" ]] && continue
    [[ "$seen" == *"|$candidate|"* ]] && continue
    seen="${seen}${candidate}|"
    chain_id="$(probe_chain_id "$candidate" || true)"
    if [[ "$chain_id" == "138" ]]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  printf '%s\n' "${RPC_URL_138_PUBLIC:-${RPC_URL_138:-https://rpc-http-pub.d-bis.org}}"
}

CHAIN138_RPC="$(pick_chain138_rpc)"

MAINNET_NONCE="$(get_nonce "$MAINNET_RPC")"
MAINNET_BALANCE="$(get_native_balance "$MAINNET_RPC")"
CRONOS_NONCE="$(get_nonce "$CRONOS_RPC")"
CRONOS_BALANCE="$(get_native_balance "$CRONOS_RPC")"
ARBITRUM_NONCE="$(get_nonce "$ARBITRUM_RPC")"
ARBITRUM_BALANCE="$(get_native_balance "$ARBITRUM_RPC")"

CHAIN138_NATIVE="$(get_native_balance "$CHAIN138_RPC")"
CHAIN138_WETH="$(get_erc20_balance "$WETH_138" "$CHAIN138_RPC")"
CHAIN138_WETH10="$(get_erc20_balance "$WETH10_138" "$CHAIN138_RPC")"
CHAIN138_LINK="$(get_erc20_balance "$LINK_138" "$CHAIN138_RPC")"
CHAIN138_CUSDT="$(get_erc20_balance "$CUSDT_138" "$CHAIN138_RPC")"
CHAIN138_CUSDC="$(get_erc20_balance "$CUSDC_138" "$CHAIN138_RPC")"

BLOCKERS=()
WARNINGS=()

if [[ "$CHAIN138_LINK" == "0" ]]; then
  WARNINGS+=("Chain 138 deployer holds 0 LINK, so LINK-funded relay or CCIP fee operations are unfunded from this wallet.")
fi

if [[ "${ARBITRUM_DEPLOY_THRESHOLD_WEI:-0}" != "0" ]] && [[ "$ARBITRUM_BALANCE" != "?" ]] && [[ "$ARBITRUM_BALANCE" -lt "$ARBITRUM_DEPLOY_THRESHOLD_WEI" ]]; then
  BLOCKERS+=("Arbitrum deployer balance is below the repo deploy threshold (${ARBITRUM_DEPLOY_THRESHOLD_WEI} wei; set GRU_FUNDING_ARBITRUM_THRESHOLD_WEI=0 to skip this gate).")
fi

if [[ "$MAINNET_BALANCE" != "?" ]] && [[ "$MAINNET_BALANCE" -lt "1000000000000000" ]]; then
  WARNINGS+=("Mainnet deployer balance is below 0.001 ETH and is likely too low for additional pool/protocol deployment work.")
fi

REPORT="$(jq -n \
  --arg generatedAt "$(date -u +"%Y-%m-%dT%H:%M:%SZ")" \
  --arg deployer "$DEPLOYER" \
  --arg mainnetRpc "${MAINNET_RPC:-}" \
  --arg cronosRpc "${CRONOS_RPC:-}" \
  --arg arbitrumRpc "${ARBITRUM_RPC:-}" \
  --arg chain138Rpc "${CHAIN138_RPC:-}" \
  --arg mainnetNonce "$MAINNET_NONCE" \
  --arg mainnetBalance "$MAINNET_BALANCE" \
  --arg mainnetBalanceEth "$(fmt_units "$MAINNET_BALANCE" 18 9)" \
  --arg cronosNonce "$CRONOS_NONCE" \
  --arg cronosBalance "$CRONOS_BALANCE" \
  --arg cronosBalanceNative "$(fmt_units "$CRONOS_BALANCE" 18 9)" \
  --arg arbitrumNonce "$ARBITRUM_NONCE" \
  --arg arbitrumBalance "$ARBITRUM_BALANCE" \
  --arg arbitrumBalanceEth "$(fmt_units "$ARBITRUM_BALANCE" 18 9)" \
  --arg chain138Native "$CHAIN138_NATIVE" \
  --arg chain138NativeEth "$(fmt_units "$CHAIN138_NATIVE" 18 9)" \
  --arg chain138Weth "$CHAIN138_WETH" \
  --arg chain138WethUnits "$(fmt_units "$CHAIN138_WETH" 18 9)" \
  --arg chain138Weth10 "$CHAIN138_WETH10" \
  --arg chain138Weth10Units "$(fmt_units "$CHAIN138_WETH10" 18 9)" \
  --arg chain138Link "$CHAIN138_LINK" \
  --arg chain138LinkUnits "$(fmt_units "$CHAIN138_LINK" 18 9)" \
  --arg chain138Cusdt "$CHAIN138_CUSDT" \
  --arg chain138CusdtUnits "$(fmt_units "$CHAIN138_CUSDT" 6 2)" \
  --arg chain138Cusdc "$CHAIN138_CUSDC" \
  --arg chain138CusdcUnits "$(fmt_units "$CHAIN138_CUSDC" 6 2)" \
  --argjson blockers "$(printf '%s\n' "${BLOCKERS[@]:-}" | sed '/^$/d' | jq -R . | jq -s .)" \
  --argjson warnings "$(printf '%s\n' "${WARNINGS[@]:-}" | sed '/^$/d' | jq -R . | jq -s .)" \
  '{
    generatedAt: $generatedAt,
    deployer: $deployer,
    networks: {
      mainnet: {
        rpc: $mainnetRpc,
        nonce: ($mainnetNonce | if . == "?" then . else tonumber end),
        balanceWei: ($mainnetBalance | if . == "?" then . else tonumber end),
        balanceEth: ($mainnetBalanceEth | if . == "?" then . else tonumber end)
      },
      cronos: {
        rpc: $cronosRpc,
        nonce: ($cronosNonce | if . == "?" then . else tonumber end),
        balanceWei: ($cronosBalance | if . == "?" then . else tonumber end),
        balanceNative: ($cronosBalanceNative | if . == "?" then . else tonumber end)
      },
      arbitrum: {
        rpc: $arbitrumRpc,
        nonce: ($arbitrumNonce | if . == "?" then . else tonumber end),
        balanceWei: ($arbitrumBalance | if . == "?" then . else tonumber end),
        balanceEth: ($arbitrumBalanceEth | if . == "?" then . else tonumber end)
      },
      chain138: {
        rpc: $chain138Rpc,
        nativeWei: ($chain138Native | if . == "?" then . else tonumber end),
        nativeEth: ($chain138NativeEth | if . == "?" then . else tonumber end),
        weth: {
          raw: ($chain138Weth | if . == "?" then . else tonumber end),
          units: ($chain138WethUnits | if . == "?" then . else tonumber end)
        },
        weth10: {
          raw: ($chain138Weth10 | if . == "?" then . else tonumber end),
          units: ($chain138Weth10Units | if . == "?" then . else tonumber end)
        },
        link: {
          raw: ($chain138Link | if . == "?" then . else tonumber end),
          units: ($chain138LinkUnits | if . == "?" then . else tonumber end)
        },
        cUSDT: {
          raw: ($chain138Cusdt | if . == "?" then . else tonumber end),
          units: ($chain138CusdtUnits | if . == "?" then . else tonumber end)
        },
        cUSDC: {
          raw: ($chain138Cusdc | if . == "?" then . else tonumber end),
          units: ($chain138CusdcUnits | if . == "?" then . else tonumber end)
        }
      }
    },
    blockers: $blockers,
    warnings: $warnings
  }')"

if (( OUTPUT_JSON == 1 )); then
  printf '%s\n' "$REPORT"
  exit 0
fi

printf '=== GRU V2 Deployer Funding Status ===\n'
printf 'Deployer: %s\n\n' "$DEPLOYER"

printf 'Mainnet: nonce=%s balance=%s wei (~%s ETH)\n' "$MAINNET_NONCE" "$MAINNET_BALANCE" "$(fmt_units "$MAINNET_BALANCE" 18 9)"
printf 'Cronos: nonce=%s balance=%s wei (~%s native)\n' "$CRONOS_NONCE" "$CRONOS_BALANCE" "$(fmt_units "$CRONOS_BALANCE" 18 9)"
printf 'Arbitrum: nonce=%s balance=%s wei (~%s ETH)\n' "$ARBITRUM_NONCE" "$ARBITRUM_BALANCE" "$(fmt_units "$ARBITRUM_BALANCE" 18 9)"
printf '\n'
printf 'Chain 138 native: %s wei (~%s ETH)\n' "$CHAIN138_NATIVE" "$(fmt_units "$CHAIN138_NATIVE" 18 9)"
printf 'Chain 138 WETH: %s raw (~%s)\n' "$CHAIN138_WETH" "$(fmt_units "$CHAIN138_WETH" 18 9)"
printf 'Chain 138 WETH10: %s raw (~%s)\n' "$CHAIN138_WETH10" "$(fmt_units "$CHAIN138_WETH10" 18 9)"
printf 'Chain 138 LINK: %s raw (~%s)\n' "$CHAIN138_LINK" "$(fmt_units "$CHAIN138_LINK" 18 9)"
printf 'Chain 138 cUSDT: %s raw (~%s)\n' "$CHAIN138_CUSDT" "$(fmt_units "$CHAIN138_CUSDT" 6 2)"
printf 'Chain 138 cUSDC: %s raw (~%s)\n' "$CHAIN138_CUSDC" "$(fmt_units "$CHAIN138_CUSDC" 6 2)"

if ((${#BLOCKERS[@]} > 0)); then
  printf '\nActive funding blockers:\n'
  for item in "${BLOCKERS[@]}"; do
    printf -- '- %s\n' "$item"
  done
fi

if ((${#WARNINGS[@]} > 0)); then
  printf '\nWarnings:\n'
  for item in "${WARNINGS[@]}"; do
    printf -- '- %s\n' "$item"
  done
fi
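Because the `--json` report exposes `blockers` as a plain array, a simple CI gate can sit on top of it. A minimal sketch, assuming the report has already been captured to `funding.json` (a hypothetical filename):

```shell
# Fail fast when the saved funding report lists any active blockers.
# funding.json is a hypothetical capture of the --json output above.
count="$(jq '.blockers | length' funding.json)"
if [ "$count" -gt 0 ]; then
  echo "funding blockers present: $count" >&2
  exit 1
fi
echo "no funding blockers"
```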
489  scripts/verify/check-gru-v2-deployment-queue.sh  (Normal file)
@@ -0,0 +1,489 @@
#!/usr/bin/env bash
# Generate an operator-grade GRU v2 public deployment queue across Wave 1
# transport activation, public-chain cW pool deployment, and protocol staging.
#
# Usage:
#   bash scripts/verify/check-gru-v2-deployment-queue.sh
#   bash scripts/verify/check-gru-v2-deployment-queue.sh --json
#   bash scripts/verify/check-gru-v2-deployment-queue.sh --write-explorer-config

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
export PROJECT_ROOT

OUTPUT_JSON=0
WRITE_EXPLORER_CONFIG=0

for arg in "$@"; do
  case "$arg" in
    --json) OUTPUT_JSON=1 ;;
    --write-explorer-config) WRITE_EXPLORER_CONFIG=1 ;;
    *)
      echo "Unknown argument: $arg" >&2
      exit 2
      ;;
  esac
done

command -v node >/dev/null 2>&1 || {
  echo "[FAIL] Missing required command: node" >&2
  exit 1
}

OUTPUT_JSON="$OUTPUT_JSON" WRITE_EXPLORER_CONFIG="$WRITE_EXPLORER_CONFIG" node <<'NODE'
const fs = require('fs');
const path = require('path');

const root = process.env.PROJECT_ROOT;
const outputJson = process.env.OUTPUT_JSON === '1';
const writeExplorerConfig = process.env.WRITE_EXPLORER_CONFIG === '1';
const explorerConfigPath = path.join(
  root,
  'explorer-monorepo/backend/api/rest/config/metamask/GRU_V2_DEPLOYMENT_QUEUE.json'
);

function readJson(relPath) {
  return JSON.parse(fs.readFileSync(path.join(root, relPath), 'utf8'));
}

const rollout = readJson('config/gru-global-priority-currency-rollout.json');
const manifest = readJson('config/gru-iso4217-currency-manifest.json');
const transport = readJson('config/gru-transport-active.json');
const deployment = readJson('cross-chain-pmm-lps/config/deployment-status.json');
const poolMatrix = readJson('cross-chain-pmm-lps/config/pool-matrix.json');
const protocolPlan = readJson('config/gru-v2-public-protocol-rollout-plan.json');
const mapping = readJson('config/token-mapping-multichain.json');
const routingRegistry = readJson('config/routing-registry.json');

const manifestByCode = new Map((manifest.currencies || []).map((item) => [item.code, item]));
const transportSymbols = new Set((transport.enabledCanonicalTokens || []).map((item) => item.symbol));
const desiredChainIds = rollout.desiredDestinationNetworks?.evmPublicCwMeshChainIds || [];
const poolMatrixTokens = new Set(poolMatrix.cwTokens || []);
const cToCw = mapping.cToCwSymbolMapping || {};

const wave1Assets = (rollout.assets || []).filter((asset) => asset.wave === 'wave1');
const wave1WrappedSymbols = [...new Set(wave1Assets.flatMap((asset) => (asset.tokenForms || []).map((item) => item.wrappedSymbol)))];
const wave1CanonicalSymbols = [...new Set(wave1Assets.flatMap((asset) => (asset.tokenForms || []).map((item) => item.canonicalSymbol)))];
const poolMatrixMissingWave1 = wave1WrappedSymbols.filter((symbol) => !poolMatrixTokens.has(symbol));
const arbitrumRoute = (routingRegistry.routes || []).find((route) => route.fromChain === 138 && route.toChain === 42161 && route.asset === 'WETH');
const arbitrumHubBlocker = {
  active: true,
  fromChain: 138,
  viaChain: 1,
  toChain: 42161,
  currentPath: '138 -> Mainnet -> Arbitrum',
  sourceBridge: '0xc9901ce2Ddb6490FAA183645147a87496d8b20B6',
  failedTxHash: '0x97df657f0e31341ca852666766e553650531bbcc86621246d041985d7261bb07',
  note: (arbitrumRoute && arbitrumRoute.note) || 'Use Mainnet hub; the current Mainnet -> Arbitrum WETH9 leg is blocked.'
};

function deriveRepoState(bits) {
  if (bits.transportActive) return 'live_transport';
  if (bits.canonical138Deployed) return 'canonical_only';
  if (bits.manifestPresent) return 'manifest_only';
  if (bits.cToCwMapped) return 'mapping_only';
  return 'backlog';
}
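// Precedence example (hypothetical bits): transportActive wins over every other
// flag, so deriveRepoState({ manifestPresent: true, canonical138Deployed: true,
// cToCwMapped: true, transportActive: false }) derives 'canonical_only'.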

const allAssetResults = (rollout.assets || []).map((asset) => {
  const manifestEntry = manifestByCode.get(asset.code);
  const canonicalSymbols = (asset.tokenForms || []).map((item) => item.canonicalSymbol);
  const wrappedSymbols = (asset.tokenForms || []).map((item) => item.wrappedSymbol);
  const manifestPresent = Boolean(manifestEntry);
  const canonical138Deployed = Boolean(manifestEntry?.status?.deployed);
  const transportActive = canonicalSymbols.some((symbol) => transportSymbols.has(symbol));
  const cToCwMapped = canonicalSymbols.length > 0 && canonicalSymbols.every((symbol, idx) => cToCw[symbol] === wrappedSymbols[idx]);
  return {
    code: asset.code,
    wave: asset.wave,
    currentRepoState: deriveRepoState({
      manifestPresent,
      canonical138Deployed,
      cToCwMapped,
      transportActive
    })
  };
});

const rolloutBacklogAssets = allAssetResults.filter((item) => item.currentRepoState === 'backlog').length;

function normalizePair(pair) {
  return String(pair || '').trim().toUpperCase();
}

function poolEntryMatchesPair(entry, pair) {
  const normalized = normalizePair(pair);
  const [base, quote] = normalized.split('/');
  const baseCandidate = String(entry.base || entry.base_token || '').trim().toUpperCase();
  const quoteCandidate = String(entry.quote || entry.quote_token || '').trim().toUpperCase();
  return baseCandidate === base && quoteCandidate === quote;
}
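// Matching is case-insensitive on both legs, e.g. (hypothetical entry):
// poolEntryMatchesPair({ base: 'cwBTC', quote: 'usdc' }, 'CWBTC/USDC') === true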

const assetQueue = wave1Assets.map((asset) => {
  const manifestEntry = manifestByCode.get(asset.code);
  const canonicalSymbols = (asset.tokenForms || []).map((item) => item.canonicalSymbol);
  const wrappedSymbols = (asset.tokenForms || []).map((item) => item.wrappedSymbol);
  const transportActive = canonicalSymbols.some((symbol) => transportSymbols.has(symbol));
  const coveredByPoolMatrix = wrappedSymbols.every((symbol) => poolMatrixTokens.has(symbol));
  return {
    code: asset.code,
    name: asset.name,
    canonicalSymbols,
    wrappedSymbols,
    transportActive,
    canonicalDeployed: Boolean(manifestEntry?.status?.deployed),
    x402Ready: Boolean(manifestEntry?.status?.x402Ready),
    coveredByPoolMatrix,
    nextSteps: transportActive
      ? ['monitor_live_transport', 'deploy_public_pools']
      : ['enable_bridge_controls', 'set_max_outstanding', 'promote_transport_overlay', 'deploy_public_pools']
  };
});

const chainQueue = desiredChainIds.map((chainId) => {
  const chain = deployment.chains?.[String(chainId)] || {};
  const matrix = poolMatrix.chains?.[String(chainId)] || {};
  const cwTokens = Object.keys(chain.cwTokens || {});
  const pmmPools = Array.isArray(chain.pmmPools) ? chain.pmmPools : [];
  const plannedWave1Pairs = (matrix.poolsFirst || []).filter((pair) => {
    return wave1WrappedSymbols.some((symbol) => normalizePair(pair).startsWith(`${symbol.toUpperCase()}/`));
  });
  const recordedWave1Pairs = plannedWave1Pairs.filter((pair) => pmmPools.some((entry) => poolEntryMatchesPair(entry, pair)));
  return {
    chainId,
    name: matrix.name || chain.name || `Chain ${chainId}`,
    hubStable: matrix.hubStable || null,
    bridgeAvailable: chain.bridgeAvailable === true,
    cwTokenCount: cwTokens.length,
    wave1WrappedCoverage: wave1WrappedSymbols.filter((symbol) => cwTokens.includes(symbol)).length,
    plannedWave1Pairs,
    recordedWave1Pairs,
    nextStep: cwTokens.length === 0
      ? 'complete_cw_suite_then_deploy_pools'
      : recordedWave1Pairs.length === plannedWave1Pairs.length && plannedWave1Pairs.length > 0
        ? 'verify_and_route'
        : 'deploy_first_tier_wave1_pools'
  };
});

const totalRecordedPublicPools = desiredChainIds.reduce((sum, chainId) => {
  const chain = deployment.chains?.[String(chainId)] || {};
  const pmmPools = Array.isArray(chain.pmmPools) ? chain.pmmPools : [];
  return sum + pmmPools.length;
}, 0);

const protocolQueue = (protocolPlan.protocols || []).map((protocol) => {
  if (protocol.key === 'dodo_v3_d3mm') {
    return {
      key: protocol.key,
      name: protocol.name,
      role: protocol.role,
      deploymentStage: protocol.deploymentStage,
      activePublicPools: 0,
      currentState: 'pilot_live_chain138_only',
      activationDependsOn: protocol.activationDependsOn || []
    };
  }

  if (protocol.key === 'dodo_pmm') {
    return {
      key: protocol.key,
      name: protocol.name,
      role: protocol.role,
      deploymentStage: protocol.deploymentStage,
      activePublicPools: totalRecordedPublicPools,
      currentState: totalRecordedPublicPools > 0 ? 'partially_live_on_public_cw_mesh' : 'queued_not_live',
      activationDependsOn: protocol.activationDependsOn || []
    };
  }

  return {
    key: protocol.key,
    name: protocol.name,
    role: protocol.role,
    deploymentStage: protocol.deploymentStage,
    activePublicPools: 0,
    currentState: 'queued_not_live',
    activationDependsOn: protocol.activationDependsOn || []
  };
});

const summary = {
  wave1Assets: assetQueue.length,
  wave1TransportActive: assetQueue.filter((item) => item.transportActive).length,
  wave1TransportPending: assetQueue.filter((item) => !item.transportActive).length,
  wave1WrappedSymbols: wave1WrappedSymbols.length,
  wave1WrappedSymbolsCoveredByPoolMatrix: wave1WrappedSymbols.length - poolMatrixMissingWave1.length,
  wave1WrappedSymbolsMissingFromPoolMatrix: poolMatrixMissingWave1.length,
  desiredPublicEvmTargets: chainQueue.length,
  chainsWithLoadedCwSuites: chainQueue.filter((item) => item.cwTokenCount > 0).length,
  chainsMissingCwSuites: chainQueue.filter((item) => item.cwTokenCount === 0).length,
  firstTierWave1PoolsPlanned: chainQueue.reduce((sum, item) => sum + item.plannedWave1Pairs.length, 0),
  firstTierWave1PoolsRecordedLive: chainQueue.reduce((sum, item) => sum + item.recordedWave1Pairs.length, 0),
  protocolsTracked: protocolQueue.length,
  protocolsLive: protocolQueue.filter((item) => item.activePublicPools > 0).length
};

const blockers = [];
if (poolMatrixMissingWave1.length > 0) {
  blockers.push(`Wave 1 wrapped symbols missing from pool-matrix: ${poolMatrixMissingWave1.join(', ')}.`);
}
const missingSuiteChains = chainQueue.filter((item) => item.cwTokenCount === 0);
if (missingSuiteChains.length > 0) {
  blockers.push(`Desired public EVM targets still missing cW suites: ${missingSuiteChains.map((item) => item.name).join(', ')}.`);
}
const pendingWave1 = assetQueue.filter((item) => !item.transportActive);
if (pendingWave1.length > 0) {
  blockers.push(`Wave 1 transport is still pending for: ${pendingWave1.map((item) => item.code).join(', ')}.`);
}
if (summary.firstTierWave1PoolsRecordedLive === 0) {
  blockers.push('No first-tier Wave 1 public cW pools are recorded live yet across the tracked public EVM mesh.');
}
if (protocolQueue.every((item) => item.activePublicPools === 0)) {
  blockers.push('All tracked public protocols remain queued: Uniswap v3, DODO PMM, Balancer, Curve 3, and 1inch.');
}
if (arbitrumHubBlocker.active) {
  blockers.push(`Arbitrum bootstrap remains blocked on the current Mainnet hub leg: tx ${arbitrumHubBlocker.failedTxHash} reverted before any bridge event was emitted.`);
}

const resolutionMatrix = [
  {
    key: 'mainnet_arbitrum_hub_blocked',
    state: arbitrumHubBlocker.active ? 'open' : 'resolved',
    blocker: arbitrumHubBlocker.active
      ? `Arbitrum bootstrap remains blocked on the current Mainnet hub leg: tx ${arbitrumHubBlocker.failedTxHash} reverted from ${arbitrumHubBlocker.sourceBridge} before any bridge event was emitted.`
      : 'The Mainnet -> Arbitrum WETH9 hub leg is healthy.',
    targets: [
      {
        fromChain: arbitrumHubBlocker.fromChain,
        viaChain: arbitrumHubBlocker.viaChain,
        toChain: arbitrumHubBlocker.toChain,
        currentPath: arbitrumHubBlocker.currentPath
      }
    ],
    resolution: [
      'Repair or replace the current Mainnet WETH9 fan-out bridge before treating Arbitrum as an available public bootstrap target.',
      'Retest 138 -> Mainnet first-hop delivery, then rerun a smaller Mainnet -> Arbitrum send and require destination bridge events before promoting the route.',
      'Keep Arbitrum marked blocked in the explorer and status surfaces until the hub leg emits and completes normally.'
    ],
    runbooks: [
      'docs/07-ccip/CROSS_NETWORK_FUNDING_BOOTSTRAP_STRATEGY.md',
      'docs/07-ccip/CHAIN138_PUBLIC_CHAIN_UNLOAD_ROUTES.md',
      'docs/00-meta/REQUIRED_FIXES_GAPS_AND_DEPLOYMENTS_LIST.md'
    ],
    exitCriteria: 'A fresh Mainnet -> Arbitrum WETH9 send emits bridge events and completes destination delivery successfully.'
  },
  {
    key: 'missing_public_cw_suites',
    state: missingSuiteChains.length === 0 ? 'resolved' : 'open',
    blocker: missingSuiteChains.length === 0
      ? 'All desired public EVM targets have cW suites.'
      : `Desired public EVM targets still missing cW suites: ${missingSuiteChains.map((item) => item.name).join(', ')}.`,
    targets: missingSuiteChains.map((item) => ({
      chainId: item.chainId,
      name: item.name,
      nextStep: item.nextStep
    })),
    resolution: [
      'Deploy the full cW core suite on each missing destination chain using the existing CW deploy-and-wire flow.',
      'Grant bridge mint/burn roles and mark the corridor live in cross-chain-pmm-lps/config/deployment-status.json.',
      'Update public token lists / explorer config, then rerun check-cw-evm-deployment-mesh.sh and check-cw-public-pool-status.sh.'
    ],
    runbooks: [
      'docs/07-ccip/CW_DEPLOY_AND_WIRE_RUNBOOK.md',
      'docs/03-deployment/PHASE_C_CW_AND_EDGE_POOLS_RUNBOOK.md',
      'scripts/deployment/run-cw-remaining-steps.sh',
      'scripts/verify/check-cw-evm-deployment-mesh.sh'
    ],
    exitCriteria: missingSuiteChains.length === 0
      ? 'All desired public EVM targets report non-zero cW suites and bridgeAvailable=true in deployment-status.json.'
      : `${missingSuiteChains.map((item) => item.name).join(', ')} report non-zero cW suites and become bridgeAvailable in deployment-status.json.`
  },
  {
    key: 'wave1_transport_pending',
    state: pendingWave1.length === 0 ? 'resolved' : 'open',
    blocker: pendingWave1.length === 0
      ? 'Wave 1 transport is fully active.'
      : `Wave 1 transport is still pending for: ${pendingWave1.map((item) => item.code).join(', ')}.`,
    targets: pendingWave1.map((item) => ({
      code: item.code,
      canonicalSymbols: item.canonicalSymbols,
      wrappedSymbols: item.wrappedSymbols
    })),
    resolution: [
      'Enable bridge controls and supervision policy for each Wave 1 canonical asset on Chain 138.',
      'Set max-outstanding / capacity controls, then promote the canonical symbols into config/gru-transport-active.json.',
      'Verify the overlay promotion with check-gru-global-priority-rollout.sh and check-gru-v2-chain138-readiness.sh before attaching public liquidity.'
    ],
    runbooks: [
      'docs/04-configuration/GRU_GLOBAL_PRIORITY_CROSS_CHAIN_ROLLOUT.md',
      'docs/04-configuration/GRU_TRANSPORT_ACTIVE_JSON.md',
      'scripts/verify/check-gru-global-priority-rollout.sh',
      'scripts/verify/check-gru-v2-chain138-readiness.sh'
    ],
    exitCriteria: 'Wave 1 transport pending count reaches zero and the overlay reports the seven non-USD assets as live_transport.'
  },
  {
    key: 'first_tier_public_pools_not_live',
    state: summary.firstTierWave1PoolsRecordedLive > 0 ? 'in_progress' : 'open',
    blocker: summary.firstTierWave1PoolsRecordedLive > 0
      ? 'Some first-tier Wave 1 public cW pools are live, but the rollout is incomplete.'
      : 'No first-tier Wave 1 public cW pools are recorded live yet across the tracked public EVM mesh.',
    targets: chainQueue.map((item) => ({
      chainId: item.chainId,
      name: item.name,
      hubStable: item.hubStable,
      plannedWave1Pairs: item.plannedWave1Pairs.length,
      recordedWave1Pairs: item.recordedWave1Pairs.length
    })),
    resolution: [
      'Deploy the first-tier cW/hub-stable pairs from pool-matrix.json on every chain with a loaded cW suite.',
      'Seed the new pools with initial liquidity and record the resulting pool addresses in cross-chain-pmm-lps/config/deployment-status.json.',
      'Use check-cw-public-pool-status.sh to verify the mesh is no longer empty before surfacing the venues publicly.'
    ],
    runbooks: [
      'docs/03-deployment/SINGLE_SIDED_LPS_PUBLIC_NETWORKS_RUNBOOK.md',
      'docs/03-deployment/PMM_FULL_MESH_AND_PUBLIC_SINGLE_SIDED_PLAN.md',
      'cross-chain-pmm-lps/config/pool-matrix.json',
      'scripts/verify/check-cw-public-pool-status.sh'
    ],
    exitCriteria: 'First-tier Wave 1 pools are recorded live in deployment-status.json and check-cw-public-pool-status.sh reports non-zero pool coverage.'
  },
  {
    key: 'public_protocols_queued',
    state: protocolQueue.every((item) => item.activePublicPools === 0) ? 'open' : 'in_progress',
    blocker: protocolQueue.every((item) => item.activePublicPools === 0)
      ? 'All tracked public protocols remain queued: Uniswap v3, DODO PMM, Balancer, Curve 3, and 1inch.'
      : 'Some tracked public protocols have begun activation, but the full protocol stack is not live yet.',
    targets: protocolQueue.map((item) => ({
      key: item.key,
      name: item.name,
      deploymentStage: item.deploymentStage,
      activationDependsOn: item.activationDependsOn
    })),
    resolution: [
      'Stage 1: activate Uniswap v3 and DODO PMM once first-tier cW pools exist on the public mesh.',
      'Stage 2: activate Balancer and Curve 3 only after first-tier stable liquidity is already live.',
      'Stage 3: expose 1inch after the underlying pools, routing/indexer visibility, and public provider-capability wiring are in place.'
    ],
    runbooks: [
      'config/gru-v2-public-protocol-rollout-plan.json',
      'docs/11-references/GRU_V2_PUBLIC_PROTOCOL_DEPLOYMENT_STATUS.md',
      'scripts/verify/check-gru-v2-public-protocols.sh'
    ],
    exitCriteria: 'The public protocol status surface reports non-zero active cW pools for the staged venues.'
|
||||
},
|
||||
{
|
||||
key: 'global_priority_backlog',
|
||||
state: rolloutBacklogAssets === 0 ? 'resolved' : 'open',
|
||||
blocker: rolloutBacklogAssets === 0
|
||||
? 'No ranked GRU backlog assets remain outside the live manifest.'
|
||||
: `The ranked GRU global rollout still has ${rolloutBacklogAssets} backlog assets outside the live manifest.`,
|
||||
targets: [
|
||||
{ backlogAssets: rolloutBacklogAssets }
|
||||
],
|
||||
resolution: [
|
||||
'Complete Wave 1 transport and first-tier public liquidity before promoting the remaining ranked assets.',
|
||||
'For each backlog asset, add canonical + wrapped symbols to the manifest/rollout plan, deploy contracts, and extend the public pool matrix.',
|
||||
'Promote each new asset through the same transport and public-liquidity gates used for Wave 1.'
|
||||
],
|
||||
runbooks: [
|
||||
'config/gru-global-priority-currency-rollout.json',
|
||||
'config/gru-iso4217-currency-manifest.json',
|
||||
'docs/04-configuration/GRU_GLOBAL_PRIORITY_CROSS_CHAIN_ROLLOUT.md',
|
||||
'scripts/verify/check-gru-global-priority-rollout.sh'
|
||||
],
|
||||
exitCriteria: 'Backlog assets count reaches zero in check-gru-global-priority-rollout.sh.'
|
||||
},
|
||||
{
|
||||
key: 'solana_non_evm_program',
|
||||
state: ((rollout.desiredDestinationNetworks?.nonEvmRelayPrograms || []).length || 0) === 0 ? 'resolved' : 'planned',
|
||||
blocker: ((rollout.desiredDestinationNetworks?.nonEvmRelayPrograms || []).length || 0) === 0
|
||||
? 'No desired non-EVM GRU targets remain.'
|
||||
: `Desired non-EVM GRU targets remain planned / relay-dependent: ${(rollout.desiredDestinationNetworks.nonEvmRelayPrograms || []).map((item) => item.identifier).join(', ')}.`,
|
||||
targets: (rollout.desiredDestinationNetworks?.nonEvmRelayPrograms || []).map((item) => ({
|
||||
identifier: item.identifier,
|
||||
label: item.label || item.identifier
|
||||
})),
|
||||
resolution: [
|
||||
'Define the destination-chain token/program model first: SPL or wrapped-account representation, authority model, and relay custody surface.',
|
||||
'Implement the relay/program path and only then promote Solana from desired-target status into the active transport inventory.',
|
||||
'Add dedicated verifier coverage before marking Solana live anywhere in the explorer or status docs.'
|
||||
],
|
||||
runbooks: [
|
||||
'docs/04-configuration/ADDITIONAL_PATHS_AND_EXTENSIONS.md',
|
||||
'docs/04-configuration/GRU_GLOBAL_PRIORITY_CROSS_CHAIN_ROLLOUT.md'
|
||||
],
|
||||
exitCriteria: 'Solana has a real relay/program surface, a verifier, and is no longer only listed as a desired non-EVM target.'
|
||||
}
|
||||
];
|
||||
|
||||
const report = {
|
||||
generatedAt: new Date().toISOString(),
|
||||
summary,
|
||||
assetQueue,
|
||||
chainQueue,
|
||||
protocolQueue,
|
||||
blockers,
|
||||
resolutionMatrix,
|
||||
notes: [
|
||||
'This queue is an operator/deployment planning surface. It does not mark queued pools or transports as live.',
|
||||
'Chain 138 canonical venues remain a separate live surface from the public cW mesh.'
|
||||
]
|
||||
};
|
||||
|
||||
if (writeExplorerConfig) {
|
||||
fs.writeFileSync(explorerConfigPath, `${JSON.stringify(report, null, 2)}\n`);
|
||||
}
|
||||
|
||||
if (outputJson) {
|
||||
console.log(JSON.stringify(report, null, 2));
|
||||
process.exit(0);
|
||||
}
|
||||
|
||||
console.log('=== GRU V2 Deployment Queue ===');
|
||||
console.log(`Wave 1 assets: ${summary.wave1Assets}`);
|
||||
console.log(`Wave 1 transport active: ${summary.wave1TransportActive}`);
|
||||
console.log(`Wave 1 transport pending: ${summary.wave1TransportPending}`);
|
||||
console.log(`Wave 1 wrapped symbols covered by pool-matrix: ${summary.wave1WrappedSymbolsCoveredByPoolMatrix}/${summary.wave1WrappedSymbols}`);
|
||||
console.log(`Desired public EVM targets: ${summary.desiredPublicEvmTargets}`);
|
||||
console.log(`Chains with loaded cW suites: ${summary.chainsWithLoadedCwSuites}`);
|
||||
console.log(`Chains missing cW suites: ${summary.chainsMissingCwSuites}`);
|
||||
console.log(`First-tier Wave 1 pools planned: ${summary.firstTierWave1PoolsPlanned}`);
|
||||
console.log(`First-tier Wave 1 pools recorded live: ${summary.firstTierWave1PoolsRecordedLive}`);
|
||||
console.log(`Tracked protocols: ${summary.protocolsTracked}`);
|
||||
console.log('');
|
||||
console.log('Wave 1 asset queue:');
|
||||
for (const asset of assetQueue) {
|
||||
console.log(`- ${asset.code}: transport=${asset.transportActive ? 'live' : 'pending'}; pool-matrix=${asset.coveredByPoolMatrix ? 'covered' : 'missing'}; next=${asset.nextSteps.join(',')}`);
|
||||
}
|
||||
console.log('');
|
||||
console.log('Per-chain queue:');
|
||||
for (const chain of chainQueue) {
|
||||
console.log(`- ${chain.chainId} ${chain.name}: hub=${chain.hubStable || 'n/a'}; cw=${chain.cwTokenCount}; plannedWave1Pairs=${chain.plannedWave1Pairs.length}; liveWave1Pairs=${chain.recordedWave1Pairs.length}; next=${chain.nextStep}`);
|
||||
}
|
||||
console.log('');
|
||||
console.log('Protocol queue:');
|
||||
for (const protocol of protocolQueue) {
|
||||
console.log(`- ${protocol.name}: ${protocol.currentState}; stage=${protocol.deploymentStage}`);
|
||||
}
|
||||
if (blockers.length > 0) {
|
||||
console.log('');
|
||||
console.log('Active blockers:');
|
||||
for (const blocker of blockers) {
|
||||
console.log(`- ${blocker}`);
|
||||
}
|
||||
console.log('');
|
||||
console.log('Resolution paths:');
|
||||
for (const entry of resolutionMatrix) {
|
||||
if (entry.state === 'resolved') continue;
|
||||
console.log(`- ${entry.key}: ${entry.exitCriteria}`);
|
||||
}
|
||||
}
|
||||
if (writeExplorerConfig) {
|
||||
console.log('');
|
||||
console.log(`Wrote: ${explorerConfigPath}`);
|
||||
}
|
||||
NODE
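Downstream tooling can consume the queue report this heredoc writes (or prints with `--json`). A minimal Node sketch of the "Resolution paths" filter above, using the report's `resolutionMatrix` field names (`key`, `state`, `exitCriteria`) with hypothetical sample values:

```javascript
// Sketch: list unresolved resolution-matrix entries from a queue report.
// Field names mirror the report object above; the sample data is made up.
const sampleReport = {
  summary: { wave1Assets: 8, wave1TransportActive: 1, wave1TransportPending: 7 },
  blockers: ['Wave 1 transport is still pending for: EUR, GBP.'],
  resolutionMatrix: [
    { key: 'wave1_transport_pending', state: 'open', exitCriteria: 'Pending count reaches zero.' },
    { key: 'global_priority_backlog', state: 'resolved', exitCriteria: 'Backlog reaches zero.' }
  ]
};

// Same filter the script applies when printing "Resolution paths":
// resolved entries are skipped, open/in_progress entries are listed.
const openEntries = sampleReport.resolutionMatrix.filter((entry) => entry.state !== 'resolved');
for (const entry of openEntries) {
  console.log(`- ${entry.key}: ${entry.exitCriteria}`);
}
// Prints only the wave1_transport_pending line.
```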
321  scripts/verify/check-gru-v2-public-protocols.sh  Normal file
@@ -0,0 +1,321 @@
#!/usr/bin/env bash
# Summarize the GRU v2 public-network rollout posture across the public EVM cW*
# mesh, Wave 1 transport activation, and public protocol liquidity.
#
# Usage:
#   bash scripts/verify/check-gru-v2-public-protocols.sh
#   bash scripts/verify/check-gru-v2-public-protocols.sh --json
#   bash scripts/verify/check-gru-v2-public-protocols.sh --write-explorer-config

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
export PROJECT_ROOT

OUTPUT_JSON=0
WRITE_EXPLORER_CONFIG=0

for arg in "$@"; do
  case "$arg" in
    --json) OUTPUT_JSON=1 ;;
    --write-explorer-config) WRITE_EXPLORER_CONFIG=1 ;;
    *)
      echo "Unknown argument: $arg" >&2
      exit 2
      ;;
  esac
done

command -v node >/dev/null 2>&1 || {
  echo "[FAIL] Missing required command: node" >&2
  exit 1
}

OUTPUT_JSON="$OUTPUT_JSON" WRITE_EXPLORER_CONFIG="$WRITE_EXPLORER_CONFIG" node <<'NODE'
const fs = require('fs');
const path = require('path');

const root = process.env.PROJECT_ROOT;
const outputJson = process.env.OUTPUT_JSON === '1';
const writeExplorerConfig = process.env.WRITE_EXPLORER_CONFIG === '1';
const explorerConfigPath = path.join(
  root,
  'explorer-monorepo/backend/api/rest/config/metamask/GRU_V2_PUBLIC_DEPLOYMENT_STATUS.json'
);

function readJson(relPath) {
  return JSON.parse(fs.readFileSync(path.join(root, relPath), 'utf8'));
}

const rollout = readJson('config/gru-global-priority-currency-rollout.json');
const manifest = readJson('config/gru-iso4217-currency-manifest.json');
const transport = readJson('config/gru-transport-active.json');
const deployment = readJson('cross-chain-pmm-lps/config/deployment-status.json');
const mapping = readJson('config/token-mapping-multichain.json');
const poolMatrix = readJson('cross-chain-pmm-lps/config/pool-matrix.json');
const routingRegistry = readJson('config/routing-registry.json');

const desiredChainIds = rollout.desiredDestinationNetworks?.evmPublicCwMeshChainIds || [];
const chainNames = mapping.chainNames || {};
const manifestByCode = new Map((manifest.currencies || []).map((item) => [item.code, item]));
const transportBySymbol = new Map((transport.enabledCanonicalTokens || []).map((item) => [item.symbol, item]));

const coreCwSymbols = [
  'cWUSDT',
  'cWUSDC',
  'cWEURC',
  'cWEURT',
  'cWGBPC',
  'cWGBPT',
  'cWAUDC',
  'cWJPYC',
  'cWCHFC',
  'cWCADC',
  'cWXAUC',
  'cWXAUT'
];

const publicProtocols = [
  { key: 'uniswap_v3', name: 'Uniswap v3' },
  { key: 'balancer', name: 'Balancer' },
  { key: 'curve_3', name: 'Curve 3' },
  { key: 'dodo_pmm', name: 'DODO PMM' },
  { key: 'one_inch', name: '1inch' }
];

const desiredChainRows = desiredChainIds.map((chainId) => {
  const chain = deployment.chains?.[String(chainId)] || {};
  const cwTokens = Object.keys(chain.cwTokens || {});
  const pmmPools = Array.isArray(chain.pmmPools) ? chain.pmmPools : [];
  return {
    chainId,
    name: chain.name || chainNames[String(chainId)] || `Chain ${chainId}`,
    cwTokenCount: cwTokens.length,
    hasFullCoreSuite: coreCwSymbols.every((symbol) => cwTokens.includes(symbol)),
    bridgeAvailable: chain.bridgeAvailable === true,
    pmmPoolCount: pmmPools.length
  };
});

const desiredButNotLoaded = desiredChainRows.filter((row) => row.cwTokenCount === 0);
const loadedChains = desiredChainRows.filter((row) => row.cwTokenCount > 0);
const fullCoreSuiteChains = desiredChainRows.filter((row) => row.hasFullCoreSuite);
const chainsWithAnyPools = desiredChainRows.filter((row) => row.pmmPoolCount > 0);
const totalRecordedPublicPools = desiredChainRows.reduce((sum, row) => sum + row.pmmPoolCount, 0);
const arbitrumRoute = (routingRegistry.routes || []).find((route) => route.fromChain === 138 && route.toChain === 42161 && route.asset === 'WETH');
const arbitrumHubBlocker = {
  active: true,
  fromChain: 138,
  viaChain: 1,
  toChain: 42161,
  currentPath: '138 -> Mainnet -> Arbitrum',
  sourceBridge: '0xc9901ce2Ddb6490FAA183645147a87496d8b20B6',
  failedTxHash: '0x97df657f0e31341ca852666766e553650531bbcc86621246d041985d7261bb07',
  note: (arbitrumRoute && arbitrumRoute.note) || 'Use Mainnet hub; the current Mainnet -> Arbitrum WETH9 leg is blocked.'
};

function currencyState(code) {
  const item = manifestByCode.get(code);
  return {
    manifestPresent: Boolean(item),
    deployed: Boolean(item?.status?.deployed),
    transportActive: Boolean(item?.status?.transportActive),
    x402Ready: Boolean(item?.status?.x402Ready)
  };
}

const allAssetResults = (rollout.assets || []).map((asset) => {
  const state = currencyState(asset.code);
  const transportSymbols = (asset.tokenForms || []).map((item) => item.canonicalSymbol);
  const enabledByOverlay = transportSymbols.some((symbol) => transportBySymbol.has(symbol));
  return {
    code: asset.code,
    name: asset.name,
    wave: asset.wave,
    manifestPresent: state.manifestPresent,
    deployed: state.deployed,
    transportActive: state.transportActive && enabledByOverlay,
    x402Ready: state.x402Ready,
    canonicalSymbols: (asset.tokenForms || []).map((item) => item.canonicalSymbol),
    wrappedSymbols: (asset.tokenForms || []).map((item) => item.wrappedSymbol)
  };
});

const wave1Results = allAssetResults
  .filter((asset) => asset.wave === 'wave1')
  .map((asset) => ({
    ...asset,
    currentState: asset.transportActive
      ? 'live_transport'
      : asset.deployed
        ? 'canonical_only'
        : asset.manifestPresent
          ? 'manifest_only'
          : 'backlog',
    nextStep: asset.transportActive
      ? 'monitor_and_scale'
      : asset.deployed
        ? 'activate_transport_and_attach_public_liquidity'
        : asset.manifestPresent
          ? 'finish_canonical_deployment'
          : 'add_to_manifest'
  }));

const wave1WrappedSymbols = [...new Set(wave1Results.flatMap((asset) => asset.wrappedSymbols))];
const poolMatrixCwTokens = new Set(poolMatrix.cwTokens || []);
const wave1WrappedSymbolsMissingFromPoolMatrix = wave1WrappedSymbols.filter((symbol) => !poolMatrixCwTokens.has(symbol));

const rolloutSummary = {
  liveTransportAssets: allAssetResults.filter((asset) => asset.transportActive).length,
  canonicalOnlyAssets: allAssetResults.filter((asset) => asset.deployed && !asset.transportActive).length,
  backlogAssets: allAssetResults.filter((asset) => !asset.manifestPresent).length,
  wave1LiveTransport: wave1Results.filter((asset) => asset.currentState === 'live_transport').length,
  wave1CanonicalOnly: wave1Results.filter((asset) => asset.currentState === 'canonical_only').length,
  wave1WrappedSymbols: wave1WrappedSymbols.length,
  wave1WrappedSymbolsCoveredByPoolMatrix: wave1WrappedSymbols.length - wave1WrappedSymbolsMissingFromPoolMatrix.length
};

const protocolResults = publicProtocols.map((protocol) => {
  if (protocol.key === 'dodo_pmm') {
    return {
      key: protocol.key,
      name: protocol.name,
      activePublicCwPools: totalRecordedPublicPools,
      destinationChainsWithPools: chainsWithAnyPools.length,
      status: totalRecordedPublicPools > 0 ? 'partial_live_on_public_cw_mesh' : 'not_deployed_on_public_cw_mesh',
      notes: totalRecordedPublicPools > 0
        ? 'deployment-status.json now records live public-chain cW* DODO PMM pools on Mainnet, including recorded non-USD Wave 1 rows, and the recorded Mainnet pools now have bidirectional live execution proof. The broader public cW mesh is still partial.'
        : 'cross-chain-pmm-lps/config/deployment-status.json still records no public-chain cW* pools, so no live DODO PMM cW venue can be asserted.'
    };
  }

  return {
    key: protocol.key,
    name: protocol.name,
    activePublicCwPools: 0,
    destinationChainsWithPools: 0,
    status: 'not_deployed_on_public_cw_mesh',
    notes: 'No live public-chain cW* venue is recorded for this protocol in deployment-status.json yet.'
  };
});

const blockers = [];
if (desiredButNotLoaded.length > 0) {
  blockers.push(`Desired public EVM targets still lack cW token suites: ${desiredButNotLoaded.map((row) => row.name).join(', ')}.`);
}
if (rolloutSummary.wave1CanonicalOnly > 0) {
  blockers.push(`Wave 1 GRU assets are still canonical-only on Chain 138: ${wave1Results.filter((asset) => asset.currentState === 'canonical_only').map((asset) => asset.code).join(', ')}.`);
}
if (wave1WrappedSymbolsMissingFromPoolMatrix.length > 0) {
  blockers.push(`Wave 1 wrapped symbols are still missing from the public pool matrix: ${wave1WrappedSymbolsMissingFromPoolMatrix.join(', ')}.`);
}
if (chainsWithAnyPools.length === 0) {
  blockers.push('Public cW* liquidity is still undeployed across Uniswap v3, Balancer, Curve 3, DODO PMM, and 1inch on the tracked public-network mesh.');
}
if (chainsWithAnyPools.length > 0 && protocolResults.some((item) => item.activePublicCwPools > 0) && protocolResults.some((item) => item.activePublicCwPools === 0)) {
  blockers.push('Public cW* protocol rollout is now partial: DODO PMM has recorded pools, while Uniswap v3, Balancer, Curve 3, and 1inch remain not live on the public cW mesh.');
}
if (rolloutSummary.backlogAssets > 0) {
  blockers.push(`The ranked GRU global rollout still has ${rolloutSummary.backlogAssets} backlog assets outside the live manifest.`);
}
if ((rollout.desiredDestinationNetworks?.nonEvmRelayPrograms || []).length > 0) {
  blockers.push(`Desired non-EVM GRU targets remain planned / relay-dependent: ${(rollout.desiredDestinationNetworks.nonEvmRelayPrograms || []).map((item) => item.identifier).join(', ')}.`);
}
if (arbitrumHubBlocker.active) {
  blockers.push(`Arbitrum public-network bootstrap remains blocked on the current Mainnet hub leg: tx ${arbitrumHubBlocker.failedTxHash} reverted from ${arbitrumHubBlocker.sourceBridge} before any bridge event was emitted.`);
}

const report = {
  generatedAt: new Date().toISOString(),
  canonicalChainId: 138,
  summary: {
    desiredPublicEvmTargets: desiredChainRows.length,
    loadedPublicEvmChains: loadedChains.length,
    loadedPublicEvmFullCoreSuite: fullCoreSuiteChains.length,
    desiredButNotLoaded: desiredButNotLoaded.length,
    publicProtocolsTracked: protocolResults.length,
    publicProtocolsWithActiveCwPools: protocolResults.filter((item) => item.activePublicCwPools > 0).length,
    chainsWithAnyRecordedPublicCwPools: chainsWithAnyPools.length,
    liveTransportAssets: rolloutSummary.liveTransportAssets,
    wave1CanonicalOnly: rolloutSummary.wave1CanonicalOnly,
    backlogAssets: rolloutSummary.backlogAssets
  },
  publicEvmMesh: {
    coreCwSuite: coreCwSymbols,
    desiredChains: desiredChainRows,
    desiredButNotLoaded: desiredButNotLoaded.map((row) => ({ chainId: row.chainId, name: row.name })),
    wave1PoolMatrixCoverage: {
      totalWrappedSymbols: wave1WrappedSymbols.length,
      coveredSymbols: rolloutSummary.wave1WrappedSymbolsCoveredByPoolMatrix,
      missingSymbols: wave1WrappedSymbolsMissingFromPoolMatrix
    },
    note: desiredButNotLoaded.length > 0
      ? `The public EVM cW token mesh is complete on the currently loaded ${loadedChains.length}-chain set, but ${desiredButNotLoaded.map((row) => row.name).join(', ')} ${desiredButNotLoaded.length === 1 ? 'remains a desired target without a cW suite' : 'remain desired targets without cW suites'} in deployment-status.json.`
      : `The public EVM cW token mesh is complete across all ${loadedChains.length} desired public EVM targets recorded in deployment-status.json.`
  },
  transport: {
    liveTransportAssets: allAssetResults.filter((asset) => asset.transportActive).map((asset) => ({ code: asset.code, name: asset.name })),
    wave1: wave1Results,
    note: 'USD is the only live transport asset today. Wave 1 non-USD assets are deployed canonically on Chain 138 but are not yet promoted into the active transport overlay.'
  },
  protocols: {
    publicCwMesh: protocolResults,
    chain138CanonicalVenues: {
      note: 'Chain 138 canonical routing is a separate surface: DODO PMM plus upstream-native Uniswap v3 and the funded pilot-compatible Balancer, Curve 3, and 1inch venues are live there.',
      liveProtocols: ['DODO PMM', 'Uniswap v3', 'Balancer', 'Curve 3', '1inch']
    }
  },
  bridgeRouteHealth: {
    arbitrumHubBlocker
  },
  explorer: {
    tokenListApi: 'https://explorer.d-bis.org/api/config/token-list',
    staticStatusPath: 'https://explorer.d-bis.org/config/GRU_V2_PUBLIC_DEPLOYMENT_STATUS.json'
  },
  blockers
};

if (writeExplorerConfig) {
  fs.writeFileSync(explorerConfigPath, `${JSON.stringify(report, null, 2)}\n`);
}

if (outputJson) {
  console.log(JSON.stringify(report, null, 2));
  process.exit(0);
}

console.log('=== GRU V2 Public-Protocol Rollout Status ===');
console.log(`Desired public EVM targets: ${report.summary.desiredPublicEvmTargets}`);
console.log(`Loaded public EVM chains: ${report.summary.loadedPublicEvmChains}`);
console.log(`Loaded chains with full core cW suite: ${report.summary.loadedPublicEvmFullCoreSuite}`);
console.log(`Desired targets still unloaded: ${report.summary.desiredButNotLoaded}`);
console.log(`Live transport assets: ${report.summary.liveTransportAssets}`);
console.log(`Wave 1 canonical-only assets: ${report.summary.wave1CanonicalOnly}`);
console.log(`Wave 1 wrapped symbols covered by pool-matrix: ${report.publicEvmMesh.wave1PoolMatrixCoverage.coveredSymbols}/${report.publicEvmMesh.wave1PoolMatrixCoverage.totalWrappedSymbols}`);
console.log(`Backlog assets: ${report.summary.backlogAssets}`);
console.log(`Tracked public protocols: ${report.summary.publicProtocolsTracked}`);
console.log(`Protocols with active public cW pools: ${report.summary.publicProtocolsWithActiveCwPools}`);
console.log(`Chains with any recorded public cW pools: ${report.summary.chainsWithAnyRecordedPublicCwPools}`);
console.log('');
console.log('Wave 1:');
for (const asset of wave1Results) {
  console.log(`- ${asset.code} (${asset.name}) -> ${asset.currentState}; next: ${asset.nextStep}`);
}
console.log('');
console.log('Public protocol surface:');
for (const protocol of protocolResults) {
  console.log(`- ${protocol.name}: ${protocol.status}`);
}
if (blockers.length > 0) {
  console.log('');
  console.log('Active blockers:');
  for (const blocker of blockers) {
    console.log(`- ${blocker}`);
  }
}
if (writeExplorerConfig) {
  console.log('');
  console.log(`Wrote: ${explorerConfigPath}`);
}
NODE
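The Wave 1 classification in the heredoc above is a priority ladder (transport-active, then deployed, then manifest-present, then backlog). Restated as a standalone function for clarity — a sketch whose state and step names are taken verbatim from the script:

```javascript
// Classify a rollout asset the same way the wave1Results mapping does.
// Input flags mirror the script's asset fields; higher rungs win.
function classify(asset) {
  if (asset.transportActive) {
    return { currentState: 'live_transport', nextStep: 'monitor_and_scale' };
  }
  if (asset.deployed) {
    return { currentState: 'canonical_only', nextStep: 'activate_transport_and_attach_public_liquidity' };
  }
  if (asset.manifestPresent) {
    return { currentState: 'manifest_only', nextStep: 'finish_canonical_deployment' };
  }
  return { currentState: 'backlog', nextStep: 'add_to_manifest' };
}

// A Chain 138 asset that is deployed but not yet in the transport overlay:
console.log(classify({ transportActive: false, deployed: true, manifestPresent: true }).currentState);
// -> canonical_only
```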
122  scripts/verify/check-gru-v2-token-standard-surface.sh  Normal file
@@ -0,0 +1,122 @@
#!/usr/bin/env bash
# Verify that a GRU V2 token address exposes the expected standards and metadata surface.
#
# Usage:
#   bash scripts/verify/check-gru-v2-token-standard-surface.sh --token cUSDC_V2=0x...
#   bash scripts/verify/check-gru-v2-token-standard-surface.sh --token cEURC_V2=0x... --strict
#
# Exit:
#   0 on successful execution
#   1 with --strict when any checked token is missing a required surface

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"

STRICT=0
RPC_URL="${RPC_URL_138:-${CHAIN_138_RPC_URL:-http://192.168.11.211:8545}}"
declare -A TOKENS=()
TOKEN_ORDER=()

add_token() {
  local spec="$1"
  local symbol="${spec%%=*}"
  local address="${spec#*=}"
  if [[ -z "$symbol" || -z "$address" || "$symbol" == "$address" ]]; then
    echo "ERROR: invalid token spec '$spec' (expected SYMBOL=ADDRESS)" >&2
    exit 1
  fi
  TOKENS["$symbol"]="$address"
  TOKEN_ORDER+=("$symbol")
}

while [[ $# -gt 0 ]]; do
  case "$1" in
    --strict)
      STRICT=1
      shift
      ;;
    --rpc=*)
      RPC_URL="${1#--rpc=}"
      shift
      ;;
    --token)
      [[ $# -ge 2 ]] || { echo "ERROR: --token requires SYMBOL=ADDRESS" >&2; exit 1; }
      add_token "$2"
      shift 2
      ;;
    --token=*)
      add_token "${1#--token=}"
      shift
      ;;
    *)
      echo "Unknown argument: $1" >&2
      exit 2
      ;;
  esac
done

if ! command -v cast >/dev/null 2>&1; then
  echo "cast is required" >&2
  exit 1
fi

if [[ ${#TOKEN_ORDER[@]} -eq 0 ]]; then
  echo "ERROR: provide at least one --token SYMBOL=ADDRESS" >&2
  exit 1
fi

HOLDER="0x0000000000000000000000000000000000000001"
ZERO32="0x0000000000000000000000000000000000000000000000000000000000000000"

call_ok() {
  local address="$1"
  local signature="$2"
  shift 2
  cast call "$address" "$signature" "$@" --rpc-url "$RPC_URL" >/dev/null 2>&1
}

overall_fail=0

for sym in "${TOKEN_ORDER[@]}"; do
  addr="${TOKENS[$sym]}"
  echo "=== ${sym} (${addr}) ==="

  missing=()

  call_ok "$addr" 'name()(string)' || missing+=("ERC-20 name")
  call_ok "$addr" 'symbol()(string)' || missing+=("ERC-20 symbol")
  call_ok "$addr" 'currencyCode()(string)' || missing+=("currencyCode")
  call_ok "$addr" 'versionTag()(string)' || missing+=("versionTag")
  call_ok "$addr" 'assetId()(bytes32)' || missing+=("assetId")
  call_ok "$addr" 'assetVersionId()(bytes32)' || missing+=("assetVersionId")
  call_ok "$addr" 'DOMAIN_SEPARATOR()(bytes32)' || missing+=("EIP-712 DOMAIN_SEPARATOR")
  call_ok "$addr" 'nonces(address)(uint256)' "$HOLDER" || missing+=("ERC-2612 nonces")
  call_ok "$addr" 'authorizationState(address,bytes32)(bool)' "$HOLDER" "$ZERO32" || missing+=("ERC-3009 authorizationState")
  call_ok "$addr" 'eip712Domain()(bytes1,string,string,uint256,address,bytes32,uint256[])' || missing+=("ERC-5267 eip712Domain")
  call_ok "$addr" 'governanceProfileId()(bytes32)' || missing+=("governanceProfileId")
  call_ok "$addr" 'supervisionProfileId()(bytes32)' || missing+=("supervisionProfileId")
  call_ok "$addr" 'storageNamespace()(bytes32)' || missing+=("storageNamespace")
  call_ok "$addr" 'primaryJurisdiction()(string)' || missing+=("primaryJurisdiction")
  call_ok "$addr" 'regulatoryDisclosureURI()(string)' || missing+=("regulatoryDisclosureURI")
  call_ok "$addr" 'reportingURI()(string)' || missing+=("reportingURI")
  call_ok "$addr" 'minimumUpgradeNoticePeriod()(uint256)' || missing+=("minimumUpgradeNoticePeriod")
  call_ok "$addr" 'wrappedTransport()(bool)' || missing+=("wrappedTransport")

  if [[ "${#missing[@]}" -gt 0 ]]; then
    overall_fail=1
    echo "Missing:"
    for item in "${missing[@]}"; do
      echo "- $item"
    done
  else
    echo "[OK] Full GRU V2 standard surface detected."
  fi
  echo ""
done

if [[ "$STRICT" == "1" && "$overall_fail" == "1" ]]; then
  exit 1
fi
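The per-token loop above reduces each address to a list of missing interface surfaces via repeated `cast call` probes. The same bookkeeping, sketched in Node with the probe stubbed out (selector names come from the script; a real implementation would make an eth_call per selector, which needs a live RPC):

```javascript
// Required read-only surfaces probed by the script (subset shown).
const requiredSurfaces = [
  'name', 'symbol', 'currencyCode', 'versionTag', 'assetId',
  'DOMAIN_SEPARATOR', 'nonces', 'authorizationState', 'eip712Domain'
];

// Stub standing in for `cast call <addr> '<sig>'`: true when the token
// answers the call. Here "support" is just membership in a sample set.
function probe(supported, surface) {
  return supported.has(surface);
}

// Mirror of the shell loop: collect every surface the token fails to answer.
function missingSurfaces(supported) {
  return requiredSurfaces.filter((surface) => !probe(supported, surface));
}

// Hypothetical token exposing only the plain ERC-20 metadata plus currencyCode.
const partialToken = new Set(['name', 'symbol', 'currencyCode']);
console.log(missingSurfaces(partialToken));
```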
131  scripts/verify/check-info-defi-oracle-public.sh  Executable file
@@ -0,0 +1,131 @@
#!/usr/bin/env bash
# Verify https://info.defi-oracle.io (or INFO_SITE_BASE) serves the Chain 138 SPA and static agent files.
# Run after: pnpm --filter info-defi-oracle-138 build && sync-info-defi-oracle-to-vmid2400.sh (or your CDN upload).
set -euo pipefail

BASE="${INFO_SITE_BASE:-https://info.defi-oracle.io}"
BASE="${BASE%/}"

echo "Checking info site: $BASE"

BODY=/tmp/info-site-check-body

fetch() {
  local path=$1
  curl -sS "${BASE}${path}" --max-time 25 -o "$BODY" -w '%{http_code}'
}

check_spa() {
  local path=$1
  local min_bytes=${2:-512}
  local url="${BASE}${path}"
  local code
  code=$(fetch "$path") || true
  if [[ "$code" != "200" ]]; then
    echo "FAIL $path → HTTP $code ($url)" >&2
    return 1
  fi
  local sz
  sz=$(wc -c < "$BODY" | tr -d ' ')
  if [[ "$sz" -lt "$min_bytes" ]]; then
    echo "FAIL $path → body too small (${sz}b, min ${min_bytes}) ($url)" >&2
    return 1
  fi
  if grep -q '<title>Default Page</title>' "$BODY" 2>/dev/null; then
    echo "FAIL $path → generic placeholder page (hosting default). Point NPMplus/upstream at info-defi-oracle-138 dist/ per docs/04-configuration/INFO_DEFI_ORACLE_IO_DEPLOYMENT.md ($url)" >&2
    return 1
  fi
  # Vite/React SPA
  if grep -qE 'id="root"|id='"'"'root'"'"'' "$BODY" 2>/dev/null; then
    echo "OK $path (SPA HTML, ${sz}b)"
    return 0
  fi
  if grep -qE '/assets/index-[A-Za-z0-9_-]+\.(js|mjs)' "$BODY" 2>/dev/null; then
    echo "OK $path (SPA bundle reference, ${sz}b)"
    return 0
  fi
  echo "FAIL $path → expected Vite SPA (root div or /assets/index-*.js), got unknown HTML ($url)" >&2
  return 1
}

check_text() {
  local path=$1
  local min_bytes=$2
  local pattern=$3
  local url="${BASE}${path}"
  local code
  code=$(fetch "$path") || true
  if [[ "$code" != "200" ]]; then
    echo "FAIL $path → HTTP $code ($url)" >&2
    return 1
  fi
  local sz
  sz=$(wc -c < "$BODY" | tr -d ' ')
  if [[ "$sz" -lt "$min_bytes" ]]; then
    echo "FAIL $path → body too small (${sz}b) ($url)" >&2
    return 1
  fi
  if ! grep -qE "$pattern" "$BODY"; then
    echo "FAIL $path → body does not match expected pattern (got HTML fallback? check static file deploy) ($url)" >&2
    head -c 120 "$BODY" | xxd >&2 || true
    return 1
  fi
  echo "OK $path (${sz}b)"
}

check_spa "/" 400
check_spa "/agents" 400
check_spa "/disclosures" 400
check_spa "/governance" 400
check_spa "/ecosystem" 400
check_spa "/documentation" 400
check_spa "/solacenet" 400

check_text "/llms.txt" 80 '^#'
check_text "/robots.txt" 10 'User-agent'
check_text "/sitemap.xml" 80 '<urlset'

code=$(fetch "/agent-hints.json") || true
if [[ "$code" != "200" ]]; then
  echo "FAIL /agent-hints.json → HTTP $code" >&2
  exit 1
fi
first_json_char=$(sed 's/^[[:space:]]*//;q' "$BODY" | head -c 1 || true)
if [[ "$first_json_char" != '{' ]]; then
  echo "FAIL /agent-hints.json → not JSON (SPA fallback serving index.html?)" >&2
  exit 1
fi

if command -v jq >/dev/null 2>&1; then
  if ! jq -e '.chain138.chainId == 138 and .chain138.dodopmmIntegration != null and (.tokenAggregation.defaultBase | contains("info.defi-oracle.io/token-aggregation")) and (.intendedAudience | type == "string") and (.assetFraming | type == "string") and (.governanceSummary | type == "string") and (.publicHubPages | type == "array") and (.authenticatedDocsNote | type == "string")' "$BODY" >/dev/null; then
    echo "FAIL agent-hints.json schema sanity (expected chainId 138, tokenAggregation defaultBase, audience/framing/governanceSummary/publicHubPages/authenticatedDocsNote)" >&2
    exit 1
  fi
  echo "OK /agent-hints.json (jq validation)"
else
  echo "OK /agent-hints.json (basic JSON brace check; install jq for full validation)"
fi

# Same-origin token-aggregation proxy (info LXC nginx → Blockscout). Catches 502/HTML fallback.
code=$(curl -sS "${BASE}/token-aggregation/api/v1/networks?refresh=1" --max-time 25 -o "$BODY" -w '%{http_code}') || true
if [[ "$code" != "200" ]]; then
  echo "FAIL /token-aggregation/api/v1/networks → HTTP $code (${BASE})" >&2
  exit 1
fi
if command -v jq >/dev/null 2>&1; then
  if ! jq -e '.networks | type == "array"' "$BODY" >/dev/null; then
    echo "FAIL token-aggregation networks response (expected .networks array; SPA/nginx misroute?)" >&2
    head -c 200 "$BODY" | tr -d '\0' >&2 || true
    exit 1
  fi
  echo "OK /token-aggregation/api/v1/networks (jq .networks)"
else
  first=$(sed 's/^[[:space:]]*//;q' "$BODY" | head -c 1 || true)
  if [[ "$first" != '{' ]]; then
    echo "FAIL /token-aggregation/api/v1/networks → expected JSON object (install jq for stricter checks)" >&2
    exit 1
  fi
  echo "OK /token-aggregation/api/v1/networks (200, JSON object)"
|
||||
fi
|
||||
|
||||
echo "All checks passed for $BASE"
|
||||
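The `check_text` flow above (HTTP 200, byte floor, pattern match) can be exercised offline against a synthetic body; this sketch substitutes a temp file for the live fetch, so the path and contents are illustrative only:

```shell
# Offline sketch of the check_text success path (no network involved).
body=$(mktemp)
printf 'User-agent: *\nDisallow:\n' > "$body"     # stand-in for a fetched /robots.txt
sz=$(wc -c < "$body" | tr -d ' ')                  # byte count, whitespace-stripped
if [ "$sz" -ge 10 ] && grep -qE 'User-agent' "$body"; then
  echo "OK /robots.txt (${sz}b)"                   # prints: OK /robots.txt (24b)
fi
rm -f "$body"
```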
89
scripts/verify/check-mainnet-cwusdc-usdc-reserve-peg.sh
Executable file
@@ -0,0 +1,89 @@
#!/usr/bin/env bash
set -euo pipefail

# Read-only: Mainnet cWUSDC/USDC vault reserve 1:1 peg (same 6dp raw units).
#
# Invariant: |base - quote| / max(base, quote) <= PEG_IMBALANCE_MAX_BPS / 10_000
# (default 25 bps = 0.25%). Tune for ops; use 0 for "exact equality only".
#
# Usage:
#   source scripts/lib/load-project-env.sh
#   bash scripts/verify/check-mainnet-cwusdc-usdc-reserve-peg.sh
#   PEG_IMBALANCE_MAX_BPS=50 bash scripts/verify/check-mainnet-cwusdc-usdc-reserve-peg.sh
#
# Exit:
#   0 — within tolerance
#   1 — breach (or RPC/read failure)
#
# Env: ETHEREUM_MAINNET_RPC, DODO_PMM_INTEGRATION_MAINNET (optional pool remap),
#      POOL_CWUSDC_USDC_MAINNET, PEG_IMBALANCE_MAX_BPS (default 25)

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
# shellcheck disable=SC1091
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
# shellcheck disable=SC1091
source "${PROJECT_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1 || true

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[fail] missing required command: $1" >&2
    exit 1
  }
}

require_cmd cast
require_cmd python3

RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
CWUSDC="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
USDC="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"
POOL="${POOL_CWUSDC_USDC_MAINNET:-0x69776fc607e9edA8042e320e7e43f54d06c68f0E}"
MAX_BPS="${PEG_IMBALANCE_MAX_BPS:-25}"

if [[ -z "$RPC_URL" ]]; then
  echo "[fail] ETHEREUM_MAINNET_RPC is required" >&2
  exit 1
fi

if [[ -n "$INTEGRATION" ]]; then
  mapped_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$CWUSDC" "$USDC" --rpc-url "$RPC_URL" 2>/dev/null | awk '{print $1}' || true)"
  if [[ -n "$mapped_pool" && "$mapped_pool" != "0x0000000000000000000000000000000000000000" && "${mapped_pool,,}" != "${POOL,,}" ]]; then
    POOL="$mapped_pool"
  fi
fi

out="$(cast call "$POOL" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL" 2>/dev/null)" || {
  echo "[fail] could not read getVaultReserve on pool=$POOL" >&2
  exit 1
}
base_r="$(printf '%s\n' "$out" | sed -n '1p' | awk '{print $1}')"
quote_r="$(printf '%s\n' "$out" | sed -n '2p' | awk '{print $1}')"

read -r imb_bps verdict human_b human_q human_gap abs_raw <<<"$(
  python3 - "$base_r" "$quote_r" "$MAX_BPS" <<'PY'
import sys
b, q, max_bps = int(sys.argv[1]), int(sys.argv[2]), int(sys.argv[3])
m = max(b, q, 1)
imb = abs(b - q) * 10000 // m
if max_bps <= 0:
    ok = imb == 0
else:
    ok = imb <= max_bps
verdict = "PEG_OK" if ok else "PEG_BREACH"
print(imb, verdict, f"{b / 1e6:.6f}", f"{q / 1e6:.6f}", f"{abs(b - q) / 1e6:.6f}", abs(b - q))
PY
)"

echo "=== Mainnet cWUSDC/USDC reserve peg ==="
echo "pool=$POOL"
echo "baseReserve_human=$human_b cWUSDC quoteReserve_human=$human_q USDC"
echo "imbalance_abs_raw=$abs_raw human_gap=$human_gap"
echo "imbalance_bps=$imb_bps max_allowed_bps=$MAX_BPS verdict=$verdict"
echo "Remediate: bash scripts/deployment/plan-mainnet-cwusdc-usdc-rebalance-liquidity.sh"

if [[ "$verdict" == "PEG_OK" ]]; then
  exit 0
fi
exit 1
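The bps invariant from the header comment (`|base - quote| / max(base, quote) <= PEG_IMBALANCE_MAX_BPS / 10_000`) reduces to plain integer arithmetic; an illustrative check with made-up 6dp raw reserves, not live pool state:

```shell
# Illustrative reserves only: 1.000000 cWUSDC vs 0.999800 USDC in raw 6dp units.
b=1000000
q=999800
m=$(( b > q ? b : q ))            # max(base, quote)
d=$(( b > q ? b - q : q - b ))    # |base - quote|
echo $(( d * 10000 / m ))         # integer bps; 2 here, inside the default 25 bps band
```

This mirrors the floor-division the script delegates to Python for arbitrary-precision safety; bash arithmetic is fine at these magnitudes but overflows past 2^63, which is why the script itself shells out to `python3`.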
227
scripts/verify/check-mainnet-pmm-peg-bot-readiness.sh
Executable file
@@ -0,0 +1,227 @@
#!/usr/bin/env bash
set -euo pipefail

# Mainnet PMM peg + bot readiness checks for recorded chain-1 pools.
#
# - Confirms RPC is Ethereum mainnet (chain id 1)
# - Reads pools from cross-chain-pmm-lps/config/deployment-status.json
#   (pmmPools hub rows + pmmPoolsVolatile for cW/TRUU when poolAddress is set)
# - Verifies DODO_PMM_INTEGRATION_MAINNET mapping and non-zero reserves
# - For USD-class cW vs USDC/USDT pools, compares reserve imbalance (bps) to peg-bands.json
#
# Env:
#   ETHEREUM_MAINNET_RPC, DODO_PMM_INTEGRATION_MAINNET (required)
#   MIN_POOL_RESERVE_RAW — minimum per-leg reserve (default: 1)
#   PMM_TRUU_BASE_TOKEN, PMM_TRUU_QUOTE_TOKEN — optional extra pair to verify (ERC-20 addresses)
#   SKIP_EXIT=1 — exit 0 even when checks fail (reporting mode)
#
# See: docs/03-deployment/MAINNET_PMM_TRUU_CWUSD_PEG_AND_BOT_RUNBOOK.md

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"

STATUS_JSON="${REPO_ROOT}/cross-chain-pmm-lps/config/deployment-status.json"
PEG_JSON="${REPO_ROOT}/cross-chain-pmm-lps/config/peg-bands.json"

source "${REPO_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1 || true

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[fail] missing required command: $1" >&2
    exit 1
  }
}

require_cmd cast
require_cmd jq

RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"
MIN_POOL_RESERVE_RAW="${MIN_POOL_RESERVE_RAW:-1}"
SKIP_EXIT="${SKIP_EXIT:-0}"

if [[ -z "$RPC_URL" || -z "$INTEGRATION" ]]; then
  echo "[fail] ETHEREUM_MAINNET_RPC and DODO_PMM_INTEGRATION_MAINNET are required" >&2
  exit 1
fi

if [[ ! -f "$STATUS_JSON" ]]; then
  echo "[fail] missing $STATUS_JSON" >&2
  exit 1
fi

if [[ ! -f "$PEG_JSON" ]]; then
  echo "[fail] missing $PEG_JSON" >&2
  exit 1
fi

failures=0
warnings=0

normal_max_bps="$(jq -r '.usdPegged.normalBps.max // 25' "$PEG_JSON")"
circuit_bps="$(jq -r '.usdPegged.circuitBreakBps // 200' "$PEG_JSON")"

is_usd_pegged_base() {
  local sym="$1"
  jq -e --arg s "$sym" '.usdPegged.tokens | index($s) != null' "$PEG_JSON" >/dev/null 2>&1
}

chain_id="$(cast chain-id --rpc-url "$RPC_URL" 2>/dev/null | head -1 | tr -d '\r' || true)"
if [[ "$chain_id" != "1" ]]; then
  echo "[fail] expected eth_chainId 1, got '${chain_id:-unknown}'" >&2
  exit 1
fi

echo "=== Mainnet PMM peg / bot readiness ==="
echo "RPC: $RPC_URL"
echo "Integration: $INTEGRATION"
echo "Min reserve (raw per leg): $MIN_POOL_RESERVE_RAW"
echo "USD peg bands (from peg-bands.json): normal max ${normal_max_bps} bps, circuit ${circuit_bps} bps"
echo

check_pool_row() {
  local base_sym="$1"
  local quote_sym="$2"
  local expected_pool="$3"
  local label="${base_sym}/${quote_sym}"

  local base_addr
  base_addr="$(jq -r --arg b "$base_sym" '.chains["1"].cwTokens[$b] // empty' "$STATUS_JSON")"
  local quote_addr
  if [[ "$quote_sym" == "TRUU" ]]; then
    quote_addr="$(jq -r '.chains["1"].anchorAddresses.TRUU // empty' "$STATUS_JSON")"
  else
    quote_addr="$(jq -r --arg q "$quote_sym" '.chains["1"].anchorAddresses[$q] // empty' "$STATUS_JSON")"
  fi

  if [[ -z "$base_addr" || -z "$quote_addr" ]]; then
    echo "[fail] $label — could not resolve token address (base=$base_addr quote=$quote_addr)" >&2
    failures=$((failures + 1))
    return
  fi

  local mapped
  mapped="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$base_addr" "$quote_addr" --rpc-url "$RPC_URL" 2>/dev/null | awk '{print tolower($1)}' || true)"
  local exp_lower
  exp_lower="$(printf '%s' "$expected_pool" | tr '[:upper:]' '[:lower:]')"

  if [[ "$mapped" != "$exp_lower" ]]; then
    echo "[fail] $label — integration mapping mismatch: expected $expected_pool got ${mapped:-empty}" >&2
    failures=$((failures + 1))
    return
  fi

  local reserve_output
  reserve_output="$(cast call "$expected_pool" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL" 2>/dev/null || true)"
  local r0 r1
  r0="$(printf '%s\n' "$reserve_output" | sed -n '1p' | awk '{print $1}')"
  r1="$(printf '%s\n' "$reserve_output" | sed -n '2p' | awk '{print $1}')"

  if [[ -z "$r0" || -z "$r1" ]]; then
    echo "[fail] $label — could not read reserves" >&2
    failures=$((failures + 1))
    return
  fi

  if [[ "$r0" -lt "$MIN_POOL_RESERVE_RAW" || "$r1" -lt "$MIN_POOL_RESERVE_RAW" ]]; then
    echo "[fail] $label — reserves below MIN_POOL_RESERVE_RAW ($r0 / $r1)" >&2
    failures=$((failures + 1))
    return
  fi

  local imb_bps=0
  local peg_info=""
  local peg_circuit_failed=0
  if is_usd_pegged_base "$base_sym" && [[ "$quote_sym" == "USDC" || "$quote_sym" == "USDT" ]]; then
    local max_r
    if [[ "$r0" -ge "$r1" ]]; then
      max_r="$r0"
    else
      max_r="$r1"
    fi
    if [[ "$max_r" -eq 0 ]]; then
      imb_bps=10000
    else
      local diff
      if [[ "$r0" -ge "$r1" ]]; then
        diff=$((r0 - r1))
      else
        diff=$((r1 - r0))
      fi
      imb_bps=$((10000 * diff / max_r))
    fi
    peg_info="imbalance_bps=${imb_bps}"
    if [[ "$imb_bps" -gt "$circuit_bps" ]]; then
      echo "[fail] $label — reserve imbalance ${imb_bps} bps exceeds circuit ${circuit_bps} bps (r0=$r0 r1=$r1)" >&2
      failures=$((failures + 1))
      peg_circuit_failed=1
    elif [[ "$imb_bps" -gt "$normal_max_bps" ]]; then
      echo "[warn] $label — reserve imbalance ${imb_bps} bps exceeds normal band max ${normal_max_bps} bps (r0=$r0 r1=$r1)" >&2
      warnings=$((warnings + 1))
    fi
  fi

  if [[ "$peg_circuit_failed" -eq 0 ]]; then
    echo "- $label pool=$expected_pool reserves=$r0/$r1 ${peg_info:+$peg_info}"
  fi
}

while IFS= read -r line; do
  [[ -z "$line" ]] && continue
  base_sym="${line%%|*}"
  rest="${line#*|}"
  quote_sym="${rest%%|*}"
  pool_addr="${rest##*|}"
  check_pool_row "$base_sym" "$quote_sym" "$pool_addr"
done < <(jq -r '.chains["1"].pmmPools[]? | "\(.base)|\(.quote)|\(.poolAddress)"' "$STATUS_JSON" 2>/dev/null || true)

while IFS= read -r line; do
  [[ -z "$line" ]] && continue
  base_sym="${line%%|*}"
  rest="${line#*|}"
  quote_sym="${rest%%|*}"
  pool_addr="${rest##*|}"
  check_pool_row "$base_sym" "$quote_sym" "$pool_addr"
done < <(jq -r '
  .chains["1"].pmmPoolsVolatile[]?
  | select(
      (.poolAddress // "") != ""
      and ((.poolAddress | ascii_downcase) != "0x0000000000000000000000000000000000000000")
    )
  | "\(.base)|\(.quote)|\(.poolAddress)"
' "$STATUS_JSON" 2>/dev/null || true)

if [[ -n "${PMM_TRUU_BASE_TOKEN:-}" && -n "${PMM_TRUU_QUOTE_TOKEN:-}" ]]; then
  echo
  echo "=== Optional TRUU PMM pair (PMM_TRUU_BASE_TOKEN / PMM_TRUU_QUOTE_TOKEN) ==="
  tpool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$PMM_TRUU_BASE_TOKEN" "$PMM_TRUU_QUOTE_TOKEN" --rpc-url "$RPC_URL" 2>/dev/null | awk '{print $1}')"
  if [[ "$tpool" == "0x0000000000000000000000000000000000000000" ]]; then
    echo "[warn] No pool registered for TRUU pair (base=$PMM_TRUU_BASE_TOKEN quote=$PMM_TRUU_QUOTE_TOKEN)" >&2
    warnings=$((warnings + 1))
  else
    tres="$(cast call "$tpool" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL" 2>/dev/null || true)"
    tr0="$(printf '%s\n' "$tres" | sed -n '1p' | awk '{print $1}')"
    tr1="$(printf '%s\n' "$tres" | sed -n '2p' | awk '{print $1}')"
    if [[ -z "$tr0" || -z "$tr1" || "$tr0" -lt "$MIN_POOL_RESERVE_RAW" || "$tr1" -lt "$MIN_POOL_RESERVE_RAW" ]]; then
      echo "[warn] TRUU PMM pool $tpool has weak or unreadable reserves ($tr0 / $tr1)" >&2
      warnings=$((warnings + 1))
    else
      echo "- TRUU PMM pool=$tpool reserves=$tr0/$tr1"
    fi
  fi
fi

echo
if (( failures > 0 )); then
  echo "[fail] $failures hard failure(s), $warnings warning(s)." >&2
  if [[ "$SKIP_EXIT" == "1" ]]; then
    exit 0
  fi
  exit 1
fi
if (( warnings > 0 )); then
  echo "[ok] checks passed with $warnings warning(s)."
  exit 0
fi
echo "[ok] all checks passed."
exit 0
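The `%%|*` / `#*|` / `##*|` expansions in the loops above split each `base|quote|poolAddress` row without spawning a subprocess; a standalone sketch with a hypothetical pool address (`0xPOOL` is a placeholder, not a real deployment):

```shell
line="cWUSDC|USDC|0xPOOL"       # hypothetical registry row
base_sym="${line%%|*}"           # shortest-match strip from the right: cWUSDC
rest="${line#*|}"                # shortest-match strip from the left: USDC|0xPOOL
quote_sym="${rest%%|*}"          # USDC
pool_addr="${rest##*|}"          # longest-match strip from the left: 0xPOOL
echo "$base_sym/$quote_sym -> $pool_addr"   # prints: cWUSDC/USDC -> 0xPOOL
```

Note `##*|` takes the longest match, so it still yields the last field even if an address ever contained no extra delimiters but the row gained more columns up front.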
129
scripts/verify/check-mainnet-public-dodo-cw-bootstrap-pools.sh
Executable file
@@ -0,0 +1,129 @@
#!/usr/bin/env bash
set -euo pipefail

# Verify the first public Mainnet DODO PMM cW bootstrap pools.
# Checks:
#   - pool env present
#   - integration mapping points at the expected pool
#   - reserves are non-zero
#   - the repeatable swap helper dry-runs cleanly with the expected quote fallback

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
STATUS_JSON="${REPO_ROOT}/cross-chain-pmm-lps/config/deployment-status.json"

source "${REPO_ROOT}/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[fail] missing required command: $1" >&2
    exit 1
  }
}

require_cmd cast
require_cmd bash
require_cmd jq

RPC_URL="${ETHEREUM_MAINNET_RPC:-}"
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"

if [[ -z "$RPC_URL" || -z "$INTEGRATION" ]]; then
  echo "[fail] ETHEREUM_MAINNET_RPC and DODO_PMM_INTEGRATION_MAINNET are required" >&2
  exit 1
fi

if [[ ! -f "$STATUS_JSON" ]]; then
  echo "[fail] missing $STATUS_JSON" >&2
  exit 1
fi

pairs=(
  "cwusdt-usdc|cWUSDT|USDC|base-to-quote|1000|POOL_CWUSDT_USDC_MAINNET|${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
  "cwusdc-usdc|cWUSDC|USDC|quote-to-base|1000|POOL_CWUSDC_USDC_MAINNET|${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
  "cwusdt-usdt|cWUSDT|USDT|base-to-quote|1000|POOL_CWUSDT_USDT_MAINNET|${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}|0xdAC17F958D2ee523a2206206994597C13D831ec7"
  "cwusdc-usdt|cWUSDC|USDT|quote-to-base|1000|POOL_CWUSDC_USDT_MAINNET|${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}|0xdAC17F958D2ee523a2206206994597C13D831ec7"
  "cwusdt-cwusdc|cWUSDT|cWUSDC|base-to-quote|1000000|POOL_CWUSDT_CWUSDC_MAINNET|${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}|${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
  "cweurc-usdc|cWEURC|USDC|base-to-quote|1000|POOL_CWEURC_USDC_MAINNET|${CWEURC_MAINNET:-0xD4aEAa8cD3fB41Dc8437FaC7639B6d91B60A5e8d}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
  "cwgbpc-usdc|cWGBPC|USDC|quote-to-base|1000|POOL_CWGBPC_USDC_MAINNET|${CWGBPC_MAINNET:-0xc074007dc0bfb384b1cf6426a56287ed23fe4d52}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
  "cwaudc-usdc|cWAUDC|USDC|base-to-quote|1000|POOL_CWAUDC_USDC_MAINNET|${CWAUDC_MAINNET:-0x5020Db641B3Fc0dAbBc0c688C845bc4E3699f35F}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
  "cwcadc-usdc|cWCADC|USDC|quote-to-base|1000|POOL_CWCADC_USDC_MAINNET|${CWCADC_MAINNET:-0x209FE32fe7B541751D190ae4e50cd005DcF8EDb4}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
  "cwjpyc-usdc|cWJPYC|USDC|base-to-quote|1000|POOL_CWJPYC_USDC_MAINNET|${CWJPYC_MAINNET:-0x07EEd0D7dD40984e47B9D3a3bdded1c536435582}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
  "cwchfc-usdc|cWCHFC|USDC|quote-to-base|1000|POOL_CWCHFC_USDC_MAINNET|${CWCHFC_MAINNET:-0x0F91C5E6Ddd46403746aAC970D05d70FFe404780}|0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
)

failures=0

echo "=== Mainnet Public DODO cW Bootstrap Pools ==="
echo "Integration: $INTEGRATION"
echo "RPC: $RPC_URL"

for row in "${pairs[@]}"; do
  IFS='|' read -r pair base_symbol quote_symbol direction amount pool_env_key base_token quote_token <<<"$row"

  configured_pool="${!pool_env_key:-}"
  status_pool="$(jq -r --arg b "$base_symbol" --arg q "$quote_symbol" '.chains["1"].pmmPools[]? | select(.base == $b and .quote == $q) | .poolAddress' "$STATUS_JSON" | head -n1)"
  if [[ -n "$status_pool" && "$status_pool" != "null" ]]; then
    pool="$status_pool"
  else
    pool="$configured_pool"
  fi

  if [[ -z "$pool" ]]; then
    echo "[fail] $pair missing pool address in registry/env" >&2
    failures=$((failures + 1))
    continue
  fi

  mapped_pool="$(cast call "$INTEGRATION" 'pools(address,address)(address)' "$base_token" "$quote_token" --rpc-url "$RPC_URL" 2>/dev/null | awk '{print $1}')"
  reserve_output="$(cast call "$pool" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC_URL" 2>/dev/null || true)"
  base_reserve="$(printf '%s\n' "$reserve_output" | sed -n '1p' | awk '{print $1}')"
  quote_reserve="$(printf '%s\n' "$reserve_output" | sed -n '2p' | awk '{print $1}')"

  if [[ "$mapped_pool" != "$pool" ]]; then
    echo "[fail] $pair integration mapping mismatch: expected $pool got ${mapped_pool:-<empty>}" >&2
    failures=$((failures + 1))
    continue
  fi

  if [[ -z "$base_reserve" || -z "$quote_reserve" || "$base_reserve" == "0" || "$quote_reserve" == "0" ]]; then
    echo "[fail] $pair has zero/unknown reserves" >&2
    failures=$((failures + 1))
    continue
  fi

  dry_run_output="$(
    bash "${REPO_ROOT}/scripts/deployment/run-mainnet-public-dodo-cw-swap.sh" \
      --pair="$pair" \
      --direction="$direction" \
      --amount="$amount" \
      --dry-run
  )"

  quote_source="$(printf '%s\n' "$dry_run_output" | awk -F= '/^quoteSource=/ {print $2}')"
  estimated_out="$(printf '%s\n' "$dry_run_output" | awk -F= '/^estimatedOut=/ {print $2}')"
  min_out="$(printf '%s\n' "$dry_run_output" | awk -F= '/^minOut=/ {print $2}')"

  echo "- $pair"
  echo "  pool=$pool"
  if [[ -n "$configured_pool" && "${configured_pool,,}" != "${pool,,}" ]]; then
    echo "  note=env_pool_overridden_by_registry"
    echo "  configuredPool=$configured_pool"
  fi
  echo "  mappedPool=$mapped_pool"
  echo "  reserves=$base_reserve/$quote_reserve"
  echo "  dryRunDirection=$direction"
  echo "  dryRunAmount=$amount"
  echo "  quoteSource=${quote_source:-unknown}"
  echo "  estimatedOut=${estimated_out:-unknown}"
  echo "  minOut=${min_out:-unknown}"
done

if (( failures > 0 )); then
  echo
  echo "[WARN] Bootstrap pool verification found $failures issue(s)." >&2
  exit 1
fi

echo
echo "Result: all 11 recorded Mainnet public DODO cW bootstrap pools are mapped, funded, and dry-run routable."
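`${!pool_env_key:-}` in the loop above is bash indirect expansion: the pair row names an environment variable, and the script reads that variable's value at runtime. A minimal sketch with a hypothetical variable name and placeholder address:

```shell
# Hypothetical env var and placeholder value, for illustration only.
POOL_EXAMPLE_MAINNET="0xPOOL"
pool_env_key="POOL_EXAMPLE_MAINNET"
configured_pool="${!pool_env_key:-}"   # expands to the value of $POOL_EXAMPLE_MAINNET
echo "$configured_pool"                # prints: 0xPOOL
```

The `:-` fallback makes an unset variable expand to empty instead of tripping `set -u`, which is why the loop can then test `[[ -z "$pool" ]]` safely.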
25
scripts/verify/check-mainnet-quote-push-prediction-tolerance.sh
Executable file
@@ -0,0 +1,25 @@
#!/usr/bin/env bash
# Verify that the quote-push calculator and the mainnet fork receiver stay
# within a 1% tolerance on deterministic execution outputs.
#
# This is the strongest honest repo-side guarantee we can make: deterministic
# PMM + mocked unwind execution must match the calculator closely. It does not
# claim live market venues, gas, or async bridge settlement will stay within 1%.
#
# Usage:
#   bash scripts/verify/check-mainnet-quote-push-prediction-tolerance.sh

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"

source "$PROJECT_ROOT/scripts/lib/load-project-env.sh" >/dev/null 2>&1

if [ -z "${ETHEREUM_MAINNET_RPC:-}" ]; then
  echo "[FAIL] ETHEREUM_MAINNET_RPC is required" >&2
  exit 1
fi

cd "$PROJECT_ROOT/smom-dbis-138/forkproof"
forge test -vv --match-contract AaveQuotePushFlashReceiverMainnetForkTest
120
scripts/verify/check-mainnet-weth-relay-lane.sh
Normal file
@@ -0,0 +1,120 @@
#!/usr/bin/env bash
# Surgical readiness check for the Chain 138 -> Ethereum Mainnet WETH relay lane.
# Default is read-only.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" >/dev/null 2>&1 || true

command -v curl >/dev/null 2>&1 || { echo "curl required" >&2; exit 1; }
command -v jq >/dev/null 2>&1 || { echo "jq required" >&2; exit 1; }
command -v cast >/dev/null 2>&1 || { echo "cast required" >&2; exit 1; }
# node is used below for arbitrary-precision wei comparison; check it up front too.
command -v node >/dev/null 2>&1 || { echo "node required" >&2; exit 1; }

normalize_uint() {
  awk '{print $1}'
}

HEALTH_URL="${CCIP_RELAY_MAINNET_WETH_HEALTH_URL:-${CCIP_RELAY_HEALTH_URL_MAINNET_WETH:-http://192.168.11.11:9860/healthz}}"
PROFILE_ENV="${RELAY_PROFILE_ENV_MAINNET_WETH:-${PROJECT_ROOT}/smom-dbis-138/services/relay/.env.mainnet-weth}"
RPC="${RPC_URL_MAINNET:-${ETHEREUM_MAINNET_RPC:-https://ethereum.publicnode.com}}"
WETH="${WETH9_MAINNET:-0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2}"
DEFAULT_MIN_BRIDGE_WEI="10000000000000000"
MIN_BRIDGE_WEI="${MIN_BRIDGE_WETH_WEI:-$DEFAULT_MIN_BRIDGE_WEI}"

if [[ -f "$PROFILE_ENV" ]]; then
  while IFS='=' read -r key value; do
    [[ -n "$key" ]] || continue
    case "$key" in
      DEST_RELAY_BRIDGE) DEST_RELAY_BRIDGE="${DEST_RELAY_BRIDGE:-$value}" ;;
      DEST_RELAY_ROUTER) DEST_RELAY_ROUTER="${DEST_RELAY_ROUTER:-$value}" ;;
      RELAY_SHEDDING) PROFILE_SHEDDING="$value" ;;
      SOURCE_BRIDGE_ADDRESS) SOURCE_BRIDGE_ADDRESS="${SOURCE_BRIDGE_ADDRESS:-$value}" ;;
    esac
  done < <(grep -E '^(DEST_RELAY_BRIDGE|DEST_RELAY_ROUTER|RELAY_SHEDDING|SOURCE_BRIDGE_ADDRESS)=' "$PROFILE_ENV" || true)
fi

DEST_RELAY_BRIDGE="${DEST_RELAY_BRIDGE:-${CCIP_RELAY_BRIDGE_MAINNET:-0xF9A32F37099c582D28b4dE7Fca6eaC1e5259f939}}"
DEST_RELAY_ROUTER="${DEST_RELAY_ROUTER:-${CCIP_RELAY_ROUTER_MAINNET:-0x416564Ab73Ad5710855E98dC7bC7Bff7387285BA}}"
PROFILE_SHEDDING="${PROFILE_SHEDDING:-unknown}"

tmp_body="$(mktemp)"
trap 'rm -f "$tmp_body"' EXIT

http_code="$(curl -fsS -m 10 -o "$tmp_body" -w '%{http_code}' "$HEALTH_URL" || echo "000")"
if [[ "$http_code" != "200" ]]; then
  echo "Mainnet WETH relay health probe failed: $HEALTH_URL (HTTP $http_code)" >&2
  exit 2
fi

status="$(jq -r '.status // "unknown"' "$tmp_body")"
delivery_enabled="$(jq -r '.monitoring.delivery_enabled // false' "$tmp_body")"
shedding="$(jq -r '.monitoring.shedding // false' "$tmp_body")"
queue_size="$(jq -r '.queue.size // 0' "$tmp_body")"
last_poll="$(jq -r '.last_source_poll.at // "unknown"' "$tmp_body")"
last_msg="$(jq -r '.last_seen_message.message_id // "none"' "$tmp_body")"
service_profile="$(jq -r '.service.profile // "unknown"' "$tmp_body")"
last_error_scope="$(jq -r '.last_error.scope // "none"' "$tmp_body")"
last_error_message="$(jq -r '.last_error.message // ""' "$tmp_body")"
last_error_shortfall="$(jq -r '.last_error.shortfall // ""' "$tmp_body")"
last_error_required="$(jq -r '.last_error.required_amount // ""' "$tmp_body")"
last_error_available="$(jq -r '.last_error.available_amount // ""' "$tmp_body")"
bridge_balance="$( (cast call "$WETH" "balanceOf(address)(uint256)" "$DEST_RELAY_BRIDGE" --rpc-url "$RPC" 2>/dev/null || echo "0") | normalize_uint )"
router_paused="$( (cast call "$DEST_RELAY_ROUTER" "paused()(bool)" --rpc-url "$RPC" 2>/dev/null || echo "unknown") | awk '{print $1}' )"

bridge_ready="$(node -e 'const bal=BigInt(process.argv[1]); const min=BigInt(process.argv[2]); process.stdout.write(bal >= min ? "yes" : "no")' "$bridge_balance" "$MIN_BRIDGE_WEI")"

echo "Mainnet WETH relay lane"
echo " Health URL: $HEALTH_URL"
echo " Service profile: $service_profile"
echo " Reported status: $status"
echo " Delivery enabled: $delivery_enabled"
echo " Shedding: $shedding"
echo " Queue size: $queue_size"
echo " Last source poll: $last_poll"
echo " Last seen message: $last_msg"
echo " Last error scope: $last_error_scope"
if [[ -n "$last_error_message" ]]; then
  echo " Last error message: $last_error_message"
fi
if [[ -n "$last_error_shortfall" ]]; then
  echo " Inventory shortfall: $last_error_shortfall wei"
  echo " Required amount: $last_error_required wei"
  echo " Available amount: $last_error_available wei"
fi
echo " Profile RELAY_SHEDDING: $PROFILE_SHEDDING"
echo " Dest relay bridge: $DEST_RELAY_BRIDGE"
echo " Dest relay router: $DEST_RELAY_ROUTER"
echo " Bridge WETH balance: $bridge_balance wei"
echo " Minimum bridge floor: $MIN_BRIDGE_WEI wei"
echo " Bridge ready: $bridge_ready"
echo " Router paused(): $router_paused"
echo

if [[ "$delivery_enabled" == "true" && "$shedding" == "false" && ( "$router_paused" == "false" || "$router_paused" == "unknown" ) ]]; then
  echo "Lane is already deliverable."
  exit 0
fi

echo "Recommended next actions:"

if [[ "$bridge_ready" != "yes" ]]; then
  echo "1. Top up the Mainnet relay bridge before unpausing delivery:"
  echo "   bash scripts/bridge/fund-mainnet-relay-bridge.sh --target-bridge-balance-wei $MIN_BRIDGE_WEI --dry-run"
fi

if [[ "$last_error_scope" == "bridge_inventory" && -n "$last_error_shortfall" ]]; then
  echo "2. The lane is live but underfunded for at least one queued release. Top up at least the reported shortfall:"
  echo "   Additional WETH needed: $last_error_shortfall wei"
fi

if [[ "$router_paused" == "true" ]]; then
  echo "3. The destination relay router is paused. Unpause it only after bridge inventory is ready:"
  echo "   cast send $DEST_RELAY_ROUTER \"unpause()\" --rpc-url \"\$ETHEREUM_MAINNET_RPC\" --private-key \"\$PRIVATE_KEY\""
fi

if [[ "$delivery_enabled" != "true" || "$shedding" != "false" || "$PROFILE_SHEDDING" == "1" ]]; then
  echo "4. Then unpark the off-chain worker:"
  echo "   bash scripts/deployment/unpause-mainnet-weth-relay-lane.sh --dry-run"
fi
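The profile-env parse above only admits whitelisted keys, by pre-filtering with `grep -E` and then dispatching on a `case` statement; a self-contained sketch with hypothetical keys and values (`0xBRIDGE` is a placeholder):

```shell
profile=$(mktemp)
printf 'DEST_RELAY_BRIDGE=0xBRIDGE\nRELAY_SHEDDING=0\nOTHER=ignored\n' > "$profile"
# Only whitelisted keys survive the grep; OTHER never reaches the case statement.
while IFS='=' read -r key value; do
  case "$key" in
    DEST_RELAY_BRIDGE) echo "bridge=$value" ;;
    RELAY_SHEDDING) echo "shedding=$value" ;;
  esac
done < <(grep -E '^(DEST_RELAY_BRIDGE|RELAY_SHEDDING)=' "$profile")
rm -f "$profile"
```

Splitting on `=` with `IFS` keeps the parse dependency-free, at the cost of truncating values that themselves contain `=`; for simple address/flag profiles that trade-off is fine.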
143
scripts/verify/check-npmplus-duplicate-security-headers.sh
Executable file
@@ -0,0 +1,143 @@
|
||||
#!/usr/bin/env bash
|
||||
set -euo pipefail
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"

DEFAULT_DOMAINS=(
  "sankofa.nexus"
  "phoenix.sankofa.nexus"
  "the-order.sankofa.nexus"
  "dbis-admin.d-bis.org"
  "dbis-api.d-bis.org"
  "dbis-api-2.d-bis.org"
  "secure.d-bis.org"
  "mim4u.org"
  "www.mim4u.org"
  "secure.mim4u.org"
  "training.mim4u.org"
  "rpc-ws-pub.d-bis.org"
  "rpc-http-prv.d-bis.org"
  "rpc-ws-prv.d-bis.org"
  "rpc.public-0138.defi-oracle.io"
  "studio.sankofa.nexus"
  "explorer.d-bis.org"
)

HEADERS_TO_CHECK=(
  "Content-Security-Policy"
  "X-Frame-Options"
  "X-Content-Type-Options"
  "Referrer-Policy"
  "X-XSS-Protection"
  "Cache-Control"
)

failures=0
warn_on_unreachable=0

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[fail] missing required command: $1" >&2
    exit 1
  }
}

require_cmd curl

get_domains_from_latest_backup() {
  require_cmd jq
  local backup
  backup="$(ls -1t "${PROJECT_ROOT}"/.ops-backups/npmplus/proxy-hosts-*.json 2>/dev/null | head -n1 || true)"
  if [[ -z "$backup" ]]; then
    echo "[fail] no NPMplus backup found under ${PROJECT_ROOT}/.ops-backups/npmplus" >&2
    exit 1
  fi
  jq -r '.[] | .domain_names[]?' "$backup" | sort -u
}

ok() {
  printf '[ok] %s\n' "$*"
}

warn() {
  printf '[warn] %s\n' "$*"
}

fail() {
  printf '[fail] %s\n' "$*"
  failures=$((failures + 1))
}

count_header_lines() {
  local headers="$1"
  local name="$2"
  printf '%s\n' "$headers" | grep -iEc "^${name}:" || true
}

first_status_line() {
  local headers="$1"
  printf '%s\n' "$headers" | grep -E '^HTTP/' | head -n1
}

check_domain() {
  local domain="$1"
  local url="https://${domain}/"
  local headers

  if ! headers="$(curl -ksSI --connect-timeout 10 --max-time 20 "$url" 2>/dev/null)"; then
    if [[ "$warn_on_unreachable" -eq 1 ]]; then
      warn "${domain} did not return response headers"
    else
      fail "${domain} did not return response headers"
    fi
    return
  fi

  local status
  status="$(first_status_line "$headers")"
  if [[ -n "$status" ]]; then
    ok "${domain} responded (${status})"
  else
    warn "${domain} returned headers without an HTTP status line"
  fi

  local header
  for header in "${HEADERS_TO_CHECK[@]}"; do
    local count
    count="$(count_header_lines "$headers" "$header")"
    if [[ "$count" -gt 1 ]]; then
      fail "${domain} has duplicate ${header} headers (${count})"
    fi
  done
}

main() {
  local domains=()
  if [[ "$#" -gt 0 && "$1" == "--all-from-latest-backup" ]]; then
    warn_on_unreachable=1
    while IFS= read -r domain; do
      [[ -z "$domain" || "$domain" == *'*'* ]] && continue
      domains+=("$domain")
    done < <(get_domains_from_latest_backup)
  elif [[ "$#" -gt 0 ]]; then
    domains=("$@")
  else
    domains=("${DEFAULT_DOMAINS[@]}")
  fi

  for domain in "${domains[@]}"; do
    check_domain "$domain"
  done

  if [[ "$failures" -gt 0 ]]; then
    echo
    fail "duplicate-header check failed for ${failures} condition(s)"
    exit 1
  fi

  echo
  ok "no duplicate security/cache headers detected"
}

main "$@"
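The core of the checker is `count_header_lines`: a case-insensitive count of lines beginning `Name:` in the captured response headers. A minimal standalone sketch against a hypothetical header block:

```shell
# Hypothetical captured response headers with a duplicated Cache-Control.
headers=$(printf 'HTTP/2 200\nCache-Control: no-store\ncache-control: max-age=0\nX-Frame-Options: DENY\n')
# Mirror of count_header_lines: case-insensitive count of lines starting "Name:".
count=$(printf '%s\n' "$headers" | grep -iEc '^Cache-Control:' || true)
echo "Cache-Control count: $count"   # 2 -> would be flagged as a duplicate
```

Any count above 1 triggers `fail`, so mixed-case duplicates (as injected by a misconfigured proxy layer) are caught too.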
59
scripts/verify/check-pending-transactions-chain138.sh
Executable file
@@ -0,0 +1,59 @@
#!/usr/bin/env bash
# Deployer nonce + Besu txpool snapshot on Chain 138 Core RPC.
# Usage: bash scripts/verify/check-pending-transactions-chain138.sh
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=scripts/lib/load-project-env.sh
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true

source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true
RPC_URL="${RPC_URL_138:-http://${RPC_CORE_1:-192.168.11.211}:8545}"
DEPLOYER="${DEPLOYER_ADDRESS:-0x4A666F96fC8764181194447A7dFdb7d471b301C8}"
DEPLOYER_LC=$(echo "$DEPLOYER" | tr '[:upper:]' '[:lower:]')

echo "=== Chain 138 pending / mempool (Core) ==="
echo "RPC_URL=$RPC_URL"
echo "DEPLOYER=$DEPLOYER"
echo ""

if ! command -v cast >/dev/null 2>&1; then
  echo "cast (foundry) required for nonce checks" >&2
  exit 1
fi

if ! timeout 8 cast chain-id --rpc-url "$RPC_URL" >/dev/null 2>&1; then
  echo "RPC not reachable" >&2
  exit 1
fi

LATEST_HEX=$(cast rpc eth_getTransactionCount "$DEPLOYER_LC" latest --rpc-url "$RPC_URL" 2>/dev/null | tr -d '"')
PENDING_HEX=$(cast rpc eth_getTransactionCount "$DEPLOYER_LC" pending --rpc-url "$RPC_URL" 2>/dev/null | tr -d '"')
LATEST_DEC=$(cast --to-dec "$LATEST_HEX" 2>/dev/null || echo "0")
PENDING_DEC=$(cast --to-dec "$PENDING_HEX" 2>/dev/null || echo "0")
echo "Deployer latest nonce (next mined): $LATEST_DEC"
echo "Deployer pending nonce view: $PENDING_DEC"
echo "Delta (pending - latest): $((PENDING_DEC - LATEST_DEC))"
echo ""

echo "--- txpool_besuStatistics ---"
cast rpc txpool_besuStatistics --rpc-url "$RPC_URL" 2>/dev/null || echo "(method missing or error)"
echo ""

echo "--- txpool_besuPendingTransactions (count) ---"
PT=$(cast rpc txpool_besuPendingTransactions --rpc-url "$RPC_URL" 2>/dev/null || echo "[]")
if command -v jq >/dev/null 2>&1; then
  echo "$PT" | jq 'if type == "array" then length else . end'
  if echo "$PT" | jq -e 'type == "array" and length > 0' >/dev/null 2>&1; then
    echo ""
    echo "--- pending summary (hash, from, nonce, gasPrice) ---"
    echo "$PT" | jq '[.[] | {hash, from, nonce, gasPrice}]'
  fi
else
  echo "$PT" | head -c 500
  echo ""
fi

echo ""
echo "=== Done ==="
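The nonce delta reported above is the number of deployer transactions signed but not yet mined. The hex-to-decimal step can also be done with shell arithmetic alone, without `cast --to-dec` (hypothetical nonce values for illustration):

```shell
# Hypothetical hex nonces as eth_getTransactionCount would return them.
latest_hex=0x1a    # next nonce at block tag "latest"
pending_hex=0x1c   # next nonce including queued/pending transactions
# POSIX shell arithmetic accepts 0x-prefixed constants, so no external tool is needed.
latest_dec=$((latest_hex))
pending_dec=$((pending_hex))
echo "Delta (pending - latest): $((pending_dec - latest_dec))"
```

A non-zero delta that persists across runs usually means transactions are stuck in the pool, which is exactly what the `txpool_besu*` dumps below help diagnose.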
@@ -3,7 +3,7 @@
# Uses eth_call (curl) for compatibility with RPCs that reject some cast call formats.
#
# Canonical source for addresses:
# docs/11-references/ADDRESS_MATRIX_AND_STATUS.md (updated 2026-03-26)
# docs/11-references/ADDRESS_MATRIX_AND_STATUS.md (updated 2026-04-02)
# docs/11-references/DEPLOYED_TOKENS_BRIDGES_LPS_AND_ROUTING_STATUS.md
#
# Usage: ./scripts/verify/check-pmm-pool-balances-chain138.sh [RPC_URL]
@@ -87,15 +87,15 @@ print_pool() {
echo "=============================================="
echo " Chain 138 — PMM pool balances"
echo " RPC: $RPC"
echo " Pool map: corrected 2026-03-26 public + private funded set"
echo " Pool map: canonical 2026-04-02 stable pools + funded XAU/public/private set"
echo "=============================================="
echo ""

echo "Public pools"
echo "------------"
print_pool "Pool 1: cUSDT / cUSDC" "0xff8d3b8fDF7B112759F076B69f4271D4209C0849" "cUSDT" "$cUSDT" "cUSDC" "$cUSDC" "public"
print_pool "Pool 2: cUSDT / USDT (official mirror)" "0x6fc60DEDc92a2047062294488539992710b99D71" "cUSDT" "$cUSDT" "USDT" "$OFFICIAL_USDT" "public"
print_pool "Pool 3: cUSDC / USDC (official mirror)" "0x0309178ae30302D83c76d6Dd402a684eF3160eec" "cUSDC" "$cUSDC" "USDC" "$OFFICIAL_USDC" "public"
print_pool "Pool 1: cUSDT / cUSDC" "0x9e89bAe009adf128782E19e8341996c596ac40dC" "cUSDT" "$cUSDT" "cUSDC" "$cUSDC" "public"
print_pool "Pool 2: cUSDT / USDT (official mirror)" "0x866Cb44b59303d8dc5f4F9E3E7A8e8b0bf238d66" "cUSDT" "$cUSDT" "USDT" "$OFFICIAL_USDT" "public"
print_pool "Pool 3: cUSDC / USDC (official mirror)" "0xc39B7D0F40838cbFb54649d327f49a6DAC964062" "cUSDC" "$cUSDC" "USDC" "$OFFICIAL_USDC" "public"
print_pool "Pool 4: cUSDT / cXAUC" "0x1AA55E2001E5651349AfF5A63FD7A7Ae44f0F1b0" "cUSDT" "$cUSDT" "cXAUC" "$cXAUC" "public"
print_pool "Pool 5: cUSDC / cXAUC" "0xEA9Ac6357CaCB42a83b9082B870610363B177cBa" "cUSDC" "$cUSDC" "cXAUC" "$cXAUC" "public"
print_pool "Pool 6: cEURT / cXAUC" "0xbA99bc1eAAC164569d5AcA96C806934DDaF970Cf" "cEURT" "$cEURT" "cXAUC" "$cXAUC" "public"
89
scripts/verify/check-proxmox-mgmt-fqdn.sh
Executable file
@@ -0,0 +1,89 @@
#!/usr/bin/env bash
# Check LAN DNS (or /etc/hosts) for Proxmox hypervisor FQDNs: *.sankofa.nexus
# Sources config/ip-addresses.conf (PROXMOX_FQDN_*). Read-only unless --ssh is passed.
#
# Usage (repo root):
#   bash scripts/verify/check-proxmox-mgmt-fqdn.sh
#   bash scripts/verify/check-proxmox-mgmt-fqdn.sh --print-hosts  # snippet for /etc/hosts on operator laptops
#   bash scripts/verify/check-proxmox-mgmt-fqdn.sh --ssh          # BatchMode ssh after resolve (root)
#
# Exit: 0 if every FQDN resolves to an address; 1 otherwise (--ssh failure does not change exit; see output)

set -uo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=/dev/null
source "$PROJECT_ROOT/config/ip-addresses.conf"

PRINT_HOSTS=0
DO_SSH=0
for a in "$@"; do
  case "$a" in
    --print-hosts) PRINT_HOSTS=1 ;;
    --ssh) DO_SSH=1 ;;
    -h | --help) sed -n '1,18p' "$0"; exit 0 ;;
  esac
done

fqdns=(
  "${PROXMOX_FQDN_ML110}"
  "${PROXMOX_FQDN_R630_01}"
  "${PROXMOX_FQDN_R630_02}"
  "${PROXMOX_FQDN_R630_03}"
  "${PROXMOX_FQDN_R630_04}"
)
ips=(
  "${PROXMOX_HOST_ML110}"
  "${PROXMOX_HOST_R630_01}"
  "${PROXMOX_HOST_R630_02}"
  "${PROXMOX_HOST_R630_03}"
  "${PROXMOX_HOST_R630_04}"
)

echo "=== Proxmox management FQDN resolution ==="
ok=0
bad=0
for i in "${!fqdns[@]}"; do
  f="${fqdns[$i]}"
  ip="${ips[$i]}"
  printf '%-28s ' "$f"
  line=$(getent ahosts "$f" 2>/dev/null | head -1 || true)
  if [[ -n "$line" ]]; then
    echo "OK ($line)"
    ok=$((ok + 1))
  else
    echo "MISS (expected ${ip} on VLAN 11 — add DNS A-record or /etc/hosts)"
    bad=$((bad + 1))
  fi
done
echo ""
echo "Summary: resolved=$ok missing=$bad"

if [[ "$PRINT_HOSTS" -eq 1 ]]; then
  echo ""
  echo "=== /etc/hosts snippet (operator workstation on LAN) ==="
  for i in "${!fqdns[@]}"; do
    echo "${ips[$i]} ${fqdns[$i]}"
  done
fi

if [[ "$DO_SSH" -eq 1 ]]; then
  SSH_USER="${SSH_USER:-${PROXMOX_SSH_USER:-root}}"
  SSH_OPTS=(-o BatchMode=yes -o ConnectTimeout=10 -o StrictHostKeyChecking=accept-new)
  echo ""
  echo "=== SSH by FQDN (only if resolved) ==="
  for i in "${!fqdns[@]}"; do
    f="${fqdns[$i]}"
    getent ahosts "$f" &>/dev/null || continue
    printf '%-28s ' "$f"
    if out=$(ssh "${SSH_OPTS[@]}" "${SSH_USER}@${f}" "hostname -f 2>/dev/null || hostname" 2>&1); then
      echo "OK ($out)"
    else
      echo "FAIL $out"
    fi
  done
fi

[[ "$bad" -eq 0 ]]
exit $?
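The `--print-hosts` output is just the positional pairing of the `ips` and `fqdns` arrays. The same pairing can be sketched portably with an index loop over two hypothetical lists (stand-ins for the `PROXMOX_HOST_*`/`PROXMOX_FQDN_*` values):

```shell
# Hypothetical FQDN list; real values come from config/ip-addresses.conf.
fqdns='pve1.sankofa.nexus
pve2.sankofa.nexus'
i=1
snippet=''
for ip in 192.168.11.21 192.168.11.22; do
  # Pair IP N with FQDN N, as the --print-hosts loop does by array index.
  fqdn=$(printf '%s\n' "$fqdns" | sed -n "${i}p")
  snippet="${snippet}${ip} ${fqdn}
"
  i=$((i + 1))
done
printf '%s' "$snippet"
```

Each emitted line is a ready-to-append `/etc/hosts` entry, so the two lists must stay index-aligned; reordering one without the other silently maps the wrong IP to a hypervisor.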
350
scripts/verify/check-public-pmm-dry-run-readiness.sh
Executable file
@@ -0,0 +1,350 @@
#!/usr/bin/env bash
# Read-only readiness checker for public-network PMM loop dry-runs.
# Reports what is live, what is configured, and what content is still missing.

set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"

# shellcheck disable=SC1091
source "$ROOT/scripts/lib/load-project-env.sh" 2>/dev/null || true

JSON="$ROOT/cross-chain-pmm-lps/config/deployment-status.json"
FLASH_SCAN="$ROOT/config/flash_candidates-chain138.json"
MAINNET_BALANCER_VAULT_ADDR="${MAINNET_BALANCER_VAULT:-0xBA12222222228d8Ba445958a75a0704d566BF2C8}"
MAINNET_AAVE_V3_POOL_ADDR="${MAINNET_AAVE_V3_POOL:-0x87870Bca3F3fD6335C3F4ce8392D69350B4fA4E2}"
MAINNET_AAVE_V3_PROVIDER_ADDR="${MAINNET_AAVE_V3_POOL_ADDRESSES_PROVIDER:-0x2f39d218133AFaB8F2B819B1066c7E434Ad94E9e}"
balancer_usdc_balance=""
balancer_usdt_balance=""
aave_fee=""
aave_usdc_balance=""
aave_usdt_balance=""
warn_count=0

ok() {
  printf '[ok] %s\n' "$*"
}

warn() {
  printf '[warn] %s\n' "$*"
  warn_count=$((warn_count + 1))
}

note() {
  printf '[note] %s\n' "$*"
}

section() {
  printf '\n%s\n' "$*"
}

have_cmd() {
  command -v "$1" >/dev/null 2>&1
}

probe_pool_reserves() {
  local pool="$1"
  local rpc="$2"
  cast call "$pool" 'getVaultReserve()(uint256,uint256)' --rpc-url "$rpc" 2>/dev/null |
    tr -d '()' |
    paste -sd '/' -
}

probe_erc20_balance() {
  local token="$1"
  local holder="$2"
  local rpc="$3"
  cast call "$token" 'balanceOf(address)(uint256)' "$holder" --rpc-url "$rpc" 2>/dev/null
}

probe_aave_reserve_data() {
  local pool="$1"
  local token="$2"
  local rpc="$3"
  cast call "$pool" \
    'getReserveData(address)((uint256,uint128,uint128,uint128,uint128,uint128,uint40,address,address,address,address,uint8))' \
    "$token" \
    --rpc-url "$rpc" \
    2>/dev/null
}

strip_cast_scalar() {
  printf '%s' "$1" | awk '{print $1}' | tr -d '(),'
}

extract_nth_address() {
  local payload="$1"
  local index="$2"
  printf '%s\n' "$payload" | grep -oE '0x[a-fA-F0-9]{40}' | sed -n "${index}p"
}

fmt6() {
  local raw="$1"
  awk -v raw="$raw" 'BEGIN { printf "%.6f", raw / 1000000 }'
}

section "Public PMM Dry-Run Readiness"

if [[ ! -f "$JSON" ]]; then
  echo "[fail] missing $JSON" >&2
  exit 1
fi

MAINNET_USDC_ADDR="$(jq -r '.chains["1"].anchorAddresses.USDC // "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"' "$JSON")"
MAINNET_USDT_ADDR="$(jq -r '.chains["1"].anchorAddresses.USDT // "0xdAC17F958D2ee523a2206206994597C13D831ec7"' "$JSON")"

section "Live Mainnet cWUSD*/USD* pools"
jq -r '
  .chains["1"].pmmPools[]
  | select((.base == "cWUSDT" or .base == "cWUSDC") and (.quote == "USDC" or .quote == "USDT"))
  | " - \(.base) / \(.quote) -> \(.poolAddress) (feeBps=\(.feeBps // 10))"
' "$JSON"

section "Config surfaces"
if [[ -n "${ETHEREUM_MAINNET_RPC:-}" ]]; then
  ok "ETHEREUM_MAINNET_RPC is set"
else
  warn "ETHEREUM_MAINNET_RPC is not set; live Mainnet reserve reads will fall back to manual -B/-Q"
fi

if [[ -n "${DODO_PMM_INTEGRATION_MAINNET:-}" ]]; then
  ok "DODO_PMM_INTEGRATION_MAINNET is set to ${DODO_PMM_INTEGRATION_MAINNET}"
else
  warn "DODO_PMM_INTEGRATION_MAINNET is not set in the canonical env"
fi

if [[ -n "${FLASH_VAULT:-}" ]]; then
  ok "FLASH_VAULT is set to ${FLASH_VAULT}"
else
  note "FLASH_VAULT is not set; same-chain ERC-3156 stable flash dry-runs still require a manual lender address"
fi

if [[ -n "${FLASH_VAULT_TOKEN:-}" ]]; then
  ok "FLASH_VAULT_TOKEN is set to ${FLASH_VAULT_TOKEN}"
else
  note "FLASH_VAULT_TOKEN is not set; ERC-3156 lender probing cannot auto-select the flash asset"
fi

if [[ -n "${FLASH_PROVIDER_RPC_URL:-}" ]]; then
  ok "FLASH_PROVIDER_RPC_URL is set to ${FLASH_PROVIDER_RPC_URL}"
elif [[ -n "${RPC_URL_138_PUBLIC:-}" || -n "${RPC_URL_138:-}" || -n "${CHAIN138_RPC_URL:-}" || -n "${ETHEREUM_MAINNET_RPC:-}" ]]; then
  ok "flash provider probe has RPC fallback(s) available via env"
else
  warn "no flash-provider RPC env is set; live ERC-3156 probe would need --flash-provider-rpc-url"
fi

section "Known public flash-lender candidates"
ok "Mainnet Balancer vault candidate=${MAINNET_BALANCER_VAULT_ADDR}"
ok "Mainnet Aave V3 pool candidate=${MAINNET_AAVE_V3_POOL_ADDR}"
ok "Mainnet Aave V3 pool provider candidate=${MAINNET_AAVE_V3_PROVIDER_ADDR}"

section "Live reserve probe"
if have_cmd cast && [[ -n "${ETHEREUM_MAINNET_RPC:-}" ]]; then
  while IFS=$'\t' read -r base quote pool; do
    reserves="$(probe_pool_reserves "$pool" "$ETHEREUM_MAINNET_RPC" || true)"
    if [[ -n "$reserves" ]]; then
      ok "${base}/${quote} reserves raw: ${reserves}"
    else
      warn "could not read getVaultReserve() for ${base}/${quote} at ${pool}"
    fi
  done < <(
    jq -r '
      .chains["1"].pmmPools[]
      | select((.base == "cWUSDT" or .base == "cWUSDC") and (.quote == "USDC" or .quote == "USDT"))
      | [.base, .quote, .poolAddress] | @tsv
    ' "$JSON"
  )
else
  warn "cast and ETHEREUM_MAINNET_RPC are both required for live reserve probing"
fi

section "Public flash-lender probe"
if have_cmd cast && [[ -n "${ETHEREUM_MAINNET_RPC:-}" ]]; then
  balancer_code="$(cast code "$MAINNET_BALANCER_VAULT_ADDR" --rpc-url "$ETHEREUM_MAINNET_RPC" 2>/dev/null || true)"
  if [[ -n "$balancer_code" && "$balancer_code" != "0x" ]]; then
    ok "Balancer mainnet vault bytecode present at ${MAINNET_BALANCER_VAULT_ADDR}"
    balancer_collector="$(cast call "$MAINNET_BALANCER_VAULT_ADDR" 'getProtocolFeesCollector()(address)' --rpc-url "$ETHEREUM_MAINNET_RPC" 2>/dev/null || true)"
    if [[ -n "$balancer_collector" ]]; then
      ok "Balancer ProtocolFeesCollector=${balancer_collector}"
      balancer_fee="$(cast call "$balancer_collector" 'getFlashLoanFeePercentage()(uint256)' --rpc-url "$ETHEREUM_MAINNET_RPC" 2>/dev/null || true)"
      if [[ -n "$balancer_fee" ]]; then
        ok "Balancer getFlashLoanFeePercentage() responded: ${balancer_fee}"
      else
        warn "could not read getFlashLoanFeePercentage() from collector ${balancer_collector}"
      fi
    else
      warn "could not read getProtocolFeesCollector() from ${MAINNET_BALANCER_VAULT_ADDR}"
    fi
    balancer_usdc_balance="$(strip_cast_scalar "$(probe_erc20_balance "$MAINNET_USDC_ADDR" "$MAINNET_BALANCER_VAULT_ADDR" "$ETHEREUM_MAINNET_RPC" || true)")"
    if [[ -n "$balancer_usdc_balance" ]]; then
      ok "Balancer vault USDC balance raw=${balancer_usdc_balance} (~$(fmt6 "$balancer_usdc_balance") USDC)"
    else
      warn "could not read Balancer vault USDC balance"
    fi
    balancer_usdt_balance="$(strip_cast_scalar "$(probe_erc20_balance "$MAINNET_USDT_ADDR" "$MAINNET_BALANCER_VAULT_ADDR" "$ETHEREUM_MAINNET_RPC" || true)")"
    if [[ -n "$balancer_usdt_balance" ]]; then
      ok "Balancer vault USDT balance raw=${balancer_usdt_balance} (~$(fmt6 "$balancer_usdt_balance") USDT)"
    else
      warn "could not read Balancer vault USDT balance"
    fi
  else
    warn "no bytecode found at MAINNET_BALANCER_VAULT=${MAINNET_BALANCER_VAULT_ADDR}"
  fi
else
  warn "cast and ETHEREUM_MAINNET_RPC are required to probe the public Balancer candidate"
fi

if have_cmd cast && [[ -n "${ETHEREUM_MAINNET_RPC:-}" ]]; then
  aave_code="$(cast code "$MAINNET_AAVE_V3_POOL_ADDR" --rpc-url "$ETHEREUM_MAINNET_RPC" 2>/dev/null || true)"
  if [[ -n "$aave_code" && "$aave_code" != "0x" ]]; then
    ok "Aave V3 mainnet pool bytecode present at ${MAINNET_AAVE_V3_POOL_ADDR}"
    aave_provider="$(cast call "$MAINNET_AAVE_V3_POOL_ADDR" 'ADDRESSES_PROVIDER()(address)' --rpc-url "$ETHEREUM_MAINNET_RPC" 2>/dev/null || true)"
    if [[ -n "$aave_provider" ]]; then
      if [[ "${aave_provider,,}" == "${MAINNET_AAVE_V3_PROVIDER_ADDR,,}" ]]; then
        ok "Aave ADDRESSES_PROVIDER() matches ${aave_provider}"
      else
        warn "Aave ADDRESSES_PROVIDER()=${aave_provider} does not match configured/default ${MAINNET_AAVE_V3_PROVIDER_ADDR}"
      fi
    else
      warn "could not read ADDRESSES_PROVIDER() from ${MAINNET_AAVE_V3_POOL_ADDR}"
    fi
    aave_fee="$(cast call "$MAINNET_AAVE_V3_POOL_ADDR" 'FLASHLOAN_PREMIUM_TOTAL()(uint128)' --rpc-url "$ETHEREUM_MAINNET_RPC" 2>/dev/null || true)"
    if [[ -n "$aave_fee" ]]; then
      ok "Aave FLASHLOAN_PREMIUM_TOTAL() responded: ${aave_fee}"
    else
      warn "could not read FLASHLOAN_PREMIUM_TOTAL() from ${MAINNET_AAVE_V3_POOL_ADDR}"
    fi
    aave_usdc_reserve="$(probe_aave_reserve_data "$MAINNET_AAVE_V3_POOL_ADDR" "$MAINNET_USDC_ADDR" "$ETHEREUM_MAINNET_RPC" || true)"
    aave_usdc_atoken="$(extract_nth_address "$aave_usdc_reserve" 2)"
    if [[ -n "$aave_usdc_atoken" ]]; then
      aave_usdc_balance="$(strip_cast_scalar "$(probe_erc20_balance "$MAINNET_USDC_ADDR" "$aave_usdc_atoken" "$ETHEREUM_MAINNET_RPC" || true)")"
      if [[ -n "$aave_usdc_balance" ]]; then
        ok "Aave USDC reserve proxy=${aave_usdc_atoken} available balance raw=${aave_usdc_balance} (~$(fmt6 "$aave_usdc_balance") USDC)"
      else
        warn "could not read Aave USDC balance from reserve proxy ${aave_usdc_atoken}"
      fi
    else
      warn "could not derive Aave USDC reserve proxy from getReserveData()"
    fi
    aave_usdt_reserve="$(probe_aave_reserve_data "$MAINNET_AAVE_V3_POOL_ADDR" "$MAINNET_USDT_ADDR" "$ETHEREUM_MAINNET_RPC" || true)"
    aave_usdt_atoken="$(extract_nth_address "$aave_usdt_reserve" 2)"
    if [[ -n "$aave_usdt_atoken" ]]; then
      aave_usdt_balance="$(strip_cast_scalar "$(probe_erc20_balance "$MAINNET_USDT_ADDR" "$aave_usdt_atoken" "$ETHEREUM_MAINNET_RPC" || true)")"
      if [[ -n "$aave_usdt_balance" ]]; then
        ok "Aave USDT reserve proxy=${aave_usdt_atoken} available balance raw=${aave_usdt_balance} (~$(fmt6 "$aave_usdt_balance") USDT)"
      else
        warn "could not read Aave USDT balance from reserve proxy ${aave_usdt_atoken}"
      fi
    else
      warn "could not derive Aave USDT reserve proxy from getReserveData()"
    fi
  else
    warn "no bytecode found at MAINNET_AAVE_V3_POOL=${MAINNET_AAVE_V3_POOL_ADDR}"
  fi
else
  warn "cast and ETHEREUM_MAINNET_RPC are required to probe the public Aave candidate"
fi

section "Chain 138 flash candidates"
if [[ -f "$FLASH_SCAN" ]]; then
  jq -r '
    .candidates[]
    | " - \(.name) -> \(.address) [" + (.matchedSignatures | join(", ")) + "]"
  ' "$FLASH_SCAN"
  candidate_count="$(jq '.candidates | length' "$FLASH_SCAN")"
  if [[ "$candidate_count" -eq 1 ]] && jq -e '.candidates[0].name == "WETH10"' "$FLASH_SCAN" >/dev/null; then
    warn "only WETH10 is auto-detectable as a flash-style lender on Chain 138 right now; no canonical USDC/USDT same-chain lender is recorded"
  fi
else
  warn "missing $FLASH_SCAN; flash candidate sweep has not been preserved"
fi

section "Remaining manual inputs for a public-network dry-run"
echo " - Choose the public flash lender path to simulate: Balancer (0-fee vault balances above) or Aave V3 (5 bps, reserve balances above)"
echo " - Exact flash asset (USDC vs USDT)"
echo " - For Balancer/Aave public paths: feed one of the live cap values above into --flash-provider-cap, because the PMM tool's live probe is ERC-3156-only today"
echo " - External unwind venue/path or the effective --external-price assumption"
echo " - Pool deployment target amounts (--target-quote-per-pool or --target-total-quote)"
echo " - Gas assumptions for the target chain (--gas-tx-count, --gas-per-tx, --max-fee-gwei, --native-token-price)"

section "Suggested next command"
if [[ -n "${FLASH_VAULT:-}" && -n "${FLASH_VAULT_TOKEN:-}" ]]; then
  cat <<EOF
node scripts/analytics/pmm-flash-push-break-even.mjs \\
-B 999999997998 -Q 999999997998 -x 10000000000 \\
--full-loop-dry-run \\
--base-asset cUSDC --base-decimals 6 \\
--quote-asset USDC --quote-decimals 6 \\
--external-price 1.12 \\
--flash-provider-cap 250000000000 \\
--seed-pools 2 --target-quote-per-pool 500000000
EOF
else
  cat <<'EOF'
node scripts/analytics/pmm-flash-push-break-even.mjs \
-B 999999997998 -Q 999999997998 -x 10000000000 \
--full-loop-dry-run \
--base-asset cUSDC --base-decimals 6 \
--quote-asset USDC --quote-decimals 6 \
--external-price 1.12 \
--flash-provider-address <ERC3156 lender> \
--flash-provider-token-address <flash asset> \
--flash-provider-rpc-url <rpc> \
--flash-provider-cap <max loan> \
--seed-pools 2 --target-quote-per-pool 500000000
EOF
fi

section "Public mainnet dry-run templates"
cat <<EOF
Balancer USDC:
node scripts/analytics/pmm-flash-push-break-even.mjs \\
-B 4854640 -Q 4854839 -x 1000000 \\
--full-loop-dry-run \\
--base-asset cWUSDC --base-decimals 6 \\
--quote-asset USDC --quote-decimals 6 \\
--flash-provider-name "Balancer mainnet vault" \\
--flash-provider-cap ${balancer_usdc_balance:-<balancer-usdc-cap>} \\
--flash-fee-bps 0 --lp-fee-bps 10 \\
--external-price <effective USDC out per cWUSDC> \\
--gas-tx-count 3 --gas-per-tx 250000 --max-fee-gwei 40 --native-token-price 3200

Balancer USDT:
node scripts/analytics/pmm-flash-push-break-even.mjs \\
-B 4846579 -Q 4846778 -x 1000000 \\
--full-loop-dry-run \\
--base-asset cWUSDC --base-decimals 6 \\
--quote-asset USDT --quote-decimals 6 \\
--flash-provider-name "Balancer mainnet vault" \\
--flash-provider-cap ${balancer_usdt_balance:-<balancer-usdt-cap>} \\
--flash-fee-bps 0 --lp-fee-bps 10 \\
--external-price <effective USDT out per cWUSDC> \\
--gas-tx-count 3 --gas-per-tx 250000 --max-fee-gwei 40 --native-token-price 3200

Aave USDC:
node scripts/analytics/pmm-flash-push-break-even.mjs \\
-B 4854640 -Q 4854839 -x 1000000 \\
--full-loop-dry-run \\
--base-asset cWUSDC --base-decimals 6 \\
--quote-asset USDC --quote-decimals 6 \\
--flash-provider-name "Aave V3 mainnet USDC" \\
--flash-provider-cap ${aave_usdc_balance:-<aave-usdc-cap>} \\
--flash-fee-bps ${aave_fee:-5} --lp-fee-bps 10 \\
--external-price <effective USDC out per cWUSDC> \\
--gas-tx-count 3 --gas-per-tx 250000 --max-fee-gwei 40 --native-token-price 3200

Aave USDT:
node scripts/analytics/pmm-flash-push-break-even.mjs \\
-B 4846579 -Q 4846778 -x 1000000 \\
--full-loop-dry-run \\
--base-asset cWUSDC --base-decimals 6 \\
--quote-asset USDT --quote-decimals 6 \\
--flash-provider-name "Aave V3 mainnet USDT" \\
--flash-provider-cap ${aave_usdt_balance:-<aave-usdt-cap>} \\
--flash-fee-bps ${aave_fee:-5} --lp-fee-bps 10 \\
--external-price <effective USDT out per cWUSDC> \\
--gas-tx-count 3 --gas-per-tx 250000 --max-fee-gwei 40 --native-token-price 3200
EOF

printf '\nSummary: %d warning(s).\n' "$warn_count"
exit 0
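The `fmt6` helper above converts raw 6-decimal ERC-20 amounts (USDC/USDT base units) into human-readable units. The same awk one-liner in isolation, with a hypothetical raw balance:

```shell
# A raw USDC balance of 1234567 base units is 1.234567 USDC at 6 decimals.
raw=1234567
human=$(awk -v raw="$raw" 'BEGIN { printf "%.6f", raw / 1000000 }')
echo "$human"   # 1.234567
```

awk is used instead of shell arithmetic because POSIX `$(( ))` is integer-only and the raw on-chain balances can exceed 32-bit range.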
@@ -8,8 +8,10 @@
# 1 = one or more endpoints returned the wrong shape or were unreachable
# Set SKIP_EXIT=1 to print diagnostics but exit 0.
# Set KEEP_GOING=1 to keep checking every endpoint before exiting non-zero.
# Set SKIP_TOKEN_LIST=1 / SKIP_COINGECKO=1 / SKIP_CMC=1 / SKIP_NETWORKS=1 to skip individual checks.
# Set SKIP_BRIDGE_ROUTES=0 to assert /api/v1/bridge/routes payload shape.
# Set SKIP_BRIDGE_PREFLIGHT=0 to assert /api/v1/bridge/preflight payload shape.
# Set SKIP_GAS_REGISTRY=0 to assert /api/v1/report/gas-registry payload shape.

set -euo pipefail

@@ -98,29 +100,37 @@ else
fi
log ""

check_json_shape \
  "token-list" \
  "${BASE_URL%/}${TA_PREFIX}/api/v1/report/token-list?chainId=138" \
  'type == "object" and (.tokens | type == "array")' \
  'object with .tokens[]'
if [[ "${SKIP_TOKEN_LIST:-0}" != "1" ]]; then
  check_json_shape \
    "token-list" \
    "${BASE_URL%/}${TA_PREFIX}/api/v1/report/token-list?chainId=138" \
    'type == "object" and (.tokens | type == "array")' \
    'object with .tokens[]'
fi

check_json_shape \
  "coingecko report" \
  "${BASE_URL%/}${TA_PREFIX}/api/v1/report/coingecko?chainId=138" \
  'type == "object"' \
  'token-aggregation report JSON object'
if [[ "${SKIP_COINGECKO:-0}" != "1" ]]; then
  check_json_shape \
    "coingecko report" \
    "${BASE_URL%/}${TA_PREFIX}/api/v1/report/coingecko?chainId=138" \
    'type == "object"' \
    'token-aggregation report JSON object'
fi

check_json_shape \
  "cmc report" \
  "${BASE_URL%/}${TA_PREFIX}/api/v1/report/cmc?chainId=138" \
  'type == "object"' \
  'token-aggregation report JSON object'
if [[ "${SKIP_CMC:-0}" != "1" ]]; then
  check_json_shape \
    "cmc report" \
    "${BASE_URL%/}${TA_PREFIX}/api/v1/report/cmc?chainId=138" \
    'type == "object"' \
    'token-aggregation report JSON object'
fi

check_json_shape \
  "networks" \
  "${BASE_URL%/}${TA_PREFIX}/api/v1/networks" \
  'type == "object" and (.networks | type == "array")' \
  'object with .networks[]'
if [[ "${SKIP_NETWORKS:-0}" != "1" ]]; then
  check_json_shape \
    "networks" \
    "${BASE_URL%/}${TA_PREFIX}/api/v1/networks" \
    'type == "object" and (.networks | type == "array")' \
    'object with .networks[]'
fi

# Bridge routes (requires token-aggregation build with GET /api/v1/bridge/routes). Off by default until edge is deployed.
if [[ "${SKIP_BRIDGE_ROUTES:-1}" != "1" ]]; then
@@ -140,6 +150,14 @@ if [[ "${SKIP_BRIDGE_PREFLIGHT:-1}" != "1" ]]; then
    'object with .gruTransport.summary and .gruTransport.blockedPairs[]'
fi

if [[ "${SKIP_GAS_REGISTRY:-1}" != "1" ]]; then
  check_json_shape \
    "gas-registry" \
    "${BASE_URL%/}${TA_PREFIX}/api/v1/report/gas-registry?chainId=10" \
    'type == "object" and (.gasAssetFamilies | type == "array") and (.gasRedeemGroups | type == "array") and (.chains | type == "array") and (.gasProtocolExposure | type == "array")' \
    'object with .gasAssetFamilies[], .gasRedeemGroups[], .chains[], and .gasProtocolExposure[]'
fi

log ""
if (( HAD_FAILURE > 0 )); then
  if [[ "$SKIP_EXIT" == "1" ]]; then
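The gating added in this hunk rests on `${VAR:-default}` expansion: default-on checks use `:-0` (run unless explicitly set to 1), while the opt-in bridge/gas checks use `:-1` (skip unless explicitly set to 0). A minimal sketch of both defaults with neither variable set:

```shell
# With neither variable set, the token-list check runs and the
# bridge-routes check is skipped, matching the script's defaults.
unset SKIP_TOKEN_LIST SKIP_BRIDGE_ROUTES
run_token_list=no
run_bridge_routes=no
if [ "${SKIP_TOKEN_LIST:-0}" != "1" ]; then run_token_list=yes; fi
if [ "${SKIP_BRIDGE_ROUTES:-1}" != "1" ]; then run_bridge_routes=yes; fi
echo "token-list=$run_token_list bridge-routes=$run_bridge_routes"
```

So `SKIP_BRIDGE_ROUTES=0` is an explicit opt-in rather than a double negative: the bridge assertions stay off until the corresponding edge build is deployed.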
155
scripts/verify/check-publication-pack-explorer-status.mjs
Normal file
@@ -0,0 +1,155 @@
#!/usr/bin/env node

import fs from 'fs';
import path from 'path';
import https from 'https';

const repoRoot = path.resolve(path.dirname(new URL(import.meta.url).pathname), '..', '..');
const packsRoot = path.join(repoRoot, 'reports', 'status', 'publication-packs');
const outJson = path.join(repoRoot, 'reports', 'status', 'publication-pack-explorer-status.json');
const outMd = path.join(repoRoot, 'docs', '11-references', 'PUBLICATION_PACK_EXPLORER_STATUS.md');

const chainIdToApi = {
  '1': 'https://api.etherscan.io/v2/api',
  '10': 'https://api.etherscan.io/v2/api',
  '56': 'https://api.etherscan.io/v2/api',
  '137': 'https://api.etherscan.io/v2/api',
  '8453': 'https://api.etherscan.io/v2/api',
};

const apiKey = process.env.ETHERSCAN_API_KEY || '';
if (!apiKey) {
  console.error('ETHERSCAN_API_KEY is required for pack explorer status checks.');
  process.exit(1);
}

function ensureDir(filePath) {
  fs.mkdirSync(path.dirname(filePath), { recursive: true });
}

function getJson(url) {
  return new Promise((resolve, reject) => {
    https.get(url, {
      headers: {
        'user-agent': 'proxmox-publication-status-checker/1.0',
        accept: 'application/json',
      },
    }, (res) => {
      let data = '';
      res.on('data', (chunk) => {
        data += chunk;
      });
      res.on('end', () => {
        try {
          resolve(JSON.parse(data));
        } catch (err) {
          reject(new Error(`Invalid JSON from ${url}: ${err.message}`));
        }
      });
    }).on('error', reject);
  });
}

function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function fetchStatus(chainId, address) {
  const api = chainIdToApi[chainId];
  if (!api) return { status: 'unsupported', detail: 'No configured explorer API' };
  const url = `${api}?chainid=${chainId}&module=contract&action=getsourcecode&address=${address}&apikey=${apiKey}`;

  for (let attempt = 1; attempt <= 4; attempt += 1) {
    const json = await getJson(url);
    const result = Array.isArray(json.result) ? json.result[0] : null;
    if (result) {
      const source = (result.SourceCode || '').trim();
      const name = (result.ContractName || '').trim();
      if (source || name) {
        return {
          status: 'verified',
          contractName: name || null,
          detail: result.CompilerVersion || 'verified',
        };
      }
      return { status: 'unverified', detail: result.ABI || 'No source metadata' };
    }

    const message = typeof json.message === 'string' ? json.message : '';
    const detail = typeof json.result === 'string' && json.result
      ? `${message}: ${json.result}`
      : (message || 'No result');
    const retryable = /rate limit|timeout|temporarily unavailable|busy/i.test(detail);
    if (!retryable || attempt === 4) {
      return { status: 'unknown', detail };
    }
    await sleep(400 * attempt);
  }

  return { status: 'unknown', detail: 'Status check exhausted retries' };
}

async function main() {
  const packDirs = fs.readdirSync(packsRoot).sort();
  const report = {
    generatedAt: new Date().toISOString(),
    packs: [],
  };

  for (const dir of packDirs) {
    const packPath = path.join(packsRoot, dir, 'pack.json');
    if (!fs.existsSync(packPath)) continue;
    const pack = JSON.parse(fs.readFileSync(packPath, 'utf8'));
    const entries = [];
    for (const entry of pack.entries) {
      try {
        const status = await fetchStatus(entry.chainId, entry.address);
        entries.push({ ...entry, explorerStatus: status.status, explorerDetail: status.detail, explorerContractName: status.contractName || null });
      } catch (err) {
        entries.push({ ...entry, explorerStatus: 'error', explorerDetail: err.message, explorerContractName: null });
}
|
||||
}
|
||||
const counts = entries.reduce((acc, entry) => {
|
||||
acc[entry.explorerStatus] = (acc[entry.explorerStatus] || 0) + 1;
|
||||
return acc;
|
||||
}, {});
|
||||
report.packs.push({
|
||||
slug: dir,
|
||||
chainId: pack.chainId,
|
||||
chainName: pack.chainName,
|
||||
explorer: pack.explorer,
|
||||
counts,
|
||||
entries,
|
||||
});
|
||||
}
|
||||
|
||||
const rows = report.packs.map((pack) => {
|
||||
const verified = pack.counts.verified || 0;
|
||||
const unverified = pack.counts.unverified || 0;
|
||||
const unknown = pack.counts.unknown || 0;
|
||||
const unsupported = pack.counts.unsupported || 0;
|
||||
const error = pack.counts.error || 0;
|
||||
return `| ${pack.chainId} | ${pack.chainName} | ${verified} | ${unverified} | ${unknown} | ${unsupported} | ${error} | ${pack.explorer} |`;
|
||||
}).join('\n');
|
||||
const md = `# Publication Pack Explorer Status
|
||||
|
||||
**Generated:** ${report.generatedAt}
|
||||
|
||||
Live explorer verification status for the grouped publication packs.
|
||||
|
||||
| Chain ID | Chain | Verified | Unverified | Unknown | Unsupported | Errors | Explorer |
|
||||
| --- | --- | ---: | ---: | ---: | ---: | ---: | --- |
|
||||
${rows}
|
||||
`;
|
||||
|
||||
ensureDir(outJson);
|
||||
ensureDir(outMd);
|
||||
fs.writeFileSync(outJson, JSON.stringify(report, null, 2) + '\n');
|
||||
fs.writeFileSync(outMd, md + '\n');
|
||||
console.log(`Wrote:\n- ${outJson}\n- ${outMd}`);
|
||||
}
|
||||
|
||||
main().catch((err) => {
|
||||
console.error(err);
|
||||
process.exit(1);
|
||||
});
|
||||
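The status checker above assumes only a minimal shape for each `pack.json`: top-level `chainId`, `chainName`, and `explorer`, plus an `entries` array whose items carry `chainId` and `address`. A hypothetical pack that would satisfy it (address and names illustrative only, not from the repo):

```json
{
  "chainId": "1",
  "chainName": "Ethereum Mainnet",
  "explorer": "https://etherscan.io",
  "entries": [
    {
      "chainId": "1",
      "address": "0x0000000000000000000000000000000000000001",
      "name": "ExampleContract"
    }
  ]
}
```

Extra fields on an entry are preserved in the output, since the script spreads `...entry` before attaching the `explorerStatus` fields.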
@@ -53,6 +53,6 @@ echo ""
# Summary
echo "Summary: $CONNECTED_COUNT/$POSSIBLE_COUNT possible peers connected."
if [ "$MISSING_COUNT" -gt 0 ]; then
-  echo "Not connected: node may be down, or RPC has not yet connected (max-peers=32 on 2101)."
+  echo "Not connected: node may be down, topology may still be converging, or the remote side may not be accepting the peering request."
fi
echo ""
125
scripts/verify/check-thirdweb-account-abstraction-support.sh
Normal file
@@ -0,0 +1,125 @@
#!/usr/bin/env bash
# Validate the repo's machine-readable Thirdweb account abstraction / x402 support matrix.
#
# Usage:
#   bash scripts/verify/check-thirdweb-account-abstraction-support.sh
#   bash scripts/verify/check-thirdweb-account-abstraction-support.sh --json

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
MATRIX_PATH="${PROJECT_ROOT}/config/thirdweb-account-abstraction-support.json"
OUTPUT_JSON=0

for arg in "$@"; do
  case "$arg" in
    --json) OUTPUT_JSON=1 ;;
    *)
      echo "Unknown argument: $arg" >&2
      exit 2
      ;;
  esac
done

if ! command -v jq >/dev/null 2>&1; then
  echo "[FAIL] Missing required command: jq" >&2
  exit 1
fi

if [[ ! -f "$MATRIX_PATH" ]]; then
  echo "[FAIL] Missing matrix file: $MATRIX_PATH" >&2
  exit 1
fi

summary_json="$(jq -c '
  def chain($id): (.chains[] | select(.chainId == $id));
  {
    version,
    updated,
    dualModeAccountAbstraction: (.globalRequirements.dualModeAccountAbstraction == true),
    chain138: {
      currentDefault: (chain(138).accountAbstraction.currentDefault // null),
      erc4337Supported: (chain(138).accountAbstraction.erc4337.supported == true),
      eip7702Supported: (chain(138).accountAbstraction.eip7702.supported == true),
      x402Enabled: (chain(138).x402.enabled == true),
      x402SettlementMode: (chain(138).x402.settlementMode // null),
      compatibleTokenCount: ((chain(138).x402.compatibleTokens // []) | length)
    },
    mainnet: {
      currentDefault: (chain(1).accountAbstraction.currentDefault // null),
      erc4337Supported: (chain(1).accountAbstraction.erc4337.supported == true),
      eip7702Supported: (chain(1).accountAbstraction.eip7702.supported == true)
    }
  }
' "$MATRIX_PATH")"

failures=()

assert_true() {
  local jq_expr="$1"
  local message="$2"
  if [[ "$(jq -r "$jq_expr" "$MATRIX_PATH")" != "true" ]]; then
    failures+=("$message")
  fi
}

assert_equals() {
  local jq_expr="$1"
  local expected="$2"
  local message="$3"
  if [[ "$(jq -r "$jq_expr" "$MATRIX_PATH")" != "$expected" ]]; then
    failures+=("$message")
  fi
}

assert_true '.globalRequirements.dualModeAccountAbstraction == true' "dualModeAccountAbstraction must be true"
assert_true 'any(.chains[]; .chainId == 138)' "chainId 138 must be present"
assert_true 'any(.chains[]; .chainId == 1)' "chainId 1 must be present"
assert_true '(.chains[] | select(.chainId == 138) | .accountAbstraction.erc4337.supported) == true' "Chain 138 must support ERC-4337"
assert_true '(.chains[] | select(.chainId == 138) | .accountAbstraction.eip7702.supported) == true' "Chain 138 must record EIP-7702 support intent"
assert_equals '(.chains[] | select(.chainId == 138) | .accountAbstraction.currentDefault)' "erc4337" "Chain 138 currentDefault must remain erc4337 until 7702 is fully confirmed"
assert_equals '(.chains[] | select(.chainId == 138) | .x402.settlementMode)' "eip7702" "Chain 138 x402 settlement mode must be eip7702"
assert_true '((.chains[] | select(.chainId == 138) | .x402.compatibleTokens // []) | length) >= 1' "Chain 138 must list at least one x402-compatible token"
assert_true 'all((.chains[] | select(.chainId == 138) | .x402.compatibleTokens // [])[]; (.erc2612 == true) or (.erc3009 == true))' "Every Chain 138 x402 token must support ERC-2612 or ERC-3009"
assert_true '(.chains[] | select(.chainId == 1) | .accountAbstraction.erc4337.supported) == true' "Ethereum Mainnet must support ERC-4337"
assert_true '(.chains[] | select(.chainId == 1) | .accountAbstraction.eip7702.supported) == true' "Ethereum Mainnet must support EIP-7702"

if [[ "$OUTPUT_JSON" == "1" ]]; then
  jq -n \
    --argjson summary "$summary_json" \
    --argjson failures "$(printf '%s\n' "${failures[@]:-}" | jq -R . | jq -s 'map(select(length > 0))')" \
    '{
      summary: $summary,
      failures: $failures,
      ok: (($failures | length) == 0)
    }'
  exit 0
fi

echo "=== Thirdweb Account Abstraction Support ==="
jq -r '
  . as $root |
  "Matrix version: \(.version)",
  "Updated: \(.updated)",
  "Dual-mode AA: \(.globalRequirements.dualModeAccountAbstraction)",
  "",
  (.chains[] | "Chain \(.chainId) (\(.name))",
    " currentDefault: \(.accountAbstraction.currentDefault)",
    " erc4337.supported: \(.accountAbstraction.erc4337.supported)",
    " eip7702.supported: \(.accountAbstraction.eip7702.supported)",
    " x402.enabled: \(.x402.enabled // false)",
    " x402.settlementMode: \(.x402.settlementMode // "n/a")")
' "$MATRIX_PATH"

if [[ "${#failures[@]}" -gt 0 ]]; then
  echo ""
  echo "Failures:"
  for failure in "${failures[@]}"; do
    echo "- $failure"
  done
  exit 1
fi

echo ""
echo "[OK] Thirdweb account abstraction matrix is internally consistent."
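For orientation, a minimal matrix document that would pass every assertion above could look like the following. All values here are hypothetical placeholders; the real `config/thirdweb-account-abstraction-support.json` carries more fields:

```json
{
  "version": "1.0.0",
  "updated": "2025-01-01",
  "globalRequirements": { "dualModeAccountAbstraction": true },
  "chains": [
    {
      "chainId": 138,
      "name": "Chain 138",
      "accountAbstraction": {
        "currentDefault": "erc4337",
        "erc4337": { "supported": true },
        "eip7702": { "supported": true }
      },
      "x402": {
        "enabled": true,
        "settlementMode": "eip7702",
        "compatibleTokens": [
          { "symbol": "cUSDT", "erc2612": true, "erc3009": false }
        ]
      }
    },
    {
      "chainId": 1,
      "name": "Ethereum Mainnet",
      "accountAbstraction": {
        "currentDefault": "erc4337",
        "erc4337": { "supported": true },
        "eip7702": { "supported": true }
      }
    }
  ]
}
```

The `compatibleTokens` check enforces that every listed token supports gasless approvals via ERC-2612 (`permit`) or ERC-3009 (`transferWithAuthorization`), which is what makes x402 settlement workable.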
@@ -10,6 +10,9 @@ BASE_URL="${BASE_URL%/}"

CUSDT="0x93E66202A11B1772E55407B32B44e5Cd8eda7f22"
CUSDC="0xf22258f57794CC8E06237084b353Ab30fFfa640b"
WETH10="0xf4BB2e28688e89fCcE3c0580D37d36A7672E8A9f"
USDT="0x004b63A7B5b0E06f6bB6adb4a5F9f590BF3182D1"
USDC="0x71D6687F38b93CCad569Fa6352c876eea967201b"

try_path() {
  local prefix="$1"
@@ -24,6 +27,24 @@ try_path() {
  fi
}

try_post_path() {
  local prefix="$1"
  local path="$2"
  local body="$3"
  local url="${BASE_URL}${prefix}${path}"
  local code
  code=$(curl -sS -o /tmp/ta-check.json -w "%{http_code}" -m 25 \
    -H "content-type: application/json" \
    -X POST \
    --data "$body" \
    "$url" || echo "000")
  echo " $code POST ${prefix}${path}"
  if [[ "$code" == 200 || "$code" == 400 ]]; then
    head -c 260 /tmp/ta-check.json
    echo
  fi
}

echo "== Token-aggregation checks against ${BASE_URL} =="
for prefix in "" "/token-aggregation"; do
  echo ""
@@ -31,12 +52,51 @@ for prefix in "" "/token-aggregation"; do
  try_path "${prefix}" "/api/v1/tokens?chainId=138&limit=3&includeDodoPool=true"
  try_path "${prefix}" "/api/v1/tokens/${CUSDT}/pools?chainId=138"
  try_path "${prefix}" "/api/v1/quote?chainId=138&tokenIn=${CUSDT}&tokenOut=${CUSDC}&amountIn=1000000"
  if command -v jq >/dev/null 2>&1; then
    qurl="${BASE_URL}${prefix}/api/v1/quote?chainId=138&tokenIn=${CUSDT}&tokenOut=${CUSDC}&amountIn=1000000"
    qcode=$(curl -sS -o /tmp/ta-quote.json -w "%{http_code}" -m 25 "$qurl" 2>/dev/null || echo "000")
    if [[ "$qcode" == "200" ]]; then
      qe=$(jq -r '.quoteEngine // "absent"' /tmp/ta-quote.json 2>/dev/null || echo "?")
      echo " quoteEngine=${qe} (${BASE_URL}${prefix}/api/v1/quote cUSDT→cUSDC)"
      if [[ "$qe" == "constant-product" ]]; then
        echo " (hint: PMM on-chain not used — RPC unset, blocked, or pool not DODO; publication deploy sets RPC_URL_138)"
      fi
      if [[ "$qe" == "absent" ]]; then
        echo " (hint: redeploy token-aggregation from repo — older build omits quoteEngine; run deploy-token-aggregation-for-publication.sh on explorer host)"
      fi
    fi
  fi
  try_path "${prefix}" "/api/v1/bridge/routes"
  try_path "${prefix}" "/api/v1/bridge/status"
  try_path "${prefix}" "/api/v1/bridge/preflight"
  try_path "${prefix}" "/api/v1/report/gas-registry?chainId=10"
  try_path "${prefix}" "/api/v1/networks"
done

echo ""
echo "== planner-v2 checks =="
echo ""
echo "-- prefix: /token-aggregation (published v2 path) --"
try_path "/token-aggregation" "/api/v2/providers/capabilities?chainId=138"
if [[ -f /tmp/ta-check.json ]] && head -c 20 /tmp/ta-check.json | grep -qi '<!DOCTYPE'; then
  echo " [WARN] capabilities response looks like HTML — nginx may not proxy /token-aggregation/api/v2/ to token-aggregation (VMID 5000: scripts/fix-explorer-token-aggregation-api-v2-proxy.sh)"
fi
try_post_path "/token-aggregation" "/api/v2/routes/plan" "{\"sourceChainId\":138,\"tokenIn\":\"${WETH10}\",\"tokenOut\":\"${USDT}\",\"amountIn\":\"100000000000000000\"}"
try_post_path "/token-aggregation" "/api/v2/routes/internal-execution-plan" "{\"sourceChainId\":138,\"tokenIn\":\"${WETH10}\",\"tokenOut\":\"${USDT}\",\"amountIn\":\"100000000000000000\"}"

echo ""
echo "== DODO stable depth sanity =="
for prefix in "" "/token-aggregation"; do
  code=$(curl -sS -o /tmp/ta-depth.json -w "%{http_code}" -m 20 \
    "${BASE_URL}${prefix}/api/v1/routes/tree?chainId=138&tokenIn=${CUSDC}&tokenOut=${USDC}&amountIn=1000000" \
    2>/dev/null || echo 000)
  echo "${prefix:-/} -> HTTP $code"
  if [[ "$code" == "200" ]] && command -v jq >/dev/null 2>&1; then
    jq '{pool: .pools[0].poolAddress, tvlUsd: .pools[0].depth.tvlUsd, estimatedTradeCapacityUsd: .pools[0].depth.estimatedTradeCapacityUsd}' /tmp/ta-depth.json 2>/dev/null || head -c 220 /tmp/ta-depth.json
    echo
  fi
done

echo ""
echo ""
echo "== bridge summary =="
@@ -60,9 +120,27 @@ for prefix in "" "/token-aggregation"; do
  fi
done

echo ""
echo "== gas-registry summary =="
for prefix in "" "/token-aggregation"; do
  code=$(curl -sS -o /tmp/gas-registry.json -w "%{http_code}" -m 20 "${BASE_URL}${prefix}/api/v1/report/gas-registry?chainId=10" 2>/dev/null || echo 000)
  echo "${prefix:-/} -> HTTP $code"
  if [[ "$code" == "200" ]] && command -v jq >/dev/null 2>&1; then
    jq '{gasAssetFamilies: (.gasAssetFamilies | length), gasRedeemGroups: (.gasRedeemGroups | length), chains: (.chains | length), gasProtocolExposure: (.gasProtocolExposure | length)}' /tmp/gas-registry.json 2>/dev/null || head -c 200 /tmp/gas-registry.json
    echo
  fi
done

echo ""
echo "Notes:"
-echo " - Empty tokens/pools: set DATABASE_URL + migrations; RPC to 138; PMM integration now defaults on-chain if env unset."
+echo " - Empty tokens/pools: set DATABASE_URL, then run bash smom-dbis-138/services/token-aggregation/scripts/apply-lightweight-schema.sh for standalone deployments; RPC to 138; PMM integration now defaults on-chain if env unset."
echo " - bridge/routes 404: redeploy token-aggregation from repo (implements GET /api/v1/bridge/routes)."
echo " - bridge/preflight blocked pairs: run bash scripts/verify/check-gru-transport-preflight.sh [BASE_URL] for exact missing refs."
echo " - gas-registry 404: redeploy token-aggregation from repo (implements GET /api/v1/report/gas-registry)."
echo " - Health: curl -s http://127.0.0.1:3001/health on explorer VM (not always proxied as /health)."
echo " - planner-v2 publishes under /token-aggregation/api/v2/* so it does not collide with Blockscout /api/v2/* on explorer.d-bis.org."
echo " - Apex https://explorer.d-bis.org/api/v1/* returns 400 while /token-aggregation/api/v1/* works: add HTTP+HTTPS location /api/v1/ → token-aggregation (scripts/fix-explorer-http-api-v1-proxy.sh on explorer VM)."
echo " - POST /token-aggregation/api/v2/* returns 405: insert v2 proxy block (scripts/fix-explorer-token-aggregation-api-v2-proxy.sh on VMID 5000)."
echo " - Fresh binary + PMM env: bash scripts/deploy-token-aggregation-for-publication.sh then rsync dist/node_modules/.env to /opt/token-aggregation; systemctl restart token-aggregation (see TOKEN_AGGREGATION_REPORT_API_RUNBOOK.md)."
echo " - DODO v3 pilot routes should return provider=dodo_v3, routePlanPresent=true, and an internal-execution-plan object targeting EnhancedSwapRouterV2 when CHAIN138_ENABLE_DODO_V3_EXECUTION is live."
echo " - The funded canonical cUSDC/USDC DODO pool should report non-zero route-tree depth on Chain 138; if it falls back to near-zero TVL again, check the DODO valuation path and the canonical PMM integration address."
292
scripts/verify/check-wsl-distro-health.sh
Executable file
@@ -0,0 +1,292 @@
#!/usr/bin/env bash
set -euo pipefail

PROFILE="minimal-dev-only"
SHOW_ALL=0

usage() {
  cat <<'EOF'
Usage: bash scripts/verify/check-wsl-distro-health.sh [--profile <name>] [--all]

Profiles:
  minimal-dev-only  Fast CLI/dev distro without Docker or GUI/GPU checks
  docker            Includes binfmt and Docker-specific guidance
  gui-gpu           Includes WSLg / GPU checks

Options:
  --profile <name>  Select profile (default: minimal-dev-only)
  --all             Print every section regardless of profile
  -h, --help        Show this help

This script is read-only. It inspects the current distro and prints:
  - boot timing hotspots
  - failed systemd units
  - binfmt_misc status
  - recent ext4 journal recovery hints
  - WSLg / dxg / GPU status (gui-gpu profile)
EOF
}

while [[ $# -gt 0 ]]; do
  case "$1" in
    --profile)
      PROFILE="${2:-}"
      shift 2
      ;;
    --all)
      SHOW_ALL=1
      shift
      ;;
    -h|--help)
      usage
      exit 0
      ;;
    *)
      echo "Unknown argument: $1" >&2
      usage >&2
      exit 1
      ;;
  esac
done

case "$PROFILE" in
  minimal-dev-only|docker|gui-gpu) ;;
  *)
    echo "Unsupported profile: $PROFILE" >&2
    usage >&2
    exit 1
    ;;
esac

have_cmd() {
  command -v "$1" >/dev/null 2>&1
}

safe_timeout() {
  if have_cmd timeout; then
    timeout "$@"
  else
    shift
    "$@"
  fi
}

section() {
  printf '\n== %s ==\n' "$1"
}

pass() {
  printf 'PASS: %s\n' "$1"
}

warn() {
  printf 'WARN: %s\n' "$1"
}

info() {
  printf 'INFO: %s\n' "$1"
}

show_profile_section() {
  local section_profile="$1"
  [[ "$SHOW_ALL" -eq 1 || "$PROFILE" == "$section_profile" ]]
}

show_if_not_minimal() {
  [[ "$SHOW_ALL" -eq 1 || "$PROFILE" != "minimal-dev-only" ]]
}

is_wsl="no"
if grep -qiE '(microsoft|wsl)' /proc/version 2>/dev/null || [[ -n "${WSL_DISTRO_NAME:-}" ]]; then
  is_wsl="yes"
fi

section "WSL Summary"
info "profile=$PROFILE"
info "wsl_detected=$is_wsl"
info "kernel=$(uname -r)"
if [[ -n "${WSL_DISTRO_NAME:-}" ]]; then
  info "distro=$WSL_DISTRO_NAME"
fi
if have_cmd systemctl; then
  info "systemd=$(systemctl is-system-running 2>/dev/null || true)"
else
  warn "systemctl not found; systemd-specific checks will be incomplete"
fi

section "Boot Timing"
if have_cmd systemd-analyze; then
  safe_timeout 8 systemd-analyze 2>/dev/null || true
  echo
  info "Top boot units:"
  safe_timeout 8 systemd-analyze blame 2>/dev/null | head -15 || true
else
  warn "systemd-analyze not available"
fi

section "Failed Units"
if have_cmd systemctl; then
  failed_units="$(systemctl --failed --no-legend --plain 2>/dev/null || true)"
  if [[ -n "$failed_units" ]]; then
    warn "systemd reports failed units"
    printf '%s\n' "$failed_units"
  else
    pass "no failed units reported by systemd"
  fi
else
  warn "systemctl not available"
fi

section "WSL Init Timeout"
if have_cmd journalctl; then
  init_timeout_lines="$(safe_timeout 8 journalctl -b --no-pager 2>/dev/null | grep -E 'WaitForBootProcess|/sbin/init failed to start within' | tail -10 || true)"
  if [[ -n "$init_timeout_lines" ]]; then
    warn "found WSL boot watchdog timeout lines in current boot"
    printf '%s\n' "$init_timeout_lines"
    info "If boot still reaches login or multi-user.target, this is usually a watchdog race rather than fatal init failure."
  else
    pass "no WSL init timeout lines found in current boot journal"
  fi
else
  warn "journalctl not available"
fi

section "ext4 Recovery"
if have_cmd journalctl; then
  current_ext4="$(safe_timeout 8 journalctl -b 0 --no-pager 2>/dev/null | grep -iE 'EXT4-fs|orphan inodes|recovery complete' | tail -20 || true)"
  previous_ext4="$(safe_timeout 8 journalctl -b -1 --no-pager 2>/dev/null | grep -iE 'EXT4-fs|orphan inodes|recovery complete' | tail -20 || true)"
  if [[ -n "$current_ext4" ]]; then
    warn "current boot shows ext4 recovery-related lines"
    printf '%s\n' "$current_ext4"
  else
    pass "no ext4 recovery lines found in current boot journal"
  fi
  if [[ -n "$previous_ext4" ]]; then
    info "previous boot also had ext4-related lines:"
    printf '%s\n' "$previous_ext4"
  fi
  if [[ -z "$current_ext4" && -z "$previous_ext4" ]]; then
    info "No recent journal evidence of repeated ext4 recovery."
  fi
else
  warn "journalctl not available"
fi

section "binfmt_misc"
binfmt_mount="$(mount 2>/dev/null | grep 'binfmt_misc' || true)"
if [[ -n "$binfmt_mount" ]]; then
  info "mount: $binfmt_mount"
else
  info "binfmt_misc is not currently mounted"
fi

if have_cmd systemctl; then
  automount_state="$(systemctl is-active proc-sys-fs-binfmt_misc.automount 2>/dev/null || true)"
  automount_failed="$(systemctl is-failed proc-sys-fs-binfmt_misc.automount 2>/dev/null || true)"
  binfmt_state="$(systemctl is-active systemd-binfmt 2>/dev/null || true)"
  info "proc-sys-fs-binfmt_misc.automount=$automount_state failed=$automount_failed"
  info "systemd-binfmt=$binfmt_state"
  if [[ "$automount_failed" == "failed" && -n "$binfmt_mount" ]]; then
    warn "binfmt_misc is mounted but the automount unit failed; this is often harmless on WSL."
  elif [[ "$automount_failed" == "failed" ]]; then
    warn "binfmt automount failed and binfmt_misc is not mounted"
  else
    pass "no failed binfmt automount detected"
  fi
else
  warn "systemctl not available"
fi

if show_if_not_minimal; then
  section "Docker / Cross-Arch"
  if have_cmd docker; then
    if docker info >/dev/null 2>&1; then
      pass "docker daemon is reachable"
    else
      warn "docker CLI is present but daemon is not reachable"
    fi
  else
    info "docker not installed in this distro"
  fi

  if [[ -d /proc/sys/fs/binfmt_misc ]]; then
    qemu_entries="$(find /proc/sys/fs/binfmt_misc -maxdepth 1 -type f 2>/dev/null | grep -E 'qemu-|status|register' || true)"
    if [[ -n "$qemu_entries" ]]; then
      info "binfmt entries:"
      printf '%s\n' "$qemu_entries"
    fi
  fi
fi

if show_profile_section "gui-gpu"; then
  section "WSLg / GPU"
  if [[ -e /dev/dxg ]]; then
    pass "/dev/dxg is present"
  else
    warn "/dev/dxg is missing"
  fi

  if [[ -n "${WAYLAND_DISPLAY:-}" || -n "${DISPLAY:-}" ]]; then
    info "display_env DISPLAY=${DISPLAY:-unset} WAYLAND_DISPLAY=${WAYLAND_DISPLAY:-unset}"
  else
    info "DISPLAY and WAYLAND_DISPLAY are not set"
  fi

  if have_cmd journalctl; then
    dxg_lines="$(safe_timeout 8 journalctl -k -b --no-pager 2>/dev/null | grep -i 'dxg' | tail -20 || true)"
    if [[ -n "$dxg_lines" ]]; then
      info "recent dxg kernel lines:"
      printf '%s\n' "$dxg_lines"
      if printf '%s\n' "$dxg_lines" | grep -q 'Ioctl failed: -22'; then
        warn "dxg reported Ioctl failed: -22; this usually points to a host driver or WSLg feature mismatch."
      fi
    else
      info "no dxg lines found in current kernel journal"
    fi
  fi

  if have_cmd glxinfo; then
    info "glxinfo -B:"
    glxinfo -B 2>/dev/null | sed -n '1,12p' || true
  else
    info "glxinfo not installed"
  fi

  if have_cmd nvidia-smi; then
    info "nvidia-smi:"
    nvidia-smi 2>/dev/null | sed -n '1,12p' || true
  else
    info "nvidia-smi not installed"
  fi
fi

section "Checklist"
case "$PROFILE" in
  minimal-dev-only)
    cat <<'EOF'
1. Run `systemd-analyze blame | head -30` and disable only the services you do not need at boot.
2. If `proc-sys-fs-binfmt_misc.automount` failed but you do not use qemu-user, buildx, or foreign-arch execution, you can mask it:
   `sudo systemctl mask proc-sys-fs-binfmt_misc.automount`
3. If ext4 recovery lines appear on many boots, stop forcing WSL down during active disk writes.
4. If the distro still reaches login, treat `WaitForBootProcess` as a boot watchdog warning unless the journal shows real init failure afterward.
EOF
    ;;
  docker)
    cat <<'EOF'
1. Keep Docker enabled only if you need it to auto-start: `sudo systemctl enable docker`.
2. Leave `binfmt_misc` alone if Docker buildx and cross-arch images already work.
3. If cross-arch execution is broken, repair registrations instead of masking the automount:
   `sudo apt-get install --reinstall binfmt-support qemu-user-static`
   `sudo systemctl restart systemd-binfmt`
4. Use `systemd-analyze blame | head -30` before disabling anything Docker depends on.
EOF
    ;;
  gui-gpu)
    cat <<'EOF'
1. Update WSL from Windows: `wsl --update`, then fully reboot Windows.
2. Update the Windows GPU driver before changing Linux packages.
3. If `/dev/dxg` exists but apps still render in software, verify with `glxinfo -B` and troubleshoot the Windows-side GPU stack first.
4. Treat `dxg ... Ioctl failed: -22` as a host/driver/WSLg mismatch unless you have a reproducible app failure.
EOF
    ;;
esac
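The `safe_timeout` helper in the script above is a small portability choice worth noting: it bounds a command's runtime when coreutils `timeout` is available and otherwise drops the duration argument and runs the command directly, so the health checks never hard-fail on minimal distros. A standalone sketch of the same pattern:

```shell
#!/usr/bin/env bash
# Run a command under `timeout` when available; otherwise discard the
# duration argument and run the command unbounded.
have_cmd() { command -v "$1" >/dev/null 2>&1; }

safe_timeout() {
  if have_cmd timeout; then
    timeout "$@"
  else
    shift  # drop the duration; run the command directly
    "$@"
  fi
}

# Finishes well under the 5-second limit either way.
safe_timeout 5 echo "ok"
```

Because callers always pass the duration first, the wrapper stays a drop-in replacement for `timeout` itself.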
190
scripts/verify/dry-run-gru-v2-cw-mainnet-corridor.sh
Executable file
@@ -0,0 +1,190 @@
#!/usr/bin/env bash
# Dry-run (read-only) check: can GRU V2 USD tokens on Chain 138 use the live
# CWMultiTokenBridgeL1 → Mainnet CWMultiTokenBridgeL2 corridor today?
#
# Usage:
#   bash scripts/verify/dry-run-gru-v2-cw-mainnet-corridor.sh
#   bash scripts/verify/dry-run-gru-v2-cw-mainnet-corridor.sh --json
#   STRICT=1 bash scripts/verify/dry-run-gru-v2-cw-mainnet-corridor.sh  # exit 1 if V2 not bridgeable
#
# Requires: cast, jq. Optional: ETHEREUM_MAINNET_RPC for L2 probes (defaults to public RPC).

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"

OUTPUT_JSON=0
STRICT="${STRICT:-0}"
for arg in "$@"; do
  case "$arg" in
    --json) OUTPUT_JSON=1 ;;
    *)
      echo "Unknown argument: $arg" >&2
      exit 2
      ;;
  esac
done

need_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[FAIL] Missing required command: $1" >&2
    exit 1
  }
}

need_cmd bash
need_cmd cast
need_cmd jq

# shellcheck disable=SC1091
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh"

RPC138="${RPC_URL_138:-http://192.168.11.211:8545}"
MAINRPC="${ETHEREUM_MAINNET_RPC:-${ETH_MAINNET_RPC_URL:-https://eth.llamarpc.com}}"

L1_BRIDGE="${CW_MULTITOKEN_BRIDGE_L1_138:-0x152ed3e9912161b76bdfd368d0c84b7c31c10de7}"
L2_BRIDGE="${CW_MULTITOKEN_BRIDGE_L2_MAINNET:-0x2bF74583206A49Be07E0E8A94197C12987AbD7B5}"
MAINNET_SELECTOR=5009297550715157269

CUSDT_V1="${COMPLIANT_USDT:-0x93E66202A11B1772E55407B32B44e5Cd8eda7f22}"
CUSDC_V1="${COMPLIANT_USDC:-0xf22258f57794CC8E06237084b353Ab30fFfa640b}"
CUSDT_V2="${COMPLIANT_USDT_V2:-0x9FBfab33882Efe0038DAa608185718b772EE5660}"
CUSDC_V2="${COMPLIANT_USDC_V2:-0x219522c60e83dEe01FC5b0329d6fA8fD84b9D13d}"

DUMMY_RECIPIENT=0x0000000000000000000000000000000000000001
PROBE_AMOUNT=1000000

probe_chain() {
  local rpc="$1"
  timeout 8 cast chain-id --rpc-url "$rpc" 2>/dev/null | awk '{print $1; exit}' || echo "?"
}

cid_138="$(probe_chain "$RPC138")"
cid_1="$(probe_chain "$MAINRPC")"

read_dest() {
  local token="$1"
  cast call "$L1_BRIDGE" "destinations(address,uint64)(address,bool)" "$token" "$MAINNET_SELECTOR" --rpc-url "$RPC138" 2>/dev/null | tr '\n' ' ' || echo "revert"
}

fee_ok() {
  local token="$1"
  cast call "$L1_BRIDGE" \
    "calculateFee(address,uint64,address,uint256)(uint256)" \
    "$token" "$MAINNET_SELECTOR" "$DUMMY_RECIPIENT" "$PROBE_AMOUNT" \
    --rpc-url "$RPC138" 2>/dev/null | awk '{print $1; exit}'
}

l2_mirrored() {
  local token="$1"
  cast call "$L2_BRIDGE" "canonicalToMirrored(address)(address)" "$token" --rpc-url "$MAINRPC" 2>/dev/null | awk '{print $1; exit}' || echo "revert"
}

analyze_pair() {
  local label="$1" token="$2"
  local dest_out recv enabled fee_res mirror
  dest_out="$(read_dest "$token")"
  recv="$(echo "$dest_out" | awk '{print $1}')"
  enabled="$(echo "$dest_out" | awk '{print $2}')"
  fee_res="$(fee_ok "$token" || true)"
  mirror="$(l2_mirrored "$token")"

  local l1_ok=0
  [[ "$enabled" == "true" && "$recv" != "0x0000000000000000000000000000000000000000" ]] && l1_ok=1

  local l2_ok=0
  [[ "$mirror" != "0x0000000000000000000000000000000000000000" && "$mirror" != "revert" ]] && l2_ok=1

  local fee_ok_flag=0
  [[ -n "$fee_res" && "$fee_res" != "revert" ]] && fee_ok_flag=1

  local ready=0
  (( l1_ok && l2_ok && fee_ok_flag )) && ready=1

  printf '%s\n' "$label|$token|$recv|$enabled|$fee_res|$mirror|$l1_ok|$l2_ok|$fee_ok_flag|$ready"
}

row_v1_usdt="$(analyze_pair "cUSDT_V1" "$CUSDT_V1")"
row_v1_usdc="$(analyze_pair "cUSDC_V1" "$CUSDC_V1")"
row_v2_usdt="$(analyze_pair "cUSDT_V2" "$CUSDT_V2")"
row_v2_usdc="$(analyze_pair "cUSDC_V2" "$CUSDC_V2")"

v2_usdt_ready="$(echo "$row_v2_usdt" | awk -F'|' '{print $10}')"
v2_usdc_ready="$(echo "$row_v2_usdc" | awk -F'|' '{print $10}')"
v2_all_ready=0
(( v2_usdt_ready && v2_usdc_ready )) && v2_all_ready=1

GENERATED_AT="$(date -u +"%Y-%m-%dT%H:%M:%SZ")"

json_row() {
  local row="$1"
  local label token recv enabled fee mirror l1_ok l2_ok fee_f ready
  IFS='|' read -r label token recv enabled fee mirror l1_ok l2_ok fee_f ready <<<"$row"
  jq -n \
    --arg label "$label" \
    --arg token "$token" \
    --arg recv "$recv" \
    --argjson enabled "$([[ "$enabled" == "true" ]] && echo true || echo false)" \
    --arg feeSampleWei "$fee" \
    --arg mirrored "$mirror" \
    --argjson l1DestinationWired "$([[ "$l1_ok" == 1 ]] && echo true || echo false)" \
    --argjson l2TokenPairConfigured "$([[ "$l2_ok" == 1 ]] && echo true || echo false)" \
    --argjson l1FeeQuoteOk "$([[ "$fee_f" == 1 ]] && echo true || echo false)" \
    --argjson corridorReady "$([[ "$ready" == 1 ]] && echo true || echo false)" \
    '{
      label: $label,
      canonicalToken: $token,
      l1ReceiverBridge: $recv,
      l1DestinationEnabled: $enabled,
      l1FeeSampleWei: $feeSampleWei,
      mainnetMirroredToken: $mirrored,
      l1DestinationWired: $l1DestinationWired,
      l2TokenPairConfigured: $l2TokenPairConfigured,
      l1FeeQuoteOk: $l1FeeQuoteOk,
      corridorReady: $corridorReady
    }'
}

# Chain IDs are passed with --arg (not --argjson): probe_chain can return "?",
# which is not valid JSON, and the filter below converts numeric strings itself.
JSON_OUT="$(jq -n \
  --arg generatedAt "$GENERATED_AT" \
  --arg rpc138 "$RPC138" \
  --arg rpc1 "$MAINRPC" \
  --arg chainId138 "$cid_138" \
  --arg chainId1 "$cid_1" \
  --arg l1 "$L1_BRIDGE" \
  --arg l2 "$L2_BRIDGE" \
  --argjson mainnetSelector "$MAINNET_SELECTOR" \
  --argjson gruV2BridgeableToMainnet "$([[ "$v2_all_ready" == 1 ]] && echo true || echo false)" \
  --argjson assets "[$(json_row "$row_v1_usdt"),$(json_row "$row_v1_usdc"),$(json_row "$row_v2_usdt"),$(json_row "$row_v2_usdc")]" \
  '{
    generatedAt: $generatedAt,
    chainId138: ($chainId138 | if . == "?" then . else tonumber end),
    chainIdMainnet: ($chainId1 | if . == "?" then . else tonumber end),
    rpc138: $rpc138,
    rpcMainnet: $rpc1,
    l1Bridge138: $l1,
    l2BridgeMainnet: $l2,
    mainnetSelector: $mainnetSelector,
    gruV2BridgeableToMainnet: $gruV2BridgeableToMainnet,
    assets: $assets
  }')"

if (( OUTPUT_JSON == 1 )); then
printf '%s\n' "$JSON_OUT"
|
||||
else
|
||||
printf '%s\n' "$JSON_OUT" | jq .
|
||||
fi
|
||||
|
||||
if (( v2_all_ready != 1 )); then
|
||||
if (( OUTPUT_JSON == 0 )); then
|
||||
echo "" >&2
|
||||
echo "RESULT: GRU V2 is NOT ready for the live CW → Mainnet corridor." >&2
|
||||
echo " - L1 must enable destinations(<V2>, mainnetSelector) with the Mainnet L2 bridge as receiver." >&2
|
||||
echo " - Mainnet L2 must configureTokenPair(<V2 canonical>, <cW mirrored>) for each asset." >&2
|
||||
echo " See config/gru-transport-active.json cutover notes and docs/07-ccip/CW_DEPLOY_AND_WIRE_RUNBOOK.md" >&2
|
||||
fi
|
||||
[[ "$STRICT" == "1" ]] && exit 1
|
||||
fi
|
||||
|
||||
exit 0
|
||||
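For reference, the pipe-delimited rows that `analyze_pair` emits can be unpacked the same way `json_row` does it. A minimal self-contained sketch, using made-up field values rather than real addresses:

```shell
# Hypothetical sample row in the same 10-field shape analyze_pair prints:
# label|token|receiver|enabled|fee|mirror|l1_ok|l2_ok|fee_ok|ready
row='cUSDT_V2|0x00|0x01|true|12345|0x02|1|1|1|1'
IFS='|' read -r label token recv enabled fee mirror l1_ok l2_ok fee_f ready <<<"$row"
echo "$label ready=$ready"
```

The same `IFS='|' read -r` split is what `json_row` relies on, so any new field must be appended at the end of both the `printf` and the `read` list.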
313  scripts/verify/generate-contract-verification-publish-matrix.mjs  Normal file
@@ -0,0 +1,313 @@
#!/usr/bin/env node

import fs from 'fs';
import path from 'path';

const repoRoot = path.resolve(path.dirname(new URL(import.meta.url).pathname), '..', '..');
const smartContractsPath = path.join(repoRoot, 'config', 'smart-contracts-master.json');
const deploymentStatusPath = path.join(repoRoot, 'cross-chain-pmm-lps', 'config', 'deployment-status.json');
const outputJsonPath = path.join(repoRoot, 'reports', 'status', 'contract_verification_publish_matrix.json');
const outputMdPath = path.join(repoRoot, 'docs', '11-references', 'CONTRACT_VERIFICATION_AND_PUBLICATION_MATRIX_ALL_NETWORKS.md');

const CHAIN_META = {
  '1': {
    name: 'Ethereum Mainnet',
    explorer: 'https://etherscan.io',
    verifierKind: 'etherscan',
    publishSurface: 'Etherscan + repo inventory/token maps',
  },
  '10': {
    name: 'Optimism',
    explorer: 'https://optimistic.etherscan.io',
    verifierKind: 'etherscan-family',
    publishSurface: 'Optimism explorer + repo inventory/token maps',
  },
  '25': {
    name: 'Cronos',
    explorer: 'https://cronoscan.com',
    verifierKind: 'etherscan-family',
    publishSurface: 'Cronoscan + repo inventory/token maps',
  },
  '56': {
    name: 'BSC',
    explorer: 'https://bscscan.com',
    verifierKind: 'etherscan-family',
    publishSurface: 'BscScan + repo inventory/token maps',
  },
  '100': {
    name: 'Gnosis',
    explorer: 'https://gnosisscan.io',
    verifierKind: 'etherscan-family',
    publishSurface: 'Gnosis explorer + repo inventory/token maps',
  },
  '1111': {
    name: 'Wemix',
    explorer: 'https://explorer.wemix.com',
    verifierKind: 'manual',
    publishSurface: 'Wemix explorer + repo inventory/token maps',
  },
  '137': {
    name: 'Polygon',
    explorer: 'https://polygonscan.com',
    verifierKind: 'etherscan-family',
    publishSurface: 'PolygonScan + repo inventory/token maps',
  },
  '138': {
    name: 'Chain 138',
    explorer: 'https://blockscout.defi-oracle.io',
    explorerAlt: 'https://explorer.d-bis.org',
    verifierKind: 'blockscout',
    publishSurface: 'Blockscout + repo inventory + explorer token/config surfaces',
    verificationScript: 'bash scripts/verify/run-contract-verification-with-proxy.sh',
  },
  '42161': {
    name: 'Arbitrum',
    explorer: 'https://arbiscan.io',
    verifierKind: 'etherscan-family',
    publishSurface: 'Arbiscan + repo inventory/token maps',
  },
  '42220': {
    name: 'Celo',
    explorer: 'https://celoscan.io',
    verifierKind: 'etherscan-family',
    publishSurface: 'Celo explorer + repo inventory/token maps',
  },
  '43114': {
    name: 'Avalanche',
    explorer: 'https://snowtrace.io',
    verifierKind: 'etherscan-family',
    publishSurface: 'Snowtrace + repo inventory/token maps',
  },
  '8453': {
    name: 'Base',
    explorer: 'https://basescan.org',
    verifierKind: 'etherscan-family',
    publishSurface: 'Basescan + repo inventory/token maps',
  },
  '651940': {
    name: 'ALL Mainnet',
    explorer: 'https://alltra.global',
    verifierKind: 'manual',
    publishSurface: 'Alltra explorer/docs + repo inventory/token maps',
  },
};

function loadJson(filePath) {
  return JSON.parse(fs.readFileSync(filePath, 'utf8'));
}

function ensureDir(filePath) {
  fs.mkdirSync(path.dirname(filePath), { recursive: true });
}

function chainMeta(chainId, fallbackName = '') {
  const meta = CHAIN_META[String(chainId)] || {};
  return {
    chainId: String(chainId),
    chainName: meta.name || fallbackName || `Chain ${chainId}`,
    explorer: meta.explorer || '',
    explorerAlt: meta.explorerAlt || '',
    verifierKind: meta.verifierKind || 'manual',
    publishSurface: meta.publishSurface || 'Explorer + repo inventory',
    verificationScript: meta.verificationScript || '',
  };
}

function pushEntry(target, entry) {
  target.push({
    verificationStatus: 'pending',
    publicationStatus: 'pending',
    publishRequirement: 'Verify source on chain explorer and keep repo inventories/token maps aligned',
    ...entry,
  });
}

const smartContracts = loadJson(smartContractsPath);
const deploymentStatus = loadJson(deploymentStatusPath);

const entries = [];

for (const [chainId, chainData] of Object.entries(smartContracts.chains || {})) {
  const meta = chainMeta(chainId);
  for (const [label, address] of Object.entries(chainData.contracts || {})) {
    pushEntry(entries, {
      sourceRegistry: 'config/smart-contracts-master.json',
      contractType: 'canonical_contract',
      label,
      address,
      ...meta,
      automation: chainId === '138' ? 'repo-supported' : 'manual-or-external',
      publishNotes: chainId === '138'
        ? 'Canonical Chain 138 contract; verify in Blockscout and keep reference docs current'
        : 'Mainnet relay or non-138 canonical contract; verify on chain explorer and mirror in repo docs',
    });
  }
}

for (const [chainId, chainData] of Object.entries(deploymentStatus.chains || {})) {
  const meta = chainMeta(chainId, chainData.name);
  for (const [label, address] of Object.entries(chainData.cwTokens || {})) {
    pushEntry(entries, {
      sourceRegistry: 'cross-chain-pmm-lps/config/deployment-status.json',
      contractType: 'cw_token',
      label,
      address,
      ...meta,
      automation: 'inventory-only',
      publishNotes: 'Cross-chain wrapped/canonical token; verify on explorer and keep PMM/token-mapping inventories aligned',
    });
  }
  for (const [label, address] of Object.entries(chainData.gasMirrors || {})) {
    pushEntry(entries, {
      sourceRegistry: 'cross-chain-pmm-lps/config/deployment-status.json',
      contractType: 'gas_mirror',
      label,
      address,
      ...meta,
      automation: 'inventory-only',
      publishNotes: 'Gas-family mirror token; verify on explorer and keep gas rollout inventory aligned',
    });
  }
  for (const [label, address] of Object.entries(chainData.anchorAddresses || {})) {
    pushEntry(entries, {
      sourceRegistry: 'cross-chain-pmm-lps/config/deployment-status.json',
      contractType: 'anchor_token',
      label,
      address,
      ...meta,
      automation: 'reference-only',
      publishNotes: 'Anchor/reference asset; confirm explorer address and keep mapping docs current',
    });
  }
  for (const pool of chainData.pmmPools || []) {
    pushEntry(entries, {
      sourceRegistry: 'cross-chain-pmm-lps/config/deployment-status.json',
      contractType: 'pmm_pool',
      label: `${pool.base}/${pool.quote}`,
      address: pool.poolAddress,
      ...meta,
      automation: chainId === '138' ? 'partial' : 'inventory-only',
      publishNotes: `PMM pool (${pool.role || 'unclassified'}); verify source if custom deployment and keep routing inventory aligned`,
    });
  }
  for (const pool of chainData.pmmPoolsVolatile || []) {
    pushEntry(entries, {
      sourceRegistry: 'cross-chain-pmm-lps/config/deployment-status.json',
      contractType: 'pmm_pool_volatile',
      label: `${pool.base}/${pool.quote}`,
      address: pool.poolAddress,
      ...meta,
      automation: 'inventory-only',
      publishNotes: `Volatile PMM pool (${pool.role || 'unclassified'}); verify source where repo owns deployment and keep routing inventory aligned`,
    });
  }
  for (const venue of chainData.gasReferenceVenues || []) {
    if (!venue.venueAddress) continue;
    pushEntry(entries, {
      sourceRegistry: 'cross-chain-pmm-lps/config/deployment-status.json',
      contractType: 'reference_venue',
      label: `${venue.protocol}:${venue.base}/${venue.quote}`,
      address: venue.venueAddress,
      ...meta,
      automation: 'inventory-only',
      publishNotes: 'Reference venue entry; verify only if repo owns deployment, otherwise treat as external dependency',
    });
  }
}

entries.sort((a, b) => {
  if (a.chainId !== b.chainId) return Number(a.chainId) - Number(b.chainId);
  if (a.contractType !== b.contractType) return a.contractType.localeCompare(b.contractType);
  return a.label.localeCompare(b.label);
});

const summaryByChain = new Map();
for (const entry of entries) {
  const key = `${entry.chainId}:${entry.chainName}`;
  const current = summaryByChain.get(key) || { total: 0, canonical: 0, cw: 0, pools: 0 };
  current.total += 1;
  if (entry.contractType === 'canonical_contract') current.canonical += 1;
  if (entry.contractType === 'cw_token' || entry.contractType === 'gas_mirror') current.cw += 1;
  if (entry.contractType.includes('pmm_pool')) current.pools += 1;
  summaryByChain.set(key, current);
}

const generatedAt = new Date().toISOString();
const jsonPayload = {
  generatedAt,
  sources: [
    'config/smart-contracts-master.json',
    'cross-chain-pmm-lps/config/deployment-status.json',
  ],
  entryCount: entries.length,
  entries,
};

const summaryRows = [...summaryByChain.entries()]
  .map(([key, value]) => {
    const [chainId, chainName] = key.split(':');
    const meta = chainMeta(chainId, chainName);
    return `| ${chainId} | ${chainName} | ${value.total} | ${value.canonical} | ${value.cw} | ${value.pools} | ${meta.explorer || 'manual'} |`;
  })
  .join('\n');

const sampleRows = entries.slice(0, 80).map((entry) => {
  const explorer = entry.explorer || 'manual';
  return `| ${entry.chainId} | ${entry.chainName} | ${entry.contractType} | ${entry.label} | \`${entry.address}\` | ${entry.verifierKind} | ${entry.automation} | ${explorer} | ${entry.verificationStatus} | ${entry.publicationStatus} |`;
}).join('\n');

const md = `# Contract Verification And Publication Matrix (All Networks)

**Generated:** ${generatedAt}
**Authoritative sources:** \`config/smart-contracts-master.json\`, \`cross-chain-pmm-lps/config/deployment-status.json\`

This matrix is the canonical repo-level inventory for **what still needs explorer verification and publication coverage across every network currently tracked in the workspace**.

## Meaning

- **Verification** = source or deployment metadata is verified on the network explorer used for that chain.
- **Publication** = the deployment is also reflected in the repo’s public inventories, token mappings, PMM status, and explorer-facing docs/config where applicable.
- **Pending** means the repo knows the address, but does not yet have a machine-confirmed proof here that explorer verification/publication is complete.

## Chain Summary

| Chain ID | Chain | Total Entries | Canonical Contracts | cW / Gas Mirrors | PMM Pools | Explorer |
| --- | --- | ---: | ---: | ---: | ---: | --- |
${summaryRows}

## Required operator path

1. **Chain 138 canonical contracts**
   - Run: \`bash scripts/verify/run-contract-verification-with-proxy.sh\`
   - Recheck: \`bash scripts/verify/check-contracts-on-chain-138.sh\`
2. **Chain 138 DODO v3 pilot**
   - Run: \`bash scripts/verify/verify-dodo-v3-chain138-blockscout.sh\`
3. **Other EVM chains**
   - Verify on the chain explorer shown below.
   - If the repo owns the deployment, keep token/pool/mapping docs updated after explorer verification.
4. **Publication closure**
   - Update \`config/smart-contracts-master.json\`, \`cross-chain-pmm-lps/config/deployment-status.json\`, token lists, and any chain-specific runbooks after verification is confirmed.

## Inventory sample

The JSON report in \`reports/status/contract_verification_publish_matrix.json\` contains the full set. The first 80 rows are shown here for readability.

| Chain ID | Chain | Type | Label | Address | Verifier | Automation | Explorer | Verify | Publish |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
${sampleRows}

## Notes

- Entries from \`smart-contracts-master.json\` are treated as the canonical deploy inventory.
- Entries from \`deployment-status.json\` are treated as required publication inventory, even when explorer verification may be external or manual.
- This matrix does **not** claim every address is already verified; it marks the repo-wide backlog explicitly so the status can be closed chain by chain instead of being lost in prose.
`;

ensureDir(outputJsonPath);
ensureDir(outputMdPath);
fs.writeFileSync(outputJsonPath, JSON.stringify(jsonPayload, null, 2) + '\n');
fs.writeFileSync(outputMdPath, md + '\n');

console.log(`Wrote ${entries.length} entries to:`);
console.log(`- ${outputJsonPath}`);
console.log(`- ${outputMdPath}`);
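Once the matrix JSON exists, `jq` can slice it ad hoc. A minimal sketch, run against a tiny sample file that mimics the `entries[]` schema rather than the real report, counting pending verifications per chain:

```shell
# Build a throwaway sample with the same fields the generator emits.
sample="$(mktemp)"
cat > "$sample" <<'EOF'
{"entries":[
  {"chainId":"1","verificationStatus":"pending"},
  {"chainId":"1","verificationStatus":"verified"},
  {"chainId":"138","verificationStatus":"pending"}
]}
EOF
# Group pending entries by chainId and print "<chainId>: <count>" per chain.
pending="$(jq -r '[.entries[] | select(.verificationStatus == "pending")]
  | group_by(.chainId) | map("\(.[0].chainId): \(length)") | .[]' "$sample")"
echo "$pending"
rm -f "$sample"
```

Pointing the same filter at `reports/status/contract_verification_publish_matrix.json` gives a quick per-chain backlog count without regenerating the markdown.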
89  scripts/verify/generate-crosschain-publication-packs.mjs  Normal file
@@ -0,0 +1,89 @@
#!/usr/bin/env node

import fs from 'fs';
import path from 'path';

const repoRoot = path.resolve(path.dirname(new URL(import.meta.url).pathname), '..', '..');
const matrixPath = path.join(repoRoot, 'reports', 'status', 'contract_verification_publish_matrix.json');
const outDir = path.join(repoRoot, 'reports', 'status', 'publication-packs');
const chainIds = ['1', '10', '56', '137', '8453'];

const chainNames = {
  '1': 'ethereum-mainnet',
  '10': 'optimism',
  '56': 'bsc',
  '137': 'polygon',
  '8453': 'base',
};

function ensureDir(dir) {
  fs.mkdirSync(dir, { recursive: true });
}

const matrix = JSON.parse(fs.readFileSync(matrixPath, 'utf8'));

for (const chainId of chainIds) {
  const entries = matrix.entries.filter((entry) => entry.chainId === chainId);
  if (!entries.length) continue;

  const slug = chainNames[chainId] || `chain-${chainId}`;
  const chainDir = path.join(outDir, slug);
  ensureDir(chainDir);

  const summary = {
    chainId,
    chainName: entries[0].chainName,
    explorer: entries[0].explorer,
    generatedAt: matrix.generatedAt,
    total: entries.length,
    counts: entries.reduce((acc, entry) => {
      acc[entry.contractType] = (acc[entry.contractType] || 0) + 1;
      return acc;
    }, {}),
    entries,
  };

  const grouped = entries.reduce((acc, entry) => {
    (acc[entry.contractType] ||= []).push(entry);
    return acc;
  }, {});

  const sections = Object.entries(grouped)
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([type, items]) => {
      const rows = items
        .map((entry) => `| ${entry.label} | \`${entry.address}\` | ${entry.automation} | ${entry.publishNotes} |`)
        .join('\n');
      return `## ${type}\n\n| Label | Address | Automation | Notes |\n| --- | --- | --- | --- |\n${rows}`;
    })
    .join('\n\n');

  const md = `# ${summary.chainName} Publication Pack

**Chain ID:** ${chainId}
**Explorer:** ${summary.explorer}
**Generated:** ${summary.generatedAt}

This pack groups the repo-known contracts and published deployment inventory for ${summary.chainName}. Use it when doing per-chain explorer verification and publication closure.

## Operator sequence

1. Verify repo-owned deployments on the chain explorer.
2. Confirm the address list still matches on-chain deployment reality.
3. Update repo-facing token lists, PMM inventories, and mapping docs if anything changed.
4. Regenerate:
   - \`node scripts/verify/generate-contract-verification-publish-matrix.mjs\`
   - \`node scripts/verify/generate-crosschain-publication-packs.mjs\`

## Counts

${Object.entries(summary.counts).map(([type, count]) => `- \`${type}\`: ${count}`).join('\n')}

${sections}
`;

  fs.writeFileSync(path.join(chainDir, 'pack.json'), JSON.stringify(summary, null, 2) + '\n');
  fs.writeFileSync(path.join(chainDir, 'README.md'), md + '\n');
}

console.log(`Wrote grouped publication packs under ${outDir}`);
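The per-type `counts` object that the pack summary builds with `reduce` has a simple coreutils equivalent, sketched here on sample `contractType` labels rather than real pack data:

```shell
# Count occurrences of each label (sample input, not a real pack),
# mirroring the reduce() aggregation in the generator.
counts="$(printf '%s\n' cw_token cw_token pmm_pool | sort | uniq -c | awk '{print $2": "$1}')"
echo "$counts"
```

This is handy for spot-checking a generated `pack.json`: feed it `jq -r '.entries[].contractType' pack.json` and compare the totals against the `counts` field.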
230  scripts/verify/generate-public-surface-remediation-plan.sh  Normal file
@@ -0,0 +1,230 @@
#!/usr/bin/env bash
# Generate a markdown remediation plan for current public 502 and DNS gaps.
# Uses config/public-surface-remediation-map.json plus the latest public E2E run by default.

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
EVIDENCE_DIR="$PROJECT_ROOT/docs/04-configuration/verification-evidence"
MAP_FILE_DEFAULT="$PROJECT_ROOT/config/public-surface-remediation-map.json"

usage() {
  cat <<'EOF'
Usage: bash scripts/verify/generate-public-surface-remediation-plan.sh [--e2e-json PATH] [--map PATH] [--output PATH] [--print]

Defaults:
  --e2e-json   Latest docs/04-configuration/verification-evidence/e2e-verification-*/all_e2e_results.json
  --map        config/public-surface-remediation-map.json
  --output     docs/04-configuration/verification-evidence/public-surface-remediation-YYYYMMDD_HHMMSS.md
EOF
}

find_latest_e2e_json() {
  local latest_dir
  latest_dir="$(ls -1dt "$EVIDENCE_DIR"/e2e-verification-* 2>/dev/null | head -n1 || true)"
  if [[ -z "$latest_dir" ]]; then
    return 1
  fi
  printf '%s\n' "$latest_dir/all_e2e_results.json"
}

dns_resolution() {
  local domain="$1"
  local results=""
  if command -v getent >/dev/null 2>&1; then
    results="$(getent ahosts "$domain" 2>/dev/null | awk '{print $1}' | sort -u | paste -sd ',' -)"
  fi
  if [[ -z "$results" ]] && command -v dig >/dev/null 2>&1; then
    results="$(dig +short "$domain" 2>/dev/null | grep -E '^[0-9a-fA-F:.]+$' | sort -u | paste -sd ',' -)"
  fi
  printf '%s\n' "$results"
}

E2E_JSON=""
MAP_FILE="$MAP_FILE_DEFAULT"
OUTPUT=""
PRINT_REPORT=0

while [[ $# -gt 0 ]]; do
  case "$1" in
    --e2e-json)
      E2E_JSON="$2"
      shift 2
      ;;
    --map)
      MAP_FILE="$2"
      shift 2
      ;;
    --output)
      OUTPUT="$2"
      shift 2
      ;;
    --print)
      PRINT_REPORT=1
      shift
      ;;
    -h|--help)
      usage
      exit 0
      ;;
    *)
      echo "Unknown argument: $1" >&2
      usage >&2
      exit 2
      ;;
  esac
done

if [[ -z "$E2E_JSON" ]]; then
  E2E_JSON="$(find_latest_e2e_json || true)"
fi

if [[ -z "$E2E_JSON" || ! -f "$E2E_JSON" ]]; then
  echo "Could not locate an E2E JSON report. Pass --e2e-json explicitly." >&2
  exit 1
fi

if [[ ! -f "$MAP_FILE" ]]; then
  echo "Remediation map not found: $MAP_FILE" >&2
  exit 1
fi

TIMESTAMP="$(date +%Y%m%d_%H%M%S)"
if [[ -z "$OUTPUT" ]]; then
  OUTPUT="$EVIDENCE_DIR/public-surface-remediation-$TIMESTAMP.md"
fi

mkdir -p "$(dirname "$OUTPUT")"

TMP_REPORT="$(mktemp)"
SUMMARY_FILE="$(mktemp)"
DETAILS_FILE="$(mktemp)"
trap 'rm -f "$TMP_REPORT" "$SUMMARY_FILE" "$DETAILS_FILE"' EXIT

{
  echo "# Public surface remediation plan"
  echo ""
  echo "- Generated: $(date -Iseconds)"
  echo "- E2E source: \`$E2E_JSON\`"
  echo "- Remediation map: \`$MAP_FILE\`"
  echo ""
  echo "## Summary"
  echo ""
  echo "| Surface | Domains | Live observation | Classification | Upstream | Primary fix |"
  echo "|---------|---------|------------------|----------------|----------|-------------|"
} > "$TMP_REPORT"

surface_count="$(jq '.surfaces | length' "$MAP_FILE")"
active_count=0

for idx in $(seq 0 $((surface_count - 1))); do
  surface_json="$(jq ".surfaces[$idx]" "$MAP_FILE")"
  id="$(printf '%s\n' "$surface_json" | jq -r '.id')"
  classification="$(printf '%s\n' "$surface_json" | jq -r '.classification')"
  expected_service="$(printf '%s\n' "$surface_json" | jq -r '.expectedService')"
  status_policy="$(printf '%s\n' "$surface_json" | jq -r '.statusPolicy')"
  repo_solution="$(printf '%s\n' "$surface_json" | jq -r '.repoSolution')"
  upstream_desc="$(printf '%s\n' "$surface_json" | jq -r '.upstream | "\(.vmid) @ \(.host) -> \(.ip):\(.port)"')"
  domains_csv="$(printf '%s\n' "$surface_json" | jq -r '.domains | join(", ")')"

  observations=()
  active_issue=0
  effective_solution="$repo_solution"
  optional_surface=0
  all_domains_unpublished=1
  if [[ "$status_policy" == "keep_optional_until_real_service_deployed" ]]; then
    optional_surface=1
  fi

  while IFS= read -r domain; do
    [[ -z "$domain" ]] && continue
    dns_ips="$(dns_resolution "$domain")"
    e2e_http_code="$(jq -r --arg domain "$domain" 'map(select(.domain == $domain))[0].tests.https.http_code // empty' "$E2E_JSON")"
    e2e_https_status="$(jq -r --arg domain "$domain" 'map(select(.domain == $domain))[0].tests.https.status // empty' "$E2E_JSON")"

    if [[ -z "$dns_ips" ]]; then
      observations+=("$domain: no public DNS answer")
      if [[ $optional_surface -ne 1 ]]; then
        active_issue=1
      fi
      continue
    fi
    all_domains_unpublished=0

    if [[ -n "$e2e_http_code" ]]; then
      if [[ "$e2e_http_code" == "502" ]]; then
        observations+=("$domain: HTTPS 502, DNS $dns_ips")
        active_issue=1
      else
        observations+=("$domain: HTTPS ${e2e_http_code} (${e2e_https_status:-ok}), DNS $dns_ips")
      fi
    else
      observations+=("$domain: DNS $dns_ips, not in current E2E inventory")
    fi
  done < <(printf '%s\n' "$surface_json" | jq -r '.domains[]')

  observation_text="$(printf '%s\n' "${observations[@]}" | awk 'NR==1{printf "%s",$0; next}{printf "; %s",$0}')"
  if [[ $active_issue -eq 0 ]]; then
    if [[ "$classification" == "public_502_backend" ]]; then
      effective_solution="Currently healthy. Keep the upstream, proxy rows, and public E2E verifier aligned so any future regression is caught quickly."
    elif [[ "$classification" == "public_edge_ok" ]]; then
      effective_solution="Currently healthy. Keep the documented upstream, proxy rows, and verification checks aligned."
    elif [[ "$classification" == "placeholder_surface" && $optional_surface -eq 1 && $all_domains_unpublished -eq 1 ]]; then
      effective_solution="Currently unpublished as intended. Only publish these hostnames after the real service is deployed and verified."
    fi
  fi
  if [[ $active_issue -eq 1 ]]; then
    active_count=$((active_count + 1))
  fi

  printf '| `%s` | `%s` | %s | `%s` | `%s` | %s |\n' \
    "$id" \
    "$domains_csv" \
    "$observation_text" \
    "$classification" \
    "$upstream_desc" \
    "$effective_solution" >> "$SUMMARY_FILE"

  {
    echo ""
    echo "## $id"
    echo ""
    echo "- Expected service: $expected_service"
    echo "- Status policy: \`$status_policy\`"
    echo "- Upstream: \`$upstream_desc\`"
    echo "- Live observation: $observation_text"
    echo "- Repo solution: $effective_solution"
    echo "- Scripts:"
    printf '%s\n' "$surface_json" | jq -r '.scripts[]' | while IFS= read -r path; do
      echo "  - \`$path\`"
    done
    echo "- Commands:"
    printf '%s\n' "$surface_json" | jq -r '.commands[]' | while IFS= read -r cmd; do
      echo "  - \`$cmd\`"
    done
    echo "- Docs:"
    printf '%s\n' "$surface_json" | jq -r '.docs[]' | while IFS= read -r path; do
      echo "  - \`$path\`"
    done
  } >> "$DETAILS_FILE"
done

{
  cat "$SUMMARY_FILE"
  cat "$DETAILS_FILE"
  echo ""
  echo "## Totals"
  echo ""
  echo "- Active issue groups in this report: $active_count"
  echo "- Total mapped surfaces: $surface_count"
} >> "$TMP_REPORT"

mv "$TMP_REPORT" "$OUTPUT"
trap - EXIT

echo "Wrote remediation plan: $OUTPUT"

if [[ "$PRINT_REPORT" -eq 1 ]]; then
  cat "$OUTPUT"
fi
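The `find_latest_e2e_json` helper relies on `ls -1dt | head -n1` returning the most recently modified directory. A self-contained sketch of that selection, using throwaway temp directories so it does not depend on the real evidence tree:

```shell
# Create two candidate dirs and backdate one so mtimes differ deterministically.
tmp="$(mktemp -d)"
mkdir -p "$tmp/e2e-verification-old" "$tmp/e2e-verification-new"
touch -t 202001010000 "$tmp/e2e-verification-old"   # pretend this run is from 2020
latest="$(ls -1dt "$tmp"/e2e-verification-* | head -n1)"
echo "${latest##*/}"
rm -rf "$tmp"
```

Note the selection is by directory mtime, not by the timestamp embedded in the name; re-touching an old evidence dir would make it "latest" again, which is why passing `--e2e-json` explicitly is the safer path in automation.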
145  scripts/verify/generate-publication-actionable-backlog.mjs  Normal file
@@ -0,0 +1,145 @@
#!/usr/bin/env node

import fs from 'fs';
import path from 'path';

const repoRoot = path.resolve(path.dirname(new URL(import.meta.url).pathname), '..', '..');
const matrixPath = path.join(repoRoot, 'reports', 'status', 'contract_verification_publish_matrix.json');
const packStatusPath = path.join(repoRoot, 'reports', 'status', 'publication-pack-explorer-status.json');
const outJson = path.join(repoRoot, 'reports', 'status', 'publication-actionable-backlog.json');
const outMd = path.join(repoRoot, 'docs', '11-references', 'PUBLICATION_ACTIONABLE_BACKLOG.md');

const targetChainIds = new Set(['1', '10', '56', '137', '8453']);
const nonActionableAutomation = new Set(['inventory-only', 'reference-only']);
const autoSubmittableAutomation = new Set(['repo-supported', 'partial']);

function ensureDir(filePath) {
  fs.mkdirSync(path.dirname(filePath), { recursive: true });
}

function loadJson(filePath) {
  return JSON.parse(fs.readFileSync(filePath, 'utf8'));
}

function byAddress(statusReport) {
  const index = new Map();
  for (const pack of statusReport.packs || []) {
    for (const entry of pack.entries || []) {
      index.set(`${entry.chainId}:${entry.address.toLowerCase()}`, entry);
    }
  }
  return index;
}

function toRow(entry) {
  return `| ${entry.chainId} | ${entry.chainName} | ${entry.label} | \`${entry.address}\` | ${entry.contractType} | ${entry.automation} | ${entry.explorerStatus} | ${entry.nextAction} |`;
}

const matrix = loadJson(matrixPath);
const packStatus = loadJson(packStatusPath);
const statusIndex = byAddress(packStatus);

const backlogEntries = matrix.entries
  .filter((entry) => targetChainIds.has(entry.chainId))
  .map((entry) => {
    const status = statusIndex.get(`${entry.chainId}:${entry.address.toLowerCase()}`) || {};
    const explorerStatus = status.explorerStatus || 'not-in-pack-status';
    let nextAction = 'No automatic action';
    if (nonActionableAutomation.has(entry.automation)) {
      nextAction = 'Keep inventory/docs aligned; do not treat as repo-owned submission target';
    } else if (autoSubmittableAutomation.has(entry.automation)) {
      nextAction = 'Submit automatically with repo-owned verification flow';
    } else if (entry.automation === 'manual-or-external') {
      nextAction = 'Manual or external closure only; missing repo-supported source bundle or ownership proof';
    }
    return {
      ...entry,
      explorerStatus,
      explorerDetail: status.explorerDetail || null,
      nextAction,
    };
  });

const autoSubmittable = backlogEntries.filter((entry) => autoSubmittableAutomation.has(entry.automation));
const manualOrExternal = backlogEntries.filter((entry) => entry.automation === 'manual-or-external');
const inventoryOnly = backlogEntries.filter((entry) => nonActionableAutomation.has(entry.automation));

const byChain = new Map();
for (const entry of backlogEntries) {
  const current = byChain.get(entry.chainId) || {
    chainId: entry.chainId,
    chainName: entry.chainName,
    total: 0,
    autoSubmittable: 0,
    manualOrExternal: 0,
    inventoryOnly: 0,
  };
  current.total += 1;
  if (autoSubmittableAutomation.has(entry.automation)) current.autoSubmittable += 1;
  if (entry.automation === 'manual-or-external') current.manualOrExternal += 1;
  if (nonActionableAutomation.has(entry.automation)) current.inventoryOnly += 1;
  byChain.set(entry.chainId, current);
}

const summaryRows = [...byChain.values()]
  .sort((a, b) => Number(a.chainId) - Number(b.chainId))
  .map((chain) => `| ${chain.chainId} | ${chain.chainName} | ${chain.total} | ${chain.autoSubmittable} | ${chain.manualOrExternal} | ${chain.inventoryOnly} |`)
  .join('\n');

const autoRows = autoSubmittable.length
  ? autoSubmittable.map(toRow).join('\n')
  : '| - | - | - | - | - | - | - | Automatic submission backlog is currently empty for the requested packs |';

const manualRows = manualOrExternal.length
  ? manualOrExternal.map(toRow).join('\n')
  : '| - | - | - | - | - | - | - | No manual-or-external backlog |';

const payload = {
  generatedAt: new Date().toISOString(),
  targetChains: [...targetChainIds],
  summary: [...byChain.values()].sort((a, b) => Number(a.chainId) - Number(b.chainId)),
  autoSubmittable,
  manualOrExternal,
  inventoryOnlyCount: inventoryOnly.length,
};

const md = `# Publication Actionable Backlog

**Generated:** ${payload.generatedAt}

This artifact separates the five requested publication packs into:

- addresses we can honestly treat as **auto-submittable** from the repo
- addresses that are **manual or external**
- addresses that are only **inventory/reference tracking**

## Chain Summary

| Chain ID | Chain | Total Entries | Auto-submittable | Manual / External | Inventory / Reference |
| --- | --- | ---: | ---: | ---: | ---: |
${summaryRows}
|
||||
|
||||
## Auto-submittable backlog
|
||||
|
||||
| Chain ID | Chain | Label | Address | Type | Automation | Explorer Status | Next Action |
|
||||
| --- | --- | --- | --- | --- | --- | --- | --- |
|
||||
${autoRows}
|
||||
|
||||
## Manual or external backlog
|
||||
|
||||
| Chain ID | Chain | Label | Address | Type | Automation | Explorer Status | Next Action |
|
||||
| --- | --- | --- | --- | --- | --- | --- | --- |
|
||||
${manualRows}
|
||||
|
||||
## Closure note
|
||||
|
||||
For the five requested packs, the repo-owned **automatic** submission pass is complete when the auto-submittable backlog is empty.
|
||||
Any remaining rows in the manual/external table require source provenance, ownership confirmation, or a non-repo verification process before they can be closed honestly.
|
||||
`;
|
||||
|
||||
ensureDir(outJson);
|
||||
ensureDir(outMd);
|
||||
fs.writeFileSync(outJson, JSON.stringify(payload, null, 2) + '\n');
|
||||
fs.writeFileSync(outMd, md + '\n');
|
||||
|
||||
console.log(`Wrote:\n- ${outJson}\n- ${outMd}`);
|
||||
231
scripts/verify/plan-lxc-rebalance-from-health-report.sh
Executable file
@@ -0,0 +1,231 @@
#!/usr/bin/env bash
# Recommend LXC migration candidates from a poll-lxc-cluster-health JSON report.
# Read-only: prints a ranked plan and example pct migrate commands; does not mutate.
#
# Usage:
#   bash scripts/verify/plan-lxc-rebalance-from-health-report.sh
#   bash scripts/verify/plan-lxc-rebalance-from-health-report.sh --report reports/status/lxc_cluster_health_*.json
#   bash scripts/verify/plan-lxc-rebalance-from-health-report.sh --source r630-01 --target r630-04 --limit 12
#
# Exit:
#   0 = plan generated
#   1 = no eligible candidates
#   2 = bad arguments / report missing

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"

REPORT=""
SOURCE_NODE="${SOURCE_NODE:-r630-01}"
TARGET_NODE="${TARGET_NODE:-r630-04}"
TARGET_STORAGE="${TARGET_STORAGE:-local-lvm}"
LIMIT="${LIMIT:-10}"
JSON_OUT=0

while [[ $# -gt 0 ]]; do
  case "$1" in
    --report)
      REPORT="${2:-}"
      shift 2
      ;;
    --source)
      SOURCE_NODE="${2:-}"
      shift 2
      ;;
    --target)
      TARGET_NODE="${2:-}"
      shift 2
      ;;
    --storage)
      TARGET_STORAGE="${2:-}"
      shift 2
      ;;
    --limit)
      LIMIT="${2:-}"
      shift 2
      ;;
    --json)
      JSON_OUT=1
      shift
      ;;
    -h|--help)
      sed -n '1,28p' "$0"
      exit 0
      ;;
    *)
      echo "ERROR: unknown argument: $1" >&2
      exit 2
      ;;
  esac
done

if [[ -z "${REPORT}" ]]; then
  REPORT="$(ls -1t "${PROJECT_ROOT}"/reports/status/lxc_cluster_health_*.json 2>/dev/null | head -1 || true)"
fi

if [[ -z "${REPORT}" || ! -f "${REPORT}" ]]; then
  echo "ERROR: health report not found; pass --report <path>" >&2
  exit 2
fi

python3 - "${REPORT}" "${SOURCE_NODE}" "${TARGET_NODE}" "${TARGET_STORAGE}" "${LIMIT}" "${JSON_OUT}" <<'PY'
import json
import re
import sys
from pathlib import Path

report_path, source_node, target_node, target_storage, limit_s, json_out_s = sys.argv[1:7]
limit = int(limit_s)
json_out = json_out_s == "1"

with open(report_path, "r", encoding="utf-8") as fh:
    report = json.load(fh)

lxcs = report.get("lxcs")
if not isinstance(lxcs, list):
    raise SystemExit("ERROR: report is missing full 'lxcs' inventory; re-run poll-lxc-cluster-health.sh")

node_metrics = report.get("node_metrics", {})
source_metrics = node_metrics.get(source_node, {})
target_metrics = node_metrics.get(target_node, {})

deny_patterns = [
    (re.compile(r"^besu-(validator|sentry|rpc|node)"), "chain-critical Besu"),
    (re.compile(r"^(validator|sentry|rpc|ccip|wormhole|relay)"), "chain / relay critical"),
    (re.compile(r"^(npmplus|cloudflared|gitea|mail|proxmox-mail-gateway|keycloak)"), "edge / infra critical"),
    (re.compile(r"^(dbis-api|dbis-frontend|dbis-core|thirdweb-rpc)"), "core app / RPC"),
]
prefer_patterns = [
    (re.compile(r"^(order-|gov-|portal|studio|vault-|mifos|tsunamiswap|ai-|dev-vm|blockscout)"), "application/support workload"),
    (re.compile(r"^(prometheus|grafana|opensearch|haproxy)"), "support workload"),
]

def match_reason(name, patterns):
    for pat, reason in patterns:
        if pat.search(name):
            return reason
    return None

def gib(n):
    return round(float(n) / (1024 ** 3), 2)

target_vz = (target_metrics.get("df_vz") or {}).get("use_pct")
target_io = (((target_metrics.get("psi") or {}).get("io") or {}).get("full") or {}).get("avg10")

candidates = []
skipped = []
for row in lxcs:
    if row.get("node") != source_node:
        continue
    name = str(row.get("name") or "")
    vmid = row.get("vmid")
    deny = match_reason(name, deny_patterns)
    if deny:
        skipped.append({"vmid": vmid, "name": name, "reason": deny})
        continue

    mem_pct = float(row.get("mem_pct") or 0.0)
    cpu_pct = float(row.get("cpu_pct") or 0.0)
    disk_total_gib = gib((row.get("diskread_bytes") or 0) + (row.get("diskwrite_bytes") or 0))
    disk_cap_gib = gib(row.get("maxdisk_bytes") or 0)
    net_total_gib = gib((row.get("netin_bytes") or 0) + (row.get("netout_bytes") or 0))
    disk_pct = float(row.get("disk_pct") or 0.0)

    # Lower score is better to move.
    score = 0.0
    score += mem_pct * 0.35
    score += cpu_pct * 0.30
    score += min(disk_total_gib, 500.0) * 0.03
    score += min(net_total_gib, 500.0) * 0.01
    score += disk_pct * 0.05
    if disk_cap_gib >= 100:
        score += 8.0
    if disk_total_gib >= 1000:
        score += 10.0
    if mem_pct >= 85:
        score += 10.0
    if cpu_pct >= 40:
        score += 8.0

    pref = match_reason(name, prefer_patterns)
    if pref:
        score -= 18.0
    if (row.get("maxmem_bytes") or 0) <= 8 * 1024**3:
        score -= 4.0
    if disk_cap_gib <= 40:
        score -= 5.0
    elif disk_cap_gib <= 80:
        score -= 2.0

    candidates.append({
        "vmid": vmid,
        "name": name,
        "source_node": source_node,
        "target_node": target_node,
        "target_storage": target_storage,
        "score": round(score, 2),
        "preferred_reason": pref,
        "mem_pct": round(mem_pct, 2),
        "cpu_pct": round(cpu_pct, 2),
        "disk_pct": round(disk_pct, 2),
        "maxmem_gib": gib(row.get("maxmem_bytes") or 0),
        "maxdisk_gib": disk_cap_gib,
        "disk_total_gib": disk_total_gib,
        "network_total_gib": net_total_gib,
        "command": f"ssh root@{source_node}.sankofa.nexus \"pct migrate {vmid} {target_node} --storage {target_storage} --restart\"",
    })

candidates.sort(key=lambda x: (x["score"], x["maxdisk_gib"], x["vmid"]))
chosen = candidates[:limit]

summary = {
    "report": str(Path(report_path)),
    "overall_status": report.get("overall_status"),
    "source_node": source_node,
    "target_node": target_node,
    "target_storage": target_storage,
    "source_running_lxcs": next((x.get("running_lxcs") for x in report.get("nodes", []) if x.get("node") == source_node), None),
    "target_running_lxcs": next((x.get("running_lxcs") for x in report.get("nodes", []) if x.get("node") == target_node), None),
    "source_vz_use_pct": (source_metrics.get("df_vz") or {}).get("use_pct"),
    "target_vz_use_pct": target_vz,
    "source_io_full_avg10": (((source_metrics.get("psi") or {}).get("io") or {}).get("full") or {}).get("avg10"),
    "target_io_full_avg10": target_io,
    "eligible_candidates": len(candidates),
    "recommended_candidates": chosen,
    "skipped_examples": skipped[:15],
}

if json_out:
    print(json.dumps(summary, indent=2))
else:
    print(f"Rebalance plan from {source_node} to {target_node} using {target_storage}")
    print(f"Report: {report_path}")
    print(f"Cluster status: {report.get('overall_status')} | source running LXCs={summary['source_running_lxcs']} | target running LXCs={summary['target_running_lxcs']}")
    print(f"Source vz={summary['source_vz_use_pct']}% io_full_avg10={summary['source_io_full_avg10']} | target vz={summary['target_vz_use_pct']}% io_full_avg10={summary['target_io_full_avg10']}")
    print("")
    if chosen:
        print("Recommended candidates (lower score is easier/safer to move first):")
        for item in chosen:
            reason = item["preferred_reason"] or "generic non-critical workload"
            print(
                f"- {item['vmid']} {item['name']}: score={item['score']}, "
                f"mem={item['mem_pct']}%, cpu={item['cpu_pct']}%, "
                f"disk_cap={item['maxdisk_gib']} GiB, disk_rw={item['disk_total_gib']} GiB, "
                f"net={item['network_total_gib']} GiB, reason={reason}"
            )
            print(f"  {item['command']}")
    else:
        print("No eligible candidates found with the current filters.")
    print("")
    print("Skipped examples:")
    if skipped:
        for item in skipped[:10]:
            print(f"- {item['vmid']} {item['name']}: {item['reason']}")
    else:
        print("- none")

if not candidates:
    sys.exit(1)
PY
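The ranking heuristic in the script above can be sanity-checked in isolation. This is a minimal sketch of just the linear base-score terms (the weights are taken from the script; the candidate values plugged in are hypothetical):

```python
def base_score(mem_pct, cpu_pct, disk_total_gib, net_total_gib, disk_pct):
    # Same linear terms as plan-lxc-rebalance-from-health-report.sh
    # (lower score = cheaper/safer to migrate first); the step bonuses
    # and prefer/deny adjustments from the script are omitted here.
    score = 0.0
    score += mem_pct * 0.35
    score += cpu_pct * 0.30
    score += min(disk_total_gib, 500.0) * 0.03
    score += min(net_total_gib, 500.0) * 0.01
    score += disk_pct * 0.05
    return score

# Hypothetical small support workload: low memory/CPU, little cumulative I/O.
print(round(base_score(20.0, 5.0, 10.0, 2.0, 30.0), 2))
```

Because the memory and CPU terms dominate (0.35 and 0.30 weights), idle containers sort to the top of the migration plan regardless of their disk footprint.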
117
scripts/verify/plan-mainnet-usdt-usdc-via-cw-paths.sh
Executable file
@@ -0,0 +1,117 @@
#!/usr/bin/env bash
# Legitimate USDT <-> USDC routing on Ethereum mainnet through public cW* PMM rails.
#
# Direct cWUSDT/cWUSDC PMM is recorded in deployment-status.json (vault 0xe944…68DB; one pool, both directions).
# Top up with deploy-mainnet-cwusdt-cwusdc-pool.sh / add-mainnet-public-dodo-cw-liquidity.sh --pair=cwusdt-cwusdc.
# Official stables connect to both wraps via four pools (3 bps LP fee each on the public rails).
# Any USDT<->USDC conversion is at least two swaps (two integration txs unless batched elsewhere).
#
# Usage:
#   source scripts/lib/load-project-env.sh
#   bash scripts/verify/plan-mainnet-usdt-usdc-via-cw-paths.sh
#   bash scripts/verify/plan-mainnet-usdt-usdc-via-cw-paths.sh --with-examples=1000000
#
# Operator policy (recommended):
#   - One treasury EOA or named contract; no sybil wallet farms.
#   - Document purpose (inventory rebalance, peg support, testing); retain logs and tx hashes.
#   - Per-leg --dry-run first; set --min-out from quoted outputs; cap daily notional internally.
#   - "0.16" style budgets: define explicitly (e.g. 16 bps max implied loss vs mid, or $0.16 absolute).
#
# See also:
#   scripts/deployment/plan-mainnet-cw-stabilization.sh
#   scripts/deployment/run-mainnet-public-dodo-cw-swap.sh

set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
# shellcheck disable=SC1091
source "$ROOT/scripts/lib/load-project-env.sh" 2>/dev/null || true

WITH_EXAMPLES=0
EXAMPLE_RAW="1000000"

for arg in "$@"; do
  case "$arg" in
    --with-examples)
      WITH_EXAMPLES=1
      ;;
    --with-examples=*)
      WITH_EXAMPLES=1
      EXAMPLE_RAW="${arg#*=}"
      ;;
    -h | --help)
      sed -n '1,35p' "$0"
      exit 0
      ;;
    *)
      echo "[fail] unknown arg: $arg (use --help)" >&2
      exit 2
      ;;
  esac
done

echo "=== Mainnet USDT <-> USDC via cW* PMM (planning) ==="
echo
echo "Canonical pools (cross-chain-pmm-lps/config/deployment-status.json, chains[\"1\"].pmmPools):"
echo "  cWUSDT/USDT   0x79156F6B7bf71a1B72D78189B540A89A6C13F6FC"
echo "  cWUSDC/USDC   0x69776fc607e9edA8042e320e7e43f54d06c68f0E"
echo "  cWUSDT/USDC   0x27f3aE7EE71Be3d77bAf17d4435cF8B895DD25D2"
echo "  cWUSDC/USDT   0xCC0fd27A40775c9AfcD2BBd3f7c902b0192c247A"
echo "  cWUSDT/cWUSDC 0xe944b7Cb012A0820c07f54D51e92f0e1C74168DB (direct wrap↔wrap; one vault, both directions)"
echo
echo "Tokens:"
echo "  USDT   0xdAC17F958D2ee523a2206206994597C13D831ec7"
echo "  USDC   0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
echo "  cWUSDT ${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
echo "  cWUSDC ${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
echo
echo "Economics:"
echo "  - Two hops at 3 bps each => roughly 6 bps fee drag before slippage and gas (not compound-precise)."
echo "  - Compare both paths off-chain; pick the better quoted out-amount for your size."
echo
echo "--- Path A (USDT -> cWUSDC -> USDC) ---"
echo "  Leg 1: pair=cwusdc-usdt direction=quote-to-base (USDT in -> cWUSDC out)"
echo "  Leg 2: pair=cwusdc-usdc direction=base-to-quote (cWUSDC in -> USDC out)"
echo
echo "--- Path B (USDT -> cWUSDT -> USDC) ---"
echo "  Leg 1: pair=cwusdt-usdt direction=quote-to-base (USDT in -> cWUSDT out)"
echo "  Leg 2: pair=cwusdt-usdc direction=base-to-quote (cWUSDT in -> USDC out)"
echo
echo "--- Path A' (USDC -> cWUSDC -> USDT) ---"
echo "  Leg 1: pair=cwusdc-usdc direction=quote-to-base (USDC in -> cWUSDC out)"
echo "  Leg 2: pair=cwusdc-usdt direction=base-to-quote (cWUSDC in -> USDT out)"
echo
echo "--- Path B' (USDC -> cWUSDT -> USDT) ---"
echo "  Leg 1: pair=cwusdt-usdc direction=quote-to-base (USDC in -> cWUSDT out)"
echo "  Leg 2: pair=cwusdt-usdt direction=base-to-quote (cWUSDT in -> USDT out)"
echo
echo "--- Path C (USDT -> USDC via direct cW leg, 3 hops) ---"
echo "  Leg 1: pair=cwusdt-usdt direction=quote-to-base (USDT -> cWUSDT)"
echo "  Leg 2: pair=cwusdt-cwusdc direction=base-to-quote (cWUSDT -> cWUSDC)"
echo "  Leg 3: pair=cwusdc-usdc direction=base-to-quote (cWUSDC -> USDC)"
echo
echo "--- Path C' (USDC -> USDT via direct cW leg, 3 hops) ---"
echo "  Leg 1: pair=cwusdc-usdc direction=quote-to-base (USDC -> cWUSDC)"
echo "  Leg 2: pair=cwusdt-cwusdc direction=quote-to-base (cWUSDC -> cWUSDT)"
echo "  Leg 3: pair=cwusdt-usdt direction=base-to-quote (cWUSDT -> USDT)"
echo

if [[ "$WITH_EXAMPLES" -eq 1 ]]; then
  echo "=== Example dry-run commands (amount raw 6dp = ${EXAMPLE_RAW}) ==="
  echo "USDT -> USDC path A:"
  echo "  bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdc-usdt --direction=quote-to-base --amount=${EXAMPLE_RAW} --dry-run"
  echo "  # then use printed out-amount as --amount for leg 2:"
  echo "  bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdc-usdc --direction=base-to-quote --amount=<leg1_out_raw> --dry-run"
  echo
  echo "USDT -> USDC path B:"
  echo "  bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdt-usdt --direction=quote-to-base --amount=${EXAMPLE_RAW} --dry-run"
  echo "  bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdt-usdc --direction=base-to-quote --amount=<leg1_out_raw> --dry-run"
  echo
  echo "USDT -> USDC path C (three legs; chain outputs leg2/leg3 amounts):"
  echo "  bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdt-usdt --direction=quote-to-base --amount=${EXAMPLE_RAW} --dry-run"
  echo "  bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdt-cwusdc --direction=base-to-quote --amount=<leg1_out_raw> --dry-run"
  echo "  bash scripts/deployment/run-mainnet-public-dodo-cw-swap.sh --pair=cwusdc-usdc --direction=base-to-quote --amount=<leg2_out_raw> --dry-run"
  echo
  echo "Live sizing / micro tranches:"
  echo "  bash scripts/deployment/plan-mainnet-cw-stabilization.sh"
fi
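The "roughly 6 bps" note in the Economics section above is the linear approximation; the compounded figure is marginally smaller. A quick check of the arithmetic (pure math, no project code):

```python
# Two sequential swaps, each charging a 3 bps (0.03%) LP fee.
fee = 0.0003
kept = (1 - fee) ** 2            # fraction of notional surviving both hops
drag_bps = (1 - kept) * 10_000   # implied fee drag in basis points

print(round(drag_bps, 4))  # just under the linear 6 bps estimate
```

The gap to the linear estimate (6.0000 vs ~5.9991 bps) is negligible at these fee levels, which is why the script's "not compound-precise" caveat is harmless; slippage and gas will dominate.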
240
scripts/verify/playwright-audit-info-defi-oracle.mjs
Normal file
@@ -0,0 +1,240 @@
#!/usr/bin/env node
/**
 * Playwright audit for info.defi-oracle.io (or INFO_SITE_URL).
 * Grades structure, agent surfaces, API reachability from the browser, console/network hygiene, basic perf.
 *
 * Usage:
 *   pnpm exec playwright install chromium   # once
 *   node scripts/verify/playwright-audit-info-defi-oracle.mjs
 *   INFO_SITE_URL=http://192.168.11.218 node scripts/verify/playwright-audit-info-defi-oracle.mjs
 */
import { chromium } from 'playwright';

const BASE = (process.env.INFO_SITE_URL || 'https://info.defi-oracle.io').replace(/\/$/, '');
const ROUTES = [
  '/',
  '/tokens',
  '/pools',
  '/swap',
  '/routing',
  '/governance',
  '/ecosystem',
  '/documentation',
  '/solacenet',
  '/agents',
  '/disclosures',
];

function clamp(n, lo, hi) {
  return Math.max(lo, Math.min(hi, n));
}

function scoreFromRatio(ok, total) {
  if (total === 0) return 10;
  return clamp((ok / total) * 10, 0, 10);
}

async function collectPageMetrics(page) {
  return page.evaluate(() => {
    const h1s = [...document.querySelectorAll('h1')].map((el) => el.textContent?.trim() || '');
    const nav = document.querySelector('nav');
    const main = document.querySelector('main') || document.querySelector('[role="main"]');
    const title = document.title || '';
    const metaDesc = document.querySelector('meta[name="description"]')?.getAttribute('content') || '';
    const jsonLd = [...document.querySelectorAll('script[type="application/ld+json"]')].length;
    const navLinks = nav ? nav.querySelectorAll('a').length : 0;
    let ttfbMs = null;
    let domCompleteMs = null;
    const navEntry = performance.getEntriesByType('navigation')[0];
    if (navEntry) {
      ttfbMs = Math.round(navEntry.responseStart);
      domCompleteMs = Math.round(navEntry.domComplete);
    }
    return {
      title,
      metaDesc,
      jsonLdScripts: jsonLd,
      h1Count: h1s.length,
      h1Texts: h1s.slice(0, 3),
      hasMain: !!main,
      navLinkCount: navLinks,
      ttfbMs,
      domCompleteMs,
    };
  });
}

async function main() {
  const browser = await chromium.launch({ headless: true });
  const context = await browser.newContext({
    viewport: { width: 1280, height: 800 },
    ignoreHTTPSErrors: true,
  });
  const page = await context.newPage();

  const consoleErrors = [];
  const consoleWarnings = [];
  page.on('console', (msg) => {
    const t = msg.type();
    const text = msg.text();
    if (t === 'error') consoleErrors.push(text);
    if (t === 'warning') consoleWarnings.push(text);
  });

  const failedRequests = [];
  page.on('requestfailed', (req) => {
    failedRequests.push({ url: req.url(), failure: req.failure()?.errorText || 'unknown' });
  });

  const pages = [];
  for (const path of ROUTES) {
    const url = `${BASE}${path === '/' ? '/' : path}`;
    const res = await page.goto(url, { waitUntil: 'load', timeout: 60_000 });
    const status = res?.status() ?? 0;
    await page.waitForTimeout(1500);
    const metrics = await collectPageMetrics(page);
    const bodySample = await page.evaluate(() => document.body?.innerText?.slice(0, 2000) || '');
    pages.push({ path, url, status, metrics, bodySampleLength: bodySample.length });
  }

  const staticChecks = [];
  for (const p of ['/llms.txt', '/robots.txt', '/sitemap.xml', '/agent-hints.json']) {
    const r = await page.request.get(`${BASE}${p}`);
    staticChecks.push({
      path: p,
      status: r.status(),
      ok: r.ok(),
      contentType: r.headers()['content-type'] || '',
    });
  }

  const tokenAggProbe = { ok: false, status: 0, snippet: '' };
  try {
    const taUrl = `${BASE}/token-aggregation/api/v1/networks?refresh=1`;
    const tr = await page.request.get(taUrl);
    tokenAggProbe.status = tr.status();
    tokenAggProbe.ok = tr.ok();
    const ct = (tr.headers()['content-type'] || '').toLowerCase();
    if (ct.includes('json')) {
      const j = await tr.json().catch(() => null);
      tokenAggProbe.snippet = j && typeof j === 'object' ? JSON.stringify(j).slice(0, 120) : '';
    } else {
      tokenAggProbe.snippet = (await tr.text()).slice(0, 80);
    }
  } catch (e) {
    tokenAggProbe.error = String(e);
  }

  await browser.close();

  const routesOk = pages.filter((p) => p.status >= 200 && p.status < 400).length;
  const staticOk = staticChecks.filter((s) => s.ok).length;
  const h1Ok = pages.filter((p) => p.metrics.h1Count === 1).length;
  const mainOk = pages.filter((p) => p.metrics.hasMain).length;

  const uniqueConsoleBuckets = new Set(
    consoleErrors.map((t) => {
      if (t.includes('CORS policy')) return 'cors-blocked-cross-origin-api';
      if (t.includes('502')) return 'http-502';
      if (t.includes('429')) return 'http-429';
      if (t.includes('Failed to load resource')) return 'failed-resource-generic';
      return t.slice(0, 100);
    }),
  );
  const distinctConsoleIssues = uniqueConsoleBuckets.size;

  const scores = {
    availability: scoreFromRatio(routesOk, pages.length),
    staticAgentSurfaces: scoreFromRatio(staticOk, staticChecks.length),
    documentStructure: clamp(
      scoreFromRatio(h1Ok, pages.length) * 0.5 + scoreFromRatio(mainOk, pages.length) * 0.5,
      0,
      10,
    ),
    tokenAggregationApi: tokenAggProbe.ok ? 10 : tokenAggProbe.status === 429 ? 4 : 2,
    consoleHygiene:
      consoleErrors.length === 0 ? 10 : clamp(10 - distinctConsoleIssues * 2.5, 0, 10),
    networkHygiene:
      failedRequests.length === 0 ? 10 : clamp(10 - Math.min(failedRequests.length, 8) * 1.1, 0, 10),
    homeMetaAndSeo: (() => {
      const home = pages[0];
      let s = 0;
      if (home?.metrics.title?.length > 10) s += 3;
      if (home?.metrics.metaDesc?.length > 20) s += 3;
      if (home?.metrics.jsonLdScripts > 0) s += 4;
      return clamp(s, 0, 10);
    })(),
    performanceHint: (() => {
      const ttfb = pages[0]?.metrics?.ttfbMs;
      if (ttfb == null || Number.isNaN(ttfb)) return 7;
      if (ttfb < 300) return 10;
      if (ttfb < 800) return 8;
      if (ttfb < 1800) return 6;
      return 4;
    })(),
  };

  const weights = {
    availability: 0.18,
    staticAgentSurfaces: 0.12,
    documentStructure: 0.12,
    tokenAggregationApi: 0.18,
    consoleHygiene: 0.1,
    networkHygiene: 0.1,
    homeMetaAndSeo: 0.1,
    performanceHint: 0.1,
  };

  let overall = 0;
  for (const [k, w] of Object.entries(weights)) {
    overall += scores[k] * w;
  }
  overall = Math.round(overall * 10) / 10;

  const report = {
    baseUrl: BASE,
    generatedAt: new Date().toISOString(),
    overallScore: overall,
    scores,
    weights,
    pages: pages.map((p) => ({
      path: p.path,
      url: p.url,
      status: p.status,
      metrics: p.metrics,
      bodyTextChars: p.bodySampleLength,
    })),
    staticChecks,
    tokenAggregationProbe: tokenAggProbe,
    consoleErrors,
    distinctConsoleIssues,
    consoleWarnings: consoleWarnings.slice(0, 20),
    failedRequests: failedRequests.slice(0, 30),
    rubricNotes: [
      'Scores are heuristic (0–10 per axis); overall is weighted sum.',
      'tokenAggregationApi uses same-origin /token-aggregation from BASE (expects JSON 200).',
      'Exactly one h1 per route is preferred for documentStructure.',
      'performanceHint uses Navigation Timing responseStart (TTFB) on home only.',
    ],
  };

  const grade =
    overall >= 9
      ? 'A'
      : overall >= 8
        ? 'B'
        : overall >= 7
          ? 'C'
          : overall >= 6
            ? 'D'
            : 'F';

  report.letterGrade = grade;

  console.log(JSON.stringify(report, null, 2));
}

main().catch((e) => {
  console.error(e);
  process.exit(1);
});
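The weighted-sum and letter-grade logic at the end of the audit script is easy to verify independently. A minimal sketch (the weights and thresholds mirror the script above; the per-axis scores plugged in are hypothetical):

```python
# Axis weights as defined in playwright-audit-info-defi-oracle.mjs (sum to 1.0).
WEIGHTS = {
    "availability": 0.18, "staticAgentSurfaces": 0.12, "documentStructure": 0.12,
    "tokenAggregationApi": 0.18, "consoleHygiene": 0.1, "networkHygiene": 0.1,
    "homeMetaAndSeo": 0.1, "performanceHint": 0.1,
}

def overall(scores):
    # Weighted sum of 0-10 axis scores, rounded to one decimal as in the script.
    return round(sum(scores[k] * w for k, w in WEIGHTS.items()) * 10) / 10

def grade(o):
    # Same letter-grade thresholds as the script.
    return "A" if o >= 9 else "B" if o >= 8 else "C" if o >= 7 else "D" if o >= 6 else "F"

# Hypothetical run: everything healthy except a rate-limited API probe (429 maps to 4).
scores = dict.fromkeys(WEIGHTS, 10)
scores["tokenAggregationApi"] = 4
o = overall(scores)
print(o, grade(o))
```

Because tokenAggregationApi carries an 0.18 weight, a single rate-limited probe drops an otherwise perfect site from an A to a B, which matches the script's emphasis on API reachability.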
164
scripts/verify/pmm-swap-quote-chain138.sh
Executable file
@@ -0,0 +1,164 @@
|
||||
#!/usr/bin/env bash
|
||||
# On-chain DODO PMM quote + suggested minOut for DODOPMMIntegration.swapExactIn (Chain 138).
|
||||
# The REST GET /api/v1/quote uses Uniswap-style math; this script matches what the pool + integration enforce.
|
||||
#
|
||||
# Usage:
|
||||
# bash scripts/verify/pmm-swap-quote-chain138.sh --token-in 0x93E6… --amount-in 100000000
|
||||
# bash scripts/verify/pmm-swap-quote-chain138.sh --pool 0x9e89… --token-in 0x93E6… --amount-in 100000000 --trader 0xYour…
|
||||
# RPC_URL_138=https://rpc.defi-oracle.io bash scripts/verify/pmm-swap-quote-chain138.sh ...
|
||||
#
|
||||
# Requires: cast (Foundry), bc (for minOut math on large wei values).
|
||||
set -euo pipefail
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
|
||||
# shellcheck source=/dev/null
|
||||
[[ -f "$PROJECT_ROOT/scripts/lib/load-project-env.sh" ]] && source "$PROJECT_ROOT/scripts/lib/load-project-env.sh"
|
||||
|
||||
require_cmd() {
|
||||
command -v "$1" >/dev/null 2>&1 || {
|
||||
echo "[fail] missing required command: $1" >&2
|
||||
exit 1
|
||||
}
|
||||
}
|
||||
|
||||
require_cmd cast
|
||||
require_cmd bc
|
||||
|
||||
usage() {
|
||||
cat <<'EOF'
|
||||
Usage: pmm-swap-quote-chain138.sh [options]
|
||||
|
||||
Required:
|
||||
--token-in ADDR ERC-20 you are selling (must be pool base or quote)
|
||||
--amount-in WEI Raw amount (integer string, e.g. 100000000 for 100e6)
|
||||
|
||||
Options:
|
||||
--rpc URL JSON-RPC (default: RPC_URL_138 or http://192.168.11.211:8545)
|
||||
--pool ADDR PMM pool (default: PMM_QUOTE_POOL or canonical cUSDT/cUSDC pool below)
|
||||
--trader ADDR Address passed to querySell* (default: DEPLOYER_ADDRESS or 0x4A66…301C8)
|
||||
-h, --help This help
|
||||
|
||||
Prints: pool base/quote, quoted output wei, minOut at 99% / 95% / 90%, and copy-paste swapExactIn hints.
|
||||
|
||||
See: info-defi-oracle.io/swap (uses the same on-chain quotes in the browser).
|
||||
EOF
|
||||
}
|
||||
|
||||
# Defaults align with live indexer / failed-tx repro; override with PMM_QUOTE_POOL.
|
||||
RPC="${RPC_URL_138:-http://192.168.11.211:8545}"
|
||||
POOL="${PMM_QUOTE_POOL:-0x9e89bAe009adf128782E19e8341996c596ac40dC}"
|
||||
INTEGRATION="${DODO_PMM_INTEGRATION:-${CHAIN_138_DODO_PMM_INTEGRATION:-0x86ADA6Ef91A3B450F89f2b751e93B1b7A3218895}}"
|
||||
TRADER="${PMM_QUOTE_TRADER:-${DEPLOYER_ADDRESS:-0x4A666F96fC8764181194447A7dFdb7d471b301C8}}"
|
||||
TOKEN_IN=""
|
||||
AMOUNT_IN=""
|
||||
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case "$1" in
|
||||
-h | --help)
|
||||
usage
|
||||
exit 0
|
||||
;;
|
||||
--rpc)
|
||||
RPC="$2"
|
||||
shift 2
|
||||
;;
|
||||
--pool)
|
||||
POOL="$2"
|
||||
shift 2
|
||||
;;
|
||||
    --token-in)
      TOKEN_IN="$2"
      shift 2
      ;;
    --amount-in)
      AMOUNT_IN="$2"
      shift 2
      ;;
    --trader)
      TRADER="$2"
      shift 2
      ;;
    *)
      echo "[fail] unknown argument: $1" >&2
      usage >&2
      exit 1
      ;;
  esac
done

if [[ -z "${TOKEN_IN}" || -z "${AMOUNT_IN}" ]]; then
  usage >&2
  exit 1
fi

to_lower() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]'
}

ti="$(to_lower "$TOKEN_IN")"
POOL_LC="$(to_lower "$POOL")"

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Chain 138 PMM quote (for swapExactIn minAmountOut)"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "RPC: ${RPC}"
echo "Pool: ${POOL}"
echo "Integration: ${INTEGRATION}"
echo "Trader (view): ${TRADER}"
echo "Token in: ${TOKEN_IN}"
echo "Amount in wei: ${AMOUNT_IN}"
echo

BASE="$(cast call "${POOL}" '_BASE_TOKEN_()(address)' --rpc-url "${RPC}" | awk '{print $1}')"
QUOTE="$(cast call "${POOL}" '_QUOTE_TOKEN_()(address)' --rpc-url "${RPC}" | awk '{print $1}')"
BASE_LC="$(to_lower "$BASE")"
QUOTE_LC="$(to_lower "$QUOTE")"

echo "Pool _BASE_TOKEN_: ${BASE}"
echo "Pool _QUOTE_TOKEN_: ${QUOTE}"
echo

if [[ "${ti}" == "${BASE_LC}" ]]; then
  MODE="querySellBase (selling base → receive quote)"
  raw="$(cast call "${POOL}" 'querySellBase(address,uint256)(uint256,uint256)' "${TRADER}" "${AMOUNT_IN}" --rpc-url "${RPC}")"
elif [[ "${ti}" == "${QUOTE_LC}" ]]; then
  MODE="querySellQuote (selling quote → receive base)"
  raw="$(cast call "${POOL}" 'querySellQuote(address,uint256)(uint256,uint256)' "${TRADER}" "${AMOUNT_IN}" --rpc-url "${RPC}")"
else
  echo "[fail] token-in is not this pool's base or quote token." >&2
  exit 1
fi

quoted="$(printf '%s\n' "${raw}" | head -1 | awk '{print $1}')"
mtfee="$(printf '%s\n' "${raw}" | head -1 | awk '{print $2}')"
if [[ -z "${quoted}" || "${quoted}" == "0" ]]; then
  echo "[fail] query returned empty or zero output; check pool/RPC/amount." >&2
  exit 1
fi

min99="$(echo "scale=0; ${quoted} * 99 / 100" | bc)"
min95="$(echo "scale=0; ${quoted} * 95 / 100" | bc)"
min90="$(echo "scale=0; ${quoted} * 90 / 100" | bc)"

echo "Mode: ${MODE}"
echo "Quoted amount out (wei): ${quoted}"
echo "MT fee (2nd return): ${mtfee:-?}"
echo
echo "Suggested minAmountOut (slippage vs on-chain quote):"
echo "  99%: ${min99}"
echo "  95%: ${min95}"
echo "  90%: ${min90}"
echo
echo "── swapExactIn (DODOPMMIntegration) ──"
echo "  pool: ${POOL_LC}"
echo "  tokenIn: ${ti}"
echo "  amountIn: ${AMOUNT_IN}"
echo "  minAmountOut: ${min99} # 1% slip from quoted out"
echo
echo "Example cast send (needs PRIVATE_KEY and prior approve on tokenIn → integration):"
echo "  cast send ${INTEGRATION} \\"
echo "    'swapExactIn(address,address,uint256,uint256)' \\"
echo "    ${POOL} ${TOKEN_IN} ${AMOUNT_IN} ${min99} \\"
echo "    --rpc-url \"\${RPC_URL_138}\" --private-key \"\${PRIVATE_KEY}\" --legacy --gas-price 1000"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
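The suggested minAmountOut values above are integer floors of the quoted output. A minimal Python sketch of the same arithmetic as the `bc` expressions, with a hypothetical quoted amount:

```python
def min_amount_out(quoted_wei: int, pct: int) -> int:
    """Floor of quoted_wei * pct / 100 (pct = 99, 95, or 90), as the script computes with bc."""
    return quoted_wei * pct // 100

quoted = 1_234_567_890_000_000_000  # hypothetical quoted output in wei
for pct in (99, 95, 90):
    print(pct, min_amount_out(quoted, pct))
```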
193
scripts/verify/poll-chain138-active-liquidity-pools-public.sh
Executable file
@@ -0,0 +1,193 @@
#!/usr/bin/env bash
# Poll Chain 138 public RPC and list known liquidity pools with on-chain activity hints.
#
# Why not getReserves()? DODO V2 PMM / DVM pools are not Uniswap V2 pairs; getReserves
# reverts on them. This script uses eth_getCode + ERC-20 balanceOf on the pool address
# (same method as check-pmm-pool-balances-chain138.sh).
#
# Canonical addresses: docs/11-references/ADDRESS_MATRIX_AND_STATUS.md,
# docs/11-references/CONTRACT_ADDRESSES_REFERENCE.md
#
# Usage:
#   ./scripts/verify/poll-chain138-active-liquidity-pools-public.sh [RPC_URL]
# Env:
#   OUTPUT_JSON=1 — print one JSON object to stdout (requires jq)
#   MIN_HEX=0x0   — balances above this count as "non-zero" (default: 0x0)
#
# Default RPC (internet): https://rpc-http-pub.d-bis.org
# Override: RPC_URL_138_PUBLIC, or pass URL as first argument.

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"

RPC="${1:-${RPC_URL_138_PUBLIC:-https://rpc-http-pub.d-bis.org}}"
MIN_HEX="${MIN_HEX:-0x0}"
OUTPUT_JSON="${OUTPUT_JSON:-0}"

# --- Canonical tokens (Chain 138) ---
WETH10="0xf4BB2e28688e89fCcE3c0580D37d36A7672E8A9f"
cUSDT="0x93E66202A11B1772E55407B32B44e5Cd8eda7f22"
cUSDC="0xf22258f57794CC8E06237084b353Ab30fFfa640b"
OFFICIAL_USDT="0x004b63A7B5b0E06f6bB6adb4a5F9f590BF3182D1"
OFFICIAL_USDC="0x71D6687F38b93CCad569Fa6352c876eea967201b"
cXAUC="0x290E52a8819A4fbD0714E517225429aA2B70EC6b"
cEURT="0xdf4b71c61E5912712C1Bdd451416B9aC26949d72"

json_escape() { printf '%s' "$1" | jq -Rs .; }

rpc_call() {
  local method="$1" params="$2"
  curl -sS -X POST "$RPC" -H "Content-Type: application/json" \
    --data "{\"jsonrpc\":\"2.0\",\"method\":\"$method\",\"params\":$params,\"id\":1}"
}

verify_chain() {
  local res hex
  res=$(rpc_call "eth_chainId" "[]")
  hex=$(echo "$res" | jq -r '.result // empty')
  if [[ "$hex" != "0x8a" ]]; then
    echo "error: expected chainId 0x8a (138), got: ${hex:-<empty>}" >&2
    echo "raw: $res" >&2
    exit 2
  fi
}

pads() {
  local a
  a=$(echo "$1" | sed 's/0x//')
  printf '%064s' "$a" | tr ' ' '0'
}

# balanceOf(address) on ERC-20
token_balance() {
  local tok="$1" holder="$2"
  local data="0x70a08231$(pads "$holder")"
  local res
  res=$(rpc_call "eth_call" "[{\"to\":\"$tok\",\"data\":\"$data\"},\"latest\"]")
  echo "$res" | jq -r '.result // "0x0"'
}

contract_code() {
  local addr="$1"
  local res
  res=$(rpc_call "eth_getCode" "[\"$addr\",\"latest\"]")
  echo "$res" | jq -r '.result // "0x"'
}

hex_gt() {
  # return 0 if $1 > $2 (as hex integers)
  # note: balances above 2^63-1 overflow printf %d, so treat the result as a hint
  local a b
  a=$(printf '%d' "${1:-0x0}" 2>/dev/null || echo 0)
  b=$(printf '%d' "${2:-0x0}" 2>/dev/null || echo 0)
  (( a > b ))
}

# pool entries: tab-separated: type	name	pool	base_tok	quote_tok
read -r -d '' POOL_TABLE <<'EOF' || true
dodo_pmm	Pool cUSDT/cUSDC	0x9e89bAe009adf128782E19e8341996c596ac40dC	0x93E66202A11B1772E55407B32B44e5Cd8eda7f22	0xf22258f57794CC8E06237084b353Ab30fFfa640b
dodo_pmm	Pool cUSDT/USDT (official mirror)	0x866Cb44b59303d8dc5f4F9E3E7A8e8b0bf238d66	0x93E66202A11B1772E55407B32B44e5Cd8eda7f22	0x004b63A7B5b0E06f6bB6adb4a5F9f590BF3182D1
dodo_pmm	Pool cUSDC/USDC (official mirror)	0xc39B7D0F40838cbFb54649d327f49a6DAC964062	0xf22258f57794CC8E06237084b353Ab30fFfa640b	0x71D6687F38b93CCad569Fa6352c876eea967201b
dodo_pmm	Pool cUSDT/cXAUC (public)	0x1AA55E2001E5651349AfF5A63FD7A7Ae44f0F1b0	0x93E66202A11B1772E55407B32B44e5Cd8eda7f22	0x290E52a8819A4fbD0714E517225429aA2B70EC6b
dodo_pmm	Pool cUSDC/cXAUC (public)	0xEA9Ac6357CaCB42a83b9082B870610363B177cBa	0xf22258f57794CC8E06237084b353Ab30fFfa640b	0x290E52a8819A4fbD0714E517225429aA2B70EC6b
dodo_pmm	Pool cEURT/cXAUC (public)	0xbA99bc1eAAC164569d5AcA96C806934DDaF970Cf	0xdf4b71c61E5912712C1Bdd451416B9aC26949d72	0x290E52a8819A4fbD0714E517225429aA2B70EC6b
dodo_pmm	Pool cUSDT/cXAUC (private)	0x94316511621430423a2cff0C036902BAB4aA70c2	0x93E66202A11B1772E55407B32B44e5Cd8eda7f22	0x290E52a8819A4fbD0714E517225429aA2B70EC6b
dodo_pmm	Pool cUSDC/cXAUC (private)	0x7867D58567948e5b9908F1057055Ee4440de0851	0xf22258f57794CC8E06237084b353Ab30fFfa640b	0x290E52a8819A4fbD0714E517225429aA2B70EC6b
dodo_pmm	Pool cEURT/cXAUC (private)	0x505403093826D494983A93b43Aa0B8601078A44e	0xdf4b71c61E5912712C1Bdd451416B9aC26949d72	0x290E52a8819A4fbD0714E517225429aA2B70EC6b
d3_pmm	D3MM WETH10 pilot pool	0x6550A3a59070061a262a893A1D6F3F490afFDBDA	0xf4BB2e28688e89fCcE3c0580D37d36A7672E8A9f	0x004b63A7B5b0E06f6bB6adb4a5F9f590BF3182D1
pilot_curve	PilotCurve3Pool USDT/USDC	0xE440Ec15805BE4C7BabCD17A63B8C8A08a492e0f	0x004b63A7B5b0E06f6bB6adb4a5F9f590BF3182D1	0x71D6687F38b93CCad569Fa6352c876eea967201b
EOF

# Venue contracts (liquidity may be behind router/vault; listed for completeness)
read -r -d '' VENUE_TABLE <<'EOF' || true
pilot_uniswap	PilotUniswapV3Router/Quoter	0xD164D9cCfAcf5D9F91698f296aE0cd245D964384
pilot_balancer	PilotBalancerVault	0x96423d7C1727698D8a25EbFB88131e9422d1a3C3
pilot_oneinch	PilotOneInchRouter	0x500B84b1Bc6F59C1898a5Fe538eA20A758757A4F
EOF

json_items=()
text_lines=()

process_pool_row() {
  local ptype="$1" pname="$2" pool="$3" btok="$4" qtok="$5"
  local code braw qraw
  code=$(contract_code "$pool")
  braw=$(token_balance "$btok" "$pool")
  qraw=$(token_balance "$qtok" "$pool")

  local has_code="false"
  [[ "$code" != "0x" && "$code" != "0x0" ]] && has_code="true"

  local active="false"
  if hex_gt "$braw" "$MIN_HEX" && hex_gt "$qraw" "$MIN_HEX"; then
    active="true"
  fi

  if [[ "$OUTPUT_JSON" == "1" ]]; then
    json_items+=("$(jq -nc \
      --arg type "$ptype" \
      --arg name "$pname" \
      --arg pool "$pool" \
      --arg base "$btok" \
      --arg quote "$qtok" \
      --arg braw "$braw" \
      --arg qraw "$qraw" \
      --argjson has_code "$has_code" \
      --argjson active "$active" \
      '{type:$type,name:$name,pool:$pool,base_token:$base,quote_token:$quote,base_balance_raw:$braw,quote_balance_raw:$qraw,has_contract_code:$has_code,active_two_sided:$active}')")
  else
    text_lines+=("$pname | $pool | code=$has_code | active_two_sided=$active | base=$braw quote=$qraw")
  fi
}

process_venue_row() {
  local vtype="$1" vname="$2" vaddr="$3"
  local code
  code=$(contract_code "$vaddr")
  local has_code="false"
  [[ "$code" != "0x" && "$code" != "0x0" ]] && has_code="true"
  if [[ "$OUTPUT_JSON" == "1" ]]; then
    json_items+=("$(jq -nc \
      --arg type "$vtype" \
      --arg name "$vname" \
      --arg addr "$vaddr" \
      --argjson has_code "$has_code" \
      '{type:$type,name:$name,address:$addr,has_contract_code:$has_code,note:"not a single pair; balances may be on internal pools/vaults"}')")
  else
    text_lines+=("$vname | $vaddr | venue_contract code=$has_code (see note in script header)")
  fi
}

verify_chain

if [[ "$OUTPUT_JSON" == "1" ]] && ! command -v jq >/dev/null 2>&1; then
  echo "error: OUTPUT_JSON=1 requires jq" >&2
  exit 1
fi

while IFS=$'\t' read -r ptype pname pool btok qtok; do
  [[ -z "$ptype" ]] && continue
  process_pool_row "$ptype" "$pname" "$pool" "$btok" "$qtok"
done <<< "$POOL_TABLE"

while IFS=$'\t' read -r vtype vname vaddr; do
  [[ -z "$vtype" ]] && continue
  process_venue_row "$vtype" "$vname" "$vaddr"
done <<< "$VENUE_TABLE"

if [[ "$OUTPUT_JSON" == "1" ]]; then
  printf '%s\n' "${json_items[@]}" | jq -s \
    --arg rpc "$RPC" \
    --arg scanned_at "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    '{chainId:138,rpc:$rpc,scanned_at:$scanned_at,entries:.}'
else
  echo "Chain 138 — liquidity poll (registry + balanceOf)"
  echo "RPC: $RPC"
  echo ""
  printf '%s\n' "${text_lines[@]}"
  echo ""
  echo "active_two_sided=true means both base and quote ERC-20 balances on the pool address exceed MIN_HEX ($MIN_HEX)."
  echo "Venue rows are router/vault contracts; do not expect Uniswap V2 getReserves on DODO PMM pool addresses."
  echo "Full on-chain discovery of every AMM pool would require factory logs or an indexer, not a single JSON-RPC sweep."
fi
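The `token_balance()` helper above builds a raw `eth_call` payload by hand: the 4-byte selector for `balanceOf(address)` (`0x70a08231`) followed by the holder address left-padded to 32 bytes. A minimal Python sketch of the same encoding:

```python
def balance_of_calldata(holder: str) -> str:
    """ABI-encode a balanceOf(address) call: selector + 32-byte left-padded address."""
    addr = holder.lower().removeprefix("0x")
    return "0x70a08231" + addr.rjust(64, "0")

# one of the pool addresses from the registry table
print(balance_of_calldata("0x9e89bAe009adf128782E19e8341996c596ac40dC"))
```

The resulting hex string is what the script sends as `data` in the JSON-RPC `eth_call` request; the response is the balance as a 32-byte hex word.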
613
scripts/verify/poll-lxc-cluster-health.sh
Executable file
@@ -0,0 +1,613 @@
#!/usr/bin/env bash
# Poll Proxmox LXC cluster health over SSH with key auth.
# Collects:
#   - /cluster/resources VM inventory (LXCs only) from a seed Proxmox host
#   - Per-node load, RAM, PSI, /var/lib/vz usage, and pvesm status
# Emits:
#   - Timestamped JSON report under reports/status/
#   - Human-readable summary text next to the JSON
# Exit codes:
#   0 = OK
#   1 = WARN findings present
#   2 = CRIT findings present
#   3 = Collection failure / seed unreachable
#
# Usage:
#   bash scripts/verify/poll-lxc-cluster-health.sh
#   SEED_HOST=192.168.11.11 bash scripts/verify/poll-lxc-cluster-health.sh --json
#   OUT_DIR=/tmp bash scripts/verify/poll-lxc-cluster-health.sh
#
# Threshold env overrides:
#   CLUSTER_HEALTH_NODE_LOAD_WARN_PER_CPU=0.90
#   CLUSTER_HEALTH_NODE_LOAD_CRIT_PER_CPU=1.20
#   CLUSTER_HEALTH_NODE_MEM_WARN_PCT=85
#   CLUSTER_HEALTH_NODE_MEM_CRIT_PCT=92
#   CLUSTER_HEALTH_VZ_WARN_PCT=85
#   CLUSTER_HEALTH_VZ_CRIT_PCT=93
#   CLUSTER_HEALTH_PSI_CPU_SOME_WARN=10
#   CLUSTER_HEALTH_PSI_CPU_SOME_CRIT=20
#   CLUSTER_HEALTH_PSI_IO_FULL_WARN=10
#   CLUSTER_HEALTH_PSI_IO_FULL_CRIT=20
#   CLUSTER_HEALTH_PSI_MEM_FULL_WARN=5
#   CLUSTER_HEALTH_PSI_MEM_FULL_CRIT=10
#   CLUSTER_HEALTH_LXC_MEM_WARN_PCT=85
#   CLUSTER_HEALTH_LXC_MEM_CRIT_PCT=95
#   CLUSTER_HEALTH_LXC_CPU_WARN_PCT=20
#   CLUSTER_HEALTH_LXC_CPU_CRIT_PCT=40
#   CLUSTER_HEALTH_NODE_SKEW_WARN_PCT=45
#   CLUSTER_HEALTH_NODE_SKEW_CRIT_PCT=55

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"

JSON_ONLY=0
case "${1:-}" in
  --json) JSON_ONLY=1 ;;
  "" ) ;;
  -h|--help)
    sed -n '1,48p' "$0"
    exit 0
    ;;
  *)
    echo "ERROR: unknown argument: ${1}" >&2
    exit 2
    ;;
esac

PROXMOX_SSH_USER="${PROXMOX_SSH_USER:-root}"
SEED_HOST="${SEED_HOST:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"
OUT_DIR="${OUT_DIR:-${PROJECT_ROOT}/reports/status}"
TS="$(date +%Y%m%d_%H%M%S)"
JSON_OUT="${JSON_OUT:-${OUT_DIR}/lxc_cluster_health_${TS}.json}"
TEXT_OUT="${TEXT_OUT:-${OUT_DIR}/lxc_cluster_health_${TS}.txt}"
mkdir -p "${OUT_DIR}"

TMP_DIR="$(mktemp -d)"
trap 'rm -rf "${TMP_DIR}"' EXIT

VM_JSON="${TMP_DIR}/cluster_resources_vm.json"
NODES_JSON="${TMP_DIR}/cluster_resources_node.json"
NODE_DIR="${TMP_DIR}/nodes"
mkdir -p "${NODE_DIR}"

ssh_base=(
  ssh
  -o BatchMode=yes
  -o ConnectTimeout=15
  -o StrictHostKeyChecking=no
)

SEED_TARGET="${PROXMOX_SSH_USER}@${SEED_HOST}"

node_ssh_host() {
  case "$1" in
    ml110) printf '%s\n' "${PROXMOX_HOST_ML110:-$1}" ;;
    r630-01) printf '%s\n' "${PROXMOX_HOST_R630_01:-$1}" ;;
    r630-02) printf '%s\n' "${PROXMOX_HOST_R630_02:-$1}" ;;
    r630-03) printf '%s\n' "${PROXMOX_HOST_R630_03:-$1}" ;;
    r630-04) printf '%s\n' "${PROXMOX_HOST_R630_04:-$1}" ;;
    *) printf '%s\n' "$1" ;;
  esac
}

if ! ping -c1 -W2 "${SEED_HOST}" >/dev/null 2>&1; then
  echo "ERROR: seed unreachable: ${SEED_HOST}" >&2
  exit 3
fi

if ! "${ssh_base[@]}" "${SEED_TARGET}" "pvesh get /cluster/resources --type vm --output-format json" >"${VM_JSON}" 2>"${TMP_DIR}/seed_vm.err"; then
  echo "ERROR: failed to query VM resources from ${SEED_HOST}" >&2
  cat "${TMP_DIR}/seed_vm.err" >&2 || true
  exit 3
fi

if ! "${ssh_base[@]}" "${SEED_TARGET}" "pvesh get /cluster/resources --type node --output-format json" >"${NODES_JSON}" 2>"${TMP_DIR}/seed_nodes.err"; then
  echo "ERROR: failed to query node resources from ${SEED_HOST}" >&2
  cat "${TMP_DIR}/seed_nodes.err" >&2 || true
  exit 3
fi

mapfile -t NODE_ROWS < <(
  python3 - "${NODES_JSON}" <<'PY'
import json, sys
with open(sys.argv[1], 'r', encoding='utf-8') as fh:
    data = json.load(fh)
for row in data:
    node = row.get("node")
    if node:
        print(node)
PY
)

if [[ "${#NODE_ROWS[@]}" -eq 0 ]]; then
  echo "ERROR: no Proxmox nodes returned by cluster resources" >&2
  exit 3
fi

REMOTE_BODY=$(cat <<'EOS'
set -euo pipefail
echo "__HOSTNAME__"
hostname -s 2>/dev/null || hostname
echo "__UPTIME__"
uptime
echo "__NPROC__"
nproc 2>/dev/null || getconf _NPROCESSORS_ONLN || echo 0
echo "__FREE__"
free -b
echo "__PSI_CPU__"
cat /proc/pressure/cpu 2>/dev/null || true
echo "__PSI_IO__"
cat /proc/pressure/io 2>/dev/null || true
echo "__PSI_MEMORY__"
cat /proc/pressure/memory 2>/dev/null || true
echo "__DF_VZ__"
df -B1 -P /var/lib/vz 2>/dev/null || true
echo "__PVESM__"
pvesm status 2>/dev/null || true
EOS
)

for node in "${NODE_ROWS[@]}"; do
  target="${PROXMOX_SSH_USER}@$(node_ssh_host "${node}")"
  if ! "${ssh_base[@]}" "${target}" "bash -lc $(printf '%q' "${REMOTE_BODY}")" >"${NODE_DIR}/${node}.txt" 2>"${NODE_DIR}/${node}.err"; then
    printf 'COLLECTION_FAILED\n' >"${NODE_DIR}/${node}.txt"
    cat "${NODE_DIR}/${node}.err" >>"${NODE_DIR}/${node}.txt" || true
  fi
done

set +e
python3 - "${VM_JSON}" "${NODES_JSON}" "${NODE_DIR}" "${JSON_OUT}" "${TEXT_OUT}" "${SEED_HOST}" <<'PY'
import json
import math
import os
import re
import sys
from collections import Counter, defaultdict
from datetime import datetime, timezone

vm_json, nodes_json, node_dir, json_out, text_out, seed_host = sys.argv[1:7]

def env_float(name, default):
    try:
        return float(os.environ.get(name, default))
    except Exception:
        return float(default)

def env_int(name, default):
    try:
        return int(float(os.environ.get(name, default)))
    except Exception:
        return int(default)

T = {
    "node_load_warn_per_cpu": env_float("CLUSTER_HEALTH_NODE_LOAD_WARN_PER_CPU", 0.90),
    "node_load_crit_per_cpu": env_float("CLUSTER_HEALTH_NODE_LOAD_CRIT_PER_CPU", 1.20),
    "node_mem_warn_pct": env_float("CLUSTER_HEALTH_NODE_MEM_WARN_PCT", 85),
    "node_mem_crit_pct": env_float("CLUSTER_HEALTH_NODE_MEM_CRIT_PCT", 92),
    "vz_warn_pct": env_float("CLUSTER_HEALTH_VZ_WARN_PCT", 85),
    "vz_crit_pct": env_float("CLUSTER_HEALTH_VZ_CRIT_PCT", 93),
    "psi_cpu_some_warn": env_float("CLUSTER_HEALTH_PSI_CPU_SOME_WARN", 10),
    "psi_cpu_some_crit": env_float("CLUSTER_HEALTH_PSI_CPU_SOME_CRIT", 20),
    "psi_io_full_warn": env_float("CLUSTER_HEALTH_PSI_IO_FULL_WARN", 10),
    "psi_io_full_crit": env_float("CLUSTER_HEALTH_PSI_IO_FULL_CRIT", 20),
    "psi_mem_full_warn": env_float("CLUSTER_HEALTH_PSI_MEM_FULL_WARN", 5),
    "psi_mem_full_crit": env_float("CLUSTER_HEALTH_PSI_MEM_FULL_CRIT", 10),
    "lxc_mem_warn_pct": env_float("CLUSTER_HEALTH_LXC_MEM_WARN_PCT", 85),
    "lxc_mem_crit_pct": env_float("CLUSTER_HEALTH_LXC_MEM_CRIT_PCT", 95),
    "lxc_cpu_warn_pct": env_float("CLUSTER_HEALTH_LXC_CPU_WARN_PCT", 20),
    "lxc_cpu_crit_pct": env_float("CLUSTER_HEALTH_LXC_CPU_CRIT_PCT", 40),
    "node_skew_warn_pct": env_float("CLUSTER_HEALTH_NODE_SKEW_WARN_PCT", 45),
    "node_skew_crit_pct": env_float("CLUSTER_HEALTH_NODE_SKEW_CRIT_PCT", 55),
    "summary_top_n": env_int("CLUSTER_HEALTH_SUMMARY_TOP_N", 8),
}

with open(vm_json, "r", encoding="utf-8") as fh:
    vm_rows = json.load(fh)
with open(nodes_json, "r", encoding="utf-8") as fh:
    node_rows = json.load(fh)

def parse_uptime_load(text):
    m = re.search(r"load average[s]?:\s*([0-9.]+),\s*([0-9.]+),\s*([0-9.]+)", text)
    if not m:
        return None
    return [float(m.group(1)), float(m.group(2)), float(m.group(3))]

def parse_free(text):
    for line in text.splitlines():
        if line.startswith("Mem:"):
            parts = line.split()
            if len(parts) >= 3:
                total = int(parts[1])
                used = int(parts[2])
                return {"total": total, "used": used}
    return None

def parse_psi(section):
    out = {}
    for line in section.splitlines():
        line = line.strip()
        if not line:
            continue
        kind, rest = line.split(None, 1)
        vals = {}
        for token in rest.split():
            key, value = token.split("=", 1)
            try:
                vals[key] = float(value)
            except ValueError:
                pass
        out[kind] = vals
    return out

def parse_df_vz(section):
    lines = [line for line in section.splitlines() if line.strip()]
    if len(lines) < 2:
        return None
    parts = lines[-1].split()
    if len(parts) < 6:
        return None
    return {
        "filesystem": parts[0],
        "size_bytes": int(parts[1]),
        "used_bytes": int(parts[2]),
        "avail_bytes": int(parts[3]),
        "use_pct": float(parts[4].rstrip("%")),
        "mountpoint": parts[5],
    }

def parse_pvesm(section):
    lines = [line.rstrip() for line in section.splitlines() if line.strip()]
    if len(lines) < 2:
        return []
    storage = []
    for line in lines[1:]:
        parts = line.split()
        if len(parts) < 6:
            continue
        item = {
            "name": parts[0],
            "type": parts[1],
            "status": parts[2],
            "total_bytes": None if parts[3] == "0" else int(parts[3]),
            "used_bytes": None if parts[4] == "0" else int(parts[4]),
            "available_bytes": None if parts[5] == "0" else int(parts[5]),
            "use_pct": None,
        }
        if item["total_bytes"] and item["used_bytes"] is not None:
            item["use_pct"] = round((item["used_bytes"] / item["total_bytes"]) * 100, 2)
        storage.append(item)
    return storage

def split_sections(text):
    sections = {}
    current = None
    bucket = []
    for raw in text.splitlines():
        line = raw.rstrip("\n")
        if line.startswith("__") and line.endswith("__"):
            if current is not None:
                sections[current] = "\n".join(bucket).strip()
            current = line.strip("_")
            bucket = []
            continue
        bucket.append(line)
    if current is not None:
        sections[current] = "\n".join(bucket).strip()
    return sections

severity_rank = {"ok": 0, "warn": 1, "crit": 2}
alerts = []

def add_alert(severity, scope, entity, metric, message, value=None, threshold=None):
    alerts.append({
        "severity": severity,
        "scope": scope,
        "entity": entity,
        "metric": metric,
        "message": message,
        "value": value,
        "threshold": threshold,
    })

node_metrics = {}
for path in sorted(os.listdir(node_dir)):
    if not path.endswith(".txt"):
        continue
    node_name = path[:-4]
    full_path = os.path.join(node_dir, path)
    text = open(full_path, "r", encoding="utf-8", errors="replace").read()
    if text.startswith("COLLECTION_FAILED"):
        node_metrics[node_name] = {
            "node": node_name,
            "collection_failed": True,
            "error": text.splitlines()[1:] if len(text.splitlines()) > 1 else [],
        }
        add_alert("crit", "node", node_name, "collection", f"{node_name} metrics collection failed")
        continue

    sections = split_sections(text)
    loads = parse_uptime_load(sections.get("UPTIME", ""))
    nproc = None
    try:
        nproc = int((sections.get("NPROC", "0").splitlines() or ["0"])[0].strip())
    except ValueError:
        nproc = 0
    free_mem = parse_free(sections.get("FREE", ""))
    psi = {
        "cpu": parse_psi(sections.get("PSI_CPU", "")),
        "io": parse_psi(sections.get("PSI_IO", "")),
        "memory": parse_psi(sections.get("PSI_MEMORY", "")),
    }
    df_vz = parse_df_vz(sections.get("DF_VZ", ""))
    pvesm = parse_pvesm(sections.get("PVESM", ""))

    metric = {
        "node": node_name,
        "hostname": (sections.get("HOSTNAME", node_name).splitlines() or [node_name])[0].strip(),
        "collection_failed": False,
        "loadavg": loads,
        "nproc": nproc,
        "load_per_cpu_1m": round(loads[0] / nproc, 3) if loads and nproc else None,
        "memory": None,
        "psi": psi,
        "df_vz": df_vz,
        "storage": pvesm,
    }
    if free_mem and free_mem["total"] > 0:
        metric["memory"] = {
            **free_mem,
            "used_pct": round((free_mem["used"] / free_mem["total"]) * 100, 2),
        }
    node_metrics[node_name] = metric

    lp = metric["load_per_cpu_1m"]
    if lp is not None:
        if lp >= T["node_load_crit_per_cpu"]:
            add_alert("crit", "node", node_name, "load_per_cpu_1m", f"{node_name} load/core is high", lp, T["node_load_crit_per_cpu"])
        elif lp >= T["node_load_warn_per_cpu"]:
            add_alert("warn", "node", node_name, "load_per_cpu_1m", f"{node_name} load/core is elevated", lp, T["node_load_warn_per_cpu"])

    mem = metric["memory"]
    if mem:
        if mem["used_pct"] >= T["node_mem_crit_pct"]:
            add_alert("crit", "node", node_name, "memory_used_pct", f"{node_name} host RAM usage is high", mem["used_pct"], T["node_mem_crit_pct"])
        elif mem["used_pct"] >= T["node_mem_warn_pct"]:
            add_alert("warn", "node", node_name, "memory_used_pct", f"{node_name} host RAM usage is elevated", mem["used_pct"], T["node_mem_warn_pct"])

    if df_vz:
        if df_vz["use_pct"] >= T["vz_crit_pct"]:
            add_alert("crit", "node", node_name, "vz_use_pct", f"{node_name} /var/lib/vz is near full", df_vz["use_pct"], T["vz_crit_pct"])
        elif df_vz["use_pct"] >= T["vz_warn_pct"]:
            add_alert("warn", "node", node_name, "vz_use_pct", f"{node_name} /var/lib/vz usage is elevated", df_vz["use_pct"], T["vz_warn_pct"])

    cpu_some = psi["cpu"].get("some", {}).get("avg10")
    if cpu_some is not None:
        if cpu_some >= T["psi_cpu_some_crit"]:
            add_alert("crit", "node", node_name, "psi_cpu_some_avg10", f"{node_name} CPU pressure is high", cpu_some, T["psi_cpu_some_crit"])
        elif cpu_some >= T["psi_cpu_some_warn"]:
            add_alert("warn", "node", node_name, "psi_cpu_some_avg10", f"{node_name} CPU pressure is elevated", cpu_some, T["psi_cpu_some_warn"])

    io_full = psi["io"].get("full", {}).get("avg10")
    if io_full is not None:
        if io_full >= T["psi_io_full_crit"]:
            add_alert("crit", "node", node_name, "psi_io_full_avg10", f"{node_name} I/O full pressure is high", io_full, T["psi_io_full_crit"])
        elif io_full >= T["psi_io_full_warn"]:
            add_alert("warn", "node", node_name, "psi_io_full_avg10", f"{node_name} I/O full pressure is elevated", io_full, T["psi_io_full_warn"])

    mem_full = psi["memory"].get("full", {}).get("avg10")
    if mem_full is not None:
        if mem_full >= T["psi_mem_full_crit"]:
            add_alert("crit", "node", node_name, "psi_memory_full_avg10", f"{node_name} memory full pressure is high", mem_full, T["psi_mem_full_crit"])
        elif mem_full >= T["psi_mem_full_warn"]:
            add_alert("warn", "node", node_name, "psi_memory_full_avg10", f"{node_name} memory full pressure is elevated", mem_full, T["psi_mem_full_warn"])

lxc_rows = [row for row in vm_rows if row.get("type") == "lxc"]
|
||||
running_lxcs = [row for row in lxc_rows if row.get("status") == "running"]
|
||||
stopped_lxcs = [row for row in lxc_rows if row.get("status") != "running"]
|
||||
|
||||
node_counts = Counter(row.get("node", "unknown") for row in running_lxcs)
|
||||
running_total = len(running_lxcs)
|
||||
for node, count in node_counts.items():
|
||||
pct = round((count / running_total) * 100, 2) if running_total else 0.0
|
||||
if pct >= T["node_skew_crit_pct"]:
|
||||
add_alert("crit", "cluster", node, "running_lxc_share_pct", f"{node} holds a large share of running LXCs", pct, T["node_skew_crit_pct"])
|
||||
elif pct >= T["node_skew_warn_pct"]:
|
||||
add_alert("warn", "cluster", node, "running_lxc_share_pct", f"{node} holds a high share of running LXCs", pct, T["node_skew_warn_pct"])
|
||||
|
||||
mem_hot = []
|
||||
cpu_hot = []
|
||||
disk_rw = []
|
||||
network_totals = []
|
||||
all_lxcs = []
|
||||
|
||||
for row in running_lxcs:
|
||||
maxmem = row.get("maxmem") or 0
|
||||
mem = row.get("mem") or 0
|
||||
mem_pct = round((mem / maxmem) * 100, 2) if maxmem else None
|
||||
maxcpu = row.get("maxcpu") or 0
|
||||
cpu_pct = round(float(row.get("cpu") or 0) * 100, 2)
|
||||
diskread = int(row.get("diskread") or 0)
|
||||
diskwrite = int(row.get("diskwrite") or 0)
|
||||
netin = int(row.get("netin") or 0)
|
||||
netout = int(row.get("netout") or 0)
|
||||
entry = {
|
||||
"vmid": row.get("vmid"),
|
||||
"name": row.get("name"),
|
||||
"node": row.get("node"),
|
||||
"cpu_pct": cpu_pct,
|
||||
"maxcpu": maxcpu,
|
||||
"mem_pct": mem_pct,
|
||||
"mem_bytes": mem,
|
||||
"maxmem_bytes": maxmem,
|
||||
"disk_pct": round(((row.get("disk") or 0) / row.get("maxdisk")) * 100, 2) if row.get("maxdisk") else None,
|
||||
"disk_bytes": int(row.get("disk") or 0),
|
||||
"maxdisk_bytes": int(row.get("maxdisk") or 0),
|
||||
"diskread_bytes": diskread,
|
||||
"diskwrite_bytes": diskwrite,
|
||||
"netin_bytes": netin,
|
||||
"netout_bytes": netout,
|
||||
"status": row.get("status"),
|
||||
}
|
||||
all_lxcs.append(entry)
|
||||
if mem_pct is not None:
|
||||
mem_hot.append(entry)
|
||||
if mem_pct >= T["lxc_mem_crit_pct"]:
|
||||
add_alert("crit", "lxc", f"{row.get('vmid')}:{row.get('name')}", "memory_used_pct", "LXC memory usage is high", mem_pct, T["lxc_mem_crit_pct"])
|
||||
elif mem_pct >= T["lxc_mem_warn_pct"]:
|
||||
add_alert("warn", "lxc", f"{row.get('vmid')}:{row.get('name')}", "memory_used_pct", "LXC memory usage is elevated", mem_pct, T["lxc_mem_warn_pct"])
|
||||
if cpu_pct >= T["lxc_cpu_crit_pct"]:
|
||||
add_alert("crit", "lxc", f"{row.get('vmid')}:{row.get('name')}", "cpu_pct", "LXC CPU usage is high", cpu_pct, T["lxc_cpu_crit_pct"])
|
||||
elif cpu_pct >= T["lxc_cpu_warn_pct"]:
|
||||
add_alert("warn", "lxc", f"{row.get('vmid')}:{row.get('name')}", "cpu_pct", "LXC CPU usage is elevated", cpu_pct, T["lxc_cpu_warn_pct"])
|
||||
cpu_hot.append(entry)
|
||||
disk_rw.append({**entry, "disk_total_bytes": diskread + diskwrite})
|
||||
network_totals.append({**entry, "network_total_bytes": netin + netout})
|
||||
|
||||
mem_hot.sort(key=lambda x: (-1 if x["mem_pct"] is None else -x["mem_pct"], x["vmid"]))
|
||||
cpu_hot.sort(key=lambda x: (-x["cpu_pct"], x["vmid"]))
|
||||
disk_rw.sort(key=lambda x: (-x["disk_total_bytes"], x["vmid"]))
|
||||
network_totals.sort(key=lambda x: (-x["network_total_bytes"], x["vmid"]))
|
||||
|
||||
alerts.sort(key=lambda a: (-severity_rank[a["severity"]], a["scope"], str(a["entity"]), a["metric"]))
|
||||
|
||||
overall = "ok"
|
||||
if any(a["severity"] == "crit" for a in alerts):
|
||||
overall = "crit"
|
||||
elif any(a["severity"] == "warn" for a in alerts):
|
||||
overall = "warn"
|
||||
|
||||
node_summary = []
|
||||
for row in node_rows:
|
||||
node_name = row.get("node")
|
||||
metric = node_metrics.get(node_name, {"collection_failed": True})
|
||||
node_summary.append({
|
||||
"node": node_name,
|
||||
"status": row.get("status"),
|
||||
"running_lxcs": node_counts.get(node_name, 0),
|
||||
"cluster_cpu_fraction_pct": round(float(row.get("cpu") or 0) * 100, 2) if row.get("cpu") is not None else None,
|
||||
"cluster_mem_fraction_pct": round(((row.get("mem") or 0) / row.get("maxmem")) * 100, 2) if row.get("maxmem") else None,
|
||||
"loadavg_1m": metric.get("loadavg", [None])[0] if not metric.get("collection_failed") else None,
|
||||
"load_per_cpu_1m": metric.get("load_per_cpu_1m"),
|
||||
"host_mem_used_pct": metric.get("memory", {}).get("used_pct") if not metric.get("collection_failed") else None,
|
||||
"psi_cpu_some_avg10": metric.get("psi", {}).get("cpu", {}).get("some", {}).get("avg10") if not metric.get("collection_failed") else None,
|
||||
"psi_io_full_avg10": metric.get("psi", {}).get("io", {}).get("full", {}).get("avg10") if not metric.get("collection_failed") else None,
|
||||
"psi_memory_full_avg10": metric.get("psi", {}).get("memory", {}).get("full", {}).get("avg10") if not metric.get("collection_failed") else None,
|
||||
"vz_use_pct": metric.get("df_vz", {}).get("use_pct") if not metric.get("collection_failed") else None,
|
||||
})
|
||||
|
||||
report = {
    "collected_at": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    "seed_host": seed_host,
    "overall_status": overall,
    "thresholds": T,
    "cluster": {
        "total_lxcs": len(lxc_rows),
        "running_lxcs": running_total,
        "stopped_lxcs": len(stopped_lxcs),
        "running_distribution": [
            {
                "node": node,
                "running_lxcs": count,
                "share_pct": round((count / running_total) * 100, 2) if running_total else 0.0,
            }
            for node, count in sorted(node_counts.items())
        ],
    },
    "nodes": node_summary,
    "node_metrics": node_metrics,
    "top_lxcs": {
        "memory_pct": mem_hot[:T["summary_top_n"]],
        "cpu_pct": cpu_hot[:T["summary_top_n"]],
        "disk_total_bytes": disk_rw[:T["summary_top_n"]],
        "network_total_bytes": network_totals[:T["summary_top_n"]],
    },
    "lxcs": sorted(all_lxcs, key=lambda x: (str(x["node"]), int(x["vmid"]))),
    "alerts": alerts,
}

with open(json_out, "w", encoding="utf-8") as fh:
    json.dump(report, fh, indent=2)
    fh.write("\n")

def gib(n):
    return round(n / (1024 ** 3), 2)

lines = []
lines.append(f"LXC cluster health {report['collected_at']} ({overall.upper()})")
lines.append(f"Seed host: {report['seed_host']}")
lines.append(f"LXCs: running {running_total} / total {len(lxc_rows)} / stopped {len(stopped_lxcs)}")
lines.append("")
lines.append("Node summary:")
for item in node_summary:
    lines.append(
        f"- {item['node']}: running_lxcs={item['running_lxcs']}, "
        f"load1={item['loadavg_1m'] if item['loadavg_1m'] is not None else 'n/a'}, "
        f"load/core={item['load_per_cpu_1m'] if item['load_per_cpu_1m'] is not None else 'n/a'}, "
        f"host_mem={item['host_mem_used_pct'] if item['host_mem_used_pct'] is not None else 'n/a'}%, "
        f"psi_io_full_avg10={item['psi_io_full_avg10'] if item['psi_io_full_avg10'] is not None else 'n/a'}, "
        f"vz={item['vz_use_pct'] if item['vz_use_pct'] is not None else 'n/a'}%"
    )
lines.append("")
lines.append("Top findings:")
if alerts:
    for alert in alerts[: max(T["summary_top_n"], 10)]:
        value = "" if alert["value"] is None else f" value={alert['value']}"
        threshold = "" if alert["threshold"] is None else f" threshold={alert['threshold']}"
        lines.append(f"- [{alert['severity'].upper()}] {alert['scope']} {alert['entity']} {alert['metric']}: {alert['message']}{value}{threshold}")
else:
    lines.append("- none")

lines.append("")
lines.append("Top LXC memory:")
for item in mem_hot[: T["summary_top_n"]]:
    lines.append(f"- {item['vmid']} {item['name']} @ {item['node']}: mem={item['mem_pct']}% of {gib(item['maxmem_bytes'])} GiB, cpu={item['cpu_pct']}%")

lines.append("")
lines.append("Top LXC CPU:")
for item in cpu_hot[: T["summary_top_n"]]:
    lines.append(f"- {item['vmid']} {item['name']} @ {item['node']}: cpu={item['cpu_pct']}%, mem={item['mem_pct']}%")

lines.append("")
lines.append("Top cumulative disk:")
for item in disk_rw[: T["summary_top_n"]]:
    lines.append(f"- {item['vmid']} {item['name']} @ {item['node']}: read+write={gib(item['disk_total_bytes'])} GiB")

lines.append("")
lines.append("Top cumulative network:")
for item in network_totals[: T["summary_top_n"]]:
    lines.append(f"- {item['vmid']} {item['name']} @ {item['node']}: in+out={gib(item['network_total_bytes'])} GiB")

with open(text_out, "w", encoding="utf-8") as fh:
    fh.write("\n".join(lines).rstrip() + "\n")

print(json.dumps({
    "overall_status": overall,
    "json_report": json_out,
    "text_report": text_out,
    "alerts": len(alerts),
}))

if overall == "crit":
    sys.exit(2)
if overall == "warn":
    sys.exit(1)
sys.exit(0)
PY

RC=$?
set -e

if [[ "${JSON_ONLY}" -eq 1 ]]; then
  cat "${JSON_OUT}"
else
  cat "${TEXT_OUT}"
  echo
  echo "JSON: ${JSON_OUT}"
  echo "Text: ${TEXT_OUT}"
fi

exit "${RC}"
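The wrapper above turns the report's overall status into an exit-code contract (0 ok, 1 warn, 2 crit) so callers can branch on `$?`. A minimal standalone sketch of that severity ordering and exit-code mapping, with hypothetical alert rows:

```python
# Rank map assumed to match the script's severity_rank; alert rows are illustrative.
severity_rank = {"crit": 2, "warn": 1, "info": 0}
alerts = [
    {"severity": "warn", "scope": "node", "entity": "pve2", "metric": "load_per_cpu_1m"},
    {"severity": "crit", "scope": "lxc", "entity": 140, "metric": "mem_pct"},
]
# Crit first, then stable ordering by scope/entity/metric, as in the report script
alerts.sort(key=lambda a: (-severity_rank[a["severity"]], a["scope"], str(a["entity"]), a["metric"]))

overall = "ok"
if any(a["severity"] == "crit" for a in alerts):
    overall = "crit"
elif any(a["severity"] == "warn" for a in alerts):
    overall = "warn"

# The same mapping the bash wrapper exposes via sys.exit(...)
exit_code = {"crit": 2, "warn": 1}.get(overall, 0)
```

A cron or CI caller can then treat exit 1 as "review the text report" and exit 2 as "page someone".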
88  scripts/verify/poll-proxmox-cluster-hardware.sh  Executable file
@@ -0,0 +1,88 @@
#!/usr/bin/env bash
# Poll Proxmox hypervisor hardware (CPU/RAM/disk/NIC/pvesm) over SSH with key auth.
# Also records cluster membership from the first reachable seed host.
#
# Usage:
#   bash scripts/verify/poll-proxmox-cluster-hardware.sh
#   OUT=/path/to/report.txt bash scripts/verify/poll-proxmox-cluster-hardware.sh
#
# Requires: ping, ssh (BatchMode / keys to root@hosts), optional cluster read via SEED_HOST.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# shellcheck source=/dev/null
source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true

SEED_HOST="${SEED_HOST:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"
HOSTS=(
  "${PROXMOX_HOST_ML110:-192.168.11.10}"
  "${PROXMOX_HOST_R630_01:-192.168.11.11}"
  "${PROXMOX_HOST_R630_02:-192.168.11.12}"
  "${PROXMOX_HOST_R630_03:-192.168.11.13}"
  "${PROXMOX_HOST_R630_04:-192.168.11.14}"
)

TS="$(date +%Y%m%d_%H%M%S)"
OUT="${OUT:-${PROJECT_ROOT}/reports/status/hardware_poll_${TS}.txt}"
mkdir -p "$(dirname "$OUT")"

REMOTE_BODY=$(cat <<'EOS'
echo "=== HOST === $(hostname -f 2>/dev/null || hostname)"
echo "=== DMI ==="
if command -v dmidecode >/dev/null 2>&1; then
  printf '%s | %s | SN:%s\n' \
    "$(dmidecode -s system-manufacturer 2>/dev/null)" \
    "$(dmidecode -s system-product-name 2>/dev/null)" \
    "$(dmidecode -s system-serial-number 2>/dev/null)"
fi
echo "=== CPU ==="
lscpu 2>/dev/null | grep -E '^(Architecture|CPU op-mode|Model name|Socket|Core|Thread|CPU\(s\)|CPU max MHz|CPU min MHz)' || true
echo "=== RAM ==="
free -h
echo "=== DISKS ==="
lsblk -d -o NAME,SIZE,TYPE,MODEL,ROTA,SERIAL 2>/dev/null || true
echo "=== PVE ==="
pveversion -v 2>/dev/null | head -3 || echo "not pve"
echo "=== pvesm ==="
pvesm status 2>/dev/null || true
echo "=== NIC PCI ==="
lspci -nn 2>/dev/null | grep -iE 'ethernet|network' || true
echo "=== vmbr0 addr ==="
ip -br addr show vmbr0 2>/dev/null || true
EOS
)

{
  echo "hardware_poll $(date -Is)"
  echo "SSH BatchMode=yes root@HOST"
  echo "========================================"
  echo ""
  echo "=== Cluster (from seed ${SEED_HOST}) ==="
  if ping -c1 -W1 "${SEED_HOST}" >/dev/null 2>&1; then
    ssh -o BatchMode=yes -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
      "root@${SEED_HOST}" "pvecm status 2>/dev/null; echo '---'; pvecm nodes 2>/dev/null; echo '---'; grep ring0_addr /etc/pve/corosync.conf 2>/dev/null" \
      || echo "(cluster query failed)"
  else
    echo "(seed unreachable)"
  fi
  echo ""

  for ip in "${HOSTS[@]}"; do
    echo ""
    echo "########################################"
    echo "# ${ip}"
    echo "########################################"
    if ! ping -c1 -W1 "${ip}" >/dev/null 2>&1; then
      echo "(no ping)"
      continue
    fi
    if ! ssh -o BatchMode=yes -o ConnectTimeout=12 -o StrictHostKeyChecking=no \
        "root@${ip}" "bash -lc $(printf '%q' "$REMOTE_BODY")" 2>/dev/null; then
      echo "(SSH failed: check keys or root login)"
    fi
  done
} | tee "${OUT}"

echo ""
echo "Wrote: ${OUT}"
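The ssh invocation above wraps `REMOTE_BODY` in `printf '%q'` so the whole multi-line script survives as a single argument to the remote `bash -lc`. A rough Python analogue of that quoting step, using `shlex.quote` (the body content here is illustrative):

```python
import shlex

# Multi-line remote body, standing in for REMOTE_BODY above
body = 'echo "=== HOST ===" $(hostname)\nfree -h'

# shlex.quote plays the role of printf %q: one safely quoted token for bash -lc
cmd = f"bash -lc {shlex.quote(body)}"

# The quoted body round-trips as a single argv element, newline and all
parts = shlex.split(cmd)
```

Without the quoting, the embedded quotes, `$(...)`, and newlines would be re-interpreted by the remote login shell instead of being passed through verbatim.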
133  scripts/verify/print-gas-runtime-env-canonical.sh  Normal file
@@ -0,0 +1,133 @@
#!/usr/bin/env bash
# Emit the non-secret gas-lane env scaffold from the active GRU transport config.
# Safe defaults:
# - per-lane caps come from config/gru-transport-active.json gasAssetFamilies[].perLaneCaps
# - on-chain canonical totalSupply() is used for outstanding/escrowed defaults when readable
# - treasuryBacked / treasuryCap default to 0 unless already provided by env
# - active gas verifier envs are intentionally left commented until the live L1 bridge is attached
#
# Usage:
#   bash scripts/verify/print-gas-runtime-env-canonical.sh
#   RPC_URL_138=http://192.168.11.211:8545 bash scripts/verify/print-gas-runtime-env-canonical.sh

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
ACTIVE_JSON="$PROJECT_ROOT/config/gru-transport-active.json"
CONTRACTS_JSON="$PROJECT_ROOT/config/smart-contracts-master.json"

# shellcheck source=/dev/null
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh" >/dev/null 2>&1 || true

need_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "Missing required command: $1" >&2
    exit 1
  }
}

need_cmd node
need_cmd cast

RPC_URL="${RPC_URL_138:-${RPC_URL_138_PUBLIC:-http://192.168.11.211:8545}}"

node_output="$(
node - <<'NODE' "$ACTIVE_JSON" "$CONTRACTS_JSON"
const fs = require('fs');
const activeFile = process.argv[2];
const contractsFile = process.argv[3];
const active = JSON.parse(fs.readFileSync(activeFile, 'utf8'));
const contracts = JSON.parse(fs.readFileSync(contractsFile, 'utf8'));

const familiesByKey = new Map();
for (const family of active.gasAssetFamilies || []) {
  if (family && family.familyKey) familiesByKey.set(String(family.familyKey), family);
}

const deployedVerifier = contracts?.chains?.['138']?.contracts?.CWAssetReserveVerifier || '';
const pairs = (active.transportPairs || [])
  .filter((pair) => pair && pair.assetClass === 'gas_native' && pair.active !== false)
  .map((pair) => {
    const family = familiesByKey.get(String(pair.familyKey || '')) || {};
    const chainId = Number(pair.destinationChainId);
    const cap = family?.perLaneCaps?.[String(chainId)] || '';
    return {
      key: pair.key,
      familyKey: String(pair.familyKey || ''),
      canonicalSymbol: String(pair.canonicalSymbol || ''),
      destinationChainId: chainId,
      maxOutstandingEnv: pair?.maxOutstanding?.env || '',
      supplyAccounting: pair?.supplyAccounting || {},
      reserveVerifierKey: pair?.reserveVerifierKey || '',
      cap,
      canonicalAddress: pair?.canonicalAddress || '',
    };
  });

process.stdout.write(JSON.stringify({ deployedVerifier, pairs }));
NODE
)"

echo "# Canonical gas-lane runtime env scaffold"
echo "# Generated from config/gru-transport-active.json plus live canonical totalSupply() on Chain 138."
echo "# Active gas verifier envs remain commented until the live L1 bridge is explicitly attached."
echo

chain138_l1_bridge="${CHAIN138_L1_BRIDGE:-${CW_L1_BRIDGE:-${CW_L1_BRIDGE_CHAIN138:-}}}"
asset_reserve_verifier="$(
  node -e 'const data=JSON.parse(process.argv[1]); process.stdout.write(data.deployedVerifier || "");' "$node_output"
)"

if [[ -n "$chain138_l1_bridge" ]]; then
  printf 'CHAIN138_L1_BRIDGE=%s\n' "$chain138_l1_bridge"
fi
if [[ -n "${CW_L1_BRIDGE_CHAIN138:-}" ]]; then
  printf 'CW_L1_BRIDGE_CHAIN138=%s\n' "$CW_L1_BRIDGE_CHAIN138"
fi
if [[ -n "$asset_reserve_verifier" ]]; then
  printf 'CW_ASSET_RESERVE_VERIFIER_DEPLOYED_CHAIN138=%s\n' "$asset_reserve_verifier"
  printf '# CW_GAS_STRICT_ESCROW_VERIFIER_CHAIN138=%s\n' "$asset_reserve_verifier"
  printf '# CW_GAS_HYBRID_CAP_VERIFIER_CHAIN138=%s\n' "$asset_reserve_verifier"
else
  printf '# CW_GAS_STRICT_ESCROW_VERIFIER_CHAIN138=\n'
  printf '# CW_GAS_HYBRID_CAP_VERIFIER_CHAIN138=\n'
fi
printf 'CW_GAS_ESCROW_VAULT_CHAIN138=%s\n' "${CW_GAS_ESCROW_VAULT_CHAIN138:-}"
printf 'CW_GAS_TREASURY_SYSTEM=%s\n' "${CW_GAS_TREASURY_SYSTEM:-}"
echo

while IFS='|' read -r canonical_address max_outstanding_env cap outstanding_env escrowed_env treasury_backed_env treasury_cap_env; do
  [[ -n "$canonical_address" ]] || continue

  total_supply="$(cast call "$canonical_address" 'totalSupply()(uint256)' --rpc-url "$RPC_URL" 2>/dev/null || printf '0')"
  [[ "$total_supply" =~ ^[0-9]+$ ]] || total_supply=0

  if [[ -n "$max_outstanding_env" && -n "$cap" ]]; then
    printf '%s=%s\n' "$max_outstanding_env" "${!max_outstanding_env:-$cap}"
  elif [[ -n "$max_outstanding_env" ]]; then
    printf '%s=%s\n' "$max_outstanding_env" "${!max_outstanding_env:-}"
  fi

  [[ -n "$outstanding_env" ]] && printf '%s=%s\n' "$outstanding_env" "${!outstanding_env:-$total_supply}"
  [[ -n "$escrowed_env" ]] && printf '%s=%s\n' "$escrowed_env" "${!escrowed_env:-$total_supply}"
  [[ -n "$treasury_backed_env" ]] && printf '%s=%s\n' "$treasury_backed_env" "${!treasury_backed_env:-0}"
  [[ -n "$treasury_cap_env" ]] && printf '%s=%s\n' "$treasury_cap_env" "${!treasury_cap_env:-0}"
  echo
done < <(
  node -e '
    const data = JSON.parse(process.argv[1]);
    for (const pair of data.pairs) {
      const supply = pair.supplyAccounting || {};
      console.log([
        pair.canonicalAddress || "",
        pair.maxOutstandingEnv || "",
        pair.cap || "",
        supply?.outstanding?.env || "",
        supply?.escrowed?.env || "",
        supply?.treasuryBacked?.env || "",
        supply?.treasuryCap?.env || "",
      ].join("|"));
    }
  ' "$node_output"
)
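The `while IFS='|' read` loop above consumes seven `|`-separated fields per lane from the node feeder. A small Python sketch of the same wire format and the env-over-config default precedence (the address and env names in the row are hypothetical):

```python
import os

# Field order matches the node feeder:
# canonicalAddress|maxOutstandingEnv|cap|outstandingEnv|escrowedEnv|treasuryBackedEnv|treasuryCapEnv
row = "0x000000000000000000000000000000000000dEaD|CW_GAS_MAX_OUT_10|5000000||||"
canonical, max_env, cap, out_env, esc_env, tb_env, tc_env = row.split("|")

# Same precedence as bash ${!var:-default}: an already-set env var wins, else the config cap
value = os.environ.get(max_env) or cap
```

Empty trailing fields stay empty strings, which is what lets the bash side skip unset env names with `[[ -n ... ]]`.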
81  scripts/verify/print-gru-v2-first-tier-pool-scaffolds.sh  Executable file
@@ -0,0 +1,81 @@
#!/usr/bin/env bash
# Print JSON scaffold snippets for missing first-tier public PMM rows from the
# GRU v2 / D3MM expansion plan without claiming they are live. For the
# canonical tracked inventory, use build-gru-v2-first-tier-pool-scaffolds.sh.
#
# Usage:
#   bash scripts/verify/print-gru-v2-first-tier-pool-scaffolds.sh
#   bash scripts/verify/print-gru-v2-first-tier-pool-scaffolds.sh 42161 8453

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
export PROJECT_ROOT

FILTER_IDS="${*:-}"

command -v node >/dev/null 2>&1 || {
  echo "[FAIL] Missing required command: node" >&2
  exit 1
}

FILTER_IDS="$FILTER_IDS" node <<'NODE'
const fs = require('fs');
const path = require('path');

const root = process.env.PROJECT_ROOT;
const filterIds = (process.env.FILTER_IDS || '')
  .split(/\s+/)
  .map((x) => x.trim())
  .filter(Boolean);

function readJson(relPath) {
  return JSON.parse(fs.readFileSync(path.join(root, relPath), 'utf8'));
}

const plan = readJson('config/gru-v2-d3mm-network-expansion-plan.json');
const deployment = readJson('cross-chain-pmm-lps/config/deployment-status.json');

function normalizePair(pair) {
  return String(pair || '').trim().toUpperCase();
}

function hasPair(pmmPools, pair) {
  const [base, quote] = normalizePair(pair).split('/');
  return pmmPools.some((entry) =>
    String(entry.base || '').trim().toUpperCase() === base &&
    String(entry.quote || '').trim().toUpperCase() === quote
  );
}

for (const wave of plan.waves || []) {
  for (const chainPlan of wave.chains || []) {
    const chainId = String(chainPlan.chainId);
    if (filterIds.length > 0 && !filterIds.includes(chainId)) continue;
    const chain = deployment.chains?.[chainId] || {};
    const pmmPools = Array.isArray(chain.pmmPools) ? chain.pmmPools : [];
    const missingPairs = (chainPlan.requiredPairs || []).filter((pair) => !hasPair(pmmPools, pair));
    if (missingPairs.length === 0) continue;

    console.log(`=== ${chainPlan.name} (${chainId}) ===`);
    console.log(`wave=${wave.key}`);
    console.log(`rolloutMode=${chainPlan.rolloutMode}`);
    console.log('missing first-tier pool scaffolds:');
    for (const pair of missingPairs) {
      const [base, quote] = pair.split('/');
      console.log(JSON.stringify({
        base,
        quote,
        poolAddress: '0x0000000000000000000000000000000000000000',
        feeBps: 3,
        k: 500000000000000000,
        role: 'public_routing',
        publicRoutingEnabled: false,
        notes: 'scaffold only; replace poolAddress and enable routing after live deploy + seed'
      }, null, 2));
    }
    console.log('');
  }
}
NODE
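The `hasPair` check above compares case-normalized base/quote symbols against the deployed pool inventory. An equivalent Python sketch with toy pool rows (symbols and pairs here are illustrative):

```python
def normalize_pair(pair):
    # Mirror the node script: trim and uppercase the whole "BASE/QUOTE" string
    return str(pair or "").strip().upper()

def has_pair(pmm_pools, pair):
    base, quote = normalize_pair(pair).split("/")
    return any(
        str(p.get("base", "")).strip().upper() == base
        and str(p.get("quote", "")).strip().upper() == quote
        for p in pmm_pools
    )

# Toy inventory: one live pool, one required pair missing
pools = [{"base": "cwusdc", "quote": "USDC"}]
required = ["CWUSDC/USDC", "CWETH/WETH"]
missing = [p for p in required if not has_pair(pools, p)]
```

Normalizing both sides means a config row written as `cwusdc/usdc` still matches an inventory entry recorded as `CWUSDC`/`USDC`.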
175  scripts/verify/print-mainnet-cwusdc-external-exit-quote.sh  Executable file
@@ -0,0 +1,175 @@
#!/usr/bin/env bash
# Print one number: hosted **gross USDC per cWUSDC** for selling `base_raw` minimal units
# on Ethereum mainnet, using either **DODO SmartTrade** or **1inch v6** quote APIs.
# Intended for `pmm-flash-push-break-even.mjs --external-exit-price-cmd` (exit becomes "live").
#
# Dependencies: curl, python3 (no jq).
#
# Usage:
#   source scripts/lib/load-project-env.sh
#   export ONEINCH_API_KEY=...  # or DODO_API_KEY / DODO_SECRET_KEY / DODO_DEVELOPER_API_KEY
#   bash scripts/verify/print-mainnet-cwusdc-external-exit-quote.sh 1inch 6187975
#   bash scripts/verify/print-mainnet-cwusdc-external-exit-quote.sh dodo 6187975
#
# Env (optional):
#   EXIT_QUOTE_ENGINE — default engine if $1 omitted: dodo | 1inch
#   EXIT_QUOTE_BASE_RAW — default size if $2 omitted (raw 6dp cWUSDC)
#   CWUSDC_MAINNET, MAINNET_USDC — token overrides
#   DODO_QUOTE_URL — default https://api.dodoex.io/route-service/developer/swap
#   DODO_SLIPPAGE — default 0.005
#   DODO_USER_ADDRESS / USER_ADDR / DEPLOYER_ADDRESS — DODO userAddr; else derived from PRIVATE_KEY
#   ONEINCH_API_URL — default https://api.1inch.dev/swap/v6.0
#
# Notes:
#   - Quotes are **not** guaranteed executable at execution time; refresh per tranche.
#   - DODO and 1inch may not route thin cW*; failures exit non-zero with stderr detail.

set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
# shellcheck disable=SC1091
source "$ROOT/scripts/lib/load-project-env.sh" 2>/dev/null || true

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[fail] missing command: $1" >&2
    exit 1
  }
}

require_cmd curl
require_cmd python3

ENGINE="${1:-${EXIT_QUOTE_ENGINE:-dodo}}"
BASE_RAW="${2:-${EXIT_QUOTE_BASE_RAW:-6187975}}"
ENGINE_LC="$(printf '%s' "$ENGINE" | tr '[:upper:]' '[:lower:]')"

CWUSDC="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
USDC="${MAINNET_USDC:-0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48}"
CHAIN_ID=1
DEC_OUT=6

if ! [[ "$BASE_RAW" =~ ^[0-9]+$ ]] || ((BASE_RAW < 1)); then
  echo "[fail] base_raw must be a positive integer (cWUSDC raw, 6 decimals)" >&2
  exit 1
fi

quote_dodo() {
  local api_key url slippage user_addr body code
  api_key="${DODO_API_KEY:-${DODO_SECRET_KEY:-${DODO_DEVELOPER_API_KEY:-}}}"
  if [[ -z "$api_key" ]]; then
    echo "[fail] DODO quote needs DODO_API_KEY, DODO_SECRET_KEY, or DODO_DEVELOPER_API_KEY" >&2
    return 1
  fi
  url="${DODO_QUOTE_URL:-https://api.dodoex.io/route-service/developer/swap}"
  slippage="${DODO_SLIPPAGE:-0.005}"
  user_addr="${DODO_USER_ADDRESS:-${USER_ADDR:-${DEPLOYER_ADDRESS:-}}}"
  if [[ -z "$user_addr" && -n "${PRIVATE_KEY:-}" ]] && command -v cast >/dev/null 2>&1; then
    user_addr="$(cast wallet address --private-key "$PRIVATE_KEY" 2>/dev/null || true)"
  fi
  user_addr="${user_addr:-0x0000000000000000000000000000000000000001}"

  code="$(
    curl -sS -G -o "$tmp" -w "%{http_code}" --max-time 45 "$url" \
      --data-urlencode "chainId=${CHAIN_ID}" \
      --data-urlencode "fromAmount=${BASE_RAW}" \
      --data-urlencode "fromTokenAddress=${CWUSDC}" \
      --data-urlencode "toTokenAddress=${USDC}" \
      --data-urlencode "apikey=${api_key}" \
      --data-urlencode "slippage=${slippage}" \
      --data-urlencode "userAddr=${user_addr}"
  )" || code="000"

  if [[ "$code" != "200" ]]; then
    echo "[fail] DODO HTTP ${code}: $(head -c 400 "$tmp" 2>/dev/null || true)" >&2
    return 1
  fi

  # Response is read by path: a <"$tmp" stdin redirect would be clobbered by the
  # heredoc that feeds python its script, leaving sys.stdin empty.
  python3 - "$BASE_RAW" "$DEC_OUT" "$tmp" <<'PY'
import json, sys, decimal
base_in = int(sys.argv[1])
dec_out = int(sys.argv[2])
with open(sys.argv[3], encoding="utf-8") as fh:
    j = json.load(fh)
data = j.get("data") if isinstance(j.get("data"), dict) else {}
msg = (data.get("msgError") or data.get("message") or "").strip()
if msg:
    print(f"[fail] DODO msgError={msg}", file=sys.stderr)
    sys.exit(1)
raw = data.get("resAmount") or data.get("toTokenAmount") or j.get("resAmount")
if raw is None or raw == "":
    print("[fail] DODO missing resAmount / toTokenAmount", file=sys.stderr)
    sys.exit(1)
s = str(raw).strip()
try:
    if "." in s:
        out_raw = int((decimal.Decimal(s) * (decimal.Decimal(10) ** dec_out)).to_integral_value(rounding=decimal.ROUND_DOWN))
    else:
        out_raw = int(s)
except Exception as e:
    print(f"[fail] DODO parse resAmount={raw!r}: {e}", file=sys.stderr)
    sys.exit(1)
if out_raw <= 0 or base_in <= 0:
    print("[fail] DODO non-positive amounts", file=sys.stderr)
    sys.exit(1)
ratio = out_raw / base_in
out = f"{ratio:.12f}".rstrip("0").rstrip(".")
print(out or "0")
PY
}

quote_oneinch() {
  local key base url code
  key="${ONEINCH_API_KEY:-}"
  if [[ -z "$key" ]]; then
    echo "[fail] 1inch quote needs ONEINCH_API_KEY (see .env.master.example)" >&2
    return 1
  fi
  base="${ONEINCH_API_URL:-https://api.1inch.dev/swap/v6.0}"
  base="${base%/}"
  url="${base}/${CHAIN_ID}/quote?src=${CWUSDC}&dst=${USDC}&amount=${BASE_RAW}"

  code="$(
    curl -sS -o "$tmp" -w "%{http_code}" --max-time 45 \
      -H "Authorization: Bearer ${key}" \
      -H "Accept: application/json" \
      "$url"
  )" || code="000"

  if [[ "$code" != "200" ]]; then
    echo "[fail] 1inch HTTP ${code}: $(head -c 400 "$tmp" 2>/dev/null || true)" >&2
    return 1
  fi

  # Response is read by path for the same reason as quote_dodo (heredoc owns stdin).
  python3 - "$BASE_RAW" "$tmp" <<'PY'
import json, sys
base_in = int(sys.argv[1])
with open(sys.argv[2], encoding="utf-8") as fh:
    j = json.load(fh)
dst = j.get("dstAmount") or j.get("toAmount") or j.get("toTokenAmount")
if not dst:
    print("[fail] 1inch missing dstAmount", file=sys.stderr)
    sys.exit(1)
out_raw = int(str(dst).strip())
if out_raw <= 0:
    print("[fail] 1inch non-positive dstAmount", file=sys.stderr)
    sys.exit(1)
ratio = out_raw / base_in
out = f"{ratio:.12f}".rstrip("0").rstrip(".")
print(out or "0")
PY
}

tmp="$(mktemp)"
trap 'rm -f "$tmp"' EXIT

case "$ENGINE_LC" in
  dodo)
    quote_dodo
    ;;
  1inch | oneinch)
    quote_oneinch
    ;;
  *)
    echo "[fail] engine must be dodo or 1inch (got: $ENGINE)" >&2
    exit 2
    ;;
esac
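Both engines print a single decimal through the same trim rule: twelve places, trailing zeros stripped, and never a bare `.` or empty string. A standalone sketch of that formatting helper:

```python
def fmt_price(out_raw, base_in):
    # Single-number stdout contract: fixed 12 places, then trim trailing
    # zeros and a dangling dot; fall back to "0" so output is never empty.
    ratio = out_raw / base_in
    s = f"{ratio:.12f}".rstrip("0").rstrip(".")
    return s or "0"
```

Trimming in two steps matters: stripping zeros first turns `1.000000000000` into `1.`, and the second `rstrip(".")` leaves a clean `1` for whole-number ratios.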
78  scripts/verify/print-mainnet-cwusdc-usdc-pmm-sellbase-implied-price.sh  Executable file
@@ -0,0 +1,78 @@
#!/usr/bin/env bash
# Print a single numeric "live" gross USDC per cWUSDC implied by the public Mainnet
# cWUSDC/USDC DODO PMM vault at a given base sell size, using the same reserve +
# _LP_FEE_RATE_ fallback as run-mainnet-public-dodo-cw-swap.sh when querySellBase reverts.
#
# This is on-chain state (not a CEX/DEX aggregator). Use for tooling that requires
# --external-exit-price-cmd to classify the exit as live; for real unwinds on other
# venues, replace this with a quote that matches your actual exit path.
#
# Usage:
#   source scripts/lib/load-project-env.sh
#   bash scripts/verify/print-mainnet-cwusdc-usdc-pmm-sellbase-implied-price.sh [base_raw] [pool_address]
#
# Pool resolution: explicit pool_address arg > env PMM_CWUSDC_USDC_IMPLIED_PRICE_POOL > canonical public vault.
#
# Example (first-ladder-ish base out raw):
#   bash scripts/verify/print-mainnet-cwusdc-usdc-pmm-sellbase-implied-price.sh 6187975

set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
# shellcheck disable=SC1091
source "$ROOT/scripts/lib/load-project-env.sh" 2>/dev/null || true

# Canonical public bootstrap pool (see cross-chain-pmm-lps/config/deployment-status.json).
# Do not use POOL_CWUSDC_USDC_MAINNET here — env may point at a different rail (e.g. TRUU).
DEFAULT_PUBLIC_CWUSDC_USDC_POOL=0x69776fc607e9edA8042e320e7e43f54d06c68f0E
BASE_RAW="${1:-6187975}"
if [[ -n "${2:-}" ]]; then
  if [[ "$2" =~ ^0x[a-fA-F0-9]{40}$ ]]; then
    POOL="$2"
  else
    echo "[fail] pool_address must be a 40-hex 0x-prefixed address" >&2
    exit 1
  fi
else
  POOL="${PMM_CWUSDC_USDC_IMPLIED_PRICE_POOL:-$DEFAULT_PUBLIC_CWUSDC_USDC_POOL}"
fi
RPC="${ETHEREUM_MAINNET_RPC:-}"

if [[ -z "$RPC" ]]; then
  echo "[fail] ETHEREUM_MAINNET_RPC is required" >&2
  exit 1
fi

if ! command -v cast >/dev/null 2>&1; then
  echo "[fail] cast is required" >&2
  exit 1
fi

if ! [[ "$BASE_RAW" =~ ^[0-9]+$ ]] || ((BASE_RAW < 1)); then
  echo "[fail] base_raw must be a positive integer (raw 6dp units)" >&2
  exit 1
fi

reserve_line="$(cast call "$POOL" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC")"
base_reserve="$(printf '%s\n' "$reserve_line" | sed -n '1p' | awk '{print $1}' | tr -d '[],')"
quote_reserve="$(printf '%s\n' "$reserve_line" | sed -n '2p' | awk '{print $1}' | tr -d '[],')"
fee_rate="$(cast call "$POOL" '_LP_FEE_RATE_()(uint256)' --rpc-url "$RPC" | awk '{print $1}' | tr -d '[],')"

exec python3 - "$BASE_RAW" "$base_reserve" "$quote_reserve" "$fee_rate" <<'PY'
import sys
base_in = int(sys.argv[1])
B = int(sys.argv[2])
Q = int(sys.argv[3])
fee = int(sys.argv[4])
if base_in <= 0 or B <= 0 or Q <= 0 or fee < 0 or fee > 10000:
    print("invalid inputs", file=sys.stderr)
    sys.exit(1)
net = base_in * max(0, 10000 - fee) // 10000
if net <= 0:
    print("net base after fee is zero", file=sys.stderr)
    sys.exit(1)
quote_out = (net * Q) // (B + net)
ratio = quote_out / base_in
out = f"{ratio:.12f}".rstrip("0").rstrip(".")
print(out or "0")
PY
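The heredoc's math can be sanity-checked offline: the fee (in bps) comes off the base leg, then quote out follows the reserve ratio. A sketch with hypothetical balanced reserves:

```python
def implied_gross_price(base_in, base_reserve, quote_reserve, fee_bps):
    # Same integer math as the heredoc: fee off the base leg first,
    # then quote out proportional to the post-fee reserve ratio.
    net = base_in * max(0, 10000 - fee_bps) // 10000
    if net <= 0:
        raise ValueError("net base after fee is zero")
    quote_out = (net * quote_reserve) // (base_reserve + net)
    return quote_out / base_in

# Toy reserves (raw 6dp): a balanced stable pool at 30 bps should
# imply a gross price just under 1.0 for a small-ish sell.
p = implied_gross_price(6_187_975, 1_000_000_000, 1_000_000_000, 30)
```

The integer `//` divisions match the on-chain-style rounding, so the sketch underestimates output the same way the script does rather than flattering the price.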
50  scripts/verify/probe-pmm-pool-dvm-chain138.sh  Executable file
@@ -0,0 +1,50 @@
#!/usr/bin/env bash
# Probe whether a Chain 138 address behaves like an official DODO DVM (query + sell selectors).
#
# XAU "PMM" pools in ADDRESS_MATRIX often use a different on-chain implementation: querySellBase
# reverts even when ERC-20 balances exist. Stable pools behind minimal proxies delegate to DVM.
#
# Usage:
#   ./scripts/verify/probe-pmm-pool-dvm-chain138.sh 0x9e89bAe009adf128782E19e8341996c596ac40dC [RPC_URL]
# Env: FROM (default deployer) for query trader address.

set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
POOL="${1:?pool address}"
RPC="${2:-${RPC_URL_138:-https://rpc-http-pub.d-bis.org}}"
FROM="${FROM:-0x4A666F96fC8764181194447A7dFdb7d471b301C8}"
AMT=1000000

code=$(curl -s "$RPC" -H 'Content-Type: application/json' \
  -d "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getCode\",\"params\":[\"$POOL\",\"latest\"],\"id\":1}" \
  | jq -r '.result')
clen=$(( (${#code} - 2) / 2 ))
echo "Pool: $POOL"
echo "RPC: $RPC"
echo "code len: $clen bytes"

ok=0
if cast call "$POOL" "querySellBase(address,uint256)(uint256,uint256)" "$FROM" "$AMT" --rpc-url "$RPC" >/dev/null 2>&1; then
  echo "querySellBase($AMT): OK"
  ok=1
else
  echo "querySellBase($AMT): REVERT (not DVM-compatible for this probe)"
fi

if cast call "$POOL" "querySellQuote(address,uint256)(uint256,uint256)" "$FROM" "$AMT" --rpc-url "$RPC" >/dev/null 2>&1; then
  echo "querySellQuote($AMT): OK"
  ok=1
else
  echo "querySellQuote($AMT): REVERT"
fi

echo "sellBase selector: $(cast sig 'sellBase(address)')"
echo "sellQuote selector: $(cast sig 'sellQuote(address)')"

if [[ "$ok" -eq 1 ]]; then
  echo "Verdict: DVM-style quotes succeed — swap-random-registered-pmm-pools-chain138.sh can target this pool."
  exit 0
fi
echo "Verdict: Quotes failed — use a different swap path or verify pool type on Blockscout."
exit 1
66  scripts/verify/probe-uniswap-v3-cwusdc-usdc-mainnet.sh  Executable file
@@ -0,0 +1,66 @@
#!/usr/bin/env bash
set -euo pipefail

# Read-only: probe Uniswap V3 factory for cWUSDC/USDC pools on Ethereum mainnet.
# token0/token1 must be sorted: token0 < token1.
#
# Usage:
#   source scripts/lib/load-project-env.sh
#   bash scripts/verify/probe-uniswap-v3-cwusdc-usdc-mainnet.sh

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROXMOX_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"
# shellcheck disable=SC1091
source "${PROXMOX_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true

require_cmd() {
  command -v "$1" >/dev/null 2>&1 || {
    echo "[fail] missing required command: $1" >&2
    exit 1
  }
}

require_cmd cast

RPC="${ETHEREUM_MAINNET_RPC:-}"
if [[ -z "$RPC" ]]; then
  echo "[fail] ETHEREUM_MAINNET_RPC is required" >&2
  exit 1
fi

FACTORY=0x1F98431c8aD98523631AE4a59f267346ea31F984
CW="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
USDC=0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48

# Sort token0 < token1
if [[ "${CW,,}" < "${USDC,,}" ]]; then
  T0="$CW"
  T1="$USDC"
else
  T0="$USDC"
  T1="$CW"
fi

echo "=== Uniswap V3 getPool (mainnet) ==="
|
||||
echo "factory=$FACTORY token0=$T0 token1=$T1"
|
||||
echo "(cWUSDC raw=$CW USDC=$USDC)"
|
||||
echo
|
||||
|
||||
found=0
|
||||
for fee in 100 500 3000 10000; do
|
||||
p="$(cast call "$FACTORY" "getPool(address,address,uint24)(address)" "$T0" "$T1" "$fee" --rpc-url "$RPC" | awk '{print $1}')"
|
||||
if [[ "${p,,}" != "0x0000000000000000000000000000000000000000" ]]; then
|
||||
echo "fee_tier_u24=$fee pool=$p -> UNWIND_MODE=0 UNWIND_V3_FEE_U24=$fee"
|
||||
found=1
|
||||
else
|
||||
echo "fee_tier_u24=$fee pool=(none)"
|
||||
fi
|
||||
done
|
||||
|
||||
echo
|
||||
if (( found == 0 )); then
|
||||
echo "No standard-fee pool found for this pair. Use UNWIND_MODE=2 with UNWIND_V3_PATH_HEX (packed exactInput path)"
|
||||
echo "or UNWIND_MODE=1 with a deep DODO PMM pool (UNWIND_DODO_POOL). See RunMainnetAaveCwusdcUsdcQuotePushOnce.s.sol."
|
||||
else
|
||||
echo "Pick a non-zero pool fee above for quote-push unwind (UniswapV3ExternalUnwinder exactInputSingle)."
|
||||
fi
|
||||
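The token-ordering step above relies on lexicographic comparison of lowercased hex addresses; for equal-length `0x…` strings this matches numeric ordering, which is what Uniswap V3's token0 < token1 rule requires. A minimal Python sketch of the same logic (addresses copied from the script):

```python
def sort_pair(a: str, b: str) -> tuple[str, str]:
    # Uniswap V3 stores pairs as (token0, token1) with token0 < token1
    # by address value; lowercased hex string comparison is equivalent
    # because both addresses have the same length.
    return (a, b) if a.lower() < b.lower() else (b, a)

CW = "0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a"
USDC = "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
t0, t1 = sort_pair(CW, USDC)
```

Here cWUSDC sorts first (`0x2d…` < `0xa0…`), so it is token0 in every `getPool` probe.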
@@ -1,7 +1,8 @@
|
||||
#!/usr/bin/env bash
|
||||
# Emit recommended .env lines for Chain 138 (canonical source of truth).
|
||||
# Use to reconcile smom-dbis-138/.env: diff this output against .env and ensure one entry per variable.
|
||||
# Does not read or modify .env (no secrets). See docs/11-references/CONTRACT_ADDRESSES_REFERENCE.md.
|
||||
# Does not read or modify .env (no secrets). CCIP_ROUTER / CCIPWETH9_BRIDGE_CHAIN138 match
|
||||
# config/smart-contracts-master.json (relay + canonical WETH9 bridge); see CONTRACT_ADDRESSES_REFERENCE.md.
|
||||
# Usage: ./scripts/verify/reconcile-env-canonical.sh [--print]
|
||||
|
||||
set -euo pipefail
|
||||
@@ -18,7 +19,7 @@ fi
|
||||
PRINT="${1:-}"
|
||||
|
||||
cat << 'CANONICAL_EOF'
|
||||
# Canonical Chain 138 contract addresses (source: CONTRACT_ADDRESSES_REFERENCE.md)
|
||||
# Canonical Chain 138 contract addresses (CONTRACT_ADDRESSES_REFERENCE.md + smart-contracts-master.json CCIP rows)
|
||||
# Reconcile smom-dbis-138/.env: one entry per variable; remove duplicates.
|
||||
# RPC / PRIVATE_KEY / other secrets: set separately.
|
||||
|
||||
@@ -32,11 +33,11 @@ FEE_COLLECTOR=0xF78246eB94c6CB14018E507E60661314E5f4C53f
|
||||
DEBT_REGISTRY=0x95BC4A997c0670d5DAC64d55cDf3769B53B63C28
|
||||
POLICY_MANAGER=0x0C4FD27018130A00762a802f91a72D6a64a60F14
|
||||
TOKEN_IMPLEMENTATION=0x0059e237973179146237aB49f1322E8197c22b21
|
||||
CCIPWETH9_BRIDGE_CHAIN138=0x9cba0D04Ae5f6f16e3C599025aB97a05c4A593d5
|
||||
CCIPWETH9_BRIDGE_CHAIN138=0xcacfd227A040002e49e2e01626363071324f820a
|
||||
CCIPWETH10_BRIDGE_CHAIN138=0xe0E93247376aa097dB308B92e6Ba36bA015535D0
|
||||
LINK_TOKEN=0xb7721dD53A8c629d9f1Ba31a5819AFe250002b03
|
||||
CCIP_FEE_TOKEN=0xb7721dD53A8c629d9f1Ba31a5819AFe250002b03
|
||||
CCIP_ROUTER=0x89EC6574eeAC72Ed1b93DfCa4DB43547C8367FF0
|
||||
CCIP_ROUTER=0x42DAb7b888Dd382bD5Adcf9E038dBF1fD03b4817
|
||||
CCIP_SENDER=0x105F8A15b819948a89153505762444Ee9f324684
|
||||
UNIVERSAL_ASSET_REGISTRY=0xAEE4b7fBe82E1F8295951584CBc772b8BBD68575
|
||||
GOVERNANCE_CONTROLLER=0xA6891D5229f2181a34D4FF1B515c3Aa37dd90E0e
|
||||
@@ -51,6 +52,7 @@ WITHDRAWAL_ESCROW=0xe77cb26eA300e2f5304b461b0EC94c8AD6A7E46D
|
||||
CANONICAL_EOF
|
||||
|
||||
if [[ "$PRINT" = "--print" ]]; then
|
||||
echo ""
|
||||
echo "Reconcile: ensure smom-dbis-138/.env has one entry per variable above and matches CONTRACT_ADDRESSES_REFERENCE.md."
|
||||
echo ""
|
||||
echo "Reconcile: ensure smom-dbis-138/.env has one entry per variable above and matches CONTRACT_ADDRESSES_REFERENCE.md."
|
||||
echo "For gas-lane runtime scaffolding, also run: bash scripts/verify/print-gas-runtime-env-canonical.sh"
|
||||
fi
|
||||
|
||||
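The reconcile contract above ("one entry per variable; remove duplicates") can be checked mechanically. A hypothetical helper, not part of the repo, that reports variables assigned more than once in `.env`-style text:

```python
from collections import Counter

def duplicate_vars(env_text: str) -> list[str]:
    # Count KEY=VALUE assignments, ignoring comments and blank lines;
    # any key seen more than once violates the one-entry-per-variable rule.
    counts = Counter(
        line.split("=", 1)[0].strip()
        for line in env_text.splitlines()
        if line.strip() and not line.lstrip().startswith("#") and "=" in line
    )
    return sorted(k for k, n in counts.items() if n > 1)

sample = """\
# comment
CCIP_ROUTER=0x89EC6574eeAC72Ed1b93DfCa4DB43547C8367FF0
LINK_TOKEN=0xb7721dD53A8c629d9f1Ba31a5819AFe250002b03
CCIP_ROUTER=0x42DAb7b888Dd382bD5Adcf9E038dBF1fD03b4817
"""
dups = duplicate_vars(sample)
```

Diffing the script's canonical output against `.env` plus a pass like this catches the exact stale-address duplication the reconcile comments warn about.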
12
scripts/verify/refresh-proxmox-host-key-r630-04.sh
Executable file
@@ -0,0 +1,12 @@
|
||||
#!/usr/bin/env bash
|
||||
# Remove stale OpenSSH host key for r630-04 after reinstall or key rotation.
|
||||
# Confirm the new host fingerprint out-of-band before accepting on next connect.
|
||||
set -euo pipefail
|
||||
HOST="${1:-192.168.11.14}"
|
||||
KH="${SSH_KNOWN_HOSTS:-$HOME/.ssh/known_hosts}"
|
||||
if [[ ! -f "$KH" ]]; then
|
||||
echo "No known_hosts at $KH" >&2
|
||||
exit 1
|
||||
fi
|
||||
ssh-keygen -f "$KH" -R "$HOST" >/dev/null 2>&1 || true
|
||||
echo "Removed $HOST from $KH — next: ssh -o StrictHostKeyChecking=accept-new root@$HOST"
|
||||
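For plain (unhashed) `known_hosts` entries, the effect of `ssh-keygen -R` can be approximated by dropping lines whose host field lists the target; hashed entries (`|1|…`) need `ssh-keygen` itself. A sketch with a hypothetical helper name:

```python
def drop_host(known_hosts_text: str, host: str) -> str:
    # Keep every line whose comma-separated host field does not list `host`.
    # Hashed entries (starting with "|1|") cannot be matched this way.
    kept = []
    for line in known_hosts_text.splitlines():
        fields = line.split()
        hosts = fields[0].split(",") if fields else []
        if host in hosts:
            continue
        kept.append(line)
    return "\n".join(kept)

sample = (
    "192.168.11.14 ssh-ed25519 AAAAC3Nza...old\n"
    "192.168.11.140,explorer ssh-ed25519 AAAAC3Nza...keep\n"
)
cleaned = drop_host(sample, "192.168.11.14")
```

Note the exact-field match keeps `192.168.11.140` even though `192.168.11.14` is a string prefix of it, matching `ssh-keygen -R` behavior.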
146
scripts/verify/report-mainnet-deployer-liquidity-and-routes.sh
Executable file
@@ -0,0 +1,146 @@
|
||||
#!/usr/bin/env bash
|
||||
# One-shot report: Mainnet deployer wallet + deep USDC/USDT liquidity venues for swap/unwind planning.
|
||||
#
|
||||
# Read-only. Requires ETHEREUM_MAINNET_RPC and PRIVATE_KEY (for deployer address only).
|
||||
#
|
||||
# Usage:
|
||||
# source scripts/lib/load-project-env.sh
|
||||
# bash scripts/verify/report-mainnet-deployer-liquidity-and-routes.sh
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
|
||||
# shellcheck disable=SC1091
|
||||
source "$ROOT/scripts/lib/load-project-env.sh" 2>/dev/null || true
|
||||
# shellcheck disable=SC1091
|
||||
source "$ROOT/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1 || true
|
||||
|
||||
require_cmd() {
|
||||
command -v "$1" >/dev/null 2>&1 || {
|
||||
echo "[fail] missing: $1" >&2
|
||||
exit 1
|
||||
}
|
||||
}
|
||||
|
||||
require_cmd cast
|
||||
require_cmd python3
|
||||
require_cmd jq
|
||||
|
||||
RPC="${ETHEREUM_MAINNET_RPC:-}"
|
||||
if [[ -z "$RPC" || -z "${PRIVATE_KEY:-}" ]]; then
|
||||
echo "[fail] ETHEREUM_MAINNET_RPC and PRIVATE_KEY are required" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
DEPLOYER="$(cast wallet address --private-key "$PRIVATE_KEY")"
|
||||
USDC=0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48
|
||||
USDT=0xdAC17F958D2ee523a2206206994597C13D831ec7
|
||||
CWUSDC="${CWUSDC_MAINNET:-0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a}"
|
||||
CWUSDT="${CWUSDT_MAINNET:-0xaF5017d0163ecb99D9B5D94e3b4D7b09Af44D8AE}"
|
||||
INTEGRATION="${DODO_PMM_INTEGRATION_MAINNET:-}"
|
||||
BAL_VAULT=0xBA12222222228d8Ba445958a75a0704d566BF2C8
|
||||
AAVE_POOL=0x87870Bca3F3fD6335C3F4ce8392D69350B4fA4E2
|
||||
THREEPOOL=0xbEbc44782C7dB0a1A60Cb6fe97d0b483032FF1C7
|
||||
UNI_FACTORY=0x1F98431c8aD98523631AE4a59f267346ea31F984
|
||||
|
||||
fmt6() {
|
||||
python3 - "$1" <<'PY'
|
||||
import sys
|
||||
print(f"{int(sys.argv[1]) / 1_000_000:.6f}")
|
||||
PY
|
||||
}
|
||||
|
||||
fmt6_or() {
|
||||
if [[ "$1" =~ ^[0-9]+$ ]]; then
|
||||
fmt6 "$1"
|
||||
else
|
||||
echo "?"
|
||||
fi
|
||||
}
|
||||
|
||||
aave_available_underlying() {
|
||||
local stable="$1"
|
||||
local rd atoken bal
|
||||
rd="$(cast call "$AAVE_POOL" \
|
||||
'getReserveData(address)((uint256,uint128,uint128,uint128,uint128,uint128,uint40,address,address,address,address,uint8))' \
|
||||
"$stable" --rpc-url "$RPC" 2>/dev/null || true)"
|
||||
atoken="$(printf '%s\n' "$rd" | grep -oE '0x[a-fA-F0-9]{40}' | sed -n '2p')"
|
||||
if [[ -z "$atoken" ]]; then
|
||||
echo "?"
|
||||
return
|
||||
fi
|
||||
bal="$(cast call "$stable" 'balanceOf(address)(uint256)' "$atoken" --rpc-url "$RPC" 2>/dev/null | awk '{print $1}')"
|
||||
printf '%s' "${bal:-?}"
|
||||
}
|
||||
|
||||
echo "=== Mainnet deployer wallet ==="
|
||||
echo "address=$DEPLOYER"
|
||||
echo "eth=$(cast balance "$DEPLOYER" --ether --rpc-url "$RPC")"
|
||||
bal_u="$(cast call "$USDC" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC" | awk '{print $1}')"
|
||||
bal_t="$(cast call "$USDT" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC" | awk '{print $1}')"
|
||||
bal_cw="$(cast call "$CWUSDC" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC" | awk '{print $1}')"
|
||||
bal_wt="$(cast call "$CWUSDT" 'balanceOf(address)(uint256)' "$DEPLOYER" --rpc-url "$RPC" | awk '{print $1}')"
|
||||
echo "USDC_raw=$bal_u human=$(fmt6 "$bal_u")"
|
||||
echo "USDT_raw=$bal_t human=$(fmt6 "$bal_t")"
|
||||
echo "cWUSDC_raw=$bal_cw human=$(fmt6 "$bal_cw")"
|
||||
echo "cWUSDT_raw=$bal_wt human=$(fmt6 "$bal_wt")"
|
||||
|
||||
if [[ -n "$INTEGRATION" ]]; then
|
||||
usdc_all="$(cast call "$USDC" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC" | awk '{print $1}')"
|
||||
usdt_all="$(cast call "$USDT" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC" | awk '{print $1}')"
|
||||
cwa_all="$(cast call "$CWUSDC" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC" | awk '{print $1}')"
|
||||
cwt_all="$(cast call "$CWUSDT" 'allowance(address,address)(uint256)' "$DEPLOYER" "$INTEGRATION" --rpc-url "$RPC" | awk '{print $1}')"
|
||||
echo "allowance_to_DODO_integration USDC_raw=$usdc_all USDT_raw=$usdt_all cWUSDC_raw=$cwa_all cWUSDT_raw=$cwt_all"
|
||||
fi
|
||||
|
||||
echo
|
||||
echo "=== Deep stable liquidity (USDC / USDT) — public venues ==="
|
||||
bal_usdc="$(cast call "$USDC" 'balanceOf(address)(uint256)' "$BAL_VAULT" --rpc-url "$RPC" | awk '{print $1}')"
|
||||
bal_usdt="$(cast call "$USDT" 'balanceOf(address)(uint256)' "$BAL_VAULT" --rpc-url "$RPC" | awk '{print $1}')"
|
||||
echo "Balancer_vault=$BAL_VAULT USDC_raw=$bal_usdc (~$(fmt6 "$bal_usdc") USDC) USDT_raw=$bal_usdt (~$(fmt6 "$bal_usdt") USDT) flash_fee_bps=0 (confirm getFlashLoanFeePercentage on ProtocolFeesCollector before sizing)"
|
||||
|
||||
aav_u="$(aave_available_underlying "$USDC")"
|
||||
aav_t="$(aave_available_underlying "$USDT")"
|
||||
aave_fee="$(cast call "$AAVE_POOL" 'FLASHLOAN_PREMIUM_TOTAL()(uint128)' --rpc-url "$RPC" 2>/dev/null | awk '{print $1}' || echo '?')"
|
||||
echo "Aave_V3_pool=$AAVE_POOL FLASHLOAN_PREMIUM_TOTAL_bps=$aave_fee USDC_available_raw=$aav_u (~$(fmt6_or "$aav_u") USDC) USDT_available_raw=$aav_t (~$(fmt6_or "$aav_t") USDT)"
|
||||
|
||||
b0="$(cast call "$THREEPOOL" 'balances(uint256)(uint256)' 0 --rpc-url "$RPC" | awk '{print $1}')"
|
||||
b1="$(cast call "$THREEPOOL" 'balances(uint256)(uint256)' 1 --rpc-url "$RPC" | awk '{print $1}')"
|
||||
b2="$(cast call "$THREEPOOL" 'balances(uint256)(uint256)' 2 --rpc-url "$RPC" | awk '{print $1}')"
|
||||
echo "Curve_3pool=$THREEPOOL balances_raw index0_DAI=$b0 index1_USDC=$b1 index2_USDT=$b2 (~$(fmt6 "$b1") USDC, ~$(fmt6 "$b2") USDT at 6dp)"
|
||||
|
||||
p01="$(cast call "$UNI_FACTORY" 'getPool(address,address,uint24)(address)' "$USDC" "$USDT" 100 --rpc-url "$RPC" | awk '{print $1}')"
|
||||
p05="$(cast call "$UNI_FACTORY" 'getPool(address,address,uint24)(address)' "$USDC" "$USDT" 500 --rpc-url "$RPC" | awk '{print $1}')"
|
||||
liq01="$(cast call "$p01" 'liquidity()(uint128)' --rpc-url "$RPC" 2>/dev/null | awk '{print $1}' || echo 0)"
|
||||
liq05="$(cast call "$p05" 'liquidity()(uint128)' --rpc-url "$RPC" 2>/dev/null | awk '{print $1}' || echo 0)"
|
||||
echo "UniswapV3_USDC_USDT factory=$UNI_FACTORY pool_0.01%=$p01 liquidity=$liq01 pool_0.05%=$p05 liquidity=$liq05"
|
||||
|
||||
echo
|
||||
echo "=== DODO PMM cW* rails (integration swap / unwind surface) ==="
|
||||
STATUS_JSON="$ROOT/cross-chain-pmm-lps/config/deployment-status.json"
|
||||
if [[ -f "$STATUS_JSON" ]]; then
|
||||
jq -r '.chains["1"].pmmPools[]? | select(.base == "cWUSDT" or .base == "cWUSDC") | "\(.base)/\(.quote) \(.poolAddress)"' "$STATUS_JSON" | while read -r line; do
|
||||
pool="${line##* }"
|
||||
pair="${line% 0x*}"
|
||||
[[ "$pool" =~ ^0x[0-9a-fA-F]{40}$ ]] || continue
|
||||
r="$(cast call "$pool" 'getVaultReserve()(uint256,uint256)' --rpc-url "$RPC" 2>/dev/null || true)"
|
||||
b="$(printf '%s\n' "$r" | sed -n '1p' | awk '{print $1}')"
|
||||
q="$(printf '%s\n' "$r" | sed -n '2p' | awk '{print $1}')"
|
||||
echo "$pair base_raw=$b quote_raw=$q (~$(fmt6 "$b") base / ~$(fmt6 "$q") quote if 6dp)"
|
||||
done
|
||||
else
|
||||
echo "[warn] missing $STATUS_JSON"
|
||||
fi
|
||||
|
||||
echo
|
||||
echo "=== Hosted aggregator (optional) ==="
|
||||
if [[ -n "${ONEINCH_API_KEY:-}" ]]; then
|
||||
echo "ONEINCH_API_KEY is set — run: bash scripts/verify/print-mainnet-cwusdc-external-exit-quote.sh 1inch <base_raw>"
|
||||
else
|
||||
echo "ONEINCH_API_KEY unset — DODO/1inch hosted unwind quotes need keys (see .env.master.example)"
|
||||
fi
|
||||
|
||||
echo
|
||||
echo "=== Related ==="
|
||||
echo "Full checklist: bash scripts/verify/check-public-pmm-dry-run-readiness.sh"
|
||||
echo "USDT<->USDC path recipes: bash scripts/verify/plan-mainnet-usdt-usdc-via-cw-paths.sh"
|
||||
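The `fmt6` helpers above convert raw 6-decimal token amounts (USDC, USDT, and the cW* tokens all use 6 decimals here) into human units. The same conversion in plain Python:

```python
def fmt6(raw) -> str:
    # Raw ERC-20 amount at 6 decimals -> human-readable string.
    return f"{int(raw) / 1_000_000:.6f}"

def fmt6_or(raw) -> str:
    # Tolerant variant: non-numeric probe results (e.g. "?") pass through,
    # mirroring the script's fmt6_or guard for failed RPC reads.
    return fmt6(raw) if str(raw).isdigit() else "?"
```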
@@ -20,6 +20,8 @@ fi
|
||||
[[ "${DEBUG:-0}" = "1" ]] && set -x
|
||||
|
||||
# Pre-flight
|
||||
# If forge reports a missing cache/solidity-files-cache.json during verify, run this once (non-destructive):
|
||||
# cd smom-dbis-138 && forge build
|
||||
command -v forge &>/dev/null || { echo "ERROR: forge not found (install Foundry)"; exit 1; }
|
||||
command -v node &>/dev/null || { echo "ERROR: node not found (required for verification proxy)"; exit 1; }
|
||||
SMOM="${SMOM_DIR:-${PROJECT_ROOT}/smom-dbis-138}"
|
||||
|
||||
255
scripts/verify/run-mainnet-cwusdc-usdc-ladder-steps-1-3.sh
Executable file
@@ -0,0 +1,255 @@
|
||||
#!/usr/bin/env bash
|
||||
# Operator helper for the current Mainnet cWUSDC/USDC ladder.
|
||||
#
|
||||
# This script does not execute live flash swaps. It:
|
||||
# - checks wallet + pool preflight
|
||||
# - prints the staged matched top-up command
|
||||
# - runs the dry-run calculator for steps 1-3
|
||||
# - verifies live reserves before each step and tells the operator what the
|
||||
# expected matched state should be after each rebalance
|
||||
#
|
||||
# Usage:
|
||||
# bash scripts/verify/run-mainnet-cwusdc-usdc-ladder-steps-1-3.sh
|
||||
#
|
||||
# Optional env:
|
||||
# PMM_FLASH_EXIT_PRICE_CMD — shell command that prints one numeric gross USDC per cWUSDC for
|
||||
# `--external-exit-price-cmd` (default: printf 1.12 = planning assumption).
|
||||
# For on-chain pool-implied VWAP at a fixed raw base size (diagnostic; often fails repay vs flash):
|
||||
# PMM_FLASH_EXIT_PRICE_CMD="bash ${PWD}/scripts/verify/print-mainnet-cwusdc-usdc-pmm-sellbase-implied-price.sh 6187975"
|
||||
# For DODO or 1inch hosted quote (needs ONEINCH_API_KEY or DODO_API_KEY in env):
|
||||
# PMM_FLASH_EXIT_PRICE_CMD="bash ${PWD}/scripts/verify/print-mainnet-cwusdc-external-exit-quote.sh 1inch 6187975"
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
|
||||
PROXMOX_PROJECT_ROOT="$PROJECT_ROOT"
|
||||
|
||||
source "$PROJECT_ROOT/scripts/lib/load-project-env.sh" >/dev/null 2>&1
|
||||
source "$PROJECT_ROOT/smom-dbis-138/scripts/load-env.sh" >/dev/null 2>&1
|
||||
PROJECT_ROOT="$PROXMOX_PROJECT_ROOT"
|
||||
|
||||
require_cmd() {
|
||||
command -v "$1" >/dev/null 2>&1 || {
|
||||
echo "[FAIL] Missing required command: $1" >&2
|
||||
exit 1
|
||||
}
|
||||
}
|
||||
|
||||
require_cmd cast
|
||||
require_cmd node
|
||||
|
||||
if [[ -z "${ETHEREUM_MAINNET_RPC:-}" || -z "${PRIVATE_KEY:-}" ]]; then
|
||||
echo "[FAIL] ETHEREUM_MAINNET_RPC and PRIVATE_KEY are required" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
POOL="0x69776fc607e9edA8042e320e7e43f54d06c68f0E"
|
||||
USDC="0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"
|
||||
CWUSDC="0x2de5F116bFcE3d0f922d9C8351e0c5Fc24b9284a"
|
||||
WALLET="$(cast wallet address --private-key "$PRIVATE_KEY")"
|
||||
|
||||
TOPUP_RAW="136763253"
|
||||
TOPUP_HUMAN="136.763253"
|
||||
LEGACY_MATCHED_RAW="120000000"
|
||||
START_MATCHED_RAW="$((LEGACY_MATCHED_RAW + TOPUP_RAW))"
|
||||
|
||||
GAS_GWEI="${GAS_GWEI_OVERRIDE:-0.264434700}"
|
||||
NATIVE_PRICE="${NATIVE_PRICE_OVERRIDE:-2241.69}"
|
||||
DEFAULT_PMM_FLASH_EXIT_PRICE_CMD='printf 1.12'
|
||||
PMM_FLASH_EXIT_PRICE_CMD_EFFECTIVE="${PMM_FLASH_EXIT_PRICE_CMD:-$DEFAULT_PMM_FLASH_EXIT_PRICE_CMD}"
|
||||
|
||||
get_reserves() {
|
||||
cast call "$POOL" 'getVaultReserve()(uint256,uint256)' --rpc-url "$ETHEREUM_MAINNET_RPC" |
|
||||
awk 'NR==1{b=$1} NR==2{q=$1} END{print b, q}'
|
||||
}
|
||||
|
||||
print_balance_snapshot() {
|
||||
local eth usdc cwusdc usdc_allowance cwusdc_allowance
|
||||
eth="$(cast balance "$WALLET" --ether --rpc-url "$ETHEREUM_MAINNET_RPC")"
|
||||
usdc="$(cast call "$USDC" 'balanceOf(address)(uint256)' "$WALLET" --rpc-url "$ETHEREUM_MAINNET_RPC" | awk '{print $1}')"
|
||||
cwusdc="$(cast call "$CWUSDC" 'balanceOf(address)(uint256)' "$WALLET" --rpc-url "$ETHEREUM_MAINNET_RPC" | awk '{print $1}')"
|
||||
usdc_allowance="$(cast call "$USDC" 'allowance(address,address)(uint256)' "$WALLET" "$DODO_PMM_INTEGRATION_MAINNET" --rpc-url "$ETHEREUM_MAINNET_RPC" | awk '{print $1}')"
|
||||
cwusdc_allowance="$(cast call "$CWUSDC" 'allowance(address,address)(uint256)' "$WALLET" "$DODO_PMM_INTEGRATION_MAINNET" --rpc-url "$ETHEREUM_MAINNET_RPC" | awk '{print $1}')"
|
||||
echo "wallet=$WALLET"
|
||||
echo "walletEth=$eth"
|
||||
echo "walletUsdcRaw=$usdc"
|
||||
echo "walletCwusdcRaw=$cwusdc"
|
||||
echo "walletUsdcAllowanceToIntegrationRaw=$usdc_allowance"
|
||||
echo "walletCwusdcAllowanceToIntegrationRaw=$cwusdc_allowance"
|
||||
}
|
||||
|
||||
run_dry_run() {
|
||||
local base="$1"
|
||||
local quote="$2"
|
||||
local trade="$3"
|
||||
local wallet_quote_balance="$4"
|
||||
local wallet_quote_allowance="$5"
|
||||
local execution_grade_args=()
|
||||
if (( wallet_quote_balance >= trade && wallet_quote_allowance >= trade )); then
|
||||
execution_grade_args+=(--execution-grade)
|
||||
fi
|
||||
node "$PROJECT_ROOT/scripts/analytics/pmm-flash-push-break-even.mjs" \
|
||||
-B "$base" -Q "$quote" \
|
||||
--full-loop-dry-run \
|
||||
--loop-strategy quote-push \
|
||||
-x "$trade" \
|
||||
--base-asset cWUSDC --base-decimals 6 \
|
||||
--quote-asset USDC --quote-decimals 6 \
|
||||
--external-exit-price-cmd "$PMM_FLASH_EXIT_PRICE_CMD_EFFECTIVE" \
|
||||
--flash-provider-cap 1000000000000 \
|
||||
--flash-provider-name 'Aave mainnet (cap placeholder)' \
|
||||
--gas-tx-count 3 --gas-per-tx 250000 \
|
||||
--max-fee-gwei "$GAS_GWEI" \
|
||||
--native-token-price "$NATIVE_PRICE" \
|
||||
--max-post-trade-deviation-bps 500 \
|
||||
"${execution_grade_args[@]}"
|
||||
}
|
||||
|
||||
calc_step() {
|
||||
local matched_raw="$1"
|
||||
python3 - "$matched_raw" <<'PY'
|
||||
import math, sys
|
||||
raw = int(sys.argv[1])
|
||||
s = raw / 1_000_000
|
||||
lp_fee_bps = 3
|
||||
cap_bps = 500
|
||||
safety_raw = 100 # leave a tiny buffer below the analytic cap to avoid rounding-edge failures
|
||||
net = s * (math.sqrt(1 + cap_bps / 10000) - 1)
|
||||
gross = net / (1 - lp_fee_bps / 10000)
|
||||
gross_raw = max(1, math.floor(gross * 1_000_000) - safety_raw)
|
||||
gross_buffered = gross_raw / 1_000_000
|
||||
net_buffered = gross_buffered * (1 - lp_fee_bps / 10000)
|
||||
base_out = net_buffered * s / (s + net_buffered)
|
||||
post_b = s - base_out
|
||||
post_q = s + net_buffered
|
||||
rebalance = post_q - post_b
|
||||
print(gross_raw)
|
||||
print(round(rebalance * 1_000_000))
|
||||
print(round(post_q * 1_000_000))
|
||||
PY
|
||||
}
|
||||
|
||||
verify_expected_reserves() {
|
||||
local expected_b="$1"
|
||||
local expected_q="$2"
|
||||
local actual_b actual_q
|
||||
read -r actual_b actual_q < <(get_reserves)
|
||||
echo "liveReservesRaw=$actual_b $actual_q"
|
||||
if [[ "$actual_b" == "$expected_b" && "$actual_q" == "$expected_q" ]]; then
|
||||
echo "[OK] live reserves match expected staged state"
|
||||
else
|
||||
echo "[WARN] live reserves do not match expected staged state"
|
||||
echo " expected=$expected_b $expected_q"
|
||||
echo " actual=$actual_b $actual_q"
|
||||
fi
|
||||
}
|
||||
|
||||
echo "=== Preflight ==="
|
||||
echo "pmmFlashExitPriceCmd=${PMM_FLASH_EXIT_PRICE_CMD_EFFECTIVE}"
|
||||
print_balance_snapshot
|
||||
WALLET_USDC_RAW="$(cast call "$USDC" 'balanceOf(address)(uint256)' "$WALLET" --rpc-url "$ETHEREUM_MAINNET_RPC" | awk '{print $1}')"
|
||||
WALLET_USDC_ALLOWANCE_RAW="$(cast call "$USDC" 'allowance(address,address)(uint256)' "$WALLET" "$DODO_PMM_INTEGRATION_MAINNET" --rpc-url "$ETHEREUM_MAINNET_RPC" | awk '{print $1}')"
|
||||
read -r LIVE_BASE_RAW LIVE_QUOTE_RAW < <(get_reserves)
|
||||
echo "liveReservesRaw=$LIVE_BASE_RAW $LIVE_QUOTE_RAW"
|
||||
|
||||
if [[ "$LIVE_BASE_RAW" == "$LEGACY_MATCHED_RAW" && "$LIVE_QUOTE_RAW" == "$LEGACY_MATCHED_RAW" ]]; then
|
||||
echo "[OK] live reserves are at the pre-top-up baseline"
|
||||
PRETOPUP_PENDING=1
|
||||
elif [[ "$LIVE_BASE_RAW" == "$START_MATCHED_RAW" && "$LIVE_QUOTE_RAW" == "$START_MATCHED_RAW" ]]; then
|
||||
echo "[OK] live reserves are already at the staged top-up baseline"
|
||||
PRETOPUP_PENDING=0
|
||||
else
|
||||
echo "[WARN] live reserves are at an unexpected state for this run sheet"
|
||||
echo " expected pre-top-up=$LEGACY_MATCHED_RAW $LEGACY_MATCHED_RAW"
|
||||
echo " expected staged=$START_MATCHED_RAW $START_MATCHED_RAW"
|
||||
PRETOPUP_PENDING=0
|
||||
fi
|
||||
echo ""
|
||||
if (( WALLET_USDC_RAW == 0 )); then
|
||||
echo "probeReadinessNote=wallet has 0 live USDC, so EOA-based execution-grade probing is unavailable after the staged top-up; the helper will run planning-mode dry-runs until quote balance is replenished for probing"
|
||||
elif (( WALLET_USDC_ALLOWANCE_RAW == 0 )); then
|
||||
echo "probeReadinessNote=wallet USDC allowance to DODO integration is 0, so EOA-based execution-grade probing is unavailable until approval is restored"
|
||||
else
|
||||
echo "probeReadinessNote=wallet has enough live USDC and allowance for direct EOA quote-push probing"
|
||||
fi
|
||||
echo ""
|
||||
if (( PRETOPUP_PENDING == 1 )); then
|
||||
echo "stagedTopupHuman=$TOPUP_HUMAN cWUSDC + $TOPUP_HUMAN USDC"
|
||||
echo "stagedTopupRaw=$TOPUP_RAW cWUSDC + $TOPUP_RAW USDC"
|
||||
echo "topupDryRunCommand:"
|
||||
cat <<EOF
|
||||
bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh \\
|
||||
--pair=cwusdc-usdc \\
|
||||
--base-amount=$TOPUP_RAW \\
|
||||
--quote-amount=$TOPUP_RAW \\
|
||||
--dry-run
|
||||
EOF
|
||||
else
|
||||
echo "staged top-up is already reflected in the live pool; no preflight add is needed"
|
||||
fi
|
||||
echo ""
|
||||
|
||||
mapfile -t STEP1_CALC < <(calc_step "$START_MATCHED_RAW")
|
||||
STEP1_X="${STEP1_CALC[0]}"
|
||||
STEP1_REBALANCE="${STEP1_CALC[1]}"
|
||||
STEP1_EXPECT_NEXT="${STEP1_CALC[2]}"
|
||||
|
||||
mapfile -t STEP2_CALC < <(calc_step "$STEP1_EXPECT_NEXT")
|
||||
STEP2_X="${STEP2_CALC[0]}"
|
||||
STEP2_REBALANCE="${STEP2_CALC[1]}"
|
||||
STEP2_EXPECT_NEXT="${STEP2_CALC[2]}"
|
||||
|
||||
mapfile -t STEP3_CALC < <(calc_step "$STEP2_EXPECT_NEXT")
|
||||
STEP3_X="${STEP3_CALC[0]}"
|
||||
STEP3_REBALANCE="${STEP3_CALC[1]}"
|
||||
STEP3_EXPECT_NEXT="${STEP3_CALC[2]}"
|
||||
|
||||
echo "=== Step 1 ==="
|
||||
echo "expectedMatchedStart=$START_MATCHED_RAW/$START_MATCHED_RAW"
|
||||
echo "computedSafeTrancheRaw=$STEP1_X"
|
||||
run_dry_run "$START_MATCHED_RAW" "$START_MATCHED_RAW" "$STEP1_X" "$WALLET_USDC_RAW" "$WALLET_USDC_ALLOWANCE_RAW"
|
||||
echo ""
|
||||
echo "rebalanceCommand:"
|
||||
cat <<EOF
|
||||
bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh \\
|
||||
--pair=cwusdc-usdc \\
|
||||
--base-amount=$STEP1_REBALANCE \\
|
||||
--quote-amount=0
|
||||
EOF
|
||||
echo ""
|
||||
|
||||
echo "=== Verify After Step 1 Rebalance ==="
|
||||
verify_expected_reserves "$STEP1_EXPECT_NEXT" "$STEP1_EXPECT_NEXT"
|
||||
echo ""
|
||||
|
||||
echo "=== Step 2 ==="
|
||||
echo "expectedMatchedStart=$STEP1_EXPECT_NEXT/$STEP1_EXPECT_NEXT"
|
||||
echo "computedSafeTrancheRaw=$STEP2_X"
|
||||
run_dry_run "$STEP1_EXPECT_NEXT" "$STEP1_EXPECT_NEXT" "$STEP2_X" "$WALLET_USDC_RAW" "$WALLET_USDC_ALLOWANCE_RAW"
|
||||
echo ""
|
||||
echo "rebalanceCommand:"
|
||||
cat <<EOF
|
||||
bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh \\
|
||||
--pair=cwusdc-usdc \\
|
||||
--base-amount=$STEP2_REBALANCE \\
|
||||
--quote-amount=0
|
||||
EOF
|
||||
echo ""
|
||||
|
||||
echo "=== Verify After Step 2 Rebalance ==="
|
||||
verify_expected_reserves "$STEP2_EXPECT_NEXT" "$STEP2_EXPECT_NEXT"
|
||||
echo ""
|
||||
|
||||
echo "=== Step 3 ==="
|
||||
echo "expectedMatchedStart=$STEP2_EXPECT_NEXT/$STEP2_EXPECT_NEXT"
|
||||
echo "computedSafeTrancheRaw=$STEP3_X"
|
||||
run_dry_run "$STEP2_EXPECT_NEXT" "$STEP2_EXPECT_NEXT" "$STEP3_X" "$WALLET_USDC_RAW" "$WALLET_USDC_ALLOWANCE_RAW"
|
||||
echo ""
|
||||
echo "rebalanceCommand:"
|
||||
cat <<EOF
|
||||
bash scripts/deployment/add-mainnet-public-dodo-cw-liquidity.sh \\
|
||||
--pair=cwusdc-usdc \\
|
||||
--base-amount=$STEP3_REBALANCE \\
|
||||
--quote-amount=0
|
||||
EOF
|
||||
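The `calc_step` heredoc sizes the largest quote-push that keeps a matched s/s pool inside the 500 bps deviation cap: pushing net quote n yields reserves (s^2/(s+n), s+n) under constant product, so the reserve ratio is (1 + n/s)^2, and n = s*(sqrt(1.05) - 1) sits exactly at the cap before the fee gross-up and the 100-raw safety buffer. A standalone restatement of that sizing step:

```python
import math

def safe_tranche_raw(matched_raw: int,
                     lp_fee_bps: int = 3,
                     cap_bps: int = 500,
                     safety_raw: int = 100) -> int:
    # Same arithmetic as calc_step: net quote sized at the deviation cap,
    # grossed up for the LP fee, floored to raw units, minus a small buffer
    # so rounding cannot tip the pool over the cap.
    s = matched_raw / 1_000_000
    net = s * (math.sqrt(1 + cap_bps / 10_000) - 1)
    gross = net / (1 - lp_fee_bps / 10_000)
    return max(1, math.floor(gross * 1_000_000) - safety_raw)
```

With the buffer in place the realized post-trade ratio lands just under, never over, the 500 bps cap.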
@@ -20,8 +20,17 @@ run_step() {
|
||||
run_step "Config validation" \
|
||||
bash "$PROJECT_ROOT/scripts/validation/validate-config-files.sh"
|
||||
|
||||
run_step "Gas rollout status parser" \
|
||||
bash "$PROJECT_ROOT/scripts/verify/check-gas-public-pool-status.sh" --json
|
||||
|
||||
run_step "Chain 138 package CI targets" \
|
||||
pnpm --dir "$PROJECT_ROOT/smom-dbis-138" run test:ci
|
||||
|
||||
run_step "Info site build" \
|
||||
pnpm --dir "$PROJECT_ROOT/info-defi-oracle-138" build
|
||||
|
||||
run_step "Economics toolkit (unit tests)" \
|
||||
pnpm --dir "$PROJECT_ROOT" run economics:test
|
||||
|
||||
log ""
|
||||
ok "Repo green-path tests passed."
|
||||
|
||||
33
scripts/verify/self-test-chain138-rpc-verify.sh
Executable file
@@ -0,0 +1,33 @@
|
||||
#!/usr/bin/env bash
|
||||
# Offline checks for Chain 138 RPC verification scripts (CI-safe).
|
||||
# - bash -n on health, parity, monitor, pool clear, inventory
|
||||
# - Parity script skipped (LAN-only) unless CHAIN138_RPC_PARITY_SKIP=0 and RPC reachable
|
||||
#
|
||||
# Usage: bash scripts/verify/self-test-chain138-rpc-verify.sh
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
|
||||
cd "$ROOT"
|
||||
|
||||
check_n() {
|
||||
local f="$1"
|
||||
bash -n "$f" || { echo "bash -n FAIL: $f" >&2; exit 1; }
|
||||
echo "OK bash -n $f"
|
||||
}
|
||||
|
||||
# Inventory is source-only (exits if executed); syntax-check still valid.
|
||||
bash -n scripts/lib/chain138-lan-rpc-inventory.sh && echo "OK bash -n scripts/lib/chain138-lan-rpc-inventory.sh"
|
||||
|
||||
check_n scripts/verify/check-chain138-rpc-health.sh
|
||||
check_n scripts/verify/check-chain138-rpc-nonce-gas-parity.sh
|
||||
check_n scripts/monitoring/monitor-blockchain-health.sh
|
||||
check_n scripts/clear-all-transaction-pools.sh
|
||||
check_n scripts/verify/check-pending-transactions-chain138.sh
|
||||
|
||||
# CI / no LAN: parity must not fail the workflow
|
||||
CHAIN138_RPC_PARITY_SKIP=1 bash scripts/verify/check-chain138-rpc-nonce-gas-parity.sh
|
||||
echo "OK parity with CHAIN138_RPC_PARITY_SKIP=1"
|
||||
|
||||
echo ""
|
||||
echo "=== self-test-chain138-rpc-verify: all offline checks passed ==="
|
||||
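The `check_n` pattern (syntax-check only, no execution) is what keeps this self-test CI-safe. A small Python wrapper around the same `bash -n` call, with a hypothetical helper name:

```python
import os
import subprocess
import tempfile

def syntax_ok(script_text: str) -> bool:
    """Mirror of check_n: return True iff `bash -n` accepts the script."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script_text)
        path = f.name
    try:
        # -n parses without executing, so LAN-only scripts stay safe to check.
        return subprocess.run(["bash", "-n", path],
                              capture_output=True).returncode == 0
    finally:
        os.unlink(path)
```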
163
scripts/verify/sweep-flash-selectors-chain138.sh
Executable file
@@ -0,0 +1,163 @@
|
||||
#!/usr/bin/env bash
|
||||
# Read-only: scan deployed bytecode (Chain 138) for common flash / callback entrypoints.
|
||||
# Address source: config/smart-contracts-master.json (chains.138.contracts) plus optional extras.
|
||||
#
|
||||
# Usage:
|
||||
# RPC_URL_138=https://rpc-core.d-bis.org bash scripts/verify/sweep-flash-selectors-chain138.sh
|
||||
# OUT_JSON=path/to/flash_candidates.json RPC_URL_138=... bash scripts/verify/sweep-flash-selectors-chain138.sh
|
||||
#
|
||||
# Requires: cast, jq. Does not use cast selectors --resolve (no OpenChain); matches 4-byte selectors only.
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
|
||||
JSON="${ROOT}/config/smart-contracts-master.json"
|
||||
RPC="${RPC_URL_138:-https://rpc-core.d-bis.org}"
|
||||
OUT_JSON="${OUT_JSON:-${ROOT}/config/flash_candidates-chain138.json}"
|
||||
CHAIN_KEY="${CHAIN_KEY:-138}"
|
||||
|
||||
if ! command -v cast >/dev/null || ! command -v jq >/dev/null; then
|
||||
echo "ERROR: need cast and jq on PATH" >&2
|
||||
exit 1
|
||||
fi
|
||||
if [[ ! -f "$JSON" ]]; then
|
||||
echo "ERROR: missing $JSON" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Workspace-canonical PMM addresses (may differ from smart-contracts-master.json); always merged for sweep.
|
||||
EXTRA_ADDRS=(
|
||||
"0x5BDc62f1ae7D630c37A8B363a1d49845356Ee72d"
|
||||
"0xff8d3b8fDF7B112759F076B69f4271D4209C0849"
|
||||
"0x6fc60DEDc92a2047062294488539992710b99D71"
|
||||
"0x9f74Be42725f2Aa072a9E0CdCce0E7203C510263"
|
||||
"0xBe9e0B2d4cF6A3b2994d6f2f0904D2B165eB8ffC"
|
||||
"0xD084b68cB4B1ef2cBA09CF99FB1B6552fd9b4859"
|
||||
"0x89F7a1fcbBe104BeE96Da4b4b6b7d3AF85f7E661"
|
||||
)
|
||||
|
||||
declare -A SIG_OF
|
||||
while IFS= read -r line; do
|
||||
sel="${line%%$'\t'*}"
|
||||
sig="${line#*$'\t'}"
|
||||
sel_lc=$(echo "$sel" | tr '[:upper:]' '[:lower:]')
|
||||
SIG_OF[$sel_lc]="$sig"
|
||||
done <<'EOF'
|
||||
0xd0a494e4 flashLoan(uint256,uint256,address,bytes)
|
||||
0xadf51de1 flashLoan(address,uint256,address,bytes)
|
||||
0xf859ddd7 flashLoan(address,uint256,uint256,address,bytes)
|
||||
0x5cffe9de flashLoan(address,address,uint256,bytes)
|
||||
0x5c38449e flashLoan(address,address[],uint256[],bytes)
|
||||
0xf740f328 flash(address,uint256,bytes)
|
||||
0x10d1e85c uniswapV2Call(address,uint256,uint256,bytes)
|
||||
0x244dac6e executeOperation(address,address,address,uint256,uint256,bytes)
|
||||
0xfc08f9f6 executeOperation(address[],uint256[],uint256[],address,address,bytes)
|
||||
0x3e0a794c operateAction(uint8,uint256,bytes)
|
||||
0x2b2d17ad flashLoanSimple(address,uint256,bytes,uint16)
|
||||
0x613255ab maxFlashLoan(address)(uint256)
|
||||
0x88e492ab flashFees(address,address)(uint256)
|
||||
0xeb2021c3 DVMFlashLoanCall(address,uint256,uint256,bytes)
|
||||
0x4d7f652f DPPFlashLoanCall(address,uint256,uint256,bytes)
|
||||
0x2eaeb2e6 DSPFlashLoanCall(address,uint256,uint256,bytes)
|
||||
EOF
|
||||
|
||||
mapfile -t PAIRS < <(
|
||||
  jq -r --arg c "$CHAIN_KEY" '.chains[$c].contracts // {} | to_entries[] | "\(.key)\t\(.value)"' "$JSON"
)
for a in "${EXTRA_ADDRS[@]}"; do
  PAIRS+=("EXTRA_CANONICAL"$'\t'"$a")
done

declare -A SEEN_ADDR
declare -a SCAN_ROWS=()
declare -a CANDIDATE_JSON=()
declare -a NO_CODE=()
declare -a ERRORS=()

cid=$(cast chain-id --rpc-url "$RPC" 2>/dev/null || echo "")

for row in "${PAIRS[@]}"; do
  name="${row%%$'\t'*}"
  addr="${row#*$'\t'}"
  addr_lc=$(echo "$addr" | tr '[:upper:]' '[:lower:]')
  if [[ -n "${SEEN_ADDR[$addr_lc]:-}" ]]; then
    continue
  fi
  SEEN_ADDR[$addr_lc]=1

  if [[ ! "$addr" =~ ^0x[0-9a-fA-F]{40}$ ]]; then
    ERRORS+=("{\"name\":\"$name\",\"address\":\"$addr\",\"error\":\"invalid address\"}")
    continue
  fi

  code=$(cast code "$addr" --rpc-url "$RPC" 2>&1) || true
  if [[ "$code" == "0x" || -z "$code" ]]; then
    # jq -cn already emits a quoted JSON string; do not wrap it in extra quotes.
    NO_CODE+=("{\"name\":$(jq -cn --arg n "$name" '$n'),\"address\":\"$addr\"}")
    continue
  fi
  if [[ "$code" == Error:* ]]; then
    ERRORS+=("{\"name\":$(jq -cn --arg n "$name" '$n'),\"address\":\"$addr\",\"error\":$(jq -cn --arg m "$code" '$m')}")
    continue
  fi

  sel_text=$(cast selectors "$code" 2>/dev/null || true)
  matched=()
  matched_sigs=()
  while IFS= read -r sline; do
    sel=$(echo "$sline" | awk '{print $1}' | tr '[:upper:]' '[:lower:]')
    [[ "$sel" =~ ^0x[0-9a-f]{8}$ ]] || continue
    if [[ -n "${SIG_OF[$sel]:-}" ]]; then
      matched+=("$sel")
      matched_sigs+=("${SIG_OF[$sel]}")
    fi
  done <<< "$sel_text"

  if ((${#matched[@]} > 0)); then
    ms=$(printf '%s\n' "${matched_sigs[@]}" | jq -R . | jq -s .)
    mc=$(printf '%s\n' "${matched[@]}" | jq -R . | jq -s .)
    CANDIDATE_JSON+=("$(jq -nc \
      --arg name "$name" \
      --arg addr "$addr" \
      --argjson ms "$ms" \
      --argjson mc "$mc" \
      '{name:$name,address:$addr,matchedSelectors:$mc,matchedSignatures:$ms}')")
  fi

  SCAN_ROWS+=("{\"name\":$(jq -cn --arg n "$name" '$n'),\"address\":\"$addr\",\"bytecodeHexChars\":$((${#code} - 2)),\"flashMatchCount\":${#matched[@]}}")
done

ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
ref=$(printf '%s\n' "${!SIG_OF[@]}" | sort -u | while read -r s; do
  jq -nc --arg sel "$s" --arg sig "${SIG_OF[$s]}" '{selector:$sel,signature:$sig}'
done | jq -s .)

candidates_arr=$(printf '%s\n' "${CANDIDATE_JSON[@]}" | jq -s .)
scanned_arr=$(printf '%s\n' "${SCAN_ROWS[@]}" | jq -s .)
nocode_arr=$(printf '%s\n' "${NO_CODE[@]}" | jq -s .)
errors_arr=$(printf '%s\n' "${ERRORS[@]}" | jq -s .)

jq -nc \
  --arg ts "$ts" \
  --arg rpc "$RPC" \
  --arg chainId "$cid" \
  --argjson ref "$ref" \
  --argjson candidates "$candidates_arr" \
  --argjson scanned "$scanned_arr" \
  --argjson noCode "$nocode_arr" \
  --argjson errors "$errors_arr" \
  '{
    schemaVersion: 1,
    generatedAt: $ts,
    rpcUrl: $rpc,
    chainId: ($chainId | tonumber? // $chainId),
    description: "Heuristic flash/callback selector scan; false positives possible (e.g. unrelated executeOperation).",
    flashSelectorsReference: $ref,
    candidates: $candidates,
    scannedContracts: $scanned,
    emptyBytecode: $noCode,
    errors: $errors
  }' > "$OUT_JSON"

echo "Wrote $OUT_JSON"
echo "Candidates: $(jq '.candidates | length' "$OUT_JSON")"
jq '.candidates' "$OUT_JSON"
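The report written to `$OUT_JSON` above is plain JSON, so downstream tooling can slice it with jq. A minimal sketch, using a hand-written sample that mirrors the report shape (the contract name, address, and selector are made up, not real scan output):

```shell
# Hypothetical sample matching the report shape built above; values are illustrative.
cat > /tmp/flash_scan_sample.json <<'EOF'
{
  "schemaVersion": 1,
  "candidates": [
    {"name": "RouterA",
     "address": "0x1111111111111111111111111111111111111111",
     "matchedSelectors": ["0x920f5c84"],
     "matchedSignatures": ["executeOperation(address[],uint256[],uint256[],address,bytes)"]}
  ],
  "emptyBytecode": [],
  "errors": []
}
EOF

# One row per candidate: name, address, matched-selector count (tab-separated).
jq -r '.candidates[] | [.name, .address, (.matchedSelectors | length)] | @tsv' \
  /tmp/flash_scan_sample.json
```

`@tsv` keeps the output shell-friendly, so the rows can be fed straight into `cut` or a `while read` loop.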
@@ -9,6 +9,14 @@
#
# Registry shape: config/dbis-institutional/schemas/address-registry-entry.schema.json
#
# DBIS → Blockscout labelType (when .blockscout.labelType is absent):
#   token_contract  → token
#   eoa_operational → account
#   treasury_vault, smart_account, escrow, contract_registry, iso_intake,
#   dbis_settlement_router, other → contract
#
# If .blockscout.label is missing, a default label is built from addressRole + entity_id.
#
# Env (HTTP mode):
#   BLOCKSCOUT_BASE_URL   default https://explorer.d-bis.org
#   BLOCKSCOUT_LABEL_PATH default /api/v1/labels
@@ -232,10 +240,30 @@ emit_one() {

  address=$(echo "$blob" | jq -r '.address // empty')
  label=$(echo "$blob" | jq -r '.blockscout.label // empty')
  ltype=$(echo "$blob" | jq -r '.blockscout.labelType // "contract"')
  ltype=$(echo "$blob" | jq -r 'if (.blockscout|type=="object") and (.blockscout.labelType|type=="string") and (.blockscout.labelType|length>0) then .blockscout.labelType else empty end')
  role=$(echo "$blob" | jq -r '.addressRole // empty')
  entity=$(echo "$blob" | jq -r '.entity_id // empty')

  if [[ -z "$ltype" ]]; then
    case "$role" in
      token_contract) ltype="token" ;;
      eoa_operational) ltype="account" ;;
      *) ltype="contract" ;;
    esac
  fi

  if [[ -z "$label" ]]; then
    if [[ -n "$role" ]]; then
      if [[ -n "$entity" ]]; then
        label="DBIS:${role} (${entity})"
      else
        label="DBIS:${role}"
      fi
    fi
  fi

  if [[ -z "$address" || -z "$label" ]]; then
    echo "skip (missing address or blockscout.label): $display" >&2
    echo "skip (missing address or resolvable label): $display" >&2
    return 0
  fi
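The role-to-labelType fallback described in the header comment can be sketched as a standalone function; `dbis_label_type` is a hypothetical name for illustration, not part of the script:

```shell
# Sketch of the addressRole → Blockscout labelType fallback documented above.
# dbis_label_type is a hypothetical helper name.
dbis_label_type() {
  local role="$1"
  case "$role" in
    token_contract)  echo "token" ;;
    eoa_operational) echo "account" ;;
    # treasury_vault, smart_account, escrow, contract_registry, iso_intake,
    # dbis_settlement_router and anything else all map to the generic type:
    *)               echo "contract" ;;
  esac
}

dbis_label_type token_contract   # prints "token"
dbis_label_type eoa_operational  # prints "account"
dbis_label_type treasury_vault   # prints "contract"
```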
111 scripts/verify/validate-address-registry-xe-aliases.mjs Normal file
@@ -0,0 +1,111 @@
#!/usr/bin/env node
/**
 * Validate address-registry JSON: each alias with aliasType "web3_eth_iban" must be a valid
 * XE… direct/indirect IBAN per web3-eth-iban; if toAddress() succeeds, it must match entry.address.
 *
 * Usage:
 *   node scripts/verify/validate-address-registry-xe-aliases.mjs [path/to/registry.json ...]
 *   (no args: validates repo examples under config/dbis-institutional/examples/)
 */
import { createRequire } from 'module'
import { readFileSync, existsSync } from 'fs'
import { resolve, dirname, join } from 'path'
import { fileURLToPath } from 'url'

const require = createRequire(import.meta.url)
const Iban = require('web3-eth-iban')

const __dirname = dirname(fileURLToPath(import.meta.url))
const ROOT = resolve(__dirname, '../..')
const DEFAULT_FILES = [
  join(ROOT, 'config/dbis-institutional/examples/address-registry-entry.example.json'),
  join(ROOT, 'config/dbis-institutional/examples/address-registry-entries-batch.example.json'),
]

function normalizeAddr(a) {
  if (typeof a !== 'string' || !/^0x[a-fA-F0-9]{40}$/.test(a)) return null
  return a.toLowerCase()
}

function collectEntries(data) {
  if (Array.isArray(data)) return data
  if (data && typeof data === 'object') return [data]
  return []
}

function validateEntry(entry, sourceLabel) {
  const id = entry.registryEntryId || sourceLabel
  const addr = normalizeAddr(entry.address)
  if (!addr) {
    console.error(`FAIL ${id}: missing or invalid address`)
    return 1
  }
  const aliases = entry.aliases
  if (!Array.isArray(aliases)) return 0

  let errors = 0
  for (const al of aliases) {
    if (!al || al.aliasType !== 'web3_eth_iban') continue
    const raw = al.aliasValue
    if (typeof raw !== 'string' || !raw.trim()) {
      console.error(`FAIL ${id}: web3_eth_iban aliasValue must be a non-empty string`)
      errors++
      continue
    }
    const ibanStr = raw.trim().toUpperCase()
    if (!Iban.isValid(ibanStr)) {
      console.error(`FAIL ${id}: invalid XE/Ethereum IBAN checksum or format: ${raw}`)
      errors++
      continue
    }
    try {
      const resolved = Iban.toAddress(ibanStr)
      const resolvedNorm = normalizeAddr(resolved)
      if (resolvedNorm && resolvedNorm !== addr) {
        console.error(
          `FAIL ${id}: web3_eth_iban ${ibanStr} resolves to ${resolved}, expected ${entry.address}`,
        )
        errors++
      }
    } catch {
      // web3-eth-iban toAddress() throws on some leading-zero encodings; checksum validity still enforced above.
      console.warn(
        `WARN ${id}: IBAN ${ibanStr} is valid but toAddress() failed (library edge case); verify manually against ${entry.address}`,
      )
    }
  }
  return errors
}

function validateFile(filePath) {
  const abs = resolve(filePath)
  if (!existsSync(abs)) {
    console.error(`Missing file: ${abs}`)
    return 1
  }
  let data
  try {
    data = JSON.parse(readFileSync(abs, 'utf8'))
  } catch (e) {
    console.error(`Invalid JSON: ${abs}`, e.message)
    return 1
  }
  let total = 0
  for (const entry of collectEntries(data)) {
    total += validateEntry(entry, abs)
  }
  if (total === 0) {
    console.log(`OK XE/web3_eth_iban aliases: ${abs}`)
  }
  return total
}

const paths = process.argv.slice(2).filter((a) => !a.startsWith('-'))
const targets = paths.length > 0 ? paths : DEFAULT_FILES

let exitCode = 0
for (const p of targets) {
  const code = validateFile(p)
  if (code !== 0) exitCode = 1
}
process.exit(exitCode)
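The XE alias check above ultimately rests on the ISO 13616 rearrange-and-mod-97 test that web3-eth-iban implements. A minimal pure-bash sketch of that arithmetic, with a made-up base-36 payload; `iban_to_digits` and `mod97` are hypothetical helpers, not part of any script here:

```shell
# Map letters to numbers (A=10 … Z=35); digits pass through unchanged.
iban_to_digits() {
  local s="${1^^}" out="" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c="${s:i:1}"
    if [[ "$c" == [0-9] ]]; then out+="$c"; else out+=$(( $(printf '%d' "'$c") - 55 )); fi
  done
  printf '%s' "$out"
}

# Digit-by-digit modulus so the (very long) number never overflows.
mod97() {
  local n="$1" r=0 i
  for (( i = 0; i < ${#n}; i++ )); do r=$(( (r * 10 + ${n:i:1}) % 97 )); done
  printf '%s' "$r"
}

# Generate check digits for a payload under country code XE, then validate:
bban="7338O073KYGTWWZN0F2"   # illustrative base-36 payload only
check=$(printf '%02d' $(( 98 - $(mod97 "$(iban_to_digits "${bban}XE00")") )))
iban="XE${check}${bban}"
# Validation: move the first 4 chars to the end; the value must be ≡ 1 (mod 97).
[[ $(mod97 "$(iban_to_digits "${iban:4}${iban:0:4}")") -eq 1 ]] && echo "valid: $iban"
```

The IBAN built this way is valid by construction, which is why the final mod-97 test passes; real XE aliases should still go through `Iban.isValid()`.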
@@ -39,16 +39,18 @@ EXPECTED_IP[192.168.11.240]=2400
EXPECTED_IP[192.168.11.241]=2401
EXPECTED_IP[192.168.11.242]=2402
EXPECTED_IP[192.168.11.243]=2403
EXPECTED_IP[192.168.11.172]=2500
EXPECTED_IP[192.168.11.173]=2501
EXPECTED_IP[192.168.11.174]=2502
EXPECTED_IP[192.168.11.246]=2503
EXPECTED_IP[192.168.11.247]=2504
EXPECTED_IP[192.168.11.248]=2505
EXPECTED_IP[192.168.11.172]=2420
EXPECTED_IP[192.168.11.173]=2430
EXPECTED_IP[192.168.11.174]=2440
EXPECTED_IP[192.168.11.246]=2460
EXPECTED_IP[192.168.11.247]=2470
EXPECTED_IP[192.168.11.248]=2480
EXPECTED_IP[192.168.11.213]=1505
EXPECTED_IP[192.168.11.214]=1506
EXPECTED_IP[192.168.11.244]=1507
EXPECTED_IP[192.168.11.245]=1508
EXPECTED_IP[192.168.11.219]=1509
EXPECTED_IP[192.168.11.220]=1510

RED='\033[0;31m'
GREEN='\033[0;32m'
386 scripts/verify/verify-dodo-v3-chain138-blockscout.sh Normal file
@@ -0,0 +1,386 @@
#!/usr/bin/env bash
set -euo pipefail

# Verify the Chain 138 DODO v3 / D3MM pilot deploy set on Blockscout using
# the existing Forge verification proxy bridge.
#
# Usage:
#   bash scripts/verify/verify-dodo-v3-chain138-blockscout.sh
#   bash scripts/verify/verify-dodo-v3-chain138-blockscout.sh --status-only
#   bash scripts/verify/verify-dodo-v3-chain138-blockscout.sh --only D3Oracle,D3Vault
#
# Notes:
# - This verifies the core DODO v3 pilot control-plane contracts by default.
# - Support/template/mock deployments can still be verified by passing `--only`.
# - The canonical D3MM pilot pool itself is clone-based and is tracked through
#   the verified factory set plus the runtime health check in
#   `check-dodo-v3-chain138.sh`.

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../.." && pwd)"

if [[ -f "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" ]]; then
  # shellcheck source=/dev/null
  source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh"
fi

command -v forge >/dev/null 2>&1 || { echo "ERROR: forge not found"; exit 1; }
command -v node >/dev/null 2>&1 || { echo "ERROR: node not found"; exit 1; }
command -v cast >/dev/null 2>&1 || { echo "ERROR: cast not found"; exit 1; }
command -v jq >/dev/null 2>&1 || { echo "ERROR: jq not found"; exit 1; }
command -v curl >/dev/null 2>&1 || { echo "ERROR: curl not found"; exit 1; }

DODO_V3_DIR="${DODO_V3_DIR:-/home/intlc/projects/dodo-v3}"
DEPLOYMENTS_DIR="${DODO_V3_DIR}/deployments/chain138"
[[ -d "${DODO_V3_DIR}" ]] || { echo "ERROR: dodo-v3 not found at ${DODO_V3_DIR}"; exit 1; }
[[ -d "${DEPLOYMENTS_DIR}" ]] || { echo "ERROR: deployments dir not found at ${DEPLOYMENTS_DIR}"; exit 1; }

RPC_URL="${RPC_URL_138:-${CHAIN138_RPC_URL:-${CHAIN138_RPC:-http://192.168.11.211:8545}}}"
BLOCKSCOUT_URL="${CHAIN138_BLOCKSCOUT_INTERNAL_URL:-http://${IP_BLOCKSCOUT:-192.168.11.140}:4000}"
BLOCKSCOUT_API_BASE="${CHAIN138_BLOCKSCOUT_API_BASE:-${BLOCKSCOUT_URL}/api/v2}"
BLOCKSCOUT_PUBLIC_API_BASE="${CHAIN138_BLOCKSCOUT_PUBLIC_API_BASE:-https://explorer.d-bis.org/api/v2}"
VERIFIER_PORT="${FORGE_VERIFIER_PROXY_PORT:-3080}"
FORGE_VERIFIER_URL="${FORGE_VERIFIER_URL:-http://127.0.0.1:${VERIFIER_PORT}/api}"
WAIT_ATTEMPTS="${DODO_V3_VERIFY_WAIT_ATTEMPTS:-18}"
WAIT_SECONDS="${DODO_V3_VERIFY_WAIT_SECONDS:-5}"

ONLY_LIST=""
SKIP_LIST=""
STATUS_ONLY=0
NO_WAIT=0
PROXY_PID=""

while [[ $# -gt 0 ]]; do
  case "$1" in
    --only) ONLY_LIST="${2:-}"; shift 2 ;;
    --skip) SKIP_LIST="${2:-}"; shift 2 ;;
    --status-only) STATUS_ONLY=1; shift ;;
    --no-wait) NO_WAIT=1; shift ;;
    *)
      echo "Unknown argument: $1" >&2
      exit 1
      ;;
  esac
done
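The `RPC_URL` assignment above chains three environment variables before falling back to a literal default. A standalone sketch of that nested parameter-expansion pattern (`resolve_url`, `PRIMARY`, `SECONDARY`, `TERTIARY` are illustrative names, not script variables):

```shell
# Nested ${VAR:-fallback} expansions resolve outermost-first: the first
# variable in the chain that is set and non-empty wins, otherwise the
# literal default at the innermost level applies.
resolve_url() {
  printf '%s\n' "${PRIMARY:-${SECONDARY:-${TERTIARY:-http://127.0.0.1:8545}}}"
}

unset PRIMARY SECONDARY TERTIARY
resolve_url                      # prints the literal default
SECONDARY=http://10.0.0.2:8545
resolve_url                      # prints the SECONDARY value
```

Because `:-` (not `-`) is used, an empty-but-exported variable also falls through to the next candidate.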
cleanup_proxy() {
  [[ -n "${PROXY_PID:-}" ]] && kill "${PROXY_PID}" 2>/dev/null || true
}
trap cleanup_proxy EXIT

log() {
  printf '%s\n' "$*"
}

ok() {
  printf '[ok] %s\n' "$*"
}

warn() {
  printf '[warn] %s\n' "$*" >&2
}

fail() {
  printf '[fail] %s\n' "$*" >&2
  exit 1
}

should_handle() {
  local name="$1"
  [[ -n "${ONLY_LIST}" ]] && [[ ",${ONLY_LIST}," != *",${name},"* ]] && return 1
  [[ -n "${SKIP_LIST}" ]] && [[ ",${SKIP_LIST}," = *",${name},"* ]] && return 1
  return 0
}

deployment_json() {
  local file="$1"
  printf '%s/%s.json' "${DEPLOYMENTS_DIR}" "${file}"
}

deployment_address() {
  jq -r '.address' "$(deployment_json "$1")"
}

deployment_arg() {
  local file="$1"
  local index="$2"
  jq -r ".args[${index}]" "$(deployment_json "${file}")"
}

deployment_array_literal() {
  local file="$1"
  local index="$2"
  jq -r ".args[${index}] | \"[\" + (join(\",\")) + \"]\"" "$(deployment_json "${file}")"
}

deployment_solc_input_path() {
  local file="$1"
  local hash
  hash="$(jq -r '.solcInputHash' "$(deployment_json "${file}")")"
  printf '%s/solcInputs/%s.json' "${DEPLOYMENTS_DIR}" "${hash}"
}

has_contract_bytecode() {
  local addr="$1"
  local code
  code="$(cast code "${addr}" --rpc-url "${RPC_URL}" 2>/dev/null | tr -d '\n\r \t' | tr '[:upper:]' '[:lower:]')" || true
  [[ -n "${code}" && "${code}" != "0x" && "${code}" != "0x0" ]]
}

proxy_listening() {
  if command -v nc >/dev/null 2>&1; then
    nc -z -w 2 127.0.0.1 "${VERIFIER_PORT}" 2>/dev/null
  else
    timeout 2 bash -c "echo >/dev/tcp/127.0.0.1/${VERIFIER_PORT}" 2>/dev/null
  fi
}
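`should_handle` above decides membership in the `--only`/`--skip` comma lists by wrapping both the list and the candidate name in commas. A standalone sketch of that trick (`in_csv_list` is a hypothetical helper, not part of the script):

```shell
# Wrapping both sides in commas makes the substring match exact:
# "D3Vault" cannot match inside "D3VaultTemplate".
# in_csv_list is a hypothetical helper name.
in_csv_list() {
  local needle="$1" list="$2"
  [[ ",${list}," == *",${needle},"* ]]
}

in_csv_list D3Vault "D3Oracle,D3Vault" && echo "handled"
in_csv_list D3Vault "D3VaultTemplate,D3Oracle" || echo "not handled"
```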
start_proxy_if_needed() {
  if proxy_listening; then
    ok "Forge verification proxy already listening on ${VERIFIER_PORT}."
    return 0
  fi
  log "Starting forge verification proxy on ${VERIFIER_PORT} -> ${BLOCKSCOUT_URL}"
  PORT="${VERIFIER_PORT}" BLOCKSCOUT_URL="${BLOCKSCOUT_URL}" node "${PROJECT_ROOT}/forge-verification-proxy/server.js" >/tmp/dodo-v3-blockscout-proxy.log 2>&1 &
  PROXY_PID=$!
  sleep 2
  proxy_listening || fail "Forge verification proxy failed to start. See /tmp/dodo-v3-blockscout-proxy.log"
}

verification_status_json() {
  local addr="$1"
  local raw
  local base
  for base in "${BLOCKSCOUT_API_BASE}" "${BLOCKSCOUT_PUBLIC_API_BASE}"; do
    for _ in 1 2 3; do
      raw="$(curl --max-time 15 -fsS "${base}/smart-contracts/${addr}" 2>/dev/null || true)"
      if [[ -n "${raw}" ]] && jq -e 'type == "object"' >/dev/null 2>&1 <<<"${raw}"; then
        printf '%s' "${raw}"
        return 0
      fi
      sleep 2
    done
  done
  return 1
}

has_bytecode_surface_only() {
  local addr="$1"
  local json
  json="$(verification_status_json "${addr}")" || return 1
  jq -e '.creation_bytecode and .deployed_bytecode and ((.name // "") == "") and ((.compiler_version // "") == "")' >/dev/null 2>&1 <<<"${json}"
}

is_verified() {
  local addr="$1"
  local expected_name="$2"
  local json name compiler
  json="$(verification_status_json "${addr}")" || return 1
  name="$(jq -r '.name // empty' <<<"${json}")"
  compiler="$(jq -r '.compiler_version // empty' <<<"${json}")"
  [[ -n "${name}" && -n "${compiler}" && "${name}" == "${expected_name}" ]]
}

wait_for_verification() {
  local label="$1"
  local addr="$2"
  local expected_name="$3"
  local attempt json name compiler
  for (( attempt=1; attempt<=WAIT_ATTEMPTS; attempt++ )); do
    json="$(verification_status_json "${addr}")" || json=""
    name="$(jq -r '.name // empty' <<<"${json}" 2>/dev/null || true)"
    compiler="$(jq -r '.compiler_version // empty' <<<"${json}" 2>/dev/null || true)"
    if [[ -n "${name}" && -n "${compiler}" && "${name}" == "${expected_name}" ]]; then
      ok "${label} verified on Blockscout as ${name} (${compiler})."
      return 0
    fi
    sleep "${WAIT_SECONDS}"
  done
  return 1
}
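`wait_for_verification` is a bounded poll: try, sleep, repeat, give up after a fixed number of attempts. The same loop shape as a generic helper (`retry_until` is hypothetical, not part of the script):

```shell
# Generic bounded poll: run a command up to N times with a fixed delay,
# succeeding as soon as the command does. retry_until is a hypothetical
# helper mirroring the wait_for_verification loop above.
retry_until() {
  local attempts="$1" delay="$2" i
  shift 2
  for (( i = 1; i <= attempts; i++ )); do
    "$@" && return 0
    sleep "${delay}"
  done
  return 1
}

# Example: succeed once a marker file exists (created here up front).
touch /tmp/retry_until_marker
retry_until 3 0 test -f /tmp/retry_until_marker && echo "ready"
```

Passing the probe as `"$@"` keeps quoting intact, so any command with arguments can be polled.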
contract_path() {
  case "$1" in
    CloneFactory) printf '%s' 'contracts/DODOV3MM/lib/CloneFactory.sol:CloneFactory' ;;
    D3FeeRateModel) printf '%s' 'contracts/mock/D3FeeRateModel.sol:D3FeeRateModel' ;;
    D3MakerTemplate) printf '%s' 'contracts/DODOV3MM/D3Pool/D3Maker.sol:D3Maker' ;;
    D3MMFactory) printf '%s' 'contracts/DODOV3MM/periphery/D3MMFactory.sol:D3MMFactory' ;;
    D3MMLiquidationRouter) printf '%s' 'contracts/DODOV3MM/periphery/D3MMLiquidationRouter.sol:D3MMLiquidationRouter' ;;
    D3MMTemplate) printf '%s' 'contracts/DODOV3MM/D3Pool/D3MM.sol:D3MM' ;;
    D3Oracle) printf '%s' 'contracts/DODOV3MM/periphery/D3Oracle.sol:D3Oracle' ;;
    D3PoolQuota) printf '%s' 'contracts/DODOV3MM/D3Vault/periphery/D3PoolQuota.sol:D3PoolQuota' ;;
    D3Proxy) printf '%s' 'contracts/DODOV3MM/periphery/D3Proxy.sol:D3Proxy' ;;
    D3RateManager) printf '%s' 'contracts/DODOV3MM/D3Vault/periphery/D3RateManager.sol:D3RateManager' ;;
    D3TokenTemplate) printf '%s' 'contracts/DODOV3MM/D3Vault/periphery/D3Token.sol:D3Token' ;;
    D3Vault) printf '%s' 'contracts/DODOV3MM/D3Vault/D3Vault.sol:D3Vault' ;;
    DODOApprove) printf '%s' 'contracts/mock/DODOApprove.sol:DODOApprove' ;;
    DODOApproveProxy) printf '%s' 'contracts/mock/DODOApproveProxy.sol:DODOApproveProxy' ;;
    MockChainlinkPriceFeed) printf '%s' 'contracts/mock/MockChainlinkPriceFeed.sol:MockChainlinkPriceFeed' ;;
    *) return 1 ;;
  esac
}

constructor_args() {
  local deployment_file="$1"
  local label="$2"
  case "${label}" in
    D3MMFactory)
      cast abi-encode \
        "constructor(address,address[],address[],address,address,address,address,address)" \
        "$(deployment_arg "${deployment_file}" 0)" \
        "$(deployment_array_literal "${deployment_file}" 1)" \
        "$(deployment_array_literal "${deployment_file}" 2)" \
        "$(deployment_arg "${deployment_file}" 3)" \
        "$(deployment_arg "${deployment_file}" 4)" \
        "$(deployment_arg "${deployment_file}" 5)" \
        "$(deployment_arg "${deployment_file}" 6)" \
        "$(deployment_arg "${deployment_file}" 7)"
      ;;
    D3Proxy)
      cast abi-encode \
        "constructor(address,address,address)" \
        "$(deployment_arg "${deployment_file}" 0)" \
        "$(deployment_arg "${deployment_file}" 1)" \
        "$(deployment_arg "${deployment_file}" 2)"
      ;;
    D3MMLiquidationRouter|DODOApproveProxy)
      cast abi-encode \
        "constructor(address)" \
        "$(deployment_arg "${deployment_file}" 0)"
      ;;
    MockUSDTUsd|MockUSDCUsd|MockcUSDTUsd|MockcUSDCUsd)
      cast abi-encode \
        "constructor(string,uint8)" \
        "$(deployment_arg "${deployment_file}" 0)" \
        "$(deployment_arg "${deployment_file}" 1)"
      ;;
    *)
      printf '%s' ""
      ;;
  esac
}
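`constructor_args` feeds `cast abi-encode` from positional `.args` entries in the deployment artifact, and `deployment_array_literal` turns a JSON array into the `[a,b]` literal form. An offline jq sketch against a made-up artifact (all values illustrative):

```shell
# Hypothetical hardhat-deploy style artifact; the shape mirrors what
# deployment_arg and deployment_array_literal read, the values are made up.
cat > /tmp/sample_deployment.json <<'EOF'
{
  "address": "0x2222222222222222222222222222222222222222",
  "args": [
    "0x3333333333333333333333333333333333333333",
    ["0x4444444444444444444444444444444444444444",
     "0x5555555555555555555555555555555555555555"]
  ]
}
EOF

# Scalar arg by position:
jq -r '.args[0]' /tmp/sample_deployment.json
# Array arg rendered as the "[a,b]" literal form:
jq -r '.args[1] | "[" + join(",") + "]"' /tmp/sample_deployment.json
```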
submit_v2_standard_input() {
  local label="$1"
  local deployment_file="$2"
  local addr="$3"
  local path="$4"
  local constructor="$5"
  local input compiler_version license_type response message

  input="$(deployment_solc_input_path "${deployment_file}")"
  [[ -f "${input}" ]] || fail "${label}: missing standard-input artifact ${input}"

  compiler_version="v0.8.16+commit.07a7930e"
  license_type="busl_1_1"

  response="$(
    curl --max-time 30 -fsS -X POST \
      -F "compiler_version=${compiler_version}" \
      -F "contract_name=${path}" \
      -F "autodetect_constructor_args=false" \
      -F "constructor_args=${constructor}" \
      -F "optimization_runs=200" \
      -F "is_optimization_enabled=true" \
      -F "license_type=${license_type}" \
      -F "files[0]=@${input};type=application/json" \
      "${BLOCKSCOUT_URL}/api/v2/smart-contracts/${addr}/verification/via/standard-input"
  )" || fail "${label}: Blockscout standard-input submission failed."

  message="$(jq -r '.message // empty' <<<"${response}")"
  if [[ "${message}" == "Smart-contract verification started" ]]; then
    ok "${label} standard-input verification submission accepted."
    return 0
  fi

  warn "${label} standard-input verification returned: ${response}"
  return 1
}
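`submit_v2_standard_input` decides success by the `.message` field of Blockscout's JSON reply. An offline sketch of that check (the response body here is hand-written for illustration, not a captured Blockscout reply):

```shell
# Hand-written sample reply; only the .message extraction is being shown.
response='{"message":"Smart-contract verification started"}'
message="$(jq -r '.message // empty' <<<"${response}")"
if [[ "${message}" == "Smart-contract verification started" ]]; then
  echo "accepted"
fi

# A missing field degrades to an empty string rather than the literal "null":
jq -r '.message // empty' <<<'{}'
```

The `// empty` alternative is what lets the caller use a plain `-z`/`-n` test instead of comparing against `"null"`.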
verify_one() {
  local label="$1"
  local deployment_file="$2"
  local expected_name="$3"
  local path="$4"
  local addr constructor

  should_handle "${label}" || return 0

  addr="$(deployment_address "${deployment_file}")"
  [[ "${addr}" != "null" && -n "${addr}" ]] || fail "${label}: missing address in ${deployment_file}.json"
  has_contract_bytecode "${addr}" || fail "${label}: no bytecode at ${addr}"

  if is_verified "${addr}" "${expected_name}"; then
    ok "${label} already verified on Blockscout."
    return 0
  fi

  if [[ "${STATUS_ONLY}" == "1" ]]; then
    if [[ "${label}" == "D3MMFactory" || "${label}" == "D3Proxy" ]] && has_bytecode_surface_only "${addr}"; then
      warn "${label} currently exposes only bytecode metadata on Blockscout."
      return 0
    fi
    warn "${label} is not currently verified on Blockscout."
    return 1
  fi

  start_proxy_if_needed
  constructor="$(constructor_args "${deployment_file}" "${label}")"

  log "Submitting Blockscout verification for ${label} (${addr})"
  if [[ "${label}" == "D3MMFactory" || "${label}" == "D3Proxy" ]]; then
    submit_v2_standard_input "${label}" "${deployment_file}" "${addr}" "${path}" "${constructor}" || true
  else
    (
      cd "${DODO_V3_DIR}"
      cmd=(
        forge verify-contract
        "${addr}"
        "${path}"
        --chain-id 138
        --verifier blockscout
        --verifier-url "${FORGE_VERIFIER_URL}"
        --rpc-url "${RPC_URL}"
        --flatten
        --skip-is-verified-check
      )
      if [[ -n "${constructor}" ]]; then
        cmd+=(--constructor-args "${constructor}")
      fi
      "${cmd[@]}" || true
    )
  fi

  if [[ "${NO_WAIT}" == "1" ]]; then
    ok "${label} verification submitted."
    return 0
  fi

  if wait_for_verification "${label}" "${addr}" "${expected_name}"; then
    return 0
  fi

  if [[ "${label}" == "D3MMFactory" || "${label}" == "D3Proxy" ]] && has_bytecode_surface_only "${addr}"; then
    warn "${label} submission was accepted, but Blockscout still exposes only bytecode metadata for this address."
    return 0
  fi

  fail "${label} did not show as verified on Blockscout after ${WAIT_ATTEMPTS} checks."
}

log "Chain 138 DODO v3 Blockscout verification"
log "DODO v3 dir: ${DODO_V3_DIR}"
log "RPC: ${RPC_URL}"
log "Explorer API: ${BLOCKSCOUT_API_BASE}"
log

verify_one D3Oracle D3Oracle D3Oracle "$(contract_path D3Oracle)"
verify_one D3Vault D3Vault D3Vault "$(contract_path D3Vault)"
verify_one DODOApprove DODOApprove DODOApprove "$(contract_path DODOApprove)"
verify_one DODOApproveProxy DODOApproveProxy DODOApproveProxy "$(contract_path DODOApproveProxy)"
verify_one D3MMFactory D3MMFactory D3MMFactory "$(contract_path D3MMFactory)"
verify_one D3Proxy D3Proxy D3Proxy "$(contract_path D3Proxy)"

log
ok "Chain 138 DODO v3 core pilot set is verified on Blockscout."
log "Note: the canonical D3MM pilot pool is clone-based and remains tracked through the verified factory set plus the runtime health check."
@@ -28,6 +28,8 @@ OUTPUT_DIR="$EVIDENCE_DIR/e2e-verification-$TIMESTAMP"
mkdir -p "$OUTPUT_DIR"

PUBLIC_IP="${PUBLIC_IP:-76.53.10.36}"
MISSION_CONTROL_SAMPLE_TX="${MISSION_CONTROL_SAMPLE_TX:-0x2f31d4f9a97be754b800f4af1a9eedf3b107d353bfa1a19e81417497a76c05c2}"
MISSION_CONTROL_SAMPLE_TOKEN="${MISSION_CONTROL_SAMPLE_TOKEN:-0x93E66202A11B1772E55407B32B44e5Cd8eda7f22}"
# Fourth NPMplus (dev/Codespaces, Gitea) — gitea.d-bis.org, dev.d-bis.org, codespaces.d-bis.org resolve here
PUBLIC_IP_FOURTH="${PUBLIC_IP_FOURTH:-76.53.10.40}"
# Set ACCEPT_ANY_DNS=1 to pass DNS if domain resolves to any IP (e.g. Fastly CNAME or Cloudflare Tunnel)
@@ -97,6 +99,7 @@ declare -A DOMAIN_TYPES_ALL=(
  ["rpc.public-0138.defi-oracle.io"]="rpc-http"
  ["rpc.defi-oracle.io"]="rpc-http"
  ["wss.defi-oracle.io"]="rpc-ws"
  ["info.defi-oracle.io"]="web"
  # Alltra / HYBX (tunnel → primary NPMplus 192.168.11.167)
  ["rpc-alltra.d-bis.org"]="rpc-http"
  ["rpc-alltra-2.d-bis.org"]="rpc-http"
@@ -190,7 +193,7 @@ else
fi

# Domains that are optional when any test fails (off-LAN, 502, unreachable); fail → skip so run passes.
_PUB_OPTIONAL_WHEN_FAIL="dapp.d-bis.org mifos.d-bis.org explorer.d-bis.org admin.d-bis.org dbis-admin.d-bis.org core.d-bis.org dbis-api.d-bis.org dbis-api-2.d-bis.org secure.d-bis.org d-bis.org www.d-bis.org members.d-bis.org developers.d-bis.org data.d-bis.org research.d-bis.org policy.d-bis.org ops.d-bis.org identity.d-bis.org status.d-bis.org sandbox.d-bis.org interop.d-bis.org sankofa.nexus www.sankofa.nexus phoenix.sankofa.nexus www.phoenix.sankofa.nexus the-order.sankofa.nexus www.the-order.sankofa.nexus studio.sankofa.nexus keycloak.sankofa.nexus admin.sankofa.nexus portal.sankofa.nexus dash.sankofa.nexus docs.d-bis.org blockscout.defi-oracle.io mim4u.org www.mim4u.org secure.mim4u.org training.mim4u.org rpc-http-pub.d-bis.org rpc.d-bis.org rpc2.d-bis.org rpc-core.d-bis.org rpc.public-0138.defi-oracle.io rpc.defi-oracle.io ws.rpc.d-bis.org ws.rpc2.d-bis.org"
_PUB_OPTIONAL_WHEN_FAIL="dapp.d-bis.org mifos.d-bis.org admin.d-bis.org dbis-admin.d-bis.org core.d-bis.org dbis-api.d-bis.org dbis-api-2.d-bis.org secure.d-bis.org d-bis.org www.d-bis.org members.d-bis.org developers.d-bis.org data.d-bis.org research.d-bis.org policy.d-bis.org ops.d-bis.org identity.d-bis.org status.d-bis.org sandbox.d-bis.org interop.d-bis.org sankofa.nexus www.sankofa.nexus phoenix.sankofa.nexus www.phoenix.sankofa.nexus the-order.sankofa.nexus www.the-order.sankofa.nexus keycloak.sankofa.nexus admin.sankofa.nexus portal.sankofa.nexus dash.sankofa.nexus docs.d-bis.org blockscout.defi-oracle.io mim4u.org www.mim4u.org secure.mim4u.org training.mim4u.org rpc-http-pub.d-bis.org rpc.d-bis.org rpc2.d-bis.org rpc-core.d-bis.org rpc.public-0138.defi-oracle.io rpc.defi-oracle.io ws.rpc.d-bis.org ws.rpc2.d-bis.org info.defi-oracle.io"
_PRIV_OPTIONAL_WHEN_FAIL="rpc-http-prv.d-bis.org rpc-ws-prv.d-bis.org rpc-fireblocks.d-bis.org ws.rpc-fireblocks.d-bis.org"
if [[ -z "${E2E_OPTIONAL_WHEN_FAIL:-}" ]]; then
  if [[ "$PROFILE" == "private" ]]; then
@@ -218,6 +221,18 @@ declare -A E2E_HTTPS_PATH=(
  ["data.d-bis.org"]="/v1/health"
)

# Some hosts intentionally redirect "/" to an app subpath; verify that behavior separately.
declare -A E2E_ROOT_REDIRECT_PATH=(
  ["studio.sankofa.nexus"]="/studio/"
)

# Reject placeholder bodies that should not count as application/API readiness.
declare -A E2E_HTTPS_REJECT_BODY=(
  ["core.d-bis.org"]="Index of /|Directory listing for /"
  ["dbis-api.d-bis.org"]="Index of /|Directory listing for /"
  ["dbis-api-2.d-bis.org"]="Index of /|Directory listing for /"
)

# Expected apex URL for NPM www → canonical 301/308 (Location must use this host; path from E2E_HTTPS_PATH must appear when set)
declare -A E2E_WWW_CANONICAL_BASE=(
  ["www.sankofa.nexus"]="https://sankofa.nexus"
@@ -242,6 +257,25 @@ e2e_www_redirect_location_ok() {
  return 0
}

e2e_root_redirect_location_ok() {
  local loc_val="$1" domain="$2" path="$3"
  local loc_lc domain_lc path_lc
  loc_lc=$(printf '%s' "$loc_val" | tr '[:upper:]' '[:lower:]')
  domain_lc=$(printf '%s' "$domain" | tr '[:upper:]' '[:lower:]')
  path_lc=$(printf '%s' "$path" | tr '[:upper:]' '[:lower:]')

  if [[ "$loc_lc" == "$path_lc" || "$loc_lc" == "${path_lc%/}" ]]; then
    return 0
  fi
  if [[ "$loc_lc" == "https://$domain_lc$path_lc" || "$loc_lc" == "https://$domain_lc${path_lc%/}" ]]; then
    return 0
  fi
  if [[ "$loc_lc" == "http://$domain_lc$path_lc" || "$loc_lc" == "http://$domain_lc${path_lc%/}" ]]; then
    return 0
  fi
  return 1
}
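`e2e_root_redirect_location_ok` accepts the bare path or an absolute http(s) URL on the same host, case-insensitively and with or without the trailing slash. A condensed standalone sketch of the same matching rules (`loc_matches` is a hypothetical helper):

```shell
# Accept: "/studio/", "/studio", or http(s)://<domain>/studio[/],
# all compared lowercase. loc_matches is a hypothetical helper name.
loc_matches() {
  local loc domain path
  loc=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  domain=$(printf '%s' "$2" | tr '[:upper:]' '[:lower:]')
  path=$(printf '%s' "$3" | tr '[:upper:]' '[:lower:]')
  [[ "$loc" == "$path" || "$loc" == "${path%/}" ]] && return 0
  [[ "$loc" == "https://$domain$path" || "$loc" == "https://$domain${path%/}" ]] && return 0
  [[ "$loc" == "http://$domain$path" || "$loc" == "http://$domain${path%/}" ]] && return 0
  return 1
}

loc_matches "/Studio/" studio.sankofa.nexus /studio/ && echo "relative ok"
loc_matches "https://studio.sankofa.nexus/studio" studio.sankofa.nexus /studio/ && echo "absolute ok"
loc_matches "https://evil.example/studio/" studio.sankofa.nexus /studio/ || echo "other host rejected"
```

Requiring the exact host on absolute Locations keeps a redirect to a different domain from passing the check.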
|
||||
# --list-endpoints: print selected profile endpoints and exit (no tests)
|
||||
if [[ "$LIST_ENDPOINTS" == "1" ]]; then
|
||||
echo ""
|
||||
@@ -425,6 +459,43 @@ test_domain() {
|
||||
result=$(echo "$result" | jq --arg code "$http_code" --arg time "$time_total" \
|
||||
--argjson hsts "$HAS_HSTS" --argjson csp "$HAS_CSP" --argjson xfo "$HAS_XFO" \
|
||||
'.tests.https = {"status": "pass", "http_code": ($code | tonumber), "response_time_seconds": ($time | tonumber), "has_hsts": $hsts, "has_csp": $csp, "has_xfo": $xfo}')
|
||||
|
||||
reject_body_regex="${E2E_HTTPS_REJECT_BODY[$domain]:-}"
|
||||
if [ -n "$reject_body_regex" ]; then
|
||||
body_file="$OUTPUT_DIR/${domain//./_}_https_body.txt"
|
||||
body_code=$(curl -s -k --connect-timeout 10 -o "$body_file" -w "%{http_code}" "$https_url" 2>/dev/null || echo "000")
|
||||
if [ "$body_code" -ge 200 ] && [ "$body_code" -lt 400 ] && grep -Eiq "$reject_body_regex" "$body_file"; then
|
||||
body_snippet=$(head -c 240 "$body_file" 2>/dev/null | tr '\n' ' ' | tr '\r' ' ' || echo "")
|
||||
log_error "HTTPS: $domain returned placeholder content despite HTTP $body_code"
result=$(echo "$result" | jq --arg code "$body_code" --arg time "$time_total" --arg snippet "$body_snippet" \
'.tests.https = {"status": "fail", "http_code": ($code | tonumber), "response_time_seconds": ($time | tonumber), "reason": "placeholder_content", "body_snippet": $snippet}')
fi
fi

root_redirect_path="${E2E_ROOT_REDIRECT_PATH[$domain]:-}"
if [ -n "$root_redirect_path" ]; then
log_info "Test 3a: Root redirect (https://${domain}/ -> ${root_redirect_path})"
root_response=$(curl -s -I -k --connect-timeout 10 -w "\n%{time_total}" "https://${domain}/" 2>&1 || echo "")
root_http_code=$(echo "$root_response" | head -1 | grep -oP '\d{3}' | head -1 || echo "")
root_time_total=$(echo "$root_response" | tail -1 | grep -E '^[0-9.]+$' || echo "0")
root_headers=$(echo "$root_response" | head -20)
root_location_hdr=$(echo "$root_headers" | grep -iE '^[Ll]ocation:' | head -1 | tr -d '\r' || echo "")
root_loc_val=$(printf '%s' "$root_location_hdr" | sed -E 's/^[Ll][Oo][Cc][Aa][Tt][Ii][Oo][Nn]:[[:space:]]*//' | sed 's/[[:space:]]*$//')

if [[ "$root_http_code" =~ ^30[1278]$ ]] && e2e_root_redirect_location_ok "$root_loc_val" "$domain" "$root_redirect_path"; then
log_success "Root redirect: $domain returned HTTP $root_http_code -> $root_loc_val"
result=$(echo "$result" | jq --arg code "$root_http_code" --arg time "$root_time_total" --arg loc "$root_loc_val" \
'.tests.root_redirect = {"status": "pass", "http_code": ($code | tonumber), "response_time_seconds": ($time | tonumber), "location": $loc}')
elif [ -n "$root_http_code" ]; then
log_error "Root redirect: $domain expected redirect to ${root_redirect_path}, got HTTP $root_http_code${root_loc_val:+ -> $root_loc_val}"
result=$(echo "$result" | jq --arg code "$root_http_code" --arg time "$root_time_total" --arg loc "$root_loc_val" --arg exp "$root_redirect_path" \
'.tests.root_redirect = {"status": "fail", "http_code": ($code | tonumber), "response_time_seconds": ($time | tonumber), "location": $loc, "expected_path": $exp}')
else
log_error "Root redirect: Failed to connect to https://${domain}/"
result=$(echo "$result" | jq --arg time "$root_time_total" --arg exp "$root_redirect_path" \
'.tests.root_redirect = {"status": "fail", "response_time_seconds": ($time | tonumber), "expected_path": $exp}')
fi
fi
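The Location-header extraction used by the redirect check can be exercised offline. A minimal sketch of the same grep/tr/sed pipeline against a canned header blob (the header values here are illustrative, not from a real probe):

```shell
# Canned `curl -I`-style headers with CRLF line endings, as curl emits them
headers=$'HTTP/2 301\r\nserver: nginx\r\nlocation: https://example.org/app/\r\n'
# Same pipeline as the script: isolate the Location header, drop the CR,
# strip the case-insensitive "Location:" prefix and trailing whitespace
loc=$(printf '%s' "$headers" | grep -iE '^location:' | head -1 | tr -d '\r' \
| sed -E 's/^[Ll][Oo][Cc][Aa][Tt][Ii][Oo][Nn]:[[:space:]]*//' | sed 's/[[:space:]]*$//')
echo "$loc"
```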
else
log_warn "HTTPS: $domain returned HTTP $http_code (Time: ${time_total}s)${https_path:+ (${https_url})}"
result=$(echo "$result" | jq --arg code "$http_code" --arg time "$time_total" \
@@ -448,6 +519,67 @@ test_domain() {
result=$(echo "$result" | jq --arg code "$api_code" '.tests.blockscout_api = {"status": "skip", "http_code": $code}')
fi
fi

if [ "$domain" = "explorer.d-bis.org" ]; then
log_info "Test 3c: Explorer command center"
cc_body_file="$OUTPUT_DIR/${domain//./_}_command_center_body.txt"
cc_code=$(curl -s -o "$cc_body_file" -w "%{http_code}" -k --connect-timeout 10 "https://$domain/chain138-command-center.html" 2>/dev/null || echo "000")
if [ "$cc_code" = "200" ] && grep -qE 'Visual Command Center|Mission Control|mainnet cW mint corridor' "$cc_body_file" 2>/dev/null; then
log_success "Explorer command center: $domain/chain138-command-center.html returned 200 with expected content"
result=$(echo "$result" | jq '.tests.explorer_command_center = {"status": "pass", "http_code": 200}')
else
log_error "Explorer command center: $domain/chain138-command-center.html failed (HTTP $cc_code)"
result=$(echo "$result" | jq --arg code "$cc_code" '.tests.explorer_command_center = {"status": "fail", "http_code": $code}')
fi

log_info "Test 3d: Mission Control SSE stream"
mc_stream_headers="$OUTPUT_DIR/${domain//./_}_mission_control_stream_headers.txt"
mc_stream_body="$OUTPUT_DIR/${domain//./_}_mission_control_stream_body.txt"
mc_stream_exit=0
# `curl ... || var=$?` preserves curl's real exit code; `if ! curl ...; then var=$?`
# would capture the negated status (always 0) and defeat the exit-28 check below.
curl -sS -N -D "$mc_stream_headers" -o "$mc_stream_body" -k --connect-timeout 10 --max-time 25 \
"https://$domain/explorer-api/v1/mission-control/stream" >/dev/null 2>&1 || mc_stream_exit=$?
if grep -qE '^HTTP/[0-9.]+ 200' "$mc_stream_headers" 2>/dev/null \
&& grep -qi '^content-type: text/event-stream' "$mc_stream_headers" 2>/dev/null \
&& { [ "$mc_stream_exit" -eq 0 ] || [ "$mc_stream_exit" -eq 28 ]; } \
&& grep -qE '^(event|data):' "$mc_stream_body" 2>/dev/null; then
log_success "Mission Control SSE: $domain returned text/event-stream with event data"
result=$(echo "$result" | jq '.tests.mission_control_stream = {"status": "pass", "http_code": 200}')
else
log_error "Mission Control SSE: $domain stream verification failed"
result=$(echo "$result" | jq --arg exit_code "$mc_stream_exit" '.tests.mission_control_stream = {"status": "fail", "curl_exit": $exit_code}')
fi
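The exit-code handling in the SSE test tolerates curl hitting `--max-time` on a stream that, by design, never closes. A small isolated sketch of that pattern (the function below is a hypothetical stand-in for a long-running `curl -N`):

```shell
# Stand-in for `curl -N --max-time 25` against an SSE endpoint that stays open:
# exit 28 is curl's "operation timed out", which is expected for a healthy stream
fake_stream_curl() { return 28; }

rc=0
fake_stream_curl || rc=$?
# Accept a clean exit (stream closed) or a timeout (stream stayed open until --max-time)
if [ "$rc" -eq 0 ] || [ "$rc" -eq 28 ]; then
echo "stream-ok rc=$rc"
else
echo "stream-fail rc=$rc"
fi
```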

log_info "Test 3e: Mission Control bridge trace"
mc_trace_body="$OUTPUT_DIR/${domain//./_}_mission_control_trace.json"
mc_trace_code=$(curl -s -o "$mc_trace_body" -w "%{http_code}" -k --connect-timeout 10 \
"https://$domain/explorer-api/v1/mission-control/bridge/trace?tx=$MISSION_CONTROL_SAMPLE_TX" 2>/dev/null || echo "000")
if [ "$mc_trace_code" = "200" ] \
&& grep -q '"tx_hash"' "$mc_trace_body" 2>/dev/null \
&& grep -q '"from_registry"' "$mc_trace_body" 2>/dev/null \
&& grep -q '"to_registry"' "$mc_trace_body" 2>/dev/null; then
log_success "Mission Control bridge trace: $domain returned labeled trace JSON"
result=$(echo "$result" | jq '.tests.mission_control_trace = {"status": "pass", "http_code": 200}')
else
log_error "Mission Control bridge trace: $domain failed (HTTP $mc_trace_code)"
result=$(echo "$result" | jq --arg code "$mc_trace_code" '.tests.mission_control_trace = {"status": "fail", "http_code": $code}')
fi

log_info "Test 3f: Mission Control liquidity proxy"
mc_liquidity_body="$OUTPUT_DIR/${domain//./_}_mission_control_liquidity.json"
mc_liquidity_code=$(curl -s -o "$mc_liquidity_body" -w "%{http_code}" -k --connect-timeout 10 \
"https://$domain/explorer-api/v1/mission-control/liquidity/token/$MISSION_CONTROL_SAMPLE_TOKEN/pools" 2>/dev/null || echo "000")
if [ "$mc_liquidity_code" = "200" ] \
&& grep -q '"pools"' "$mc_liquidity_body" 2>/dev/null \
&& grep -q '"dex"' "$mc_liquidity_body" 2>/dev/null; then
log_success "Mission Control liquidity: $domain returned pool JSON"
result=$(echo "$result" | jq '.tests.mission_control_liquidity = {"status": "pass", "http_code": 200}')
else
log_error "Mission Control liquidity: $domain failed (HTTP $mc_liquidity_code)"
result=$(echo "$result" | jq --arg code "$mc_liquidity_code" '.tests.mission_control_liquidity = {"status": "fail", "http_code": $code}')
fi
fi
fi

# Test 4: RPC HTTP Request
@@ -530,6 +662,7 @@ test_domain() {
(if .tests.dns and (.tests.dns.status == "fail") then .tests.dns.status = "skip" else . end) |
(if .tests.ssl and (.tests.ssl.status == "fail") then .tests.ssl.status = "skip" else . end) |
(if .tests.https and (.tests.https.status == "fail") then .tests.https.status = "skip" else . end) |
(if .tests.root_redirect and (.tests.root_redirect.status == "fail") then .tests.root_redirect.status = "skip" else . end) |
(if .tests.rpc_http and (.tests.rpc_http.status == "fail") then .tests.rpc_http.status = "skip" else . end)
')
fi
@@ -548,6 +681,32 @@ for domain in "${!DOMAIN_TYPES[@]}"; do
fi
done

# Supplemental: same-origin token-aggregation on info.defi-oracle.io (nginx proxy → Blockscout).
# Does not change domain-level JSON; logs only. Set E2E_STRICT_INFO_TOKEN_AGGREGATION=1 to exit 1 on failure.
if [[ "$PROFILE" != "private" ]] && [[ -n "${DOMAIN_TYPES[info.defi-oracle.io]:-}" ]]; then
log_info "Supplemental: info.defi-oracle.io /token-aggregation/api/v1/networks"
_info_ta_url="https://info.defi-oracle.io/token-aggregation/api/v1/networks?refresh=1"
_info_ta_body="$OUTPUT_DIR/info_defi_oracle_io_token_aggregation_networks.json"
_info_ta_code=$(curl -sS -k --connect-timeout 15 -o "$_info_ta_body" -w '%{http_code}' "$_info_ta_url" 2>/dev/null || echo "000")
_info_ta_ok=0
if [[ "$_info_ta_code" == "200" ]]; then
if command -v jq >/dev/null 2>&1 && jq -e '.networks | type == "array"' "$_info_ta_body" >/dev/null 2>&1; then
_info_ta_ok=1
elif grep -q '"networks"' "$_info_ta_body" 2>/dev/null; then
_info_ta_ok=1
fi
fi
if [[ "$_info_ta_ok" -eq 1 ]]; then
log_success "Supplemental: info token-aggregation returned HTTP 200 with .networks"
else
log_warn "Supplemental: info token-aggregation check failed (HTTP ${_info_ta_code}; expected JSON .networks). Fix: sync-info-defi-oracle-to-vmid2400.sh + NPM → .218. See check-info-defi-oracle-public.sh"
if [[ "${E2E_STRICT_INFO_TOKEN_AGGREGATION:-0}" == "1" ]]; then
log_error "E2E_STRICT_INFO_TOKEN_AGGREGATION=1: failing run due to token-aggregation check"
exit 1
fi
fi
fi
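The jq-or-grep validation in the supplemental check degrades gracefully when `jq` is not installed. The same pattern in isolation, against a canned body (the JSON here is an example, not a live response):

```shell
body='{"networks":[{"id":138,"name":"Chain 138"}]}'
ok=0
if command -v jq >/dev/null 2>&1 \
&& printf '%s' "$body" | jq -e '.networks | type == "array"' >/dev/null 2>&1; then
ok=1   # strict path: jq confirms .networks is an array
elif printf '%s' "$body" | grep -q '"networks"'; then
ok=1   # loose fallback: the key is at least present
fi
echo "ok=$ok"
```

Either branch yields `ok=1` for this body, so the check is deterministic whether or not jq is available.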

# Combine all results (one JSON object per line for robustness)
printf '%s\n' "${E2E_RESULTS[@]}" | jq -s '.' > "$OUTPUT_DIR/all_e2e_results.json" 2>/dev/null || {
log_warn "jq merge failed; writing raw results"
@@ -558,11 +717,26 @@ printf '%s\n' "${E2E_RESULTS[@]}" | jq -s '.' > "$OUTPUT_DIR/all_e2e_results.jso
TOTAL_TESTS=${#DOMAIN_TYPES[@]}
PASSED_DNS=$(echo "${E2E_RESULTS[@]}" | jq -s '[.[] | select(.tests.dns.status == "pass")] | length' 2>/dev/null || echo "0")
PASSED_HTTPS=$(echo "${E2E_RESULTS[@]}" | jq -s '[.[] | select(.tests.https.status == "pass")] | length' 2>/dev/null || echo "0")
FAILED_TESTS=$(echo "${E2E_RESULTS[@]}" | jq -s '[.[] | select(.tests.dns.status == "fail" or .tests.https.status == "fail" or .tests.rpc_http.status == "fail")] | length' 2>/dev/null || echo "0")
FAILED_TESTS=$(echo "${E2E_RESULTS[@]}" | jq -s '[.[] | select(
.tests.dns.status == "fail" or
.tests.https.status == "fail" or
.tests.root_redirect.status == "fail" or
.tests.rpc_http.status == "fail" or
.tests.explorer_command_center.status == "fail" or
.tests.mission_control_stream.status == "fail" or
.tests.mission_control_trace.status == "fail" or
.tests.mission_control_liquidity.status == "fail"
)] | length' 2>/dev/null || echo "0")
FAILED_DNS=$(echo "${E2E_RESULTS[@]}" | jq -s '[.[] | select(.tests.dns.status == "fail")] | length' 2>/dev/null || echo "0")
FAILED_HTTPS=$(echo "${E2E_RESULTS[@]}" | jq -s '[.[] | select(.tests.https.status == "fail")] | length' 2>/dev/null || echo "0")
FAILED_RPC=$(echo "${E2E_RESULTS[@]}" | jq -s '[.[] | select(.tests.rpc_http.status == "fail")] | length' 2>/dev/null || echo "0")
SKIPPED_OPTIONAL=$(echo "${E2E_RESULTS[@]}" | jq -s '[.[] | select(.tests.dns.status == "skip" or .tests.ssl.status == "skip" or .tests.https.status == "skip" or .tests.rpc_http.status == "skip")] | length' 2>/dev/null || echo "0")
FAILED_EXPLORER_SURFACES=$(echo "${E2E_RESULTS[@]}" | jq -s '[.[] | select(
.tests.explorer_command_center.status == "fail" or
.tests.mission_control_stream.status == "fail" or
.tests.mission_control_trace.status == "fail" or
.tests.mission_control_liquidity.status == "fail"
)] | length' 2>/dev/null || echo "0")
SKIPPED_OPTIONAL=$(echo "${E2E_RESULTS[@]}" | jq -s '[.[] | select(.tests.dns.status == "skip" or .tests.ssl.status == "skip" or .tests.https.status == "skip" or .tests.root_redirect.status == "skip" or .tests.rpc_http.status == "skip")] | length' 2>/dev/null || echo "0")
# When only RPC fails (edge blocks POST), treat as success if env set
E2E_SUCCESS_IF_ONLY_RPC_BLOCKED="${E2E_SUCCESS_IF_ONLY_RPC_BLOCKED:-0}"
ONLY_RPC_FAILED=0
@@ -597,14 +771,15 @@ cat >> "$REPORT_FILE" <<EOF
- **Total domains tested**: $TOTAL_TESTS
- **DNS tests passed**: $PASSED_DNS
- **HTTPS tests passed**: $PASSED_HTTPS
- **Explorer surface failures**: $FAILED_EXPLORER_SURFACES
- **Failed tests**: $FAILED_TESTS
- **Skipped / optional (not configured or unreachable)**: $SKIPPED_OPTIONAL
- **Average response time**: ${AVG_RESPONSE_TIME}s

## Results overview

| Domain | Type | DNS | SSL | HTTPS | RPC |
|--------|------|-----|-----|-------|-----|
| Domain | Type | DNS | SSL | HTTPS | Root | RPC | Explorer+ |
|--------|------|-----|-----|-------|------|-----|-----------|
EOF

for result in "${E2E_RESULTS[@]}"; do
@@ -613,8 +788,18 @@ for result in "${E2E_RESULTS[@]}"; do
dns_status=$(echo "$result" | jq -r '.tests.dns.status // "-"' 2>/dev/null || echo "-")
ssl_status=$(echo "$result" | jq -r '.tests.ssl.status // "-"' 2>/dev/null || echo "-")
https_status=$(echo "$result" | jq -r '.tests.https.status // "-"' 2>/dev/null || echo "-")
root_redirect_status=$(echo "$result" | jq -r '.tests.root_redirect.status // "-"' 2>/dev/null || echo "-")
rpc_status=$(echo "$result" | jq -r '.tests.rpc_http.status // "-"' 2>/dev/null || echo "-")
echo "| $domain | $domain_type | $dns_status | $ssl_status | $https_status | $rpc_status |" >> "$REPORT_FILE"
# Drop nulls first so domains without explorer tests render "-"; jq negation is
# postfix (`| not`), a prefix `not (...)` fails to compile and would force the "-" fallback.
explorer_extra_status=$(echo "$result" | jq -r '
[ .tests.explorer_command_center.status?, .tests.mission_control_stream.status?, .tests.mission_control_trace.status?, .tests.mission_control_liquidity.status? ] | map(select(. != null)) as $s
| if ($s | length) == 0 then "-"
elif ($s | any(. == "fail")) then "fail"
elif ($s | any(. == "warn" or . == "warning")) then "warn"
elif ($s | any(. == "skip")) and (($s | any(. == "pass")) | not) then "skip"
else "pass"
end
' 2>/dev/null || echo "-")
echo "| $domain | $domain_type | $dns_status | $ssl_status | $https_status | $root_redirect_status | $rpc_status | $explorer_extra_status |" >> "$REPORT_FILE"
done

cat >> "$REPORT_FILE" <<EOF
@@ -630,8 +815,13 @@ for result in "${E2E_RESULTS[@]}"; do
dns_status=$(echo "$result" | jq -r '.tests.dns.status // "unknown"' 2>/dev/null || echo "unknown")
ssl_status=$(echo "$result" | jq -r '.tests.ssl.status // "unknown"' 2>/dev/null || echo "unknown")
https_status=$(echo "$result" | jq -r '.tests.https.status // "unknown"' 2>/dev/null || echo "unknown")
root_redirect_status=$(echo "$result" | jq -r '.tests.root_redirect.status // "unknown"' 2>/dev/null || echo "unknown")
rpc_status=$(echo "$result" | jq -r '.tests.rpc_http.status // "unknown"' 2>/dev/null || echo "unknown")
blockscout_api_status=$(echo "$result" | jq -r '.tests.blockscout_api.status // "unknown"' 2>/dev/null || echo "unknown")
explorer_command_center_status=$(echo "$result" | jq -r '.tests.explorer_command_center.status // "unknown"' 2>/dev/null || echo "unknown")
mission_control_stream_status=$(echo "$result" | jq -r '.tests.mission_control_stream.status // "unknown"' 2>/dev/null || echo "unknown")
mission_control_trace_status=$(echo "$result" | jq -r '.tests.mission_control_trace.status // "unknown"' 2>/dev/null || echo "unknown")
mission_control_liquidity_status=$(echo "$result" | jq -r '.tests.mission_control_liquidity.status // "unknown"' 2>/dev/null || echo "unknown")

echo "" >> "$REPORT_FILE"
echo "### $domain" >> "$REPORT_FILE"
@@ -641,9 +831,24 @@ for result in "${E2E_RESULTS[@]}"; do
if [ "$https_status" != "unknown" ]; then
echo "- HTTPS: $https_status" >> "$REPORT_FILE"
fi
if [ "$root_redirect_status" != "unknown" ]; then
echo "- Root redirect: $root_redirect_status" >> "$REPORT_FILE"
fi
if [ "$blockscout_api_status" != "unknown" ]; then
echo "- Blockscout API: $blockscout_api_status" >> "$REPORT_FILE"
fi
if [ "$explorer_command_center_status" != "unknown" ]; then
echo "- Command Center: $explorer_command_center_status" >> "$REPORT_FILE"
fi
if [ "$mission_control_stream_status" != "unknown" ]; then
echo "- Mission Control stream: $mission_control_stream_status" >> "$REPORT_FILE"
fi
if [ "$mission_control_trace_status" != "unknown" ]; then
echo "- Mission Control trace: $mission_control_trace_status" >> "$REPORT_FILE"
fi
if [ "$mission_control_liquidity_status" != "unknown" ]; then
echo "- Mission Control liquidity: $mission_control_liquidity_status" >> "$REPORT_FILE"
fi
if [ "$rpc_status" != "unknown" ]; then
echo "- RPC: $rpc_status" >> "$REPORT_FILE"
fi
@@ -661,11 +866,11 @@ cat >> "$REPORT_FILE" <<EOF

## Notes

- **Optional domains:** Domains in \`E2E_OPTIONAL_WHEN_FAIL\` (default: many d-bis.org/sankofa/mim4u/rpc) have any fail treated as skip so the run passes when off-LAN or services unreachable. Set \`E2E_OPTIONAL_WHEN_FAIL=\` (empty) for strict mode.
- **Optional domains:** Domains in \`E2E_OPTIONAL_WHEN_FAIL\` (default: many d-bis.org/sankofa/mim4u/rpc) have any fail treated as skip so the run passes when off-LAN or services unreachable. The canonical explorer \`explorer.d-bis.org\` is intentionally **not** in that list anymore. Set \`E2E_OPTIONAL_WHEN_FAIL=\` (empty) for strict mode.
- WebSocket tests require \`wscat\` tool: \`npm install -g wscat\`
- OpenSSL fetch uses \`timeout\` (\`E2E_OPENSSL_TIMEOUT\` / \`E2E_OPENSSL_X509_TIMEOUT\`, defaults 15s / 5s) so \`openssl s_client\` cannot hang indefinitely
- Internal connectivity tests require access to NPMplus container
- Explorer (explorer.d-bis.org): optional Blockscout API check; use \`SKIP_BLOCKSCOUT_API=1\` to skip when backend is unreachable (e.g. off-LAN). Fix runbook: docs/03-deployment/BLOCKSCOUT_FIX_RUNBOOK.md
- Explorer (explorer.d-bis.org): verifies Blockscout API, \`/chain138-command-center.html\`, and Mission Control stream / trace / liquidity endpoints. Use \`SKIP_BLOCKSCOUT_API=1\` only when you need to skip the Blockscout API sub-check specifically.

## Next Steps


@@ -72,8 +72,9 @@ echo "$IP_OUT"
echo "Default route: $GW_OUT"
echo ""

# Parse default gateway
ACTUAL_GW=$(echo "$GW_OUT" | awk '/default via/ {print $3}')
# Parse default gateway(s)
DEFAULT_ROUTE_COUNT=$(printf '%s\n' "$GW_OUT" | awk '/^default via / {count++} END {print count+0}')
ACTUAL_GW=$(printf '%s\n' "$GW_OUT" | awk '/^default via / {print $3; exit}')
if [[ -n "$ACTUAL_GW" ]]; then
if [[ "$ACTUAL_GW" == "$EXPECTED_GW" ]]; then
ok "Gateway is correct: $ACTUAL_GW"
@@ -84,6 +85,10 @@ else
warn "Could not determine default gateway"
fi

if (( DEFAULT_ROUTE_COUNT > 1 )); then
warn "Multiple default routes detected inside container ($DEFAULT_ROUTE_COUNT). NPMplus should normally have a single default route."
fi
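The awk parsing above can be checked against canned `ip route` output. A sketch with two example default routes (addresses illustrative):

```shell
GW_OUT=$'default via 192.168.11.1 dev eth0\ndefault via 192.168.11.254 dev eth1 metric 200\n192.168.11.0/24 dev eth0 scope link'
# Count default routes; `count+0` prints 0 instead of an empty string when none match
count=$(printf '%s\n' "$GW_OUT" | awk '/^default via / {count++} END {print count+0}')
# First default gateway only; `exit` stops awk after the first match
first=$(printf '%s\n' "$GW_OUT" | awk '/^default via / {print $3; exit}')
echo "routes=$count gw=$first"
```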

# Parse IPs (simple: lines with inet 192.168.11.x)
FOUND_IPS=()
while read -r line; do

@@ -1,7 +1,7 @@
#!/usr/bin/env bash
# Verify RPC on VMID 2101 is full approve (permissioned with all 5 validators) and in sync.
# Run after flush/cancel/deploy steps. Usage: ./scripts/verify/verify-rpc-2101-approve-and-sync.sh
# Optional: set RPC_URL_138 if different. SSH to r630-01/ml110 used only for validator service check.
# Optional: set RPC_URL_138 if different. SSH to r630-01/r630-03 used only for validator service check.

set -euo pipefail

@@ -13,7 +13,7 @@ RPC_CORE_1="${RPC_CORE_1:-192.168.11.211}"
RPC_URL="${RPC_URL_138:-http://${RPC_CORE_1}:8545}"
PROXMOX_USER="${PROXMOX_USER:-root}"
PROXMOX_R630="${PROXMOX_R630_01:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"
PROXMOX_ML110="${PROXMOX_ML110:-${PROXMOX_HOST_ML110:-192.168.11.10}}"
PROXMOX_R630_03="${PROXMOX_R630_03:-${PROXMOX_HOST_R630_03:-192.168.11.13}}"

# Five validator IPs (1000-1004)
VALIDATOR_IPS=(192.168.11.100 192.168.11.101 192.168.11.102 192.168.11.103 192.168.11.104)
@@ -101,11 +101,11 @@ else
((FAIL++)) || true
fi

# 5. Optional: validator service status (requires SSH to r630-01 and ml110)
log_info "4. Validator status (5/5 active) — requires SSH to r630-01 and ml110..."
# 5. Optional: validator service status (requires SSH to r630-01 and r630-03)
log_info "4. Validator status (5/5 active) — requires SSH to r630-01 and r630-03..."
ACTIVE=0
SSH_OK=false
for entry in "1000:$PROXMOX_R630" "1001:$PROXMOX_R630" "1002:$PROXMOX_R630" "1003:$PROXMOX_ML110" "1004:$PROXMOX_ML110"; do
for entry in "1000:$PROXMOX_R630" "1001:$PROXMOX_R630" "1002:$PROXMOX_R630" "1003:$PROXMOX_R630_03" "1004:$PROXMOX_R630_03"; do
IFS=':' read -r VMID HOST <<< "$entry"
STATUS=$(ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no "${PROXMOX_USER}@${HOST}" \
"pct exec $VMID -- systemctl is-active besu-validator 2>/dev/null" 2>/dev/null || echo "unknown")

@@ -16,9 +16,9 @@ CHECKSUM=false

# Same VMID -> host as deploy-besu-node-lists-to-all.sh
declare -A HOST_BY_VMID
for v in 1000 1001 1002 1500 1501 1502 2101 2500 2501 2502 2503 2504 2505; do HOST_BY_VMID[$v]="${PROXMOX_R630_01:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"; done
for v in 2201 2303 2401; do HOST_BY_VMID[$v]="${PROXMOX_R630_02:-${PROXMOX_HOST_R630_02:-192.168.11.12}}"; done
for v in 1003 1004 1503 1504 1505 1506 1507 1508 2102 2301 2304 2305 2306 2400 2402 2403; do HOST_BY_VMID[$v]="${PROXMOX_ML110:-${PROXMOX_HOST_ML110:-192.168.11.10}}"; done
for v in 1000 1001 1002 1500 1501 1502 2101 2420 2430 2440 2460 2470 2480; do HOST_BY_VMID[$v]="${PROXMOX_R630_01:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"; done
for v in 2201 2303 2305 2306 2307 2308 2401; do HOST_BY_VMID[$v]="${PROXMOX_R630_02:-${PROXMOX_HOST_R630_02:-192.168.11.12}}"; done
for v in 1003 1004 1503 1504 1505 1506 1507 1508 1509 1510 2102 2301 2304 2400 2402 2403; do HOST_BY_VMID[$v]="${PROXMOX_R630_03:-${PROXMOX_HOST_R630_03:-192.168.11.13}}"; done
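The VMID→host tables above follow a simple bash associative-array pattern. A reduced sketch with a few of the same VMIDs (hosts are the script's default addresses, and overlap resolves last-write-wins):

```shell
declare -A HOST_BY_VMID
# Assign groups of container VMIDs to their Proxmox host
for v in 1000 1001 1002 2101; do HOST_BY_VMID[$v]="192.168.11.11"; done   # r630-01
for v in 2201 2303 2401; do HOST_BY_VMID[$v]="192.168.11.12"; done        # r630-02
for v in 1003 1004 2400; do HOST_BY_VMID[$v]="192.168.11.13"; done        # r630-03
# Lookup with a safe default, as the verify loop does
vmid=2303
echo "vmid=$vmid host=${HOST_BY_VMID[$vmid]:-unknown}"
```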

SSH_OPTS="-o ConnectTimeout=6 -o StrictHostKeyChecking=no"
CANONICAL_STATIC_SUM=""
@@ -40,7 +40,7 @@ STATIC_PATH="/etc/besu/static-nodes.json"
PERMS_PATH="/etc/besu/permissions-nodes.toml"

FAIL=0
for vmid in 1000 1001 1002 1003 1004 1500 1501 1502 1503 1504 1505 1506 1507 1508 2101 2102 2201 2301 2303 2304 2305 2306 2400 2401 2402 2403 2500 2501 2502 2503 2504 2505; do
for vmid in 1000 1001 1002 1003 1004 1500 1501 1502 1503 1504 1505 1506 1507 1508 1509 1510 2101 2102 2201 2301 2303 2304 2305 2306 2307 2308 2400 2401 2402 2403 2420 2430 2440 2460 2470 2480; do
host="${HOST_BY_VMID[$vmid]:-}"
[[ -z "$host" ]] && continue
run=$(ssh $SSH_OPTS root@$host "pct exec $vmid -- bash -c 's=\"\"; p=\"\"; [ -f $STATIC_PATH ] && s=\"OK\" || s=\"MISSING\"; [ -f $PERMS_PATH ] && p=\"OK\" || p=\"MISSING\"; echo \"\$s \$p\"' 2>/dev/null" || echo "SKIP SKIP")

@@ -4,7 +4,7 @@
#
# Usage:
# bash scripts/verify/xdc-zero-chain138-preflight.sh
# XDC mainnet (default in .env.master.example): XDC_PARENTNET_URL=https://rpc.xinfin.network
# XDC mainnet: set XDC_PARENTNET_URL or rely on fallbacks (erpc.xinfin.network, Ankr, rpc.xinfin.network).
# Apothem: XDC_PARENTNET_URL=https://rpc.apothem.network bash scripts/verify/xdc-zero-chain138-preflight.sh
#
# Loads repo dotenv via scripts/lib/load-project-env.sh when sourced from repo root.
@@ -40,24 +40,89 @@ _rpc_chain_id() {
echo "OK $name ($url) eth_chainId=$hex"
}

_balance_eth() {
local url="$1"
local addr="$2"
local out
if ! out="$(curl -sS --connect-timeout 8 --max-time 20 -X POST "$url" \
-H 'Content-Type: application/json' \
-d "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBalance\",\"params\":[\"${addr}\",\"latest\"],\"id\":1}" 2>/dev/null)"; then
echo ""
return 1
fi
local hex
hex="$(printf '%s' "$out" | sed -n 's/.*"result":"\([^"]*\)".*/\1/p')"
[[ -n "$hex" ]] || return 1
cast to-unit "$hex" ether 2>/dev/null || printf '%s' "$hex"
}
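`_balance_eth` extracts the hex result with a plain sed capture rather than requiring jq. The same extraction against a canned `eth_getBalance` reply (the value is illustrative):

```shell
# Example JSON-RPC reply; 0xde0b6b3a7640000 is 1 ether in wei
out='{"jsonrpc":"2.0","id":1,"result":"0xde0b6b3a7640000"}'
# Capture whatever sits between the quotes after "result":
hex="$(printf '%s' "$out" | sed -n 's/.*"result":"\([^"]*\)".*/\1/p')"
echo "hex=$hex"
```

The sed approach keeps the preflight dependency-free; it assumes the reply is a single-line JSON object with a string `result`, which holds for standard JSON-RPC responses.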

# Echo chainId hex or empty if URL is not a working JSON-RPC endpoint.
_eth_chain_id_hex() {
local url="$1"
local out hex
if ! out="$(curl -sS --connect-timeout 8 --max-time 20 -X POST "$url" \
-H 'Content-Type: application/json' \
-d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' 2>/dev/null)"; then
return 1
fi
hex="$(printf '%s' "$out" | sed -n 's/.*"result":"\([^"]*\)".*/\1/p')"
[[ -n "$hex" ]] || return 1
printf '%s' "$hex"
}

# Pick first working XDC parent RPC; sets global PARENT to the URL that answered.
_resolve_xdc_parent_rpc() {
local primary="${XDC_PARENTNET_URL:-}"
[[ -z "$primary" ]] && primary="${PARENTNET_URL:-}"
local fallbacks=(
"https://erpc.xinfin.network"
"https://rpc.ankr.com/xdc"
"https://rpc.xinfin.network"
)
local candidates=()
[[ -n "$primary" ]] && candidates+=("$primary")
candidates+=("${fallbacks[@]}")
local seen="|" u hex
for u in "${candidates[@]}"; do
[[ -z "$u" || "$u" == "devnet" || "$u" == "testnet" ]] && continue
case "$seen" in *"|$u|"*) continue;; esac
seen="${seen}${u}|"
if hex="$(_eth_chain_id_hex "$u")"; then
echo "OK XDC parent ($u) eth_chainId=$hex"
PARENT="$u"
return 0
fi
done
echo "FAIL XDC parent: no working RPC (tried ${#candidates[@]} candidate URLs; set XDC_PARENTNET_URL to a healthy endpoint)"
return 1
}
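The pipe-delimited `seen` string in `_resolve_xdc_parent_rpc` is a dependency-free way to deduplicate candidates while preserving order. The same idiom in isolation (URLs are placeholders):

```shell
candidates=("https://rpc-a.example" "https://rpc-b.example" "https://rpc-a.example")
seen="|"
kept=()
for u in "${candidates[@]}"; do
# Skip anything already recorded between pipe delimiters
case "$seen" in *"|$u|"*) continue;; esac
seen="${seen}${u}|"
kept+=("$u")
done
echo "${kept[@]}"
```

This avoids sorting (so the user-supplied primary URL keeps priority) and works in plain POSIX-ish bash without associative arrays; it assumes candidate URLs never contain a `|` character.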

echo "=== XDC Zero Chain 138 pair — RPC preflight ==="

fail=0
PARENT="${XDC_PARENTNET_URL:-${PARENTNET_URL:-}}"
PEER="${XDC_ZERO_PEER_RPC_URL:-${RPC_URL_138:-${CHAIN138_RPC_URL:-${CHAIN138_RPC:-}}}}"

if [[ -n "$PARENT" ]]; then
_rpc_chain_id "$PARENT" "XDC_PARENTNET / PARENTNET" || fail=1
PARENT=""
if [[ -n "${XDC_PARENTNET_URL:-${PARENTNET_URL:-}}" ]]; then
echo "INFO XDC parent: trying XDC_PARENTNET_URL / PARENTNET_URL first, then public fallbacks if needed"
else
echo "WARN XDC_PARENTNET_URL and PARENTNET_URL unset — set one for parent checks."
echo "INFO XDC_PARENTNET_URL and PARENTNET_URL unset — trying public XDC mainnet RPC fallbacks"
fi
_resolve_xdc_parent_rpc || fail=1

if [[ -n "$PEER" ]]; then
_rpc_chain_id "$PEER" "Chain138 RPC_URL_138" || fail=1
PEER_OPERATOR="${XDC_ZERO_PEER_RPC_URL:-${RPC_URL_138:-${CHAIN138_RPC_URL:-${CHAIN138_RPC:-}}}}"
PEER_SERVICE="${CHAIN138_PUBLIC_RPC_URL:-${SUBNET_URL:-${RPC_URL_138_PUBLIC:-}}}"

if [[ -n "$PEER_OPERATOR" ]]; then
_rpc_chain_id "$PEER_OPERATOR" "Chain138 operator RPC_URL_138" || fail=1
else
echo "WARN RPC_URL_138 (or CHAIN138_RPC_URL) unset — set for Chain 138 checks."
fi

if [[ -n "$PEER_SERVICE" ]]; then
_rpc_chain_id "$PEER_SERVICE" "Chain138 service CHAIN138_PUBLIC_RPC_URL / SUBNET_URL / RPC_URL_138_PUBLIC" || fail=1
else
echo "WARN CHAIN138_PUBLIC_RPC_URL, SUBNET_URL, and RPC_URL_138_PUBLIC unset — set one for relayer/service checks."
fi

if [[ -n "${ETHEREUM_MAINNET_RPC:-}" ]]; then
_rpc_chain_id "$ETHEREUM_MAINNET_RPC" "ETHEREUM_MAINNET_RPC (optional)" || fail=1
fi
@@ -70,5 +135,28 @@ if [[ "$fail" -ne 0 ]]; then
echo "=== Preflight finished with failures ==="
exit 1
fi

if [[ -n "${PRIVATE_KEY:-}" ]]; then
DEPLOYER_ADDR="$(cast wallet address --private-key "$PRIVATE_KEY" 2>/dev/null || true)"
if [[ -n "$DEPLOYER_ADDR" ]]; then
echo ""
echo "Deployer readiness"
echo "  deployer_address: $DEPLOYER_ADDR"
if [[ -n "$PARENT" ]]; then
parent_bal="$(_balance_eth "$PARENT" "$DEPLOYER_ADDR" || true)"
if [[ -n "$parent_bal" ]]; then
echo "  parent_balance: $parent_bal"
if [[ "$parent_bal" == "0.000000000000000000" || "$parent_bal" == "0" || "$parent_bal" == "0x0" ]]; then
echo "  note: deployer has no native gas on parent; parent Endpoint/CSC deploy and registerChain are blocked until funded."
fi
fi
fi
if [[ -n "$PEER_OPERATOR" ]]; then
peer_bal="$(_balance_eth "$PEER_OPERATOR" "$DEPLOYER_ADDR" || true)"
[[ -n "$peer_bal" ]] && echo "  chain138_balance: $peer_bal"
fi
fi
fi

echo "=== Preflight finished ==="
echo "Next: deploy Endpoint/CSC per docs/03-deployment/CHAIN138_XDC_ZERO_BRIDGE_RUNBOOK.md"