feat(it-ops): LAN bootstrap for read API, NPM proxy, Cloudflare DNS
All checks were successful
Deploy to Phoenix / deploy (push) Successful in 6s
- bootstrap-sankofa-it-read-api-lan.sh: rsync /opt/proxmox, systemd + env file, repo .env keys, portal CT 7801 merge, weekly export timer; tolerate export exit 2
- upsert-it-read-api-proxy-host.sh, add-it-api-sankofa-dns.sh
- systemd example uses EnvironmentFile; docs, spec, AGENTS, read API README

Made-with: Cursor
33
AGENTS.md
@@ -11,7 +11,9 @@ Orchestration for Proxmox VE, Chain 138 (`smom-dbis-138/`), explorers, NPMplus,
| Need | Location |
|------|-----------|
| Doc index | `docs/MASTER_INDEX.md` |
| Chain 138 info site (`info.defi-oracle.io`) | `info-defi-oracle-138/` — `pnpm --filter info-defi-oracle-138 build`; deploy `dist/`; runbook `docs/04-configuration/INFO_DEFI_ORACLE_IO_DEPLOYMENT.md` |
| Chain 138 PMM swap quote (CLI) | `bash scripts/verify/pmm-swap-quote-chain138.sh --token-in … --amount-in …` — on-chain `querySellBase`/`querySellQuote` + suggested `minOut` for `DODOPMMIntegration.swapExactIn` (REST `/quote` is xy=k only). |
| Chain 138 info site (`info.defi-oracle.io`) | Dedicated nginx LXC (default VMID **2410** / `IP_INFO_DEFI_ORACLE_WEB`): `provision-info-defi-oracle-web-lxc.sh` then `sync-info-defi-oracle-to-vmid2400.sh` (sync asserts `/token-aggregation` proxy); NPM fleet `scripts/nginx-proxy-manager/update-npmplus-proxy-hosts-api.sh`; Cloudflare DNS `scripts/cloudflare/set-info-defi-oracle-dns-to-vmid2400-tunnel.sh`; cache `pnpm run cloudflare:purge-info-defi-oracle-cache`; runbook `docs/04-configuration/INFO_DEFI_ORACLE_IO_DEPLOYMENT.md`; `pnpm run verify:info-defi-oracle-public` (SPA routes including `/governance`, `/ecosystem`, `/documentation`, `/solacenet`, `llms.txt`, `agent-hints.json`, **same-origin** token-aggregation JSON; `INFO_SITE_BASE=…` optional); CI `info-defi-oracle-138.yml` (build) and `verify-info-defi-oracle-public.yml` (weekly + manual smoke); optional `pnpm run audit:info-defi-oracle-site` (`pnpm exec playwright install chromium`) |
| **SolaceNet + gateway rails** (dbis_core) | Hub map: `docs/04-configuration/SOLACENET_PUBLIC_HUB.md`. Backlog: `dbis_core/docs/solacenet/REMAINING_TASKS_FULL_LIST.md`. Gap IDs: `dbis_core/docs/solacenet/PROTOCOL_GAPS_CHECKLIST.md`. **Delta audit** (missing wiring, naming drift, CI): `dbis_core/docs/solacenet/AUDIT_GAPS_INCONSISTENCIES_MISSING.md`. Enforce rails runbook: `dbis_core/docs/solacenet/SOLACENET_GATEWAY_RAILS_ENFORCE_RUNBOOK.md`. Tests: `cd dbis_core && npm run test:gateway` (unit + HTTP integration). **Provider seed:** `cd dbis_core && npm run seed:gateway-provider` (needs `DATABASE_URL`). **Smoke (auth):** `bash scripts/verify/check-dbis-core-gateway-rails.sh`. **Outbox worker:** `cd dbis_core && npm run worker:gateway-outbox` (`DATABASE_URL`). CI: `.github/workflows/dbis-core-gateway-ci.yml`. API: `GET/POST /api/v1/gateway/rails*` (optional `SOLACENET_GATEWAY_RAILS_ENFORCE`) — `dbis_core/src/core/gateway/routes/gateway.routes.ts`. |
| cXAUC/cXAUT unit | 1 full token = 1 troy oz Au — `docs/11-references/EXPLORER_TOKEN_LIST_CROSSCHECK.md` (section 5.1) |
| GRU / UTRNF token naming (`c*` vs collateral prefix) | `docs/04-configuration/naming-conventions/README.md`, `docs/04-configuration/naming-conventions/02_DBIS_NAMESPACE_AND_UTRNF_MAPPING.md` |
| PMM mesh 6s tick | `smom-dbis-138/scripts/reserve/pmm-mesh-6s-automation.sh` — `docs/integration/ORACLE_AND_KEEPER_CHAIN138.md` (PMM mesh automation) |
@@ -19,22 +21,32 @@ Orchestration for Proxmox VE, Chain 138 (`smom-dbis-138/`), explorers, NPMplus,
| VMID / IP / FQDN | `docs/04-configuration/ALL_VMIDS_ENDPOINTS.md` |
| Proxmox Mail Proxy (LAN SMTP) | VMID **100** `192.168.11.32` (`proxmox-mail-gateway`) — submission **587** / **465**; see Mail Proxy note in `ALL_VMIDS_ENDPOINTS.md` |
| Spare R630 storage + optional tune-up | `scripts/proxmox/ensure-r630-spare-node-storage.sh`, `scripts/proxmox/provision-r630-03-six-ssd-thinpools.sh`, `scripts/proxmox/pve-spare-host-optional-tuneup.sh` · load balance / migrate: `docs/04-configuration/PROXMOX_LOAD_BALANCING_RUNBOOK.md` |
| Ops template + JSON | `docs/03-deployment/PROXMOX_VE_OPERATIONAL_DEPLOYMENT_TEMPLATE.md`, `config/proxmox-operational-template.json` |
| Live vs template (read-only SSH) | `bash scripts/verify/audit-proxmox-operational-template.sh` |
| Ops template + JSON | `docs/03-deployment/PROXMOX_VE_OPERATIONAL_DEPLOYMENT_TEMPLATE.md`, `config/proxmox-operational-template.json` (`proxmox_nodes[].mgmt_fqdn` = `*.sankofa.nexus`; `config/ip-addresses.conf` `PROXMOX_FQDN_*`) |
| Live vs template (read-only SSH) | `bash scripts/verify/audit-proxmox-operational-template.sh` — defaults to ML110 + **r630-01..04** (`PROXMOX_HOSTS` overrides) |
| Proxmox mgmt FQDN DNS + `/etc/hosts` snippet | `bash scripts/verify/check-proxmox-mgmt-fqdn.sh` (`--print-hosts`, optional `--ssh`) |
| Proxmox SSH check (all 5 nodes) | `bash scripts/security/ensure-proxmox-ssh-access.sh` (`--fqdn`, optional `--copy` for `ssh-copy-id`) |
| Proxmox cluster hardware poll (LAN, key SSH) | `bash scripts/verify/poll-proxmox-cluster-hardware.sh` — writes `reports/status/hardware_poll_*.txt`; companion narrative + ARP/edge: `reports/status/hardware_and_connected_inventory_*.md` |
| IT live inventory + IPAM drift (LAN, Phase 0) | `bash scripts/it-ops/export-live-inventory-and-drift.sh` → `reports/status/live_inventory.json`, `drift.json` (exit **2** only if duplicate guest IPs; merges `ip-addresses.conf` + `ALL_VMIDS_ENDPOINTS.md`). [SANKOFA_IT_OPS_LIVE_INVENTORY_SCRIPTS.md](docs/03-deployment/SANKOFA_IT_OPS_LIVE_INVENTORY_SCRIPTS.md). Spec: [SANKOFA_IT_OPERATIONS_CONTROLLER_SPEC.md](docs/02-architecture/SANKOFA_IT_OPERATIONS_CONTROLLER_SPEC.md) |
| IT inventory read API (Phase 0 stub) | `python3 services/sankofa-it-read-api/server.py` — GET `/health`, `/v1/inventory/live`, `/v1/inventory/drift`; optional `IT_READ_API_KEY` + `X-API-Key`; optional `IT_READ_API_CORS_ORIGINS` (comma-separated). [services/sankofa-it-read-api/README.md](services/sankofa-it-read-api/README.md), systemd [config/systemd/sankofa-it-read-api.service.example](config/systemd/sankofa-it-read-api.service.example) |
| **IT read API LAN bootstrap** | `bash scripts/deployment/bootstrap-sankofa-it-read-api-lan.sh` — rsync → `/opt/proxmox` on seed PVE, systemd + `/etc/sankofa-it-read-api.env`, repo `.env` + portal CT 7801 merge, weekly export timer on PVE. NPM: [upsert-it-read-api-proxy-host.sh](scripts/nginx-proxy-manager/upsert-it-read-api-proxy-host.sh); DNS: [add-it-api-sankofa-dns.sh](scripts/cloudflare/add-it-api-sankofa-dns.sh). [SANKOFA_IT_OPS_KEYCLOAK_PORTAL_NEXT_STEPS.md](docs/03-deployment/SANKOFA_IT_OPS_KEYCLOAK_PORTAL_NEXT_STEPS.md) |
| Keycloak realm role for portal `/it` | `bash scripts/deployment/keycloak-sankofa-ensure-it-admin-role.sh` (CT 7802 via SSH); assign `sankofa-it-admin` to IT users. Portal: `IT_READ_API_URL` + optional `IT_READ_API_KEY` on CT 7801. Weekly export timer: [config/systemd/sankofa-it-inventory-export.timer.example](config/systemd/sankofa-it-inventory-export.timer.example) |
| IT admin UI next steps (Keycloak + portal `/it`) | [docs/03-deployment/SANKOFA_IT_OPS_KEYCLOAK_PORTAL_NEXT_STEPS.md](docs/03-deployment/SANKOFA_IT_OPS_KEYCLOAK_PORTAL_NEXT_STEPS.md) |
| Config validation | `bash scripts/validation/validate-config-files.sh` (optional: `python3 -m pip install check-jsonschema` for `validate-dbis-institutional-schemas.sh`, `validate-naming-convention-registry-examples.sh`, `validate-jvmtm-regulatory-closure-schemas.sh`, `validate-reserve-provenance-package.sh`; includes explorer Chain 138 inventory vs `config/smart-contracts-master.json`) |
| Chain 138 contract addresses (JSON + bytecode) | `config/smart-contracts-master.json` — `bash scripts/verify/check-contracts-on-chain-138.sh` (expect **75/75** when Core RPC reachable; jq uses JSON when file present) |
| OMNL + Core + Chain 138 + RTGS + Smart Vaults | `docs/03-deployment/OMNL_DBIS_CORE_CHAIN138_SMART_VAULT_RTGS_RUNBOOK.md`; identifiers (UETR vs DLT-primary): `docs/03-deployment/OJK_BI_AUDIT_JVMTM_REMEDIATION_AND_UETR_POLICY.md`; JVMTM Tables B/C/D closure matrix: `config/jvmtm-regulatory-closure/INAAUDJVMTM_2025_AUDIT_CLOSURE_MATRIX.md`; **dual-anchor attestation:** `scripts/omnl/omnl-chain138-attestation-tx.sh` (138 + optional mainnet via `ETHEREUM_MAINNET_RPC`); E2E zip: `AUDIT_PROOF.json` `chainAttestationMainnet`; machine-readable: `config/dbis-institutional/` |
| Blockscout address labels from registry | `bash scripts/verify/sync-blockscout-address-labels-from-registry.sh` (plan); `--apply` with `BLOCKSCOUT_*` env when explorer API confirmed |
| ISO-20022 on-chain methodology + intake gateway | `docs/04-configuration/SMART_CONTRACTS_ISO20022_FIN_METHODOLOGY.md`, `ISO20022_INTAKE_GATEWAY_CONTRACT_MULTI_NETWORK.md`; Rail: `docs/dbis-rail/ISO_GATEWAY_AND_RELAYER_SPEC.md` |
| FQDN / NPM E2E verifier | `bash scripts/verify/verify-end-to-end-routing.sh --profile=public` — inventory: `docs/04-configuration/E2E_ENDPOINTS_LIST.md`. Gitea Actions URLs (no API): `bash scripts/verify/print-gitea-actions-urls.sh` |
| **Gitea** (org forge **VMID 104**, upgrades, NPM) | `docs/04-configuration/GITEA_PLATFORM_AND_UPGRADE_RUNBOOK.md` — `scripts/operator/upgrade-gitea-lxc.sh` (`--dry-run`, `GITEA_VERSION=`); `config/ip-addresses.conf` **`IP_GITEA_INFRA`**, **`GITEA_PUBLIC_UPSTREAM_*`**; `scripts/nginx-proxy-manager/update-npmplus-proxy-hosts-api.sh`, `update-npmplus-fourth-proxy-hosts.sh` |
| Chain 138 LAN RPC health + nonce/gas parity | `bash scripts/verify/check-chain138-rpc-health.sh` (fleet + public capability); `bash scripts/verify/check-chain138-rpc-nonce-gas-parity.sh` (LAN: aligned chainId / deployer nonces / gasPrice); offline/CI: `bash scripts/verify/self-test-chain138-rpc-verify.sh`; shared VMID list: `scripts/lib/chain138-lan-rpc-inventory.sh` |
| RPC FQDN batch (`eth_chainId` + WSS) | `bash scripts/verify/check-rpc-fqdns-e2e.sh` — after DNS + `update-npmplus-proxy-hosts-api.sh`; includes `rpc-core.d-bis.org` |
| Submodule trees clean (CI / post-merge) | `bash scripts/verify/submodules-clean.sh` |
| Submodule + explorer remotes | `docs/00-meta/SUBMODULE_HYGIENE.md` |
| smom-dbis-138 `.env` in bash scripts | Prefer `source smom-dbis-138/scripts/lib/deployment/dotenv.sh` + `load_deployment_env --repo-root "$PROJECT_ROOT"` (trims RPC URL line endings). From an interactive shell: `source smom-dbis-138/scripts/load-env.sh`. Proxmox root scripts: `source scripts/lib/load-project-env.sh` (also trims common RPC vars). |
| Sankofa portal → CT 7801 (build + restart) | `./scripts/deployment/sync-sankofa-portal-7801.sh` (`--dry-run` first); default `NEXTAUTH_URL=https://portal.sankofa.nexus` via `sankofa-portal-ensure-nextauth-on-ct.sh` |
| Sankofa portal → CT 7801 (build + restart) | `./scripts/deployment/sync-sankofa-portal-7801.sh` (`--dry-run` first); default `NEXTAUTH_URL=https://portal.sankofa.nexus` via `sankofa-portal-ensure-nextauth-on-ct.sh`; IT `/it` env: `sankofa-portal-merge-it-read-api-env-from-repo.sh` (`IT_READ_API_URL` in repo `.env`) |
| Portal Keycloak OIDC secret on CT 7801 | After client exists: `./scripts/deployment/sankofa-portal-merge-keycloak-env-from-repo.sh` (needs `KEYCLOAK_CLIENT_SECRET` in repo `.env`; base64-safe over SSH) |
| Sankofa corporate web → CT 7806 | Provision: `./scripts/deployment/provision-sankofa-public-web-lxc-7806.sh`. Sync: `./scripts/deployment/sync-sankofa-public-web-to-ct.sh`. systemd: `config/systemd/sankofa-public-web.service`. Set `IP_SANKOFA_PUBLIC_WEB` in `.env`, then `scripts/nginx-proxy-manager/update-npmplus-proxy-hosts-api.sh` |
| CCIP relay (r630-01 host) | Unit: `config/systemd/ccip-relay.service` → `/etc/systemd/system/ccip-relay.service`; `systemctl enable --now ccip-relay` |
| CCIP relay (r630-01 host) | WETH lane: `config/systemd/ccip-relay.service`. Mainnet cW lane: `config/systemd/ccip-relay-mainnet-cw.service` (health `http://192.168.11.11:9863/healthz`). Public edge: set `CCIP_RELAY_MAINNET_CW_PUBLIC_HOST`, run `scripts/nginx-proxy-manager/update-npmplus-proxy-hosts-api.sh`, relay-only `scripts/nginx-proxy-manager/upsert-ccip-relay-mainnet-cw-proxy-host.sh`, or SSH hop `scripts/nginx-proxy-manager/upsert-ccip-relay-mainnet-cw-via-ssh.sh`; DNS `scripts/cloudflare/configure-relay-mainnet-cw-dns.sh`. Use `NPM_URL=https://…:81` for API scripts (HTTP on :81 301s to HTTPS). |
| XDC Zero + Chain 138 (parallel to CCIP) | `bash scripts/xdc-zero/run-xdc-zero-138-operator-sequence.sh` · `docs/03-deployment/CHAIN138_XDC_ZERO_BRIDGE_RUNBOOK.md` · `CHAIN138_XDC_ZERO_DEPLOYMENT_TROUBLESHOOTING.md` · `config/xdc-zero/` · `scripts/xdc-zero/` · systemd `node dist/server.js` template — **XDC mainnet RPC:** `https://rpc.xinfin.network` (chain id 50; more endpoints: [chainid.network/chain/50](https://chainid.network/chain/50/)); **Chain 138 side:** Core `http://192.168.11.211:8545` is operator-only, relayer/services use `https://rpc-http-pub.d-bis.org` |
| OP Stack Standard Rollup (Ethereum mainnet, Superchain) | `docs/03-deployment/OP_STACK_STANDARD_ROLLUP_SUPERCHAIN_RUNBOOK.md` · optional L2↔Besu notes `docs/03-deployment/OP_STACK_L2_AND_BESU138_BRIDGE_NOTES.md` · `config/op-stack-superchain/` · `scripts/op-stack/` (e.g. `fetch-standard-mainnet-toml.sh`, checklist scripts) · `config/systemd/op-stack-*.example.service` — **distinct L2 chain ID from Besu 138**; follow [Optimism superchain-registry](https://github.com/ethereum-optimism/superchain-registry) for listing |
| Wormhole protocol (LLM / MCP) vs Chain 138 facts | Wormhole NTT/Connect/VAAs/etc.: `docs/04-configuration/WORMHOLE_AI_RESOURCES_LLM_PLAYBOOK.md`, mirror `scripts/doc/sync-wormhole-ai-resources.sh`, MCP `mcp-wormhole-docs/` + `docs/04-configuration/MCP_SETUP.md`. **Chain 138 addresses, PMM, CCIP:** repo `docs/11-references/` + `docs/07-ccip/` — not Wormhole bundles. Cursor overlay: `.cursor/rules/wormhole-ai-resources.mdc`. |
@@ -43,7 +55,7 @@ Orchestration for Proxmox VE, Chain 138 (`smom-dbis-138/`), explorers, NPMplus,
| Portal login + Keycloak systemd + `.env` (prints password once) | `./scripts/deployment/enable-sankofa-portal-login-7801.sh` (`--dry-run` first); preserves `KEYCLOAK_*` from repo `.env` and runs merge script when `KEYCLOAK_CLIENT_SECRET` is set |
| Keycloak redirect URIs (portal + admin) | `./scripts/deployment/keycloak-sankofa-ensure-client-redirects-via-proxmox-pct.sh` (or `keycloak-sankofa-ensure-client-redirects.sh` for LAN URL) — needs `KEYCLOAK_ADMIN_PASSWORD` in `.env` |
| NPM TLS for hosts missing certs | `./scripts/request-npmplus-certificates.sh` — optional `CERT_DOMAINS_FILTER='portal\\.sankofa|admin\\.sankofa'` |
| Token-aggregation API (Chain 138) | `pnpm run verify:token-aggregation-api` — tokens, pools, quote, `bridge/routes`, networks. Deploy: `scripts/deploy-token-aggregation-for-publication.sh`. After edge deploy: `SKIP_BRIDGE_ROUTES=0 bash scripts/verify/check-public-report-api.sh https://explorer.d-bis.org`. |
| Token-aggregation API (Chain 138) | `pnpm run verify:token-aggregation-api` — tokens, pools, quote (prints `quoteEngine` when `jq` installed), `bridge/routes`, networks. Build + env: `scripts/deploy-token-aggregation-for-publication.sh` (sets `RPC_URL_138`, `TOKEN_AGGREGATION_CHAIN138_RPC_URL`, optional `TOKEN_AGGREGATION_PMM_*`). LAN push + restart: `scripts/deployment/push-token-aggregation-bundle-to-explorer.sh`. Nginx gaps: `scripts/fix-explorer-http-api-v1-proxy.sh` (apex `/api/v1/`), `scripts/fix-explorer-token-aggregation-api-v2-proxy.sh` (planner POST). Runbook: `docs/04-configuration/TOKEN_AGGREGATION_REPORT_API_RUNBOOK.md`. |
| **Chain 138 Open Snap** (MetaMask, open Snap permissions only; stable MetaMask requires MetaMask install allowlist for npm Snaps) | Source repo: [Defi-Oracle-Tooling/chain138-snap-minimal](https://github.com/Defi-Oracle-Tooling/chain138-snap-minimal). Vendored in this workspace: `metamask-integration/chain138-snap-minimal/`. Snap ID `npm:chain138-open-snap`; **`npm run verify`** = `npm audit --omit=dev` + build. **Publish:** token in `chain138-snap/.env` or `npm login`, then `./scripts/deployment/publish-chain138-open-snap.sh`. **Full-feature Snap** (API quotes, allowlist): `metamask-integration/chain138-snap/`. Explorer `/wallet` install works on stable MetaMask only after allowlisting; use Flask or local serve for dev. |
| Completable (no LAN) | `./scripts/run-completable-tasks-from-anywhere.sh` |
| Operator (LAN + secrets) | `./scripts/run-all-operator-tasks-from-lan.sh` (use `--skip-backup` if `NPM_PASSWORD` unset) |
@@ -54,6 +66,15 @@ Orchestration for Proxmox VE, Chain 138 (`smom-dbis-138/`), explorers, NPMplus,

Most submodules are **pinned commits**; `git submodule update --init --recursive` often leaves **detached HEAD** — that is normal. To **change** a submodule: check out a branch inside it, commit, **push the submodule first**, then commit and push the **parent** submodule pointer. Do not embed credentials in `git remote` URLs; use SSH or a credential helper. Explorer Gitea vs GitHub and token cleanup: `docs/00-meta/SUBMODULE_HYGIENE.md`.

## Production safety (Proxmox / shared config)

- **Scoped LXC starts:** use `scripts/operator/start-stopped-lxc-scoped.sh --host <PVE> --vmid <N> [--vmid …]`; default is **dry-run**; add **`--apply`** or **`PROXMOX_OPS_APPLY=1`** to mutate. Optional **`PROXMOX_OPS_ALLOWED_VMIDS`** enforces an allowlist. Do **not** use cluster-wide “start every stopped CT” patterns for production.
- **Maintenance scripts (SSH + pct):** set **`PROXMOX_SAFE_DEFAULTS=1`** so `fix-core-rpc-2101.sh`, `make-rpc-vmids-writable-via-ssh.sh`, and `ensure-legacy-monitor-networkd-via-ssh.sh` default to **plan-only** unless **`--apply`** or **`PROXMOX_OPS_APPLY=1`**. Without that env, behavior stays **legacy** (mutate unless `--dry-run`) so existing docs/commands keep working.
- **Guard helpers** for new SSH+pct scripts: `scripts/lib/proxmox-production-guard.sh`.
- **VMID → host** for automation: `get_host_for_vmid` in `scripts/lib/load-project-env.sh` must match live placement (`docs/04-configuration/ALL_VMIDS_ENDPOINTS.md`).
- **Shared config:** avoid drive-by edits to `config/ip-addresses.conf` or root `.env` when the task only affects one workload; prefer flags, workload-specific env files, or small dedicated scripts.
- Cursor overlay: `.cursor/rules/proxmox-production-safety.mdc`.

## Rules of engagement

- Review scripts before running; prefer `--dry-run` where supported.

@@ -12,10 +12,13 @@ After=network.target
Type=simple
User=root
WorkingDirectory=/opt/proxmox
Environment=IT_READ_API_HOST=127.0.0.1
Environment=IT_READ_API_PORT=8787
# Production pattern (see scripts/deployment/bootstrap-sankofa-it-read-api-lan.sh):
EnvironmentFile=-/etc/sankofa-it-read-api.env
# Or inline (dev):
# Environment=IT_READ_API_HOST=127.0.0.1
# Environment=IT_READ_API_PORT=8787
# Environment=IT_READ_API_KEY=change-me
# Optional browser CORS (prefer portal /api/it/* proxy): Environment=IT_READ_API_CORS_ORIGINS=https://portal.sankofa.nexus
# Optional: IT_READ_API_CORS_ORIGINS=https://portal.sankofa.nexus
ExecStart=/usr/bin/python3 /opt/proxmox/services/sankofa-it-read-api/server.py
Restart=on-failure
RestartSec=5

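Filled in, the `EnvironmentFile` referenced above looks like the fragment below. These values mirror what the bootstrap script writes (the key shown is a placeholder; the real file is generated with `openssl rand` and mode 600):

```ini
# /etc/sankofa-it-read-api.env — written by bootstrap-sankofa-it-read-api-lan.sh
IT_READ_API_HOST=0.0.0.0
IT_READ_API_PORT=8787
IT_READ_API_KEY=change-me
IT_READ_API_CORS_ORIGINS=https://portal.sankofa.nexus
```

The leading `-` in `EnvironmentFile=-…` tells systemd to start even if the file is missing, so the inline `Environment=` defaults above still apply in dev.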
@@ -165,13 +165,13 @@ The HTML controller should show a **joined view**: *public hostname → NPM →
6. **Keycloak automation (proxmox repo)** — `scripts/deployment/keycloak-sankofa-ensure-it-admin-role.sh` creates realm role **`sankofa-it-admin`**; operators still assign the role to users in Admin Console.
7. **Portal `/it` (Sankofa/portal repo, sibling clone)** — `src/app/it/page.tsx`, `src/app/api/it/*` (server proxy + `IT_READ_API_URL` / `IT_READ_API_KEY` on CT 7801); credentials **`ADMIN`** propagated into JWT roles for bootstrap (`src/lib/auth.ts`).
8. **LAN schedule examples** — `config/systemd/sankofa-it-inventory-export.timer.example` + `.service.example` for weekly `export-live-inventory-and-drift.sh`.
9. **LAN bootstrap + edge** — `scripts/deployment/bootstrap-sankofa-it-read-api-lan.sh` (read API on PVE `/opt/proxmox`, portal env merge, weekly timer on PVE); `scripts/nginx-proxy-manager/upsert-it-read-api-proxy-host.sh`; `scripts/cloudflare/add-it-api-sankofa-dns.sh`.

**Remaining (other repos / product):**

1. **Full BFF** with OIDC (Keycloak) and Postgres — **`dbis_core` vs dedicated CT** — decide once.
2. **Keycloak** — assign **`sankofa-it-admin`** to real IT users (role creation is scripted; mapping is manual policy).
3. **Deploy** — `sync-sankofa-portal-7801.sh` after pulling portal changes; set **`IT_READ_API_URL`** on the portal LXC.
4. **Schedule on LAN** — enable the timer on a host with repo + SSH to Proxmox; optional same cadence for `poll-proxmox-cluster-hardware.sh`.
5. **UniFi / NPM** live collectors — Phase 2 of this spec.
3. **TLS for `it-api.sankofa.nexus`** — NPM certificate after DNS propagation; duplicate guest IP remediation (export exit **2**) on the cluster.
4. **UniFi / NPM** live collectors — Phase 2 of this spec.

This spec does **not** replace change control; it gives you a **single product vision** so IP, VLAN, ports, hosts, licenses, and billing support evolve together instead of in silos.
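The weekly export schedule in items 8–9 follows the standard systemd timer pattern. A minimal sketch — unit names and calendar expression here are illustrative; the committed `.timer.example` is authoritative:

```ini
# sankofa-it-inventory-export.timer (illustrative sketch)
[Unit]
Description=Weekly Sankofa IT inventory + drift export

[Timer]
OnCalendar=weekly
Persistent=true
Unit=sankofa-it-inventory-export.service

[Install]
WantedBy=timers.target
```

`Persistent=true` makes systemd run a missed cycle at next boot, which suits a host that is not guaranteed to be up at the scheduled tick.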
@@ -4,6 +4,19 @@

---

## 0. Full LAN bootstrap (recommended)

One command after `.env` has `NPM_PASSWORD`, Cloudflare vars (for DNS), and SSH to the seed node:

1. **`bash scripts/deployment/bootstrap-sankofa-it-read-api-lan.sh`** — Refreshes inventory JSON, rsyncs a minimal tree to **`/opt/proxmox`** on **`PROXMOX_HOST`** (default r630-01), installs **`sankofa-it-read-api`** (bind **`0.0.0.0:8787`**, secrets in **`/etc/sankofa-it-read-api.env`**), upserts **`IT_READ_API_URL`** / **`IT_READ_API_KEY`** in repo **`.env`**, enables weekly **`sankofa-it-inventory-export.timer`** on the same host, runs **`sankofa-portal-merge-it-read-api-env-from-repo.sh`** for CT **7801**. Export exit code **2** (duplicate guest IPs) does **not** abort the bootstrap.
2. **NPM:** `bash scripts/nginx-proxy-manager/upsert-it-read-api-proxy-host.sh` — proxy **`it-api.sankofa.nexus`** → **`http://<r630-01>:8787`** (override with **`IT_READ_API_PUBLIC_HOST`**).
3. **DNS:** `bash scripts/cloudflare/add-it-api-sankofa-dns.sh` — **`it-api.sankofa.nexus`** A → **`PUBLIC_IP`** (proxied).
4. **TLS:** In NPM UI, request a certificate for **`it-api.sankofa.nexus`** after DNS propagates (or widen **`CERT_DOMAINS_FILTER`** in `scripts/request-npmplus-certificates.sh`).

**Note:** Operator workstations outside VLAN 11 may be firewalled from **`192.168.11.11:8787`**; portal CT **7801** and NPM on LAN should still reach it.
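Step 1's repo `.env` update is an idempotent key upsert: replace the line when the key exists, append it otherwise. A simplified stand-in for the Python helper embedded in the bootstrap script:

```python
import re

def upsert_env(text, key, value):
    """Replace an existing KEY=... line in-place, or append one at the end."""
    line = f"{key}={value}"
    if re.search(rf"^{re.escape(key)}=", text, flags=re.M):
        # Key present: rewrite only the first matching line.
        return re.sub(rf"^{re.escape(key)}=.*$", line, text, flags=re.M, count=1)
    if text and not text.endswith("\n"):
        text += "\n"
    return text + line + "\n"

env = upsert_env("", "IT_READ_API_URL", "http://192.168.11.11:8787")
env = upsert_env(env, "IT_READ_API_URL", "http://192.168.11.11:9999")  # replaced, not duplicated
print(env)  # → IT_READ_API_URL=http://192.168.11.11:9999
```

Running the bootstrap twice therefore leaves `.env` with a single `IT_READ_API_URL` / `IT_READ_API_KEY` pair rather than accumulating duplicates.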
---

## 1. Keycloak

1. Create realm role **`sankofa-it-admin`** (idempotent): `bash scripts/deployment/keycloak-sankofa-ensure-it-admin-role.sh` (needs `KEYCLOAK_ADMIN_PASSWORD` in repo `.env`, SSH to Proxmox, CT 7802). Then assign the role to IT staff in the Keycloak Admin Console (or use a group + token mapper if you prefer group claims).
@@ -26,9 +39,10 @@

---

## 3. NPM
## 3. NPM + public hostname

Add an **internal** proxy host (optional TLS) from a hostname such as `it-api.sankofa.nexus` (LAN-only DNS) to **`127.0.0.1:8787`** on the host running the read API, **or** bind the service on a dedicated CT IP and point NPM at that upstream.

- **Scripted:** `scripts/nginx-proxy-manager/upsert-it-read-api-proxy-host.sh` (upstream default **`192.168.11.11:8787`**).
- **Manual:** same idea — forward to the PVE host (or CT) where **`sankofa-it-read-api`** listens.

---
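Once the NPM proxy host and DNS record exist, the edge can be probed with a small client. A sketch — the hostname and key are deployment-specific, and the `X-API-Key` header name comes from the read API docs above:

```python
import urllib.request

def build_health_request(base_url, api_key=None):
    """GET /health request, attaching X-API-Key when a key is configured."""
    req = urllib.request.Request(base_url.rstrip("/") + "/health")
    if api_key:
        req.add_header("X-API-Key", api_key)
    return req

# On a connected operator host (deployment-specific values):
#   with urllib.request.urlopen(build_health_request("https://it-api.sankofa.nexus", KEY), timeout=5) as r:
#       print(r.status, r.read()[:200])
req = build_health_request("https://it-api.sankofa.nexus", "change-me")
print(req.full_url)  # → https://it-api.sankofa.nexus/health
```

The same helper works against the LAN upstream (`http://192.168.11.11:8787`) when debugging whether a failure is at the edge or at the service.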
54
scripts/cloudflare/add-it-api-sankofa-dns.sh
Executable file
@@ -0,0 +1,54 @@
#!/usr/bin/env bash
# Cloudflare DNS: it-api.sankofa.nexus → PUBLIC_IP (A record, proxied).
# Pair with scripts/nginx-proxy-manager/upsert-it-read-api-proxy-host.sh (NPM upstream r630-01:8787).
#
# Usage: bash scripts/cloudflare/add-it-api-sankofa-dns.sh [--dry-run]
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$PROJECT_ROOT"
source config/ip-addresses.conf 2>/dev/null || true
[ -f .env ] && set +u && source .env 2>/dev/null || true && set -u

ZONE_ID="${CLOUDFLARE_ZONE_ID_SANKOFA_NEXUS:-}"
PUBLIC_IP="${PUBLIC_IP:-76.53.10.36}"
NAME="${IT_READ_API_DNS_NAME:-it-api}"
DRY=false
[[ "${1:-}" == "--dry-run" ]] && DRY=true

if [ -n "${CLOUDFLARE_API_TOKEN:-}" ]; then
  AUTH_H=(-H "Authorization: Bearer $CLOUDFLARE_API_TOKEN")
elif [ -n "${CLOUDFLARE_API_KEY:-}" ] && [ -n "${CLOUDFLARE_EMAIL:-}" ]; then
  AUTH_H=(-H "X-Auth-Email: $CLOUDFLARE_EMAIL" -H "X-Auth-Key: $CLOUDFLARE_API_KEY")
else
  echo "Set CLOUDFLARE_API_TOKEN or (CLOUDFLARE_EMAIL + CLOUDFLARE_API_KEY) in .env" >&2
  exit 1
fi

[ -z "$ZONE_ID" ] && { echo "Set CLOUDFLARE_ZONE_ID_SANKOFA_NEXUS in .env" >&2; exit 1; }

FQDN="${NAME}.sankofa.nexus"
echo "DNS ${FQDN} → ${PUBLIC_IP} (zone sankofa.nexus)"

DATA=$(jq -n --arg name "$NAME" --arg content "$PUBLIC_IP" \
  '{type:"A",name:$name,content:$content,ttl:1,proxied:true}')

if $DRY; then
  echo "[dry-run] would POST/PUT Cloudflare record"
  exit 0
fi

EXISTING=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records?name=${FQDN}" \
  "${AUTH_H[@]}" -H "Content-Type: application/json")
RECORD_ID=$(echo "$EXISTING" | jq -r '.result[0].id // empty')

if [ -n "$RECORD_ID" ] && [ "$RECORD_ID" != "null" ]; then
  UPD=$(curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
    "${AUTH_H[@]}" -H "Content-Type: application/json" -d "$DATA")
  echo "$UPD" | jq -e '.success == true' >/dev/null 2>&1 && echo "Updated ${FQDN}" || { echo "$UPD" | jq . >&2; exit 1; }
else
  CR=$(curl -s -X POST "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records" \
    "${AUTH_H[@]}" -H "Content-Type: application/json" -d "$DATA")
  echo "$CR" | jq -e '.success == true' >/dev/null 2>&1 && echo "Created ${FQDN}" || { echo "$CR" | jq . >&2; exit 1; }
fi
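The script's create-vs-update branch reduces to: PUT to the record URL when a record ID was found, POST to the collection otherwise. As a pure function (URLs follow the Cloudflare v4 endpoints the script calls):

```python
def plan_dns_upsert(record_id, zone_id):
    """Mirror the script's branch: PUT to the record when it exists, POST otherwise."""
    base = f"https://api.cloudflare.com/client/v4/zones/{zone_id}/dns_records"
    if record_id:
        return ("PUT", f"{base}/{record_id}")
    return ("POST", base)

print(plan_dns_upsert(None, "ZONE"))  # → ('POST', 'https://api.cloudflare.com/client/v4/zones/ZONE/dns_records')
```

This is what makes the script idempotent: rerunning it updates the existing A record in place instead of creating duplicates.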
195
scripts/deployment/bootstrap-sankofa-it-read-api-lan.sh
Executable file
@@ -0,0 +1,195 @@
|
||||
#!/usr/bin/env bash
|
||||
# One-shot LAN bootstrap: IT read API on PVE, repo .env, portal CT 7801, optional weekly timer.
|
||||
#
|
||||
# - Refreshes inventory JSON locally, rsyncs minimal tree to /opt/proxmox on the seed host
|
||||
# - Generates IT_READ_API_KEY if missing in repo .env; sets IT_READ_API_URL for portal
|
||||
# - Installs /etc/sankofa-it-read-api.env + systemd unit (bind 0.0.0.0:8787)
|
||||
# - Runs sankofa-portal-merge-it-read-api-env-from-repo.sh
|
||||
# - Optional: weekly systemd timer for export-live-inventory-and-drift.sh on PVE
|
||||
#
|
||||
# Usage:
|
||||
# ./scripts/deployment/bootstrap-sankofa-it-read-api-lan.sh [--dry-run] [--no-timer] [--no-portal-merge]
|
||||
#
|
||||
# Env: PROXMOX_HOST (default 192.168.11.11), IT_READ_API_PORT (8787), IT_BOOTSTRAP_REMOTE_ROOT (/opt/proxmox)
|
||||
set -euo pipefail
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
|
||||
# shellcheck source=/dev/null
|
||||
source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true
|
||||
# shellcheck source=/dev/null
|
||||
source "${PROJECT_ROOT}/scripts/lib/load-project-env.sh" 2>/dev/null || true
|
||||
|
||||
PROXMOX_HOST="${PROXMOX_HOST:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"
|
||||
REMOTE_ROOT="${IT_BOOTSTRAP_REMOTE_ROOT:-/opt/proxmox}"
|
||||
PORT="${IT_READ_API_PORT:-8787}"
|
||||
SSH_OPTS=(-o BatchMode=yes -o ConnectTimeout=20 -o StrictHostKeyChecking=accept-new)
|
||||
RSYNC_SSH="ssh -o BatchMode=yes -o ConnectTimeout=20 -o StrictHostKeyChecking=accept-new"
|
||||
|
||||
DRY=false
|
||||
NO_TIMER=false
|
||||
NO_PORTAL=false
|
||||
for a in "$@"; do
|
||||
case "$a" in
|
||||
--dry-run) DRY=true ;;
|
||||
--no-timer) NO_TIMER=true ;;
|
||||
--no-portal-merge) NO_PORTAL=true ;;
|
||||
esac
|
||||
done
|
||||
|
||||
PUBLIC_URL="http://${PROXMOX_HOST}:${PORT}"
|
||||
CORS_ORIGIN="${IT_READ_API_CORS_ORIGINS:-https://portal.sankofa.nexus}"
|
||||
|
||||
log() { echo "[bootstrap-it-read-api] $*"; }
|
||||
|
||||
upsert_env_file() {
|
||||
local f="$1"
|
||||
shift
|
||||
python3 - "$f" "$@" <<'PY'
|
||||
import os, re, sys
|
||||
path = sys.argv[1]
|
||||
pairs = list(zip(sys.argv[2::2], sys.argv[3::2]))
|
||||
|
||||
def upsert(text: str, k: str, v: str) -> str:
|
||||
line = f"{k}={v}"
|
||||
if re.search(rf"^{re.escape(k)}=", text, flags=re.M):
|
||||
return re.sub(rf"^{re.escape(k)}=.*$", line, text, flags=re.M, count=1)
|
||||
if text and not text.endswith("\n"):
|
||||
text += "\n"
|
||||
return text + line + "\n"
|
||||
|
||||
text = open(path).read() if os.path.isfile(path) else ""
|
||||
for k, v in pairs:
|
||||
text = upsert(text, k, v)
|
||||
open(path, "w").write(text)
|
||||
PY
|
||||
}
|
||||
|
||||
if $DRY; then
|
||||
log "dry-run: would export inventory, rsync → root@${PROXMOX_HOST}:${REMOTE_ROOT}, systemd, portal merge"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
log "refresh inventory + drift (local → reports/status)"
|
||||
set +e
|
||||
bash "${PROJECT_ROOT}/scripts/it-ops/export-live-inventory-and-drift.sh"
|
||||
EX_INV=$?
|
||||
set -e
|
||||
if [[ "$EX_INV" -eq 2 ]]; then
|
||||
log "warning: export exited 2 (duplicate guest IPs on cluster); JSON still written — continuing"
|
||||
elif [[ "$EX_INV" -ne 0 ]]; then
|
||||
exit "$EX_INV"
|
||||
fi
|
||||
|
||||
API_KEY="${IT_READ_API_KEY:-}"
|
||||
if [[ -z "$API_KEY" ]]; then
|
||||
API_KEY="$(openssl rand -hex 32)"
|
||||
log "generated IT_READ_API_KEY (openssl rand -hex 32)"
|
||||
fi
|
||||
|
||||
ENV_LOCAL="${PROJECT_ROOT}/.env"
|
||||
touch "$ENV_LOCAL"
|
||||
upsert_env_file "$ENV_LOCAL" "IT_READ_API_URL" "$PUBLIC_URL" "IT_READ_API_KEY" "$API_KEY"
|
||||
chmod 600 "$ENV_LOCAL" 2>/dev/null || true
|
||||
log "upserted IT_READ_API_URL and IT_READ_API_KEY in repo .env (gitignored)"
|
||||
|
||||
log "rsync minimal repo tree → root@${PROXMOX_HOST}:${REMOTE_ROOT}"
ssh "${SSH_OPTS[@]}" "root@${PROXMOX_HOST}" \
  "mkdir -p '${REMOTE_ROOT}/config' '${REMOTE_ROOT}/scripts/it-ops' '${REMOTE_ROOT}/services/sankofa-it-read-api' '${REMOTE_ROOT}/docs/04-configuration' '${REMOTE_ROOT}/reports/status'"
cd "$PROJECT_ROOT"
rsync -az --delete -e "$RSYNC_SSH" ./config/ip-addresses.conf "root@${PROXMOX_HOST}:${REMOTE_ROOT}/config/"
rsync -az --delete -e "$RSYNC_SSH" ./scripts/it-ops/ "root@${PROXMOX_HOST}:${REMOTE_ROOT}/scripts/it-ops/"
rsync -az --delete -e "$RSYNC_SSH" ./services/sankofa-it-read-api/server.py "root@${PROXMOX_HOST}:${REMOTE_ROOT}/services/sankofa-it-read-api/"
rsync -az --delete -e "$RSYNC_SSH" ./docs/04-configuration/ALL_VMIDS_ENDPOINTS.md "root@${PROXMOX_HOST}:${REMOTE_ROOT}/docs/04-configuration/"
rsync -az -e "$RSYNC_SSH" ./reports/status/live_inventory.json ./reports/status/drift.json "root@${PROXMOX_HOST}:${REMOTE_ROOT}/reports/status/"

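Before pointing `--delete` at a live `/opt/proxmox` tree, a `--dry-run` pass shows exactly what would be copied or removed. A self-contained local sketch (temp dirs instead of SSH; `-n` is `--dry-run`, `-i` itemizes the plan; skips itself if rsync is absent):

```shell
#!/usr/bin/env bash
set -euo pipefail

command -v rsync >/dev/null 2>&1 || { echo "rsync not installed; skipping demo"; exit 0; }

src="$(mktemp -d)" dst="$(mktemp -d)"
echo new > "$src/keep.conf"
echo old > "$dst/stale.conf"        # present only in dest → --delete would remove it

# -a archive, -z compress, -i itemize, -n dry-run: prints the plan, changes nothing
rsync -azin --delete "$src/" "$dst/"

[ -f "$dst/stale.conf" ]            # still there: the dry-run made no changes
rm -rf "$src" "$dst"
```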
ENV_REMOTE="/etc/sankofa-it-read-api.env"

# shellcheck disable=SC2087
ssh "${SSH_OPTS[@]}" "root@${PROXMOX_HOST}" bash -s <<REMOTE
set -euo pipefail
umask 077
cat > ${ENV_REMOTE} <<ENVEOF
IT_READ_API_HOST=0.0.0.0
IT_READ_API_PORT=${PORT}
IT_READ_API_KEY=${API_KEY}
IT_READ_API_CORS_ORIGINS=${CORS_ORIGIN}
ENVEOF
chmod 600 ${ENV_REMOTE}
cat > /etc/systemd/system/sankofa-it-read-api.service <<SVCEOF
[Unit]
Description=Sankofa IT read API (live inventory JSON)
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=${REMOTE_ROOT}
EnvironmentFile=-${ENV_REMOTE}
ExecStart=/usr/bin/python3 ${REMOTE_ROOT}/services/sankofa-it-read-api/server.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
SVCEOF
systemctl daemon-reload
systemctl enable --now sankofa-it-read-api
systemctl is-active sankofa-it-read-api
REMOTE

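`EnvironmentFile=-${ENV_REMOTE}` (note the leading `-`) tells systemd the file is optional, and systemd parses it as plain `KEY=value` lines rather than as shell. For debugging outside systemd, a rough bash approximation of that loader (hypothetical helper, simplified: no quoting or line-continuation handling):

```shell
#!/usr/bin/env bash
set -euo pipefail

# load_env_file FILE — export KEY=value lines; a '-' prefix means a missing file is not an error.
load_env_file() {
  local file="${1#-}"                    # strip the optional-marker dash
  [ -f "$file" ] || return 0
  while IFS= read -r line; do
    case "$line" in
      ''|'#'*) continue ;;               # skip blanks and comments, as systemd does
      *=*)     export "${line%%=*}=${line#*=}" ;;
    esac
  done < "$file"
}

f="$(mktemp)"
printf 'IT_READ_API_PORT=8787\n# comment\nIT_READ_API_HOST=0.0.0.0\n' > "$f"
load_env_file "-$f"
echo "$IT_READ_API_HOST:$IT_READ_API_PORT"   # → 0.0.0.0:8787
rm -f "$f"
```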
log "verify read API (localhost on PVE — WAN/WSL may be firewalled)"
if ! ssh "${SSH_OPTS[@]}" "root@${PROXMOX_HOST}" "curl -sS -f -m 5 http://127.0.0.1:${PORT}/health" | head -c 200; then
  echo "health check failed on PVE" >&2
  exit 1
fi
echo ""
if ssh "${SSH_OPTS[@]}" "root@${PROXMOX_HOST}" "pct status 7801 &>/dev/null"; then
  log "verify from portal CT 7801 → ${PUBLIC_URL}/health"
  ssh "${SSH_OPTS[@]}" "root@${PROXMOX_HOST}" \
    "pct exec 7801 -- sh -c 'curl -sS -f -m 8 http://${PROXMOX_HOST}:${PORT}/health || true'" | head -c 200 || true
  echo ""
fi

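A one-shot `curl -f -m 5 …/health` fails hard if the service is still starting; right after `systemctl enable --now`, a short retry loop is gentler. Sketch with a stub probe in place of the curl-over-SSH call (hypothetical function names):

```shell
#!/usr/bin/env bash
set -euo pipefail

ATTEMPTS=0
probe() {                     # stand-in for: curl -sS -f -m 5 http://127.0.0.1:8787/health
  ATTEMPTS=$((ATTEMPTS + 1))
  [ "$ATTEMPTS" -ge 3 ]       # pretend the service answers on the 3rd try
}

wait_healthy() {
  local tries="$1" i
  for ((i = 1; i <= tries; i++)); do
    if probe; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    sleep 0.2
  done
  echo "health check failed" >&2
  return 1
}

wait_healthy 5   # → healthy after 3 attempt(s)
```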
if ! $NO_TIMER; then
  log "install weekly inventory timer on PVE"
  # shellcheck disable=SC2087
  ssh "${SSH_OPTS[@]}" "root@${PROXMOX_HOST}" bash -s <<REMOTE
set -euo pipefail
cat > /etc/systemd/system/sankofa-it-inventory-export.service <<EOF
[Unit]
Description=Export Proxmox live inventory and IPAM drift
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
User=root
WorkingDirectory=${REMOTE_ROOT}
ExecStart=/usr/bin/bash ${REMOTE_ROOT}/scripts/it-ops/export-live-inventory-and-drift.sh
EOF
cat > /etc/systemd/system/sankofa-it-inventory-export.timer <<'EOF'
[Unit]
Description=Timer — Proxmox live inventory + drift export (weekly)

[Timer]
OnCalendar=Sun *-*-* 03:30:00
Persistent=true

[Install]
WantedBy=timers.target
EOF
systemctl daemon-reload
systemctl enable --now sankofa-it-inventory-export.timer
systemctl list-timers sankofa-it-inventory-export.timer --no-pager || true
REMOTE
fi

if ! $NO_PORTAL; then
  log "merge IT_READ_API_* into portal CT 7801"
  export IT_READ_API_URL="$PUBLIC_URL"
  export IT_READ_API_KEY="$API_KEY"
  bash "${SCRIPT_DIR}/sankofa-portal-merge-it-read-api-env-from-repo.sh"
fi

log "done. Portal uses ${PUBLIC_URL} (server-side key in CT .env). Optional: NPM + DNS for it-api hostname → same upstream."
85  scripts/nginx-proxy-manager/upsert-it-read-api-proxy-host.sh  Executable file
@@ -0,0 +1,85 @@
#!/usr/bin/env bash
# Upsert NPMplus proxy host for the IT read API (Phase 0).
# Point DNS (e.g. it-api.sankofa.nexus) at your NPM public IP before using TLS.
#
# Env: NPM_URL, NPM_EMAIL, NPM_PASSWORD; optional:
#   IT_READ_API_PUBLIC_HOST   (default it-api.sankofa.nexus)
#   IT_READ_API_UPSTREAM_IP   (default PROXMOX_HOST_R630_01 or 192.168.11.11)
#   IT_READ_API_UPSTREAM_PORT (default 8787)
#   NPM_CURL_MAX_TIME         (default 300)
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true

_orig_npm_url="${NPM_URL:-}"
_orig_npm_email="${NPM_EMAIL:-}"
_orig_npm_password="${NPM_PASSWORD:-}"
if [ -f "$PROJECT_ROOT/.env" ]; then set +u; source "$PROJECT_ROOT/.env"; set -u; fi
[ -n "$_orig_npm_url" ] && NPM_URL="$_orig_npm_url"
[ -n "$_orig_npm_email" ] && NPM_EMAIL="$_orig_npm_email"
[ -n "$_orig_npm_password" ] && NPM_PASSWORD="$_orig_npm_password"
source "${PROJECT_ROOT}/config/ip-addresses.conf" 2>/dev/null || true

NPM_URL="${NPM_URL:-https://${IP_NPMPLUS:-192.168.11.167}:81}"
NPM_EMAIL="${NPM_EMAIL:-}"
NPM_PASSWORD="${NPM_PASSWORD:-}"
[ -z "$NPM_PASSWORD" ] && { echo "NPM_PASSWORD required (.env or export)" >&2; exit 1; }

DOMAIN="${IT_READ_API_PUBLIC_HOST:-it-api.sankofa.nexus}"
UP_IP="${IT_READ_API_UPSTREAM_IP:-${PROXMOX_HOST_R630_01:-192.168.11.11}}"
UP_PORT="${IT_READ_API_UPSTREAM_PORT:-8787}"

NPM_CURL_MAX_TIME="${NPM_CURL_MAX_TIME:-300}"
curl_npm() { curl -s -k -L --http1.1 --connect-timeout 30 --max-time "$NPM_CURL_MAX_TIME" "$@"; }

try_connect() { curl -s -k -L -o /dev/null --connect-timeout 5 --max-time 20 "$1" 2>/dev/null; }
if ! try_connect "$NPM_URL/"; then
  http_url="${NPM_URL/https:/http:}"
  try_connect "$http_url/" && NPM_URL="$http_url"
fi

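The connectivity probe falls back from `https:` to `http:` with a single parameter substitution (`${NPM_URL/https:/http:}` replaces the first match only). The same logic in isolation, with the curl probe stubbed out:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stub: pretend only the http:// endpoint answers (hypothetical behavior).
try_connect() { case "$1" in http://*) true ;; *) false ;; esac; }

NPM_URL="https://192.168.11.167:81"
if ! try_connect "$NPM_URL/"; then
  http_url="${NPM_URL/https:/http:}"        # first-match replace → http://192.168.11.167:81
  try_connect "$http_url/" && NPM_URL="$http_url"
fi
echo "$NPM_URL"   # → http://192.168.11.167:81
```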
AUTH_JSON=$(jq -n --arg identity "$NPM_EMAIL" --arg secret "$NPM_PASSWORD" '{identity:$identity,secret:$secret}')
TOKEN=$(curl_npm -X POST "$NPM_URL/api/tokens" -H "Content-Type: application/json" -d "$AUTH_JSON" | jq -r '.token // empty')
[ -n "$TOKEN" ] && [ "$TOKEN" != "null" ] || { echo "NPM auth failed" >&2; exit 1; }

ADV='add_header Referrer-Policy "strict-origin-when-cross-origin" always;'
PAYLOAD_ADD=$(jq -n \
  --arg domain "$DOMAIN" \
  --arg host "$UP_IP" \
  --argjson port "$UP_PORT" \
  --arg adv "$ADV" \
  '{domain_names:[$domain],forward_scheme:"http",forward_host:$host,forward_port:$port,allow_websocket_upgrade:false,block_exploits:true,certificate_id:null,ssl_forced:false,advanced_config:$adv}')

echo "Trying create (POST) for $DOMAIN -> http://${UP_IP}:${UP_PORT}"
RESP=$(curl_npm -X POST "$NPM_URL/api/nginx/proxy-hosts" -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -d "$PAYLOAD_ADD")
if echo "$RESP" | jq -e '.id' >/dev/null 2>&1; then
  echo "OK created id=$(echo "$RESP" | jq -r .id)"
  exit 0
fi

ERR_MSG=$(echo "$RESP" | jq -r '.message // .error.message // .error // empty' 2>/dev/null || echo "")
if ! echo "$ERR_MSG" | grep -qiE 'already|in use|exist|duplicate|unique'; then
  echo "Create failed (not a duplicate case): $ERR_MSG" >&2
  echo "$RESP" | jq . 2>/dev/null || echo "$RESP"
  exit 1
fi

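NPM versions word the duplicate-host error differently, so the script classifies by keyword rather than matching one exact message: a "duplicate" error falls through to the PUT update path, anything else aborts. The classifier on its own:

```shell
#!/usr/bin/env bash
set -euo pipefail

# is_duplicate_error MSG — true when the API message looks like "host already exists".
is_duplicate_error() {
  echo "$1" | grep -qiE 'already|in use|exist|duplicate|unique'
}

if is_duplicate_error "Host domain is already in use"; then
  echo "fall through to PUT update"
fi
if ! is_duplicate_error "internal error: database timeout"; then
  echo "hard failure: surface it and exit"
fi
```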
echo "Host exists; fetching proxy list for PUT ($ERR_MSG)"
PROXY_JSON=$(curl_npm -X GET "$NPM_URL/api/nginx/proxy-hosts" -H "Authorization: Bearer $TOKEN")
HOST_ID=$(echo "$PROXY_JSON" | jq -r --arg d "$DOMAIN" '.[] | select(.domain_names | type == "array") | select(any(.domain_names[]; (. | tostring | ascii_downcase) == ($d | ascii_downcase))) | .id' | head -n1)

if [ -z "$HOST_ID" ] || [ "$HOST_ID" = "null" ]; then
  echo "Could not resolve proxy host id for $DOMAIN." >&2
  exit 1
fi

echo "Updating proxy host $DOMAIN (id=$HOST_ID) -> http://${UP_IP}:${UP_PORT}"
PAYLOAD_PUT=$(jq -n \
  --arg host "$UP_IP" \
  --argjson port "$UP_PORT" \
  --arg adv "$ADV" \
  '{forward_scheme:"http",forward_host:$host,forward_port:$port,allow_websocket_upgrade:false,block_exploits:true,advanced_config:$adv}')
RESP=$(curl_npm -X PUT "$NPM_URL/api/nginx/proxy-hosts/$HOST_ID" -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -d "$PAYLOAD_PUT")
echo "$RESP" | jq -e '.id' >/dev/null && echo "OK updated" || { echo "$RESP" | jq . 2>/dev/null || echo "$RESP"; exit 1; }
@@ -2,6 +2,8 @@

Minimal **read-only** JSON service for `reports/status/live_inventory.json` and `drift.json`. Intended to run on a **LAN** host (or CT) with access to the repo checkout and optional SSH to Proxmox for refresh.

**Production LAN install:** `bash scripts/deployment/bootstrap-sankofa-it-read-api-lan.sh` (rsync to `/opt/proxmox` on the seed node, `/etc/sankofa-it-read-api.env`, systemd, portal merge). See [SANKOFA_IT_OPS_KEYCLOAK_PORTAL_NEXT_STEPS.md](../../docs/03-deployment/SANKOFA_IT_OPS_KEYCLOAK_PORTAL_NEXT_STEPS.md).

## Run

```bash