Secret Usage Patterns Documentation
Last Updated: 2026-01-31
Document Version: 1.0
Status: Active Documentation
Purpose: Document how secrets are currently used across the codebase
Overview
This document tracks how secrets are accessed and used throughout the codebase, helping identify all locations that need to be updated during HSM Key Vault migration.
Secret Access Patterns
1. Direct File Reading
Pattern: Reading from .env files
# Shell scripts
source .env
export $(cat .env | xargs)
# Node.js
require('dotenv').config()
process.env.PRIVATE_KEY
# Python
from dotenv import load_dotenv
load_dotenv()
os.getenv('PRIVATE_KEY')
Locations:
- scripts/*.sh - Multiple shell scripts
- smom-dbis-138/scripts/*.ts - TypeScript deployment scripts
- services/*/ - Service applications
Migration: Replace with Vault API calls or Vault Agent
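As a minimal sketch of the replacement, the dotenv call can become a read against Vault's KV v2 HTTP API. The helper below uses only the standard library; the secret path `blockchain/private-keys/deployer` mirrors the paths used later in this document, and `VAULT_ADDR`/`VAULT_TOKEN` are the standard Vault client environment variables. Note that KV v2 nests the secret under `data.data`, which is an easy mistake to make when migrating.

```python
import json
import os
import urllib.request


def extract_kv2_secret(response_body: str) -> dict:
    """Pull the key/value pairs out of a Vault KV v2 read response.

    KV v2 nests the secret under data.data; the outer "data" object
    carries version metadata.
    """
    return json.loads(response_body)["data"]["data"]


def read_kv2_secret(path: str) -> dict:
    """Read secret/data/<path> from Vault over its HTTP API."""
    req = urllib.request.Request(
        f"{os.environ['VAULT_ADDR']}/v1/secret/data/{path}",
        headers={"X-Vault-Token": os.environ["VAULT_TOKEN"]},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_kv2_secret(resp.read().decode())


# Example KV v2 response shape (illustrative values):
sample = json.dumps({
    "data": {
        "data": {"private_key": "0xabc123"},
        "metadata": {"version": 1},
    }
})
print(extract_kv2_secret(sample)["private_key"])  # → 0xabc123
```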
2. Hardcoded in Scripts
Pattern: Secrets directly in code
# Example from scripts
NPM_PASSWORD="<64-char hex value, redacted>"
CLOUDFLARE_API_TOKEN="<API token, redacted>"
Locations:
- scripts/create-npmplus-proxy.sh
- scripts/fix-certbot-dns-propagation.sh
- scripts/install-shared-tunnel-token.sh
- scripts/nginx-proxy-manager/*.sh
Migration: Replace with Vault secret retrieval
3. Environment Variable Injection
Pattern: Using environment variables
# Scripts
PRIVATE_KEY="${PRIVATE_KEY:-default_value}"
CLOUDFLARE_TOKEN="${CLOUDFLARE_API_TOKEN:-}"
# Applications
const privateKey = process.env.PRIVATE_KEY;
const apiToken = process.env.CLOUDFLARE_API_TOKEN;
Locations:
- All deployment scripts
- Service applications
- Frontend build processes
Migration: Vault Agent can inject as environment variables
4. Configuration Files
Pattern: Secrets in config files
# docker-compose.yml
environment:
- PRIVATE_KEY=${PRIVATE_KEY}
- DATABASE_URL=${DATABASE_URL}
# Kubernetes secrets
apiVersion: v1
kind: Secret
data:
private-key: <base64>
Locations:
- docker-compose/*.yml
- Kubernetes manifests (if any)
- Terraform configurations
Migration: Use Vault Kubernetes integration or external secrets operator
Service-Specific Patterns
Blockchain Services
Services:
- smom-dbis-138/
- no_five/
- 237-combo/
Secrets Used:
- PRIVATE_KEY - For contract deployment and transactions
- RPC_URL - Blockchain RPC endpoint
- Contract addresses (less sensitive)
Access Pattern:
// ethers.js deployment scripts
const privateKey = process.env.PRIVATE_KEY;
const deployer = new ethers.Wallet(privateKey, provider);
// Hardhat scripts
const accounts = await ethers.getSigners();
const deployer = accounts[0];
Migration Strategy:
- Store private key in HSM (never export)
- Use Vault Agent to inject as env var
- Or use Vault API with short-lived tokens
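For the never-export option, Vault's transit secrets engine can sign on the server side so the key material stays inside Vault. The sketch below only builds the request path and body for the transit sign endpoint (which expects base64-encoded input); the key name `deployer` is illustrative. Note that transit's built-in key types do not include secp256k1, so signing Ethereum transactions this way typically requires a secp256k1 plugin or a true HSM behind Vault.

```python
import base64
import json


def transit_sign_request(key_name: str, payload: bytes) -> tuple[str, str]:
    """Build the path and JSON body for Vault's transit sign endpoint.

    Transit expects the input base64-encoded; the private key never
    leaves Vault -- only the signature comes back in the response.
    """
    path = f"/v1/transit/sign/{key_name}"
    body = json.dumps({"input": base64.b64encode(payload).decode()})
    return path, body


path, body = transit_sign_request("deployer", b"tx-digest")
print(path)  # → /v1/transit/sign/deployer
```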
Cloudflare Integration
Services:
- DNS automation scripts
- SSL certificate management
- Tunnel configuration
Secrets Used:
- CLOUDFLARE_API_TOKEN - API access
- CLOUDFLARE_TUNNEL_TOKEN - Tunnel authentication
- CLOUDFLARE_ORIGIN_CA_KEY - Origin CA key
Access Pattern:
# Shell scripts
curl -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
https://api.cloudflare.com/client/v4/zones
# Python scripts
import requests
headers = {"Authorization": f"Bearer {os.getenv('CLOUDFLARE_API_TOKEN')}"}
Migration Strategy:
- Store tokens in Vault
- Use Vault Agent for scripts
- Rotate tokens quarterly
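A rotation script needs a way to confirm the newly issued token works before revoking the old one. Cloudflare exposes GET /client/v4/user/tokens/verify for this; the helper below only interprets the response body, so it can be exercised without a live API call. The exact response fields beyond `success` and `result.status` are assumed from the documented envelope shape.

```python
import json


def token_is_active(verify_response_body: str) -> bool:
    """Interpret Cloudflare's GET /client/v4/user/tokens/verify response.

    A healthy token returns success=true with result.status == "active";
    anything else (expired, disabled) should fail the rotation check.
    """
    doc = json.loads(verify_response_body)
    return bool(doc.get("success")) and doc.get("result", {}).get("status") == "active"


sample = json.dumps({"success": True, "result": {"status": "active"}})
print(token_is_active(sample))  # → True
```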
Database Services
Services:
- dbis_core/
- explorer-monorepo/
Secrets Used:
- DATABASE_URL - Connection string with password
- POSTGRES_PASSWORD - Database password
- DB_USER - Database username
Access Pattern:
// Node.js
const db = new Client({
connectionString: process.env.DATABASE_URL
});
// Python
import psycopg2
conn = psycopg2.connect(os.getenv('DATABASE_URL'))
Migration Strategy:
- Store connection string in Vault
- Or store components separately (user, password, host)
- Use Vault database secrets engine for dynamic credentials
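If the components are stored separately, the connection string has to be reassembled at startup, and the password must be URL-encoded or characters like @, :, and / will corrupt the URL. A sketch (host and database names are illustrative):

```python
from urllib.parse import quote


def build_database_url(user: str, password: str, host: str, port: int, dbname: str) -> str:
    """Assemble a Postgres connection URL from separately stored parts.

    quote(..., safe='') percent-encodes passwords containing @ : / or %,
    which would otherwise be parsed as URL structure.
    """
    return (
        f"postgresql://{quote(user, safe='')}:{quote(password, safe='')}"
        f"@{host}:{port}/{dbname}"
    )


url = build_database_url("app", "p@ss:w/rd", "db.internal", 5432, "explorer")
print(url)  # → postgresql://app:p%40ss%3Aw%2Frd@db.internal:5432/explorer
```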
Infrastructure Services
Services:
- Nginx Proxy Manager (NPMplus)
- UniFi Controller
- Omada Controller
Secrets Used:
- NPM_PASSWORD - NPM admin password
- NPM_EMAIL - NPM admin email
- UNIFI_API_KEY - UniFi API key
- UNIFI_PASSWORD - UniFi password
- OMADA_API_KEY - Omada API key
Access Pattern:
# NPM API
curl -X POST "$NPM_URL/api/tokens" \
-H "Content-Type: application/json" \
-d "{\"identity\":\"$NPM_EMAIL\",\"secret\":\"$NPM_PASSWORD\"}"
# UniFi API
curl -X POST "$UNIFI_URL/api/login" \
-d "{\"username\":\"$UNIFI_USER\",\"password\":\"$UNIFI_PASSWORD\"}"
Migration Strategy:
- Store credentials in Vault
- Use Vault Agent for automation scripts
- Implement credential rotation
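One hazard in the shell examples above: interpolating `$NPM_PASSWORD` directly into a JSON string breaks as soon as the password contains a double quote or backslash. When the automation moves to retrieving credentials from Vault, building the body with a JSON serializer avoids that class of bug. A sketch:

```python
import json


def npm_token_payload(email: str, password: str) -> str:
    """Build the JSON body for NPM's POST /api/tokens login call.

    json.dumps handles quoting, so passwords containing " or \\ do not
    break the request the way shell string interpolation can.
    """
    return json.dumps({"identity": email, "secret": password})


payload = npm_token_payload("admin@example.com", 'p"w')
print(payload)
```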
Application Integration Points
Frontend Applications
Services:
- frontend-dapp/
- dbis_core/frontend/
Secrets Used:
- VITE_ETHERSCAN_API_KEY - Public API key (less sensitive)
- VITE_WALLETCONNECT_PROJECT_ID - Public identifier
Access Pattern:
// Vite environment variables (public)
const apiKey = import.meta.env.VITE_ETHERSCAN_API_KEY;
Note: Vite variables prefixed with VITE_ are exposed to the browser. Only use for public API keys.
Migration Strategy:
- Keep public keys in .env (less sensitive)
- Or use Vault for consistency
- Never expose private keys to frontend
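The "never expose" rule can be enforced mechanically in the build pipeline: since Vite inlines every VITE_-prefixed variable into the shipped bundle, a pre-build check can refuse any VITE_ name that looks like secret material. A sketch (the forbidden-word list is an assumption to tune per project):

```python
def public_frontend_env(env: dict) -> dict:
    """Return only VITE_-prefixed vars, refusing any that look secret.

    Vite inlines every VITE_* variable into the browser bundle, so names
    hinting at private material must never carry the prefix.
    """
    forbidden = ("PRIVATE", "SECRET", "PASSWORD", "MNEMONIC")
    public = {}
    for name, value in env.items():
        if not name.startswith("VITE_"):
            continue  # never inlined by Vite, safe to skip
        if any(word in name for word in forbidden):
            raise ValueError(f"{name} looks secret but would ship to browsers")
        public[name] = value
    return public


env = {"VITE_ETHERSCAN_API_KEY": "abc", "PRIVATE_KEY": "0x1", "VITE_APP_NAME": "dapp"}
print(sorted(public_frontend_env(env)))  # → ['VITE_APP_NAME', 'VITE_ETHERSCAN_API_KEY']
```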
Backend Services
Services:
- services/relay/
- services/state-anchoring-service/
- services/transaction-mirroring-service/
Secrets Used:
- PRIVATE_KEY - For blockchain operations
- DATABASE_URL - Database connection
- JWT_SECRET - Token signing
Access Pattern:
// Node.js services
import dotenv from 'dotenv';
dotenv.config();
const privateKey = process.env.PRIVATE_KEY;
const dbUrl = process.env.DATABASE_URL;
Migration Strategy:
- Use Vault Agent for automatic injection
- Or Vault API with service account authentication
- Implement secret rotation
Migration Checklist by Pattern
Direct File Reading
- Identify all source .env or load_dotenv() calls
- Replace with Vault Agent or API calls
- Test secret retrieval
- Update documentation
Hardcoded Secrets
- Find all hardcoded secrets in scripts
- Move to Vault
- Update scripts to retrieve from Vault
- Remove hardcoded values
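The "find all hardcoded secrets" step can be bootstrapped with a simple scan for shell-style assignments of long hex/base64-looking literals, the shape of the redacted NPM_PASSWORD and CLOUDFLARE_API_TOKEN examples above. The regex is a starting heuristic, not a complete detector; expect false positives to triage by hand, and consider a dedicated tool for the real sweep.

```python
import re

# Matches UPPER_CASE=<32+ token-like chars>, optionally quoted.
SECRET_ASSIGNMENT = re.compile(
    r'^[A-Z][A-Z0-9_]*=["\']?[A-Za-z0-9+/_-]{32,}["\']?\s*$'
)


def find_hardcoded_secrets(lines):
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    return [
        (i, line)
        for i, line in enumerate(lines, 1)
        if SECRET_ASSIGNMENT.match(line.strip())
    ]


sample = [
    'NPM_PASSWORD="ce8219e321e1cd97bd590fb792d3caeb"',
    'echo "deploying"',
    'TOKEN=abcdefghijklmnopqrstuvwxyz0123456789',
]
print([i for i, _ in find_hardcoded_secrets(sample)])  # → [1, 3]
```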
Environment Variables
- Identify all process.env.* or $VAR usage
- Configure Vault Agent templates
- Test environment injection
- Verify application functionality
Configuration Files
- Review docker-compose.yml files
- Review Kubernetes manifests
- Update to use Vault secrets
- Test deployment
Vault Integration Patterns
Pattern 1: Vault Agent (Recommended for Applications)
Use Case: Long-running services that need secrets
# vault-agent.hcl
template {
source = "/etc/secrets/.env.tpl"
destination = "/etc/secrets/.env"
perms = 0600
}
Template:
PRIVATE_KEY={{ with secret "secret/data/blockchain/private-keys/deployer" }}{{ .Data.data.private_key }}{{ end }}
Pattern 2: Vault API (For Scripts)
Use Case: One-time scripts, automation
#!/bin/bash
PRIVATE_KEY=$(vault kv get -field=private_key secret/blockchain/private-keys/deployer)
CLOUDFLARE_TOKEN=$(vault kv get -field=token secret/cloudflare/api-tokens/main)
# Use secrets
cast send ... --private-key "$PRIVATE_KEY"
Pattern 3: Vault CLI with Caching
Use Case: Development, local scripts
# Authenticate once
vault login -method=userpass username=dev
# Use cached token
export PRIVATE_KEY=$(vault kv get -field=private_key secret/blockchain/private-keys/deployer)
Pattern 4: Kubernetes Secrets Operator
Use Case: Kubernetes deployments
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: blockchain-secrets
spec:
secretStoreRef:
name: vault-backend
kind: SecretStore
target:
name: blockchain-secrets
data:
- secretKey: private-key
remoteRef:
key: secret/data/blockchain/private-keys/deployer
property: private_key
Testing Strategy
Pre-Migration Testing
- Document current secret usage
- Identify all access points
- Test Vault connectivity
- Create test secrets in Vault
Migration Testing
- Migrate one secret at a time
- Test application functionality
- Verify no hardcoded fallbacks
- Check logs for errors
Post-Migration Testing
- Verify all secrets in Vault
- Test secret rotation
- Verify access controls
- Security audit
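The "verify no hardcoded fallbacks" check above can be made permanent by having each service validate its required secrets at startup and refuse to run when one is missing, rather than silently falling back to a default. A sketch (the DEMO_* names are placeholders):

```python
import os


def require_env(names):
    """Fail fast if any required variable is missing or empty.

    A service should refuse to start rather than run with a blank
    PRIVATE_KEY or DATABASE_URL after the migration.
    """
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"missing required secrets: {', '.join(missing)}")


os.environ["DEMO_DATABASE_URL"] = "postgresql://app@db/explorer"
try:
    require_env(["DEMO_DATABASE_URL", "DEMO_PRIVATE_KEY"])
except RuntimeError as exc:
    print(exc)  # → missing required secrets: DEMO_PRIVATE_KEY
```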
Related Documentation
Next Review: During migration implementation