feat: implement naming convention, deployment automation, and infrastructure updates

- Add comprehensive naming convention (provider-region-resource-env-purpose)
- Implement Terraform locals for centralized naming
- Update all Terraform resources to use new naming convention
- Create deployment automation framework (18 phase scripts)
- Add Azure setup scripts (provider registration, quota checks)
- Update deployment scripts config with naming functions
- Create complete deployment documentation (guide, steps, quick reference)
- Add frontend portal implementations (public and internal)
- Add UI component library (18 components)
- Enhance Entra VerifiedID integration with file utilities
- Add API client package for all services
- Create comprehensive documentation (naming, deployment, next steps)

Infrastructure:
- Resource groups, storage accounts with new naming
- Terraform configuration updates
- Outputs with naming convention examples

Deployment:
- Automated deployment scripts for all 15 phases
- State management and logging
- Error handling and validation

Documentation:
- Naming convention guide and implementation summary
- Complete deployment guide (296 steps)
- Next steps and quick start guides
- Azure prerequisites and setup completion docs

Note: ESLint warnings present - will be addressed in follow-up commit
Author: defiQUG
Date: 2025-11-12 08:22:51 -08:00
Parent: 9e46f3f316
Commit: 8649ad4124
136 changed files with 17251 additions and 147 deletions

scripts/deploy/README.md Normal file

@@ -0,0 +1,272 @@
# Deployment Automation Scripts
Automated deployment scripts for The Order following the deployment guide.
## Overview
This directory contains automated scripts for deploying The Order to Azure/Kubernetes. The scripts follow the 15-phase deployment guide and can be run individually or as a complete deployment.
## Quick Start
```bash
# Deploy all phases for dev environment
./scripts/deploy/deploy.sh --all --environment dev
# Deploy specific phases
./scripts/deploy/deploy.sh --phase 1 --phase 2 --phase 6
# Continue from last saved state
./scripts/deploy/deploy.sh --continue
# Deploy with auto-apply (no Terraform review)
./scripts/deploy/deploy.sh --all --auto-apply
```
## Configuration
Configuration is managed in `config.sh`. Key variables:
- `ENVIRONMENT`: Deployment environment (dev, stage, prod)
- `AZURE_REGION`: Azure region (default: westeurope)
- `ACR_NAME`: Azure Container Registry name
- `AKS_NAME`: AKS cluster name
- `KEY_VAULT_NAME`: Azure Key Vault name
Set via environment variables or edit `config.sh`:
```bash
export ENVIRONMENT=prod
export AZURE_REGION=westeurope
export ACR_NAME=theorderacr
./scripts/deploy/deploy.sh --all
```
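These defaults follow the naming convention from `config.sh` ({provider}-{region}-{resource}-{env}-{purpose}). A standalone sketch of the derivation, with the abbreviation tables trimmed and demo inputs hardcoded (`config.sh` reads them from the environment):

```shell
#!/usr/bin/env bash
# Sketch of the default-name derivation in scripts/deploy/config.sh
set -euo pipefail
AZURE_REGION="westeurope"   # demo inputs; config.sh reads these from the environment
ENVIRONMENT="dev"

# Region abbreviation (subset of get_region_abbrev in config.sh)
case "${AZURE_REGION}" in
  westeurope) REGION_SHORT="we" ;;
  northeurope) REGION_SHORT="ne" ;;
  *) REGION_SHORT="we" ;;
esac

# Environment abbreviation (get_env_abbrev in config.sh)
case "${ENVIRONMENT}" in
  dev) ENV_SHORT="dev" ;;
  stage) ENV_SHORT="stg" ;;
  prod) ENV_SHORT="prd" ;;
  mgmt) ENV_SHORT="mgmt" ;;
  *) ENV_SHORT="dev" ;;
esac

NAME_PREFIX="az-${REGION_SHORT}"
echo "Resource group: ${NAME_PREFIX}-rg-${ENV_SHORT}-main"    # az-we-rg-dev-main
echo "AKS cluster:    ${NAME_PREFIX}-aks-${ENV_SHORT}-main"   # az-we-aks-dev-main
echo "ACR (alnum):    az${REGION_SHORT}acr${ENV_SHORT}"       # azweacrdev
```

Note that ACR and storage-account names drop the hyphens, since Azure restricts them to alphanumeric characters.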
## Phase Scripts
### Phase 1: Prerequisites
- Checks all required tools
- Verifies Azure login
- Installs dependencies
- Builds packages
```bash
./scripts/deploy/phase1-prerequisites.sh
```
### Phase 2: Azure Infrastructure
- Runs Azure setup scripts
- Registers resource providers
- Deploys Terraform infrastructure
- Configures Kubernetes access
```bash
./scripts/deploy/phase2-azure-infrastructure.sh
```
### Phase 3: Entra ID Configuration
- **Manual steps required** (Azure Portal)
- Helper script to store secrets: `store-entra-secrets.sh`
### Phase 6: Build & Package
- Builds all packages and applications
- Creates Docker images
- Pushes to Azure Container Registry
- Signs images with Cosign (if available)
```bash
./scripts/deploy/phase6-build-package.sh
```
### Phase 7: Database Migrations
- Runs database schema migrations
- Verifies database connection
```bash
./scripts/deploy/phase7-database-migrations.sh
```
### Phase 10: Backend Services
- Deploys backend services to Kubernetes
- Verifies deployments
- Tests health endpoints
```bash
./scripts/deploy/phase10-backend-services.sh
```
## Usage Examples
### Full Deployment
```bash
# Development environment
./scripts/deploy/deploy.sh --all --environment dev
# Staging environment
./scripts/deploy/deploy.sh --all --environment stage
# Production (with confirmation)
./scripts/deploy/deploy.sh --all --environment prod
```
### Incremental Deployment
```bash
# Run prerequisites and infrastructure
./scripts/deploy/deploy.sh --phase 1 --phase 2
# Build and package
./scripts/deploy/deploy.sh --phase 6
# Deploy services
./scripts/deploy/deploy.sh --phase 10 --phase 11
```
### Skip Phases
```bash
# Skip build (if already built)
./scripts/deploy/deploy.sh --all --skip-build
# Skip specific phase
./scripts/deploy/deploy.sh --all --skip 3 --skip 8
```
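Under the hood, `deploy.sh` removes each `--skip` phase from the run list with a nested loop; a self-contained sketch of that filter (example phase lists are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the phase filter in deploy.sh: drop skipped phases from the run list
set -euo pipefail
PHASES_TO_RUN=(1 2 3 8 10)   # e.g. collected from --phase flags
SKIP_PHASES=(3 8)            # e.g. collected from --skip flags
FILTERED_PHASES=()
for phase in "${PHASES_TO_RUN[@]}"; do
  skip=false
  for s in "${SKIP_PHASES[@]}"; do
    if [ "${phase}" = "${s}" ]; then
      skip=true
      break
    fi
  done
  if [ "${skip}" = "false" ]; then
    FILTERED_PHASES+=("${phase}")
  fi
done
echo "${FILTERED_PHASES[*]}"   # 1 2 10
```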
### Continue from Failure
```bash
# If deployment fails, continue from last state
./scripts/deploy/deploy.sh --continue
```
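`--continue` parses the last phase name out of the state file and queues that phase plus everything after it; a sketch of the resume computation (the state value is hardcoded here for illustration):

```shell
#!/usr/bin/env bash
# Sketch of deploy.sh's resume logic: turn a saved phase name into a phase list
set -euo pipefail
LAST_PHASE="phase12"                  # illustration; really read from the state file
LAST_PHASE_NUM="${LAST_PHASE#phase}"  # strip the "phase" prefix -> 12
PHASES_TO_RUN=()
for i in $(seq "${LAST_PHASE_NUM}" 15); do
  PHASES_TO_RUN+=("$i")
done
echo "${PHASES_TO_RUN[*]}"            # 12 13 14 15
```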
## State Management
Deployment state is saved in `.deployment/${ENVIRONMENT}.state`. This allows:
- Resuming from last completed phase
- Tracking deployment progress
- Debugging failed deployments
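The state file is the single JSON object written by `save_state` in `config.sh`. A round-trip of those helpers, pointed at a temp directory so no real `.deployment/` state is touched:

```shell
#!/usr/bin/env bash
# Round-trip of save_state/load_state from scripts/deploy/config.sh
set -euo pipefail
STATE_FILE="$(mktemp -d)/dev.state"

save_state() {
  echo "{\"phase\":\"$1\",\"step\":\"$2\",\"timestamp\":\"$(date -Iseconds)\"}" > "${STATE_FILE}"
}
load_state() {
  if [ -f "${STATE_FILE}" ]; then
    cat "${STATE_FILE}"
  else
    echo "{\"phase\":\"none\",\"step\":\"none\"}"
  fi
}

load_state                       # no state yet: {"phase":"none","step":"none"}
save_state "phase10" "complete"
load_state                       # now records phase10, complete, and a timestamp
```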
## Logging
All deployment logs are saved to `logs/deployment-YYYYMMDD-HHMMSS.log`.
View logs:
```bash
tail -f logs/deployment-*.log
```
## Manual Steps
Some phases require manual steps:
- **Phase 3**: Entra ID configuration (Azure Portal)
- **Phase 8**: Secrets configuration (use helper scripts)
- **Phase 12**: DNS configuration
- **Phase 13**: Monitoring dashboard setup
See `docs/deployment/DEPLOYMENT_GUIDE.md` for detailed instructions.
## Helper Scripts
### Store Entra ID Secrets
After completing Entra ID setup in Azure Portal:
```bash
./scripts/deploy/store-entra-secrets.sh
```
This will prompt for:
- Tenant ID
- Client ID
- Client Secret
- Credential Manifest ID
The values are then stored in Azure Key Vault.
## Troubleshooting
### Check Deployment State
```bash
cat .deployment/dev.state
```
### View Logs
```bash
tail -f logs/deployment-*.log
```
### Verify Kubernetes Access
```bash
kubectl cluster-info
kubectl get nodes
```
### Verify Azure Access
```bash
az account show
az aks list
```
### Re-run Failed Phase
```bash
./scripts/deploy/deploy.sh --phase <phase-number>
```
## Environment-Specific Configuration
Create environment-specific config files:
```bash
# .deployment/dev.env
export ENVIRONMENT=dev
export AKS_NAME=the-order-dev-aks
export KEY_VAULT_NAME=the-order-dev-kv
```
Source before deployment:
```bash
source .deployment/dev.env
./scripts/deploy/deploy.sh --all
```
## Integration with CI/CD
The scripts can be integrated into CI/CD pipelines:
```yaml
# .github/workflows/deploy.yml
- name: Deploy to Dev
  run: |
    ./scripts/deploy/deploy.sh --all --environment dev --auto-apply
  env:
    AZURE_CREDENTIALS: ${{ secrets.AZURE_CREDENTIALS }}
```
## Security Notes
- Never commit secrets to repository
- Use Azure Key Vault for all secrets
- Enable RBAC for all resources
- Review Terraform plans before applying
- Use managed identities where possible
## Next Steps
After deployment:
1. Verify all services are running: `kubectl get pods -n the-order-${ENVIRONMENT}`
2. Test health endpoints
3. Configure monitoring dashboards
4. Set up alerts
5. Review security settings
See `docs/deployment/DEPLOYMENT_GUIDE.md` for complete deployment instructions.

scripts/deploy/config.sh Executable file

@@ -0,0 +1,182 @@
#!/bin/bash
#
# Deployment Configuration
# Centralized configuration for The Order deployment automation
#
set -euo pipefail
# Colors for output
readonly RED='\033[0;31m'
readonly GREEN='\033[0;32m'
readonly YELLOW='\033[1;33m'
readonly BLUE='\033[0;34m'
readonly MAGENTA='\033[0;35m'
readonly CYAN='\033[0;36m'
readonly NC='\033[0m' # No Color
# Project configuration
readonly PROJECT_NAME="the-order"
readonly PROJECT_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
readonly SCRIPTS_DIR="${PROJECT_ROOT}/scripts"
readonly INFRA_DIR="${PROJECT_ROOT}/infra"
readonly TERRAFORM_DIR="${INFRA_DIR}/terraform"
readonly K8S_DIR="${INFRA_DIR}/k8s"
# Azure configuration
readonly AZURE_REGION="${AZURE_REGION:-westeurope}"
readonly AZURE_SUBSCRIPTION_ID="${AZURE_SUBSCRIPTION_ID:-}"
# Region abbreviation mapping
get_region_abbrev() {
case "${AZURE_REGION}" in
westeurope) echo "we" ;;
northeurope) echo "ne" ;;
uksouth) echo "uk" ;;
switzerlandnorth) echo "ch" ;;
norwayeast) echo "no" ;;
francecentral) echo "fr" ;;
germanywestcentral) echo "de" ;;
*) echo "we" ;; # Default to westeurope
esac
}
# Environment abbreviation mapping
get_env_abbrev() {
case "${ENVIRONMENT:-dev}" in
dev) echo "dev" ;;
stage) echo "stg" ;;
prod) echo "prd" ;;
mgmt) echo "mgmt" ;;
*) echo "dev" ;;
esac
}
# Naming convention: {provider}-{region}-{resource}-{env}-{purpose}
readonly REGION_SHORT=$(get_region_abbrev)
readonly ENV_SHORT=$(get_env_abbrev)
readonly NAME_PREFIX="az-${REGION_SHORT}"
# Environment configuration
# Not readonly: deploy.sh may reassign it via --environment. Note that the
# derived names below are computed once, when this file is first sourced.
ENVIRONMENT="${ENVIRONMENT:-dev}"
readonly NAMESPACE="the-order-${ENVIRONMENT}"
# Resource Groups (az-we-rg-dev-main)
readonly RESOURCE_GROUP_NAME="${RESOURCE_GROUP_NAME:-${NAME_PREFIX}-rg-${ENV_SHORT}-main}"
readonly AKS_RESOURCE_GROUP="${AKS_RESOURCE_GROUP:-${RESOURCE_GROUP_NAME}}"
# Container registry (azweacrdev - alphanumeric only, max 50 chars)
readonly ACR_NAME="${ACR_NAME:-az${REGION_SHORT}acr${ENV_SHORT}}"
readonly IMAGE_TAG="${IMAGE_TAG:-latest}"
# Kubernetes configuration (az-we-aks-dev-main)
readonly AKS_NAME="${AKS_NAME:-${NAME_PREFIX}-aks-${ENV_SHORT}-main}"
# Key Vault (az-we-kv-dev-main - max 24 chars)
readonly KEY_VAULT_NAME="${KEY_VAULT_NAME:-${NAME_PREFIX}-kv-${ENV_SHORT}-main}"
# Database (az-we-psql-dev-main)
readonly POSTGRES_SERVER_NAME="${POSTGRES_SERVER_NAME:-${NAME_PREFIX}-psql-${ENV_SHORT}-main}"
readonly POSTGRES_DB_NAME="${POSTGRES_DB_NAME:-${NAME_PREFIX}-db-${ENV_SHORT}-main}"
# Storage (azwesadevdata - alphanumeric only, max 24 chars)
readonly STORAGE_ACCOUNT_NAME="${STORAGE_ACCOUNT_NAME:-az${REGION_SHORT}sa${ENV_SHORT}data}"
# Services
readonly SERVICES=("identity" "intake" "finance" "dataroom")
readonly APPS=("portal-public" "portal-internal")
# Service ports
declare -A SERVICE_PORTS=(
["identity"]="4002"
["intake"]="4001"
["finance"]="4003"
["dataroom"]="4004"
["portal-public"]="3000"
["portal-internal"]="3001"
)
# Logging
readonly LOG_DIR="${PROJECT_ROOT}/logs"
readonly LOG_FILE="${LOG_DIR}/deployment-$(date +%Y%m%d-%H%M%S).log"
# Deployment state
readonly STATE_DIR="${PROJECT_ROOT}/.deployment"
readonly STATE_FILE="${STATE_DIR}/${ENVIRONMENT}.state"
# Create necessary directories
mkdir -p "${LOG_DIR}"
mkdir -p "${STATE_DIR}"
# Logging functions
log_info() {
echo -e "${BLUE}[INFO]${NC} $*" | tee -a "${LOG_FILE}"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $*" | tee -a "${LOG_FILE}"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $*" | tee -a "${LOG_FILE}"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $*" | tee -a "${LOG_FILE}"
}
log_step() {
echo -e "${CYAN}[STEP]${NC} $*" | tee -a "${LOG_FILE}"
}
# Error handling
error_exit() {
log_error "$1"
exit "${2:-1}"
}
# Validation functions
check_command() {
if ! command -v "$1" &> /dev/null; then
error_exit "$1 is not installed. Please install it first."
fi
}
check_azure_login() {
if ! az account show &> /dev/null; then
log_warning "Not logged into Azure. Attempting login..."
az login || error_exit "Failed to login to Azure"
fi
}
check_prerequisites() {
log_info "Checking prerequisites..."
check_command "node"
check_command "pnpm"
check_command "az"
check_command "terraform"
check_command "kubectl"
check_command "docker"
check_command "jq" # used for version checks and state-file parsing
log_success "All prerequisites met"
}
# State management
save_state() {
local phase="$1"
local step="$2"
echo "{\"phase\":\"${phase}\",\"step\":\"${step}\",\"timestamp\":\"$(date -Iseconds)\"}" > "${STATE_FILE}"
}
load_state() {
if [ -f "${STATE_FILE}" ]; then
cat "${STATE_FILE}"
else
echo "{\"phase\":\"none\",\"step\":\"none\"}"
fi
}
# Export functions
export -f log_info log_success log_warning log_error log_step error_exit
export -f check_command check_azure_login check_prerequisites
export -f save_state load_state

scripts/deploy/deploy.sh Executable file

@@ -0,0 +1,256 @@
#!/bin/bash
#
# The Order - Automated Deployment Script
# Main orchestrator for deployment automation
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
# Usage
usage() {
cat << EOF
Usage: $0 [OPTIONS] [PHASES...]
Deploy The Order application to Azure/Kubernetes.
OPTIONS:
-e, --environment ENV Environment (dev, stage, prod) [default: dev]
-p, --phase PHASE Run specific phase (1-15)
-a, --all Run all phases
-s, --skip PHASE Skip specific phase
-c, --continue Continue from last saved state
--auto-apply Auto-apply Terraform changes (default: requires review)
--skip-build Skip build steps
--dry-run Show what would be done without executing
-h, --help Show this help message
PHASES:
1 - Prerequisites
2 - Azure Infrastructure Setup
3 - Entra ID Configuration (manual)
4 - Database & Storage Setup
5 - Container Registry Setup
6 - Application Build & Package
7 - Database Migrations
8 - Secrets Configuration (manual)
9 - Infrastructure Services Deployment
10 - Backend Services Deployment
11 - Frontend Applications Deployment
12 - Networking & Gateways
13 - Monitoring & Observability
14 - Testing & Validation
15 - Production Hardening
EXAMPLES:
# Deploy all phases for dev environment
$0 --all --environment dev
# Deploy specific phases
$0 --phase 1 --phase 2 --phase 6
# Continue from last state
$0 --continue
# Deploy with auto-apply
$0 --all --auto-apply
EOF
}
# Parse arguments
PHASES=()
SKIP_PHASES=()
RUN_ALL=false
CONTINUE=false
DRY_RUN=false
while [[ $# -gt 0 ]]; do
case $1 in
-e|--environment)
# Derived names (NAMESPACE, resource names) are computed in config.sh at
# source time; export ENVIRONMENT before invoking this script when the
# derived names must reflect the override.
ENVIRONMENT="$2"
export ENVIRONMENT
shift 2
;;
-p|--phase)
PHASES+=("$2")
shift 2
;;
-a|--all)
RUN_ALL=true
shift
;;
-s|--skip)
SKIP_PHASES+=("$2")
shift 2
;;
-c|--continue)
CONTINUE=true
shift
;;
--auto-apply)
export AUTO_APPLY=true
shift
;;
--skip-build)
export SKIP_BUILD=true
shift
;;
--dry-run)
DRY_RUN=true
shift
;;
-h|--help)
usage
exit 0
;;
*)
echo "Unknown option: $1"
usage
exit 1
;;
esac
done
# Load state if continuing
if [ "${CONTINUE}" = "true" ]; then
STATE=$(load_state)
LAST_PHASE=$(echo "${STATE}" | jq -r '.phase' 2>/dev/null || echo "none")
log_info "Continuing from phase: ${LAST_PHASE}"
fi
# Phase scripts mapping
declare -A PHASE_SCRIPTS=(
["1"]="phase1-prerequisites.sh"
["2"]="phase2-azure-infrastructure.sh"
["3"]="phase3-entra-id.sh"
["4"]="phase4-database-storage.sh"
["5"]="phase5-container-registry.sh"
["6"]="phase6-build-package.sh"
["7"]="phase7-database-migrations.sh"
["8"]="phase8-secrets.sh"
["9"]="phase9-infrastructure-services.sh"
["10"]="phase10-backend-services.sh"
["11"]="phase11-frontend-apps.sh"
["12"]="phase12-networking.sh"
["13"]="phase13-monitoring.sh"
["14"]="phase14-testing.sh"
["15"]="phase15-production.sh"
)
# Determine phases to run
PHASES_TO_RUN=()
if [ "${RUN_ALL}" = "true" ]; then
PHASES_TO_RUN=({1..15})
elif [ ${#PHASES[@]} -gt 0 ]; then
PHASES_TO_RUN=("${PHASES[@]}")
elif [ "${CONTINUE}" = "true" ]; then
# Continue from last phase
if [ "${LAST_PHASE}" != "none" ] && [ "${LAST_PHASE}" != "complete" ]; then
LAST_PHASE_NUM=$(echo "${LAST_PHASE}" | sed 's/phase//')
for i in $(seq "${LAST_PHASE_NUM}" 15); do
PHASES_TO_RUN+=("$i")
done
else
log_info "No previous state found, starting from phase 1"
PHASES_TO_RUN=({1..15})
fi
else
log_error "No phases specified. Use --all, --phase, or --continue"
usage
exit 1
fi
# Filter out skipped phases
FILTERED_PHASES=()
for phase in "${PHASES_TO_RUN[@]}"; do
SKIP=false
# ${arr[@]+...} guard: expanding an empty array trips `set -u` on bash < 4.4
for skip in ${SKIP_PHASES[@]+"${SKIP_PHASES[@]}"}; do
if [ "${phase}" = "${skip}" ]; then
SKIP=true
break
fi
done
if [ "${SKIP}" = "false" ]; then
FILTERED_PHASES+=("${phase}")
fi
done
# Run phases
log_info "=========================================="
log_info "The Order - Automated Deployment"
log_info "Environment: ${ENVIRONMENT}"
log_info "Phases to run: ${FILTERED_PHASES[*]}"
log_info "=========================================="
if [ "${DRY_RUN}" = "true" ]; then
log_info "DRY RUN MODE - No changes will be made"
for phase in "${FILTERED_PHASES[@]}"; do
SCRIPT="${PHASE_SCRIPTS[$phase]:-}" # :- avoids a set -u error for unknown phases
if [ -n "${SCRIPT}" ]; then
log_info "Would run: ${SCRIPT_DIR}/${SCRIPT}"
else
log_warning "No script found for phase ${phase}"
fi
done
exit 0
fi
# Execute phases
FAILED_PHASES=()
for phase in "${FILTERED_PHASES[@]}"; do
SCRIPT="${PHASE_SCRIPTS[$phase]:-}" # :- avoids a set -u error for unknown phases
if [ -z "${SCRIPT}" ]; then
log_warning "No script found for phase ${phase}, skipping..."
continue
fi
SCRIPT_PATH="${SCRIPT_DIR}/${SCRIPT}"
if [ ! -f "${SCRIPT_PATH}" ]; then
log_warning "Script not found: ${SCRIPT_PATH}"
log_info "Phase ${phase} may require manual steps. See deployment guide."
continue
fi
log_info "=========================================="
log_info "Executing Phase ${phase}"
log_info "=========================================="
if bash "${SCRIPT_PATH}"; then
log_success "Phase ${phase} completed successfully"
else
log_error "Phase ${phase} failed"
FAILED_PHASES+=("${phase}")
# Ask if should continue
read -p "Continue with next phase? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
log_error "Deployment stopped by user"
break
fi
fi
done
# Summary
log_info "=========================================="
log_info "Deployment Summary"
log_info "=========================================="
if [ ${#FAILED_PHASES[@]} -eq 0 ]; then
log_success "All phases completed successfully!"
save_state "complete" "all"
else
log_error "Failed phases: ${FAILED_PHASES[*]}"
log_info "Review logs at: ${LOG_FILE}"
exit 1
fi
log_info "Deployment log: ${LOG_FILE}"
log_info "State file: ${STATE_FILE}"

scripts/deploy/phase1-prerequisites.sh Executable file

@@ -0,0 +1,108 @@
#!/bin/bash
#
# Phase 1: Prerequisites
# Development environment setup and validation
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 1: Prerequisites"
log_info "=========================================="
# 1.1 Development Environment Setup
log_step "1.1 Checking development environment..."
check_prerequisites
# Check Node.js version
NODE_VERSION=$(node --version | cut -d'v' -f2 | cut -d'.' -f1)
if [ "${NODE_VERSION}" -lt 18 ]; then
error_exit "Node.js version 18 or higher is required. Current: $(node --version)"
fi
log_success "Node.js version: $(node --version)"
# Check pnpm version
PNPM_VERSION=$(pnpm --version | cut -d'.' -f1)
if [ "${PNPM_VERSION}" -lt 8 ]; then
error_exit "pnpm version 8 or higher is required. Current: $(pnpm --version)"
fi
log_success "pnpm version: $(pnpm --version)"
# Check Terraform version (>= 1.5.0 required; compare the full version, not just the major)
TERRAFORM_VERSION=$(terraform version -json | jq -r '.terraform_version')
if ! printf '1.5.0\n%s\n' "${TERRAFORM_VERSION}" | sort -C -V; then
error_exit "Terraform version 1.5.0 or higher is required. Current: ${TERRAFORM_VERSION}"
fi
log_success "Terraform version: ${TERRAFORM_VERSION}"
log_success "All tools verified"
# 1.2 Azure Account Setup
log_step "1.2 Setting up Azure account..."
check_azure_login
# AZURE_SUBSCRIPTION_ID is readonly in config.sh, so track the value locally
SUBSCRIPTION_ID="${AZURE_SUBSCRIPTION_ID:-}"
if [ -z "${SUBSCRIPTION_ID}" ]; then
log_warning "AZURE_SUBSCRIPTION_ID not set. Using current subscription."
SUBSCRIPTION_ID=$(az account show --query id -o tsv)
fi
az account set --subscription "${SUBSCRIPTION_ID}" || error_exit "Failed to set subscription"
SUBSCRIPTION_NAME=$(az account show --query name -o tsv)
log_success "Using Azure subscription: ${SUBSCRIPTION_NAME} (${SUBSCRIPTION_ID})"
# Verify permissions
log_step "Checking Azure permissions..."
ROLE=$(az role assignment list --assignee "$(az account show --query user.name -o tsv)" --query "[0].roleDefinitionName" -o tsv 2>/dev/null || echo "Unknown")
if [[ "${ROLE}" != *"Contributor"* ]] && [[ "${ROLE}" != *"Owner"* ]]; then
log_warning "Current role: ${ROLE}. Contributor or Owner role recommended."
else
log_success "Permissions verified: ${ROLE}"
fi
# 1.3 Install Dependencies
log_step "1.3 Installing dependencies..."
cd "${PROJECT_ROOT}"
if [ ! -d "node_modules" ]; then
log_info "Installing dependencies with pnpm..."
pnpm install --frozen-lockfile || error_exit "Failed to install dependencies"
log_success "Dependencies installed"
else
log_info "Dependencies already installed, skipping..."
fi
# 1.4 Build Packages
log_step "1.4 Building packages..."
if [ "${SKIP_BUILD:-false}" != "true" ]; then
log_info "Building all packages..."
pnpm build || error_exit "Failed to build packages"
log_success "All packages built"
else
log_info "Skipping build (SKIP_BUILD=true)"
fi
# 1.5 Initialize Git Submodules (if any)
log_step "1.5 Initializing git submodules..."
if [ -f "${PROJECT_ROOT}/.gitmodules" ]; then
git submodule update --init --recursive || log_warning "Failed to update submodules"
log_success "Git submodules initialized"
else
log_info "No git submodules found"
fi
# Save state
save_state "phase1" "complete"
log_success "=========================================="
log_success "Phase 1: Prerequisites - COMPLETE"
log_success "=========================================="

scripts/deploy/phase10-backend-services.sh Executable file

@@ -0,0 +1,117 @@
#!/bin/bash
#
# Phase 10: Backend Services Deployment
# Deploy backend services to Kubernetes
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 10: Backend Services Deployment"
log_info "=========================================="
# Verify Kubernetes access
log_step "10.1 Verifying Kubernetes access..."
if ! kubectl cluster-info &> /dev/null; then
log_info "Getting AKS credentials..."
az aks get-credentials --resource-group "${AKS_RESOURCE_GROUP}" \
--name "${AKS_NAME}" \
--overwrite-existing \
|| error_exit "Failed to get AKS credentials"
fi
kubectl cluster-info || error_exit "Kubernetes cluster not accessible"
# Ensure namespace exists
log_step "10.2 Ensuring namespace exists..."
kubectl create namespace "${NAMESPACE}" --dry-run=client -o yaml | kubectl apply -f - || \
log_warning "Namespace may already exist"
# Deploy External Secrets (if not already deployed)
log_step "10.3 Checking External Secrets Operator..."
if ! kubectl get crd externalsecrets.external-secrets.io &> /dev/null; then
log_info "Installing External Secrets Operator (official Helm chart)..."
helm repo add external-secrets https://charts.external-secrets.io
helm repo update
helm install external-secrets external-secrets/external-secrets \
--namespace external-secrets-system \
--create-namespace \
|| error_exit "Failed to install External Secrets Operator"
log_info "Waiting for External Secrets Operator to be ready..."
kubectl wait --for=condition=ready pod \
-l app.kubernetes.io/name=external-secrets \
-n external-secrets-system \
--timeout=300s || log_warning "External Secrets Operator not ready yet"
else
log_success "External Secrets Operator already installed"
fi
# Deploy each service
log_step "10.4 Deploying backend services..."
for service in "${SERVICES[@]}"; do
log_info "Deploying ${service} service..."
# Check if manifests exist
SERVICE_DIR="${K8S_DIR}/base/${service}"
if [ ! -d "${SERVICE_DIR}" ]; then
log_warning "Kubernetes manifests not found for ${service} at ${SERVICE_DIR}"
log_info "Skipping ${service} deployment"
continue
fi
# Apply manifests
kubectl apply -f "${SERVICE_DIR}" -n "${NAMESPACE}" || error_exit "Failed to deploy ${service}"
# Wait for deployment
log_info "Waiting for ${service} deployment..."
kubectl wait --for=condition=available \
deployment/"${service}" \
-n "${NAMESPACE}" \
--timeout=300s || log_warning "${service} deployment not ready yet"
# Verify pods
PODS=$(kubectl get pods -l app="${service}" -n "${NAMESPACE}" --no-headers 2>/dev/null | wc -l)
if [ "${PODS}" -gt 0 ]; then
log_success "${service} deployed (${PODS} pod(s))"
# Check pod status
kubectl get pods -l app="${service}" -n "${NAMESPACE}"
else
log_warning "${service} pods not found"
fi
done
# Verify service endpoints
log_step "10.5 Verifying service endpoints..."
for service in "${SERVICES[@]}"; do
if kubectl get svc "${service}" -n "${NAMESPACE}" &> /dev/null; then
log_success "Service ${service} endpoint created"
# Test health endpoint (if accessible)
PORT="${SERVICE_PORTS[$service]}"
if [ -n "${PORT}" ]; then
log_info "Testing ${service} health endpoint on port ${PORT}..."
# -n must come before the `--` separator, or kubectl hands it to curl
kubectl run "test-${service}-health" \
--image=curlimages/curl \
--rm -i --restart=Never \
-n "${NAMESPACE}" \
-- curl -fsS "http://${service}:${PORT}/health" 2>/dev/null && \
log_success "${service} health check passed" || \
log_warning "${service} health check failed or endpoint not ready"
fi
else
log_warning "Service ${service} endpoint not found"
fi
done
# Save state
save_state "phase10" "complete"
log_success "=========================================="
log_success "Phase 10: Backend Services - COMPLETE"
log_success "=========================================="

scripts/deploy/phase11-frontend-apps.sh Executable file

@@ -0,0 +1,65 @@
#!/bin/bash
#
# Phase 11: Frontend Applications Deployment
# Deploy portal applications to Kubernetes
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 11: Frontend Applications Deployment"
log_info "=========================================="
# Verify Kubernetes access
if ! kubectl cluster-info &> /dev/null; then
az aks get-credentials --resource-group "${AKS_RESOURCE_GROUP}" \
--name "${AKS_NAME}" \
--overwrite-existing
fi
# Ensure namespace exists
kubectl create namespace "${NAMESPACE}" --dry-run=client -o yaml | kubectl apply -f -
# Deploy each app
log_step "11.1 Deploying frontend applications..."
for app in "${APPS[@]}"; do
log_info "Deploying ${app}..."
APP_DIR="${K8S_DIR}/base/${app}"
if [ ! -d "${APP_DIR}" ]; then
log_warning "Kubernetes manifests not found for ${app} at ${APP_DIR}"
log_info "Skipping ${app} deployment"
continue
fi
# Apply manifests
kubectl apply -f "${APP_DIR}" -n "${NAMESPACE}" || error_exit "Failed to deploy ${app}"
# Wait for deployment
log_info "Waiting for ${app} deployment..."
kubectl wait --for=condition=available \
deployment/"${app}" \
-n "${NAMESPACE}" \
--timeout=300s || log_warning "${app} deployment not ready yet"
# Verify pods
PODS=$(kubectl get pods -l app="${app}" -n "${NAMESPACE}" --no-headers 2>/dev/null | wc -l)
if [ "${PODS}" -gt 0 ]; then
log_success "${app} deployed (${PODS} pod(s))"
kubectl get pods -l app="${app}" -n "${NAMESPACE}"
else
log_warning "${app} pods not found"
fi
done
# Save state
save_state "phase11" "complete"
log_success "=========================================="
log_success "Phase 11: Frontend Applications - COMPLETE"
log_success "=========================================="

scripts/deploy/phase12-networking.sh Executable file

@@ -0,0 +1,81 @@
#!/bin/bash
#
# Phase 12: Networking & Gateways
# Configure ingress, DNS, SSL/TLS, WAF
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 12: Networking & Gateways"
log_info "=========================================="
log_warning "This phase requires manual configuration for DNS and SSL certificates"
log_info "See docs/deployment/DEPLOYMENT_GUIDE.md Phase 12 for detailed instructions"
# 12.1 Deploy Ingress Controller
log_step "12.1 Deploying NGINX Ingress Controller..."
if ! command -v helm &> /dev/null; then
log_warning "Helm not found. Install Helm to deploy ingress controller."
else
if ! helm list -n ingress-nginx | grep -q ingress-nginx; then
log_info "Installing NGINX Ingress Controller..."
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
kubectl create namespace ingress-nginx --dry-run=client -o yaml | kubectl apply -f -
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
|| log_warning "Ingress controller installation failed or already exists"
else
log_success "Ingress controller already installed"
fi
fi
# 12.2 Apply Ingress Resources
log_step "12.2 Applying ingress resources..."
INGRESS_FILE="${K8S_DIR}/base/ingress.yaml"
if [ -f "${INGRESS_FILE}" ]; then
kubectl apply -f "${INGRESS_FILE}" -n "${NAMESPACE}" || log_warning "Failed to apply ingress"
log_success "Ingress resources applied"
else
log_warning "Ingress configuration not found at ${INGRESS_FILE}"
log_info "Create ingress.yaml in ${K8S_DIR}/base/"
fi
# 12.3 Install cert-manager (for Let's Encrypt)
log_step "12.3 Installing cert-manager..."
if ! kubectl get crd certificates.cert-manager.io &> /dev/null; then
log_info "Installing cert-manager..."
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml || \
log_warning "Failed to install cert-manager"
log_info "Waiting for cert-manager to be ready..."
kubectl wait --for=condition=ready pod \
-l app.kubernetes.io/instance=cert-manager \
-n cert-manager \
--timeout=300s || log_warning "cert-manager not ready yet"
else
log_success "cert-manager already installed"
fi
log_info "Networking configuration complete"
log_info "Next steps (manual):"
log_info " 1. Configure DNS records"
log_info " 2. Create ClusterIssuer for Let's Encrypt"
log_info " 3. Configure WAF rules (if using Application Gateway)"
# Save state
save_state "phase12" "complete"
log_success "=========================================="
log_success "Phase 12: Networking & Gateways - COMPLETE"
log_success "=========================================="

scripts/deploy/phase13-monitoring.sh Executable file

@@ -0,0 +1,85 @@
#!/bin/bash
#
# Phase 13: Monitoring & Observability
# Configure Application Insights, Log Analytics, alerts, dashboards
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 13: Monitoring & Observability"
log_info "=========================================="
# 13.1 Application Insights
log_step "13.1 Creating Application Insights..."
APP_INSIGHTS_NAME="${PROJECT_NAME}-${ENVIRONMENT}-ai"
APP_INSIGHTS_EXISTS=$(az monitor app-insights component show \
--app "${APP_INSIGHTS_NAME}" \
--resource-group "${AKS_RESOURCE_GROUP}" \
--query name -o tsv 2>/dev/null || echo "")
if [ -z "${APP_INSIGHTS_EXISTS}" ]; then
log_info "Creating Application Insights resource..."
az monitor app-insights component create \
--app "${APP_INSIGHTS_NAME}" \
--location "${AZURE_REGION}" \
--resource-group "${AKS_RESOURCE_GROUP}" \
--application-type web \
|| log_warning "Failed to create Application Insights"
else
log_success "Application Insights already exists"
fi
# Get instrumentation key
INSTRUMENTATION_KEY=$(az monitor app-insights component show \
--app "${APP_INSIGHTS_NAME}" \
--resource-group "${AKS_RESOURCE_GROUP}" \
--query instrumentationKey -o tsv 2>/dev/null || echo "")
if [ -n "${INSTRUMENTATION_KEY}" ]; then
log_info "Storing instrumentation key in Key Vault..."
az keyvault secret set \
--vault-name "${KEY_VAULT_NAME}" \
--name "app-insights-instrumentation-key" \
--value "${INSTRUMENTATION_KEY}" \
|| log_warning "Failed to store instrumentation key"
fi
# 13.2 Log Analytics Workspace
log_step "13.2 Creating Log Analytics workspace..."
LOG_ANALYTICS_NAME="${PROJECT_NAME}-${ENVIRONMENT}-logs"
LOG_ANALYTICS_EXISTS=$(az monitor log-analytics workspace show \
--workspace-name "${LOG_ANALYTICS_NAME}" \
--resource-group "${AKS_RESOURCE_GROUP}" \
--query name -o tsv 2>/dev/null || echo "")
if [ -z "${LOG_ANALYTICS_EXISTS}" ]; then
log_info "Creating Log Analytics workspace..."
az monitor log-analytics workspace create \
--workspace-name "${LOG_ANALYTICS_NAME}" \
--resource-group "${AKS_RESOURCE_GROUP}" \
--location "${AZURE_REGION}" \
|| log_warning "Failed to create Log Analytics workspace"
else
log_success "Log Analytics workspace already exists"
fi
log_info "Monitoring configuration complete"
log_info "Next steps (manual):"
log_info " 1. Configure alerts in Azure Portal"
log_info " 2. Set up Grafana dashboards"
log_info " 3. Configure log queries"
log_info " 4. Set up notification channels"
# Save state
save_state "phase13" "complete"
log_success "=========================================="
log_success "Phase 13: Monitoring & Observability - COMPLETE"
log_success "=========================================="

scripts/deploy/phase14-testing.sh Executable file

@@ -0,0 +1,69 @@
#!/bin/bash
#
# Phase 14: Testing & Validation
# Health checks, integration tests, E2E tests, performance tests, security tests
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 14: Testing & Validation"
log_info "=========================================="
# 14.1 Health Checks
log_step "14.1 Running health checks..."
if ! kubectl cluster-info &> /dev/null; then
az aks get-credentials --resource-group "${AKS_RESOURCE_GROUP}" \
--name "${AKS_NAME}" \
--overwrite-existing
fi
# Check all pods
log_info "Checking pod status..."
kubectl get pods -n "${NAMESPACE}" || log_warning "Failed to get pods"
# Check service endpoints
log_info "Checking service endpoints..."
for service in "${SERVICES[@]}"; do
if kubectl get svc "${service}" -n "${NAMESPACE}" &> /dev/null; then
PORT="${SERVICE_PORTS[$service]}"
log_info "Testing ${service} health endpoint..."
kubectl run "test-${service}-health" \
--image=curlimages/curl \
--rm -i --restart=Never \
-n "${NAMESPACE}" \
-- curl -fsS "http://${service}:${PORT}/health" 2>/dev/null && \
log_success "${service} health check passed" || \
log_warning "${service} health check failed"
fi
done
# 14.2 Integration Testing
log_step "14.2 Running integration tests..."
if [ -f "${PROJECT_ROOT}/package.json" ]; then
log_info "Running integration tests..."
cd "${PROJECT_ROOT}"
pnpm test:integration || log_warning "Integration tests failed or not configured"
else
log_info "Integration tests not configured"
fi
log_info "Testing complete"
log_info "Next steps (manual):"
log_info " 1. Run E2E tests"
log_info " 2. Run performance tests"
log_info " 3. Run security scans"
log_info " 4. Review test results"
# Save state
save_state "phase14" "complete"
log_success "=========================================="
log_success "Phase 14: Testing & Validation - COMPLETE"
log_success "=========================================="
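For the manual E2E step, a quick out-of-cluster smoke check can reuse the same `/health` endpoints via port-forward. A sketch — the two-second bind wait and default namespace are assumptions:

```shell
# Sketch: smoke-test one service from the workstation via port-forward.
smoke_test() {
  local service="${1:?service name required}"
  local port="${2:?port required}"
  local ns="${3:?namespace required}"
  kubectl port-forward "svc/${service}" "${port}:${port}" -n "${ns}" &
  local pf_pid=$!
  sleep 2  # give port-forward a moment to bind
  curl -fsS "http://localhost:${port}/health"
  local rc=$?
  kill "${pf_pid}" 2>/dev/null
  return "${rc}"
}
```

Usage: `smoke_test intake-service 3001 "${NAMESPACE}"` (service name and port here are examples).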


@@ -0,0 +1,65 @@
#!/bin/bash
#
# Phase 15: Production Hardening
# Resource limits, backups, disaster recovery, documentation
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 15: Production Hardening"
log_info "=========================================="
# 15.1 Production Configuration
log_step "15.1 Configuring production settings..."
if [ "${ENVIRONMENT}" != "prod" ]; then
log_warning "Not production environment. Skipping production hardening."
exit 0
fi
# Update replica counts
log_info "Updating replica counts for production..."
for service in "${SERVICES[@]}"; do
kubectl scale deployment "${service}" \
--replicas=3 \
-n "${NAMESPACE}" \
|| log_warning "Failed to scale ${service}"
done
# 15.2 Backup Configuration
log_step "15.2 Configuring backups..."
# Database backups
# Single-server PostgreSQL takes automatic daily backups; there is no
# on-demand "az postgres server backup create" command, so verify retention instead.
log_info "Checking database backup retention..."
az postgres server show \
--resource-group "${AKS_RESOURCE_GROUP}" \
--name "${POSTGRES_SERVER_NAME}" \
--query "storageProfile.backupRetentionDays" -o tsv \
|| log_warning "Failed to read backup retention"
# Storage backups
log_info "Enabling storage versioning..."
az storage account blob-service-properties update \
--account-name "${STORAGE_ACCOUNT_NAME}" \
--resource-group "${AKS_RESOURCE_GROUP}" \
--enable-versioning true \
|| log_warning "Failed to enable versioning"
log_info "Production hardening complete"
log_info "Next steps (manual):"
log_info " 1. Configure resource limits in deployments"
log_info " 2. Set up automated backups"
log_info " 3. Configure disaster recovery"
log_info " 4. Review security settings"
log_info " 5. Update documentation"
# Save state
save_state "phase15" "complete"
log_success "=========================================="
log_success "Phase 15: Production Hardening - COMPLETE"
log_success "=========================================="
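Manual step 1 (resource limits) can be applied with `kubectl set resources` until limits live in the deployment manifests. A sketch — the CPU/memory values are illustrative assumptions, not project defaults:

```shell
# Sketch: apply baseline requests/limits to a service deployment.
# The numbers below are placeholders; tune per service.
set_baseline_limits() {
  local deployment="${1:?deployment required}"
  local ns="${2:?namespace required}"
  kubectl set resources deployment "${deployment}" \
    --requests=cpu=250m,memory=256Mi \
    --limits=cpu=500m,memory=512Mi \
    -n "${ns}"
}
```

Looping this over `"${SERVICES[@]}"` alongside the replica scaling above would give every deployment a floor and ceiling.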


@@ -0,0 +1,103 @@
#!/bin/bash
#
# Phase 2: Azure Infrastructure Setup
# Terraform infrastructure deployment
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 2: Azure Infrastructure Setup"
log_info "=========================================="
# 2.1 Azure Subscription Preparation
log_step "2.1 Preparing Azure subscription..."
cd "${PROJECT_ROOT}"
# Run Azure setup scripts
if [ -f "${INFRA_DIR}/scripts/azure-setup.sh" ]; then
log_info "Running Azure setup script..."
bash "${INFRA_DIR}/scripts/azure-setup.sh" || error_exit "Azure setup script failed"
else
log_warning "Azure setup script not found, skipping..."
fi
# Register resource providers
if [ -f "${INFRA_DIR}/scripts/azure-register-providers.sh" ]; then
log_info "Registering Azure resource providers..."
bash "${INFRA_DIR}/scripts/azure-register-providers.sh" || error_exit "Provider registration failed"
else
log_warning "Provider registration script not found, skipping..."
fi
# 2.2 Terraform Infrastructure Deployment
log_step "2.2 Deploying Terraform infrastructure..."
cd "${TERRAFORM_DIR}"
# Initialize Terraform
log_info "Initializing Terraform..."
terraform init || error_exit "Terraform initialization failed"
# Create initial state storage if needed
if [ "${CREATE_STATE_STORAGE:-false}" = "true" ]; then
log_info "Creating Terraform state storage..."
terraform plan -target=azurerm_resource_group.terraform_state \
-target=azurerm_storage_account.terraform_state \
-target=azurerm_storage_container.terraform_state \
|| log_warning "State storage resources may already exist"
terraform apply -target=azurerm_resource_group.terraform_state \
-target=azurerm_storage_account.terraform_state \
-target=azurerm_storage_container.terraform_state \
|| log_warning "State storage may already exist"
fi
# Plan infrastructure
log_info "Planning infrastructure changes..."
terraform plan -out=tfplan || error_exit "Terraform plan failed"
# Review plan (optional)
if [ "${AUTO_APPLY:-false}" != "true" ]; then
log_warning "Terraform plan created. Review tfplan before applying."
log_info "To apply: terraform apply tfplan"
log_info "To auto-apply: set AUTO_APPLY=true"
else
log_info "Applying Terraform plan..."
terraform apply tfplan || error_exit "Terraform apply failed"
log_success "Infrastructure deployed"
fi
# Get outputs
log_info "Retrieving Terraform outputs..."
terraform output -json > "${STATE_DIR}/terraform-outputs.json" || log_warning "Failed to save outputs"
# 2.3 Kubernetes Configuration
log_step "2.3 Configuring Kubernetes..."
# Get AKS credentials
log_info "Getting AKS credentials..."
az aks get-credentials --resource-group "${AKS_RESOURCE_GROUP}" \
--name "${AKS_NAME}" \
--overwrite-existing \
|| log_warning "AKS cluster may not exist yet"
# Verify cluster access
if kubectl cluster-info &> /dev/null; then
log_success "Kubernetes cluster accessible"
kubectl get nodes || log_warning "No nodes found"
else
log_warning "Kubernetes cluster not accessible yet"
fi
# Save state
save_state "phase2" "complete"
log_success "=========================================="
log_success "Phase 2: Azure Infrastructure - COMPLETE"
log_success "=========================================="
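Once the state storage resources exist, `terraform init` can be pointed at them with a backend configuration file. A sketch — every name below is a placeholder to be replaced with the Terraform outputs, not a value from this repo:

```shell
# Sketch: write an azurerm backend config for remote state.
# All names are placeholders -- substitute the actual Terraform outputs.
cat > backend.hcl <<'EOF'
resource_group_name  = "tfstate-rg"
storage_account_name = "tfstatestorage"
container_name       = "tfstate"
key                  = "the-order.terraform.tfstate"
EOF
# Then re-initialize against the remote backend:
# terraform init -backend-config=backend.hcl -migrate-state
```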


@@ -0,0 +1,48 @@
#!/bin/bash
#
# Phase 3: Entra ID Configuration
# Note: Most steps require manual configuration in Azure Portal
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 3: Entra ID Configuration"
log_info "=========================================="
log_warning "This phase requires manual steps in Azure Portal"
log_info "See docs/deployment/DEPLOYMENT_GUIDE.md for detailed instructions"
# Check if secrets already exist
log_step "3.1 Checking for existing Entra ID configuration..."
ENTRA_TENANT_ID=$(az keyvault secret show \
--vault-name "${KEY_VAULT_NAME}" \
--name "entra-tenant-id" \
--query value -o tsv 2>/dev/null || echo "")
if [ -n "${ENTRA_TENANT_ID}" ]; then
log_success "Entra ID configuration found in Key Vault"
log_info "Tenant ID: ${ENTRA_TENANT_ID}"
else
log_warning "Entra ID configuration not found"
log_info "Please complete manual steps:"
log_info " 1. Create App Registration in Azure Portal"
log_info " 2. Configure API permissions"
log_info " 3. Create client secret"
log_info " 4. Enable Verified ID service"
log_info " 5. Create credential manifest"
log_info ""
log_info "Then run: scripts/deploy/store-entra-secrets.sh"
fi
# Save state
save_state "phase3" "manual-steps-required"
log_success "=========================================="
log_success "Phase 3: Entra ID - Manual steps required"
log_success "=========================================="
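Manual steps 1 and 3 (app registration and client secret) can also be done from the CLI. A sketch — the display name is an assumption, and API permissions and Verified ID enablement still require the Portal:

```shell
# Sketch: create the App Registration and a client secret via CLI.
# Display name is illustrative; permissions must still be granted manually.
create_app_registration() {
  local app_id
  app_id=$(az ad app create \
    --display-name "the-order-${ENVIRONMENT:-dev}" \
    --query appId -o tsv)
  echo "Client ID: ${app_id}"
  # Generate a client secret (printed once -- store it in Key Vault immediately)
  az ad app credential reset --id "${app_id}" --query password -o tsv
}
```

The printed values feed directly into `scripts/deploy/store-entra-secrets.sh`.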


@@ -0,0 +1,89 @@
#!/bin/bash
#
# Phase 4: Database & Storage Setup
# Configure PostgreSQL, Storage Accounts, Redis, OpenSearch
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 4: Database & Storage Setup"
log_info "=========================================="
# 4.1 PostgreSQL Database Setup
log_step "4.1 Configuring PostgreSQL database..."
# Check if database exists
DB_EXISTS=$(az postgres db show \
--resource-group "${AKS_RESOURCE_GROUP}" \
--server-name "${POSTGRES_SERVER_NAME}" \
--name "${POSTGRES_DB_NAME}" \
--query name -o tsv 2>/dev/null || echo "")
if [ -z "${DB_EXISTS}" ]; then
log_info "Creating database ${POSTGRES_DB_NAME}..."
az postgres db create \
--resource-group "${AKS_RESOURCE_GROUP}" \
--server-name "${POSTGRES_SERVER_NAME}" \
--name "${POSTGRES_DB_NAME}" \
|| error_exit "Failed to create database"
log_success "Database created"
else
log_success "Database already exists"
fi
# Configure firewall rules for AKS
log_step "4.2 Configuring database firewall rules..."
# Get AKS outbound IPs (if using NAT gateway)
# For now, allow Azure services (0.0.0.0-0.0.0.0 is Azure's "allow Azure services"
# sentinel, which admits traffic from any Azure-hosted IP; tighten for production)
az postgres server firewall-rule create \
--resource-group "${AKS_RESOURCE_GROUP}" \
--server-name "${POSTGRES_SERVER_NAME}" \
--name "AllowAzureServices" \
--start-ip-address "0.0.0.0" \
--end-ip-address "0.0.0.0" \
--output none 2>/dev/null || log_info "Firewall rule may already exist"
log_success "Database firewall configured"
# 4.3 Storage Account Setup
log_step "4.3 Configuring storage accounts..."
# Verify storage account exists
STORAGE_EXISTS=$(az storage account show \
--name "${STORAGE_ACCOUNT_NAME}" \
--resource-group "${AKS_RESOURCE_GROUP}" \
--query name -o tsv 2>/dev/null || echo "")
if [ -z "${STORAGE_EXISTS}" ]; then
log_warning "Storage account ${STORAGE_ACCOUNT_NAME} not found"
log_info "Storage account should be created by Terraform"
else
log_success "Storage account found"
# Create containers
CONTAINERS=("intake-documents" "dataroom-deals" "credentials")
for container in "${CONTAINERS[@]}"; do
log_info "Creating container: ${container}..."
az storage container create \
--name "${container}" \
--account-name "${STORAGE_ACCOUNT_NAME}" \
--auth-mode login \
--output none 2>/dev/null && \
log_success "Container ${container} created" || \
log_info "Container ${container} may already exist"
done
fi
# Save state
save_state "phase4" "complete"
log_success "=========================================="
log_success "Phase 4: Database & Storage - COMPLETE"
log_success "=========================================="


@@ -0,0 +1,64 @@
#!/bin/bash
#
# Phase 5: Container Registry Setup
# Configure Azure Container Registry and attach to AKS
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 5: Container Registry Setup"
log_info "=========================================="
# 5.1 Verify ACR exists
log_step "5.1 Verifying Azure Container Registry..."
ACR_EXISTS=$(az acr show \
--name "${ACR_NAME}" \
--resource-group "${AKS_RESOURCE_GROUP}" \
--query name -o tsv 2>/dev/null || echo "")
if [ -z "${ACR_EXISTS}" ]; then
log_warning "ACR ${ACR_NAME} not found"
log_info "ACR should be created by Terraform"
log_info "Skipping ACR configuration"
exit 0
fi
log_success "ACR found: ${ACR_NAME}"
# 5.2 Configure ACR access
log_step "5.2 Configuring ACR access..."
# Enable admin user (or use managed identity)
az acr update --name "${ACR_NAME}" --admin-enabled true \
|| log_warning "Failed to enable admin user (may already be enabled)"
# 5.3 Attach ACR to AKS
log_step "5.3 Attaching ACR to AKS..."
az aks update \
--name "${AKS_NAME}" \
--resource-group "${AKS_RESOURCE_GROUP}" \
--attach-acr "${ACR_NAME}" \
|| log_warning "Failed to attach ACR (may already be attached)"
log_success "ACR attached to AKS"
# 5.4 Test ACR access
log_step "5.4 Testing ACR access..."
az acr login --name "${ACR_NAME}" || error_exit "Failed to login to ACR"
log_success "ACR access verified"
# Save state
save_state "phase5" "complete"
log_success "=========================================="
log_success "Phase 5: Container Registry - COMPLETE"
log_success "=========================================="
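Beyond `az acr login`, an end-to-end push path can be exercised by importing a small public image into the registry. A sketch — the source image and tag are illustrative assumptions:

```shell
# Sketch: verify the registry accepts images by importing a public test image.
verify_acr_push() {
  az acr import \
    --name "${ACR_NAME:?ACR_NAME required}" \
    --source mcr.microsoft.com/hello-world:latest \
    --image hello-world:smoke \
    --force
}
```

If the import succeeds, Docker pushes from the build phase should succeed as well.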


@@ -0,0 +1,140 @@
#!/bin/bash
#
# Phase 6: Application Build & Package
# Build applications and create Docker images
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 6: Application Build & Package"
log_info "=========================================="
cd "${PROJECT_ROOT}"
# 6.1 Build Packages
log_step "6.1 Building shared packages..."
if [ "${SKIP_BUILD:-false}" != "true" ]; then
log_info "Building all packages..."
pnpm build || error_exit "Failed to build packages"
log_success "All packages built"
else
log_info "Skipping build (SKIP_BUILD=true)"
fi
# 6.2 Build Frontend Applications
log_step "6.2 Building frontend applications..."
for app in "${APPS[@]}"; do
log_info "Building ${app}..."
pnpm --filter "${app}" build || log_warning "Failed to build ${app}"
done
log_success "Frontend applications built"
# 6.3 Build Backend Services
log_step "6.3 Building backend services..."
for service in "${SERVICES[@]}"; do
log_info "Building ${service}..."
pnpm --filter "@the-order/${service}" build || log_warning "Failed to build ${service}"
done
log_success "Backend services built"
# 6.4 Create Docker Images
log_step "6.4 Creating Docker images..."
# Check if Docker is running
if ! docker info &> /dev/null; then
error_exit "Docker is not running"
fi
# Login to ACR
log_info "Logging into Azure Container Registry..."
az acr login --name "${ACR_NAME}" || error_exit "Failed to login to ACR"
# Build and push service images
for service in "${SERVICES[@]}"; do
DOCKERFILE="${PROJECT_ROOT}/services/${service}/Dockerfile"
if [ ! -f "${DOCKERFILE}" ]; then
log_warning "Dockerfile not found for ${service}: ${DOCKERFILE}"
log_info "Skipping ${service} image build"
continue
fi
IMAGE_NAME="${ACR_NAME}.azurecr.io/${service}:${IMAGE_TAG}"
IMAGE_NAME_SHA="${ACR_NAME}.azurecr.io/${service}:$(git rev-parse --short HEAD 2>/dev/null || echo 'latest')"
log_info "Building ${service} image..."
docker build -t "${IMAGE_NAME}" \
-t "${IMAGE_NAME_SHA}" \
-f "${DOCKERFILE}" \
"${PROJECT_ROOT}" || error_exit "Failed to build ${service} image"
log_info "Pushing ${service} image..."
docker push "${IMAGE_NAME}" || error_exit "Failed to push ${service} image"
docker push "${IMAGE_NAME_SHA}" || log_warning "Failed to push ${service} SHA image"
log_success "${service} image built and pushed"
done
# Build and push app images
for app in "${APPS[@]}"; do
DOCKERFILE="${PROJECT_ROOT}/apps/${app}/Dockerfile"
if [ ! -f "${DOCKERFILE}" ]; then
log_warning "Dockerfile not found for ${app}: ${DOCKERFILE}"
log_info "Skipping ${app} image build"
continue
fi
IMAGE_NAME="${ACR_NAME}.azurecr.io/${app}:${IMAGE_TAG}"
IMAGE_NAME_SHA="${ACR_NAME}.azurecr.io/${app}:$(git rev-parse --short HEAD 2>/dev/null || echo 'latest')"
log_info "Building ${app} image..."
docker build -t "${IMAGE_NAME}" \
-t "${IMAGE_NAME_SHA}" \
-f "${DOCKERFILE}" \
"${PROJECT_ROOT}" || error_exit "Failed to build ${app} image"
log_info "Pushing ${app} image..."
docker push "${IMAGE_NAME}" || error_exit "Failed to push ${app} image"
docker push "${IMAGE_NAME_SHA}" || log_warning "Failed to push ${app} SHA image"
log_success "${app} image built and pushed"
done
# Sign images with Cosign (if available)
if command -v cosign &> /dev/null; then
log_step "6.5 Signing images with Cosign..."
for service in "${SERVICES[@]}"; do
IMAGE_NAME="${ACR_NAME}.azurecr.io/${service}:${IMAGE_TAG}"
log_info "Signing ${service} image..."
cosign sign --yes "${IMAGE_NAME}" || log_warning "Failed to sign ${service} image"
done
for app in "${APPS[@]}"; do
IMAGE_NAME="${ACR_NAME}.azurecr.io/${app}:${IMAGE_TAG}"
log_info "Signing ${app} image..."
cosign sign --yes "${IMAGE_NAME}" || log_warning "Failed to sign ${app} image"
done
log_success "Images signed"
else
log_warning "Cosign not found, skipping image signing"
fi
# Save state
save_state "phase6" "complete"
log_success "=========================================="
log_success "Phase 6: Build & Package - COMPLETE"
log_success "=========================================="
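Signed images can be checked after push. A sketch assuming key-pair signing with a local `cosign.pub` — the keyless OIDC flow used by `cosign sign --yes` needs `--certificate-identity` flags instead:

```shell
# Sketch: verify an image signature with a local public key.
# Assumes key-based signing; keyless verification uses different flags.
verify_image_signature() {
  local image="${1:?image required}"
  cosign verify --key cosign.pub "${image}"
}
```

Usage: `verify_image_signature "${ACR_NAME}.azurecr.io/intake-service:${IMAGE_TAG}"` (service name shown is an example).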


@@ -0,0 +1,70 @@
#!/bin/bash
#
# Phase 7: Database Migrations
# Run database schema migrations
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 7: Database Migrations"
log_info "=========================================="
cd "${PROJECT_ROOT}"
# Get database URL from Key Vault or environment
if [ -z "${DATABASE_URL:-}" ]; then
log_info "Retrieving DATABASE_URL from Azure Key Vault..."
DATABASE_URL=$(az keyvault secret show \
--vault-name "${KEY_VAULT_NAME}" \
--name "database-url-${ENVIRONMENT}" \
--query value -o tsv 2>/dev/null || echo "")
if [ -z "${DATABASE_URL}" ]; then
error_exit "DATABASE_URL not found in Key Vault and not set in environment"
fi
fi
log_step "7.1 Running database migrations for ${ENVIRONMENT}..."
# Export DATABASE_URL for migration script
export DATABASE_URL
# Run migrations
log_info "Running migrations..."
pnpm --filter @the-order/database migrate up || error_exit "Database migrations failed"
log_success "Database migrations completed"
# Verify schema
log_step "7.2 Verifying database schema..."
# Check if we can connect and list tables
if command -v psql &> /dev/null; then
# Extract connection details from DATABASE_URL
# Format: postgresql://user:pass@host:port/database
# Note: this naive split breaks if the password contains ':', '@', or '/'
DB_CONN=$(echo "${DATABASE_URL}" | sed 's|postgresql://||')
DB_USER=$(echo "${DB_CONN}" | cut -d':' -f1)
DB_PASS=$(echo "${DB_CONN}" | cut -d':' -f2 | cut -d'@' -f1)
DB_HOST=$(echo "${DB_CONN}" | cut -d'@' -f2 | cut -d':' -f1)
DB_PORT=$(echo "${DB_CONN}" | cut -d':' -f3 | cut -d'/' -f1)
DB_NAME=$(echo "${DB_CONN}" | cut -d'/' -f2)
log_info "Verifying connection to ${DB_HOST}:${DB_PORT}/${DB_NAME}..."
PGPASSWORD="${DB_PASS}" psql -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_USER}" -d "${DB_NAME}" -c "\dt" &> /dev/null && \
log_success "Database connection verified" || \
log_warning "Could not verify database connection with psql"
else
log_warning "psql not found, skipping schema verification"
fi
# Save state
save_state "phase7" "complete"
log_success "=========================================="
log_success "Phase 7: Database Migrations - COMPLETE"
log_success "=========================================="
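The `cut`-based URL parsing above breaks as soon as the password contains `:`, `@`, or `/`. A pure-bash regex alternative, shown as a sketch:

```shell
# Parse postgresql://user:pass@host:port/dbname, tolerating ':' and '@'
# inside the password (the host itself may not contain '@', ':', or '/').
parse_database_url() {
  local url="$1"
  if [[ "${url}" =~ ^postgresql://([^:]+):(.+)@([^@:/]+):([0-9]+)/([^/?]+) ]]; then
    DB_USER="${BASH_REMATCH[1]}"
    DB_PASS="${BASH_REMATCH[2]}"
    DB_HOST="${BASH_REMATCH[3]}"
    DB_PORT="${BASH_REMATCH[4]}"
    DB_NAME="${BASH_REMATCH[5]}"
  else
    return 1
  fi
}
```

Usage: `parse_database_url "${DATABASE_URL}"`, then the `DB_*` variables are set as before.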


@@ -0,0 +1,93 @@
#!/bin/bash
#
# Phase 8: Secrets Configuration
# Store secrets in Azure Key Vault
# Note: Some secrets may need to be set manually
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 8: Secrets Configuration"
log_info "=========================================="
# Verify Key Vault exists
log_step "8.1 Verifying Azure Key Vault..."
KV_EXISTS=$(az keyvault show \
--name "${KEY_VAULT_NAME}" \
--resource-group "${AKS_RESOURCE_GROUP}" \
--query name -o tsv 2>/dev/null || echo "")
if [ -z "${KV_EXISTS}" ]; then
error_exit "Key Vault ${KEY_VAULT_NAME} not found. Create it first with Terraform."
fi
log_success "Key Vault found: ${KEY_VAULT_NAME}"
# Store database URL if provided
if [ -n "${DATABASE_URL:-}" ]; then
log_step "8.2 Storing database URL..."
az keyvault secret set \
--vault-name "${KEY_VAULT_NAME}" \
--name "database-url-${ENVIRONMENT}" \
--value "${DATABASE_URL}" \
|| log_warning "Failed to store database URL"
log_success "Database URL stored"
fi
# Check for Entra secrets
log_step "8.3 Checking Entra ID secrets..."
ENTRA_SECRETS=("entra-tenant-id" "entra-client-id" "entra-client-secret" "entra-credential-manifest-id")
MISSING_SECRETS=()
for secret in "${ENTRA_SECRETS[@]}"; do
if ! az keyvault secret show \
--vault-name "${KEY_VAULT_NAME}" \
--name "${secret}" \
--query value -o tsv &> /dev/null; then
MISSING_SECRETS+=("${secret}")
fi
done
if [ ${#MISSING_SECRETS[@]} -gt 0 ]; then
log_warning "Missing Entra ID secrets: ${MISSING_SECRETS[*]}"
log_info "Run: ./scripts/deploy/store-entra-secrets.sh"
else
log_success "All Entra ID secrets found"
fi
# Store JWT secret if not exists
log_step "8.4 Storing JWT secret..."
if ! az keyvault secret show \
--vault-name "${KEY_VAULT_NAME}" \
--name "jwt-secret" \
--query value -o tsv &> /dev/null; then
JWT_SECRET=$(openssl rand -base64 32)
az keyvault secret set \
--vault-name "${KEY_VAULT_NAME}" \
--name "jwt-secret" \
--value "${JWT_SECRET}" \
|| error_exit "Failed to store JWT secret"
log_success "JWT secret generated and stored"
else
log_success "JWT secret already exists"
fi
log_info "Secrets configuration complete"
log_info "Note: Additional secrets may need to be set manually"
log_info "See docs/deployment/DEPLOYMENT_GUIDE.md Phase 8 for complete list"
# Save state
save_state "phase8" "complete"
log_success "=========================================="
log_success "Phase 8: Secrets Configuration - COMPLETE"
log_success "=========================================="


@@ -0,0 +1,73 @@
#!/bin/bash
#
# Phase 9: Infrastructure Services Deployment
# Deploy monitoring, logging, and infrastructure services
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "=========================================="
log_info "Phase 9: Infrastructure Services Deployment"
log_info "=========================================="
# Verify Kubernetes access
if ! kubectl cluster-info &> /dev/null; then
az aks get-credentials --resource-group "${AKS_RESOURCE_GROUP}" \
--name "${AKS_NAME}" \
--overwrite-existing
fi
# 9.1 External Secrets Operator
log_step "9.1 Deploying External Secrets Operator..."
if ! kubectl get crd externalsecrets.external-secrets.io &> /dev/null; then
log_info "Installing External Secrets Operator via Helm..."
command -v helm &> /dev/null || error_exit "Helm is required to install External Secrets"
helm repo add external-secrets https://charts.external-secrets.io 2>/dev/null || true
helm repo update
helm install external-secrets external-secrets/external-secrets \
--namespace external-secrets-system \
--create-namespace \
--set installCRDs=true \
|| error_exit "Failed to install External Secrets Operator"
log_info "Waiting for operator to be ready..."
kubectl wait --for=condition=ready pod \
-l app.kubernetes.io/name=external-secrets \
-n external-secrets-system \
--timeout=300s || log_warning "Operator not ready yet"
else
log_success "External Secrets Operator already installed"
fi
# 9.2 Monitoring Stack (Prometheus & Grafana)
log_step "9.2 Deploying monitoring stack..."
if ! command -v helm &> /dev/null; then
log_warning "Helm not found. Install Helm to deploy monitoring stack."
log_info "See: https://helm.sh/docs/intro/install/"
else
if ! helm repo list | grep -q prometheus-community; then
log_info "Adding Prometheus Helm repository..."
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
fi
if ! helm list -n monitoring | grep -q prometheus; then
log_info "Installing Prometheus stack..."
kubectl create namespace monitoring --dry-run=client -o yaml | kubectl apply -f -
helm install prometheus prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--create-namespace \
|| log_warning "Prometheus installation failed or already exists"
else
log_success "Prometheus already installed"
fi
fi
log_info "Monitoring stack deployment complete"
log_info "Access Grafana: kubectl port-forward svc/prometheus-grafana 3000:80 -n monitoring"
# Save state
save_state "phase9" "complete"
log_success "=========================================="
log_success "Phase 9: Infrastructure Services - COMPLETE"
log_success "=========================================="
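With the operator installed, Key Vault secrets sync into Kubernetes via a SecretStore plus ExternalSecret. A sketch manifest — the store name, namespace, and target secret name are illustrative assumptions:

```shell
# Sketch: ExternalSecret pulling the JWT secret from Key Vault.
# Store name, namespace, and secret names are placeholders.
cat > external-secret.yaml <<'EOF'
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: jwt-secret
  namespace: the-order
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: azure-keyvault
    kind: SecretStore
  target:
    name: jwt-secret
  data:
    - secretKey: jwt-secret
      remoteRef:
        key: jwt-secret
EOF
# Then: kubectl apply -f external-secret.yaml
```

A matching SecretStore pointing at `${KEY_VAULT_NAME}` (with workload identity or a service principal) must exist first.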


@@ -0,0 +1,58 @@
#!/bin/bash
#
# Store Entra ID secrets in Azure Key Vault
# Run this after completing manual Entra ID setup
#
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/config.sh"
log_info "Storing Entra ID secrets in Azure Key Vault..."
# Prompt for values if not in environment
if [ -z "${ENTRA_TENANT_ID:-}" ]; then
read -rp "Enter Entra Tenant ID: " ENTRA_TENANT_ID
fi
if [ -z "${ENTRA_CLIENT_ID:-}" ]; then
read -rp "Enter Entra Client ID: " ENTRA_CLIENT_ID
fi
if [ -z "${ENTRA_CLIENT_SECRET:-}" ]; then
read -rsp "Enter Entra Client Secret: " ENTRA_CLIENT_SECRET
echo
fi
if [ -z "${ENTRA_CREDENTIAL_MANIFEST_ID:-}" ]; then
read -rp "Enter Entra Credential Manifest ID: " ENTRA_CREDENTIAL_MANIFEST_ID
fi
# Store secrets
az keyvault secret set \
--vault-name "${KEY_VAULT_NAME}" \
--name "entra-tenant-id" \
--value "${ENTRA_TENANT_ID}" \
|| error_exit "Failed to store tenant ID"
az keyvault secret set \
--vault-name "${KEY_VAULT_NAME}" \
--name "entra-client-id" \
--value "${ENTRA_CLIENT_ID}" \
|| error_exit "Failed to store client ID"
az keyvault secret set \
--vault-name "${KEY_VAULT_NAME}" \
--name "entra-client-secret" \
--value "${ENTRA_CLIENT_SECRET}" \
|| error_exit "Failed to store client secret"
az keyvault secret set \
--vault-name "${KEY_VAULT_NAME}" \
--name "entra-credential-manifest-id" \
--value "${ENTRA_CREDENTIAL_MANIFEST_ID}" \
|| error_exit "Failed to store manifest ID"
log_success "Entra ID secrets stored in Key Vault"
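After storing, a quick check confirms all four secrets landed in the vault. A sketch using the same secret names as above:

```shell
# Sketch: confirm all four Entra secrets exist in Key Vault.
verify_entra_secrets() {
  local missing=0
  local name
  for name in entra-tenant-id entra-client-id entra-client-secret entra-credential-manifest-id; do
    az keyvault secret show \
      --vault-name "${KEY_VAULT_NAME:?}" \
      --name "${name}" \
      --query name -o tsv > /dev/null \
      || { echo "missing: ${name}"; missing=1; }
  done
  return "${missing}"
}
```

Phase 8 performs the same check when it runs, so this is optional belt-and-braces.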