Phase 2: Docker Compose Deployment

Overview

Phase 2 deploys multi-service docker-compose stacks to Phase 1 VMs. Each region gets a region-specific docker-compose file with services including Besu, FireFly, Cacti, Chainlink, databases, and monitoring.

Prerequisites

  1. Phase 1 must be deployed first - Phase 2 depends on Phase 1 VMs
  2. SSH access to VMs - Requires:
    • SSH private key corresponding to the public key used in Phase 1
    • Network connectivity to VMs (VPN/ExpressRoute/Cloudflare Tunnel if using private IPs)
  3. Docker Engine installed - Phase 1 cloud-init installs Docker, but verify it's running

Architecture

Region to Docker Compose Mapping

  • Central US (centralus) → docker-compose.cus.yml
    • Besu + FireFly Core A + Cacti Core A + Chainlink A + shared DBs + agents
  • East US (eastus) → docker-compose.eus.yml
    • Besu + FireFly Core B + primary FireFly/Cacti DBs + Chainlink B + agents
  • East US 2 (eastus2) → docker-compose.eus2.yml
    • Besu + FireFly DataExchange A + IPFS + Cacti UI + Prometheus primary + agents
  • West US (westus) → docker-compose.wus.yml
    • Besu + FireFly DataExchange B + Prometheus secondary + Grafana + Alertmanager + Chainlink C + agents
  • West US 2 (westus2) → docker-compose.wus2.yml
    • Besu + Loki + Log UI + FireFly Postgres Replica + Cacti Core B + agents
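
The region-to-compose-file mapping above can be expressed as a small lookup table. A minimal sketch (the region keys and file names are taken directly from the list above; the helper function is illustrative, not part of the deployment tooling):

```python
# Region-to-compose-file mapping, mirroring the list above.
REGION_COMPOSE_FILES = {
    "centralus": "docker-compose.cus.yml",
    "eastus":    "docker-compose.eus.yml",
    "eastus2":   "docker-compose.eus2.yml",
    "westus":    "docker-compose.wus.yml",
    "westus2":   "docker-compose.wus2.yml",
}

def compose_file_for(region: str) -> str:
    """Return the Phase 2 compose file for an Azure region, or raise for unknown regions."""
    try:
        return REGION_COMPOSE_FILES[region]
    except KeyError:
        raise ValueError(f"no Phase 2 compose file defined for region {region!r}")

print(compose_file_for("eastus2"))  # docker-compose.eus2.yml
```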

Configuration

Variables

Create a terraform.tfvars file or set environment variables:

environment = "prod"
vm_admin_username = "besuadmin"
ssh_private_key_path = "/path/to/ssh/private/key"

# Phase 1 VM information - pass from Phase 1 outputs
phase1_vm_info = {
  centralus = {
    vm_names    = ["az-p-cus-vm-besu-node-0"]
    private_ips = ["10.3.1.4"]
    public_ips  = []
    resource_group = "az-p-cus-rg-comp-001"
    region      = "centralus"
  }
  eastus = {
    vm_names    = ["az-p-eus-vm-besu-node-0"]
    private_ips = ["10.1.1.4"]
    public_ips  = []
    resource_group = "az-p-eus-rg-comp-001"
    region      = "eastus"
  }
  # ... similar for other regions
}

docker_compose_source_path = "../../../docker/phase2"

Getting Phase 1 VM Information

If Phase 1 was deployed via Terraform, you can get the VM information from Phase 1 outputs:

cd terraform/phases/phase1
terraform output -json phase1_us_regions > phase1-outputs.json

Then use a script to convert Phase 1 outputs to Phase 2 phase1_vm_info format.
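
A sketch of that conversion, assuming each region entry in the Phase 1 output carries vm_names, private_ips, public_ips, and resource_group keys as in the example above (adjust the field names to match your actual phase1_us_regions output):

```python
import json

def to_phase1_vm_info(phase1_outputs: dict) -> str:
    """Render per-region Phase 1 outputs as a phase1_vm_info tfvars block.

    Assumes each region entry has vm_names, private_ips, public_ips,
    and resource_group keys, matching the example shown earlier.
    """
    lines = ["phase1_vm_info = {"]
    for region, info in sorted(phase1_outputs.items()):
        lines.append(f"  {region} = {{")
        lines.append(f"    vm_names       = {json.dumps(info['vm_names'])}")
        lines.append(f"    private_ips    = {json.dumps(info['private_ips'])}")
        lines.append(f"    public_ips     = {json.dumps(info['public_ips'])}")
        lines.append(f"    resource_group = {json.dumps(info['resource_group'])}")
        lines.append(f"    region         = {json.dumps(region)}")
        lines.append("  }")
    lines.append("}")
    return "\n".join(lines)

# Example with one region; in practice, load json.load(open("phase1-outputs.json")).
sample = {
    "centralus": {
        "vm_names": ["az-p-cus-vm-besu-node-0"],
        "private_ips": ["10.3.1.4"],
        "public_ips": [],
        "resource_group": "az-p-cus-rg-comp-001",
    }
}
print(to_phase1_vm_info(sample))
```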

Deployment

Method 1: Terraform Apply

cd terraform/phases/phase2
terraform init
terraform plan
terraform apply

Method 2: Deployment Script

cd terraform/phases/phase2/scripts
./deploy-phase2.sh [region]

If no region is specified, the script deploys to all regions.

Service Management

Start Services

cd terraform/phases/phase2/scripts
./start-services.sh [region]

Stop Services

cd terraform/phases/phase2/scripts
./stop-services.sh [region]

Check Status

cd terraform/phases/phase2/scripts
./status.sh [region]

Manual Service Management

SSH to the VM and use systemctl:

# Start
sudo systemctl start phase2-stack.service

# Stop
sudo systemctl stop phase2-stack.service

# Restart
sudo systemctl restart phase2-stack.service

# Status
sudo systemctl status phase2-stack.service

# View logs
sudo journalctl -u phase2-stack.service -f

Or use docker compose directly:

cd /opt/docker-compose
docker compose ps
docker compose logs -f
docker compose restart

Terraform Outputs

View Deployment Status

cd terraform/phases/phase2
terraform output deployment_status
terraform output systemd_service_status
terraform output management_commands

Example Output

{
  "centralus": {
    "region": "centralus",
    "vm_name": "az-p-cus-vm-besu-node-0",
    "compose_file": "docker-compose.cus.yml",
    "status": "deployed",
    "docker_compose_path": "/opt/docker-compose/docker-compose.yml"
  },
  ...
}

File Locations on VMs

  • Docker Compose File: /opt/docker-compose/docker-compose.yml
  • Systemd Service: /etc/systemd/system/phase2-stack.service
  • Volume Mounts:
    • Besu: /opt/besu/{data,config,keys,logs}
    • FireFly: /opt/firefly/{postgres,postgres-primary,postgres-replica}
    • Cacti: /opt/cacti/{postgres,postgres-primary}
    • Prometheus: /opt/prometheus
    • Grafana: /opt/grafana, /opt/grafana-logs
    • Alertmanager: /opt/alertmanager
    • Loki: /opt/loki/{data,config}
    • IPFS: /opt/ipfs/data
    • Promtail: /opt/promtail

Network Connectivity

Important: Phase 1 VMs use private IPs only. To deploy Phase 2, ensure:

  1. VPN/ExpressRoute is configured to access VM private IPs, OR
  2. Cloudflare Tunnel is running on VMs, OR
  3. Bastion Host is configured for SSH access, OR
  4. VMs are temporarily assigned public IPs for deployment

The Terraform provisioner connects using:

  • The private IP, if one is available
  • The public IP as a fallback
  • The SSH private key at the path set in the ssh_private_key_path variable
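
The IP selection above amounts to a simple fallback rule. A minimal sketch of the equivalent logic (illustrative only; the actual provisioner connection is configured in HCL):

```python
def pick_connection_ip(private_ips: list[str], public_ips: list[str]) -> str:
    """Prefer a private IP; fall back to a public IP; fail if neither exists."""
    if private_ips:
        return private_ips[0]
    if public_ips:
        return public_ips[0]
    raise ValueError("VM has neither a private nor a public IP to connect to")

print(pick_connection_ip(["10.3.1.4"], []))       # 10.3.1.4
print(pick_connection_ip([], ["20.1.2.3"]))       # 20.1.2.3
```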

Troubleshooting

SSH Connection Issues

If you cannot SSH to VMs:

  1. Verify network connectivity (VPN/ExpressRoute/Bastion)
  2. Check SSH key path and permissions
  3. Verify VM is running and accessible
  4. Check NSG rules allow SSH from your IP

Docker Compose Issues

If services fail to start:

  1. Check docker compose file syntax: docker compose config
  2. Verify all required directories exist with correct permissions
  3. Check docker logs: docker compose logs
  4. Verify systemd service: sudo systemctl status phase2-stack.service

Volume Mount Issues

If volumes are not accessible:

  1. Verify directories exist: ls -la /opt/*
  2. Fix ownership if needed: sudo chown -R besuadmin:besuadmin /opt/*
  3. Ensure Docker has access: sudo usermod -aG docker besuadmin

Next Steps

After Phase 2 deployment:

  1. Verify all services are running: ./status.sh all
  2. Configure service-specific settings (database passwords, API keys, etc.)
  3. Set up monitoring and alerting
  4. Configure cross-service connectivity (FireFly to Besu, etc.)
  5. Test end-to-end workflows

Related Documentation

  • Phase 1 Deployment: ../phase1/README.md
  • Docker Compose Files: ../../../docker/phase2/
  • Service Documentation: See individual service documentation in docs/