Files
smom-dbis-138/scripts/deployment/resolve-aks-issue.sh
defiQUG 1fb7266469 Add Oracle Aggregator and CCIP Integration
- Introduced Aggregator.sol for Chainlink-compatible oracle functionality, including round-based updates and access control.
- Added OracleWithCCIP.sol to extend Aggregator with CCIP cross-chain messaging capabilities.
- Created .gitmodules to include OpenZeppelin contracts as a submodule.
- Developed a comprehensive deployment guide in NEXT_STEPS_COMPLETE_GUIDE.md for Phase 2 and smart contract deployment.
- Implemented Vite configuration for the orchestration portal, supporting both Vue and React frameworks.
- Added server-side logic for the Multi-Cloud Orchestration Portal, including API endpoints for environment management and monitoring.
- Created scripts for resource import and usage validation across non-US regions.
- Added tests for CCIP error handling and integration to ensure robust functionality.
- Included various new files and directories for the orchestration portal and deployment scripts.
2025-12-12 14:57:48 -08:00

41 lines
1.4 KiB
Bash
Executable File

#!/usr/bin/env bash
# Resolve AKS Deployment Issue
set -euo pipefail
cd "$(dirname "$0")/../.."
echo "==================================================================="
echo " RESOLVING AKS DEPLOYMENT ISSUE"
echo "==================================================================="
echo ""
# The issue: Terraform is trying to modify a cluster that exists in a
# different subscription. We need to do one of the following:
# 1. Remove it from state and create a new cluster
# 2. Import the existing cluster properly
# 3. Fix the auto-scaling configuration
cd terraform
# Check current state
echo "Checking Terraform state..."
if terraform state list 2>/dev/null | grep -q "module.aks.azurerm_kubernetes_cluster.main"; then
    echo "⚠️  Cluster found in state"
    echo "   Removing from state to recreate in correct subscription..."
    terraform state rm module.aks.azurerm_kubernetes_cluster.main 2>&1 || echo "Not in state or already removed"
else
    echo "✅ Cluster not in state"
fi
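# Sketch of option 2 (import instead of recreate), left commented out because it
# requires the correct subscription context and real resource names; the
# subscription ID, resource group, and cluster name below are placeholders:
#   az account set --subscription "<correct-subscription-id>"
#   terraform import module.aks.azurerm_kubernetes_cluster.main \
#     "/subscriptions/<correct-subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>"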
# The auto-scaling error suggests we need to disable auto-scaling
# or set min/max counts instead of node_count when auto-scaling is enabled
echo ""
echo "✅ State cleaned. Ready to apply with correct configuration."
echo ""
echo "Note: The auto-scaling configuration needs to be fixed in the"
echo " Kubernetes module to either:"
echo " 1. Disable auto-scaling, OR"
echo " 2. Set min_count and max_count instead of node_count"
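# For reference, a hedged sketch of the fixed node pool block in the Kubernetes
# module (azurerm provider v3 attribute names; values are placeholders):
#   default_node_pool {
#     enable_auto_scaling = true
#     min_count           = 1
#     max_count           = 3
#     # omit node_count so Terraform does not fight the autoscaler
#   }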