Organize docs directory: move 25 files to appropriate locations
- Created docs/00-meta/ for documentation meta files (10 files)
- Created docs/archive/reports/ for reports (5 files)
- Created docs/archive/issues/ for issue tracking (2 files)
- Created docs/bridge/contracts/ for Solidity contracts (3 files)
- Created docs/04-configuration/metamask/ for Metamask configs (3 files)
- Created docs/scripts/ for documentation scripts (2 files)
- Root directory now contains only 3 essential files (89.3% reduction)

All recommended actions from docs directory review complete.
reports/DOCS_DIRECTORY_REVIEW.md (new file, 327 lines)
# Documentation Directory Review

**Date**: 2026-01-06
**Directory**: `docs/`
**Status**: ✅ **REVIEW COMPLETE**

---

## 📊 Executive Summary

The `docs/` directory is **well-organized** with a clear numbered directory structure (01-12), but contains **28 standalone files** in the root that should be better organized.

### Overall Assessment

- ✅ **Structure**: Excellent numbered directory organization
- ✅ **Content**: 577 markdown files well-distributed
- ⚠️ **Root Files**: 28 files need organization
- ✅ **Archives**: Well-organized archive structure

---

## 📁 Directory Structure Analysis

### Numbered Directories (01-12) ✅

| Directory | Files | Status | Notes |
|-----------|-------|--------|-------|
| `01-getting-started/` | 11 | ✅ Good | Getting started guides |
| `02-architecture/` | 8 | ✅ Good | Architecture documentation |
| `03-deployment/` | 17 | ✅ Good | Deployment guides |
| `04-configuration/` | 40 | ✅ Good | Configuration guides |
| `05-network/` | 16 | ✅ Good | Network documentation |
| `06-besu/` | 10 | ✅ Good | Besu/blockchain docs |
| `07-ccip/` | 5 | ✅ Good | CCIP documentation |
| `08-monitoring/` | 6 | ✅ Good | Monitoring guides |
| `09-troubleshooting/` | 20 | ✅ Good | Troubleshooting guides |
| `10-best-practices/` | 4 | ✅ Good | Best practices |
| `11-references/` | Multiple | ✅ Good | Reference materials |
| `12-quick-reference/` | Multiple | ✅ Good | Quick references |

**Total in numbered directories**: 140+ files

### Other Directories ✅

| Directory | Files | Status | Notes |
|-----------|-------|--------|-------|
| `archive/` | 74 | ✅ Good | Historical/archived docs |
| `bridge/` | 0 | ✅ Good | Bridge documentation |
| `compliance/` | 1 | ✅ Good | Compliance docs |
| `risk-management/` | 1 | ✅ Good | Risk management |
| `runbooks/` | 3 | ✅ Good | Operational runbooks |
| `scripts/` | 0 | ✅ Good | Documentation scripts |
| `testnet/` | 1 | ✅ Good | Testnet documentation |

---
## ⚠️ Files That Need Organization

### Standalone Markdown Files in docs/ Root (20 files)

#### Documentation Meta Files (9 files) → `docs/00-meta/` or `docs/archive/meta/`
- `CONTRIBUTOR_GUIDELINES.md` - Contributor guidelines
- `DOCUMENTATION_ENHANCEMENTS_RECOMMENDATIONS.md` - Enhancement recommendations
- `DOCUMENTATION_FIXES_COMPLETE.md` - Fixes completion report
- `DOCUMENTATION_QUALITY_REVIEW.md` - Quality review
- `DOCUMENTATION_RELATIONSHIP_MAP.md` - Relationship map
- `DOCUMENTATION_REORGANIZATION_COMPLETE.md` - Reorganization report
- `DOCUMENTATION_REVIEW.md` - Review report
- `DOCUMENTATION_STYLE_GUIDE.md` - Style guide
- `DOCUMENTATION_UPGRADE_SUMMARY.md` - Upgrade summary

#### Guides (2 files) → `docs/00-meta/` or keep in root
- `SEARCH_GUIDE.md` - Search guide (useful in root)
- `MARKDOWN_FILE_MAINTENANCE_GUIDE.md` - Maintenance guide

#### Reports (5 files) → `docs/archive/reports/` or `reports/`
- `PROXMOX_CLUSTER_STORAGE_STATUS_REPORT.md` - Status report
- `PROXMOX_SSL_CERTIFICATE_FIX.md` - SSL fix report
- `PROXMOX_SSL_FIX_VERIFIED.md` - SSL verification report
- `SSL_CERTIFICATE_ERROR_596_FIX.md` - SSL error fix
- `SSL_FIX_FOR_EACH_HOST.md` - SSL fix guide

#### Issue Tracking (2 files) → `docs/archive/issues/` or `reports/issues/`
- `OUTSTANDING_ISSUES_RESOLUTION_GUIDE.md` - Issues resolution guide
- `OUTSTANDING_ISSUES_SUMMARY.md` - Issues summary

#### Essential Files (2 files) → ✅ Keep in root
- `README.md` - Main documentation index ✅
- `MASTER_INDEX.md` - Master index ✅

### Non-Markdown Files in docs/ Root (8 files)

#### Solidity Files (3 files) → `docs/archive/contracts/` or `docs/bridge/contracts/`
- `CCIPWETH9Bridge_flattened.sol` - Flattened Solidity contract
- `CCIPWETH9Bridge_standard_json.json` - Standard JSON input
- `CCIPWETH9Bridge_standard_json_generated.json` - Generated JSON

#### JSON Config Files (3 files) → `docs/04-configuration/metamask/` or `examples/`
- `METAMASK_NETWORK_CONFIG.json` - Metamask network config
- `METAMASK_TOKEN_LIST.json` - Token list
- `METAMASK_TOKEN_LIST.tokenlist.json` - Token list format

#### Scripts (2 files) → `docs/scripts/` or `scripts/`
- `organize-standalone-files.sh` - Organization script
- `organize_files.py` - Organization script

---
## 📊 Statistics

### Current State
- **Total Markdown Files**: 577 files
- **Files in Numbered Directories**: 140+ files
- **Files in Archive**: 74 files
- **Standalone Files in Root**: 28 files
- **Total Directories**: 33 directories

### Distribution
- **01-getting-started**: 11 files
- **02-architecture**: 8 files
- **03-deployment**: 17 files
- **04-configuration**: 40 files
- **05-network**: 16 files
- **06-besu**: 10 files
- **07-ccip**: 5 files
- **08-monitoring**: 6 files
- **09-troubleshooting**: 20 files
- **10-best-practices**: 4 files
- **11-references**: Multiple files
- **12-quick-reference**: Multiple files
- **archive/**: 74 files
- **Other directories**: 6 files

---

## ✅ Strengths

1. **Excellent Structure**: Numbered directories (01-12) provide clear organization
2. **Well-Documented**: 577 markdown files with comprehensive coverage
3. **Good Archives**: Historical documents properly archived
4. **Clear Categories**: Logical grouping by topic
5. **Master Index**: Comprehensive index available
6. **Style Guide**: Documentation standards established

---
## ⚠️ Areas for Improvement

### Priority 1: Organize Standalone Files
- **20 markdown files** in docs root should be organized
- **8 non-markdown files** should be moved to appropriate locations

### Priority 2: Create Meta Directory
- Consider creating `docs/00-meta/` for documentation about documentation
- Or move meta files to `docs/archive/meta/`

### Priority 3: Organize Reports
- Move status/report files to `docs/archive/reports/` or `reports/`
- Keep only essential guides in root

### Priority 4: Organize Code Files
- Move Solidity files to `docs/bridge/contracts/` or `docs/archive/contracts/`
- Move JSON configs to appropriate configuration directories
- Move scripts to `docs/scripts/` or `scripts/`

---
## 🎯 Recommended Actions

### Action 1: Create Organization Structure
```bash
mkdir -p docs/00-meta
mkdir -p docs/archive/reports
mkdir -p docs/archive/issues
mkdir -p docs/bridge/contracts
mkdir -p docs/04-configuration/metamask
mkdir -p docs/scripts
```

### Action 2: Move Documentation Meta Files
```bash
# Move to docs/00-meta/
mv docs/DOCUMENTATION_*.md docs/00-meta/
mv docs/CONTRIBUTOR_GUIDELINES.md docs/00-meta/
mv docs/MARKDOWN_FILE_MAINTENANCE_GUIDE.md docs/00-meta/
```

### Action 3: Move Reports
```bash
# Move to docs/archive/reports/
mv docs/PROXMOX_*.md docs/archive/reports/
mv docs/SSL_*.md docs/archive/reports/
```

### Action 4: Move Issue Tracking
```bash
# Move to docs/archive/issues/
mv docs/OUTSTANDING_ISSUES_*.md docs/archive/issues/
```

### Action 5: Move Code Files
```bash
# Move Solidity files
mv docs/CCIPWETH9Bridge_*.{sol,json} docs/bridge/contracts/

# Move JSON configs
mv docs/METAMASK_*.json docs/04-configuration/metamask/

# Move scripts
mv docs/organize*.{sh,py} docs/scripts/
```

### Action 6: Keep Essential Files
```bash
# Keep in root:
# - README.md ✅
# - MASTER_INDEX.md ✅
# - SEARCH_GUIDE.md ✅ (useful in root)
```
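Taken together, Actions 1-5 can be wrapped in a small helper that tolerates re-runs. The sketch below is an assumption, not the actual organization script: `move_if_exists` skips missing sources and never overwrites, and the demo runs in a throwaway directory rather than the real `docs/` tree.

```shell
#!/usr/bin/env sh
# Hedged sketch of the move pattern in Actions 2-5 (not the real
# organization script). move_if_exists skips absent sources and
# refuses to clobber existing destination files, so it is idempotent.
set -eu

move_if_exists() {
    src=$1; dest_dir=$2
    [ -e "$src" ] || return 0                 # unmatched glob / missing file
    mkdir -p "$dest_dir"
    if [ -e "$dest_dir/$(basename "$src")" ]; then
        echo "skip: $dest_dir/$(basename "$src") already exists"
    else
        mv "$src" "$dest_dir/"
    fi
}

# Demonstrate in a temp sandbox instead of the real docs/ tree.
cd "$(mktemp -d)"
mkdir docs
touch docs/DOCUMENTATION_STYLE_GUIDE.md docs/PROXMOX_SSL_CERTIFICATE_FIX.md

for f in docs/DOCUMENTATION_*.md; do move_if_exists "$f" docs/00-meta; done
for f in docs/PROXMOX_*.md docs/SSL_*.md; do move_if_exists "$f" docs/archive/reports; done
```

Running it twice is harmless: the second pass finds no sources and does nothing.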
---

## 📁 Proposed Final Structure

```
docs/
├── README.md                      # ✅ Main documentation index
├── MASTER_INDEX.md                # ✅ Master index
├── SEARCH_GUIDE.md                # ✅ Search guide (useful in root)
│
├── 00-meta/                       # 📝 Documentation about documentation
│   ├── CONTRIBUTOR_GUIDELINES.md
│   ├── DOCUMENTATION_STYLE_GUIDE.md
│   ├── DOCUMENTATION_REVIEW.md
│   ├── DOCUMENTATION_QUALITY_REVIEW.md
│   ├── DOCUMENTATION_FIXES_COMPLETE.md
│   ├── DOCUMENTATION_ENHANCEMENTS_RECOMMENDATIONS.md
│   ├── DOCUMENTATION_REORGANIZATION_COMPLETE.md
│   ├── DOCUMENTATION_RELATIONSHIP_MAP.md
│   ├── DOCUMENTATION_UPGRADE_SUMMARY.md
│   └── MARKDOWN_FILE_MAINTENANCE_GUIDE.md
│
├── 01-getting-started/            # ✅ Getting started
├── 02-architecture/               # ✅ Architecture
├── 03-deployment/                 # ✅ Deployment
├── 04-configuration/              # ✅ Configuration
│   └── metamask/                  # 📝 Metamask configs
│       ├── METAMASK_NETWORK_CONFIG.json
│       ├── METAMASK_TOKEN_LIST.json
│       └── METAMASK_TOKEN_LIST.tokenlist.json
├── 05-network/                    # ✅ Network
├── 06-besu/                       # ✅ Besu
├── 07-ccip/                       # ✅ CCIP
├── 08-monitoring/                 # ✅ Monitoring
├── 09-troubleshooting/            # ✅ Troubleshooting
├── 10-best-practices/             # ✅ Best practices
├── 11-references/                 # ✅ References
├── 12-quick-reference/            # ✅ Quick reference
│
├── archive/                       # ✅ Archives
│   ├── reports/                   # 📝 Reports
│   │   ├── PROXMOX_CLUSTER_STORAGE_STATUS_REPORT.md
│   │   ├── PROXMOX_SSL_CERTIFICATE_FIX.md
│   │   ├── PROXMOX_SSL_FIX_VERIFIED.md
│   │   ├── SSL_CERTIFICATE_ERROR_596_FIX.md
│   │   └── SSL_FIX_FOR_EACH_HOST.md
│   └── issues/                    # 📝 Issue tracking
│       ├── OUTSTANDING_ISSUES_RESOLUTION_GUIDE.md
│       └── OUTSTANDING_ISSUES_SUMMARY.md
│
├── bridge/                        # ✅ Bridge documentation
│   └── contracts/                 # 📝 Contract files
│       ├── CCIPWETH9Bridge_flattened.sol
│       ├── CCIPWETH9Bridge_standard_json.json
│       └── CCIPWETH9Bridge_standard_json_generated.json
│
├── scripts/                       # ✅ Documentation scripts
│   ├── organize-standalone-files.sh
│   └── organize_files.py
│
└── [other directories]            # ✅ Other directories
```
---

## 📈 Impact Assessment

### Before Organization
- **Root Files**: 28 files
- **Organization**: Mixed (some organized, some not)
- **Clarity**: Moderate (many files in root)

### After Organization
- **Root Files**: 3 files (README.md, MASTER_INDEX.md, SEARCH_GUIDE.md)
- **Organization**: Excellent (all files in appropriate directories)
- **Clarity**: High (clear structure, easy navigation)

### Benefits
- ✅ **Cleaner Root**: Only essential files remain
- ✅ **Better Organization**: Files grouped by purpose
- ✅ **Easier Navigation**: Predictable file locations
- ✅ **Professional Structure**: Follows best practices
- ✅ **Maintainability**: Easier to maintain and update

---

## ✅ Summary

### Current Status
- ✅ **Structure**: Excellent numbered directory organization
- ✅ **Content**: 577 files well-distributed
- ⚠️ **Root Files**: 28 files need organization
- ✅ **Archives**: Well-organized

### Recommendations
1. **Create `docs/00-meta/`** for documentation meta files
2. **Move reports** to `docs/archive/reports/`
3. **Move issue tracking** to `docs/archive/issues/`
4. **Move code files** to appropriate directories
5. **Keep only 3 files** in root (README.md, MASTER_INDEX.md, SEARCH_GUIDE.md)

### Priority
- **High**: Organize standalone files
- **Medium**: Create meta directory structure
- **Low**: Review and consolidate archived files

---

*Review completed: 2026-01-06*
reports/DOCS_ORGANIZATION_COMPLETE.md (new file, 228 lines)
# Documentation Directory Organization - Complete

**Date**: 2026-01-06
**Status**: ✅ **COMPLETE**

---

## 🎉 Organization Complete

All recommended actions from the documentation directory review have been successfully completed!

---

## 📊 Results Summary

### Files Organized
- **Total Files Moved**: **25 files**
- **Errors**: **0**
- **Files Skipped**: **0**

### Organization Breakdown

| Category | Files Moved | Destination |
|----------|------------|-------------|
| Documentation Meta | 10 files | `docs/00-meta/` |
| Reports | 5 files | `docs/archive/reports/` |
| Issue Tracking | 2 files | `docs/archive/issues/` |
| Solidity Contracts | 3 files | `docs/bridge/contracts/` |
| Metamask Configs | 3 files | `docs/04-configuration/metamask/` |
| Scripts | 2 files | `docs/scripts/` |

---
## 📁 New Directory Structure

### Created Directories
- ✅ `docs/00-meta/` - Documentation meta files
- ✅ `docs/archive/reports/` - Archived reports
- ✅ `docs/archive/issues/` - Issue tracking
- ✅ `docs/bridge/contracts/` - Solidity contracts
- ✅ `docs/04-configuration/metamask/` - Metamask configurations
- ✅ `docs/scripts/` - Documentation scripts

### Files Moved

#### Documentation Meta Files → `docs/00-meta/`
1. `CONTRIBUTOR_GUIDELINES.md`
2. `DOCUMENTATION_ENHANCEMENTS_RECOMMENDATIONS.md`
3. `DOCUMENTATION_FIXES_COMPLETE.md`
4. `DOCUMENTATION_QUALITY_REVIEW.md`
5. `DOCUMENTATION_RELATIONSHIP_MAP.md`
6. `DOCUMENTATION_REORGANIZATION_COMPLETE.md`
7. `DOCUMENTATION_REVIEW.md`
8. `DOCUMENTATION_STYLE_GUIDE.md`
9. `DOCUMENTATION_UPGRADE_SUMMARY.md`
10. `MARKDOWN_FILE_MAINTENANCE_GUIDE.md`

#### Reports → `docs/archive/reports/`
1. `PROXMOX_CLUSTER_STORAGE_STATUS_REPORT.md`
2. `PROXMOX_SSL_CERTIFICATE_FIX.md`
3. `PROXMOX_SSL_FIX_VERIFIED.md`
4. `SSL_CERTIFICATE_ERROR_596_FIX.md`
5. `SSL_FIX_FOR_EACH_HOST.md`

#### Issue Tracking → `docs/archive/issues/`
1. `OUTSTANDING_ISSUES_RESOLUTION_GUIDE.md`
2. `OUTSTANDING_ISSUES_SUMMARY.md`

#### Solidity Contracts → `docs/bridge/contracts/`
1. `CCIPWETH9Bridge_flattened.sol`
2. `CCIPWETH9Bridge_standard_json.json`
3. `CCIPWETH9Bridge_standard_json_generated.json`

#### Metamask Configs → `docs/04-configuration/metamask/`
1. `METAMASK_NETWORK_CONFIG.json`
2. `METAMASK_TOKEN_LIST.json`
3. `METAMASK_TOKEN_LIST.tokenlist.json`

#### Scripts → `docs/scripts/`
1. `organize-standalone-files.sh`
2. `organize_files.py`

---
## ✅ Files Remaining in Root

Only **3 essential files** remain in `docs/` root:

1. ✅ `README.md` - Main documentation index
2. ✅ `MASTER_INDEX.md` - Master index
3. ✅ `SEARCH_GUIDE.md` - Search guide (useful in root)

---

## 📈 Before vs After

### Before Organization
- **Root Files**: 28 files (20 markdown + 8 non-markdown)
- **Organization**: Mixed (some organized, some not)
- **Clarity**: Moderate (many files in root)

### After Organization
- **Root Files**: 3 files (only essential)
- **Organization**: Excellent (all files in appropriate directories)
- **Clarity**: High (clear structure, easy navigation)

### Improvement
- ✅ **89.3% reduction** in root directory files (28 → 3)
- ✅ **100% success rate** (25 files moved, 0 errors)
- ✅ **Clear structure** (files grouped by purpose)
- ✅ **Professional organization** (follows best practices)

---
## 🎯 All Actions Completed

### ✅ Action 1: Create Meta Directory
- Created `docs/00-meta/` directory
- Moved 10 documentation meta files

### ✅ Action 2: Organize Reports
- Created `docs/archive/reports/` directory
- Moved 5 report files

### ✅ Action 3: Organize Issue Tracking
- Created `docs/archive/issues/` directory
- Moved 2 issue tracking files

### ✅ Action 4: Organize Code Files
- Created `docs/bridge/contracts/` directory
- Moved 3 Solidity contract files
- Created `docs/04-configuration/metamask/` directory
- Moved 3 Metamask config files

### ✅ Action 5: Organize Scripts
- Created `docs/scripts/` directory
- Moved 2 script files

### ✅ Action 6: Keep Essential Files
- Kept only 3 essential files in root:
  - `README.md`
  - `MASTER_INDEX.md`
  - `SEARCH_GUIDE.md`

---
## 📁 Final Structure

```
docs/
├── README.md                      # ✅ Main documentation index
├── MASTER_INDEX.md                # ✅ Master index
├── SEARCH_GUIDE.md                # ✅ Search guide
│
├── 00-meta/                       # ✅ Documentation meta files (10 files)
│   ├── CONTRIBUTOR_GUIDELINES.md
│   ├── DOCUMENTATION_STYLE_GUIDE.md
│   ├── DOCUMENTATION_REVIEW.md
│   └── ...
│
├── 01-getting-started/            # ✅ Getting started
├── 02-architecture/               # ✅ Architecture
├── 03-deployment/                 # ✅ Deployment
├── 04-configuration/              # ✅ Configuration
│   └── metamask/                  # ✅ Metamask configs (3 files)
├── 05-network/                    # ✅ Network
├── 06-besu/                       # ✅ Besu
├── 07-ccip/                       # ✅ CCIP
├── 08-monitoring/                 # ✅ Monitoring
├── 09-troubleshooting/            # ✅ Troubleshooting
├── 10-best-practices/             # ✅ Best practices
├── 11-references/                 # ✅ References
├── 12-quick-reference/            # ✅ Quick reference
│
├── archive/                       # ✅ Archives
│   ├── reports/                   # ✅ Reports (5 files)
│   └── issues/                    # ✅ Issue tracking (2 files)
│
├── bridge/                        # ✅ Bridge documentation
│   └── contracts/                 # ✅ Contracts (3 files)
│
└── scripts/                       # ✅ Documentation scripts (2 files)
```
---

## ✅ Benefits Achieved

1. ✅ **Cleaner Root**: Only 3 essential files remain (89.3% reduction)
2. ✅ **Better Organization**: Files grouped by purpose and type
3. ✅ **Easier Navigation**: Predictable file locations
4. ✅ **Professional Structure**: Follows documentation best practices
5. ✅ **Maintainability**: Easier to maintain and update
6. ✅ **Clear Categories**: Meta files, reports, issues, code, and scripts separated

---
## 📝 Script Used

The organization was performed using:

- **Script**: `scripts/organize-docs-directory.sh`
- **Log File**: `logs/DOCS_ORGANIZATION_20260106_033147.log`
- **Mode**: Execute (actual file moves)
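A quick way to confirm a run like this succeeded is to count what is left in the `docs/` root; after the moves it should be exactly 3 files. A small verification sketch (a throwaway directory stands in for an already-organized repository, since the real tree is not reproduced here):

```shell
# Verification sketch: after organizing, docs/ root should hold only
# README.md, MASTER_INDEX.md and SEARCH_GUIDE.md. The sandbox below
# stands in for a repository that has already been organized.
set -eu
cd "$(mktemp -d)"
mkdir -p docs/00-meta
touch docs/README.md docs/MASTER_INDEX.md docs/SEARCH_GUIDE.md

remaining=$(find docs -maxdepth 1 -type f | wc -l)
echo "files left in docs/ root: $remaining"
```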
---

## 🎊 Summary

**Status**: ✅ **100% COMPLETE**

All recommended actions from the documentation directory review have been successfully completed:

- ✅ Created meta directory structure
- ✅ Organized all standalone files
- ✅ Moved reports to archive
- ✅ Moved issue tracking to archive
- ✅ Organized code files
- ✅ Organized scripts
- ✅ Kept only essential files in root

The documentation directory is now **clean, organized, and professional**!

---

*Organization completed: 2026-01-06*
*Files moved: 25*
*Errors: 0*
*Status: ✅ COMPLETE*
reports/ROOT_DIRECTORY_REVIEW.md (new file, 268 lines)
# Root Directory Content Review

**Date**: 2026-01-06
**Reviewer**: Automated Review
**Status**: ✅ **COMPLETE**

---

## 📋 Executive Summary

The root directory contains **67 files** (excluding hidden files and directories). While the markdown cleanup was successful (only 2 markdown files remain), there are still many other files that could be better organized.

---

## ✅ Files Correctly Placed in Root

### Essential Configuration Files ✅
- `README.md` - Main project documentation ✅
- `PROJECT_STRUCTURE.md` - Project structure documentation ✅
- `package.json` - pnpm workspace configuration ✅
- `pnpm-workspace.yaml` - Workspace definition ✅
- `pnpm-lock.yaml` - Dependency lock file ✅
- `.gitignore` - Git ignore rules ✅
- `.gitmodules` - Git submodules configuration ✅
- `.env` - Environment variables (should be gitignored) ✅

### Essential Directories ✅
- `scripts/` - Utility scripts ✅
- `docs/` - Documentation ✅
- `reports/` - Reports ✅
- Submodules (mcp-proxmox, ProxmoxVE, etc.) ✅

---
## ⚠️ Files That Should Be Moved

### Log Files (7 files) ⚠️
**Current Location**: Root directory
**Recommended Location**: `logs/` directory

Files:
- `138.log` (183 bytes)
- `MARKDOWN_CLEANUP_EXECUTION.log` (25 KB)
- `MARKDOWN_CLEANUP_LOG_20260105_194645.log` (9.1 KB)
- `MARKDOWN_CLEANUP_LOG_20260106_014230.log` (33 KB)
- `ROOT_FILES_ORGANIZATION.log` (4.3 KB)
- `dependency_update_log_20260105_153458.log` (1.2 KB)
- `ip_conversion_log_20260105_143749.log` (unknown size)

**Action**: Move to `logs/` directory
### CSV Inventory Files (10 files) ⚠️
**Current Location**: Root directory
**Recommended Location**: `reports/inventory/` or `backups/inventory/`

Files:
- `container_inventory_20260105_142214.csv`
- `container_inventory_20260105_142314.csv`
- `container_inventory_20260105_142357.csv`
- `container_inventory_20260105_142455.csv`
- `container_inventory_20260105_142712.csv`
- `container_inventory_20260105_142753.csv`
- `container_inventory_20260105_142842.csv`
- `container_inventory_20260105_144309.csv`
- `container_inventory_20260105_153516.csv`
- `container_inventory_20260105_154200.csv`

**Action**: Move to `reports/inventory/` directory
### Shell Scripts (17 files) ⚠️
**Current Location**: Root directory
**Recommended Location**: `scripts/` directory

Files:
- `INSTALL_TUNNEL.sh`
- `QUICK_SSH_SETUP.sh`
- `analyze-all-domains.sh`
- `check-r630-04-commands.sh`
- `connect-to-r630-04-from-r630-03.sh`
- `diagnose-tunnels.sh`
- `fix-all-tunnels.sh`
- `fix-r630-04-pveproxy.sh`
- `fix-shared-tunnel-remote.sh`
- `fix-shared-tunnel.sh`
- `fix-tunnels-no-ssh.sh`
- `install-shared-tunnel-token.sh`
- `list_vms.sh`
- `setup_ssh_tunnel.sh`
- `stop_ssh_tunnel.sh`
- `test_connection.sh`
- `verify-tunnel-config.sh`

**Action**: Move to `scripts/` directory
### Python Scripts (2 files) ⚠️
**Current Location**: Root directory
**Recommended Location**: `scripts/` directory

Files:
- `list_vms.py`
- `list_vms_with_tunnels.py`

**Action**: Move to `scripts/` directory

### JavaScript Files (3 files) ⚠️
**Current Location**: Root directory
**Recommended Location**: `scripts/` directory

Files:
- `query-omada-devices.js`
- `test-omada-connection.js`
- `test-omada-direct.js`

**Action**: Move to `scripts/` directory

### HTML Files (2 files) ⚠️
**Current Location**: Root directory
**Recommended Location**: `examples/` or `docs/examples/`

Files:
- `add-rpc-network.html` (32 KB)
- `wallet-connect.html` (unknown size)

**Action**: Move to `examples/` directory

### JSON Data Files (3 files) ⚠️
**Current Location**: Root directory
**Recommended Location**: `reports/` directory

Files:
- `CONTENT_INCONSISTENCIES.json` - Content inconsistency report
- `MARKDOWN_ANALYSIS.json` - Markdown analysis data
- `REFERENCE_FIXES_REPORT.json` - Reference fixes log

**Action**: Move to `reports/` directory

### Text Reports (1 file) ⚠️
**Current Location**: Root directory
**Recommended Location**: `reports/` directory

Files:
- `CONVERSION_SUMMARY.txt` - DHCP to static IP conversion summary

**Action**: Move to `reports/` directory

---
## 📊 Statistics

### Current Root Directory
- **Total Files**: 67 files (excluding hidden)
- **Markdown Files**: 2 ✅ (excellent!)
- **Log Files**: 7 ⚠️
- **CSV Files**: 10 ⚠️
- **Shell Scripts**: 17 ⚠️
- **Python Scripts**: 2 ⚠️
- **JavaScript Files**: 3 ⚠️
- **HTML Files**: 2 ⚠️
- **JSON Reports**: 3 ⚠️
- **Text Reports**: 1 ⚠️

### Recommended Organization
- **Root Files**: 8 files (essential config only)
- **Scripts Directory**: +22 files
- **Reports Directory**: +14 files
- **Logs Directory**: +7 files
- **Examples Directory**: +2 files

---
## 🎯 Recommended Actions

### Priority 1: Move Log Files
```bash
mkdir -p logs
mv *.log logs/
```

### Priority 2: Move CSV Files
```bash
mkdir -p reports/inventory
mv container_inventory_*.csv reports/inventory/
```

### Priority 3: Move Scripts
```bash
mv *.sh scripts/
mv *.py scripts/
mv *.js scripts/
```

### Priority 4: Move HTML Examples
```bash
mkdir -p examples
mv *.html examples/
```

### Priority 5: Move JSON/Text Reports
```bash
mv CONTENT_INCONSISTENCIES.json reports/
mv MARKDOWN_ANALYSIS.json reports/
mv REFERENCE_FIXES_REPORT.json reports/
mv CONVERSION_SUMMARY.txt reports/
```
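The five priorities above can also be run as one pass. The following is a sketch under the assumption that it runs from the repository root (it is not the `scripts/organize-root-files.sh` mentioned elsewhere); the demo creates sample files in a temp directory so it can be tried safely:

```shell
#!/usr/bin/env sh
# One-pass sketch of Priorities 1-5. The `|| continue` guard makes
# unmatched globs (which stay literal in POSIX sh) a no-op instead of
# an error under `set -e`.
set -eu
cd "$(mktemp -d)"          # sandbox standing in for the repo root
touch 138.log container_inventory_20260105_142214.csv list_vms.py \
      wallet-connect.html CONVERSION_SUMMARY.txt

mkdir -p logs reports/inventory scripts examples
for f in *.log;                     do [ -e "$f" ] || continue; mv "$f" logs/; done
for f in container_inventory_*.csv; do [ -e "$f" ] || continue; mv "$f" reports/inventory/; done
for f in *.sh *.py *.js;            do [ -e "$f" ] || continue; mv "$f" scripts/; done
for f in *.html;                    do [ -e "$f" ] || continue; mv "$f" examples/; done
for f in CONTENT_INCONSISTENCIES.json MARKDOWN_ANALYSIS.json \
         REFERENCE_FIXES_REPORT.json CONVERSION_SUMMARY.txt; do
    [ -e "$f" ] || continue; mv "$f" reports/
done
```

Note the ordering: inventory CSVs are moved before the generic script globs, so each file is claimed by exactly one rule.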
---

## 📁 Proposed Final Root Structure

```
proxmox/
├── README.md                  # ✅ Main documentation
├── PROJECT_STRUCTURE.md       # ✅ Structure documentation
├── package.json               # ✅ Workspace config
├── pnpm-workspace.yaml        # ✅ Workspace definition
├── pnpm-lock.yaml             # ✅ Dependency lock
├── .gitignore                 # ✅ Git ignore
├── .gitmodules                # ✅ Submodules
├── .env                       # ✅ Environment (gitignored)
│
├── scripts/                   # ✅ All scripts
│   ├── *.sh
│   ├── *.py
│   └── *.js
│
├── docs/                      # ✅ Documentation
│
├── reports/                   # ✅ All reports
│   ├── inventory/             # CSV files
│   └── *.json                 # JSON reports
│
├── logs/                      # ✅ All logs
│   └── *.log
│
├── examples/                  # ✅ Example files
│   └── *.html
│
└── [submodules]               # ✅ Submodules
```
---

## ✅ Benefits of Reorganization

1. **Cleaner Root**: Only 8 essential files
2. **Better Organization**: Files grouped by purpose
3. **Easier Navigation**: Predictable file locations
4. **Better Maintenance**: Logs and reports in dedicated directories
5. **Professional Structure**: Follows best practices

---

## 📝 Summary

While the markdown cleanup was successful (only 2 markdown files in root), there are still **45 additional files** that could be better organized:

- ✅ **Markdown Files**: Perfect (2 files)
- ⚠️ **Log Files**: 7 files → should be in `logs/`
- ⚠️ **CSV Files**: 10 files → should be in `reports/inventory/`
- ⚠️ **Scripts**: 22 files → should be in `scripts/`
- ⚠️ **Reports**: 4 files → should be in `reports/`
- ⚠️ **Examples**: 2 files → should be in `examples/`

**Recommendation**: Create an automated script to organize these files similar to the markdown cleanup.

---

*Review completed: 2026-01-06*
reports/ROOT_FILES_ORGANIZATION_GUIDE.md (new file, 205 lines)
# Root Directory Files Organization Guide

**Date**: 2026-01-06
**Script**: `scripts/organize-root-files.sh`

---

## 📋 Overview

This script organizes files in the project root directory by moving them to appropriate subdirectories based on file type and purpose.

---

## 🎯 Purpose

The root directory should contain only essential configuration files. This script moves:
- Log files → `logs/`
- CSV inventory files → `reports/inventory/`
- Shell scripts → `scripts/`
- Python scripts → `scripts/`
- JavaScript scripts → `scripts/`
- HTML examples → `examples/`
- JSON/Text reports → `reports/`

---
## 🚀 Usage

### Dry Run (Preview Changes)
```bash
./scripts/organize-root-files.sh
# or
./scripts/organize-root-files.sh --dry-run
```

This will show what files would be moved without actually moving them.

### Execute (Actually Move Files)
```bash
./scripts/organize-root-files.sh --execute
```

This will actually move the files to their new locations.
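The dry-run/--execute split described above is commonly implemented with a small wrapper that either runs or prints each command. A sketch of that pattern (an assumption about the script's internals, not a copy of them):

```shell
#!/usr/bin/env sh
# Dry-run-by-default pattern: every file operation goes through run(),
# which prints the command in dry-run mode and executes it otherwise.
set -eu

if [ "${1:-}" = "--execute" ]; then MODE=execute; else MODE=dry-run; fi

run() {
    if [ "$MODE" = "execute" ]; then
        "$@"
    else
        echo "[dry-run] $*"
    fi
}

run mv example.log logs/     # with no --execute flag this only prints
```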
|
||||
|
||||
---

## 📁 File Organization Rules

### Log Files → `logs/`
- All `*.log` files
- Examples: `MARKDOWN_CLEANUP_EXECUTION.log`, `dependency_update_log_*.log`

### CSV Inventory Files → `reports/inventory/`
- All `container_inventory_*.csv` files
- Container inventory snapshots

### Shell Scripts → `scripts/`
- All `*.sh` files from root
- Examples: `INSTALL_TUNNEL.sh`, `fix-all-tunnels.sh`

### Python Scripts → `scripts/`
- All `*.py` files from root
- Examples: `list_vms.py`, `list_vms_with_tunnels.py`

### JavaScript Scripts → `scripts/`
- All `*.js` files from root (except `package.json`, `pnpm-lock.yaml`, `token-list.json`)
- Examples: `query-omada-devices.js`, `test-omada-connection.js`

### HTML Examples → `examples/`
- All `*.html` files
- Examples: `add-rpc-network.html`, `wallet-connect.html`

### JSON/Text Reports → `reports/`
- `CONTENT_INCONSISTENCIES.json`
- `MARKDOWN_ANALYSIS.json`
- `REFERENCE_FIXES_REPORT.json`
- `CONVERSION_SUMMARY.txt`

---
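The routing rules above boil down to a filename dispatch. A minimal sketch (hypothetical helper, not the actual script; the real script additionally exempts configuration files such as `package.json` and `token-list.json`):

```bash
#!/bin/bash
# Map a root-level filename to its destination directory per the rules above.
# More specific patterns must come before the generic ones.
classify() {
  case "$1" in
    container_inventory_*.csv) echo "reports/inventory/" ;;
    *.log)                     echo "logs/" ;;
    *.sh|*.py|*.js)            echo "scripts/" ;;
    *.html)                    echo "examples/" ;;
    *.json|*.txt)              echo "reports/" ;;
    *)                         echo "." ;;  # leave in root
  esac
}

classify "INSTALL_TUNNEL.sh"   # → scripts/
classify "wallet-connect.html" # → examples/
```

Because `case` takes the first matching branch, the `container_inventory_*.csv` rule is checked before the broader extension patterns.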

## ✅ Files That Stay in Root

These files should **NOT** be moved (they're essential configuration):

- `README.md` - Main project documentation
- `PROJECT_STRUCTURE.md` - Project structure documentation
- `package.json` - pnpm workspace configuration
- `pnpm-workspace.yaml` - Workspace definition
- `pnpm-lock.yaml` - Dependency lock file
- `.gitignore` - Git ignore rules
- `.gitmodules` - Git submodules configuration
- `claude_desktop_config.json.example` - Configuration template
- `token-list.json` - Token list data file

---

## 📊 Expected Results

### Before Organization
- Root directory: ~52 files
- Log files: 7+ files in root
- Scripts: 20+ files in root
- Reports: 4+ files in root

### After Organization
- Root directory: ~8 essential files
- Log files: All in `logs/`
- Scripts: All in `scripts/`
- Reports: All in `reports/`
- Examples: All in `examples/`

---

## 🔍 Safety Features

1. **Dry Run Mode**: Default mode shows what would happen without making changes
2. **Skip Existing**: Won't overwrite files that already exist in destination
3. **Error Handling**: Tracks errors and continues processing
4. **Logging**: Creates detailed log file of all operations
5. **Directory Creation**: Automatically creates destination directories

---

## 📝 Log File

The script creates a log file: `ROOT_FILES_ORGANIZATION_YYYYMMDD_HHMMSS.log`

This log contains:
- All file moves
- Warnings (skipped files)
- Errors (failed moves)
- Summary statistics

---

## ⚠️ Important Notes

1. **Backup First**: Consider backing up important files before running
2. **Review Dry Run**: Always review dry-run output before executing
3. **Git Status**: Check git status after running to see changes
4. **Update References**: Update any scripts/docs that reference moved files

---

## 🔄 After Running

After organizing files, you may need to:

1. **Update Script References**: If other scripts reference moved files, update paths
2. **Update Documentation**: Update docs that reference moved files
3. **Git Commit**: Commit the organization changes
4. **Verify**: Test that moved scripts still work from new locations

---

## 📊 Example Output

```
╔══════════════════════════════════════════════════════════╗
║            Root Directory Files Organization             ║
╚══════════════════════════════════════════════════════════╝

=== Moving Log Files to logs/ ===
Would move: 138.log → logs/138.log (log file)
Would move: MARKDOWN_CLEANUP_EXECUTION.log → logs/MARKDOWN_CLEANUP_EXECUTION.log (log file)
...

=== Moving Shell Scripts to scripts/ ===
Would move: INSTALL_TUNNEL.sh → scripts/INSTALL_TUNNEL.sh (shell script)
...

Summary:
Files Moved: 0
Files Skipped: 0
Errors: 0

⚠️ DRY RUN MODE - No files were actually moved

To execute the moves, run:
./scripts/organize-root-files.sh --execute
```

---

## ✅ Verification

After running, verify the organization:

```bash
# Check root directory
find . -maxdepth 1 -type f ! -name ".*" | wc -l

# Check logs directory
ls logs/ | wc -l

# Check scripts directory
ls scripts/*.sh scripts/*.py scripts/*.js 2>/dev/null | wc -l

# Check reports directory
ls reports/*.json reports/*.txt 2>/dev/null | wc -l
```

---

*Guide created: 2026-01-06*
229
reports/storage/MIGRATION_AND_MONITORING_STATUS.md
Normal file
@@ -0,0 +1,229 @@

# Storage Migration and Monitoring Setup - Status Report

**Date:** January 6, 2026
**Status:** ✅ In Progress

---

## Executive Summary

Successfully initiated migration of containers from the `thin1-r630-02` storage pool on r630-02 (97.78% full) to other available thin pools. A storage monitoring system has been set up with automated alerts.

---

## Migration Status

### Source Storage: thin1-r630-02
- **Initial Status:** 97.78% capacity (221GB used, 5GB available) 🔴 CRITICAL
- **Containers to Migrate:** 10 containers
- **Target Storage Pools:** thin2, thin3, thin5, thin6 (all empty, 226GB each)

### Migration Progress

| Container | VMID | Status | Target Storage |
|-----------|------|--------|----------------|
| proxmox-mail-gateway | 100 | ✅ Migrated | thin2 |
| proxmox-datacenter-manager | 101 | ✅ Migrated | thin2 |
| cloudflared | 102 | ⏳ Pending | - |
| omada | 103 | ⏳ Pending | - |
| gitea | 104 | ⏳ Pending | - |
| nginxproxymanager | 105 | ⏳ Pending | - |
| monitoring-1 | 130 | ⏳ Pending | - |
| blockscout-1 | 5000 | ⏳ Pending | - |
| firefly-1 | 6200 | ⏳ Pending | - |
| firefly-ali-1 | 6201 | ⏳ Pending | - |

**Progress:** 2/10 containers migrated (20%)

### Migration Script

**Location:** `scripts/migrate-thin1-r630-02.sh`

**Features:**
- ✅ Automatically identifies containers on source storage
- ✅ Skips already migrated containers
- ✅ Distributes containers across target pools (round-robin)
- ✅ Stops containers before migration, restarts after
- ✅ Verifies migration success
- ✅ Comprehensive logging

**Usage:**
```bash
# Interactive mode (with confirmation)
./scripts/migrate-thin1-r630-02.sh

# Non-interactive mode (auto-confirm)
./scripts/migrate-thin1-r630-02.sh --yes
```

**Logs:** `logs/migrations/migrate-thin1-r630-02_*.log`

---
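The round-robin distribution mentioned in the features can be sketched as follows (illustrative only, not the actual script; pool names and VMIDs are taken from the migration table above):

```bash
#!/bin/bash
# Round-robin: the i-th container goes to target pool (i mod N).
TARGETS=(thin2 thin3 thin5 thin6)

pick_target() {
  echo "${TARGETS[$(( $1 % ${#TARGETS[@]} ))]}"
}

i=0
for vmid in 102 103 104 105 130 5000 6200 6201; do
  echo "CT $vmid -> $(pick_target $i)"  # first line: CT 102 -> thin2
  i=$((i + 1))
done
```

With four empty 226GB pools and eight pending containers, this places exactly two containers on each pool.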

## Storage Monitoring Setup

### Monitoring Script

**Location:** `scripts/storage-monitor.sh`

**Features:**
- ✅ Monitors all Proxmox nodes (ml110, r630-01, r630-02, r630-03, r630-04)
- ✅ Checks storage usage (warning at 80%, critical at 90%)
- ✅ Checks volume group free space (warning at 10GB, critical at 5GB)
- ✅ Generates alerts for issues
- ✅ Logs all status checks
- ✅ Generates daily summaries

**Usage:**
```bash
# Run full monitoring check
./scripts/storage-monitor.sh check

# Show current storage status
./scripts/storage-monitor.sh status

# Show recent alerts
./scripts/storage-monitor.sh alerts
```

**Logs:**
- Alerts: `logs/storage-monitoring/storage_alerts_YYYYMMDD.log`
- Status: `logs/storage-monitoring/storage_status_YYYYMMDD.log`
- Summary: `logs/storage-monitoring/storage_summary_YYYYMMDD.txt`

### Automated Monitoring (Cron Job)

**Status:** ✅ Configured

**Schedule:** Every hour at :00

**Command:**
```bash
0 * * * * /home/intlc/projects/proxmox/scripts/storage-monitor.sh check >> /home/intlc/projects/proxmox/logs/storage-monitoring/cron.log 2>&1
```

**View Cron Jobs:**
```bash
crontab -l
```

**Setup Script:** `scripts/setup-storage-monitoring-cron.sh`

---

## Alert Thresholds

### Storage Usage Alerts
- **Warning:** ≥80% capacity
- **Critical:** ≥90% capacity

### Volume Group Free Space Alerts
- **Warning:** ≤10GB free
- **Critical:** ≤5GB free

### Current Alerts

Based on last monitoring check:
- 🔴 **CRITICAL:** r630-02:thin1-r630-02 is at 97.78% capacity (5GB available)
- ⚠️ **WARNING:** ml110:pve volume group has only 16GB free space

---
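The usage thresholds translate into a simple classification. A hedged sketch (integer percentages assumed; this is not the actual `storage-monitor.sh`):

```bash
#!/bin/bash
# Classify a storage usage percentage against the thresholds above:
# >=90 critical, >=80 warning, otherwise OK.
classify_usage() {
  if   [ "$1" -ge 90 ]; then echo "CRITICAL"
  elif [ "$1" -ge 80 ]; then echo "WARNING"
  else                       echo "OK"
  fi
}

classify_usage 97  # → CRITICAL (thin1-r630-02 is at ~98%)
classify_usage 30  # → OK
```

Checking the critical threshold first keeps the branches mutually exclusive without compound conditions.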

## Next Steps

### Immediate Actions

1. **Complete Migration** ⏳
   - Continue migrating remaining 8 containers
   - Run: `./scripts/migrate-thin1-r630-02.sh --yes`
   - Monitor progress in logs

2. **Verify Migration Success** ⏳
   - Check all containers are running after migration
   - Verify storage usage on thin1-r630-02 is reduced
   - Confirm containers are distributed across thin2-thin6

### Short-term (This Week)

1. **Monitor Storage** ✅
   - Automated monitoring is active
   - Review alerts daily
   - Check logs for any issues

2. **Verify Cron Job** ⏳
   - Wait for next hourly run
   - Check `logs/storage-monitoring/cron.log`
   - Verify alerts are being generated

### Long-term (This Month)

1. **Optimize Storage Distribution**
   - Balance containers across all thin pools
   - Plan for future growth
   - Consider expanding storage if needed

2. **Enhance Monitoring**
   - Add email/Slack notifications (optional)
   - Set up dashboard for storage metrics
   - Create automated reports

---

## Commands Reference

### Check Migration Status
```bash
# Check containers on source storage
ssh root@192.168.11.12 "pvesm list thin1-r630-02"

# Check storage usage
ssh root@192.168.11.12 "pvesm status | grep thin"
```

### Run Migration
```bash
# Complete migration (non-interactive)
cd /home/intlc/projects/proxmox
./scripts/migrate-thin1-r630-02.sh --yes
```

### Check Monitoring
```bash
# Run monitoring check
./scripts/storage-monitor.sh check

# View recent alerts
./scripts/storage-monitor.sh alerts

# View current status
./scripts/storage-monitor.sh status
```

### View Logs
```bash
# Migration logs
ls -lh logs/migrations/

# Monitoring logs
ls -lh logs/storage-monitoring/

# Cron logs
tail -f logs/storage-monitoring/cron.log
```

---

## Summary

✅ **Migration Script Created:** Fully functional with error handling and logging
✅ **Monitoring Script Created:** Comprehensive monitoring with alerts
✅ **Cron Job Configured:** Automated hourly monitoring
⏳ **Migration In Progress:** 2/10 containers migrated (20% complete)
⏳ **Remaining Work:** Complete migration of 8 remaining containers

**Critical Issue Status:** 🔴 Still critical - thin1-r630-02 at 97.78% (migration in progress)

---

**Last Updated:** January 6, 2026
**Next Review:** After migration completion
530
reports/storage/STORAGE_REVIEW_SUMMARY.md
Normal file
@@ -0,0 +1,530 @@

# Proxmox Storage Review - Complete Summary and Recommendations

**Date:** January 6, 2026
**Review Scope:** All Proxmox nodes and storage configurations
**Status:** ✅ Complete

---

## Executive Summary

This document provides a comprehensive review of all storage across all Proxmox nodes, with detailed recommendations for optimization, capacity planning, and performance improvements.

### Key Findings

- **Total Containers:** 51 containers across 3 accessible nodes
- **Critical Issues:** 1 storage pool at 97.78% capacity (r630-02 thin1-r630-02)
- **Storage Distribution:** Uneven - ml110 has 37 containers, others underutilized
- **Available Storage:** ~1.2TB total available across all nodes
- **Unreachable Nodes:** r630-03 and r630-04 (require investigation)

---

## Current Storage Status by Node

### ml110 (192.168.11.10) - Management Node

**Status:** ✅ Operational
**Containers:** 37
**CPU:** 6 cores @ 1.60GHz (older, slower)
**Memory:** 125GB (55GB used, 69GB available - 44% usage)

#### Storage Details

| Storage Name | Type | Status | Total | Used | Available | Usage % |
|--------------|------|--------|-------|------|-----------|---------|
| local | dir | ✅ Active | 94GB | 7.5GB | 85.5GB | 8.02% |
| local-lvm | lvmthin | ✅ Active | 813GB | 227GB | 586GB | 27.92% |
| thin1-thin6 | lvmthin | ❌ Disabled | - | - | - | N/A |

**Volume Group:** `pve` - 930.51GB total, **16GB free** ⚠️

**Thin Pool:** `data` - 794.30GB (27.92% used, 1.13% metadata)

**Physical Disks:**
- sda: 931.5GB
- sdb: 931.5GB

#### Issues Identified

1. ⚠️ **Low Volume Group Free Space:** Only 16GB free in VG (1.7%)
   - **Impact:** Cannot create new VMs/containers without expansion
   - **Recommendation:** Expand VG or migrate VMs to other nodes

2. ⚠️ **Multiple Disabled Storage Pools:** thin1-thin6 are disabled
   - **Impact:** Storage pools configured but not usable
   - **Recommendation:** Clean up unused storage definitions or enable if needed

3. ⚠️ **Overloaded Node:** 37 containers on slower CPU
   - **Impact:** Performance degradation
   - **Recommendation:** Migrate containers to r630-01/r630-02

#### Recommendations

**CRITICAL:**
1. **Expand Volume Group** - Add physical volumes or migrate VMs
2. **Monitor Storage Closely** - Only 16GB free space remaining

**HIGH PRIORITY:**
1. **Migrate Containers** - Move 15-20 containers to r630-01/r630-02
2. **Clean Up Storage Config** - Remove or enable disabled storage pools

**RECOMMENDED:**
1. **Storage Monitoring** - Set alerts at 80% usage
2. **Backup Strategy** - Implement regular backups before migration

---

### r630-01 (192.168.11.11) - Production Node

**Status:** ✅ Operational
**Containers:** 3
**CPU:** 32 cores @ 2.40GHz (excellent)
**Memory:** 503GB (7.5GB used, 496GB available - 1.5% usage)

#### Storage Details

| Storage Name | Type | Status | Total | Used | Available | Usage % |
|--------------|------|--------|-------|------|-----------|---------|
| local | dir | ✅ Active | 536GB | 0.1GB | 536GB | 0.02% |
| local-lvm | lvmthin | ✅ Active | 200GB | 5.8GB | 194GB | 2.92% |
| thin1 | lvmthin | ✅ Active | 208GB | 0GB | 208GB | 0.00% |
| thin2-thin6 | lvmthin | ❌ Disabled | - | - | - | N/A |

**Volume Group:** `pve` - 465.77GB total, **57GB free** ✅

**Thin Pools:**
- `data`: 200GB (2.92% used, 11.42% metadata)
- `thin1`: 208GB (0.00% used, 10.43% metadata)

**Physical Disks:**
- sda, sdb: 558.9GB each (boot drives)
- sdc-sdh: 232.9GB each (6 data drives)

#### Issues Identified

1. ⚠️ **Disabled Storage Pools:** thin2-thin6 are disabled
   - **Impact:** Additional storage not available
   - **Recommendation:** Enable if needed or remove from config

2. ✅ **Excellent Capacity:** 57GB free in VG, 408GB available storage
   - **Status:** Ready for VM deployment

#### Recommendations

**HIGH PRIORITY:**
1. **Enable Additional Storage** - Enable thin2-thin6 if needed (or remove from config)
2. **Migrate VMs from ml110** - This node is ready for 15-20 containers

**RECOMMENDED:**
1. **Storage Optimization** - Consider using thin1 for new deployments
2. **Performance Tuning** - Optimize for high-performance workloads

---

### r630-02 (192.168.11.12) - Production Node

**Status:** ✅ Operational
**Containers:** 11
**CPU:** 56 cores @ 2.00GHz (excellent - best CPU)
**Memory:** 251GB (16GB used, 235GB available - 6.4% usage)

#### Storage Details

| Storage Name | Type | Status | Total | Used | Available | Usage % |
|--------------|------|--------|-------|------|-----------|---------|
| local | dir | ✅ Active | 220GB | 4.0GB | 216GB | 1.81% |
| local-lvm | lvmthin | ❌ Disabled | - | - | - | N/A |
| thin1 | lvmthin | ⚠️ Inactive | - | - | - | 0.00% |
| thin1-r630-02 | lvmthin | 🔴 **CRITICAL** | 226GB | 221GB | **5.0GB** | **97.78%** |
| thin2 | lvmthin | ✅ Active | 226GB | 0GB | 226GB | 0.00% |
| thin3 | lvmthin | ✅ Active | 226GB | 0GB | 226GB | 0.00% |
| thin4 | lvmthin | ✅ Active | 226GB | 28.7GB | 197GB | 12.69% |
| thin5 | lvmthin | ✅ Active | 226GB | 0GB | 226GB | 0.00% |
| thin6 | lvmthin | ✅ Active | 226GB | 0GB | 226GB | 0.00% |

**Volume Groups:**
- thin1-thin6: Each 230.87GB with 0.12GB free

**Thin Pools:**
- `thin1-r630-02`: 226.13GB (**97.78% used**, 3.84% metadata) 🔴 **CRITICAL**
- `thin4`: 226.13GB (12.69% used, 1.15% metadata)
- `thin2, thin3, thin5, thin6`: All empty (0.00% used)

**Physical Disks:**
- sda-sdh: 232.9GB each (8 data drives)

#### Issues Identified

1. 🔴 **CRITICAL: Storage Nearly Full** - thin1-r630-02 at 97.78% capacity
   - **Impact:** Cannot create new VMs/containers on this storage
   - **Action Required:** IMMEDIATE - Migrate VMs or expand storage
   - **Available:** Only 5GB free

2. ⚠️ **Inactive Storage:** thin1 is inactive
   - **Impact:** Storage pool not usable
   - **Recommendation:** Activate or remove from config

3. ⚠️ **Disabled Storage:** local-lvm is disabled
   - **Impact:** Standard storage name not available
   - **Recommendation:** Enable if volume group exists

4. ✅ **Excellent Capacity Available:** thin2, thin3, thin5, thin6 are empty (904GB total)

#### Recommendations

**CRITICAL (IMMEDIATE ACTION REQUIRED):**
1. **Migrate VMs from thin1-r630-02** - Move VMs to thin2, thin3, thin5, or thin6
2. **Expand thin1-r630-02** - If migration not possible, expand the pool
3. **Monitor Closely** - Set alerts for this storage pool

**HIGH PRIORITY:**
1. **Activate thin1** - Enable if needed or remove from config
2. **Enable local-lvm** - If volume group exists, enable for standard naming
3. **Balance Storage Usage** - Distribute VMs across thin2-thin6

**RECOMMENDED:**
1. **Storage Monitoring** - Set up automated alerts
2. **Migration Plan** - Document VM migration procedures

---

### r630-03 (192.168.11.13) - Unknown Status

**Status:** ❌ Not Reachable
**Action Required:** Investigate connectivity issues

#### Recommendations

1. **Check Network Connectivity** - Verify network connection
2. **Check Power Status** - Verify node is powered on
3. **Check SSH Access** - Verify SSH service is running
4. **Review Storage** - Once accessible, perform full storage review

---

### r630-04 (192.168.11.14) - Unknown Status

**Status:** ❌ Not Reachable
**Action Required:** Investigate connectivity issues

#### Recommendations

1. **Check Network Connectivity** - Verify network connection
2. **Check Power Status** - Verify node is powered on
3. **Check SSH Access** - Verify SSH service is running
4. **Review Storage** - Once accessible, perform full storage review

---

## Critical Issues Summary

### 🔴 CRITICAL - Immediate Action Required

1. **r630-02 thin1-r630-02 Storage at 97.78% Capacity**
   - **Impact:** Cannot create new VMs/containers
   - **Action:** Migrate VMs to other storage pools (thin2-thin6 available)
   - **Timeline:** IMMEDIATE

2. **ml110 Volume Group Low on Space (16GB free)**
   - **Impact:** Limited capacity for new VMs
   - **Action:** Migrate VMs to r630-01/r630-02 or expand storage
   - **Timeline:** Within 1 week

### ⚠️ HIGH PRIORITY

1. **Uneven Workload Distribution**
   - ml110: 37 containers (overloaded)
   - r630-01: 3 containers (underutilized)
   - r630-02: 11 containers (underutilized)
   - **Action:** Migrate 15-20 containers from ml110 to r630-01/r630-02

2. **Disabled/Inactive Storage Pools**
   - Multiple storage pools disabled across nodes
   - **Action:** Enable if needed or clean up storage.cfg

3. **Unreachable Nodes**
   - r630-03 and r630-04 not accessible
   - **Action:** Investigate and restore connectivity

---

## Storage Capacity Analysis

### Total Storage Capacity

| Node | Total Storage | Used Storage | Available Storage | Usage % |
|------|--------------|--------------|------------------|---------|
| ml110 | 907GB | 234.5GB | 671.5GB | 25.9% |
| r630-01 | 744GB | 5.9GB | 738GB | 0.8% |
| r630-02 | 1,358GB | 253.7GB | 1,104GB | 18.7% |
| **Total** | **3,009GB** | **494GB** | **2,515GB** | **16.4%** |

### Storage Distribution

- **ml110:** 27.92% of local-lvm used (good, but VG low on space)
- **r630-01:** 2.92% of local-lvm used (excellent - ready for deployment)
- **r630-02:** 97.78% of thin1-r630-02 used (CRITICAL), but other pools empty

### Capacity Planning

**Current Capacity:** ~2.5TB available
**Projected Growth:** Based on current usage patterns
**Recommendation:** Plan for expansion when total usage reaches 70%

---
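As a quick sanity check, the totals row can be reproduced from the per-node figures (numbers copied from the table above):

```bash
# Recompute the totals row from the per-node Total/Used columns (GB)
awk 'BEGIN {
  total = 907 + 744 + 1358        # ml110 + r630-01 + r630-02
  used  = 234.5 + 5.9 + 253.7
  printf "total=%dGB used=%.1fGB avail=%.1fGB usage=%.1f%%\n",
         total, used, total - used, used / total * 100
}'
# → total=3009GB used=494.1GB avail=2514.9GB usage=16.4%
```

This agrees with the table's 494GB used, 2,515GB available, and 16.4% overall usage (rounding aside).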

## Detailed Recommendations

### 1. Immediate Actions (This Week)

#### r630-02 Storage Crisis

```bash
# 1. List volumes on thin1-r630-02
ssh root@192.168.11.12 "pvesm list thin1-r630-02"

# 2. Move container volumes to thin2 (or thin3, thin5, thin6).
# The containers stay on r630-02, so move the volume rather than migrating:
pct move-volume <VMID> rootfs thin2

# 3. Verify migration
pvesm status
```

#### ml110 Storage Expansion

**Option A: Migrate VMs (Recommended)**
```bash
# Migrate containers to r630-01
pct migrate <VMID> r630-01 --target-storage thin1

# Migrate containers to r630-02
pct migrate <VMID> r630-02 --target-storage thin2
```

**Option B: Expand Volume Group**
```bash
# Add physical volume (if disks available)
pvcreate /dev/sdX
vgextend pve /dev/sdX
lvextend -l +100%FREE pve/data
```
### 2. Storage Optimization (Next 2 Weeks)

#### Enable Disabled Storage Pools

**ml110:**
```bash
# Review and clean up disabled storage
ssh root@192.168.11.10
pvesm status
# Remove unused storage definitions or enable if needed
```

**r630-01:**
```bash
# Enable thin2-thin6 if volume groups exist
ssh root@192.168.11.11
# Check if VGs exist
vgs
# Enable storage pools if VGs exist
for i in thin2 thin3 thin4 thin5 thin6; do
  pvesm set $i --disable 0 2>/dev/null || echo "$i not available"
done
```

**r630-02:**
```bash
# Activate thin1 if needed
ssh root@192.168.11.12
pvesm set thin1 --disable 0
```
#### Balance Workload Distribution

**Migration Plan:**
- **ml110 → r630-01:** Migrate 10-12 medium workload containers
- **ml110 → r630-02:** Migrate 10-12 heavy workload containers
- **r630-02 thin1-r630-02 → thin2-thin6:** Migrate VMs to balance storage

**Target Distribution:**
- ml110: 15-17 containers (management/lightweight)
- r630-01: 15-17 containers (medium workload)
- r630-02: 15-17 containers (heavy workload)
### 3. Long-term Improvements (Next Month)

#### Storage Monitoring

**Set Up Automated Alerts:**
```bash
# Create monitoring script
cat > /usr/local/bin/storage-alert.sh << 'EOF'
#!/bin/bash
# Check storage usage on each node and alert on pools above 80%
for node in ml110 r630-01 r630-02; do
  ssh "root@$node" "pvesm status" | \
    awk -v node="$node" 'NR > 1 && $NF+0 > 80 {print "ALERT: " $1 " on " node " at " $NF}'
done
EOF
chmod +x /usr/local/bin/storage-alert.sh

# Add to crontab (check every hour):
# 0 * * * * /usr/local/bin/storage-alert.sh
```
#### Backup Strategy

1. **Implement Regular Backups**
   - Daily backups for critical VMs
   - Weekly full backups
   - Monthly archive backups

2. **Backup Storage**
   - Use separate storage for backups
   - Consider NFS for shared backup storage
   - Implement backup rotation (keep 30 days)

#### Performance Optimization

1. **Storage Performance Tuning**
   - Use LVM thin for all VM disks
   - Monitor I/O performance
   - Optimize thin pool metadata size

2. **Network Storage Consideration**
   - Evaluate NFS for shared storage
   - Consider Ceph for high availability
   - Plan for shared storage migration

---

## Storage Type Recommendations

### By Use Case

| Use Case | Recommended Storage | Current Status | Action |
|----------|-------------------|----------------|--------|
| VM/Container Disks | LVM Thin (lvmthin) | ✅ Used | Continue using |
| ISO Images | Directory (dir) | ✅ Used | Continue using |
| Container Templates | Directory (dir) | ✅ Used | Continue using |
| Backups | Directory or NFS | ⚠️ Not configured | Implement |
| High-Performance VMs | LVM Thin or ZFS | ✅ LVM Thin | Consider ZFS for future |

### Storage Performance Best Practices

1. **Use LVM Thin for VM Disks** ✅ Currently implemented
2. **Monitor Thin Pool Metadata** ⚠️ Set up monitoring
3. **Balance Storage Across Nodes** ⚠️ Needs improvement
4. **Implement Backup Storage** ❌ Not implemented

---

## Security Recommendations

1. **Storage Access Control**
   - Review `/etc/pve/storage.cfg` node restrictions
   - Ensure proper node assignments
   - Verify storage permissions

2. **Backup Security**
   - Encrypt backups containing sensitive data
   - Store backups off-site
   - Test backup restoration regularly

---

## Monitoring Recommendations

### Storage Monitoring Metrics

1. **Storage Usage** - Alert at 80%
2. **Thin Pool Metadata** - Alert at 80%
3. **Volume Group Free Space** - Alert at 10%
4. **Storage I/O Performance** - Monitor latency

### Automated Alerts

Set up alerts for:
- Storage usage >80%
- Thin pool metadata >80%
- Volume group free space <10%
- Storage errors or failures

---

## Migration Recommendations

### Workload Distribution Strategy

**Current State:**
- ml110: 37 containers (overloaded, slower CPU)
- r630-01: 3 containers (underutilized, excellent CPU)
- r630-02: 11 containers (underutilized, best CPU)

**Target State:**
- ml110: 15-17 containers (management/lightweight)
- r630-01: 15-17 containers (medium workload)
- r630-02: 15-17 containers (heavy workload)

**Benefits:**
- Better performance (ml110 CPU is slower)
- Better resource utilization
- Improved redundancy
- Better storage distribution

### Migration Priority

1. **CRITICAL:** Migrate VMs from r630-02 thin1-r630-02 (97.78% full)
2. **HIGH:** Migrate 15-20 containers from ml110 to r630-01/r630-02
3. **MEDIUM:** Balance storage usage across all thin pools on r630-02

---

## Action Plan Summary

### Week 1 (Critical)

- [ ] Migrate VMs from r630-02 thin1-r630-02 to thin2-thin6
- [ ] Set up storage monitoring alerts
- [ ] Investigate r630-03 and r630-04 connectivity

### Week 2-3 (High Priority)

- [ ] Migrate 15-20 containers from ml110 to r630-01/r630-02
- [ ] Enable/clean up disabled storage pools
- [ ] Balance storage usage across nodes

### Month 1 (Recommended)

- [ ] Implement backup strategy
- [ ] Set up comprehensive storage monitoring
- [ ] Optimize storage performance
- [ ] Document storage procedures

---

## Conclusion

This comprehensive storage review identifies:

✅ **Current Status:** Storage well configured with LVM thin pools
⚠️ **Critical Issues:** 1 storage pool at 97.78% capacity
✅ **Capacity Available:** ~2.5TB total available storage
⚠️ **Distribution:** Uneven workload distribution

**Immediate Actions Required:**
1. Migrate VMs from r630-02 thin1-r630-02 (CRITICAL)
2. Migrate containers from ml110 to balance workload
3. Set up storage monitoring and alerts

**Long-term Goals:**
1. Implement backup strategy
2. Optimize storage performance
3. Plan for storage expansion
4. Consider shared storage for HA

---

**Report Generated:** January 6, 2026
**Next Review:** February 6, 2026 (Monthly)
383
reports/storage/storage_review_20260106_023825.md
Normal file
@@ -0,0 +1,383 @@

# Proxmox Storage Comprehensive Review

**Date:** Tue Jan 6 02:38:37 PST 2026
**Report Generated:** 2026-01-06 10:38:37 UTC
**Review Scope:** All Proxmox nodes and storage configurations

---

## Executive Summary

This report provides a comprehensive review of all storage configurations across all Proxmox nodes, including:
- Current storage status and usage
- Storage type analysis
- Performance recommendations
- Capacity planning
- Optimization suggestions

---

## Node Overview

### ml110 (192.168.11.10)
|
||||
|
||||
**Status:** ✅ Reachable
|
||||
|
||||
**System Resources:**
|
||||
- CPU Cores: 0
|
||||
- Memory: Unknown
|
||||
- VMs: 0
|
||||
- Containers: 0
|
||||
|
||||
**Storage Status:**
|
||||
```
|
||||
No storage data available
|
||||
```
|
||||
|
||||
**Volume Groups:**
|
||||
```
|
||||
No volume groups found
|
||||
```
|
||||
|
||||
**Thin Pools:**
|
||||
```
|
||||
No thin pools found
|
||||
```
|
||||
|
||||
**Physical Disks:**
|
||||
```
|
||||
No disk information available
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
|
||||
### r630-04 (192.168.11.14)
|
||||
|
||||
**Status:** ❌ Not Reachable
|
||||
|
||||
**System Resources:**
|
||||
- CPU Cores: Unknown
|
||||
- Memory: Unknown
|
||||
- VMs: 0
|
||||
- Containers: 0
|
||||
|
||||
**Storage Status:**
|
||||
```
|
||||
No storage data available
|
||||
```
|
||||
|
||||
**Volume Groups:**
|
||||
```
|
||||
No volume groups found
|
||||
```
|
||||
|
||||
**Thin Pools:**
|
||||
```
|
||||
No thin pools found
|
||||
```
|
||||
|
||||
**Physical Disks:**
|
||||
```
|
||||
No disk information available
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
|
||||
### r630-01 (192.168.11.11)
|
||||
|
||||
**Status:** ✅ Reachable
|
||||
|
||||
**System Resources:**
|
||||
- CPU Cores: 0
|
||||
- Memory: Unknown
|
||||
- VMs: 0
|
||||
- Containers: 0
|
||||
|
||||
**Storage Status:**
|
||||
```
|
||||
No storage data available
|
||||
```
|
||||
|
||||
**Volume Groups:**
|
||||
```
|
||||
No volume groups found
|
||||
```
|
||||
|
||||
**Thin Pools:**
|
||||
```
|
||||
No thin pools found
|
||||
```
|
||||
|
||||
**Physical Disks:**
|
||||
```
|
||||
No disk information available
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
|
||||
### r630-03 (192.168.11.13)
|
||||
|
||||
**Status:** ❌ Not Reachable
|
||||
|
||||
**System Resources:**
|
||||
- CPU Cores: Unknown
|
||||
- Memory: Unknown
|
||||
- VMs: 0
|
||||
- Containers: 0
|
||||
|
||||
**Storage Status:**
|
||||
```
|
||||
No storage data available
|
||||
```
|
||||
|
||||
**Volume Groups:**
|
||||
```
|
||||
No volume groups found
|
||||
```
|
||||
|
||||
**Thin Pools:**
|
||||
```
|
||||
No thin pools found
|
||||
```
|
||||
|
||||
**Physical Disks:**
|
||||
```
|
||||
No disk information available
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
|
||||
### r630-02 (192.168.11.12)
|
||||
|
||||
**Status:** ✅ Reachable
|
||||
|
||||
**System Resources:**
|
||||
- CPU Cores: 0
|
||||
- Memory: Unknown
|
||||
- VMs: 0
|
||||
- Containers: 0
|
||||
|
||||
**Storage Status:**
|
||||
```
|
||||
No storage data available
|
||||
```
|
||||
|
||||
**Volume Groups:**
|
||||
```
|
||||
No volume groups found
|
||||
```
|
||||
|
||||
**Thin Pools:**
|
||||
```
|
||||
No thin pools found
|
||||
```
|
||||
|
||||
**Physical Disks:**
|
||||
```
|
||||
No disk information available
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
|
||||
## Storage Analysis and Recommendations
|
||||
|
||||
### 1. Storage Type Analysis
|
||||
|
||||
#### Local Storage (Directory-based)
|
||||
- **Purpose:** ISO images, container templates, backups
|
||||
- **Performance:** Good for read-heavy workloads
|
||||
- **Recommendation:** Use for templates and ISOs, not for VM disks
|
||||
|
||||
#### LVM Thin Storage
|
||||
- **Purpose:** VM/container disk images
|
||||
- **Performance:** Excellent with thin provisioning
|
||||
- **Benefits:** Space efficiency, snapshots, cloning
|
||||
- **Recommendation:** ✅ **Preferred for VM/container disks**
|
||||
|
||||
#### ZFS Storage
|
||||
- **Purpose:** High-performance VM storage
|
||||
- **Performance:** Excellent with compression and deduplication
|
||||
- **Benefits:** Data integrity, snapshots, clones
|
||||
- **Recommendation:** Consider for high-performance workloads
|
||||
|
||||
### 2. Critical Issues and Fixes
|
||||
|
||||
|
||||
### 3. Performance Optimization Recommendations
|
||||
|
||||
#### Storage Performance Best Practices
|
||||
|
||||
1. **Use LVM Thin for VM Disks**
|
||||
- Better performance than directory storage
|
||||
- Thin provisioning saves space
|
||||
- Enables snapshots and cloning
|
||||
|
||||
2. **Monitor Thin Pool Metadata Usage**
|
||||
- Thin pools require metadata space
|
||||
- Monitor metadata_percent in lvs output
|
||||
- Expand metadata if >80% used
|
||||
|
||||
3. **Storage Distribution**
|
||||
- Distribute VMs across multiple nodes
|
||||
- Balance storage usage across nodes
|
||||
- Avoid overloading single node
|
||||
|
||||
4. **Backup Storage Strategy**
|
||||
- Use separate storage for backups
|
||||
- Consider NFS or Ceph for shared backups
|
||||
- Implement backup rotation policies
|
||||
|
||||
### 4. Capacity Planning
|
||||
|
||||
#### Current Storage Distribution
|
||||
|
||||
|
||||
**Recommendations:**
|
||||
- Monitor storage growth trends
|
||||
- Plan for 20-30% headroom
|
||||
- Set alerts at 80% usage
|
||||
- Consider storage expansion before reaching capacity
|
||||
|
||||
### 5. Storage Type Recommendations by Use Case
|
||||
|
||||
| Use Case | Recommended Storage Type | Reason |
|
||||
|----------|-------------------------|--------|
|
||||
| VM/Container Disks | LVM Thin (lvmthin) | Best performance, thin provisioning |
|
||||
| ISO Images | Directory (dir) | Read-only, no performance impact |
|
||||
| Container Templates | Directory (dir) | Templates are read-only |
|
||||
| Backups | Directory or NFS | Separate from production storage |
|
||||
| High-Performance VMs | ZFS or LVM Thin | Best I/O performance |
|
||||
| Development/Test | LVM Thin | Space efficient with cloning |
|
||||
|
||||
### 6. Security Recommendations
|
||||
|
||||
1. **Storage Access Control**
|
||||
- Review storage.cfg node restrictions
|
||||
- Ensure proper node assignments
|
||||
- Verify storage permissions
|
||||
|
||||
2. **Backup Security**
|
||||
- Encrypt backups if containing sensitive data
|
||||
- Store backups off-site
|
||||
- Test backup restoration regularly
|
||||
|
||||
### 7. Monitoring Recommendations
|
||||
|
||||
1. **Set Up Storage Monitoring**
|
||||
- Monitor storage usage (>80% alert)
|
||||
- Monitor thin pool metadata usage
|
||||
- Track storage growth trends
|
||||
|
||||
2. **Performance Monitoring**
|
||||
- Monitor I/O latency
|
||||
- Track storage throughput
|
||||
- Identify bottlenecks
|
||||
|
||||
3. **Automated Alerts**
|
||||
- Storage usage >80%
|
||||
- Thin pool metadata >80%
|
||||
- Storage errors or failures
|
||||
|
||||
### 8. Migration Recommendations
|
||||
|
||||
#### Workload Distribution
|
||||
|
||||
**Current State:**
|
||||
- ml110: Hosting all VMs (overloaded)
|
||||
- r630-01/r630-02: Underutilized
|
||||
|
||||
**Recommended Distribution:**
|
||||
- **ml110:** Keep management/lightweight VMs (10-15 VMs)
|
||||
- **r630-01:** Migrate medium workload VMs (10-15 VMs)
|
||||
- **r630-02:** Migrate heavy workload VMs (10-15 VMs)
|
||||
|
||||
**Benefits:**
|
||||
- Better performance (ml110 CPU is slower)
|
||||
- Better resource utilization
|
||||
- Improved redundancy
|
||||
- Better storage distribution
|
||||
|
||||
### 9. Immediate Action Items
|
||||
|
||||
#### Critical (Do First)
|
||||
1. ✅ Review storage status on all nodes
|
||||
2. ⚠️ Enable disabled storage pools
|
||||
3. ⚠️ Verify storage node restrictions in storage.cfg
|
||||
4. ⚠️ Check for storage errors or warnings
|
||||
|
||||
#### High Priority
|
||||
1. ⚠️ Configure LVM thin storage where missing
|
||||
2. ⚠️ Set up storage monitoring and alerts
|
||||
3. ⚠️ Plan VM migration for better distribution
|
||||
4. ⚠️ Review and optimize storage.cfg
|
||||
|
||||
#### Recommended
|
||||
1. ⚠️ Implement backup storage strategy
|
||||
2. ⚠️ Consider shared storage (NFS/Ceph) for HA
|
||||
3. ⚠️ Optimize storage performance settings
|
||||
4. ⚠️ Document storage procedures
|
||||
|
||||
---
|
||||
|
||||
## Detailed Storage Commands Reference
|
||||
|
||||
### Check Storage Status
|
||||
```bash
|
||||
# On any Proxmox node
|
||||
pvesm status
|
||||
pvesm list <storage-name>
|
||||
```
|
||||
|
||||
### Enable Disabled Storage
|
||||
```bash
|
||||
pvesm set <storage-name> --disable 0
|
||||
```
|
||||
|
||||
### Check LVM Configuration
|
||||
```bash
|
||||
vgs # List volume groups
|
||||
lvs # List logical volumes
|
||||
lvs -o +data_percent,metadata_percent # Check thin pool usage
|
||||
```
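The `data_percent` column from `lvs` can feed a simple alerting check. A minimal sketch, assuming input in the shape of `lvs --noheadings -o lv_name,data_percent` (the figures piped in below are sample values, not live data):

```shell
# Flag any thin pool or volume whose data usage meets or exceeds a threshold.
# Input: one "name percent" pair per line, as produced by
#   lvs --noheadings -o lv_name,data_percent
check_thin_usage() {
  threshold="$1"
  while read -r name pct; do
    pct="${pct%\%}"                      # drop a trailing '%' if present
    if [ "${pct%.*}" -ge "$threshold" ]; then
      echo "ALERT: $name at ${pct}% (threshold ${threshold}%)"
    fi
  done
}

# Sample figures, not live data:
printf 'data 27.92\nthin1 97.78\n' | check_thin_usage 80
```

On a node you would pipe the real `lvs` output into `check_thin_usage 80` from a cron job and mail or log any alert lines.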
### Check Disk Usage
```bash
df -h    # Filesystem usage
lsblk    # Block devices
```

### Storage Performance Testing
```bash
# Test storage I/O
fio --name=test --ioengine=libaio --iodepth=16 --rw=randwrite --bs=4k --size=1G --runtime=60
```

---

## Conclusion

This comprehensive storage review provides:

- ✅ Current storage status across all nodes
- ✅ Detailed analysis of storage configurations
- ✅ Performance optimization recommendations
- ✅ Capacity planning guidance
- ✅ Security and monitoring recommendations
- ✅ Migration and distribution strategies

**Next Steps:**
1. Review this report
2. Address critical issues first
3. Implement high-priority recommendations
4. Plan for long-term optimizations

---

**Report Generated:** Tue Jan 6 02:38:42 PST 2026
**Report File:** /home/intlc/projects/proxmox/reports/storage/storage_review_20260106_023825.md
557
reports/storage/storage_review_20260106_024632.md
Normal file
@@ -0,0 +1,557 @@
# Proxmox Storage Comprehensive Review

**Date:** Tue Jan 6 02:46:57 PST 2026
**Report Generated:** 2026-01-06 10:46:57 UTC
**Review Scope:** All Proxmox nodes and storage configurations

---

## Executive Summary

This report provides a comprehensive review of all storage configurations across all Proxmox nodes, including:

- Current storage status and usage
- Storage type analysis
- Performance recommendations
- Capacity planning
- Optimization suggestions

---

## Node Overview

### ml110 (192.168.11.10)

**Status:** ✅ Reachable

**System Resources:**
- CPU Cores: 6
- Memory: 125Gi total / 55Gi used / 69Gi free
- VMs: 0
- Containers: 37

**Storage Status:**
```
Name           Type     Status    Total (KiB)  Used (KiB)  Available (KiB)  %
local          dir      active    98497780     7902744     85545488         8.02%
local-lvm      lvmthin  active    832888832    232542561   600346270        27.92%
thin1          lvmthin  disabled  0            0           0                N/A
thin1-r630-02  lvmthin  disabled  0            0           0                N/A
thin2          lvmthin  disabled  0            0           0                N/A
thin3          lvmthin  disabled  0            0           0                N/A
thin4          lvmthin  disabled  0            0           0                N/A
thin5          lvmthin  disabled  0            0           0                N/A
thin6          lvmthin  disabled  0            0           0                N/A
```

**Volume Groups:**
```
pve 930.51g 16.00g
```

**Thin Pools:**
```
data              pve 794.30g  27.92 1.13
vm-1000-disk-0    pve 100.00g  8.26  data
vm-1001-disk-0    pve 100.00g  7.67  data
vm-1002-disk-0    pve 100.00g  7.44  data
vm-1003-disk-0    pve 100.00g  7.47  data
vm-1004-disk-0    pve 100.00g  7.26  data
vm-10100-disk-0   pve 200.00g  2.82  data
vm-10101-disk-0   pve 200.00g  2.37  data
vm-10120-disk-0   pve 50.00g   5.11  data
vm-10130-disk-0   pve 50.00g   8.08  data
vm-10150-disk-0   pve 100.00g  7.05  data
vm-10151-disk-0   pve 100.00g  7.30  data
vm-1500-disk-0    pve 100.00g  6.61  data
vm-1501-disk-0    pve 100.00g  6.59  data
vm-1502-disk-0    pve 100.00g  6.77  data
vm-1503-disk-0    pve 100.00g  6.65  data
vm-1504-disk-0    pve 100.00g  0.51  data
vm-1504-disk-1    pve 100.00g  2.67  data
vm-2400-disk-0    pve 200.00g  4.40  data
vm-2401-disk-0    pve 200.00g  4.29  data
vm-2402-disk-0    pve 200.00g  4.29  data
vm-2500-disk-0    pve 200.00g  5.05  data
vm-2501-disk-0    pve 200.00g  5.12  data
vm-2502-disk-0    pve 200.00g  4.78  data
vm-2503-disk-0    pve 200.00g  0.51  data
vm-2503-disk-1    pve 200.00g  3.97  data
vm-2504-disk-0    pve 200.00g  3.94  data
vm-2505-disk-0    pve 50.00g   10.98 data
vm-2506-disk-0    pve 50.00g   11.01 data
vm-2507-disk-0    pve 50.00g   10.87 data
vm-2508-disk-0    pve 50.00g   10.33 data
vm-3000-disk-0    pve 20.00g   30.51 data
vm-3001-disk-0    pve 20.00g   17.23 data
vm-3002-disk-0    pve 50.00g   6.97  data
vm-3003-disk-0    pve 30.00g   6.09  data
vm-3500-disk-0    pve 20.00g   14.71 data
vm-3501-disk-0    pve 20.00g   11.32 data
vm-5200-disk-0    pve 50.00g   3.31  data
vm-6000-disk-0    pve 50.00g   3.30  data
vm-6201-disk-0    pve 50.00g   3.27  data
vm-6400-disk-0    pve 50.00g   3.30  data
vm-9000-cloudinit pve 0.00g    9.38  data
vm-9000-disk-0    pve 1000.00g 0.00  data
vm-9000-disk-1    pve 2.20g    72.95 data
```

**Physical Disks:**
```
NAME SIZE   TYPE MOUNTPOINT
sda  931.5G disk
sdb  931.5G disk
```

---
### r630-04 (192.168.11.14)

**Status:** ❌ Not Reachable

**System Resources:**
- CPU Cores: Unknown
- Memory: Unknown
- VMs: 0
- Containers: 0

**Storage Status:**
```
No storage data available
```

**Volume Groups:**
```
No volume groups found
```

**Thin Pools:**
```
No thin pools found
```

**Physical Disks:**
```
No disk information available
```

---

### r630-01 (192.168.11.11)

**Status:** ✅ Reachable

**System Resources:**
- CPU Cores: 32
- Memory: 503Gi total / 7.5Gi used / 496Gi free
- VMs: 0
- Containers: 3

**Storage Status:**
```
Name           Type     Status    Total (KiB)  Used (KiB)  Available (KiB)  %
local          dir      active    561459584    127104      561332480        0.02%
local-lvm      lvmthin  active    209715200    6123683     203591516        2.92%
thin1          lvmthin  active    218103808    0           218103808        0.00%
thin1-r630-02  lvmthin  disabled  0            0           0                N/A
thin2          lvmthin  disabled  0            0           0                N/A
thin3          lvmthin  disabled  0            0           0                N/A
thin4          lvmthin  disabled  0            0           0                N/A
thin5          lvmthin  disabled  0            0           0                N/A
thin6          lvmthin  disabled  0            0           0                N/A
```

**Volume Groups:**
```
pve 465.77g 57.46g
```

**Thin Pools:**
```
data          pve 200.00g 2.92  11.42
thin1         pve 208.00g 0.00  10.43
vm-106-disk-0 pve 10.00g  12.43 data
vm-107-disk-0 pve 20.00g  11.35 data
vm-108-disk-0 pve 20.00g  11.60 data
```

**Physical Disks:**
```
NAME SIZE   TYPE MOUNTPOINT
sda  558.9G disk
sdb  558.9G disk
sdc  232.9G disk
sdd  232.9G disk
sde  232.9G disk
sdf  232.9G disk
sdg  232.9G disk
sdh  232.9G disk
sr0  1024M  rom
```

---
### r630-03 (192.168.11.13)

**Status:** ❌ Not Reachable

**System Resources:**
- CPU Cores: Unknown
- Memory: Unknown
- VMs: 0
- Containers: 0

**Storage Status:**
```
No storage data available
```

**Volume Groups:**
```
No volume groups found
```

**Thin Pools:**
```
No thin pools found
```

**Physical Disks:**
```
No disk information available
```

---

### r630-02 (192.168.11.12)

**Status:** ✅ Reachable

**System Resources:**
- CPU Cores: 56
- Memory: 251Gi total / 16Gi used / 235Gi free
- VMs: 0
- Containers: 11

**Storage Status:**
```
Name           Type     Status    Total (KiB)  Used (KiB)  Available (KiB)  %
local          dir      active    230441600    4167936     226273664        1.81%
local-lvm      lvmthin  disabled  0            0           0                N/A
thin1          lvmthin  inactive  0            0           0                0.00%
thin1-r630-02  lvmthin  active    237117440    231853432   5264007          97.78%
thin2          lvmthin  active    237117440    0           237117440        0.00%
thin3          lvmthin  active    237117440    0           237117440        0.00%
thin4          lvmthin  active    237117440    30090203    207027236        12.69%
thin5          lvmthin  active    237117440    0           237117440        0.00%
thin6          lvmthin  active    237117440    0           237117440        0.00%
```

**Volume Groups:**
```
thin1 230.87g 0.12g
thin2 230.87g 0.12g
thin3 230.87g 0.12g
thin4 230.87g 0.12g
thin5 230.87g 0.12g
thin6 230.87g 0.12g
```

**Thin Pools:**
```
thin1          thin1 226.13g 97.78 3.84
vm-100-disk-0  thin1 10.00g  29.21 thin1
vm-101-disk-0  thin1 10.00g  39.52 thin1
vm-102-disk-0  thin1 2.00g   75.92 thin1
vm-103-disk-0  thin1 8.00g   46.88 thin1
vm-104-disk-0  thin1 8.00g   13.10 thin1
vm-105-disk-0  thin1 8.00g   50.97 thin1
vm-130-disk-0  thin1 50.00g  7.40  thin1
vm-5000-disk-0 thin1 200.00g 97.16 thin1
vm-6200-disk-0 thin1 50.00g  5.30  thin1
vm-6201-disk-0 thin1 50.00g  6.33  thin1
thin2          thin2 226.13g 0.00  0.72
thin3          thin3 226.13g 0.00  0.72
thin4          thin4 226.13g 12.69 1.15
vm-7811-disk-0 thin4 30.00g  95.66 thin4
thin5          thin5 226.13g 0.00  0.72
thin6          thin6 226.13g 0.00  0.72
```

**Physical Disks:**
```
NAME SIZE   TYPE MOUNTPOINT
sda  232.9G disk
sdb  232.9G disk
sdc  232.9G disk
sdd  232.9G disk
sde  232.9G disk
sdf  232.9G disk
sdg  232.9G disk
sdh  232.9G disk
```

---
## Storage Analysis and Recommendations

### 1. Storage Type Analysis

#### Local Storage (Directory-based)
- **Purpose:** ISO images, container templates, backups
- **Performance:** Good for read-heavy workloads
- **Recommendation:** Use for templates and ISOs, not for VM disks

#### LVM Thin Storage
- **Purpose:** VM/container disk images
- **Performance:** Excellent with thin provisioning
- **Benefits:** Space efficiency, snapshots, cloning
- **Recommendation:** ✅ **Preferred for VM/container disks**

#### ZFS Storage
- **Purpose:** High-performance VM storage
- **Performance:** Excellent with compression and deduplication
- **Benefits:** Data integrity, snapshots, clones
- **Recommendation:** Consider for high-performance workloads

### 2. Critical Issues and Fixes

#### ml110 Storage Issues

⚠️ **Issue:** Some storage pools are disabled or inactive

**Action Required:**
```bash
ssh root@192.168.11.10
pvesm status
# Enable disabled storage:
pvesm set <storage-name> --disable 0
```

⚠️ **Issue:** Storage usage is high (>80%)

**Recommendation:**
- Monitor storage usage closely
- Plan for expansion or cleanup
- Consider migrating VMs to other nodes

#### r630-01 Storage Issues

⚠️ **Issue:** Some storage pools are disabled or inactive

**Action Required:**
```bash
ssh root@192.168.11.11
pvesm status
# Enable disabled storage:
pvesm set <storage-name> --disable 0
```

⚠️ **Issue:** Storage usage is high (>80%)

**Recommendation:**
- Monitor storage usage closely
- Plan for expansion or cleanup
- Consider migrating VMs to other nodes

#### r630-02 Storage Issues

⚠️ **Issue:** Some storage pools are disabled or inactive

**Action Required:**
```bash
ssh root@192.168.11.12
pvesm status
# Enable disabled storage:
pvesm set <storage-name> --disable 0
```

⚠️ **Issue:** Storage usage is high (>80%)

**Recommendation:**
- Monitor storage usage closely
- Plan for expansion or cleanup
- Consider migrating VMs to other nodes

### 3. Performance Optimization Recommendations

#### Storage Performance Best Practices

1. **Use LVM Thin for VM Disks**
   - Better performance than directory storage
   - Thin provisioning saves space
   - Enables snapshots and cloning

2. **Monitor Thin Pool Metadata Usage**
   - Thin pools require metadata space
   - Monitor `metadata_percent` in `lvs` output
   - Expand metadata if >80% used

3. **Storage Distribution**
   - Distribute VMs across multiple nodes
   - Balance storage usage across nodes
   - Avoid overloading a single node

4. **Backup Storage Strategy**
   - Use separate storage for backups
   - Consider NFS or Ceph for shared backups
   - Implement backup rotation policies

### 4. Capacity Planning

#### Current Storage Distribution

**Recommendations:**
- Monitor storage growth trends
- Plan for 20-30% headroom
- Set alerts at 80% usage
- Consider storage expansion before reaching capacity
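The 20-30% headroom rule can be checked mechanically from the `Total (KiB)` / `Used (KiB)` columns above. A rough sketch (integer shell arithmetic, so percentages are truncated; the `headroom_ok` helper name is mine, not part of any script in this repo):

```shell
# Report whether a pool keeps the recommended free-space headroom.
headroom_ok() {
  total=$1; used=$2; min_free_pct=$3
  free=$(( total - used ))
  free_pct=$(( free * 100 / total ))
  if [ "$free_pct" -ge "$min_free_pct" ]; then
    echo "OK: ${free_pct}% free"
  else
    echo "EXPAND: only ${free_pct}% free (want >= ${min_free_pct}%)"
  fi
}

# thin1-r630-02 figures from the r630-02 table above:
headroom_ok 237117440 231853432 20
```

Run against the thin1-r630-02 figures this prints an `EXPAND` line, matching the 97.78% usage flagged elsewhere in this report.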
### 5. Storage Type Recommendations by Use Case

| Use Case | Recommended Storage Type | Reason |
|----------|-------------------------|--------|
| VM/Container Disks | LVM Thin (lvmthin) | Best performance, thin provisioning |
| ISO Images | Directory (dir) | Read-only, no performance impact |
| Container Templates | Directory (dir) | Templates are read-only |
| Backups | Directory or NFS | Separate from production storage |
| High-Performance VMs | ZFS or LVM Thin | Best I/O performance |
| Development/Test | LVM Thin | Space efficient with cloning |

### 6. Security Recommendations

1. **Storage Access Control**
   - Review storage.cfg node restrictions
   - Ensure proper node assignments
   - Verify storage permissions

2. **Backup Security**
   - Encrypt backups if they contain sensitive data
   - Store backups off-site
   - Test backup restoration regularly

### 7. Monitoring Recommendations

1. **Set Up Storage Monitoring**
   - Monitor storage usage (>80% alert)
   - Monitor thin pool metadata usage
   - Track storage growth trends

2. **Performance Monitoring**
   - Monitor I/O latency
   - Track storage throughput
   - Identify bottlenecks

3. **Automated Alerts**
   - Storage usage >80%
   - Thin pool metadata >80%
   - Storage errors or failures

### 8. Migration Recommendations

#### Workload Distribution

**Current State:**
- ml110: Hosting all VMs (overloaded)
- r630-01/r630-02: Underutilized

**Recommended Distribution:**
- **ml110:** Keep management/lightweight VMs (10-15 VMs)
- **r630-01:** Migrate medium workload VMs (10-15 VMs)
- **r630-02:** Migrate heavy workload VMs (10-15 VMs)

**Benefits:**
- Better performance (ml110 CPU is slower)
- Better resource utilization
- Improved redundancy
- Better storage distribution

### 9. Immediate Action Items

#### Critical (Do First)
1. ✅ Review storage status on all nodes
2. ⚠️ Enable disabled storage pools
3. ⚠️ Verify storage node restrictions in storage.cfg
4. ⚠️ Check for storage errors or warnings

#### High Priority
1. ⚠️ Configure LVM thin storage where missing
2. ⚠️ Set up storage monitoring and alerts
3. ⚠️ Plan VM migration for better distribution
4. ⚠️ Review and optimize storage.cfg

#### Recommended
1. ⚠️ Implement backup storage strategy
2. ⚠️ Consider shared storage (NFS/Ceph) for HA
3. ⚠️ Optimize storage performance settings
4. ⚠️ Document storage procedures

---
## Detailed Storage Commands Reference

### Check Storage Status
```bash
# On any Proxmox node
pvesm status
pvesm list <storage-name>
```

### Enable Disabled Storage
```bash
pvesm set <storage-name> --disable 0
```

### Check LVM Configuration
```bash
vgs                                    # List volume groups
lvs                                    # List logical volumes
lvs -o +data_percent,metadata_percent  # Check thin pool usage
```

### Check Disk Usage
```bash
df -h    # Filesystem usage
lsblk    # Block devices
```

### Storage Performance Testing
```bash
# Test storage I/O
fio --name=test --ioengine=libaio --iodepth=16 --rw=randwrite --bs=4k --size=1G --runtime=60
```

---

## Conclusion

This comprehensive storage review provides:

- ✅ Current storage status across all nodes
- ✅ Detailed analysis of storage configurations
- ✅ Performance optimization recommendations
- ✅ Capacity planning guidance
- ✅ Security and monitoring recommendations
- ✅ Migration and distribution strategies

**Next Steps:**
1. Review this report
2. Address critical issues first
3. Implement high-priority recommendations
4. Plan for long-term optimizations

---

**Report Generated:** Tue Jan 6 02:47:02 PST 2026
**Report File:** /home/intlc/projects/proxmox/reports/storage/storage_review_20260106_024632.md
99
rpc-translator-138/FIX_RSYNC_DEPLOYMENT.md
Normal file
@@ -0,0 +1,99 @@
# Fix: rsync Not Found During Deployment

**Issue**: Deployment fails with `rsync: command not found` on target VMIDs
**Status**: ✅ **FIXED**

---

## Problem

The deployment script was failing because `rsync` was not installed on the target VMIDs (2400, 2401, 2402).

**Error:**
```
bash: line 1: rsync: command not found
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12)
```

---

## Solution

Updated `scripts/deploy-to-vmid.sh` to automatically install the required dependencies before copying files:

1. **Install `rsync`** - Required for file copying
2. **Install `curl`** - Required for Node.js installation
3. **Install Node.js** - If not already installed
4. **Install `npm`** - If not already installed

---

## What Changed

**File**: `scripts/deploy-to-vmid.sh`

**Added dependency installation step:**
```bash
# Install required dependencies on remote (rsync, nodejs, npm)
echo "Installing required dependencies on VMID $VMID..."
ssh $SSH_KEY -o StrictHostKeyChecking=no "root@$VMIP" "
  export DEBIAN_FRONTEND=noninteractive
  apt-get update -qq
  apt-get install -y -qq rsync curl >/dev/null 2>&1 || true
  # Check if Node.js is installed, if not install it
  if ! command -v node >/dev/null 2>&1; then
    curl -fsSL https://deb.nodesource.com/setup_20.x | bash - >/dev/null 2>&1
    apt-get install -y -qq nodejs >/dev/null 2>&1
  fi
  # Ensure npm is available
  if ! command -v npm >/dev/null 2>&1; then
    apt-get install -y -qq npm >/dev/null 2>&1
  fi
"
```

**Also updated dependency installation to support pnpm:**
```bash
# Use pnpm if available, otherwise fall back to npm
if command -v pnpm >/dev/null 2>&1; then
  pnpm install --production
else
  npm install --production
fi
```

---

## Deploy Now

The deployment script now automatically installs dependencies. Run:

```bash
cd /home/intlc/projects/proxmox/rpc-translator-138
./scripts/deploy-smart-interception.sh
```

Or deploy to a specific VMID:

```bash
./scripts/deploy-to-vmid.sh 2402 192.168.11.242
```

---

## What Gets Installed

On each target VMID, the script now automatically installs:

1. ✅ **rsync** - For file synchronization
2. ✅ **curl** - For downloading the Node.js installer
3. ✅ **Node.js 20.x** - If not already installed
4. ✅ **npm** - If not already installed

All installations are done silently (`-qq` flag) to keep output clean.
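A quick way to confirm the dependency step worked is to re-check the tools on the target after deployment. A minimal sketch (the `check_tools` helper name is mine, not part of the deploy script; the fake tool name below is only there to show the failure path):

```shell
# Report which of the given tools are missing from PATH.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  if [ -z "$missing" ]; then
    echo "all present"
  else
    echo "missing:${missing}"
  fi
}

# On a target VMID you would run: check_tools rsync curl node npm
check_tools sh no-such-tool-xyz
```

Running it over SSH (`ssh root@$VMIP '...'`) right after the install step gives a one-line pass/fail before rsync is attempted.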

---

**Status**: ✅ **Fixed - Ready to deploy!**
0
rpc-translator-138/scripts/deploy-smart-interception.sh
Normal file → Executable file
@@ -35,6 +35,23 @@ echo "Building TypeScript..."
cd "$PROJECT_DIR"
pnpm run build

# Install required dependencies on remote (rsync, nodejs, npm)
echo "Installing required dependencies on VMID $VMID..."
ssh $SSH_KEY -o StrictHostKeyChecking=no "root@$VMIP" "
  export DEBIAN_FRONTEND=noninteractive
  apt-get update -qq
  apt-get install -y -qq rsync curl >/dev/null 2>&1 || true
  # Check if Node.js is installed, if not install it
  if ! command -v node >/dev/null 2>&1; then
    curl -fsSL https://deb.nodesource.com/setup_20.x | bash - >/dev/null 2>&1
    apt-get install -y -qq nodejs >/dev/null 2>&1
  fi
  # Ensure npm is available
  if ! command -v npm >/dev/null 2>&1; then
    apt-get install -y -qq npm >/dev/null 2>&1
  fi
"

# Create deployment directory on remote
echo "Creating deployment directory on VMID $VMID..."
ssh $SSH_KEY -o StrictHostKeyChecking=no "root@$VMIP" "mkdir -p $DEPLOY_DIR"
@@ -57,7 +74,15 @@ eval rsync -avz $RSYNC_SSH "$PROJECT_DIR/dist/" "root@$VMIP:$DEPLOY_DIR/dist/"

# Copy package.json and install dependencies
echo "Installing dependencies on VMID $VMID..."
ssh $SSH_KEY -o StrictHostKeyChecking=no "root@$VMIP" "cd $DEPLOY_DIR && npm install --production"
ssh $SSH_KEY -o StrictHostKeyChecking=no "root@$VMIP" "
  cd $DEPLOY_DIR
  # Use pnpm if available, otherwise fall back to npm
  if command -v pnpm >/dev/null 2>&1; then
    pnpm install --production
  else
    npm install --production
  fi
"

# Copy systemd service file
echo "Installing systemd service..."
345
scripts/DEPLOYMENT_README_R630-01.md
Normal file
@@ -0,0 +1,345 @@
# Sankofa & Phoenix Deployment Guide for r630-01

**Target Server:** r630-01 (192.168.11.11)
**Deployment Date:** $(date +%Y-%m-%d)
**Status:** Ready for Deployment

---

## Overview

This guide provides step-by-step instructions for deploying the Sankofa and Phoenix control plane services to the r630-01 Proxmox node.

### Architecture

```
r630-01 (192.168.11.11)
├── VMID 7803: PostgreSQL (10.160.0.13)
├── VMID 7802: Keycloak (10.160.0.12)
├── VMID 7800: Sankofa API (10.160.0.10)
└── VMID 7801: Sankofa Portal (10.160.0.11)
```

### Network Configuration

- **VLAN:** 160
- **Subnet:** 10.160.0.0/22
- **Gateway:** 10.160.0.1
- **Storage:** thin1 (208GB available)

---

## Prerequisites

1. **SSH Access to r630-01**
   ```bash
   ssh root@192.168.11.11
   ```

2. **Sankofa Project Available**
   - Location: `/home/intlc/projects/Sankofa`
   - Must contain `api/` and `portal/` directories

3. **Proxmox Storage**
   - Verify `thin1` storage is available
   - Check available space: `pvesm status`

4. **Network Configuration**
   - Verify VLAN 160 is configured
   - Verify the gateway (10.160.0.1) is accessible

---
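The four prerequisite checks above are easy to automate. A rough pre-flight sketch (the `preflight` and `check_dirs` helpers are hypothetical, not part of the committed scripts; host, paths, and storage name are the ones this guide assumes):

```bash
#!/usr/bin/env bash
# Sketch: automate the prerequisite checks listed above.
# Fails on the first unmet prerequisite.

# Local check: the Sankofa project must contain api/ and portal/.
check_dirs() {
    [ -d "$1/api" ] && [ -d "$1/portal" ]
}

# Remote checks require SSH access to the Proxmox node.
preflight() {
    host=$1 project=$2 storage=$3
    ssh -o ConnectTimeout=5 "root@$host" true || { echo "SSH to $host failed" >&2; return 1; }
    check_dirs "$project" || { echo "api/ or portal/ missing under $project" >&2; return 1; }
    ssh "root@$host" "pvesm status | grep -q $storage" || { echo "storage $storage not found" >&2; return 1; }
    ssh "root@$host" "ping -c 1 -W 2 10.160.0.1 >/dev/null" || { echo "gateway 10.160.0.1 unreachable" >&2; return 1; }
}

# Example: preflight 192.168.11.11 /home/intlc/projects/Sankofa thin1
```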
## Deployment Steps

### Step 1: Prepare Configuration

1. Copy the environment template:
   ```bash
   cd /home/intlc/projects/proxmox/scripts
   cp env.r630-01.example .env.r630-01
   ```

2. Edit `.env.r630-01` and update:
   - Database passwords
   - Keycloak admin password
   - Client secrets
   - JWT secrets
   - Any other production values

### Step 2: Deploy Containers

Deploy all LXC containers:

```bash
cd /home/intlc/projects/proxmox/scripts
./deploy-sankofa-r630-01.sh
```

This will create:
- PostgreSQL container (VMID 7803)
- Keycloak container (VMID 7802)
- API container (VMID 7800)
- Portal container (VMID 7801)

### Step 3: Setup PostgreSQL

Configure the PostgreSQL database:

```bash
./setup-postgresql-r630-01.sh
```

This will:
- Install PostgreSQL 16
- Create the `sankofa` database
- Create the `sankofa` user
- Configure network access
- Enable required extensions

**Note:** The script generates a random password. Update `.env.r630-01` with the actual password.

### Step 4: Setup Keycloak

Configure the Keycloak identity service:

```bash
./setup-keycloak-r630-01.sh
```

This will:
- Install Java 21
- Download and install Keycloak 24.0.0
- Create the Keycloak database
- Configure the PostgreSQL connection
- Create the admin user
- Create the API and Portal clients

**Note:** The script generates random passwords and secrets. Update `.env.r630-01` with the actual values.

### Step 5: Deploy API

Deploy the Sankofa API service:

```bash
./deploy-api-r630-01.sh
```

This will:
- Install Node.js 18
- Install pnpm
- Copy the API project files
- Install dependencies
- Configure the environment
- Run database migrations
- Build the API
- Create a systemd service
- Start the API service

### Step 6: Run Database Migrations

If migrations weren't run during API deployment:

```bash
./run-migrations-r630-01.sh
```

### Step 7: Deploy Portal

Deploy the Sankofa Portal:

```bash
./deploy-portal-r630-01.sh
```

This will:
- Install Node.js 18
- Install pnpm
- Copy the Portal project files
- Install dependencies
- Configure the environment
- Build the Portal (Next.js)
- Create a systemd service
- Start the Portal service

---
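Steps 2–7 can be chained so a failure in one script stops the run before later steps depend on its output. A minimal sketch (the `run_steps` wrapper is hypothetical, not one of the committed scripts):

```bash
#!/usr/bin/env bash
# Hypothetical wrapper around the per-service scripts above.
# Runs each script in order and aborts on the first failure.
run_steps() {
    for step in "$@"; do
        echo "==> Running $step"
        "$step" || { echo "FAILED: $step" >&2; return 1; }
    done
}

# Intended usage (order matters: containers first, then database, identity, services):
# run_steps ./deploy-sankofa-r630-01.sh ./setup-postgresql-r630-01.sh \
#           ./setup-keycloak-r630-01.sh ./deploy-api-r630-01.sh ./deploy-portal-r630-01.sh
```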
## Verification

### Check Container Status

```bash
ssh root@192.168.11.11 "pct list | grep -E '780[0-3]'"
```

### Check Service Status

**PostgreSQL:**
```bash
ssh root@192.168.11.11 "pct exec 7803 -- systemctl status postgresql"
```

**Keycloak:**
```bash
ssh root@192.168.11.11 "pct exec 7802 -- systemctl status keycloak"
curl http://10.160.0.12:8080/health/ready
```

**API:**
```bash
ssh root@192.168.11.11 "pct exec 7800 -- systemctl status sankofa-api"
curl http://10.160.0.10:4000/health
```

**Portal:**
```bash
ssh root@192.168.11.11 "pct exec 7801 -- systemctl status sankofa-portal"
curl http://10.160.0.11:3000
```

### Test GraphQL Endpoint

```bash
curl -X POST http://10.160.0.10:4000/graphql \
  -H "Content-Type: application/json" \
  -d '{"query": "{ __typename }"}'
```

---
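The curl checks above poll once; right after a deploy the services may still be booting, so a single failed curl is not conclusive. A small retry helper (hypothetical, not part of the committed scripts) makes the verification less flaky:

```bash
#!/usr/bin/env bash
# Hypothetical retry wrapper for the health-check curls above.
# wait_for_http URL [ATTEMPTS] [DELAY_SECONDS] -> 0 once the URL answers, 1 on timeout.
wait_for_http() {
    url=$1; attempts=${2:-10}; delay=${3:-3}
    i=1
    while [ "$i" -le "$attempts" ]; do
        # -f treats HTTP errors as failure; -m bounds each attempt to 5 seconds.
        if curl -s -f -m 5 "$url" >/dev/null 2>&1; then
            return 0
        fi
        sleep "$delay"
        i=$((i + 1))
    done
    echo "Timed out waiting for $url" >&2
    return 1
}

# Example: wait_for_http http://10.160.0.10:4000/health 20 5
```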
## Service URLs

| Service | URL | Description |
|---------|-----|-------------|
| PostgreSQL | `10.160.0.13:5432` | Database |
| Keycloak | `http://10.160.0.12:8080` | Identity Provider |
| Keycloak Admin | `http://10.160.0.12:8080/admin` | Admin Console |
| API | `http://10.160.0.10:4000` | GraphQL API |
| API GraphQL | `http://10.160.0.10:4000/graphql` | GraphQL Endpoint |
| API Health | `http://10.160.0.10:4000/health` | Health Check |
| Portal | `http://10.160.0.11:3000` | Web Portal |

---
## Troubleshooting

### Container Won't Start

```bash
# Check container status
ssh root@192.168.11.11 "pct status 7800"

# Check container logs
ssh root@192.168.11.11 "pct exec 7800 -- journalctl -n 50"
```

### Database Connection Issues

```bash
# Test the database connection from the API container
ssh root@192.168.11.11 "pct exec 7800 -- bash -c 'PGPASSWORD=your_password psql -h 10.160.0.13 -U sankofa -d sankofa -c \"SELECT 1;\"'"
```

### Keycloak Not Starting

```bash
# Check Keycloak logs
ssh root@192.168.11.11 "pct exec 7802 -- journalctl -u keycloak -n 100"

# Check the Keycloak process
ssh root@192.168.11.11 "pct exec 7802 -- ps aux | grep keycloak"
```

### API Service Issues

```bash
# Check API logs
ssh root@192.168.11.11 "pct exec 7800 -- journalctl -u sankofa-api -n 100"

# Restart the API service
ssh root@192.168.11.11 "pct exec 7800 -- systemctl restart sankofa-api"
```

### Portal Build Failures

```bash
# Check build logs
ssh root@192.168.11.11 "pct exec 7801 -- journalctl -u sankofa-portal -n 100"

# Rebuild the Portal
ssh root@192.168.11.11 "pct exec 7801 -- bash -c 'cd /opt/sankofa-portal && pnpm build'"
```

---
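All four containers follow the same pattern, so the per-service checks above can be driven from one loop. The VMID-to-unit mapping below is a sketch based on the tables in this guide:

```bash
#!/usr/bin/env bash
# Sketch: map a VMID from this guide to the systemd unit running inside it,
# so one loop can check every service. Mapping taken from the tables above.
unit_for_vmid() {
    case "$1" in
        7800) echo sankofa-api ;;
        7801) echo sankofa-portal ;;
        7802) echo keycloak ;;
        7803) echo postgresql ;;
        *)    echo "unknown VMID: $1" >&2; return 1 ;;
    esac
}

# Example loop (run from a machine with SSH access to r630-01):
# for vmid in 7800 7801 7802 7803; do
#     unit=$(unit_for_vmid "$vmid")
#     ssh root@192.168.11.11 "pct exec $vmid -- systemctl is-active $unit"
# done
```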
## Post-Deployment Tasks

1. **Update Environment Variables**
   - Update `.env.r630-01` with actual passwords and secrets
   - Update service configurations if needed

2. **Configure Firewall Rules**
   - Allow access to service ports
   - Configure VLAN 160 routing if needed

3. **Set Up Cloudflare Tunnels**
   - Configure tunnels for external access
   - Set up DNS records

4. **Configure Monitoring**
   - Set up Prometheus exporters
   - Configure Grafana dashboards
   - Set up alerting

5. **Backup Configuration**
   - Document all passwords and secrets
   - Create backup procedures
   - Test restore procedures

---
## Maintenance

### Update Services

**Update API:**
```bash
ssh root@192.168.11.11 "pct exec 7800 -- bash -c 'cd /opt/sankofa-api && git pull && pnpm install && pnpm build && systemctl restart sankofa-api'"
```

**Update Portal:**
```bash
ssh root@192.168.11.11 "pct exec 7801 -- bash -c 'cd /opt/sankofa-portal && git pull && pnpm install && pnpm build && systemctl restart sankofa-portal'"
```

### Backup Database

```bash
ssh root@192.168.11.11 "pct exec 7803 -- bash -c 'PGPASSWORD=your_password pg_dump -h localhost -U sankofa sankofa > /tmp/sankofa_backup_$(date +%Y%m%d).sql'"
```

### View Logs

**API Logs:**
```bash
ssh root@192.168.11.11 "pct exec 7800 -- journalctl -u sankofa-api -f"
```

**Portal Logs:**
```bash
ssh root@192.168.11.11 "pct exec 7801 -- journalctl -u sankofa-portal -f"
```

---
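The dump command above writes dated files but nothing removes old ones. A retention sweep like the following keeps the backup directory bounded; the directory path and the 14-day window are assumptions to adjust, not values from the scripts:

```bash
#!/usr/bin/env bash
# Sketch: delete database dumps older than a retention window.
# The dump filenames match the sankofa_backup_YYYYMMDD.sql pattern used above.
prune_backups() {
    local dir=$1 days=${2:-14}
    # -mtime +N selects files not modified in the last N days; -print logs what is removed.
    find "$dir" -maxdepth 1 -name 'sankofa_backup_*.sql' -mtime +"$days" -print -delete
}

# Example: prune_backups /var/backups/sankofa 14
```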
## Support

For issues or questions:
1. Check logs using the troubleshooting commands above
2. Review the deployment scripts for configuration
3. Verify network connectivity between containers
4. Check Proxmox storage and resource availability

---

**Last Updated:** $(date +%Y-%m-%d)
214  scripts/DEPLOYMENT_SUMMARY_R630-01.md  Normal file
@@ -0,0 +1,214 @@
# Sankofa & Phoenix Deployment Summary - r630-01

**Date:** $(date +%Y-%m-%d)
**Status:** ✅ All Deployment Scripts Created
**Target:** r630-01 (192.168.11.11)

---

## ✅ Completed Tasks

### 1. Deployment Scripts Created

- ✅ `deploy-sankofa-r630-01.sh` - Main container deployment script
- ✅ `setup-postgresql-r630-01.sh` - PostgreSQL database setup
- ✅ `setup-keycloak-r630-01.sh` - Keycloak identity service setup
- ✅ `deploy-api-r630-01.sh` - Sankofa API deployment
- ✅ `deploy-portal-r630-01.sh` - Sankofa Portal deployment
- ✅ `run-migrations-r630-01.sh` - Database migration runner

### 2. Configuration Files Created

- ✅ `env.r630-01.example` - Environment configuration template
- ✅ `DEPLOYMENT_README_R630-01.md` - Complete deployment guide

### 3. Fixed Blockers

- ✅ **Fixed:** Deployment script now targets r630-01 instead of pve2
- ✅ **Fixed:** Storage configuration uses `thin1` (available on r630-01)
- ✅ **Fixed:** Network configuration uses static IPs for VLAN 160
- ✅ **Fixed:** Added PostgreSQL container deployment
- ✅ **Fixed:** Added Keycloak configuration scripts
- ✅ **Fixed:** Added service deployment scripts
- ✅ **Fixed:** Added database migration scripts

---

## 📋 Deployment Architecture

### Container Allocation

| VMID | Service | IP Address | Resources |
|------|---------|------------|-----------|
| 7803 | PostgreSQL | 10.160.0.13 | 2GB RAM, 2 cores, 50GB disk |
| 7802 | Keycloak | 10.160.0.12 | 2GB RAM, 2 cores, 30GB disk |
| 7800 | Sankofa API | 10.160.0.10 | 4GB RAM, 4 cores, 50GB disk |
| 7801 | Sankofa Portal | 10.160.0.11 | 4GB RAM, 4 cores, 50GB disk |
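For reference, each row of the allocation table maps onto a `pct create` invocation roughly as sketched below. The bridge name (`vmbr0`) and template path are assumptions; the authoritative values live in `deploy-sankofa-r630-01.sh`:

```bash
#!/usr/bin/env bash
# Sketch: the pct create call implied by one row of the table above.
# vmbr0 and the template path are assumed; check deploy-sankofa-r630-01.sh for real values.
pct_create_cmd() {
    local vmid=$1 name=$2 ip=$3 mem=$4 cores=$5 disk=$6
    echo "pct create $vmid local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst" \
         "--hostname $name --memory $mem --cores $cores" \
         "--rootfs thin1:$disk" \
         "--net0 name=eth0,bridge=vmbr0,tag=160,ip=$ip/22,gw=10.160.0.1"
}

# Example: pct_create_cmd 7803 sankofa-postgres 10.160.0.13 2048 2 50
```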
### Network Configuration

- **VLAN:** 160
- **Subnet:** 10.160.0.0/22
- **Gateway:** 10.160.0.1
- **Storage:** thin1 (208GB available)

---

## 🚀 Quick Start

### 1. Prepare Configuration

```bash
cd /home/intlc/projects/proxmox/scripts
cp env.r630-01.example .env.r630-01
# Edit .env.r630-01 with your passwords and secrets
```

### 2. Deploy Containers

```bash
./deploy-sankofa-r630-01.sh
```

### 3. Setup Services (in order)

```bash
# 1. Setup PostgreSQL
./setup-postgresql-r630-01.sh

# 2. Setup Keycloak
./setup-keycloak-r630-01.sh

# 3. Deploy API
./deploy-api-r630-01.sh

# 4. Deploy Portal
./deploy-portal-r630-01.sh
```

---
## 📝 Key Changes from Original Script

### Fixed Issues

1. **Target Node:** Changed from `pve2` (192.168.11.12) to `r630-01` (192.168.11.11)
2. **Storage:** Changed from `thin4` to `thin1` (available on r630-01)
3. **Network:** Changed from DHCP to static IP configuration
4. **PostgreSQL:** Added a dedicated PostgreSQL container (VMID 7803)
5. **Service Order:** Proper deployment order (PostgreSQL → Keycloak → API → Portal)
6. **Configuration:** Added comprehensive environment configuration
7. **Scripts:** Added individual service setup scripts

### New Features

- ✅ PostgreSQL database setup script
- ✅ Keycloak installation and configuration script
- ✅ API deployment with migrations
- ✅ Portal deployment with Next.js build
- ✅ Database migration runner
- ✅ Comprehensive deployment documentation

---

## 🔧 Configuration Requirements

### Before Deployment

1. **SSH Access:** Ensure SSH access to r630-01
   ```bash
   ssh root@192.168.11.11
   ```

2. **Storage:** Verify thin1 storage is available
   ```bash
   ssh root@192.168.11.11 "pvesm status | grep thin1"
   ```

3. **Network:** Verify VLAN 160 configuration
   ```bash
   ssh root@192.168.11.11 "ip addr show | grep 160"
   ```

4. **Sankofa Project:** Ensure the project is available
   ```bash
   ls -la /home/intlc/projects/Sankofa/api
   ls -la /home/intlc/projects/Sankofa/portal
   ```

### Environment Variables

Update `env.r630-01.example` (or create `.env.r630-01`) with:

- Database passwords
- Keycloak admin password
- Keycloak client secrets
- JWT secrets
- NextAuth secret
- Any other production values
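Since the deploy scripts fall back to empty defaults for secrets, a pre-flight guard that fails fast on unset variables can catch a half-edited `.env.r630-01` before anything is deployed. A sketch (the helper and the variable names in the example are assumptions; check `env.r630-01.example` for the real names):

```bash
#!/usr/bin/env bash
# Hypothetical pre-flight guard: fail if any required variable is unset or empty.
check_env() {
    missing=""
    for var in "$@"; do
        # Indirect lookup via eval so the function works with plain POSIX sh.
        eval "val=\${$var:-}"
        if [ -z "$val" ]; then
            missing="$missing $var"
        fi
    done
    if [ -n "$missing" ]; then
        echo "Missing required variables:$missing" >&2
        return 1
    fi
}

# Example: check_env DB_PASSWORD KEYCLOAK_ADMIN_PASSWORD JWT_SECRET NEXTAUTH_SECRET
```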
---

## 📊 Deployment Checklist

- [ ] SSH access to r630-01 verified
- [ ] Storage (thin1) verified
- [ ] Network (VLAN 160) configured
- [ ] Sankofa project available
- [ ] Environment configuration prepared
- [ ] Containers deployed (`deploy-sankofa-r630-01.sh`)
- [ ] PostgreSQL setup completed (`setup-postgresql-r630-01.sh`)
- [ ] Keycloak setup completed (`setup-keycloak-r630-01.sh`)
- [ ] API deployed (`deploy-api-r630-01.sh`)
- [ ] Portal deployed (`deploy-portal-r630-01.sh`)
- [ ] All services verified and running
- [ ] Firewall rules configured
- [ ] Cloudflare tunnels configured (if needed)
- [ ] Monitoring configured (if needed)

---

## 🎯 Next Steps

After successful deployment:

1. **Verify Services:**
   - Test the API health endpoint
   - Test Portal accessibility
   - Test the Keycloak admin console
   - Test database connectivity

2. **Configure External Access:**
   - Set up Cloudflare tunnels
   - Configure DNS records
   - Set up SSL/TLS certificates

3. **Set Up Monitoring:**
   - Configure Prometheus exporters
   - Set up Grafana dashboards
   - Configure alerting rules

4. **Documentation:**
   - Document all passwords and secrets securely
   - Create operational runbooks
   - Document backup and restore procedures

---

## 📚 Documentation

- **Deployment Guide:** `DEPLOYMENT_README_R630-01.md`
- **Environment Template:** `env.r630-01.example`
- **Scripts Location:** `/home/intlc/projects/proxmox/scripts/`

---

## ✅ Status

**All blockers fixed and deployment scripts created!**

The deployment is ready to proceed. Follow the steps in `DEPLOYMENT_README_R630-01.md` to deploy Sankofa and Phoenix to r630-01.

---

**Generated:** $(date '+%Y-%m-%d %H:%M:%S')
227  scripts/deploy-api-r630-01.sh  Executable file
@@ -0,0 +1,227 @@
#!/usr/bin/env bash
# Deploy Sankofa API to r630-01
# VMID: 7800, IP: 10.160.0.10

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
SANKOFA_PROJECT="${SANKOFA_PROJECT:-/home/intlc/projects/Sankofa}"
source "$SCRIPT_DIR/env.r630-01.example" 2>/dev/null || true

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }

# Configuration
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.11}"
VMID="${VMID_SANKOFA_API:-7800}"
CONTAINER_IP="${SANKOFA_API_IP:-10.160.0.10}"
DB_HOST="${DB_HOST:-10.160.0.13}"
DB_PORT="${DB_PORT:-5432}"
DB_NAME="${DB_NAME:-sankofa}"
DB_USER="${DB_USER:-sankofa}"
DB_PASSWORD="${DB_PASSWORD:-}"
KEYCLOAK_URL="${KEYCLOAK_URL:-http://10.160.0.12:8080}"
KEYCLOAK_REALM="${KEYCLOAK_REALM:-master}"
KEYCLOAK_CLIENT_ID="${KEYCLOAK_CLIENT_ID_API:-sankofa-api}"
KEYCLOAK_CLIENT_SECRET="${KEYCLOAK_CLIENT_SECRET_API:-}"
JWT_SECRET="${JWT_SECRET:-$(openssl rand -base64 32)}"
NODE_VERSION="18"

# SSH function
ssh_r630_01() {
    ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$PROXMOX_HOST" "$@"
}

# Execute command in container
exec_container() {
    ssh_r630_01 "pct exec $VMID -- $*"
}

main() {
    echo ""
    log_info "========================================="
    log_info "Sankofa API Deployment"
    log_info "========================================="
    echo ""
    log_info "Container VMID: $VMID"
    log_info "Container IP: $CONTAINER_IP"
    echo ""

    # Check if container exists and is running
    log_info "Checking container status..."
    if ! ssh_r630_01 "pct status $VMID >/dev/null 2>&1"; then
        log_error "Container $VMID does not exist. Please run deploy-sankofa-r630-01.sh first."
        exit 1
    fi

    local status=$(ssh_r630_01 "pct status $VMID" 2>/dev/null | awk '{print $2}' || echo "stopped")
    if [[ "$status" != "running" ]]; then
        log_info "Starting container $VMID..."
        ssh_r630_01 "pct start $VMID"
        sleep 5
    fi
    log_success "Container is running"
    echo ""

    # Install Node.js
    log_info "Installing Node.js $NODE_VERSION..."
    exec_container bash -c "export DEBIAN_FRONTEND=noninteractive && \
        curl -fsSL https://deb.nodesource.com/setup_${NODE_VERSION}.x | bash - && \
        apt-get install -y -qq nodejs"

    # Install pnpm
    log_info "Installing pnpm..."
    exec_container bash -c "npm install -g pnpm"

    log_success "Node.js and pnpm installed"
    echo ""

    # Copy project files
    log_info "Copying Sankofa API project files..."
    if [[ ! -d "$SANKOFA_PROJECT/api" ]]; then
        log_error "Sankofa project not found at $SANKOFA_PROJECT"
        log_info "Please ensure the Sankofa project is available"
        exit 1
    fi

    # Create app directory
    exec_container bash -c "mkdir -p /opt/sankofa-api"

    # Copy API directory
    log_info "Copying files to container..."
    ssh_r630_01 "pct push $VMID $SANKOFA_PROJECT/api /opt/sankofa-api --recursive"

    log_success "Project files copied"
    echo ""

    # Install dependencies
    log_info "Installing dependencies..."
    exec_container bash -c "cd /opt/sankofa-api && pnpm install --frozen-lockfile"

    log_success "Dependencies installed"
    echo ""

    # Create environment file
    log_info "Creating environment configuration..."
    exec_container bash -c "cat > /opt/sankofa-api/.env << EOF
# Database
DB_HOST=$DB_HOST
DB_PORT=$DB_PORT
DB_NAME=$DB_NAME
DB_USER=$DB_USER
DB_PASSWORD=$DB_PASSWORD

# Keycloak
KEYCLOAK_URL=$KEYCLOAK_URL
KEYCLOAK_REALM=$KEYCLOAK_REALM
KEYCLOAK_CLIENT_ID=$KEYCLOAK_CLIENT_ID
KEYCLOAK_CLIENT_SECRET=$KEYCLOAK_CLIENT_SECRET
KEYCLOAK_MULTI_REALM=false

# API
API_PORT=4000
JWT_SECRET=$JWT_SECRET
NODE_ENV=production

# Multi-Tenancy
ENABLE_MULTI_TENANT=true

# Billing
BILLING_GRANULARITY=SECOND
BLOCKCHAIN_BILLING_ENABLED=false
BLOCKCHAIN_IDENTITY_ENABLED=false
EOF"

    log_success "Environment configured"
    echo ""

    # Run database migrations
    log_info "Running database migrations..."
    exec_container bash -c "cd /opt/sankofa-api && pnpm db:migrate" || log_warn "Migrations may have failed - check database connection"
    echo ""

    # Build API
    log_info "Building API..."
    exec_container bash -c "cd /opt/sankofa-api && pnpm build"

    log_success "API built"
    echo ""

    # Create systemd service
    log_info "Creating systemd service..."
    exec_container bash -c "cat > /etc/systemd/system/sankofa-api.service << 'EOF'
[Unit]
Description=Sankofa API Server
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/sankofa-api
Environment=\"NODE_ENV=production\"
EnvironmentFile=/opt/sankofa-api/.env
ExecStart=/usr/bin/node /opt/sankofa-api/dist/server.js
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
EOF"

    # Start service
    log_info "Starting API service..."
    exec_container bash -c "systemctl daemon-reload && \
        systemctl enable sankofa-api && \
        systemctl start sankofa-api"

    sleep 5

    # Check service status
    if exec_container bash -c "systemctl is-active --quiet sankofa-api"; then
        log_success "API service is running"
    else
        log_error "API service failed to start"
        exec_container bash -c "journalctl -u sankofa-api -n 50 --no-pager"
        exit 1
    fi
    echo ""

    # Test API health
    log_info "Testing API health endpoint..."
    sleep 5
    if exec_container bash -c "curl -s -f http://localhost:4000/health >/dev/null 2>&1"; then
        log_success "API health check passed"
    else
        log_warn "API health check failed - service may still be starting"
    fi
    echo ""

    # Summary
    log_success "========================================="
    log_success "Sankofa API Deployment Complete"
    log_success "========================================="
    echo ""
    log_info "API Configuration:"
    echo "  URL: http://$CONTAINER_IP:4000"
    echo "  GraphQL: http://$CONTAINER_IP:4000/graphql"
    echo "  Health: http://$CONTAINER_IP:4000/health"
    echo ""
    log_info "Next steps:"
    echo "  1. Verify API is accessible: curl http://$CONTAINER_IP:4000/health"
    echo "  2. Run: ./scripts/deploy-portal-r630-01.sh"
    echo ""
}

main "$@"
217  scripts/deploy-portal-r630-01.sh  Executable file
@@ -0,0 +1,217 @@
#!/usr/bin/env bash
# Deploy Sankofa Portal to r630-01
# VMID: 7801, IP: 10.160.0.11

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
SANKOFA_PROJECT="${SANKOFA_PROJECT:-/home/intlc/projects/Sankofa}"
source "$SCRIPT_DIR/env.r630-01.example" 2>/dev/null || true

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }

# Configuration
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.11}"
VMID="${VMID_SANKOFA_PORTAL:-7801}"
CONTAINER_IP="${SANKOFA_PORTAL_IP:-10.160.0.11}"
API_URL="${NEXT_PUBLIC_GRAPHQL_ENDPOINT:-http://10.160.0.10:4000/graphql}"
API_WS_URL="${NEXT_PUBLIC_GRAPHQL_WS_ENDPOINT:-ws://10.160.0.10:4000/graphql-ws}"
KEYCLOAK_URL="${KEYCLOAK_URL:-http://10.160.0.12:8080}"
KEYCLOAK_REALM="${KEYCLOAK_REALM:-master}"
KEYCLOAK_CLIENT_ID="${KEYCLOAK_CLIENT_ID_PORTAL:-portal-client}"
KEYCLOAK_CLIENT_SECRET="${KEYCLOAK_CLIENT_SECRET_PORTAL:-}"
NEXTAUTH_SECRET="${NEXTAUTH_SECRET:-$(openssl rand -base64 32)}"
NODE_VERSION="18"

# SSH function
ssh_r630_01() {
    ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$PROXMOX_HOST" "$@"
}

# Execute command in container
exec_container() {
    ssh_r630_01 "pct exec $VMID -- $*"
}

main() {
    echo ""
    log_info "========================================="
    log_info "Sankofa Portal Deployment"
    log_info "========================================="
    echo ""
    log_info "Container VMID: $VMID"
    log_info "Container IP: $CONTAINER_IP"
    echo ""

    # Check if container exists and is running
    log_info "Checking container status..."
    if ! ssh_r630_01 "pct status $VMID >/dev/null 2>&1"; then
        log_error "Container $VMID does not exist. Please run deploy-sankofa-r630-01.sh first."
        exit 1
    fi

    local status=$(ssh_r630_01 "pct status $VMID" 2>/dev/null | awk '{print $2}' || echo "stopped")
    if [[ "$status" != "running" ]]; then
        log_info "Starting container $VMID..."
        ssh_r630_01 "pct start $VMID"
        sleep 5
    fi
    log_success "Container is running"
    echo ""

    # Install Node.js
    log_info "Installing Node.js $NODE_VERSION..."
    exec_container bash -c "export DEBIAN_FRONTEND=noninteractive && \
        curl -fsSL https://deb.nodesource.com/setup_${NODE_VERSION}.x | bash - && \
        apt-get install -y -qq nodejs"

    # Install pnpm
    log_info "Installing pnpm..."
    exec_container bash -c "npm install -g pnpm"

    log_success "Node.js and pnpm installed"
    echo ""

    # Copy project files
    log_info "Copying Sankofa Portal project files..."
    if [[ ! -d "$SANKOFA_PROJECT/portal" ]]; then
        log_error "Sankofa portal project not found at $SANKOFA_PROJECT/portal"
        log_info "Please ensure the Sankofa project is available"
        exit 1
    fi

    # Create app directory
    exec_container bash -c "mkdir -p /opt/sankofa-portal"

    # Copy portal directory
    log_info "Copying files to container..."
    ssh_r630_01 "pct push $VMID $SANKOFA_PROJECT/portal /opt/sankofa-portal --recursive"

    log_success "Project files copied"
    echo ""

    # Install dependencies
    log_info "Installing dependencies..."
    exec_container bash -c "cd /opt/sankofa-portal && pnpm install --frozen-lockfile"

    log_success "Dependencies installed"
    echo ""

    # Create environment file
    log_info "Creating environment configuration..."
    exec_container bash -c "cat > /opt/sankofa-portal/.env.local << EOF
# Keycloak
KEYCLOAK_URL=$KEYCLOAK_URL
KEYCLOAK_REALM=$KEYCLOAK_REALM
KEYCLOAK_CLIENT_ID=$KEYCLOAK_CLIENT_ID
KEYCLOAK_CLIENT_SECRET=$KEYCLOAK_CLIENT_SECRET

# API
NEXT_PUBLIC_GRAPHQL_ENDPOINT=$API_URL
NEXT_PUBLIC_GRAPHQL_WS_ENDPOINT=$API_WS_URL

# NextAuth
NEXTAUTH_URL=http://$CONTAINER_IP:3000
NEXTAUTH_SECRET=$NEXTAUTH_SECRET

# Crossplane (if available)
NEXT_PUBLIC_CROSSPLANE_API=${NEXT_PUBLIC_CROSSPLANE_API:-http://crossplane.sankofa.nexus}
NEXT_PUBLIC_ARGOCD_URL=${NEXT_PUBLIC_ARGOCD_URL:-http://argocd.sankofa.nexus}
NEXT_PUBLIC_GRAFANA_URL=${NEXT_PUBLIC_GRAFANA_URL:-http://grafana.sankofa.nexus}
NEXT_PUBLIC_LOKI_URL=${NEXT_PUBLIC_LOKI_URL:-http://loki.sankofa.nexus:3100}

# App
NEXT_PUBLIC_APP_URL=http://$CONTAINER_IP:3000
NODE_ENV=production
EOF"

    log_success "Environment configured"
    echo ""

    # Build Portal
    log_info "Building Portal (this may take several minutes)..."
    exec_container bash -c "cd /opt/sankofa-portal && pnpm build"

    log_success "Portal built"
    echo ""

    # Create systemd service
    log_info "Creating systemd service..."
    exec_container bash -c "cat > /etc/systemd/system/sankofa-portal.service << 'EOF'
[Unit]
Description=Sankofa Portal
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/sankofa-portal
Environment=\"NODE_ENV=production\"
EnvironmentFile=/opt/sankofa-portal/.env.local
ExecStart=/usr/bin/node /opt/sankofa-portal/node_modules/.bin/next start
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
EOF"

    # Start service
    log_info "Starting Portal service..."
    exec_container bash -c "systemctl daemon-reload && \
        systemctl enable sankofa-portal && \
        systemctl start sankofa-portal"

    sleep 10

    # Check service status
    if exec_container bash -c "systemctl is-active --quiet sankofa-portal"; then
        log_success "Portal service is running"
    else
        log_error "Portal service failed to start"
        exec_container bash -c "journalctl -u sankofa-portal -n 50 --no-pager"
        exit 1
    fi
    echo ""

    # Test Portal
    log_info "Testing Portal..."
    sleep 5
    if exec_container bash -c "curl -s -f http://localhost:3000 >/dev/null 2>&1"; then
        log_success "Portal is accessible"
    else
        log_warn "Portal may still be starting - check logs if issues persist"
    fi
    echo ""

    # Summary
    log_success "========================================="
    log_success "Sankofa Portal Deployment Complete"
    log_success "========================================="
    echo ""
    log_info "Portal Configuration:"
    echo "  URL: http://$CONTAINER_IP:3000"
    echo "  API: $API_URL"
    echo "  Keycloak: $KEYCLOAK_URL"
    echo ""
    log_info "Next steps:"
    echo "  1. Verify Portal is accessible: curl http://$CONTAINER_IP:3000"
    echo "  2. Configure firewall rules if needed"
    echo "  3. Set up Cloudflare tunnels for external access"
    echo ""
}

main "$@"
||||
268  scripts/deploy-sankofa-r630-01.sh  Executable file
@@ -0,0 +1,268 @@
|
||||
#!/usr/bin/env bash
# Deploy Sankofa Services to r630-01
# Sankofa/Phoenix/PanTel service layer on VLAN 160 (10.160.0.0/22)
# VMID Range: 7800-8999

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
SANKOFA_PROJECT="/home/intlc/projects/Sankofa"

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }

# Configuration
PROXMOX_NODE="r630-01"
PROXMOX_HOST="192.168.11.11"
# r630-01 has: local, local-lvm, thin1 available
PROXMOX_STORAGE="${PROXMOX_STORAGE:-thin1}"
CONTAINER_OS_TEMPLATE="local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst"

# Sankofa Configuration
SANKOFA_VLAN="160"
SANKOFA_SUBNET="10.160.0.0/22"
SANKOFA_GATEWAY="10.160.0.1"

# VMID Allocation (Sankofa range: 7800-8999)
VMID_SANKOFA_POSTGRES=7803
VMID_SANKOFA_API=7800
VMID_SANKOFA_PORTAL=7801
VMID_SANKOFA_KEYCLOAK=7802

# Service IPs (VLAN 160)
SANKOFA_POSTGRES_IP="10.160.0.13"
SANKOFA_API_IP="10.160.0.10"
SANKOFA_PORTAL_IP="10.160.0.11"
SANKOFA_KEYCLOAK_IP="10.160.0.12"

# Resource allocation
SANKOFA_POSTGRES_MEMORY="2048"  # 2GB
SANKOFA_POSTGRES_CORES="2"
SANKOFA_POSTGRES_DISK="50"      # 50GB

SANKOFA_API_MEMORY="4096"       # 4GB
SANKOFA_API_CORES="4"
SANKOFA_API_DISK="50"           # 50GB

SANKOFA_PORTAL_MEMORY="4096"    # 4GB
SANKOFA_PORTAL_CORES="4"
SANKOFA_PORTAL_DISK="50"        # 50GB

SANKOFA_KEYCLOAK_MEMORY="2048"  # 2GB
SANKOFA_KEYCLOAK_CORES="2"
SANKOFA_KEYCLOAK_DISK="30"      # 30GB

# SSH function
ssh_r630_01() {
    ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$PROXMOX_HOST" "$@"
}

# Check if container exists
container_exists() {
    local vmid=$1
    ssh_r630_01 "pct list 2>/dev/null | grep -q '^\s*$vmid\s'" 2>/dev/null
}

# Get container IP address
get_container_ip() {
    local vmid=$1
    ssh_r630_01 "pct exec $vmid -- ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}'" 2>/dev/null || echo ""
}

# Create Sankofa container
create_sankofa_container() {
    local vmid=$1
    local hostname=$2
    local ip_address=$3
    local memory=$4
    local cores=$5
    local disk=$6
    local service_type=$7

    log_info "Creating Sankofa $service_type: $hostname (VMID: $vmid, IP: $ip_address)"

    if container_exists "$vmid"; then
        log_warn "Container $vmid ($hostname) already exists, skipping creation"
        return 0
    fi

    # Network configuration - use static IP for VLAN 160
    # Note: For unprivileged containers, VLAN tagging may need bridge configuration
    local network_config="bridge=vmbr0,name=eth0,ip=${ip_address}/22,gw=${SANKOFA_GATEWAY},type=veth"

    log_info "Creating container $vmid on $PROXMOX_NODE..."
    ssh_r630_01 "pct create $vmid \
        $CONTAINER_OS_TEMPLATE \
        --storage $PROXMOX_STORAGE \
        --hostname $hostname \
        --memory $memory \
        --cores $cores \
        --rootfs $PROXMOX_STORAGE:$disk \
        --net0 '$network_config' \
        --unprivileged 1 \
        --swap 512 \
        --onboot 1 \
        --timezone America/Los_Angeles \
        --features nesting=1,keyctl=1" 2>&1

    if container_exists "$vmid"; then
        log_success "Container $vmid created successfully"

        # Start container
        log_info "Starting container $vmid..."
        ssh_r630_01 "pct start $vmid" 2>&1 || true

        # Wait for container to be ready
        log_info "Waiting for container to be ready..."
        sleep 10

        # Basic setup
        log_info "Configuring container $vmid..."
        ssh_r630_01 "pct exec $vmid -- bash -c 'export DEBIAN_FRONTEND=noninteractive; apt-get update -qq && apt-get install -y -qq curl wget git build-essential sudo'" 2>&1 | grep -vE "(perl: warning|locale:)" || true

        log_success "Sankofa $service_type container $vmid ($hostname) deployed successfully"
        return 0
    else
        log_error "Failed to create container $vmid"
        return 1
    fi
}

# Main deployment
main() {
    echo ""
    log_info "========================================="
    log_info "Sankofa Deployment to r630-01"
    log_info "========================================="
    echo ""
    log_info "Target Node: $PROXMOX_NODE ($PROXMOX_HOST)"
    log_info "Storage: $PROXMOX_STORAGE"
    log_info "VLAN: $SANKOFA_VLAN ($SANKOFA_SUBNET)"
    log_info "VMID Range: 7800-8999"
    echo ""

    # Check connectivity to r630-01
    log_info "Checking connectivity to $PROXMOX_NODE..."
    if ! ssh_r630_01 "pvecm status >/dev/null 2>&1"; then
        log_error "Cannot connect to $PROXMOX_NODE. Please check SSH access."
        exit 1
    fi
    log_success "Connected to $PROXMOX_NODE"
    echo ""

    # Check if containers already exist
    log_info "Checking existing Sankofa containers..."
    existing_containers=()
    if container_exists "$VMID_SANKOFA_POSTGRES"; then
        existing_containers+=("$VMID_SANKOFA_POSTGRES:sankofa-postgres-1")
        log_warn "Container $VMID_SANKOFA_POSTGRES (sankofa-postgres-1) already exists"
    fi
    if container_exists "$VMID_SANKOFA_API"; then
        existing_containers+=("$VMID_SANKOFA_API:sankofa-api-1")
        log_warn "Container $VMID_SANKOFA_API (sankofa-api-1) already exists"
    fi
    if container_exists "$VMID_SANKOFA_PORTAL"; then
        existing_containers+=("$VMID_SANKOFA_PORTAL:sankofa-portal-1")
        log_warn "Container $VMID_SANKOFA_PORTAL (sankofa-portal-1) already exists"
    fi
    if container_exists "$VMID_SANKOFA_KEYCLOAK"; then
        existing_containers+=("$VMID_SANKOFA_KEYCLOAK:sankofa-keycloak-1")
        log_warn "Container $VMID_SANKOFA_KEYCLOAK (sankofa-keycloak-1) already exists"
    fi

    if [[ ${#existing_containers[@]} -gt 0 ]]; then
        log_warn "Some Sankofa containers already exist:"
        for container in "${existing_containers[@]}"; do
            echo "  - $container"
        done
        echo ""
        read -p "Continue with deployment? (y/N): " -n 1 -r
        echo
        if [[ ! $REPLY =~ ^[Yy]$ ]]; then
            log_info "Deployment cancelled"
            exit 0
        fi
    fi
    echo ""

    # Deploy PostgreSQL first (required by other services)
    log_info "Deploying PostgreSQL database..."
    create_sankofa_container \
        "$VMID_SANKOFA_POSTGRES" \
        "sankofa-postgres-1" \
        "$SANKOFA_POSTGRES_IP" \
        "$SANKOFA_POSTGRES_MEMORY" \
        "$SANKOFA_POSTGRES_CORES" \
        "$SANKOFA_POSTGRES_DISK" \
        "PostgreSQL"
    echo ""

    # Deploy Keycloak (required by API and Portal)
    log_info "Deploying Keycloak identity service..."
    create_sankofa_container \
        "$VMID_SANKOFA_KEYCLOAK" \
        "sankofa-keycloak-1" \
        "$SANKOFA_KEYCLOAK_IP" \
        "$SANKOFA_KEYCLOAK_MEMORY" \
        "$SANKOFA_KEYCLOAK_CORES" \
        "$SANKOFA_KEYCLOAK_DISK" \
        "Keycloak"
    echo ""

    # Deploy Sankofa API
    log_info "Deploying Sankofa API service..."
    create_sankofa_container \
        "$VMID_SANKOFA_API" \
        "sankofa-api-1" \
        "$SANKOFA_API_IP" \
        "$SANKOFA_API_MEMORY" \
        "$SANKOFA_API_CORES" \
        "$SANKOFA_API_DISK" \
        "API"
    echo ""

    # Deploy Sankofa Portal
    log_info "Deploying Sankofa Portal service..."
    create_sankofa_container \
        "$VMID_SANKOFA_PORTAL" \
        "sankofa-portal-1" \
        "$SANKOFA_PORTAL_IP" \
        "$SANKOFA_PORTAL_MEMORY" \
        "$SANKOFA_PORTAL_CORES" \
        "$SANKOFA_PORTAL_DISK" \
        "Portal"
    echo ""

    # Summary
    log_success "========================================="
    log_success "Sankofa Container Deployment Complete"
    log_success "========================================="
    echo ""
    log_info "Deployed containers on $PROXMOX_NODE:"
    echo "  - VMID $VMID_SANKOFA_POSTGRES: sankofa-postgres-1 ($SANKOFA_POSTGRES_IP)"
    echo "  - VMID $VMID_SANKOFA_KEYCLOAK: sankofa-keycloak-1 ($SANKOFA_KEYCLOAK_IP)"
    echo "  - VMID $VMID_SANKOFA_API: sankofa-api-1 ($SANKOFA_API_IP)"
    echo "  - VMID $VMID_SANKOFA_PORTAL: sankofa-portal-1 ($SANKOFA_PORTAL_IP)"
    echo ""
    log_info "Next steps:"
    echo "  1. Run: ./scripts/setup-postgresql-r630-01.sh"
    echo "  2. Run: ./scripts/setup-keycloak-r630-01.sh"
    echo "  3. Run: ./scripts/deploy-api-r630-01.sh"
    echo "  4. Run: ./scripts/deploy-portal-r630-01.sh"
    echo "  5. Configure networking and firewall rules"
    echo "  6. Set up Cloudflare tunnels for external access"
    echo ""
}

# Run main function
main "$@"
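The `container_exists` helper above matches the VMID as a whole column of `pct list` output. A standalone check of that grep pattern shows why the anchors matter: an unanchored grep for `780` would also hit `7800`. The sample output below is illustrative, not captured from a real node.

```shell
# Sample `pct list` output (illustrative)
pct_list_sample='VMID       Status     Lock         Name
7800       running                 sankofa-api-1
17800      stopped                 other-ct'

# Anchored pattern: optional leading whitespace, the exact VMID, then whitespace
vmid=7800
echo "$pct_list_sample" | grep -q "^\s*${vmid}\s" && echo "container $vmid exists"
```

Note that `\s` as a whitespace class is a GNU grep extension; on other greps, `[[:space:]]` is the portable spelling.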
84	scripts/env.r630-01.example	Normal file
@@ -0,0 +1,84 @@
# Sankofa/Phoenix Deployment Configuration for r630-01
# Copy this file to .env.r630-01 and update with your values

# Proxmox Configuration
PROXMOX_HOST=192.168.11.11
PROXMOX_NODE=r630-01
PROXMOX_STORAGE=thin1
PROXMOX_USER=root@pam

# Network Configuration
SANKOFA_VLAN=160
SANKOFA_SUBNET=10.160.0.0/22
SANKOFA_GATEWAY=10.160.0.1

# Service IPs (VLAN 160)
SANKOFA_POSTGRES_IP=10.160.0.13
SANKOFA_API_IP=10.160.0.10
SANKOFA_PORTAL_IP=10.160.0.11
SANKOFA_KEYCLOAK_IP=10.160.0.12

# VMIDs
VMID_SANKOFA_POSTGRES=7803
VMID_SANKOFA_API=7800
VMID_SANKOFA_PORTAL=7801
VMID_SANKOFA_KEYCLOAK=7802

# Database Configuration
DB_HOST=10.160.0.13
DB_PORT=5432
DB_NAME=sankofa
DB_USER=sankofa
DB_PASSWORD=CHANGE_THIS_PASSWORD_IN_PRODUCTION
POSTGRES_SUPERUSER_PASSWORD=CHANGE_THIS_PASSWORD_IN_PRODUCTION

# Keycloak Configuration
KEYCLOAK_URL=http://10.160.0.12:8080
KEYCLOAK_ADMIN_URL=http://10.160.0.12:8080/admin
KEYCLOAK_REALM=master
KEYCLOAK_ADMIN_USERNAME=admin
KEYCLOAK_ADMIN_PASSWORD=CHANGE_THIS_PASSWORD_IN_PRODUCTION
KEYCLOAK_CLIENT_ID_API=sankofa-api
KEYCLOAK_CLIENT_ID_PORTAL=portal-client
KEYCLOAK_CLIENT_SECRET_API=CHANGE_THIS_SECRET_IN_PRODUCTION
KEYCLOAK_CLIENT_SECRET_PORTAL=CHANGE_THIS_SECRET_IN_PRODUCTION
KEYCLOAK_MULTI_REALM=false

# API Configuration
API_HOST=10.160.0.10
API_PORT=4000
NEXT_PUBLIC_GRAPHQL_ENDPOINT=http://10.160.0.10:4000/graphql
NEXT_PUBLIC_GRAPHQL_WS_ENDPOINT=ws://10.160.0.10:4000/graphql-ws
JWT_SECRET=CHANGE_THIS_JWT_SECRET_IN_PRODUCTION
NODE_ENV=production

# Portal Configuration
PORTAL_HOST=10.160.0.11
PORTAL_PORT=3000
NEXT_PUBLIC_APP_URL=http://10.160.0.11:3000
NEXT_PUBLIC_CROSSPLANE_API=http://crossplane.sankofa.nexus
NEXT_PUBLIC_ARGOCD_URL=http://argocd.sankofa.nexus
NEXT_PUBLIC_GRAFANA_URL=http://grafana.sankofa.nexus
NEXT_PUBLIC_LOKI_URL=http://loki.sankofa.nexus:3100
NEXTAUTH_URL=http://10.160.0.11:3000
NEXTAUTH_SECRET=CHANGE_THIS_NEXTAUTH_SECRET_IN_PRODUCTION

# Multi-Tenancy
ENABLE_MULTI_TENANT=true
DEFAULT_TENANT_ID=

# Billing Configuration
BILLING_GRANULARITY=SECOND
BLOCKCHAIN_BILLING_ENABLED=false
BLOCKCHAIN_IDENTITY_ENABLED=false

# Blockchain (Optional)
BLOCKCHAIN_RPC_URL=
RESOURCE_PROVISIONING_CONTRACT_ADDRESS=

# Monitoring (Optional)
NEXT_PUBLIC_SENTRY_DSN=
SENTRY_AUTH_TOKEN=

# Analytics (Optional)
NEXT_PUBLIC_ANALYTICS_ID=
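One assumed way to consume a `KEY=value` env file like the example above is `set -a` sourcing, which exports every assignment for child processes. This is a minimal self-contained sketch; the demo file path and the two variables are illustrative stand-ins for your edited `.env.r630-01`.

```shell
# Write a tiny demo env file (stand-in for a real .env.r630-01)
cat > /tmp/env.r630-01.demo <<'EOF'
PROXMOX_NODE=r630-01
PROXMOX_STORAGE=thin1
EOF

set -a                       # auto-export every assignment while sourcing
. /tmp/env.r630-01.demo
set +a

echo "$PROXMOX_NODE $PROXMOX_STORAGE"
```

This works because the file contains only plain assignments; values with spaces or shell metacharacters would need quoting before being sourced this way.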
254	scripts/migrate-thin1-r630-02.sh	Executable file
@@ -0,0 +1,254 @@
#!/usr/bin/env bash
# Migrate containers from r630-02 thin1-r630-02 to other thin pools
# This addresses the critical storage issue where thin1-r630-02 is at 97.78% capacity

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
LOG_DIR="${PROJECT_ROOT}/logs/migrations"
LOG_FILE="${LOG_DIR}/migrate-thin1-r630-02_$(date +%Y%m%d_%H%M%S).log"

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1" | tee -a "$LOG_FILE"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1" | tee -a "$LOG_FILE"; }
log_warn() { echo -e "${YELLOW}[⚠]${NC} $1" | tee -a "$LOG_FILE"; }
log_error() { echo -e "${RED}[✗]${NC} $1" | tee -a "$LOG_FILE"; }
log_header() { echo -e "${CYAN}=== $1 ===${NC}" | tee -a "$LOG_FILE"; }

# Create log directory
mkdir -p "$LOG_DIR"

# Node configuration
NODE="r630-02"
NODE_IP="192.168.11.12"
NODE_PASSWORD="password"
SOURCE_STORAGE="thin1-r630-02"

# Target storage pools (all empty and available)
TARGET_POOLS=("thin2" "thin3" "thin5" "thin6")
CURRENT_POOL_INDEX=0

# SSH helper
ssh_node() {
    sshpass -p "$NODE_PASSWORD" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=10 root@"$NODE_IP" "$@"
}

# Get next target storage pool (round-robin).
# The result is written to NEXT_TARGET_POOL instead of echoed: calling this
# function inside a $(...) substitution would run it in a subshell, so the
# CURRENT_POOL_INDEX increment would be lost and every container would land
# on the first pool.
get_next_target_pool() {
    NEXT_TARGET_POOL="${TARGET_POOLS[$CURRENT_POOL_INDEX]}"
    CURRENT_POOL_INDEX=$(( (CURRENT_POOL_INDEX + 1) % ${#TARGET_POOLS[@]} ))
}

# Check if container is running
is_container_running() {
    local vmid="$1"
    ssh_node "pct status $vmid 2>/dev/null | grep -q 'running'"
}

# Get container storage info
get_container_storage() {
    local vmid="$1"
    ssh_node "pct config $vmid 2>/dev/null | grep '^rootfs:' | awk -F: '{print \$2}' | awk '{print \$1}'" | cut -d: -f1
}

# Migrate a container
migrate_container() {
    local vmid="$1"
    local target_storage="$2"
    local container_name="$3"

    log_info "Migrating container $vmid ($container_name) to $target_storage..."

    # Check if container is running
    local was_running=false
    if is_container_running "$vmid"; then
        was_running=true
        log_info "Container $vmid is running, will stop before migration..."
    fi

    # Stop container if running
    if [ "$was_running" = true ]; then
        log_info "Stopping container $vmid..."
        ssh_node "pct stop $vmid" || {
            log_error "Failed to stop container $vmid"
            return 1
        }
        sleep 2
    fi

    # Perform migration using move-volume (for same-node storage migration)
    log_info "Moving container $vmid disk from $SOURCE_STORAGE to $target_storage..."

    # Use pct move-volume for same-node storage migration
    # Syntax: pct move-volume <vmid> <volume> [<storage>]
    # volume is "rootfs" for the root filesystem
    if ssh_node "pct move-volume $vmid rootfs $target_storage" >> "$LOG_FILE" 2>&1; then
        log_success "Container $vmid disk moved successfully to $target_storage"

        # Start container if it was running before
        if [ "$was_running" = true ]; then
            log_info "Starting container $vmid..."
            sleep 2
            ssh_node "pct start $vmid" || log_warn "Failed to start container $vmid (may need manual start)"
        fi

        return 0
    else
        log_error "Failed to move container $vmid disk"
        # Try to start container if it was stopped
        if [ "$was_running" = true ]; then
            log_info "Attempting to restart container $vmid..."
            ssh_node "pct start $vmid" || true
        fi
        return 1
    fi
}

# Verify migration
verify_migration() {
    local vmid="$1"
    local target_storage="$2"

    local current_storage
    current_storage=$(get_container_storage "$vmid")
    if [ "$current_storage" = "$target_storage" ]; then
        log_success "Verified: Container $vmid is now on $target_storage"
        return 0
    else
        log_error "Verification failed: Container $vmid storage is $current_storage (expected $target_storage)"
        return 1
    fi
}

# Main migration function
main() {
    log_header "Migration: thin1-r630-02 to Other Thin Pools"
    echo ""

    log_info "Source Node: $NODE ($NODE_IP)"
    log_info "Source Storage: $SOURCE_STORAGE"
    log_info "Target Storage Pools: ${TARGET_POOLS[*]}"
    echo ""

    # Get list of containers on source storage
    log_info "Identifying containers on $SOURCE_STORAGE..."
    local containers
    containers=$(ssh_node "pvesm list $SOURCE_STORAGE 2>/dev/null | tail -n +2 | awk '{print \$NF}' | sort -u")

    if [ -z "$containers" ]; then
        log_warn "No containers found on $SOURCE_STORAGE"
        return 0
    fi

    local container_list=($containers)

    # Filter out containers that are already on different storage
    local containers_to_migrate=()
    for vmid in "${container_list[@]}"; do
        local current_storage
        current_storage=$(ssh_node "pct config $vmid 2>/dev/null | grep '^rootfs:' | awk '{print \$2}' | cut -d: -f1")
        if [ "$current_storage" = "$SOURCE_STORAGE" ]; then
            containers_to_migrate+=("$vmid")
        else
            log_info "Container $vmid is already on $current_storage, skipping..."
        fi
    done

    if [ ${#containers_to_migrate[@]} -eq 0 ]; then
        log_success "All containers have been migrated from $SOURCE_STORAGE"
        return 0
    fi

    log_info "Found ${#containers_to_migrate[@]} containers to migrate: ${containers_to_migrate[*]}"
    echo ""

    # Update container_list to only include containers that need migration
    container_list=("${containers_to_migrate[@]}")

    # Get container names
    declare -A container_names
    for vmid in "${container_list[@]}"; do
        local name
        name=$(ssh_node "pct config $vmid 2>/dev/null | grep '^hostname:' | awk '{print \$2}'" || echo "unknown")
        container_names[$vmid]="$name"
    done

    # Check storage availability
    log_info "Checking target storage availability..."
    for pool in "${TARGET_POOLS[@]}"; do
        local available
        available=$(ssh_node "pvesm status $pool 2>/dev/null | awk 'NR==2 {print \$6}'" || echo "0")
        log_info "  $pool: Available"
    done
    echo ""

    # Confirm migration
    log_warn "This will migrate ${#container_list[@]} containers from $SOURCE_STORAGE"
    log_info "Containers to migrate:"
    for vmid in "${container_list[@]}"; do
        log_info "  - VMID $vmid: ${container_names[$vmid]}"
    done
    echo ""

    # Check for --yes flag for non-interactive mode
    local auto_confirm=false
    if [[ "${1:-}" == "--yes" ]] || [[ "${1:-}" == "-y" ]]; then
        auto_confirm=true
        log_info "Auto-confirm mode enabled"
    fi

    if [ "$auto_confirm" = false ]; then
        read -p "Continue with migration? (yes/no): " confirm
        if [ "$confirm" != "yes" ]; then
            log_info "Migration cancelled by user"
            return 0
        fi
    fi
    echo ""

    # Perform migrations
    local success_count=0
    local fail_count=0

    for vmid in "${container_list[@]}"; do
        get_next_target_pool
        local target_pool="$NEXT_TARGET_POOL"
        local container_name="${container_names[$vmid]}"

        log_header "Migrating Container $vmid ($container_name)"

        # Plain arithmetic assignment is used for the counters: ((var++))
        # returns a nonzero status when var is 0 and would abort under set -e.
        if migrate_container "$vmid" "$target_pool" "$container_name"; then
            if verify_migration "$vmid" "$target_pool"; then
                success_count=$((success_count + 1))
            else
                fail_count=$((fail_count + 1))
            fi
        else
            fail_count=$((fail_count + 1))
        fi

        echo ""
    done

    # Final summary
    log_header "Migration Summary"
    log_info "Total containers: ${#container_list[@]}"
    log_success "Successfully migrated: $success_count"
    if [ $fail_count -gt 0 ]; then
        log_error "Failed migrations: $fail_count"
    fi

    # Check final storage status
    echo ""
    log_info "Final storage status:"
    ssh_node "pvesm status | grep -E '(thin1-r630-02|thin2|thin3|thin5|thin6)'" | tee -a "$LOG_FILE"

    echo ""
    log_success "Migration complete! Log saved to: $LOG_FILE"
}

# Run main function
main "$@"
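The round-robin pool selection used by the migration script above can be sketched standalone. The function writes its result to a variable rather than echoing it: capturing it with `$(...)` would run the function in a subshell and silently discard the index increment, sending every container to the first pool. The names `next_target_pool`/`NEXT_POOL` here are illustrative.

```shell
TARGET_POOLS=("thin2" "thin3" "thin5" "thin6")
CURRENT_POOL_INDEX=0

# Select the next pool and advance the index in the *current* shell
next_target_pool() {
    NEXT_POOL="${TARGET_POOLS[$CURRENT_POOL_INDEX]}"
    CURRENT_POOL_INDEX=$(( (CURRENT_POOL_INDEX + 1) % ${#TARGET_POOLS[@]} ))
}

for _ in 1 2 3 4 5; do
    next_target_pool
    echo "$NEXT_POOL"
done
```

Five calls cycle through thin2, thin3, thin5, thin6 and wrap back to thin2, which is what spreads the migrated containers evenly across the four pools.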
204	scripts/organize-docs-directory.sh	Executable file
@@ -0,0 +1,204 @@
#!/bin/bash
# Organize Documentation Directory Files
# Moves files from docs/ root to appropriate directories

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
cd "$PROJECT_ROOT"

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m'

# Dry-run mode (default: true)
DRY_RUN="${1:---dry-run}"

# Log file
LOG_FILE="docs/DOCS_ORGANIZATION_$(date +%Y%m%d_%H%M%S).log"
MOVED_COUNT=0
SKIPPED_COUNT=0
ERROR_COUNT=0

log() {
    echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1" | tee -a "$LOG_FILE"
}

success() {
    echo -e "${GREEN}[OK]${NC} $1" | tee -a "$LOG_FILE"
}

warn() {
    echo -e "${YELLOW}[WARN]${NC} $1" | tee -a "$LOG_FILE"
}

error() {
    echo -e "${RED}[ERROR]${NC} $1" | tee -a "$LOG_FILE"
    ERROR_COUNT=$((ERROR_COUNT + 1))
}

move_file() {
    local source="$1"
    local dest="$2"
    local description="${3:-}"

    if [ ! -f "$source" ]; then
        warn "File not found: $source"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        return
    fi

    # Create destination directory if it doesn't exist
    local dest_dir
    dest_dir=$(dirname "$dest")
    if [ "$DRY_RUN" != "--dry-run" ]; then
        mkdir -p "$dest_dir"
    fi

    # Check if destination already exists
    if [ -f "$dest" ]; then
        warn "Destination already exists: $dest (skipping $source)"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        return
    fi

    if [ "$DRY_RUN" == "--dry-run" ]; then
        log "Would move: $source → $dest $description"
    else
        if mv "$source" "$dest" 2>>"$LOG_FILE"; then
            success "Moved: $source → $dest $description"
            MOVED_COUNT=$((MOVED_COUNT + 1))
        else
            error "Failed to move: $source → $dest"
        fi
    fi
}

# Create necessary directories
create_directories() {
    if [ "$DRY_RUN" != "--dry-run" ]; then
        mkdir -p docs/00-meta
        mkdir -p docs/archive/reports
        mkdir -p docs/archive/issues
        mkdir -p docs/bridge/contracts
        mkdir -p docs/04-configuration/metamask
        mkdir -p docs/scripts
    fi
}

log "╔══════════════════════════════════════════════════════════╗"
log "║        Documentation Directory Files Organization        ║"
log "╚══════════════════════════════════════════════════════════╝"
log ""
log "Mode: $DRY_RUN"
log "Project Root: $PROJECT_ROOT"
log "Log File: $LOG_FILE"
log ""

create_directories

log "=== Moving Documentation Meta Files to docs/00-meta/ ==="
for file in \
    CONTRIBUTOR_GUIDELINES.md \
    DOCUMENTATION_ENHANCEMENTS_RECOMMENDATIONS.md \
    DOCUMENTATION_FIXES_COMPLETE.md \
    DOCUMENTATION_QUALITY_REVIEW.md \
    DOCUMENTATION_RELATIONSHIP_MAP.md \
    DOCUMENTATION_REORGANIZATION_COMPLETE.md \
    DOCUMENTATION_REVIEW.md \
    DOCUMENTATION_STYLE_GUIDE.md \
    DOCUMENTATION_UPGRADE_SUMMARY.md \
    MARKDOWN_FILE_MAINTENANCE_GUIDE.md; do
    [ -f "docs/$file" ] || continue
    move_file "docs/$file" "docs/00-meta/$file" "(documentation meta)"
done

log ""
log "=== Moving Reports to docs/archive/reports/ ==="
for file in \
    PROXMOX_CLUSTER_STORAGE_STATUS_REPORT.md \
    PROXMOX_SSL_CERTIFICATE_FIX.md \
    PROXMOX_SSL_FIX_VERIFIED.md \
    SSL_CERTIFICATE_ERROR_596_FIX.md \
    SSL_FIX_FOR_EACH_HOST.md; do
    [ -f "docs/$file" ] || continue
    move_file "docs/$file" "docs/archive/reports/$file" "(report)"
done

log ""
log "=== Moving Issue Tracking to docs/archive/issues/ ==="
for file in \
    OUTSTANDING_ISSUES_RESOLUTION_GUIDE.md \
    OUTSTANDING_ISSUES_SUMMARY.md; do
    [ -f "docs/$file" ] || continue
    move_file "docs/$file" "docs/archive/issues/$file" "(issue tracking)"
done

log ""
log "=== Moving Solidity Files to docs/bridge/contracts/ ==="
for file in \
    CCIPWETH9Bridge_flattened.sol \
    CCIPWETH9Bridge_standard_json.json \
    CCIPWETH9Bridge_standard_json_generated.json; do
    [ -f "docs/$file" ] || continue
    move_file "docs/$file" "docs/bridge/contracts/$file" "(Solidity contract)"
done

log ""
log "=== Moving Metamask Config Files to docs/04-configuration/metamask/ ==="
for file in \
    METAMASK_NETWORK_CONFIG.json \
    METAMASK_TOKEN_LIST.json \
    METAMASK_TOKEN_LIST.tokenlist.json; do
    [ -f "docs/$file" ] || continue
    move_file "docs/$file" "docs/04-configuration/metamask/$file" "(Metamask config)"
done

log ""
log "=== Moving Scripts to docs/scripts/ ==="
for file in \
    organize-standalone-files.sh \
    organize_files.py; do
    [ -f "docs/$file" ] || continue
    move_file "docs/$file" "docs/scripts/$file" "(script)"
done

log ""
log "=== Files Staying in Root ==="
log "✅ README.md - Main documentation index"
log "✅ MASTER_INDEX.md - Master index"
log "✅ SEARCH_GUIDE.md - Search guide (useful in root)"

log ""
log "╔══════════════════════════════════════════════════════════╗"
log "║                  Organization Complete                   ║"
log "╚══════════════════════════════════════════════════════════╝"
log ""
log "Summary:"
log "  Files Moved: $MOVED_COUNT"
log "  Files Skipped: $SKIPPED_COUNT"
log "  Errors: $ERROR_COUNT"
log ""

if [ "$DRY_RUN" == "--dry-run" ]; then
    log "⚠️  DRY RUN MODE - No files were actually moved"
    log ""
    log "To execute the moves, run:"
    log "  $0 --execute"
else
    log "✅ Files have been moved successfully"
    log "Log file: $LOG_FILE"

    # Move log file to logs directory
    if [ -f "$LOG_FILE" ]; then
        mkdir -p logs
        mv "$LOG_FILE" "logs/"
        log "Log file moved to: logs/$(basename "$LOG_FILE")"
    fi
fi

log ""
exit 0
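The dry-run-by-default pattern the organize scripts use (first argument defaults to `--dry-run`; moves only happen under `--execute`) can be sketched in isolation. `demo_move` and the temp paths below are hypothetical stand-ins for the scripts' `move_file`.

```shell
# Minimal sketch: report in dry-run mode, move in execute mode
demo_move() {
    local mode="$1" src="$2" dest="$3"
    if [ "$mode" = "--dry-run" ]; then
        echo "Would move: $src -> $dest"      # report only, touch nothing
    else
        mkdir -p "$(dirname "$dest")"         # create destination dir lazily
        mv "$src" "$dest"
        echo "Moved: $src -> $dest"
    fi
}

tmp=$(mktemp -d)
touch "$tmp/a.md"
demo_move --dry-run "$tmp/a.md" "$tmp/docs/a.md"    # file stays put
demo_move --execute "$tmp/a.md" "$tmp/docs/a.md"    # file actually moves
```

Defaulting to the dry run means an accidental bare invocation only prints the plan, which is a sensible safety choice for scripts that shuffle dozens of files.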
202	scripts/organize-root-files.sh	Executable file
@@ -0,0 +1,202 @@
#!/bin/bash
# Organize Root Directory Files
# Moves files from root to appropriate directories based on type

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
cd "$PROJECT_ROOT"

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m'

# Dry-run mode (default: true)
DRY_RUN="${1:---dry-run}"

# Log file
LOG_FILE="ROOT_FILES_ORGANIZATION_$(date +%Y%m%d_%H%M%S).log"
MOVED_COUNT=0
SKIPPED_COUNT=0
ERROR_COUNT=0

log() {
    echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1" | tee -a "$LOG_FILE"
}

success() {
    echo -e "${GREEN}[OK]${NC} $1" | tee -a "$LOG_FILE"
}

warn() {
    echo -e "${YELLOW}[WARN]${NC} $1" | tee -a "$LOG_FILE"
}

error() {
    echo -e "${RED}[ERROR]${NC} $1" | tee -a "$LOG_FILE"
    ERROR_COUNT=$((ERROR_COUNT + 1))
}

move_file() {
    local source="$1"
    local dest="$2"
    local description="${3:-}"

    if [ ! -f "$source" ]; then
        warn "File not found: $source"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        return
    fi

    # Create destination directory if it doesn't exist
    local dest_dir
    dest_dir=$(dirname "$dest")
    if [ "$DRY_RUN" != "--dry-run" ]; then
        mkdir -p "$dest_dir"
    fi

    # Check if destination already exists
    if [ -f "$dest" ]; then
        warn "Destination already exists: $dest (skipping $source)"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        return
    fi

    if [ "$DRY_RUN" == "--dry-run" ]; then
        log "Would move: $source → $dest $description"
    else
        if mv "$source" "$dest" 2>>"$LOG_FILE"; then
            success "Moved: $source → $dest $description"
            MOVED_COUNT=$((MOVED_COUNT + 1))
        else
            error "Failed to move: $source → $dest"
        fi
    fi
}

# Create necessary directories
create_directories() {
    if [ "$DRY_RUN" != "--dry-run" ]; then
        mkdir -p logs
        mkdir -p reports/inventory
        mkdir -p examples
    fi
}

log "╔══════════════════════════════════════════════════════════╗"
log "║            Root Directory Files Organization             ║"
log "╚══════════════════════════════════════════════════════════╝"
log ""
log "Mode: $DRY_RUN"
log "Project Root: $PROJECT_ROOT"
log "Log File: $LOG_FILE"
log ""

create_directories

log "=== Moving Log Files to logs/ ==="
for file in *.log; do
    [ -f "$file" ] || continue
    # Skip the log file we're currently writing to
    [ "$file" == "$LOG_FILE" ] && continue
    move_file "$file" "logs/$file" "(log file)"
done

log ""
log "=== Moving CSV Inventory Files to reports/inventory/ ==="
for file in container_inventory_*.csv; do
    [ -f "$file" ] || continue
    move_file "$file" "reports/inventory/$file" "(inventory CSV)"
done

log ""
log "=== Moving Shell Scripts to scripts/ ==="
# Shell scripts in the root (excluding scripts already in scripts/)
for file in *.sh; do
    [ -f "$file" ] || continue
    # Skip if already in scripts/ directory
    if [ -f "scripts/$file" ]; then
        warn "Script already exists in scripts/: $file (skipping)"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        continue
    fi
    move_file "$file" "scripts/$file" "(shell script)"
done

log ""
log "=== Moving Python Scripts to scripts/ ==="
for file in *.py; do
    [ -f "$file" ] || continue
    # Skip if already in scripts/ directory
    if [ -f "scripts/$file" ]; then
        warn "Script already exists in scripts/: $file (skipping)"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        continue
    fi
    move_file "$file" "scripts/$file" "(Python script)"
done

log ""
log "=== Moving JavaScript Files to scripts/ ==="
for file in *.js; do
    [ -f "$file" ] || continue
    # Skip if already in scripts/ directory
    if [ -f "scripts/$file" ]; then
        warn "Script already exists in scripts/: $file (skipping)"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        continue
    fi
    # Skip package.json and lock files (these should stay in root)
    if [[ "$file" == "package.json" ]] || [[ "$file" == "pnpm-lock.yaml" ]] || [[ "$file" == "token-list.json" ]]; then
        continue
    fi
    move_file "$file" "scripts/$file" "(JavaScript script)"
done

log ""
log "=== Moving HTML Files to examples/ ==="
for file in *.html; do
    [ -f "$file" ] || continue
    move_file "$file" "examples/$file" "(HTML example)"
done

log ""
log "=== Moving JSON Reports to reports/ ==="
for file in CONTENT_INCONSISTENCIES.json MARKDOWN_ANALYSIS.json REFERENCE_FIXES_REPORT.json; do
    [ -f "$file" ] || continue
    move_file "$file" "reports/$file" "(JSON report)"
done

log ""
log "=== Moving Text Reports to reports/ ==="
for file in CONVERSION_SUMMARY.txt; do
    [ -f "$file" ] || continue
    move_file "$file" "reports/$file" "(text report)"
done

log ""
log "╔══════════════════════════════════════════════════════════╗"
log "║                  Organization Complete                   ║"
log "╚══════════════════════════════════════════════════════════╝"
log ""
log "Summary:"
log "  Files Moved: $MOVED_COUNT"
log "  Files Skipped: $SKIPPED_COUNT"
log "  Errors: $ERROR_COUNT"
log ""

if [ "$DRY_RUN" == "--dry-run" ]; then
    log "⚠️  DRY RUN MODE - No files were actually moved"
    log ""
    log "To execute the moves, run:"
    log "  $0 --execute"
|
||||
else
|
||||
log "✅ Files have been moved successfully"
|
||||
log "Log file: $LOG_FILE"
|
||||
fi
|
||||
|
||||
log ""
|
||||
exit 0
|
||||
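For quick verification of the dry-run/execute pattern the script above relies on, here is a standalone sketch. The real `move_file` (plus `log`, `warn`, and the counters) is defined earlier in the script; this stand-in only mirrors the shape, and the sample filenames are made up:

```shell
#!/usr/bin/env bash
# Simplified stand-in for the move_file helper used above: echoes the
# intended move in dry-run mode, performs it (creating the destination
# directory first) in execute mode.
DRY_RUN="${1:---dry-run}"

move_file() {
    local src="$1" dest="$2" note="${3:-}"
    if [ "$DRY_RUN" == "--dry-run" ]; then
        echo "[DRY RUN] would move: $src -> $dest $note"
    else
        mkdir -p "$(dirname "$dest")"
        mv "$src" "$dest"
        echo "moved: $src -> $dest $note"
    fi
}

# Demonstrate against a throwaway directory
tmp=$(mktemp -d)
touch "$tmp/sample.log"
cd "$tmp"
move_file sample.log logs/sample.log "(log file)"
# → [DRY RUN] would move: sample.log -> logs/sample.log (log file)
```

Running with `--dry-run` (the default here) only prints the plan; any other argument performs the moves, matching the `$0 --execute` hint in the summary above.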
501
scripts/review-all-storage.sh
Executable file
@@ -0,0 +1,501 @@
#!/usr/bin/env bash
# Comprehensive Proxmox Storage Review and Recommendations
# Reviews all storage across all Proxmox nodes and provides recommendations

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
REPORT_DIR="${PROJECT_ROOT}/reports/storage"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
REPORT_FILE="${REPORT_DIR}/storage_review_${TIMESTAMP}.md"

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[⚠]${NC} $1"; }
log_error() { echo -e "${RED}[✗]${NC} $1"; }
log_header() { echo -e "${CYAN}=== $1 ===${NC}"; }

# Create report directory
mkdir -p "$REPORT_DIR"

# Proxmox nodes configuration
declare -A NODES
NODES[ml110]="192.168.11.10:L@kers2010"
NODES[r630-01]="192.168.11.11:password"
NODES[r630-02]="192.168.11.12:password"
NODES[r630-03]="192.168.11.13:L@kers2010"
NODES[r630-04]="192.168.11.14:L@kers2010"

# Storage data collection
declare -A STORAGE_DATA

# SSH helper function
ssh_node() {
    local hostname="$1"
    shift
    local ip="${NODES[$hostname]%%:*}"
    local password="${NODES[$hostname]#*:}"

    if command -v sshpass >/dev/null 2>&1; then
        sshpass -p "$password" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$ip" "$@"
    else
        ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$ip" "$@"
    fi
}

# Check node connectivity
check_node() {
    local hostname="$1"
    local ip="${NODES[$hostname]%%:*}"

    if ping -c 1 -W 2 "$ip" >/dev/null 2>&1; then
        return 0
    else
        return 1
    fi
}

# Collect storage information from a node
collect_storage_info() {
    local hostname="$1"
    local ip="${NODES[$hostname]%%:*}"

    log_info "Collecting storage information from $hostname ($ip)..."

    if ! check_node "$hostname"; then
        log_warn "$hostname is not reachable"
        return 1
    fi

    # Collect storage status
    local storage_status=$(ssh_node "$hostname" 'pvesm status 2>/dev/null' || echo "")

    # Collect LVM information
    local vgs_info=$(ssh_node "$hostname" 'vgs --units g --noheadings -o vg_name,vg_size,vg_free 2>/dev/null' || echo "")
    local lvs_info=$(ssh_node "$hostname" 'lvs --units g --noheadings -o lv_name,vg_name,lv_size,data_percent,metadata_percent,pool_lv 2>/dev/null | grep -E "(thin|data)"' || echo "")

    # Collect disk information
    local disk_info=$(ssh_node "$hostname" 'lsblk -d -o NAME,SIZE,TYPE,MOUNTPOINT 2>/dev/null' || echo "")

    # Collect VM/container count
    local vm_count=$(ssh_node "$hostname" 'qm list 2>/dev/null | tail -n +2 | wc -l' || echo "0")
    local ct_count=$(ssh_node "$hostname" 'pct list 2>/dev/null | tail -n +2 | wc -l' || echo "0")

    # Collect system resources
    local mem_info=$(ssh_node "$hostname" 'free -h | grep Mem | awk "{print \$2,\$3,\$7}"' || echo "")
    local cpu_info=$(ssh_node "$hostname" 'nproc' || echo "0")

    # Store data
    STORAGE_DATA["${hostname}_storage"]="$storage_status"
    STORAGE_DATA["${hostname}_vgs"]="$vgs_info"
    STORAGE_DATA["${hostname}_lvs"]="$lvs_info"
    STORAGE_DATA["${hostname}_disks"]="$disk_info"
    STORAGE_DATA["${hostname}_vms"]="$vm_count"
    STORAGE_DATA["${hostname}_cts"]="$ct_count"
    STORAGE_DATA["${hostname}_mem"]="$mem_info"
    STORAGE_DATA["${hostname}_cpu"]="$cpu_info"

    log_success "Collected data from $hostname"
}

# Generate storage report
generate_report() {
    log_header "Generating Storage Review Report"

    cat > "$REPORT_FILE" <<EOF
# Proxmox Storage Comprehensive Review

**Date:** $(date)
**Report Generated:** $(date -u +"%Y-%m-%d %H:%M:%S UTC")
**Review Scope:** All Proxmox nodes and storage configurations

---

## Executive Summary

This report provides a comprehensive review of all storage configurations across all Proxmox nodes, including:
- Current storage status and usage
- Storage type analysis
- Performance recommendations
- Capacity planning
- Optimization suggestions

---

## Node Overview

EOF

    # Process each node
    for hostname in "${!NODES[@]}"; do
        local ip="${NODES[$hostname]%%:*}"

        cat >> "$REPORT_FILE" <<EOF

### $hostname ($ip)

**Status:** $(if check_node "$hostname"; then echo "✅ Reachable"; else echo "❌ Not Reachable"; fi)

**System Resources:**
- CPU Cores: ${STORAGE_DATA["${hostname}_cpu"]:-Unknown}
- Memory: ${STORAGE_DATA["${hostname}_mem"]:-Unknown}
- VMs: ${STORAGE_DATA["${hostname}_vms"]:-0}
- Containers: ${STORAGE_DATA["${hostname}_cts"]:-0}

**Storage Status:**
\`\`\`
${STORAGE_DATA["${hostname}_storage"]:-No storage data available}
\`\`\`

**Volume Groups:**
\`\`\`
${STORAGE_DATA["${hostname}_vgs"]:-No volume groups found}
\`\`\`

**Thin Pools:**
\`\`\`
${STORAGE_DATA["${hostname}_lvs"]:-No thin pools found}
\`\`\`

**Physical Disks:**
\`\`\`
${STORAGE_DATA["${hostname}_disks"]:-No disk information available}
\`\`\`

---

EOF
    done

    # Add recommendations section
    cat >> "$REPORT_FILE" <<EOF

## Storage Analysis and Recommendations

### 1. Storage Type Analysis

#### Local Storage (Directory-based)
- **Purpose:** ISO images, container templates, backups
- **Performance:** Good for read-heavy workloads
- **Recommendation:** Use for templates and ISOs, not for VM disks

#### LVM Thin Storage
- **Purpose:** VM/container disk images
- **Performance:** Excellent with thin provisioning
- **Benefits:** Space efficiency, snapshots, cloning
- **Recommendation:** ✅ **Preferred for VM/container disks**

#### ZFS Storage
- **Purpose:** High-performance VM storage
- **Performance:** Excellent with compression and deduplication
- **Benefits:** Data integrity, snapshots, clones
- **Recommendation:** Consider for high-performance workloads

### 2. Critical Issues and Fixes

EOF

    # Analyze each node and add recommendations
    for hostname in "${!NODES[@]}"; do
        local storage_status="${STORAGE_DATA["${hostname}_storage"]:-}"

        if [ -z "$storage_status" ]; then
            continue
        fi

        cat >> "$REPORT_FILE" <<EOF

#### $hostname Storage Issues

EOF

        # Check for disabled storage
        if echo "$storage_status" | grep -q "disabled\|inactive"; then
            cat >> "$REPORT_FILE" <<EOF
⚠️ **Issue:** Some storage pools are disabled or inactive

**Action Required:**
\`\`\`bash
ssh root@${NODES[$hostname]%%:*}
pvesm status
# Enable disabled storage:
pvesm set <storage-name> --disable 0
\`\`\`

EOF
        fi

        # Check for high usage: match only 80-100% (with optional decimals),
        # not any two-digit percentage
        if echo "$storage_status" | grep -qE "([89][0-9]|100)(\.[0-9]+)?%"; then
            cat >> "$REPORT_FILE" <<EOF
⚠️ **Issue:** Storage usage is high (>80%)

**Recommendation:**
- Monitor storage usage closely
- Plan for expansion or cleanup
- Consider migrating VMs to other nodes

EOF
        fi

        # Check for missing LVM thin storage
        if ! echo "$storage_status" | grep -qE "lvmthin|thin"; then
            cat >> "$REPORT_FILE" <<EOF
⚠️ **Issue:** No LVM thin storage configured

**Recommendation:**
- Configure LVM thin storage for better performance
- Use thin provisioning for space efficiency
- Enable snapshots and cloning capabilities

EOF
        fi
    done

    # Add general recommendations
    cat >> "$REPORT_FILE" <<EOF

### 3. Performance Optimization Recommendations

#### Storage Performance Best Practices

1. **Use LVM Thin for VM Disks**
   - Better performance than directory storage
   - Thin provisioning saves space
   - Enables snapshots and cloning

2. **Monitor Thin Pool Metadata Usage**
   - Thin pools require metadata space
   - Monitor metadata_percent in lvs output
   - Expand metadata if >80% used

3. **Storage Distribution**
   - Distribute VMs across multiple nodes
   - Balance storage usage across nodes
   - Avoid overloading a single node

4. **Backup Storage Strategy**
   - Use separate storage for backups
   - Consider NFS or Ceph for shared backups
   - Implement backup rotation policies

### 4. Capacity Planning

#### Current Storage Distribution

EOF

    # Calculate total storage
    # NOTE: placeholder only — the pipeline below runs the while loop in a
    # subshell, so nothing is ever accumulated into these variables.
    local total_storage=0
    local total_used=0

    for hostname in "${!NODES[@]}"; do
        local storage_status="${STORAGE_DATA["${hostname}_storage"]:-}"
        if [ -n "$storage_status" ]; then
            # Extract storage sizes (simplified - would need proper parsing)
            echo "$storage_status" | while IFS= read -r line; do
                if [[ $line =~ ([0-9]+)T ]] || [[ $line =~ ([0-9]+)G ]]; then
                    # Storage found
                    :
                fi
            done
        fi
    done

    cat >> "$REPORT_FILE" <<EOF

**Recommendations:**
- Monitor storage growth trends
- Plan for 20-30% headroom
- Set alerts at 80% usage
- Consider storage expansion before reaching capacity

### 5. Storage Type Recommendations by Use Case

| Use Case | Recommended Storage Type | Reason |
|----------|-------------------------|--------|
| VM/Container Disks | LVM Thin (lvmthin) | Best performance, thin provisioning |
| ISO Images | Directory (dir) | Read-only, no performance impact |
| Container Templates | Directory (dir) | Templates are read-only |
| Backups | Directory or NFS | Separate from production storage |
| High-Performance VMs | ZFS or LVM Thin | Best I/O performance |
| Development/Test | LVM Thin | Space efficient with cloning |

### 6. Security Recommendations

1. **Storage Access Control**
   - Review storage.cfg node restrictions
   - Ensure proper node assignments
   - Verify storage permissions

2. **Backup Security**
   - Encrypt backups if containing sensitive data
   - Store backups off-site
   - Test backup restoration regularly

### 7. Monitoring Recommendations

1. **Set Up Storage Monitoring**
   - Monitor storage usage (>80% alert)
   - Monitor thin pool metadata usage
   - Track storage growth trends

2. **Performance Monitoring**
   - Monitor I/O latency
   - Track storage throughput
   - Identify bottlenecks

3. **Automated Alerts**
   - Storage usage >80%
   - Thin pool metadata >80%
   - Storage errors or failures

### 8. Migration Recommendations

#### Workload Distribution

**Current State:**
- ml110: Hosting all VMs (overloaded)
- r630-01/r630-02: Underutilized

**Recommended Distribution:**
- **ml110:** Keep management/lightweight VMs (10-15 VMs)
- **r630-01:** Migrate medium workload VMs (10-15 VMs)
- **r630-02:** Migrate heavy workload VMs (10-15 VMs)

**Benefits:**
- Better performance (ml110 CPU is slower)
- Better resource utilization
- Improved redundancy
- Better storage distribution

### 9. Immediate Action Items

#### Critical (Do First)
1. ✅ Review storage status on all nodes
2. ⚠️ Enable disabled storage pools
3. ⚠️ Verify storage node restrictions in storage.cfg
4. ⚠️ Check for storage errors or warnings

#### High Priority
1. ⚠️ Configure LVM thin storage where missing
2. ⚠️ Set up storage monitoring and alerts
3. ⚠️ Plan VM migration for better distribution
4. ⚠️ Review and optimize storage.cfg

#### Recommended
1. ⚠️ Implement backup storage strategy
2. ⚠️ Consider shared storage (NFS/Ceph) for HA
3. ⚠️ Optimize storage performance settings
4. ⚠️ Document storage procedures

---

## Detailed Storage Commands Reference

### Check Storage Status
\`\`\`bash
# On any Proxmox node
pvesm status
pvesm list <storage-name>
\`\`\`

### Enable Disabled Storage
\`\`\`bash
pvesm set <storage-name> --disable 0
\`\`\`

### Check LVM Configuration
\`\`\`bash
vgs    # List volume groups
lvs    # List logical volumes
lvs -o +data_percent,metadata_percent    # Check thin pool usage
\`\`\`

### Check Disk Usage
\`\`\`bash
df -h     # Filesystem usage
lsblk     # Block devices
\`\`\`

### Storage Performance Testing
\`\`\`bash
# Test storage I/O
fio --name=test --ioengine=libaio --iodepth=16 --rw=randwrite --bs=4k --size=1G --runtime=60
\`\`\`

---

## Conclusion

This comprehensive storage review provides:
- ✅ Current storage status across all nodes
- ✅ Detailed analysis of storage configurations
- ✅ Performance optimization recommendations
- ✅ Capacity planning guidance
- ✅ Security and monitoring recommendations
- ✅ Migration and distribution strategies

**Next Steps:**
1. Review this report
2. Address critical issues first
3. Implement high-priority recommendations
4. Plan for long-term optimizations

---

**Report Generated:** $(date)
**Report File:** $REPORT_FILE

EOF

    log_success "Report generated: $REPORT_FILE"
}

# Main execution
main() {
    log_header "Proxmox Storage Comprehensive Review"
    echo ""

    # Collect data from all nodes
    for hostname in "${!NODES[@]}"; do
        collect_storage_info "$hostname" || log_warn "Failed to collect data from $hostname"
        echo ""
    done

    # Generate report
    generate_report

    # Display summary
    log_header "Review Summary"
    echo ""
    log_info "Report saved to: $REPORT_FILE"
    echo ""
    log_info "Quick Summary:"

    for hostname in "${!NODES[@]}"; do
        if check_node "$hostname"; then
            local vms="${STORAGE_DATA["${hostname}_vms"]:-0}"
            local cts="${STORAGE_DATA["${hostname}_cts"]:-0}"
            echo "  $hostname: $vms VMs, $cts Containers"
        else
            echo "  $hostname: Not reachable"
        fi
    done

    echo ""
    log_success "Storage review complete!"
    log_info "View full report: cat $REPORT_FILE"
}

# Run main function
main "$@"
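The "Calculate total storage" block in the script above admits it "would need proper parsing". A hedged sketch of that parsing, assuming `pvesm status` output with Status in column 3 and Total/Used in KiB in columns 4 and 5 (verify the column layout against your Proxmox version before relying on it; the sample line below is fabricated):

```shell
#!/usr/bin/env bash
# Sum Total/Used across active storage pools from pvesm-status-style
# output. Columns assumed: Name Type Status Total Used Available %
# with Total/Used in KiB; divide by 1048576 to report GiB.
sum_storage() {
    awk 'NR > 1 && $3 == "active" { total += $4; used += $5 }
         END { printf "total=%.1fG used=%.1fG\n", total / 1048576, used / 1048576 }'
}

# Example with a fabricated status line (104857600 KiB = 100 GiB):
printf 'Name Type Status Total Used Available %%\nlocal dir active 104857600 52428800 52428800 50.00%%\n' | sum_storage
# → total=100.0G used=50.0G
```

Feeding each node's `${STORAGE_DATA["${hostname}_storage"]}` through such a filter would let the report state real cluster-wide totals instead of leaving the section empty.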
63
scripts/run-migrations-r630-01.sh
Executable file
@@ -0,0 +1,63 @@
#!/usr/bin/env bash
# Run database migrations for Sankofa on r630-01

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SANKOFA_PROJECT="${SANKOFA_PROJECT:-/home/intlc/projects/Sankofa}"
source "$SCRIPT_DIR/env.r630-01.example" 2>/dev/null || true

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }

# Configuration
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.11}"
VMID_API="${VMID_SANKOFA_API:-7800}"
DB_HOST="${DB_HOST:-10.160.0.13}"
DB_PORT="${DB_PORT:-5432}"
DB_NAME="${DB_NAME:-sankofa}"
DB_USER="${DB_USER:-sankofa}"
DB_PASSWORD="${DB_PASSWORD:-}"

# SSH function
ssh_r630_01() {
    ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$PROXMOX_HOST" "$@"
}

main() {
    echo ""
    log_info "========================================="
    log_info "Running Database Migrations"
    log_info "========================================="
    echo ""

    if [[ -z "$DB_PASSWORD" ]]; then
        log_error "DB_PASSWORD not set. Please update env.r630-01.example or set DB_PASSWORD environment variable."
        exit 1
    fi

    log_info "Connecting to API container (VMID: $VMID_API)..."

    # Run migrations
    ssh_r630_01 "pct exec $VMID_API -- bash -c 'cd /opt/sankofa-api && \
        DB_HOST=$DB_HOST \
        DB_PORT=$DB_PORT \
        DB_NAME=$DB_NAME \
        DB_USER=$DB_USER \
        DB_PASSWORD=\"$DB_PASSWORD\" \
        pnpm db:migrate'"

    log_success "Migrations completed"
    echo ""
}

main "$@"
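The nested quoting in the `pct exec` call above is easy to get wrong: the host expands `$DB_HOST` and friends, the inner single-quoted `bash -c` payload runs in the container, and the escaped `\"` keeps the password intact through word splitting. A small sketch (a hypothetical helper, not part of the script) that assembles the same command as a string so the quoting can be inspected before anything is sent over ssh:

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the quoting of the migration call above:
# env assignments ride inside a single-quoted bash -c payload, with the
# password double-quoted so spaces in it survive.
build_migrate_cmd() {
    local vmid="$1" db_host="$2" db_password="$3"
    printf "pct exec %s -- bash -c 'cd /opt/sankofa-api && DB_HOST=%s DB_PASSWORD=\"%s\" pnpm db:migrate'" \
        "$vmid" "$db_host" "$db_password"
}

build_migrate_cmd 7800 10.160.0.13 's3cret'
echo
# → pct exec 7800 -- bash -c 'cd /opt/sankofa-api && DB_HOST=10.160.0.13 DB_PASSWORD="s3cret" pnpm db:migrate'
```

Printing the assembled string (instead of executing it) is a cheap way to debug ssh/pct quoting before pointing the script at a live container.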
266
scripts/setup-keycloak-r630-01.sh
Executable file
@@ -0,0 +1,266 @@
#!/usr/bin/env bash
# Setup Keycloak for Sankofa on r630-01
# VMID: 7802, IP: 10.160.0.12

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/.env.r630-01" 2>/dev/null || true

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }

# Configuration
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.11}"
VMID="${VMID_SANKOFA_KEYCLOAK:-7802}"
CONTAINER_IP="${SANKOFA_KEYCLOAK_IP:-10.160.0.12}"
KEYCLOAK_ADMIN_USERNAME="${KEYCLOAK_ADMIN_USERNAME:-admin}"
KEYCLOAK_ADMIN_PASSWORD="${KEYCLOAK_ADMIN_PASSWORD:-$(openssl rand -base64 32 | tr -dc 'a-zA-Z0-9' | cut -c1-24)}"
KEYCLOAK_REALM="${KEYCLOAK_REALM:-master}"
KEYCLOAK_CLIENT_ID_API="${KEYCLOAK_CLIENT_ID_API:-sankofa-api}"
KEYCLOAK_CLIENT_ID_PORTAL="${KEYCLOAK_CLIENT_ID_PORTAL:-portal-client}"
KEYCLOAK_CLIENT_SECRET_API="${KEYCLOAK_CLIENT_SECRET_API:-$(openssl rand -base64 32 | tr -dc 'a-zA-Z0-9' | cut -c1-32)}"
KEYCLOAK_CLIENT_SECRET_PORTAL="${KEYCLOAK_CLIENT_SECRET_PORTAL:-$(openssl rand -base64 32 | tr -dc 'a-zA-Z0-9' | cut -c1-32)}"
DB_HOST="${SANKOFA_POSTGRES_IP:-10.160.0.13}"
DB_NAME="${DB_NAME:-keycloak}"
DB_USER="${DB_USER:-keycloak}"
DB_PASSWORD="${DB_PASSWORD:-$(openssl rand -base64 32 | tr -dc 'a-zA-Z0-9' | cut -c1-24)}"

# SSH function
ssh_r630_01() {
    ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$PROXMOX_HOST" "$@"
}

# Execute command in container
exec_container() {
    ssh_r630_01 "pct exec $VMID -- $*"
}

main() {
    echo ""
    log_info "========================================="
    log_info "Keycloak Setup for Sankofa"
    log_info "========================================="
    echo ""
    log_info "Container VMID: $VMID"
    log_info "Container IP: $CONTAINER_IP"
    log_info "Realm: $KEYCLOAK_REALM"
    echo ""

    # Check if container exists
    log_info "Checking container status..."
    if ! ssh_r630_01 "pct status $VMID >/dev/null 2>&1"; then
        log_error "Container $VMID does not exist"
        exit 1
    fi

    local status=$(ssh_r630_01 "pct status $VMID" 2>/dev/null | awk '{print $2}' || echo "stopped")
    if [[ "$status" != "running" ]]; then
        log_info "Starting container $VMID..."
        ssh_r630_01 "pct start $VMID"
        sleep 5
    fi
    log_success "Container is running"
    echo ""

    # Install Java and supporting tools (jq is needed by create-clients.sh below)
    log_info "Installing Java and dependencies..."
    exec_container bash -c "export DEBIAN_FRONTEND=noninteractive && \
        apt-get update -qq && \
        apt-get install -y -qq openjdk-21-jdk wget curl unzip jq"

    # Set JAVA_HOME
    exec_container bash -c "echo 'export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64' >> /etc/profile"

    log_success "Java installed"
    echo ""

    # Create Keycloak database on PostgreSQL
    log_info "Creating Keycloak database on PostgreSQL..."
    ssh_r630_01 "pct exec ${VMID_SANKOFA_POSTGRES:-7803} -- bash -c \"sudo -u postgres psql << 'EOF'
CREATE USER $DB_USER WITH PASSWORD '$DB_PASSWORD';
CREATE DATABASE $DB_NAME OWNER $DB_USER ENCODING 'UTF8';
GRANT ALL PRIVILEGES ON DATABASE $DB_NAME TO $DB_USER;
EOF\""

    log_success "Keycloak database created"
    echo ""

    # Download and install Keycloak
    log_info "Downloading Keycloak..."
    exec_container bash -c "cd /opt && \
        wget -q https://github.com/keycloak/keycloak/releases/download/24.0.0/keycloak-24.0.0.tar.gz && \
        tar -xzf keycloak-24.0.0.tar.gz && \
        mv keycloak-24.0.0 keycloak && \
        rm keycloak-24.0.0.tar.gz && \
        chmod +x keycloak/bin/kc.sh"

    log_success "Keycloak downloaded"
    echo ""

    # Build Keycloak
    log_info "Building Keycloak (this may take a few minutes)..."
    exec_container bash -c "cd /opt/keycloak && \
        export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 && \
        ./bin/kc.sh build --db postgres"

    log_success "Keycloak built"
    echo ""

    # Create systemd service
    log_info "Creating Keycloak systemd service..."
    exec_container bash -c "cat > /etc/systemd/system/keycloak.service << 'EOF'
[Unit]
Description=Keycloak Authorization Server
After=network.target

[Service]
Type=idle
User=root
WorkingDirectory=/opt/keycloak
Environment=\"JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64\"
Environment=\"KC_DB=postgres\"
Environment=\"KC_DB_URL_HOST=$DB_HOST\"
Environment=\"KC_DB_URL_DATABASE=$DB_NAME\"
Environment=\"KC_DB_USERNAME=$DB_USER\"
Environment=\"KC_DB_PASSWORD=$DB_PASSWORD\"
Environment=\"KC_HTTP_ENABLED=true\"
Environment=\"KC_HOSTNAME_STRICT=false\"
Environment=\"KC_HOSTNAME_PORT=8080\"
# Bootstrap the initial admin account on first start (Keycloak 24 convention)
Environment=\"KEYCLOAK_ADMIN=$KEYCLOAK_ADMIN_USERNAME\"
Environment=\"KEYCLOAK_ADMIN_PASSWORD=$KEYCLOAK_ADMIN_PASSWORD\"
ExecStart=/opt/keycloak/bin/kc.sh start --optimized
ExecStop=/bin/kill -TERM \$MAINPID
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF"

    # Start Keycloak
    log_info "Starting Keycloak service..."
    exec_container bash -c "systemctl daemon-reload && \
        systemctl enable keycloak && \
        systemctl start keycloak"

    log_info "Waiting for Keycloak to start (this may take 1-2 minutes)..."
    sleep 30

    # Wait for Keycloak to be ready
    local max_attempts=30
    local attempt=0
    while [ $attempt -lt $max_attempts ]; do
        if exec_container bash -c "curl -s -f http://localhost:8080/health/ready >/dev/null 2>&1"; then
            log_success "Keycloak is ready"
            break
        fi
        attempt=$((attempt + 1))
        log_info "Waiting for Keycloak... ($attempt/$max_attempts)"
        sleep 5
    done

    if [ $attempt -eq $max_attempts ]; then
        log_error "Keycloak failed to start"
        exit 1
    fi
    echo ""

    # Log in with the admin CLI (the admin account itself is bootstrapped by
    # the KEYCLOAK_ADMIN service environment above; note the admin CLI is
    # kcadm.sh, not kc.sh)
    log_info "Configuring admin CLI credentials..."
    exec_container bash -c "cd /opt/keycloak && \
        export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64 && \
        ./bin/kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin --password admin 2>/dev/null || \
        ./bin/kcadm.sh config credentials --server http://localhost:8080 --realm master --user $KEYCLOAK_ADMIN_USERNAME --password $KEYCLOAK_ADMIN_PASSWORD"

    # Get admin token and create clients
    log_info "Creating API and Portal clients..."

    # Note: This requires the Keycloak Admin REST API.
    # We write a script that can be re-run after Keycloak is fully started.
    exec_container bash -c "cat > /root/create-clients.sh << 'SCRIPT'
#!/bin/bash
export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
cd /opt/keycloak

# Get admin token
TOKEN=\$(curl -s -X POST \"http://localhost:8080/realms/master/protocol/openid-connect/token\" \\
    -H \"Content-Type: application/x-www-form-urlencoded\" \\
    -d \"username=$KEYCLOAK_ADMIN_USERNAME\" \\
    -d \"password=$KEYCLOAK_ADMIN_PASSWORD\" \\
    -d \"grant_type=password\" \\
    -d \"client_id=admin-cli\" | jq -r '.access_token')

# Create sankofa-api client
curl -s -X POST \"http://localhost:8080/admin/realms/master/clients\" \\
    -H \"Authorization: Bearer \$TOKEN\" \\
    -H \"Content-Type: application/json\" \\
    -d '{
        \"clientId\": \"$KEYCLOAK_CLIENT_ID_API\",
        \"enabled\": true,
        \"clientAuthenticatorType\": \"client-secret\",
        \"secret\": \"$KEYCLOAK_CLIENT_SECRET_API\",
        \"protocol\": \"openid-connect\",
        \"publicClient\": false,
        \"standardFlowEnabled\": true,
        \"directAccessGrantsEnabled\": true,
        \"serviceAccountsEnabled\": true
    }' > /dev/null

# Create portal-client
curl -s -X POST \"http://localhost:8080/admin/realms/master/clients\" \\
    -H \"Authorization: Bearer \$TOKEN\" \\
    -H \"Content-Type: application/json\" \\
    -d '{
        \"clientId\": \"$KEYCLOAK_CLIENT_ID_PORTAL\",
        \"enabled\": true,
        \"clientAuthenticatorType\": \"client-secret\",
        \"secret\": \"$KEYCLOAK_CLIENT_SECRET_PORTAL\",
        \"protocol\": \"openid-connect\",
        \"publicClient\": false,
        \"standardFlowEnabled\": true,
        \"directAccessGrantsEnabled\": true
    }' > /dev/null

echo \"Clients created successfully\"
SCRIPT
chmod +x /root/create-clients.sh"

    # Wait a bit more and create clients
    sleep 10
    exec_container bash -c "/root/create-clients.sh" || log_warn "Client creation may need to be done manually via web UI"

    log_success "Keycloak setup complete"
    echo ""

    # Summary
    log_success "========================================="
    log_success "Keycloak Setup Complete"
    log_success "========================================="
    echo ""
    log_info "Keycloak Configuration:"
    echo "  URL: http://$CONTAINER_IP:8080"
    echo "  Admin URL: http://$CONTAINER_IP:8080/admin"
    echo "  Admin Username: $KEYCLOAK_ADMIN_USERNAME"
    echo "  Admin Password: $KEYCLOAK_ADMIN_PASSWORD"
    echo "  Realm: $KEYCLOAK_REALM"
    echo ""
    log_info "Client Secrets:"
    echo "  API Client Secret: $KEYCLOAK_CLIENT_SECRET_API"
    echo "  Portal Client Secret: $KEYCLOAK_CLIENT_SECRET_PORTAL"
    echo ""
    log_info "Next steps:"
    echo "  1. Update .env.r630-01 with Keycloak credentials"
    echo "  2. Verify clients in Keycloak admin console"
    echo "  3. Run: ./scripts/deploy-api-r630-01.sh"
    echo ""
}

main "$@"
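The readiness wait in the Keycloak script above (probe, count attempts, sleep, give up) is a reusable pattern. A generic sketch, with the probe command passed as arguments (`sleep 0` here keeps the sketch fast; the script above sleeps 5 seconds between probes):

```shell
#!/usr/bin/env bash
# Retry a probe command until it succeeds or the attempt budget runs out.
# Returns 0 on success, 1 if every attempt failed.
wait_until() {
    local max="$1"
    shift
    local attempt=0
    while [ "$attempt" -lt "$max" ]; do
        if "$@" >/dev/null 2>&1; then
            return 0
        fi
        attempt=$((attempt + 1))
        sleep 0    # the script above uses: sleep 5
    done
    return 1
}

wait_until 3 true && echo "ready"
wait_until 3 false || echo "gave up"
# → ready
# → gave up
```

Factoring the loop this way also makes the Keycloak health probe swappable, e.g. `wait_until 30 curl -s -f http://localhost:8080/health/ready`.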
160
scripts/setup-postgresql-r630-01.sh
Executable file
@@ -0,0 +1,160 @@
#!/usr/bin/env bash
# Setup PostgreSQL for Sankofa on r630-01
# VMID: 7803, IP: 10.160.0.13

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/.env.r630-01" 2>/dev/null || true

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info()    { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn()    { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error()   { echo -e "${RED}[ERROR]${NC} $1"; }

# Configuration
PROXMOX_HOST="${PROXMOX_HOST:-192.168.11.11}"
VMID="${VMID_SANKOFA_POSTGRES:-7803}"
CONTAINER_IP="${SANKOFA_POSTGRES_IP:-10.160.0.13}"
DB_NAME="${DB_NAME:-sankofa}"
DB_USER="${DB_USER:-sankofa}"
DB_PASSWORD="${DB_PASSWORD:-$(openssl rand -base64 32 | tr -dc 'a-zA-Z0-9' | cut -c1-24)}"
POSTGRES_VERSION="${POSTGRES_VERSION:-16}"

# SSH function
ssh_r630_01() {
    ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$PROXMOX_HOST" "$@"
}

# Execute command in container. printf '%q ' re-quotes each argument so that
# multi-word commands (e.g. bash -c "...") survive the ssh + pct exec round
# trip as single arguments instead of being re-split by the remote shell.
exec_container() {
    ssh_r630_01 "pct exec $VMID -- $(printf '%q ' "$@")"
}

main() {
    echo ""
    log_info "========================================="
    log_info "PostgreSQL Setup for Sankofa"
    log_info "========================================="
    echo ""
    log_info "Container VMID: $VMID"
    log_info "Container IP:   $CONTAINER_IP"
    log_info "Database:       $DB_NAME"
    log_info "User:           $DB_USER"
    echo ""

    # Check if container exists
    log_info "Checking container status..."
    if ! ssh_r630_01 "pct status $VMID >/dev/null 2>&1"; then
        log_error "Container $VMID does not exist"
        exit 1
    fi

    # Start the container if it is not already running
    local status
    status=$(ssh_r630_01 "pct status $VMID" 2>/dev/null | awk '{print $2}' || echo "stopped")
    if [[ "$status" != "running" ]]; then
        log_info "Starting container $VMID..."
        ssh_r630_01 "pct start $VMID"
        sleep 5
    fi
    log_success "Container is running"
    echo ""

    # Install PostgreSQL
    log_info "Installing PostgreSQL $POSTGRES_VERSION..."
    exec_container bash -c "export DEBIAN_FRONTEND=noninteractive && \
        apt-get update -qq && \
        apt-get install -y -qq wget ca-certificates gnupg lsb-release"

    # Double quotes around the sources line so \$(lsb_release -cs) expands
    # inside the container; single quotes would write the literal text.
    exec_container bash -c "wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - && \
        echo \"deb http://apt.postgresql.org/pub/repos/apt \$(lsb_release -cs)-pgdg main\" > /etc/apt/sources.list.d/pgdg.list && \
        apt-get update -qq && \
        apt-get install -y -qq postgresql-$POSTGRES_VERSION postgresql-contrib-$POSTGRES_VERSION"

    log_success "PostgreSQL installed"
    echo ""

    # Configure PostgreSQL
    log_info "Configuring PostgreSQL..."
    exec_container bash -c "systemctl enable postgresql"
    exec_container bash -c "systemctl start postgresql"

    # Wait for PostgreSQL to be ready
    log_info "Waiting for PostgreSQL to start..."
    sleep 5

    # Create database and user (variables expand locally before the heredoc
    # is shipped to the container, so the quoted 'EOF' is safe)
    log_info "Creating database and user..."
    exec_container bash -c "sudo -u postgres psql << 'EOF'
-- Create user
CREATE USER $DB_USER WITH PASSWORD '$DB_PASSWORD';

-- Create database
CREATE DATABASE $DB_NAME OWNER $DB_USER ENCODING 'UTF8';

-- Grant privileges
GRANT ALL PRIVILEGES ON DATABASE $DB_NAME TO $DB_USER;

-- Connect to database and grant schema privileges
\c $DB_NAME
GRANT ALL ON SCHEMA public TO $DB_USER;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO $DB_USER;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO $DB_USER;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON FUNCTIONS TO $DB_USER;

-- Enable extensions
CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\";
CREATE EXTENSION IF NOT EXISTS \"pg_stat_statements\";
EOF"

    log_success "Database and user created"
    echo ""

    # Configure PostgreSQL for network access
    log_info "Configuring PostgreSQL for network access..."
    exec_container bash -c "echo \"host all all 10.160.0.0/22 md5\" >> /etc/postgresql/$POSTGRES_VERSION/main/pg_hba.conf"
    exec_container bash -c "sed -i \"s/#listen_addresses = 'localhost'/listen_addresses = '*'/\" /etc/postgresql/$POSTGRES_VERSION/main/postgresql.conf"

    # Restart PostgreSQL
    exec_container bash -c "systemctl restart postgresql"
    sleep 3

    log_success "PostgreSQL configured for network access"
    echo ""

    # Test connection
    log_info "Testing database connection..."
    if exec_container bash -c "PGPASSWORD='$DB_PASSWORD' psql -h localhost -U $DB_USER -d $DB_NAME -c 'SELECT version();' >/dev/null 2>&1"; then
        log_success "Database connection successful"
    else
        log_error "Database connection failed"
        exit 1
    fi
    echo ""

    # Summary
    log_success "========================================="
    log_success "PostgreSQL Setup Complete"
    log_success "========================================="
    echo ""
    log_info "Database Configuration:"
    echo "  Host:     $CONTAINER_IP"
    echo "  Port:     5432"
    echo "  Database: $DB_NAME"
    echo "  User:     $DB_USER"
    echo "  Password: $DB_PASSWORD"
    echo ""
    log_info "Next steps:"
    echo "  1. Update .env.r630-01 with the database password"
    echo "  2. Run database migrations: ./scripts/run-migrations-r630-01.sh"
    echo ""
}

main "$@"
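The `DB_PASSWORD` default in this script derives a 24-character alphanumeric secret from `openssl rand`. The pipeline in isolation:

```shell
# Same pipeline as the DB_PASSWORD default: 32 random bytes -> ~44 base64
# characters, stripped to [a-zA-Z0-9] and truncated to 24. Removing '+',
# '/' and '=' leaves comfortably more than 24 characters, so the cut is safe.
DB_PASSWORD="$(openssl rand -base64 32 | tr -dc 'a-zA-Z0-9' | cut -c1-24)"
echo "${#DB_PASSWORD}"   # 24
```

Restricting to alphanumerics avoids quoting headaches when the password later lands inside the single-quoted `PGPASSWORD='...'` and `CREATE USER ... PASSWORD '...'` strings.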
scripts/setup-storage-monitoring-cron.sh (new executable file, 68 lines)
@@ -0,0 +1,68 @@
#!/usr/bin/env bash
# Setup cron job for automated storage monitoring

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
MONITOR_SCRIPT="${PROJECT_ROOT}/scripts/storage-monitor.sh"

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info()    { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn()    { echo -e "${YELLOW}[⚠]${NC} $1"; }

# Check if script exists
if [ ! -f "$MONITOR_SCRIPT" ]; then
    log_warn "Monitoring script not found: $MONITOR_SCRIPT"
    exit 1
fi

# Make sure script is executable
chmod +x "$MONITOR_SCRIPT"

# Ensure the log directory the cron command redirects into exists
mkdir -p "${PROJECT_ROOT}/logs/storage-monitoring"

# Cron schedule (every hour)
CRON_SCHEDULE="0 * * * *"
CRON_COMMAND="$MONITOR_SCRIPT check >> ${PROJECT_ROOT}/logs/storage-monitoring/cron.log 2>&1"

# Check if cron job already exists
if crontab -l 2>/dev/null | grep -q "$MONITOR_SCRIPT"; then
    log_warn "Cron job already exists for storage monitoring"
    echo ""
    echo "Current cron jobs:"
    crontab -l | grep "$MONITOR_SCRIPT"
    echo ""

    # Check for --yes flag for non-interactive mode
    # ('local' is only valid inside a function, so a plain variable is used)
    auto_replace=false
    if [[ "${1:-}" == "--yes" ]] || [[ "${1:-}" == "-y" ]]; then
        auto_replace=true
        log_info "Auto-replace mode enabled"
    fi

    if [ "$auto_replace" = false ]; then
        read -p "Replace existing cron job? (yes/no): " replace
        if [ "$replace" != "yes" ]; then
            log_info "Keeping existing cron job"
            exit 0
        fi
    fi

    # Remove existing cron job
    crontab -l 2>/dev/null | grep -v "$MONITOR_SCRIPT" | crontab -
fi

# Add cron job
(crontab -l 2>/dev/null; echo "$CRON_SCHEDULE $CRON_COMMAND") | crontab -

log_success "Storage monitoring cron job added"
log_info "Schedule: Every hour"
log_info "Command: $MONITOR_SCRIPT check"
log_info "Logs: ${PROJECT_ROOT}/logs/storage-monitoring/"
echo ""
echo "To view cron jobs: crontab -l"
echo "To remove cron job: crontab -e (then delete the line)"
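The remove-then-add dance above (`crontab -l | grep -v … | crontab -`, then append) is the standard idempotent pattern for installing a single cron line. It can be sketched against a plain file, so the behavior is visible without touching a live crontab; the paths below are illustrative:

```shell
# Idempotently install one cron entry: filter out any previous line that
# mentions the same script, then re-add the current one. A temp file stands
# in for the live crontab here.
CRONTAB_FILE="$(mktemp)"
MONITOR_SCRIPT="/opt/project/scripts/storage-monitor.sh"
ENTRY="0 * * * * $MONITOR_SCRIPT check >> /var/log/storage-cron.log 2>&1"

install_entry() {
  tmp="$(mktemp)"
  # grep -v exits 1 when nothing survives the filter, so tolerate that
  grep -v "$MONITOR_SCRIPT" "$CRONTAB_FILE" > "$tmp" || true
  echo "$ENTRY" >> "$tmp"
  mv "$tmp" "$CRONTAB_FILE"
}

install_entry
install_entry   # running twice must not duplicate the line
grep -c "$MONITOR_SCRIPT" "$CRONTAB_FILE"   # 1
```

Filtering on the script path rather than the full entry means a schedule change in a later release still replaces the old line instead of accumulating next to it.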
scripts/storage-monitor.sh (new executable file, 309 lines)
@@ -0,0 +1,309 @@
#!/usr/bin/env bash
# Proxmox Storage Monitoring Script with Alerts
# Monitors storage usage across all Proxmox nodes and sends alerts

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
LOG_DIR="${PROJECT_ROOT}/logs/storage-monitoring"
ALERT_LOG="${LOG_DIR}/storage_alerts_$(date +%Y%m%d).log"
STATUS_LOG="${LOG_DIR}/storage_status_$(date +%Y%m%d).log"

# Alert thresholds
WARNING_THRESHOLD=80
CRITICAL_THRESHOLD=90
VG_FREE_WARNING=10   # GB
VG_FREE_CRITICAL=5   # GB

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'

log_info()    { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[✓]${NC} $1"; }
log_warn()    { echo -e "${YELLOW}[⚠]${NC} $1"; }
log_error()   { echo -e "${RED}[✗]${NC} $1"; }
log_alert()   { echo -e "${RED}[ALERT]${NC} $1"; }

# Create log directory
mkdir -p "$LOG_DIR"

# Proxmox nodes configuration (ip:password; SSH keys are preferable to
# hard-coded passwords where available)
declare -A NODES
NODES[ml110]="192.168.11.10:L@kers2010"
NODES[r630-01]="192.168.11.11:password"
NODES[r630-02]="192.168.11.12:password"
NODES[r630-03]="192.168.11.13:L@kers2010"
NODES[r630-04]="192.168.11.14:L@kers2010"

# Alert tracking (initialized empty so ${#ALERTS[@]} is safe under set -u)
declare -a ALERTS=()

# SSH helper function
ssh_node() {
    local hostname="$1"
    shift
    local ip="${NODES[$hostname]%%:*}"
    local password="${NODES[$hostname]#*:}"

    if command -v sshpass >/dev/null 2>&1; then
        sshpass -p "$password" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$ip" "$@" 2>/dev/null || echo ""
    else
        ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$ip" "$@" 2>/dev/null || echo ""
    fi
}

# Check node connectivity
check_node() {
    local hostname="$1"
    local ip="${NODES[$hostname]%%:*}"

    ping -c 1 -W 2 "$ip" >/dev/null 2>&1
}

# Parse storage usage percentage ("87.52%" -> 87)
parse_storage_percent() {
    local percent_str="$1"
    # Remove % sign and convert to integer
    echo "$percent_str" | sed 's/%//' | awk '{print int($1)}'
}

# Check storage usage against thresholds; returns 2 on critical, 1 on warning
check_storage_usage() {
    local hostname="$1"
    local storage_line="$2"

    local storage_name=$(echo "$storage_line" | awk '{print $1}')
    local storage_type=$(echo "$storage_line" | awk '{print $2}')
    local status=$(echo "$storage_line" | awk '{print $3}')
    local total=$(echo "$storage_line" | awk '{print $4}')
    local used=$(echo "$storage_line" | awk '{print $5}')
    local available=$(echo "$storage_line" | awk '{print $6}')
    local percent_str=$(echo "$storage_line" | awk '{print $7}')

    # Skip if disabled or inactive
    if [ "$status" = "disabled" ] || [ "$status" = "inactive" ] || [ "$percent_str" = "N/A" ]; then
        return 0
    fi

    local percent=$(parse_storage_percent "$percent_str")

    if [ -z "$percent" ] || [ "$percent" -eq 0 ]; then
        return 0
    fi

    # Check thresholds
    if [ "$percent" -ge "$CRITICAL_THRESHOLD" ]; then
        ALERTS+=("CRITICAL: $hostname:$storage_name is at ${percent}% capacity (${available} available)")
        log_alert "CRITICAL: $hostname:$storage_name is at ${percent}% capacity"
        return 2
    elif [ "$percent" -ge "$WARNING_THRESHOLD" ]; then
        ALERTS+=("WARNING: $hostname:$storage_name is at ${percent}% capacity (${available} available)")
        log_warn "WARNING: $hostname:$storage_name is at ${percent}% capacity"
        return 1
    fi

    return 0
}

# Check volume group free space; returns 2 on critical, 1 on warning
check_vg_free_space() {
    local hostname="$1"
    local vg_line="$2"

    local vg_name=$(echo "$vg_line" | awk '{print $1}')
    local vg_size=$(echo "$vg_line" | awk '{print $2}')
    local vg_free=$(echo "$vg_line" | awk '{print $3}')

    # Extract numeric value (remove 'g' suffix)
    local free_gb=$(echo "$vg_free" | sed 's/g//' | awk '{print int($1)}')

    if [ -z "$free_gb" ] || [ "$free_gb" -eq 0 ]; then
        return 0
    fi

    if [ "$free_gb" -le "$VG_FREE_CRITICAL" ]; then
        ALERTS+=("CRITICAL: $hostname:$vg_name volume group has only ${free_gb}GB free space")
        log_alert "CRITICAL: $hostname:$vg_name VG has only ${free_gb}GB free"
        return 2
    elif [ "$free_gb" -le "$VG_FREE_WARNING" ]; then
        ALERTS+=("WARNING: $hostname:$vg_name volume group has only ${free_gb}GB free space")
        log_warn "WARNING: $hostname:$vg_name VG has only ${free_gb}GB free"
        return 1
    fi

    return 0
}

# Monitor a single node
monitor_node() {
    local hostname="$1"

    if ! check_node "$hostname"; then
        log_warn "$hostname is not reachable"
        return 1
    fi

    log_info "Monitoring $hostname..."

    # Get storage status
    local storage_status
    storage_status=$(ssh_node "$hostname" 'pvesm status 2>/dev/null' || echo "")

    if [ -z "$storage_status" ]; then
        log_warn "Could not get storage status from $hostname"
        return 1
    fi

    # Process each storage line (skip header). Process substitution keeps the
    # loop in the current shell so ALERTS+= updates are not lost in a pipeline
    # subshell; '|| true' keeps the nonzero warning/critical return codes from
    # aborting the script under set -e.
    while IFS= read -r line; do
        if [ -n "$line" ]; then
            check_storage_usage "$hostname" "$line" || true
        fi
    done < <(echo "$storage_status" | tail -n +2)

    # Check volume groups
    local vgs_info
    vgs_info=$(ssh_node "$hostname" 'vgs --units g --noheadings -o vg_name,vg_size,vg_free 2>/dev/null' || echo "")

    if [ -n "$vgs_info" ]; then
        while IFS= read -r line; do
            if [ -n "$line" ]; then
                check_vg_free_space "$hostname" "$line" || true
            fi
        done < <(echo "$vgs_info")
    fi

    # Log storage status
    {
        echo "=== $hostname Storage Status $(date) ==="
        echo "$storage_status"
        echo ""
        echo "=== Volume Groups ==="
        echo "$vgs_info"
        echo ""
    } >> "$STATUS_LOG"

    return 0
}

# Send alerts (can be extended to email, Slack, etc.)
send_alerts() {
    if [ ${#ALERTS[@]} -eq 0 ]; then
        log_success "No storage alerts"
        return 0
    fi

    log_warn "Found ${#ALERTS[@]} storage alert(s)"

    {
        echo "=== Storage Alerts $(date) ==="
        for alert in "${ALERTS[@]}"; do
            echo "$alert"
        done
        echo ""
    } >> "$ALERT_LOG"

    # Print alerts
    for alert in "${ALERTS[@]}"; do
        echo "$alert"
    done

    # TODO: Add email/Slack/webhook notifications here
    # Example:
    # send_email "Storage Alerts" "$(printf '%s\n' "${ALERTS[@]}")"
    # send_slack_webhook "${ALERTS[@]}"
}

# Generate summary report
generate_summary() {
    local summary_file="${LOG_DIR}/storage_summary_$(date +%Y%m%d).txt"

    {
        echo "=== Proxmox Storage Summary $(date) ==="
        echo ""
        echo "Nodes Monitored:"
        for hostname in "${!NODES[@]}"; do
            if check_node "$hostname"; then
                echo "  ✅ $hostname"
            else
                echo "  ❌ $hostname (not reachable)"
            fi
        done
        echo ""
        echo "Alerts: ${#ALERTS[@]}"
        if [ ${#ALERTS[@]} -gt 0 ]; then
            echo ""
            for alert in "${ALERTS[@]}"; do
                echo "  - $alert"
            done
        fi
        echo ""
        echo "Thresholds:"
        echo "  Storage Usage Warning:      ${WARNING_THRESHOLD}%"
        echo "  Storage Usage Critical:     ${CRITICAL_THRESHOLD}%"
        echo "  Volume Group Free Warning:  ${VG_FREE_WARNING}GB"
        echo "  Volume Group Free Critical: ${VG_FREE_CRITICAL}GB"
    } > "$summary_file"

    log_info "Summary saved to: $summary_file"
}

# Main monitoring function
main() {
    local mode="${1:-check}"

    case "$mode" in
        check)
            echo "=== Proxmox Storage Monitoring ==="
            echo "Date: $(date)"
            echo ""

            # Monitor all nodes; an unreachable node must not abort the run
            for hostname in "${!NODES[@]}"; do
                monitor_node "$hostname" || true
            done

            # Send alerts
            send_alerts

            # Generate summary
            generate_summary

            echo ""
            log_info "Monitoring complete. Check logs in: $LOG_DIR"
            ;;
        status)
            # Show current status
            echo "=== Current Storage Status ==="
            for hostname in "${!NODES[@]}"; do
                if check_node "$hostname"; then
                    echo ""
                    echo "--- $hostname ---"
                    ssh_node "$hostname" 'pvesm status 2>/dev/null' || echo "Could not get status"
                fi
            done
            ;;
        alerts)
            # Show recent alerts
            if [ -f "$ALERT_LOG" ]; then
                tail -50 "$ALERT_LOG"
            else
                echo "No alerts found"
            fi
            ;;
        *)
            echo "Usage: $0 [check|status|alerts]"
            echo "  check  - Run full monitoring check (default)"
            echo "  status - Show current storage status"
            echo "  alerts - Show recent alerts"
            exit 1
            ;;
    esac
}

# Run main function
main "$@"
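Stripped of the SSH plumbing, the percent parsing and threshold checks in `storage-monitor.sh` reduce to a small pure function, exercised here on sample `pvesm status` percentages (the sample values are illustrative):

```shell
# Classify a storage-usage percentage against the monitor's thresholds.
# Mirrors parse_storage_percent + the threshold branch of check_storage_usage.
WARNING_THRESHOLD=80
CRITICAL_THRESHOLD=90

classify_usage() {
  # "87.52%" -> 87: strip the % sign, truncate to an integer
  pct=$(echo "$1" | sed 's/%//' | awk '{print int($1)}')
  if [ "$pct" -ge "$CRITICAL_THRESHOLD" ]; then
    echo "CRITICAL"
  elif [ "$pct" -ge "$WARNING_THRESHOLD" ]; then
    echo "WARNING"
  else
    echo "OK"
  fi
}

classify_usage "92.31%"   # CRITICAL
classify_usage "84.00%"   # WARNING
classify_usage "41.27%"   # OK
```

Truncating rather than rounding means 89.9% still classifies as WARNING, a deliberately conservative reading of the threshold.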