# Ceph Complete Setup Summary

Date: 2024-12-19
Status: Complete and Operational

## Cluster Overview
- Cluster ID: 5fb968ae-12ab-405f-b05f-0df29a168328
- Version: Ceph 19.2.3-pve2 (Squid)
- Nodes: 2 (ML110-01, R630-01)
- Network: 192.168.11.0/24
## Components

### Monitors
- ML110-01: Active monitor
- R630-01: Active monitor
- Quorum: 2/2 monitors
### Managers
- ML110-01: Active manager
- R630-01: Standby manager
### OSDs
- OSD 0 (ML110-01): UP, 0.91 TiB
- OSD 1 (R630-01): UP, 0.27 TiB
- Total Capacity: 1.2 TiB
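The per-OSD sizes above can be re-verified at any time from a cluster node; a quick check (these commands need a live Ceph cluster, so run them on ML110-01 or R630-01):

```bash
# Per-OSD size, usage, and weight, grouped by CRUSH tree
ceph osd df tree

# One-line summary: both OSDs should be "up" and "in"
ceph osd stat
```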
## Storage Pools

### RBD Pool
- Name: `rbd`
- Size: 2 (min_size: 1)
- PG Count: 128
- Application: RBD enabled
- Use Case: VM disk images
- Proxmox Storage: `ceph-rbd`

### CephFS
- Name: `cephfs`
- Metadata Pool: `cephfs_metadata` (32 PGs)
- Data Pool: `cephfs_data` (128 PGs)
- Use Case: ISOs, backups, snippets
- Proxmox Storage: `ceph-fs`
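For reference, pools with this layout can be recreated with native Ceph tooling. This is a sketch using the names and PG counts from this summary; the cluster may actually have been built through `pveceph`, so treat the exact commands as illustrative:

```bash
# RBD pool: 128 PGs, then tag and initialize it for RBD use
ceph osd pool create rbd 128 128 replicated
ceph osd pool application enable rbd rbd
rbd pool init rbd

# CephFS: separate metadata (32 PGs) and data (128 PGs) pools
ceph osd pool create cephfs_metadata 32 32 replicated
ceph osd pool create cephfs_data 128 128 replicated
ceph fs new cephfs cephfs_metadata cephfs_data
```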
## Configuration

### Pool Settings (Optimized for 2-node)
```
size = 2
min_size = 1
pg_num = 128
pgp_num = 128
```
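The `pg_num = 128` is consistent with the common rule of thumb: target roughly 100 PGs per OSD, divide by the replica count, and round up to a power of two. The arithmetic for this cluster, as a quick shell check:

```bash
# Rule-of-thumb PG count: (OSDs * ~100) / replicas, rounded up to a power of two
osds=2
replicas=2
target_per_osd=100
raw=$(( osds * target_per_osd / replicas ))   # 100 for this cluster
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "$pg"
```

This prints 128, matching the configured value.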
### Network Configuration
- Public Network: 192.168.11.0/24
- Cluster Network: 192.168.11.0/24
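In `/etc/ceph/ceph.conf` (symlinked from `/etc/pve/ceph.conf` on Proxmox) this corresponds to something like the fragment below. With one NIC per node, pointing both networks at the same subnet is normal; a dedicated cluster network only helps when replication traffic has its own link:

```ini
[global]
public_network = 192.168.11.0/24
cluster_network = 192.168.11.0/24
```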
## Access Information

### Dashboard
- URL: https://ml110-01.sankofa.nexus:8443
- Username: admin
- Password: sankofa-admin
### Prometheus Metrics
- ML110-01: http://ml110-01.sankofa.nexus:9283/metrics
- R630-01: http://r630-01.sankofa.nexus:9283/metrics
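These endpoints are served by the manager's `prometheus` module on the active mgr. If an endpoint stops responding, a short checklist (hostnames as above):

```bash
# Ensure the prometheus module is enabled on the manager
ceph mgr module enable prometheus

# Spot-check the exporter from either node
curl -s http://ml110-01.sankofa.nexus:9283/metrics | head
```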
### Firewall Ports
- 6789/tcp: Ceph Monitor (v1)
- 3300/tcp: Ceph Monitor (v2)
- 6800-7300/tcp: Ceph OSD
- 8443/tcp: Ceph Dashboard
- 9283/tcp: Prometheus Metrics
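If these ports are opened by hand rather than through the Proxmox firewall GUI, the equivalent rules would look roughly like the following iptables sketch (illustrative only; adapt to whatever firewall tooling the nodes actually use, and restrict the source to 192.168.11.0/24 where possible):

```bash
# Ceph monitors (msgr v1 and v2)
iptables -A INPUT -p tcp --dport 6789 -j ACCEPT
iptables -A INPUT -p tcp --dport 3300 -j ACCEPT
# OSD traffic range
iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT
# Dashboard and Prometheus exporter
iptables -A INPUT -p tcp --dport 8443 -j ACCEPT
iptables -A INPUT -p tcp --dport 9283 -j ACCEPT
```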
## Health Status

### Current Status
- Health: HEALTH_WARN (expected for 2-node setup)
- Warnings:
  - OSD count (2) < default size (3) - Normal for 2-node
  - Some degraded objects during initial setup - Will resolve
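To see exactly which health checks are firing (and their check codes), use `ceph health detail`. A known-benign check on a 2-node cluster can optionally be muted; the specific code below is an assumption, so confirm it against the `health detail` output first:

```bash
# List active health checks with their codes and affected entities
ceph health detail

# Example: mute a benign check for a week (code varies by warning)
# ceph health mute TOO_FEW_OSDS 1w
```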
### Monitoring
```bash
# Check cluster status
ceph -s

# Check OSD tree
ceph osd tree

# Check pool status
ceph osd pool ls detail
```
## Proxmox Integration

### Storage Pools

1. **RBD Storage** (`ceph-rbd`)
   - Type: RBD
   - Pool: rbd
   - Content: Images, Rootdir
   - Access: Datacenter → Storage → ceph-rbd

2. **CephFS Storage** (`ceph-fs`)
   - Type: CephFS
   - Filesystem: cephfs
   - Content: ISO, Backup, Snippets
   - Access: Datacenter → Storage → ceph-fs
## Maintenance Commands

### Cluster Management
```bash
# Cluster status
pveceph status
ceph -s

# OSD management
ceph osd tree
pveceph osd create /dev/sdX
pveceph osd destroy <osd-id>

# Pool management
ceph osd pool ls
pveceph pool create <name>
pveceph pool destroy <name>
```
### Storage Management
```bash
# List storage
pvesm status

# Add storage
pvesm add rbd <name> --pool <pool> --monhost <hosts>
pvesm add cephfs <name> --monhost <hosts> --fsname <fsname>
```
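Filled in with this cluster's values (storage names, pool, filesystem, and monitor hostnames are all taken from the sections above; the content lists match the integration section, but verify the flags against your Proxmox version):

```bash
# RBD storage for VM disks and container root volumes
pvesm add rbd ceph-rbd --pool rbd --content images,rootdir \
    --monhost "ml110-01.sankofa.nexus r630-01.sankofa.nexus"

# CephFS storage for ISOs, backups, and snippets
pvesm add cephfs ceph-fs --fsname cephfs --content iso,backup,snippets \
    --monhost "ml110-01.sankofa.nexus r630-01.sankofa.nexus"
```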
## Troubleshooting

### Common Issues

1. **OSD Down**
   ```bash
   systemctl status ceph-osd@<id>
   systemctl start ceph-osd@<id>
   ```

2. **Monitor Issues**
   ```bash
   systemctl status ceph-mon@<id>
   pveceph mon create
   ```

3. **Pool Warnings**
   ```bash
   # Adjust pool size
   ceph osd pool set <pool> size 2
   ceph osd pool set <pool> min_size 1
   ```