Ceph Complete Setup Summary

Date: 2024-12-19
Status: Complete and Operational

Cluster Overview

  • Cluster ID: 5fb968ae-12ab-405f-b05f-0df29a168328
  • Version: Ceph 19.2.3-pve2 (Squid)
  • Nodes: 2 (ML110-01, R630-01)
  • Network: 192.168.11.0/24

Components

Monitors

  • ML110-01: Active monitor
  • R630-01: Active monitor
  • Quorum: 2/2 monitors
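
Monitor membership and quorum can be verified from any node; a quick check (exact output varies by Ceph version):

```shell
# Summary of monitor membership and current quorum
ceph mon stat

# Detailed quorum view, including which monitors are in or out
ceph quorum_status --format json-pretty
```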

Managers

  • ML110-01: Active manager
  • R630-01: Standby manager

OSDs

  • OSD 0 (ML110-01): UP, 0.91 TiB
  • OSD 1 (R630-01): UP, 0.27 TiB
  • Total Capacity: 1.2 TiB
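
The per-OSD sizes and overall capacity above can be confirmed with:

```shell
# Per-OSD size, utilization, and PG counts, grouped by CRUSH hierarchy
ceph osd df tree
```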

Storage Pools

RBD Pool

  • Name: rbd
  • Size: 2 (min_size: 1)
  • PG Count: 128
  • Application: RBD enabled
  • Use Case: VM disk images
  • Proxmox Storage: ceph-rbd
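
For reference, a pool with these parameters could be created roughly as follows (a sketch using the Proxmox tooling; verify flags against `pveceph pool create --help` for your version):

```shell
# Create the pool with the sizes documented above
pveceph pool create rbd --size 2 --min_size 1 --pg_num 128

# Tag the pool for RBD use (pveceph normally does this automatically)
ceph osd pool application enable rbd rbd
```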

CephFS

  • Name: cephfs
  • Metadata Pool: cephfs_metadata (32 PGs)
  • Data Pool: cephfs_data (128 PGs)
  • Use Case: ISOs, backups, snippets
  • Proxmox Storage: ceph-fs
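
The filesystem layout above could be recreated with plain Ceph commands (a sketch; `pveceph fs create` wraps the same steps and also deploys an MDS):

```shell
# Metadata and data pools with the PG counts documented above
ceph osd pool create cephfs_metadata 32
ceph osd pool create cephfs_data 128

# Tie the two pools together as a filesystem
ceph fs new cephfs cephfs_metadata cephfs_data
```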

Configuration

Pool Settings (Optimized for 2-node)

size = 2
min_size = 1
pg_num = 128
pgp_num = 128

Network Configuration

  • Public Network: 192.168.11.0/24
  • Cluster Network: 192.168.11.0/24
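
In `/etc/pve/ceph.conf` this corresponds to the following fragment (other keys omitted):

```ini
[global]
public_network  = 192.168.11.0/24
cluster_network = 192.168.11.0/24
```

With only two nodes and a single subnet, sharing one network for public and cluster traffic is a reasonable simplification; a dedicated cluster network mainly matters under heavy replication load.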

Access Information

Dashboard

  • Ceph Dashboard: served by the active manager over HTTPS on port 8443 (see firewall ports below)

Prometheus Metrics

  • Exposed by the active manager on port 9283

Firewall Ports

  • 6789/tcp: Ceph Monitor (v1)
  • 3300/tcp: Ceph Monitor (v2)
  • 6800-7300/tcp: Ceph OSD
  • 8443/tcp: Ceph Dashboard
  • 9283/tcp: Prometheus Metrics
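
If the nodes run a host firewall, the ports above need explicit allow rules; a minimal iptables sketch, restricting access to the storage network (adjust source ranges to your environment):

```shell
# Allow Ceph monitor, dashboard, and metrics ports from the storage subnet
for p in 6789 3300 8443 9283; do
  iptables -A INPUT -p tcp -s 192.168.11.0/24 --dport "$p" -j ACCEPT
done

# OSD daemons bind within this port range
iptables -A INPUT -p tcp -s 192.168.11.0/24 --dport 6800:7300 -j ACCEPT
```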

Health Status

Current Status

  • Health: HEALTH_WARN (expected for 2-node setup)
  • Warnings:
    • OSD count (2) < default size (3) - Normal for 2-node
    • Some degraded objects during initial setup - resolves once recovery/backfill completes
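
To keep newly created pools from defaulting to three replicas on this 2-node cluster, the global defaults can be lowered (a sketch; existing pools are adjusted separately, as shown under Troubleshooting):

```shell
# Default replication for pools created from now on
ceph config set global osd_pool_default_size 2
ceph config set global osd_pool_default_min_size 1

# Inspect the remaining warnings in detail
ceph health detail
```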

Monitoring

# Check cluster status
ceph -s

# Check OSD tree
ceph osd tree

# Check pool status
ceph osd pool ls detail

Proxmox Integration

Storage Pools

  1. RBD Storage (ceph-rbd)

    • Type: RBD
    • Pool: rbd
    • Content: Images, Rootdir
    • Access: Datacenter → Storage → ceph-rbd
  2. CephFS Storage (ceph-fs)

    • Type: CephFS
    • Filesystem: cephfs
    • Content: ISO, Backup, Snippets
    • Access: Datacenter → Storage → ceph-fs
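
These storages appear in `/etc/pve/storage.cfg` roughly as follows (a fragment; the `path` and `krbd` values shown are assumptions, not taken from the cluster):

```
rbd: ceph-rbd
        pool rbd
        content images,rootdir
        krbd 0

cephfs: ceph-fs
        path /mnt/pve/ceph-fs
        content iso,backup,snippets
        fsname cephfs
```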

Maintenance Commands

Cluster Management

# Cluster status
pveceph status
ceph -s

# OSD management
ceph osd tree
pveceph osd create /dev/sdX
pveceph osd destroy <osd-id>

# Pool management
ceph osd pool ls
pveceph pool create <name>
pveceph pool destroy <name>

Storage Management

# List storage
pvesm status

# Add storage
pvesm add rbd <name> --pool <pool> --monhost <hosts>
pvesm add cephfs <name> --monhost <hosts> --fsname <fsname>

Troubleshooting

Common Issues

  1. OSD Down

    systemctl status ceph-osd@<id>
    systemctl start ceph-osd@<id>
    
  2. Monitor Issues

    systemctl status ceph-mon@<id>
    pveceph mon create
    
  3. Pool Warnings

    # Adjust pool size
    ceph osd pool set <pool> size 2
    ceph osd pool set <pool> min_size 1