# Proxmox VE template – hardware requirements for SMOA backend and supporting infra
This document lists hardware requirements for building a Proxmox VE template used to run the SMOA backend and supporting infrastructure (database, optional reverse proxy, optional TURN/signaling).
## Required target (mandatory minimum)
The minimum viable target for a single Proxmox VE template running the SMOA backend is:
| Aspect | Required minimum |
|---|---|
| Backend VM | 2 vCPU, 1 GiB RAM, 8 GiB disk, 1 Gbps network |
| OS | Linux (e.g. Debian 12 or Ubuntu 22.04 LTS) |
| Java | OpenJDK 17 (Eclipse Temurin or equivalent) |
| Backend | smoa-backend JAR on port 8080; H2 file DB or PostgreSQL |
| Data | Persistent storage for ./data/smoa (H2) or PostgreSQL data directory |
| Proxmox host | 4 physical cores, 8 GiB RAM, 128 GiB SSD, 1 Gbps NIC |
Below this, the backend may run but is not supported for production (no headroom for spikes, logs, or audit growth). All other dimensions (RAM, disk, vCPU, separate DB/proxy/TURN) are scaling aspects described in Section 7.
## 1. Backend service (smoa-backend)
| Resource | Minimum (dev/small) | Recommended (production) | Notes |
|---|---|---|---|
| vCPU | 2 | 4 | Spring Boot + JPA; sync and pull endpoints can spike briefly. |
| RAM | 1 GiB | 2–4 GiB | JVM heap ~512 MiB–1 GiB; leave headroom for OS and buffers. |
| Disk | 8 GiB | 20–40 GiB | OS + JAR + H2 data (or PostgreSQL data dir if DB on same VM). Logs and audit table growth. |
| Network | 1 Gbps (shared) | 1 Gbps | API traffic; rate limit 120 req/min per client by default. |
- Stack: OpenJDK 17 (Eclipse Temurin), Spring Boot 3, Kotlin; H2 (file) or PostgreSQL.
- Ports: 8080 (HTTP); optionally 8443 if TLS is terminated on the VM.
- Storage: Persistent volume for `./data/smoa` (H2) or the PostgreSQL data directory; consider a separate disk for logs/audit.
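The RAM guidance above can be sketched as a simple rule of thumb; the 50% split and the RAM figure are illustrative, not project defaults:

```shell
# Sketch: size the JVM heap to roughly half of VM RAM, leaving the rest
# as headroom for the OS and buffers. Values are examples only.
ram_mib=2048                  # VM RAM from the table above
heap_mib=$((ram_mib / 2))     # ~50% for the JVM heap
echo "java -Xmx${heap_mib}m -jar smoa-backend.jar --server.port=8080"
```

On a 2 GiB VM this yields `-Xmx1024m`, within the ~512 MiB–1 GiB heap range noted in the table.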
## 2. Supporting infrastructure (same or separate VMs)
### 2.1 Database (if not H2 on backend VM)
When moving off H2 to PostgreSQL (recommended for production):
| Resource | Minimum | Recommended |
|---|---|---|
| vCPU | 2 | 2–4 |
| RAM | 1 GiB | 2–4 GiB |
| Disk | 20 GiB | 50–100 GiB (SSD preferred) |
| Network | 1 Gbps | 1 Gbps |
- Can run on the same Proxmox VM as the backend (small deployments) or a dedicated VM (better isolation and scaling).
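When switching the backend to PostgreSQL, Spring Boot's relaxed binding lets the datasource be set via environment variables instead of editing `application.yml`; a sketch (host, database name, and credentials are placeholders):

```shell
# Hypothetical environment for pointing the backend at PostgreSQL.
# Spring Boot maps these variables to the spring.datasource.* properties.
export SPRING_DATASOURCE_URL="jdbc:postgresql://db-host:5432/smoa"
export SPRING_DATASOURCE_USERNAME="smoa"
export SPRING_DATASOURCE_PASSWORD="change-me"
```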
### 2.2 Reverse proxy (optional)
If you run Nginx, Traefik, or Caddy in front of the backend (TLS, load balancing, rate limiting):
| Resource | Minimum | Notes |
|---|---|---|
| vCPU | 1 | Light. |
| RAM | 512 MiB | |
| Disk | 4 GiB | Config + certs + logs. |
- Can share a VM with the backend (e.g. Nginx in same template, backend as systemd service) or run as a separate small VM.
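As an illustrative sketch (server name, certificate paths, and the single local upstream are placeholders, not project config), a minimal Nginx server block in front of the backend might look like:

```nginx
# Hypothetical Nginx server block: terminate TLS and proxy to the backend on 8080.
server {
    listen 443 ssl;
    server_name smoa.example.com;
    ssl_certificate     /etc/ssl/certs/smoa.pem;
    ssl_certificate_key /etc/ssl/private/smoa.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```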
### 2.3 TURN / signaling (optional)
If you host TURN and/or signaling for WebRTC (meetings) instead of using external services:
| Resource | Minimum | Recommended |
|---|---|---|
| vCPU | 2 | 4 |
| RAM | 1 GiB | 2 GiB |
| Disk | 10 GiB | 20 GiB |
| Network | 1 Gbps | 1 Gbps+, low latency |
- Media traffic can be CPU- and bandwidth-heavy; size for peak concurrent sessions.
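If self-hosting TURN with coturn, a minimal `turnserver.conf` sketch could look like this (realm, credentials, and the relay port range are placeholders, not project values):

```conf
# Hypothetical coturn configuration fragment.
listening-port=3478
tls-listening-port=5349
realm=smoa.example.com
lt-cred-mech
user=smoa:change-me
min-port=49152
max-port=65535
```

The relay port range (`min-port`/`max-port`) must also be opened in the firewall, and the range size bounds concurrent relayed sessions.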
## 3. Combined “all-in-one” template (single VM)
A single Proxmox VE template that runs backend + PostgreSQL + optional Nginx on one VM:
| Resource | Minimum | Recommended (production) |
|---|---|---|
| vCPU | 4 | 6–8 |
| RAM | 4 GiB | 8 GiB |
| Disk | 40 GiB | 80–120 GiB (SSD) |
| Network | 1 Gbps | 1 Gbps |
- Layout:
  - OS (e.g. Debian 12 / Ubuntu 22.04 LTS), Docker or systemd.
  - Backend JAR (or container), listening on 8080.
  - PostgreSQL (if used) and optional Nginx on the same host.
  - Persistent volumes for DB data, backend H2 (if kept), and logs.
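The layout above could be sketched as a Docker Compose file; image names, credentials, and volume names are illustrative assumptions, not project artifacts:

```yaml
# Hypothetical all-in-one layout: backend + PostgreSQL on one VM.
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: smoa
      POSTGRES_USER: smoa
      POSTGRES_PASSWORD: change-me
    volumes:
      - db-data:/var/lib/postgresql/data   # persistent DB data
  backend:
    image: smoa-backend:latest             # assumed local image name
    ports:
      - "8080:8080"
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/smoa
      SPRING_DATASOURCE_USERNAME: smoa
      SPRING_DATASOURCE_PASSWORD: change-me
    depends_on:
      - db
volumes:
  db-data:
```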
## 4. Proxmox VE host (physical) recommendations
To run one or more VMs built from the template:
| Resource | Small (dev / few users) | Production (dozens of devices) |
|---|---|---|
| CPU | 4 cores | 8–16 cores |
| RAM | 8 GiB | 32–64 GiB |
| Storage | 128 GiB SSD | 256–512 GiB SSD (or NVMe) |
| Network | 1 Gbps | 1 Gbps (low latency to mobile clients) |
- Prefer SSD/NVMe for database and backend data directories.
- Backups: Use Proxmox backup or external backup for VM disks / PostgreSQL dumps and backend audit data.
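For VM-level backups, a snapshot-mode `vzdump` run might look like the following sketch (the VM ID and storage name are examples; the command is only echoed here as a dry run):

```shell
# Sketch: snapshot-mode backup of a single VM with Proxmox's vzdump.
cmd="vzdump 100 --mode snapshot --storage local --compress zstd"
echo "$cmd"   # dry run; on a Proxmox host, run the command itself
```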
## 5. Template contents checklist
- OS: Debian 12 or Ubuntu 22.04 LTS (minimal/server).
- Java: OpenJDK 17 (Eclipse Temurin) or Adoptium.
- Backend: Install path for `smoa-backend-*.jar`; systemd unit; env file for `SERVER_PORT`, `SPRING_PROFILES_ACTIVE`, `SMOA_API_KEY`, `spring.datasource.url` (if PostgreSQL).
- Optional: PostgreSQL 15+ (if not using H2); Nginx/Caddy for reverse proxy and TLS.
- Firewall: Allow 8080 (backend) and 80/443 if reverse proxy; restrict admin/SSH.
- Persistent: Separate disk or volume for data (H2 `./data/smoa` or PostgreSQL data dir) and logs; exclude from the “golden” template so each clone gets its own data.
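The checklist's systemd unit could be sketched as follows (install path, user, and env-file location are assumptions, not project conventions):

```ini
# Hypothetical systemd unit for the backend JAR.
[Unit]
Description=SMOA backend
After=network-online.target

[Service]
User=smoa
EnvironmentFile=/etc/smoa/backend.env
ExecStart=/usr/bin/java $JAVA_OPTS -jar /opt/smoa/smoa-backend.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The `EnvironmentFile` would carry the variables listed in the checklist (`SERVER_PORT`, `SPRING_PROFILES_ACTIVE`, `SMOA_API_KEY`, datasource URL), keeping secrets out of the golden template.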
## 6. Summary table (single backend VM, no separate DB/proxy)
| Component | vCPU | RAM | Disk | Network |
|---|---|---|---|---|
| SMOA backend (all-in-one) | 4 | 4 GiB | 40 GiB | 1 Gbps |
| Production (backend + PostgreSQL on same VM) | 6 | 8 GiB | 80 GiB SSD | 1 Gbps |
## 7. All aspects that scale
Every dimension below scales with load, retention, or features. The required target (“Required target” section above) is the floor; use this section to size for growth.
| Aspect | What it scales with | How to scale | Config / notes |
|---|---|---|---|
| vCPU (backend) | Concurrent requests, JPA/DB work, sync bursts | Add vCPUs (4 → 6 → 8). Consider second backend instance + load balancer for high concurrency. | Spring Boot thread pool; no app config for vCPU. |
| RAM (backend) | JVM heap, connection pools, cached entities, OS buffers | Increase VM RAM; set `-Xmx` (e.g. 1–2 GiB) leaving headroom for OS. | `JAVA_OPTS` or systemd `Environment`. |
| Disk (backend) | H2/PostgreSQL data, log files, audit table (`sync_audit_log`) | Add disk or separate volume; rotate logs; archive/trim audit by date. | `spring.datasource.url`; logging config; optional audit retention job. |
| Network (backend) | Request volume, payload size (sync/pull), rate limit | Bigger NIC or multiple backends behind proxy. | smoa.rate-limit.requests-per-minute (default 120 per key/IP). |
| Rate limit | Number of clients and req/min per client | Increase `smoa.rate-limit.requests-per-minute` or disable for trusted LAN. | `application.yml` / env `SMOA_RATE_LIMIT_RPM`. |
| Concurrent devices (API) | Sync + pull traffic from many devices | More backend vCPU/RAM; optional horizontal scaling (multiple backends + Nginx/Traefik). | No hard cap in app; rate limit is per key/IP. |
| Database size | Directory, orders, evidence, credentials, reports, audit rows | More disk; move to dedicated PostgreSQL VM; indexes and vacuum. | spring.datasource.*; JPA/ddl-auto or Flyway. |
| Audit retention | Compliance; `sync_audit_log` row count | More disk; periodic delete/archive by date; separate audit store. | Application-level job or DB cron. |
| vCPU (PostgreSQL) | Query concurrency, connections, joins | Add vCPUs or move DB to dedicated VM with more cores. | max_connections, connection pool in backend. |
| RAM (PostgreSQL) | Cache, working set | Increase VM RAM; tune `shared_buffers` / `work_mem`. | PostgreSQL config. |
| Disk (PostgreSQL) | Tables, indexes, WAL | Add disk or volume; use SSD. | Data directory; backup size. |
| Reverse proxy | TLS, load balancing, rate limiting | Add vCPU/RAM if many backends or heavy TLS; scale Nginx/Caddy workers. | Nginx worker_processes; upstreams. |
| TURN / signaling | Concurrent WebRTC sessions, media bitrate | Scale vCPU (media encode/decode), RAM, and network bandwidth; add TURN instances for geography. | TURN/signaling server config; app InfrastructureManager endpoints. |
| Proxmox host CPU | Sum of all VMs’ vCPU; burst load | Add physical cores; avoid overcommit (e.g. total vCPU < 2× physical for production). | VM vCPU count. |
| Proxmox host RAM | Sum of all VMs’ RAM | Add DIMMs; avoid overcommit. | VM RAM allocation. |
| Proxmox host disk | All VMs + backups | Add disks or NAS; use SSD for DB and backend data. | VM disk size; backup retention. |
| Proxmox host network | All VMs’ traffic; backup/restore | 1 Gbps minimum; 10 Gbps for many devices or TURN. | NIC; VLANs if needed. |
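The host-CPU row's overcommit rule can be sketched as a quick check (core and vCPU counts are examples):

```shell
# Sketch: keep total vCPU across all VMs under 2x the host's physical cores.
physical_cores=8
total_vcpu=12   # e.g. backend 4 + PostgreSQL 4 + TURN 4
if [ "$total_vcpu" -lt $((2 * physical_cores)) ]; then
  verdict="ok"
else
  verdict="overcommitted"
fi
echo "$verdict"
```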
### Scaling summary
- Backend only: Scale vCPU and RAM for more concurrent devices and request spikes; disk for logs and audit.
- Backend + PostgreSQL: Scale DB disk and DB RAM with data size and query load; backend vCPU/RAM with API load.
- With TURN/signaling: Scale TURN vCPU, RAM, and network with concurrent WebRTC sessions and media bitrate.
- Multi-node: Add more backend or TURN VMs and scale reverse proxy and Proxmox host to support them.
These hardware requirements support the SMOA backend (sync, pull, delete, rate limiting, audit logging) and optional supporting infrastructure for a Proxmox VE template.