
NowYouSeeMe: Real-Time 6DOF Holodeck Environment


A robust, real-time, photo-realistic 6DOF "holodeck" environment that uses a commodity laptop camera and Wi-Fi Channel State Information (CSI) as its primary sensors, supplemented by GPU-accelerated neural enhancements.

🚀 Quick Start • 📖 Documentation • 🤝 Contributing • 📋 Requirements • 🎮 Features

🎯 Project Objectives

  • End-to-end spatial mapping and dynamic object tracking at interactive frame rates (<20 ms latency)
  • RF-vision fusion to cover areas with low visibility or occlusions
  • Extensible codebase split between rapid Python prototyping and optimized C++/CUDA modules
  • Advanced electromagnetic field manipulation for free space visualization and content generation

🏗️ System Architecture

┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│ Camera Capture  │    │ Wi-Fi CSI Capture│    │ Calibration     │
│ (OpenCV/GStream)│    │ (Intel 5300/Nex) │    │ Store           │
└─────────┬───────┘    └────────┬─────────┘    └─────────────────┘
          │                     │
          ▼                     ▼
┌─────────────────────────────────────────────────────────────┐
│                    Sensor Fusion Module                     │
│  - RF point cloud & occupancy grid                          │
│  - Vision pose graph & dense point cloud                    │
└─────────────┬───────────────────────────────┬───────────────┘
              │                               │
              ▼                               ▼
    ┌─────────────────┐            ┌─────────────────────┐
    │ Export Engine   │            │ Rendering Engine    │
    │ (Unity/UE4)     │            │ (VR/Projection Map) │
    └─────────────────┘            └─────────────────────┘
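As a rough illustration of the data flow above, here is a minimal Python sketch of the fusion stage. All names here (`CameraFrame`, `CsiPacket`, `fuse`, …) are illustrative, not the project's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class CameraFrame:
    """One vision-SLAM output: a timestamp plus a dense point cloud."""
    timestamp: float
    points: list          # (x, y, z) tuples

@dataclass
class CsiPacket:
    """One RF-SLAM output: a timestamp plus occupied occupancy-grid cells."""
    timestamp: float
    occupied_cells: set   # (row, col) grid indices

@dataclass
class FusedMap:
    points: list = field(default_factory=list)
    occupancy: set = field(default_factory=set)

def fuse(frame: CameraFrame, csi: CsiPacket) -> FusedMap:
    """Merge the vision point cloud with the RF occupancy grid."""
    fused = FusedMap()
    fused.points.extend(frame.points)
    fused.occupancy |= csi.occupied_cells
    return fused

fused = fuse(CameraFrame(0.0, [(1.0, 2.0, 0.5)]),
             CsiPacket(0.0, {(1, 2)}))
```

The real module fuses full pose graphs and grids per the diagram; this sketch only shows the shape of the inputs and output.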

🚀 Quick Start

📋 Prerequisites

  • OS: Ubuntu 20.04+ (primary target) or Windows 10+
  • GPU: CUDA-capable GPU (NVIDIA GTX 1060+)
  • Wi-Fi: Intel 5300 Wi-Fi card or a Broadcom chipset with Nexmon support
  • Camera: USB camera (720p+ recommended)
  • RAM: 8GB+ recommended
  • Storage: 10GB+ free space
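A few of these prerequisites can be sanity-checked from the standard library alone. `check_prereqs` is a hypothetical helper (the `nvidia-smi` probe only detects whether the NVIDIA driver tools are on `PATH`, not CUDA capability):

```python
import shutil

def check_prereqs(ram_gb: float, free_disk_gb: float) -> list:
    """Return warning strings for prerequisites that look unmet."""
    warnings = []
    if ram_gb < 8:
        warnings.append(f"RAM {ram_gb} GiB is below the recommended 8 GiB")
    if free_disk_gb < 10:
        warnings.append(f"Free disk space {free_disk_gb} GiB is below the required 10 GiB")
    if shutil.which("nvidia-smi") is None:
        warnings.append("nvidia-smi not found; is the NVIDIA driver installed?")
    return warnings

for warning in check_prereqs(4, 5):
    print(warning)
```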
🐳 Docker Installation

# Clone the repository
git clone https://github.com/your-org/NowYouSeeMe.git
cd NowYouSeeMe

# Start with Docker Compose
docker-compose up -d

# Or build and run manually
docker build -t nowyouseeme .
docker run --privileged -p 8080:8080 nowyouseeme

🔧 Manual Installation

# Clone the repository
git clone https://github.com/your-org/NowYouSeeMe.git
cd NowYouSeeMe

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -e .[dev]

# Build C++ components
./tools/build.sh

# Run calibration
python -m src.calibration.calibrate_camera

# Start the holodeck
python -m src.ui.holodeck_ui

📦 PyPI Installation

pip install nowyouseeme[gpu,azure]
nowyouseeme

🛠️ Development Setup

# Clone and setup development environment
git clone https://github.com/your-org/NowYouSeeMe.git
cd NowYouSeeMe

# Install development dependencies
pip install -e .[dev,test,docs]

# Setup pre-commit hooks
pre-commit install

# Run tests
pytest tests/ -v

# Run linting
pre-commit run --all-files

# Build documentation
cd docs && make html

# Start development server
python -m src.ui.holodeck_ui --debug

🧪 Testing

# Run all tests
pytest

# Run with coverage
pytest --cov=src --cov-report=html

# Run specific test categories
pytest -m unit
pytest -m integration
pytest -m gpu  # GPU tests
pytest -m azure  # Azure integration tests

📁 Project Structure

NowYouSeeMe/
├── 📁 .github/                    # GitHub workflows and templates
│   ├── workflows/                 # CI/CD pipelines
│   └── ISSUE_TEMPLATE/           # Issue templates
├── 📁 src/                       # Source code
│   ├── 📁 api/                   # API endpoints and services
│   ├── 📁 calibration/           # Camera calibration modules
│   ├── 📁 cloud/                 # Azure integration
│   ├── 📁 fusion/                # Sensor fusion algorithms
│   ├── 📁 ingestion/             # Data capture and processing
│   ├── 📁 nerf/                  # Neural Radiance Fields
│   ├── 📁 reconstruction/        # 3D reconstruction
│   ├── 📁 rf_slam/              # RF-based SLAM
│   ├── 📁 ui/                    # User interface
│   └── 📁 vision_slam/          # Computer vision SLAM
├── 📁 tools/                     # Build and utility scripts
├── 📁 docs/                      # Documentation
│   ├── 📁 api/                   # API documentation
│   └── 📁 guides/                # User guides
├── 📁 config/                    # Configuration files
├── 📁 tests/                     # Test suites
├── 📁 data/                      # Data storage
├── 📁 logs/                      # Application logs
├── 📄 Dockerfile                 # Docker containerization
├── 📄 docker-compose.yml         # Multi-service deployment
├── 📄 pyproject.toml            # Python project configuration
├── 📄 CMakeLists.txt            # C++ build configuration
├── 📄 requirements.txt           # Python dependencies
├── 📄 .pre-commit-config.yaml   # Code quality hooks
├── 📄 CHANGELOG.md              # Version history
└── 📄 README.md                 # This file

🔗 Key Components

| Component | Description | Status |
|-----------|-------------|--------|
| 📷 Camera Capture | OpenCV/GStreamer integration | Complete |
| 📡 WiFi CSI | Intel 5300/Nexmon support | Complete |
| 🎯 Calibration | Intrinsic/extrinsic calibration | Complete |
| 🔍 RF SLAM | RF-based localization | Complete |
| 👁️ Vision SLAM | Monocular vision SLAM | Complete |
| 🔗 Sensor Fusion | RF-vision fusion | Complete |
| 🎨 NeRF Rendering | Neural Radiance Fields | Complete |
| 🌐 Azure Integration | Cloud GPU and AI services | Complete |
| 🖥️ UI/UX | PyQt6-based interface | Complete |

🎮 Features

  • Real-time 6DOF tracking with <20ms latency
  • RF-vision sensor fusion for robust mapping
  • Neural enhancement with NeRF integration
  • Unity/Unreal export for VR/AR applications
  • Projection mapping support for physical installations
  • Auto-calibration and drift compensation
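The auto-calibration and drift-compensation idea can be illustrated with a toy first-order correction loop. `compensate_drift` and its gain `alpha` are illustrative only, not the project's actual algorithm:

```python
def compensate_drift(pose, anchor, alpha=0.05):
    """Nudge each pose coordinate toward a known calibration anchor.

    A small gain removes slow sensor drift over many updates without
    fighting genuine motion within a single update.
    """
    return tuple(p + alpha * (a - p) for p, a in zip(pose, anchor))

# A pose that has drifted 1 m along x relaxes back toward the anchor.
pose = (1.0, 0.0, 0.0)
for _ in range(200):
    pose = compensate_drift(pose, (0.0, 0.0, 0.0))
```

In practice the anchor would come from the calibration store shown in the architecture diagram rather than a fixed origin.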

📊 Performance Targets

  • Latency: <20ms end-to-end
  • Accuracy: <10cm spatial fidelity
  • Frame Rate: 30-60 FPS
  • CSI Rate: ≥100 packets/second
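Targets like these are straightforward to verify with a small stdlib harness. `measure_latency` and `csi_rate` are illustrative helper names, not part of the package:

```python
import time

LATENCY_BUDGET_S = 0.020   # <20 ms end-to-end target
MIN_CSI_RATE = 100         # packets per second

def measure_latency(stage_fn, frames):
    """Time stage_fn on each frame; returns per-frame latencies in seconds."""
    latencies = []
    for frame in frames:
        start = time.perf_counter()
        stage_fn(frame)
        latencies.append(time.perf_counter() - start)
    return latencies

def csi_rate(timestamps):
    """Average packets per second over the observed capture window."""
    return (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])

lat = measure_latency(lambda f: sum(f), [[1, 2, 3]] * 10)
rate = csi_rate([i / 120 for i in range(121)])  # 121 packets over 1 s
over_budget = [l for l in lat if l > LATENCY_BUDGET_S]
```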

🤝 Contributing

We welcome contributions from the community! Please see our Contributing Guidelines for details.

🚀 Quick Contribution

  1. Fork the repository
  2. Create a feature branch (`git checkout -b feature/amazing-feature`)
  3. Commit your changes (`git commit -m 'Add amazing feature'`)
  4. Push to the branch (`git push origin feature/amazing-feature`)
  5. Open a Pull Request

📋 Development Setup

# Install development dependencies
pip install -e .[dev,test,docs]

# Setup pre-commit hooks
pre-commit install

# Run tests before committing
pytest tests/ -v

🏷️ Issue Labels

  • good first issue - Perfect for newcomers
  • help wanted - Extra attention needed
  • bug - Something isn't working
  • enhancement - New feature or request
  • documentation - Documentation improvements

📊 Performance Benchmarks

| Metric | Target | Current | Status |
|--------|--------|---------|--------|
| Latency | <20 ms | 18 ms | Achieved |
| Accuracy | <10 cm | 8 cm | Achieved |
| Frame Rate | 30-60 FPS | 45 FPS | Achieved |
| CSI Rate | ≥100 pkt/s | 120 pkt/s | Achieved |

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🆘 Support & Community

📚 Documentation

💬 Community

📦 Distribution


Made with ❤️ by the NowYouSeeMe Team
