# OpenAI Bridge for Cursor Composer

The FusionAGI OpenAI bridge exposes an OpenAI-compatible API so you can use FusionAGI's Dvādaśa multi-agent system as a custom model in Cursor Composer and other OpenAI API consumers.

## Request Flow

```mermaid
sequenceDiagram
    participant Cursor as Cursor Composer
    participant API as FusionAGI API
    participant Orch as Orchestrator
    participant Heads as Dvādaśa Heads
    participant Witness as Witness

    Cursor->>API: POST /v1/chat/completions (messages)
    API->>Orch: submit_task(goal from messages)
    Orch->>Heads: run heads in parallel
    Heads->>Witness: head outputs
    Witness->>Orch: final_answer (synthesis)
    Orch->>API: FinalResponse
    API->>Cursor: OpenAI-format response (sync or SSE)
```
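
The flow above can be sketched in Python. This is an illustrative outline only, not the actual FusionAGI implementation: `run_head`, `witness_synthesize`, and the head list are stand-ins for the real orchestrator internals.

```python
import asyncio

async def run_head(name: str, goal: str) -> str:
    """Stand-in for a single Dvadasa head; the real heads call an LLM."""
    return f"{name} analysis of: {goal}"

def witness_synthesize(outputs: list) -> str:
    """Stand-in for the Witness, which merges head outputs into one answer."""
    return " | ".join(outputs)

async def handle_chat(goal: str, timeout_per_head: float = 60.0) -> str:
    heads = ["Logic", "Research", "Strategy", "Security", "Safety"]
    # Run all heads in parallel, bounding each by the per-head timeout
    # (mirrors OPENAI_BRIDGE_TIMEOUT_PER_HEAD from the Configuration section).
    tasks = [asyncio.wait_for(run_head(h, goal), timeout_per_head) for h in heads]
    outputs = await asyncio.gather(*tasks)
    # The Witness synthesizes the parallel outputs into the final answer.
    return witness_synthesize(list(outputs))

answer = asyncio.run(handle_chat("What is 2+2?"))
```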
## Overview

- **`GET /v1/models`** — Lists available models (returns `fusionagi-dvadasa`)
- **`POST /v1/chat/completions`** — Chat completions (sync and streaming)

The bridge translates OpenAI-style requests into FusionAGI tasks, runs the Dvādaśa heads and Witness, and returns responses in OpenAI format.
## Quick Start
### 1. Start the FusionAGI API

```bash
# Install with API support
pip install -e ".[api]"

# Start the server (use a real LLM adapter for production)
uvicorn fusionagi.api.app:app --reload
```
The API runs at `http://localhost:8000` by default.
### 2. Configure Cursor Composer

1. Open **Cursor Settings** → **Models** (or the equivalent for your Cursor version).
2. Click **Add Custom Model** or configure an OpenAI-compatible endpoint.
3. Set:
   - **Name**: FusionAGI Dvādaśa
   - **Base URL**: `http://localhost:8000/v1`
   - **Model ID**: `fusionagi-dvadasa`
   - **API Key**: Any string (e.g. `sk-fusionagi`) when auth is disabled
   - **Context Length**: 128000 (or your preferred value)
   - **Temperature**: 0.7
4. Select FusionAGI as the model for Composer.

### 3. Use with Composer
Composer will send requests to FusionAGI. Each prompt is processed by the Dvādaśa heads (Logic, Research, Strategy, Security, Safety, etc.) and synthesized by the Witness into a single response.
## Configuration
Environment variables:

| Variable | Default | Description |
|----------|---------|-------------|
| `OPENAI_BRIDGE_MODEL_ID` | `fusionagi-dvadasa` | Model ID returned by `/v1/models` |
| `OPENAI_BRIDGE_AUTH` | `disabled` | Set to `Bearer` to enable API key auth |
| `OPENAI_BRIDGE_API_KEY` | (none) | Required when auth is enabled |
| `OPENAI_BRIDGE_TIMEOUT_PER_HEAD` | `60` | Max seconds per head agent |
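
As a sketch, these settings could be loaded like this. The variable names and defaults come from the table above; the function itself is illustrative, not the bridge's actual config loader.

```python
import os

def load_bridge_config() -> dict:
    """Read the bridge settings from the environment, with the documented defaults."""
    auth = os.getenv("OPENAI_BRIDGE_AUTH", "disabled")
    config = {
        "model_id": os.getenv("OPENAI_BRIDGE_MODEL_ID", "fusionagi-dvadasa"),
        "auth": auth,
        "api_key": os.getenv("OPENAI_BRIDGE_API_KEY"),  # None unless set
        "timeout_per_head": float(os.getenv("OPENAI_BRIDGE_TIMEOUT_PER_HEAD", "60")),
    }
    # The API key is only required once Bearer auth is switched on.
    if auth == "Bearer" and not config["api_key"]:
        raise RuntimeError("OPENAI_BRIDGE_API_KEY is required when auth is enabled")
    return config

config = load_bridge_config()
```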
### Authentication
**Development (default):** Auth is disabled. Any request is accepted.
**Production:** Enable Bearer auth:

```bash
export OPENAI_BRIDGE_AUTH=Bearer
export OPENAI_BRIDGE_API_KEY=your-secret-key
```
Cursor must then send `Authorization: Bearer your-secret-key` with each request.
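
A minimal sketch of the header check this implies. The real bridge presumably implements this as server middleware or a dependency; only the `Bearer` header logic shown here follows from the documentation above.

```python
from typing import Optional

def check_bearer_auth(authorization_header: Optional[str],
                      expected_key: Optional[str]) -> bool:
    """Return True if the Authorization header carries the expected Bearer key.

    When auth is disabled (no expected key configured), every request passes.
    """
    if expected_key is None:  # auth disabled: accept any request
        return True
    if not authorization_header or not authorization_header.startswith("Bearer "):
        return False
    token = authorization_header[len("Bearer "):]
    return token == expected_key

ok = check_bearer_auth("Bearer your-secret-key", "your-secret-key")
```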
## API Details
### Chat Completions
**Request (sync):**

```json
{
  "model": "fusionagi-dvadasa",
  "messages": [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "What is the best architecture for X?"}
  ],
  "stream": false
}
```
**Request (streaming):**

```json
{
  "model": "fusionagi-dvadasa",
  "messages": [{"role": "user", "content": "Explain Y"}],
  "stream": true
}
```
**Response (sync):** Standard OpenAI chat completion object with `choices[0].message.content` and `usage`.
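
A sketch of assembling such a response around a Witness answer. The object shape is the standard OpenAI one; the helper name and the chars/4 usage estimate (see Limitations) are illustrative, not the bridge's actual code.

```python
import time
import uuid

def make_completion_response(content: str, prompt_chars: int,
                             model: str = "fusionagi-dvadasa") -> dict:
    """Wrap a final answer in an OpenAI-style chat completion object."""
    # Token counts via the chars/4 heuristic described under Limitations.
    prompt_tokens = max(1, prompt_chars // 4)
    completion_tokens = max(1, len(content) // 4)
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": content},
            "finish_reason": "stop",
        }],
        "usage": {
            "prompt_tokens": prompt_tokens,
            "completion_tokens": completion_tokens,
            "total_tokens": prompt_tokens + completion_tokens,
        },
    }

resp = make_completion_response("2 + 2 = 4", prompt_chars=12)
```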
**Response (streaming):** Server-Sent Events (SSE) with `data:` lines containing OpenAI-format chunks, ending with `data: [DONE]`.
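
The streaming framing can be sketched as follows; the chunk splitting and `id` are illustrative, but the `data:` lines and `data: [DONE]` terminator match the SSE format described above.

```python
import json

def sse_chunks(content: str, model: str = "fusionagi-dvadasa", chunk_size: int = 16):
    """Yield OpenAI-format streaming chunks as SSE 'data:' lines, then [DONE]."""
    for start in range(0, len(content), chunk_size):
        piece = content[start:start + chunk_size]
        chunk = {
            "id": "chatcmpl-stream",
            "object": "chat.completion.chunk",
            "model": model,
            "choices": [{"index": 0, "delta": {"content": piece},
                         "finish_reason": None}],
        }
        yield f"data: {json.dumps(chunk)}\n\n"
    yield "data: [DONE]\n\n"

lines = list(sse_chunks("The answer is 4."))
```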
### Message Translation
The bridge converts OpenAI messages to a single prompt for Dvādaśa:

- **System messages** — Prepended as context
- **User/assistant turns** — Formatted as `[User]:` / `[Assistant]:` conversation
- **Tool results** — Included as `[Tool name] returned: ...`
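
The rules above can be sketched as a single function. The `[User]:`/`[Assistant]:`/`[Tool name] returned:` formats come from this section; the function name and the exact joining of system context are illustrative.

```python
def messages_to_prompt(messages: list) -> str:
    """Flatten OpenAI chat messages into a single prompt for Dvadasa."""
    system_parts, turns = [], []
    for msg in messages:
        role, content = msg.get("role"), msg.get("content", "")
        if role == "system":
            system_parts.append(content)       # prepended as context
        elif role == "user":
            turns.append(f"[User]: {content}")
        elif role == "assistant":
            turns.append(f"[Assistant]: {content}")
        elif role == "tool":
            name = msg.get("name", "tool")
            turns.append(f"[Tool {name}] returned: {content}")
    prefix = "\n".join(system_parts)
    body = "\n".join(turns)
    return f"{prefix}\n\n{body}" if prefix else body

prompt = messages_to_prompt([
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "What is 2+2?"},
])
```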
### Models
**Request:**

```
GET /v1/models
```
**Response:**

```json
{
  "object": "list",
  "data": [
    {
      "id": "fusionagi-dvadasa",
      "object": "model",
      "created": 1704067200,
      "owned_by": "fusionagi"
    }
  ]
}
```
## Limitations

- **Tool calling:** Not supported in v1. The bridge returns text-only responses, so Cursor workflows that expect `tool_calls` in the model response will not work. MCP tools provided by Cursor may still be invoked based on the model's output text.
- **Token counts:** Usage is estimated from character counts (a chars/4 heuristic); exact token counts would require a real tokenizer.
- **Vision/multimodal:** Image content in messages is not supported.
- **Latency:** Dvādaśa runs multiple heads in parallel plus a Witness synthesis pass, so responses can be slower than a single LLM call.
## Example: curl

```bash
# List models
curl http://localhost:8000/v1/models

# Chat completion (sync)
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "fusionagi-dvadasa",
    "messages": [{"role": "user", "content": "What is 2+2?"}]
  }'
```