# FusionAGI

An agentic system aimed at Artificial General Intelligence, not narrow, single-task AI. FusionAGI is a modular, composable intelligence framework that supports reasoning, planning, execution, memory, and tool use through coordinated agents, with built-in self-improvement, self-correction, auto-recommend/suggest, and auto-training.
## System at a Glance

```mermaid
flowchart TB
    subgraph users [Users & Clients]
        UI[Multi-Modal UI]
        API_Client[API / Cursor Composer]
    end
    subgraph fusion [FusionAGI]
        subgraph interfaces [Interfaces]
            Admin[Admin Control Panel]
            API[FastAPI / Dvādaśa API]
        end
        subgraph core [Core]
            Orch[Orchestrator]
            EB[Event Bus]
            SM[State Manager]
        end
        subgraph agents [Agents]
            P[Planner]
            R[Reasoner]
            E[Executor]
            C[Critic]
            H[12 Heads + Witness]
        end
        subgraph support [Support]
            Mem[Memory]
            Tools[Tools]
            Gov[Governance]
            SI[Self-Improvement]
        end
    end
    UI --> Admin
    UI --> API
    API_Client --> API
    Admin --> Orch
    API --> Orch
    Orch --> EB
    Orch --> SM
    Orch --> P
    Orch --> R
    Orch --> E
    Orch --> C
    Orch --> H
    P --> Mem
    E --> Tools
    E --> Gov
    C --> SI
```
## Features
- AGI-first: General intelligence across domains via composable agents, not single-task AI.
- Native reasoning: When no LLM adapter is configured, FusionAGI runs with built-in symbolic reasoning—no external API calls. Heads use pattern analysis, domain logic, and persona-driven synthesis.
- Self-improvement: Learns from outcomes; reflection and heuristic updates improve behavior over time.
- Self-correction: Detects failures, runs critique loops, validates outputs, and retries with feedback.
- Auto recommend / suggest: Produces actionable recommendations (next actions, training targets, tool additions) from lessons and evaluations.
- Auto training: Suggests and applies heuristic updates, prompt refinements, and training targets from execution traces and reflection.
- Modularity: Reasoning, planning, memory, tooling, and governance are independent, replaceable modules.
- Agent-oriented: Agents have roles, goals, and constraints; they communicate via structured messages.
- Model-agnostic: LLMs are abstracted behind adapters (OpenAI, Anthropic, local).
- Determinism: Explicit state transitions, logged decisions, and replayable execution traces.
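The self-correction loop described above can be sketched in framework-independent Python. This is a minimal illustration, not FusionAGI's actual implementation: the `attempt` and `critique` callables here are hypothetical stand-ins for an executor and a critic agent.

```python
from typing import Callable, Optional

def run_with_correction(
    attempt: Callable[[str], str],
    critique: Callable[[str], Optional[str]],
    goal: str,
    max_retries: int = 2,
) -> str:
    """Run a task, critique the output, and retry with feedback on failure."""
    feedback = ""
    output = ""
    for _ in range(max_retries + 1):
        output = attempt(goal + feedback)
        problem = critique(output)  # None means the output passed validation
        if problem is None:
            return output
        # Feed the critique back into the next attempt as context.
        feedback = f" (previous attempt failed: {problem})"
    return output  # give up after max_retries, returning the last attempt

# Toy usage: the "executor" succeeds only once it sees the critic's feedback.
result = run_with_correction(
    attempt=lambda g: "fixed" if "failed" in g else "broken",
    critique=lambda out: None if out == "fixed" else "wrong format",
    goal="summarize",
)
print(result)  # fixed
```

The key design point is that the critic's output is routed back into the next attempt rather than discarded, which is what turns a bare retry into a correction loop.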
## Installation

```bash
pip install -e .

# With LLM adapters (optional; MAA and core are built-in):
pip install -e ".[openai]"     # OpenAIAdapter
pip install -e ".[anthropic]"
pip install -e ".[local]"
pip install -e ".[all]"        # openai + anthropic + local
pip install -e ".[dev]"        # pytest
```
- MAA (Manufacturing Authority Add-On) is built-in; no extra dependency.
- Optional extras: `openai`, `anthropic`, and `local` pull in LLM providers; `dev` is for tests (pytest).
- `OpenAIAdapter` requires `fusionagi[openai]`; use `from fusionagi.adapters import OpenAIAdapter` (or `from fusionagi.adapters.openai_adapter import OpenAIAdapter` if the optional import is not used).
- `CachedAdapter` wraps any `LLMAdapter` and caches `complete()` responses; no extra dependency. Use `from fusionagi.adapters import CachedAdapter` (or `from fusionagi.adapters.cache import CachedAdapter`).
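The `CachedAdapter` idea (memoizing `complete()` calls behind the same interface) can be sketched without FusionAGI itself. The LRU-style eviction and the `max_entries` behavior shown here are assumptions for illustration, not the library's documented semantics:

```python
from collections import OrderedDict
from typing import Callable

class CachingWrapper:
    """Memoize complete(prompt) calls, evicting the oldest entry past max_entries."""

    def __init__(self, complete: Callable[[str], str], max_entries: int = 100):
        self._complete = complete
        self._cache: "OrderedDict[str, str]" = OrderedDict()
        self._max = max_entries
        self.misses = 0  # count of calls that reached the wrapped backend

    def complete(self, prompt: str) -> str:
        if prompt in self._cache:
            self._cache.move_to_end(prompt)  # refresh recency (LRU behavior)
            return self._cache[prompt]
        self.misses += 1
        result = self._complete(prompt)
        self._cache[prompt] = result
        if len(self._cache) > self._max:
            self._cache.popitem(last=False)  # evict the least recently used entry
        return result

adapter = CachingWrapper(lambda p: p.upper(), max_entries=2)
adapter.complete("hi")
adapter.complete("hi")
print(adapter.misses)  # 1 — the second call was served from the cache
```

Because the wrapper exposes the same `complete()` method it wraps, callers cannot tell a cached backend from a live one, which is the point of the adapter pattern.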
## Project Layout

```text
fusionagi/
├── core/              # Orchestrator, event bus, state manager, persistence, run_dvadasa
├── agents/            # Planner, Reasoner, Executor, Critic, heads, Witness
├── reasoning/         # Chain-of-thought and tree-of-thought
├── planning/          # Plan graph, dependency resolution, strategies
├── memory/            # Working, episodic, reflective, semantic, procedural, trust, consolidation
├── tools/             # Tool registry, safe runner, builtins, connectors
├── governance/        # Guardrails, rate limiting, access control, SafetyPipeline, PolicyEngine
├── reflection/        # Post-task reflection and heuristic updates
├── self_improvement/  # Self-correction, auto-recommend/suggest, auto-training, FusionAGILoop
├── interfaces/        # Admin panel, multi-modal UI, voice, conversation
├── adapters/          # LLM adapters (OpenAI, stub, cache, native)
├── schemas/           # Task, message, plan, recommendation, head, witness schemas
├── api/               # FastAPI app, Dvādaśa routes, OpenAI bridge, WebSocket
├── maa/               # Manufacturing Authority Add-On (gate, MPC, gap detection)
├── multi_agent/       # Pool, delegation, supervisor, consensus
├── config/            # Head personas and voices
├── prompts/           # Head prompts
├── skills/            # Skill library, induction, versioning
├── telemetry/         # Tracer and event subscription
├── verification/      # Outcome verifier, contradiction detector, validators
├── world_model/       # World model and rollout
├── tests/
└── docs/
```
## Usage

```python
from fusionagi import Orchestrator, EventBus, StateManager
from fusionagi.agents import PlannerAgent

bus = EventBus()
state = StateManager()
orch = Orchestrator(event_bus=bus, state_manager=state)

planner_agent = PlannerAgent()
orch.register_agent("planner", planner_agent)

task_id = orch.submit_task(goal="Summarize the project README")
```
With self-improvement via `FusionAGILoop` (self-correction, auto-recommend, auto-training):

```python
from fusionagi import Orchestrator, EventBus, StateManager, FusionAGILoop
from fusionagi.memory import ReflectiveMemory
from fusionagi.agents import CriticAgent

bus = EventBus()
state = StateManager()
orch = Orchestrator(event_bus=bus, state_manager=state)

reflective = ReflectiveMemory()
critic = CriticAgent(identity="critic")
orch.register_agent("critic", critic)

agi_loop = FusionAGILoop(
    event_bus=bus,
    state_manager=state,
    orchestrator=orch,
    critic_agent=critic,
    reflective_memory=reflective,
    auto_retry_on_failure=False,
    on_recommendations=lambda recs: print("Recommendations:", len(recs)),
    on_training_suggestions=lambda sugs: print("Training suggestions:", len(sugs)),
)

# On task_state_changed(FAILED) and reflection_done, the AGI loop runs
# correction, recommendation, and training.
```
With native reasoning (the default when no adapter is configured):

```python
from fusionagi import Orchestrator, EventBus, StateManager
from fusionagi.agents import WitnessAgent
from fusionagi.agents.heads import create_all_content_heads
from fusionagi.core import run_dvadasa
from fusionagi.schemas.head import HeadId

bus = EventBus()
state = StateManager()
orch = Orchestrator(event_bus=bus, state_manager=state)

# adapter=None uses NativeReasoningProvider + NativeAdapter (no external LLM)
heads = create_all_content_heads(adapter=None)
for hid, agent in heads.items():
    orch.register_agent(hid.value, agent)
orch.register_agent(HeadId.WITNESS.value, WitnessAgent(adapter=None))

task_id = orch.submit_task(goal="Analyze security tradeoffs")
final = run_dvadasa(orch, task_id, "Analyze security tradeoffs", event_bus=bus)
```
With an LLM adapter (optional):

```python
from fusionagi.adapters import StubAdapter, CachedAdapter
# OpenAIAdapter requires: pip install "fusionagi[openai]"
# from fusionagi.adapters import OpenAIAdapter

adapter = CachedAdapter(StubAdapter("response"), max_entries=100)
```
With the admin control panel and multi-modal UI:

```python
from fusionagi.interfaces import AdminControlPanel, MultiModalUI
from fusionagi.interfaces import VoiceInterface, VoiceLibrary, ConversationManager
from fusionagi.interfaces.base import ModalityType
from fusionagi.interfaces.voice import VoiceProfile

# Admin panel for system management
admin = AdminControlPanel(
    orchestrator=orch,
    event_bus=bus,
    state_manager=state,
)

# Add a voice profile
voice = VoiceProfile(name="Assistant", language="en-US", style="friendly")
admin.add_voice_profile(voice)

# Multi-modal user interface
voice_interface = VoiceInterface(stt_provider="whisper", tts_provider="elevenlabs")
ui = MultiModalUI(
    orchestrator=orch,
    conversation_manager=ConversationManager(),
    voice_interface=voice_interface,
)

# Create a user session with text and voice
session_id = ui.create_session(
    preferred_modalities=[ModalityType.TEXT, ModalityType.VOICE]
)

# Interactive task submission with real-time feedback (call from an async context)
task_id = await ui.submit_task_interactive(session_id, goal="Analyze data")
```
## Interfaces
FusionAGI provides comprehensive interface layers:
### Admin Control Panel
- Voice library management (TTS/STT configuration)
- Conversation style tuning (personality, formality, verbosity)
- Agent configuration and monitoring
- System health and performance metrics
- Governance policies and audit logs
- Manufacturing authority (MAA) oversight
### Multi-Modal User Interface
Supports multiple sensory modalities:
- Text: Chat, commands, structured input
- Voice: Speech-to-text, text-to-speech
- Visual: Images, video, AR/VR (extensible)
- Haptic: Touch feedback (extensible)
- Gesture: Motion control (extensible)
- Biometric: Emotion detection (extensible)
See `docs/interfaces.md` and `examples/` for detailed usage.
Task lifecycle (high-level):

```mermaid
sequenceDiagram
    participant U as User/API
    participant O as Orchestrator
    participant P as Planner
    participant E as Executor
    participant C as Critic
    U->>O: submit_task(goal)
    O->>P: plan
    P->>O: plan graph
    O->>E: execute_step (×N)
    E->>O: results
    O->>C: evaluate
    C->>O: reflection_done
    O->>U: task complete / recommendations
```
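The lifecycle above can also be read as an explicit state machine, which is what makes runs replayable: every transition is enumerated and logged. The sketch below is illustrative only; the state names are not FusionAGI's actual `TaskState` values.

```python
from enum import Enum, auto

class TaskState(Enum):
    SUBMITTED = auto()
    PLANNED = auto()
    EXECUTING = auto()
    EVALUATED = auto()
    COMPLETE = auto()

# Legal transitions, mirroring the sequence diagram above.
TRANSITIONS = {
    TaskState.SUBMITTED: TaskState.PLANNED,    # Orchestrator -> Planner
    TaskState.PLANNED: TaskState.EXECUTING,    # plan graph -> Executor steps
    TaskState.EXECUTING: TaskState.EVALUATED,  # results -> Critic
    TaskState.EVALUATED: TaskState.COMPLETE,   # reflection_done -> user
}

def advance(state: TaskState) -> TaskState:
    """Move a task to its next lifecycle state; unknown states are terminal."""
    if state not in TRANSITIONS:
        raise ValueError(f"{state} is terminal")
    return TRANSITIONS[state]

trace = [TaskState.SUBMITTED]
while trace[-1] in TRANSITIONS:
    trace.append(advance(trace[-1]))
print([s.name for s in trace])
# ['SUBMITTED', 'PLANNED', 'EXECUTING', 'EVALUATED', 'COMPLETE']
```

Because every transition goes through one function, a recorded trace of states is enough to replay or audit a run step by step.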
## Dvādaśa 12-Head Mode

Multi-agent orchestration with 12 specialized heads (Logic, Research, Strategy, Security, etc.) and a Witness meta-controller for consensus and transparency:
```mermaid
flowchart LR
    subgraph heads [12 Content Heads]
        H1[Logic]
        H2[Research]
        H3[Strategy]
        H4[Security]
        H5[Safety]
        H6[+ 7 more]
    end
    subgraph witness [Witness]
        W[Witness]
    end
    heads --> W
    W --> Answer[final_answer]
```
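The Witness's consensus role can be sketched as a simple aggregation over head answers. A plain majority vote is used here for illustration; FusionAGI's actual consensus mechanism (see `multi_agent/`) may weight or veto heads differently.

```python
from collections import Counter

def witness_consensus(head_answers):
    """Pick the majority answer across heads; report agreement as a confidence score."""
    counts = Counter(head_answers.values())
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(head_answers)

# Hypothetical head outputs for one question.
answers = {
    "logic": "option A",
    "research": "option A",
    "strategy": "option B",
    "security": "option A",
}
final, confidence = witness_consensus(answers)
print(final, confidence)  # option A 0.75
```

Reporting the agreement ratio alongside the answer is what gives the meta-controller its transparency: a low ratio flags a contested answer even when one option wins.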
```bash
pip install -e ".[api]"  # For REST + WebSocket API
```
```python
from fusionagi import Orchestrator, EventBus, StateManager
from fusionagi.agents import WitnessAgent
from fusionagi.agents.heads import create_all_content_heads
from fusionagi.core import run_dvadasa
from fusionagi.adapters import StubAdapter  # or OpenAIAdapter

bus = EventBus()
state = StateManager()
orch = Orchestrator(event_bus=bus, state_manager=state)

heads = create_all_content_heads(adapter=StubAdapter())
for hid, agent in heads.items():
    orch.register_agent(hid.value, agent)
orch.register_agent("witness", WitnessAgent(adapter=StubAdapter()))

task_id = orch.submit_task(goal="Analyze architecture tradeoffs")
final = run_dvadasa(orch, task_id, "What are the tradeoffs?", event_bus=bus)
print(final.final_answer)
```
Run the API with `uvicorn fusionagi.api.app:app --reload` and open the frontend with `cd frontend && pnpm dev`.

API port: by default the frontend proxies `/v1` to `http://localhost:8000`. If your API runs on another port (e.g. 8002), set `VITE_API_URL=http://localhost:8002` in `frontend/.env` and restart the dev server. See `frontend/.env.example`.
## OpenAI Bridge (Cursor Composer)

Use FusionAGI as a custom model in Cursor Composer and other OpenAI API consumers. The API exposes `GET /v1/models` and `POST /v1/chat/completions` in OpenAI format. See `docs/openai_bridge.md` for setup and configuration.
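For orientation, a sketch of the request body an OpenAI-format client sends to the bridge. The model id `fusionagi` is an assumption here; query `GET /v1/models` (and see `docs/openai_bridge.md`) for the real ids.

```python
import json

# Chat-completions request body in the OpenAI wire format
# that POST /v1/chat/completions accepts.
payload = {
    "model": "fusionagi",  # assumed model id; GET /v1/models lists the actual ones
    "messages": [
        {"role": "system", "content": "You are FusionAGI."},
        {"role": "user", "content": "Analyze architecture tradeoffs."},
    ],
}
body = json.dumps(payload)
print(sorted(json.loads(body).keys()))  # ['messages', 'model']
```

Any client that can target a custom base URL (for example, an OpenAI SDK pointed at `http://localhost:8000/v1`) can send this payload unchanged.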
## Development

- Frontend uses pnpm as the package manager. From the project root: `pnpm dev` or `cd frontend && pnpm dev`.
- Documentation: see `docs/README.md` for a documentation map and quick reference. Key docs:
  - `docs/architecture.md`: high-level components, data flow, self-improvement (with Mermaid diagrams)
  - `docs/interfaces.md`: Admin Control Panel and Multi-Modal UI
  - `docs/openai_bridge.md`: OpenAI-compatible API for Cursor Composer
  - `docs/api_middleware_interface_spec.md`: Dvādaśa API, SafetyPipeline, interface layer spec
  - `docs/multi_agent_acceleration.md`: parallel execution, pool, delegation, supervisor
  - `docs/maa_activation.md` and `docs/maa_compliance_mapping.md`: Manufacturing Authority Add-On
  - `docs/ui_ux_implementation.md`: interface implementation summary
  - `docs/interface_architecture_diagram.md`: system and data flow diagrams (ASCII + Mermaid)
- Use `.cursor/rules` for coding standards.
- Run tests: `pytest tests/`
- Examples: `python examples/admin_panel_example.py`
## License
MIT