This project implements a modular multi-agent workflow for marketing analytics, content strategy, and project management using LangGraph, LangChain, and OpenAI.
Tech Stack Used:
- Python: Chosen for its versatility, strong ecosystem for AI/ML, and seamless integration with modern frameworks such as LangChain, LangGraph, and FastAPI.
- LangChain: Provides modular abstractions for building LLM-powered applications, enabling agent workflows, prompt management, and integration with various data sources.
- LangGraph: Used for orchestrating multi-agent workflows and managing agent collaboration. Its graph-based approach allows flexible, extensible routing and execution.
- FastAPI: Selected as the API framework because it is an industry standard for building high-performance, asynchronous APIs with automatic documentation and type safety.
- Chainlit: Adopted for the user interface as recommended in the task description. Chainlit offers a simple, interactive chat UI that is easy to set up and integrates well with LLM-based backends.
Most of these technologies were selected based on prior experience and proven reliability in similar projects. Chainlit was specifically chosen due to its recommendation in the task and its ease of use for rapid prototyping.
Key Features:
- Dynamic Agent Routing: For each user message, the orchestrator uses an LLM to select the most relevant specialized agents (AnalyticsWiz, ContentGenius, ProjectManager, or System Assistant) to respond. Routing is performed per message, not per conversation (see the sketch after this list).
- Agent Collaboration via Shared Memory: Agents coordinate and share context using a persistent shared memory (blackboard) for each conversation. This enables agents to build on each other's outputs and collaboratively solve complex queries.
- System Assistant for General Queries: The orchestrator detects greetings, small talk, and non-task-specific queries and routes them to a context-aware system assistant, which responds politely and guides the user.
- Traceable, Multi-Turn Conversations: All interactions (user, orchestrator, agents) are logged with timestamps and agent names. The conversation history is accessible via API and UI, supporting traceability and auditability.
- Reproducibility: The system uses fixed random seeds and deterministic LLM settings where possible. Conversations can be replayed from logs to demonstrate reproducibility.
- Prompt Versioning: Agent prompts are stored as versioned JSON files. Each agent loads its prompt dynamically based on the requested version, with automatic fallback to the latest version.
- Extensible & Modular Architecture: The codebase is organized for easy addition or replacement of agents and data sources. Each agent is a class with a clear interface, and the orchestrator manages agent registration and routing.
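To make the routing idea concrete, here is a minimal LangGraph sketch of a router node that selects agent nodes per message. The state fields, node names, and the keyword-based `select_agents` stand-in are illustrative assumptions only; the actual orchestrator in src/utils/orchestrator.py uses an LLM for this decision and may be structured differently.

```python
# Minimal per-message routing sketch with LangGraph.
# State fields, node names, and the keyword-based select_agents() stand-in are
# assumptions for illustration; the real orchestrator routes with an LLM.
from typing import List, TypedDict

from langgraph.graph import END, StateGraph


class WorkflowState(TypedDict):
    query: str
    pending_agents: List[str]   # agents selected by the router for this message
    responses: dict             # agent_id -> response text


def select_agents(query: str) -> List[str]:
    # Stand-in for the LLM routing call (which would use deterministic
    # settings, e.g. temperature=0, for reproducibility).
    return ["analytics_agent"] if "campaign" in query.lower() else []


def router_node(state: WorkflowState) -> WorkflowState:
    state["pending_agents"] = select_agents(state["query"])
    return state


def analytics_node(state: WorkflowState) -> WorkflowState:
    state["responses"]["analytics_agent"] = "…KPI summary from mocked data…"
    return state


def route_next(state: WorkflowState):
    # Hand off to the first selected agent, or finish if nothing was selected.
    return state["pending_agents"][0] if state["pending_agents"] else END


graph = StateGraph(WorkflowState)
graph.add_node("router", router_node)
graph.add_node("analytics_agent", analytics_node)
graph.set_entry_point("router")
graph.add_conditional_edges("router", route_next)
graph.add_edge("analytics_agent", END)
app = graph.compile()

result = app.invoke({"query": "How did our last campaign perform?",
                     "pending_agents": [], "responses": {}})
```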
Specialized Agents:
- AnalyticsWiz: Connects to marketing campaign metrics (mocked data) to provide insights and KPIs (click-through rates, conversion, ROI, etc.); a minimal agent sketch follows this list.
- ProjectManager: Pulls in project/task data (mocked) to report on status, outstanding tasks, deadlines, and progress for marketing projects.
- ContentGenius: Recommends and develops content strategies based on analytics and project requirements. Suggests content types, formats, and optimization strategies.
- System Assistant: Handles greetings, small talk, and general conversational queries, providing polite, context-aware responses about the application.
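As an illustration of the "class with a clear interface" pattern, here is a minimal sketch of an analytics-style agent backed by mocked metrics. The dataclass shape, the `respond()` signature, and the sample numbers are assumptions for this example, not the contents of src/agents/analytics_agent.py.

```python
# Minimal sketch of a specialized agent as a class with a clear interface.
# The dataclass shape, respond() signature, and mocked metrics are assumptions
# for illustration; see src/agents/analytics_agent.py for the real implementation.
from dataclasses import dataclass, field
from typing import Dict, List

MOCK_CAMPAIGN_METRICS: Dict[str, Dict[str, float]] = {
    "spring_sale": {"clicks": 12_400, "impressions": 310_000,
                    "conversions": 620, "spend": 4_800.0, "revenue": 18_600.0},
}


@dataclass
class AnalyticsAgentSketch:
    agent_id: str = "analytics_agent"
    name: str = "AnalyticsWiz"
    capabilities: List[str] = field(default_factory=lambda: ["campaign KPIs", "ROI analysis"])

    def respond(self, query: str, shared_memory: dict) -> str:
        """Compute simple KPIs from mocked data and record them for other agents."""
        m = MOCK_CAMPAIGN_METRICS["spring_sale"]
        ctr = m["clicks"] / m["impressions"]
        conversion = m["conversions"] / m["clicks"]
        roi = (m["revenue"] - m["spend"]) / m["spend"]
        summary = (f"Campaign 'spring_sale': CTR {ctr:.2%}, "
                   f"conversion {conversion:.2%}, ROI {roi:.0%}.")
        shared_memory[self.agent_id] = summary  # blackboard entry for other agents
        return summary


# Example usage:
blackboard: dict = {}
print(AnalyticsAgentSketch().respond("How did our last campaign perform?", blackboard))
```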
Workflow:
- User Message: The user sends a message (via the Chainlit UI or the API).
- Routing: The orchestrator's router node uses an LLM to select the best agents for the message, based on agent capabilities and the query. Greetings and small talk are routed to the system assistant.
- Agent Collaboration: Selected agents read from and write to a persistent shared memory for the conversation, enabling multi-turn, collaborative workflows (a blackboard sketch follows this list).
- Agent Execution: Each selected agent node is executed in sequence. After an agent responds, it is removed from the pending list.
- Response Aggregation: The orchestrator aggregates all agent responses and returns a consolidated answer to the user.
- Logging & Traceability: All routing decisions, agent outputs, and conversation steps are logged for transparency and debugging.
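The blackboard idea can be sketched as a small in-memory store keyed by conversation ID. The class and method names below are assumptions for illustration; the real implementation in src/utils/shared_memory.py may differ (for example, it may persist entries beyond process memory).

```python
# Minimal in-memory blackboard sketch, keyed by conversation ID.
# Class and method names are assumptions for illustration; the real
# implementation lives in src/utils/shared_memory.py and may differ.
from collections import defaultdict
from datetime import datetime, timezone


class SharedMemorySketch:
    def __init__(self):
        self._store: dict[str, list[dict]] = defaultdict(list)

    def write(self, conversation_id: str, agent_id: str, content: str) -> None:
        """Append a timestamped entry so later agents can build on it."""
        self._store[conversation_id].append({
            "agent_id": agent_id,
            "content": content,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def read(self, conversation_id: str) -> list[dict]:
        """Return every entry recorded for the conversation so far."""
        return list(self._store[conversation_id])


# Example: ContentGenius reading AnalyticsWiz's output before responding.
memory = SharedMemorySketch()
memory.write("conv-1", "analytics_agent", "CTR 4%, ROI 288% for 'spring_sale'")
context = memory.read("conv-1")
```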
By default, the API runs at: http://localhost:8000
Check if the API is running.
- Response:
{ "status": "online", "message": "Multi-Agent Marketing Assistant API is running" }
List all registered agents.
- Response:
[ { "agent_id": "analytics_agent", "name": "AnalyticsWiz", "version": "1.1.0", "prompt_version": "1.1.0", "capabilities": [...] }, ... ]
Create a new conversation.
- Response:
{ "conversation_id": "..." }
Get the details of a specific conversation.
- Response:
{ "id": "...", "created_at": "...", "messages": [...], "metadata": {...} }
Process a query using the multi-agent system.
- Request Body:
{ "query": "How did our last campaign perform?", "conversation_id": "..." // Optional. If not provided, a new conversation is created. } - Response:
{ "conversation_id": "...", "messages": [...], // Full conversation history "response": {...} // Consolidated system response for this query }- The
responsefield contains the orchestrator's consolidated answer, including all agent responses.
- The
Replay a conversation to demonstrate reproducibility.
- Response:
{ "original_conversation_id": "...", "replay_conversation_id": "...", "original_messages": [...], "replayed_messages": [...] }
- All endpoints return appropriate HTTP status codes and error messages.
- Errors and routing decisions are logged for debugging.
- Example error response:
{ "detail": "Conversation 123 not found" }
- The Chainlit UI provides a chat interface for interacting with the multi-agent system (a minimal handler sketch follows this list).
- Each user message triggers a new routing decision and agent workflow.
- The UI displays the orchestrator's consolidated response, which aggregates insights from all relevant agents and the system assistant when appropriate.
- Each agent's response is labeled for traceability.
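The sketch below shows one way a Chainlit handler can forward each message to the API and keep the conversation ID in the user session. The "/query" route and the session handling are assumptions for illustration; chainlit_app.py in this repository may call the orchestrator directly instead.

```python
# Minimal Chainlit handler sketch that forwards each message to the API.
# The "/query" route and conversation_id handling are assumptions for
# illustration; the repository's chainlit_app.py may work differently.
import chainlit as cl
import requests

BASE_URL = "http://localhost:8000"


@cl.on_message
async def on_message(message: cl.Message):
    payload = {"query": message.content}
    conversation_id = cl.user_session.get("conversation_id")
    if conversation_id:
        payload["conversation_id"] = conversation_id

    # Synchronous HTTP call kept simple for the sketch.
    data = requests.post(f"{BASE_URL}/query", json=payload, timeout=60).json()

    cl.user_session.set("conversation_id", data["conversation_id"])
    await cl.Message(content=str(data["response"])).send()
```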
Project Structure:
- src/utils/orchestrator.py: Main orchestrator logic, agent registry, workflow definition, and routing.
- src/agents/analytics_agent.py, src/agents/content_agent.py, src/agents/project_agent.py: Specialized agent implementations.
- src/agents/response_model.py: Pydantic model for agent responses.
- src/utils/logger.py: Logging utilities.
- src/data/: Mock data for agents.
- src/utils/shared_memory.py: Shared memory (blackboard) implementation for agent collaboration.
- src/prompts/: Versioned prompt templates for all agents.
Prompt templates for all agents are stored as versioned JSON files in src/prompts/. Each agent loads its prompt dynamically based on the requested version. If the specified version is not found, the agent automatically falls back to the latest available version using the get_latest_version utility.
To add a new prompt version, create a new JSON file in src/prompts/ following the naming convention and structure of existing files. Agents will automatically use the latest version if no specific version is requested.
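A simplified sketch of this load-and-fallback behavior is shown below. The file naming scheme "<agent_id>_v<version>.json" and the function signatures are assumptions for illustration; the project's actual get_latest_version utility may be implemented differently.

```python
# Simplified sketch of versioned prompt loading with latest-version fallback.
# The "<agent_id>_v<version>.json" naming scheme and these signatures are
# assumptions; the project's get_latest_version utility may differ.
import json
import re
from pathlib import Path
from typing import Optional

PROMPTS_DIR = Path("src/prompts")


def get_latest_version(agent_id: str) -> str:
    """Pick the highest semantic version available for the agent."""
    pattern = re.compile(rf"{re.escape(agent_id)}_v(\d+\.\d+\.\d+)\.json$")
    versions = [m.group(1) for p in PROMPTS_DIR.glob(f"{agent_id}_v*.json")
                if (m := pattern.search(p.name))]
    return max(versions, key=lambda v: tuple(map(int, v.split("."))))


def load_prompt(agent_id: str, version: Optional[str] = None) -> dict:
    """Load the requested prompt version, falling back to the latest one."""
    if version:
        path = PROMPTS_DIR / f"{agent_id}_v{version}.json"
        if path.exists():
            return json.loads(path.read_text())
    latest = get_latest_version(agent_id)
    return json.loads((PROMPTS_DIR / f"{agent_id}_v{latest}.json").read_text())
```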
A Makefile is provided to automate common project tasks:
- clean: Remove all files in the logs/ directory.
- setup: Install dependencies and ensure a .env file exists (copies from .env.example if needed).
- api: Start the API server (python main.py).
- chainlit: Start the Chainlit UI (chainlit run chainlit_app.py).
Usage examples:
make clean # Clean up logs
make setup # Install requirements and set up .env
make api # Start the API server
make chainlit # Start the Chainlit UI

Architecture (Mermaid flowchart):

flowchart TD
User((User))
UI[Chainlit UI / API]
Orchestrator[Orchestrator]
Analytics[AnalyticsWiz]
Content[ContentGenius]
Project[ProjectManager]
SysAssistant[System Assistant]
SharedMemory[Shared Memory]
Data1[Analytics Data]
Data2[Content Data]
Data3[Project Data]
User-->|Message|UI
UI-->|Query|Orchestrator
Orchestrator-->|Route|Analytics
Orchestrator-->|Route|Content
Orchestrator-->|Route|Project
Orchestrator-->|Route|SysAssistant
Analytics-->|Read/Write|SharedMemory
Content-->|Read/Write|SharedMemory
Project-->|Read/Write|SharedMemory
Orchestrator-->|Aggregate|UI
Analytics-->|Data|Data1
Content-->|Data|Data2
Project-->|Data|Data3
Extensibility & Integration Notes:
- The architecture is modular and can be extended to integrate with a Multi-Agent Control Plane (MCP) or other orchestration frameworks.
- Agents can register with an MCP server, and the orchestrator can delegate routing and lifecycle management to the MCP.
- Dialogue management can be separated from agent logic for compliance and scalability.
- The system supports orchestrator-mediated communication via shared memory or message passing, and could support direct agent-to-agent (A2A) communication, although the latter would require some additional changes.
- Integration with message buses or event queues is possible for distributed coordination.
Limitations & Future Improvements:
- LLM Routing Reliability: The quality of agent selection depends on the LLM's understanding of agent capabilities and on the prompt design.
- Scalability: Adding many agents may require additional prompt engineering or a more scalable routing mechanism.
- Error Handling: Current error handling is basic; production systems should include more robust mechanisms.
- Security: API keys and sensitive data should be managed more securely.
- Parallel Agent Execution: Currently, agents are executed in sequence. Parallel execution is possible with further development (one possible approach is sketched below).
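One way parallel execution could be added is sketched below, assuming agents expose a synchronous respond(query, shared_memory) method as in the agent sketch earlier. This is purely illustrative of a possible future change; the current system runs agents strictly in sequence.

```python
# Illustrative sketch of parallel agent execution with asyncio.
# It assumes agents expose respond(query, shared_memory), as in the agent
# sketch above; the current system executes agents strictly in sequence.
import asyncio


async def run_agents_in_parallel(agents: list, query: str, shared_memory: dict) -> dict:
    """Run each agent's respond() in a worker thread and gather the results."""
    results = await asyncio.gather(
        *(asyncio.to_thread(agent.respond, query, shared_memory) for agent in agents)
    )
    return {agent.agent_id: text for agent, text in zip(agents, results)}


# Example (with agent objects like AnalyticsAgentSketch above):
# responses = asyncio.run(run_agents_in_parallel([analytics, content], "Q3 recap", {}))
```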