An intelligent platform that reads, understands, and answers questions about your documents.
Built to relieve the bottleneck of enterprise document overload, the system accepts PDF, DOCX, and TXT uploads, processes them asynchronously through Celery workers backed by Redis, generates semantic embeddings, and stores them in ChromaDB and pgvector for sub-second retrieval.
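The ingestion path boils down to chunk → embed → store. The sketch below shows that shape in plain Python; the chunk size, the hash-based `embed` stub, and the in-memory store are illustrative assumptions standing in for the real Celery task, embedding model, and ChromaDB/pgvector backends.

```python
# Illustrative sketch of the ingest pipeline: chunk -> embed -> store.
# The real system runs this inside a Celery task and persists vectors to
# ChromaDB/pgvector; here a stub embedder and a dict stand in.
import hashlib

def chunk_text(text: str, size: int = 200) -> list[str]:
    """Split raw text into fixed-size chunks (assumed chunking strategy)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(chunk: str, dim: int = 8) -> list[float]:
    """Stand-in embedding: deterministic hash-derived vector (NOT semantic)."""
    digest = hashlib.sha256(chunk.encode()).digest()
    return [b / 255 for b in digest[:dim]]

def ingest(doc_id: str, text: str, store: dict) -> int:
    """Chunk a document, embed each chunk, store vectors keyed by chunk id."""
    chunks = chunk_text(text)
    for i, chunk in enumerate(chunks):
        store[f"{doc_id}:{i}"] = {"text": chunk, "vector": embed(chunk)}
    return len(chunks)

store: dict = {}
n = ingest("report-1", "hello " * 100, store)  # 600 chars -> 3 chunks
```

In the real pipeline the `ingest` call would be a Celery task enqueued via Redis, so uploads return immediately while workers process in the background.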
A multi-agent LangGraph workflow coordinates summarization, Q&A, and grounding agents, applying retrieval at each reasoning step to reduce hallucination.
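The retrieve-then-verify pattern behind that grounding step can be sketched without the library. Everything below (the toy word-overlap retriever, the verbatim grounding check, the `propose` callback) is a hypothetical stand-in, not LangGraph's API or this project's agents.

```python
# Library-free sketch of one grounded reasoning step: retrieve supporting
# passages first, then keep a proposed answer only if the evidence backs it.
# All names and checks are illustrative simplifications.
from typing import Callable, Optional

def retrieve(query: str, corpus: list[str]) -> list[str]:
    """Toy retriever: passages sharing at least one word with the query."""
    words = set(query.lower().split())
    return [p for p in corpus if words & set(p.lower().split())]

def grounded(answer: str, evidence: list[str]) -> bool:
    """Toy check: the answer must appear verbatim in some retrieved passage."""
    return any(answer.lower() in p.lower() for p in evidence)

def answer_with_grounding(question: str, corpus: list[str],
                          propose: Callable[[list[str]], str]) -> Optional[str]:
    """One step: retrieve evidence, propose an answer, verify it, else drop it."""
    evidence = retrieve(question, corpus)
    candidate = propose(evidence)
    return candidate if grounded(candidate, evidence) else None

corpus = ["The invoice total is 42 dollars.", "Shipping takes three days."]
ans = answer_with_grounding("What is the invoice total?", corpus,
                            propose=lambda ev: "42 dollars")
```

In the actual workflow each agent would be a graph node and `propose` an LLM call; the point of the sketch is that verification against freshly retrieved evidence gates every step.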
- End-to-end async document ingestion with Celery + Redis task queue
- Semantic vector search via ChromaDB + pgvector
- Multi-agent LangGraph orchestration for long-context reasoning
- CI/CD pipeline via GitHub Actions with pre-commit hooks
- Full Docker Compose deployment (API, workers, vector DB, broker)
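The semantic-search bullet above reduces to nearest-neighbor ranking over embedding vectors. A brute-force cosine-similarity top-k in plain Python shows the idea; ChromaDB and pgvector do the same ranking with proper indexes, and the sample vectors here are made up.

```python
# Minimal sketch of what the vector stores compute: rank stored vectors
# by cosine similarity to a query vector and return the top-k ids.
# Real deployments use indexed approximate search, not this linear scan.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: list[float], vectors: dict[str, list[float]], k: int = 2):
    """Return ids of the k stored vectors most similar to the query."""
    ranked = sorted(vectors, key=lambda vid: cosine(query, vectors[vid]),
                    reverse=True)
    return ranked[:k]

vectors = {
    "doc-a": [1.0, 0.0],
    "doc-b": [0.9, 0.1],
    "doc-c": [0.0, 1.0],
}
hits = top_k([1.0, 0.05], vectors, k=2)  # -> ["doc-a", "doc-b"]
```

pgvector exposes the same operation through SQL distance operators, and ChromaDB through its query API; the linear scan above is only the conceptual baseline.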