Enterprise Knowledge Retrieval
EDDI provides a complete Retrieval-Augmented Generation (RAG) pipeline with native support for multiple embedding providers and vector stores, plus a zero-infrastructure RAG option via HTTP calls.
RAG Capabilities
- 7 Embedding Providers — OpenAI, Ollama, Azure OpenAI, Mistral, Amazon Bedrock, Cohere, Google Vertex AI
- 5 Vector Stores — pgvector, In-Memory, MongoDB Atlas, Elasticsearch, Qdrant
- httpCall RAG — Zero-infrastructure RAG via any search API (BM25, Elasticsearch, custom endpoints)
- REST Ingestion API — Async document ingestion with status tracking and batch processing
- Hybrid Search — Combine dense vector retrieval with sparse keyword matching for better recall than either approach alone
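The hybrid-search capability above fuses a dense (vector) ranking with a sparse (keyword) ranking. One common fusion technique is reciprocal rank fusion (RRF); the sketch below is a generic, self-contained illustration of that idea, not EDDI's actual implementation:

```python
def rrf_fuse(dense_ranked, sparse_ranked, k=60):
    """Fuse two ranked lists of document ids with reciprocal rank fusion.

    Each document contributes 1 / (k + rank) per list it appears in, so
    documents surfaced by both retrievers accumulate both contributions
    and rise toward the top of the fused ranking.
    """
    scores = {}
    for ranking in (dense_ranked, sparse_ranked):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Dense retrieval favors d2, keyword search favors d3; d1 appears in both
# lists, so RRF promotes it above either list's top hit.
fused = rrf_fuse(["d2", "d1", "d4"], ["d3", "d1", "d5"])
```

The constant `k` (60 is a conventional default) damps the influence of top ranks so a single retriever cannot dominate the fused list.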
Flexible Deployment
RAG is fully configuration-driven. Choose your embedding provider and vector store via JSON configuration — no code changes needed. The httpCall RAG option lets you use any existing search infrastructure (Elasticsearch, Solr, custom APIs) without deploying a separate vector database.
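As a minimal sketch of what such a JSON configuration could look like — the key names, provider identifiers, and connection string below are illustrative assumptions, not EDDI's documented schema:

```json
{
  "embedding": {
    "provider": "openai",
    "model": "text-embedding-3-small"
  },
  "vectorStore": {
    "type": "pgvector",
    "connectionString": "jdbc:postgresql://localhost:5432/eddi"
  }
}
```

Swapping, say, `"pgvector"` for `"qdrant"` or `"openai"` for `"ollama"` would then switch backends without touching application code, which is the point of the configuration-driven design.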