# txtai applications

This repository hosts a series of example txtai applications. Applications are YAML configuration files that automatically build and serve a ready-to-use web API.
## Install

Run the following to get started.

```bash
pip install txtai[api]
```
## Applications

List of applications.
- **RAG**: Retrieval Augmented Generation (RAG) with the following configuration.
  - Text extraction with Docling, chunking and indexing of documents. Supports PDF, DOCX, XLSX, web pages and more.
  - Vector database with embeddings generation
  - gpt-oss-20B LLM
  - RAG pipeline that joins vector search with the LLM
- **Translation**: Translates input text to the target language of choice. Automatically detects the input language and selects the best translation model.
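To illustrate the shape such a configuration takes, here is a minimal sketch of a RAG application YAML. The keys follow txtai's application configuration format, but the model paths and workflow steps are placeholders; see the actual `rag.yml` in this repository for the real configuration.

```yaml
# Sketch of a txtai RAG application config (values are placeholders)

# Vector database with embeddings generation
embeddings:
  path: sentence-transformers/all-MiniLM-L6-v2
  content: true

# LLM used to generate answers
llm:
  path: openai/gpt-oss-20b

# Text extraction pipeline
textractor:
  backend: docling

# Workflow that extracts text and indexes it into the embeddings database
workflow:
  index:
    tasks:
      - action: textractor
      - action: upsert
```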
## Start an application

Pick a file, download it and run the following.

```bash
CONFIG=rag.yml uvicorn "txtai.api:app"
```
## Sample actions

Index the txtai website.

```bash
# Pick any PDF, XLSX, DOCX etc
curl -X POST "http://localhost:8000/workflow" \
  -H "Content-Type: application/json" \
  -d '{"name":"index","elements":["https://github.com/neuml/txtai"]}'
```
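The same workflow call can be made from Python. This is a minimal sketch using only the standard library; the endpoint and payload shape come from the curl command above, while the helper names are illustrative.

```python
import json
from urllib import request

WORKFLOW_URL = "http://localhost:8000/workflow"  # assumes the API is running locally

def build_payload(name, elements):
    # JSON body expected by the /workflow endpoint: workflow name + input elements
    return json.dumps({"name": name, "elements": elements})

def run_workflow(name, elements):
    # POST the payload and return the decoded JSON response
    req = request.Request(
        WORKFLOW_URL,
        data=build_payload(name, elements).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as response:
        return json.loads(response.read())

# Example: run_workflow("index", ["https://github.com/neuml/txtai"])
```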
Run a RAG search.

```bash
curl -N "http://localhost:8000/rag?query=List+txtai+strengths&maxlength=2048&stripthink=True&stream=True"
```
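A matching Python sketch for the streaming RAG call, again standard library only; the parameter names mirror the curl query string above, and the helper names are illustrative.

```python
from urllib import parse, request

RAG_URL = "http://localhost:8000/rag"  # assumes the API is running locally

def build_url(query, **params):
    # Encode the query plus any extra parameters (maxlength, stream, ...)
    return f"{RAG_URL}?{parse.urlencode({'query': query, **params})}"

def rag_stream(query, **params):
    # With stream=True, yield response chunks as they arrive
    with request.urlopen(build_url(query, **params)) as response:
        for line in response:
            yield line.decode("utf-8")

# Example:
# for chunk in rag_stream("List txtai strengths", maxlength=2048, stream=True):
#     print(chunk, end="")
```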
Use the language or library of your choice to interact with the API. txtai also provides client libraries for JavaScript, Java, Rust and Go.
## Run with Docker

If you'd rather not install txtai locally, these applications can also be run with txtai's Docker image.

```bash
docker run -it -p 8000:8000 -v /tmp/config:/config -e CONFIG=rag.yml --entrypoint uvicorn neuml/txtai-gpu --host 0.0.0.0 txtai.api:app
```
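The same run can be expressed as a Compose file, which keeps the port, volume and entrypoint settings in one place. This is a sketch that assumes `rag.yml` has been downloaded into a local `./config` directory:

```yaml
# docker-compose.yml sketch mirroring the docker run command above
services:
  txtai:
    image: neuml/txtai-gpu
    entrypoint: ["uvicorn", "--host", "0.0.0.0", "txtai.api:app"]
    environment:
      - CONFIG=rag.yml
    ports:
      - "8000:8000"
    volumes:
      - ./config:/config   # directory containing rag.yml
```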