Local LLM with RAG
This project is an experimental sandbox for testing out ideas related to running local Large Language Models (LLMs) with Ollama to perform Retrieval-Augmented Generation (RAG) for answering questions based on sample PDFs. In this project, we are also using Ollama to create embeddings with the nomic-embed-text model for use with Chroma. Please note that the embeddings are reloaded each time the application runs; this is not efficient and is only done here for testing purposes.
Currently, each question is a one-off: the LLM does not know what was previously asked.
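To illustrate the flow described above, here is a minimal sketch (not the project's actual code) of embedding document chunks with nomic-embed-text, storing them in an in-memory Chroma collection, and answering a one-off question with a local model. It assumes the `ollama` and `chromadb` Python packages are installed and an Ollama server is running locally.

```python
# Minimal RAG sketch: embed with Ollama, store/query with Chroma, answer with a local LLM.
import ollama
import chromadb

client = chromadb.Client()  # in-memory; embeddings are rebuilt on every run
collection = client.create_collection("docs")

# Example document chunks (stand-ins for text extracted from the sample PDFs).
documents = [
    "Ollama runs large language models locally.",
    "Chroma stores embeddings for retrieval.",
]

# Embed each chunk with nomic-embed-text and store it in Chroma.
for i, doc in enumerate(documents):
    emb = ollama.embeddings(model="nomic-embed-text", prompt=doc)["embedding"]
    collection.add(ids=[str(i)], embeddings=[emb], documents=[doc])

# Retrieve the most relevant chunk for a question, then ask the LLM.
question = "What does Ollama do?"
q_emb = ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"]
results = collection.query(query_embeddings=[q_emb], n_results=1)
context = results["documents"][0][0]

answer = ollama.chat(model="mistral", messages=[
    {"role": "user", "content": f"Answer using this context:\n{context}\n\nQuestion: {question}"},
])
print(answer["message"]["content"])
```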
Requirements
- Ollama version 0.1.26 or higher.
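You can check the installed version with `ollama --version`.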
Setup
- Clone this repository to your local machine.
- Create a Python virtual environment by running `python3 -m venv env`.
- Activate the virtual environment by running `source env/bin/activate` on Unix or macOS, or `.\env\Scripts\activate` on Windows.
- Install the required Python packages by running `pip install -r requirements.txt`.
Running the Project
Note: The first time you run the project, it will download the necessary models from Ollama for the LLM and embeddings. This is a one-time setup process and may take some time depending on your internet connection.
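If you prefer to download the models ahead of time, you can pull them manually with the Ollama CLI, for example `ollama pull mistral` and `ollama pull nomic-embed-text`.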
- Ensure your virtual environment is activated.
- Run the main script with `python app.py -m <model_name> -p <path_to_documents>` to specify a model and the path to the documents. If no model is specified, it defaults to `mistral`. If no path is specified, it defaults to the `Research` directory located in the repository for example purposes.
- Optionally, you can specify the embedding model to use with `-e <embedding_model_name>`. If not specified, it defaults to `nomic-embed-text`.

This will load the PDFs and Markdown files, generate embeddings, query the collection, and answer the question defined in `app.py`.
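For example, to run against the bundled sample documents with the defaults spelled out explicitly, you could use `python app.py -m mistral -e nomic-embed-text -p Research`.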
Technologies Used
- Ollama - runs the local LLM (default: `mistral`) and generates the embeddings (default: `nomic-embed-text`)
- Chroma - stores the document embeddings for retrieval
- Python