Develop a Cloud-Hosted RAG App With an Open Source LLM

Retrieval-augmented generation (RAG) is often used to develop customized AI applications, including chatbots, recommendation systems and other personalized tools. This approach combines the strengths of vector databases and large language models (LLMs) to deliver high-quality, contextually relevant results.

Selecting the right LLM for any RAG model is very important and requires considering factors like cost, privacy concerns and scalability. Commercial LLMs like OpenAI’s GPT-4 and Google’s Gemini are effective but can be expensive and raise data privacy concerns. Some users prefer open source LLMs for their flexibility and cost savings, but they require substantial resources for fine-tuning and deployment, including GPUs and specialized infrastructure. Additionally, managing model updates and scalability can be challenging with local setups.

A better solution is to select an open source LLM and deploy it on the cloud. This approach provides the necessary computational power and scalability without the high costs and complexities of local hosting. It not only saves on upfront infrastructure costs but also minimizes maintenance concerns.

Let’s explore this approach and develop an application using a cloud-hosted open source LLM and a scalable vector database.

Several tools are required to develop this RAG-based AI application: LangChain for loading and splitting the source data, BentoML for deploying the LLM and embedding models to the cloud, and MyScaleDB as the vector database.

In this tutorial, we will extract data from Wikipedia using LangChain’s WikipediaLoader module and build a RAG application on top of that data.

Start by setting up your environment to use BentoML, MyScaleDB and LangChain on your system. Open your terminal and enter the install command:
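A minimal install covering the libraries used later in this walkthrough might look like the following; the exact package list is an assumption based on the imports that appear below.

pip install bentoml langchain langchain-community langchain-text-splitters clickhouse-connect transformers torch pandas wikipedia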

Begin by importing the WikipediaLoader from the langchain_community.document_loaders.wikipedia module. You’ll use this loader to fetch documents related to “Albert Einstein” from Wikipedia.
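A minimal sketch of that step, keeping the loader’s default settings:

from langchain_community.document_loaders.wikipedia import WikipediaLoader

# Fetch Wikipedia documents related to "Albert Einstein"
loader = WikipediaLoader(query="Albert Einstein")
docs = loader.load()

# Print the first document to verify the loaded data
print(docs[0].page_content)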

This uses the load method to retrieve the “Albert Einstein” documents and the print function to print the contents of the first document, verifying the loaded data.

Import the CharacterTextSplitter from langchain_text_splitters, join the contents of all pages into a single string, and then split the text into manageable chunks.
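A sketch of the splitting step; the chunk size and overlap below are illustrative values rather than ones prescribed by this tutorial:

from langchain_text_splitters import CharacterTextSplitter

# Join the contents of all loaded pages into a single string
text = " ".join(doc.page_content for doc in docs)

# Split the text into manageable chunks (sizes are illustrative)
text_splitter = CharacterTextSplitter(separator=" ", chunk_size=400, chunk_overlap=100)
splits = text_splitter.split_text(text)

print(f"Created {len(splits)} chunks")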

Your data is ready, and the next step is to deploy the models on BentoML and use them in your RAG application. Deploy the LLM first. You’ll need a free BentoML account, and you can sign up for one on BentoCloud if needed. Next, navigate to the Deployments section and click on the Create Deployment button in the top-right corner. A deployment configuration page will open.

Select the bentoml/bentovllm-llama3-8b-instruct-service model from the drop-down menu and click “Submit” in the bottom-right corner. This starts deploying the model and opens a page showing the deployment’s status.

The deployment can take some time. Once it is deployed, copy the endpoint.

Note: BentoML’s free tier only allows the deployment of a single model. If you have a paid plan and can deploy more than one model, follow the steps below. If not, don’t worry — we will use an open source model locally for embeddings.

Deploying the embedding model is very similar to the LLM deployment: create another deployment, select an embedding model service from the drop-down menu and, once it is running, copy its endpoint.

Next, go to the API Tokens page and generate a new API key. Now you are ready to use the deployed models in your RAG application.

You will define a function called get_embeddings to generate embeddings for the provided text. This function takes three arguments. If the BentoML endpoint and API token are provided, the function uses BentoML’s embedding service; otherwise, it uses the local transformers and torch libraries to load the sentence-transformers/all-MiniLM-L6-v2 model and generate embeddings.

This setup allows flexibility for free-tier BentoML users, who can deploy only one model at a time. If you have a paid version of BentoML and can deploy two models, you can pass the BentoML endpoint and Bento API token to use the deployed embedding model.
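One way to sketch such a function: the encode call assumes the deployed embedding service exposes an encode API, and the local fallback mean-pools the MiniLM token embeddings.

import bentoml
import torch
from transformers import AutoTokenizer, AutoModel

# Local fallback model, loaded once (used when no BentoML endpoint is given)
MODEL_NAME = "sentence-transformers/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

def get_embeddings(texts: list, bento_endpoint: str = None, bento_api_token: str = None) -> list:
    # If a BentoML endpoint and API token are provided, call the deployed
    # embedding service (assumes the service exposes an `encode` endpoint)
    if bento_endpoint and bento_api_token:
        client = bentoml.SyncHTTPClient(bento_endpoint, token=bento_api_token)
        result = client.encode(sentences=texts)
        # Normalize the response to plain Python lists
        return [list(vec) for vec in result]

    # Otherwise, embed locally with transformers and torch
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool the token embeddings to get one 384-dimensional vector per text
    return outputs.last_hidden_state.mean(dim=1).numpy().tolist()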

Iterate over the text chunks (splits) in batches of 25 to generate embeddings using the get_embeddings function defined above.

This prevents overloading the embedding model with too much data at once, which can be particularly useful for managing memory and computational resources.
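A sketch of that loop, continuing from the splits list created earlier:

all_embeddings = []

# Process the chunks in batches of 25 to avoid overloading the embedding model
for i in range(0, len(splits), 25):
    batch = splits[i : i + 25]
    all_embeddings.extend(get_embeddings(batch))

print(f"Generated {len(all_embeddings)} embeddings")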

Now, create a pandas DataFrame to store the text chunks and their corresponding embeddings.
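For example:

import pandas as pd

# Pair each text chunk with its embedding vector
df = pd.DataFrame({
    "page_content": splits,
    "embeddings": all_embeddings,
})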

The knowledge base is complete, and now it’s time to save the data to the vector database. This demo uses MyScaleDB for vector storage. Start a MyScaleDB cluster in a cloud environment by following the quickstart guide. Then you can establish a connection to the MyScaleDB database using the clickhouse_connect library.
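A connection sketch; the host, username and password placeholders should be replaced with the connection details from your MyScaleDB cluster console:

import clickhouse_connect

# Connection details come from your MyScaleDB cluster console
client = clickhouse_connect.get_client(
    host="your-myscale-host",
    port=443,
    username="your-username",
    password="your-password",
)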

Create a table in MyScaleDB to store the text chunks and embeddings. The table schema includes an id, the page_content and the embeddings.
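One possible schema, following the table and column names mentioned in the text; the 384-length constraint matches all-MiniLM-L6-v2 embeddings, and an insert of the DataFrame rows is included for completeness:

# Create the table holding the chunks and their 384-dimensional embeddings
client.command("DROP TABLE IF EXISTS default.RAG")
client.command("""
    CREATE TABLE default.RAG (
        id UInt64,
        page_content String,
        embeddings Array(Float32),
        CONSTRAINT check_length CHECK length(embeddings) = 384
    ) ENGINE = MergeTree()
    ORDER BY id
""")

# Insert the text chunks and embeddings from the DataFrame
rows = [(int(idx), row["page_content"], row["embeddings"]) for idx, row in df.iterrows()]
client.insert("default.RAG", rows, column_names=["id", "page_content", "embeddings"])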

The next step is to add a vector index to the embeddings column in the RAG table. The vector index allows for efficient similarity searches, which are essential for retrieval-augmented generation tasks.
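A sketch of adding the index, assuming MyScaleDB’s MSTG index type with the cosine metric:

# Add a vector index over the embeddings column using cosine distance
client.command("""
    ALTER TABLE default.RAG
    ADD VECTOR INDEX vector_index embeddings
    TYPE MSTG('metric_type=Cosine')
""")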

Define a function to retrieve relevant documents based on a user query. The query embeddings are generated using the get_embeddings function, and an advanced SQL vector query is executed to find the closest matches in the database.
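A sketch of the retrieval step; distance() is MyScaleDB’s vector search function, and the table and column names follow the schema above:

def get_relevant_docs(user_query: str, top_k: int = 5) -> list:
    # Embed the query with the same pipeline used for the chunks
    query_embedding = get_embeddings([user_query])[0]

    # Find the closest chunks using MyScaleDB's distance() vector search
    results = client.query(f"""
        SELECT page_content,
               distance(embeddings, {query_embedding}) AS dist
        FROM default.RAG
        ORDER BY dist
        LIMIT {top_k}
    """)
    return [row["page_content"] for row in results.named_results()]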

Note: The distance function takes the embeddings column and the embedding vector of the user query, and finds similar documents by applying cosine similarity.

Establish a connection to your hosted LLM on BentoML. The llm_client object will be used to interact with the LLM for generating responses based on the retrieved documents.
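A sketch of the client setup, using bentoml.SyncHTTPClient with the endpoint and API token you copied earlier (the placeholder values are yours to fill in):

import bentoml

# Endpoint from the BentoCloud deployment page, token from the API Tokens page
BENTO_LLM_END_POINT = "your-llm-endpoint-url"
BENTO_API_TOKEN = "your-bento-api-token"

llm_client = bentoml.SyncHTTPClient(BENTO_LLM_END_POINT, token=BENTO_API_TOKEN)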

Define a function to perform RAG. The function takes a user question and the retrieved context as input. It constructs a prompt for the LLM, instructing it to answer the question based on the provided context. The response from the LLM is then returned as the answer.
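A sketch of that function; it assumes the deployed Llama 3 service exposes a generate endpoint, and the prompt wording is illustrative:

def dorag(question: str, context: list) -> str:
    # Build a prompt instructing the LLM to answer from the retrieved context only
    prompt = (
        "You are a helpful assistant. Answer the question based only on the "
        f"following context:\n\n{' '.join(context)}\n\nQuestion: {question}"
    )

    # Call the hosted LLM (assumes the service exposes a `generate` API)
    response = llm_client.generate(max_tokens=1024, prompt=prompt)
    # The service may stream tokens; join them into a single answer string
    return "".join(response)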

Finally, you can test it out by making a query to the RAG application. Ask the question “Who is Albert Einstein?” and use the dorag function to get the answer based on the relevant documents retrieved earlier.
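Putting the pieces together:

question = "Who is Albert Einstein?"

# Retrieve the most relevant chunks and generate an answer from them
relevant_docs = get_relevant_docs(question)
answer = dorag(question=question, context=relevant_docs)
print(answer)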

If you ask the RAG model about Albert Einstein’s death, it should similarly return an answer grounded in the retrieved Wikipedia passages.

BentoML stands out as an excellent platform for deploying machine learning models, including LLMs, without the hassle of managing resources. With BentoML, you can quickly deploy and scale your AI applications on the cloud, ensuring they are production-ready and highly accessible. Its simplicity and flexibility make it an ideal choice for developers, enabling them to focus more on innovation and less on deployment complexities.

On the other hand, MyScaleDB is purpose-built for RAG applications, offering a high-performance SQL vector database. Its familiar SQL syntax makes it easy for developers to integrate and use MyScaleDB in their applications, as the learning curve is minimal. MyScaleDB’s Multi-Scale Tree Graph (MSTG) algorithm significantly outperforms other vector databases in terms of speed and accuracy. Additionally, MyScaleDB offers each new user free storage for up to 5 million vectors, making it a desirable option for developers looking to implement efficient and scalable AI solutions.

What do you think about this project? Share your thoughts on Twitter and Discord.
