A Guide to Building Llama 3.1 RAG Applications with TIR AI Studio
Retrieval-Augmented Generation (RAG) systems are becoming increasingly popular for building AI applications that leverage company knowledge bases. RAG operates in two steps: first, the retrieval step, where relevant data is extracted from a store; and second, the generation step, where the retrieved context is incorporated into the prompt for the LLM, enabling it to generate a response that is accurate and grounded in knowledge beyond its pre-training data.
When building data-sovereign AI applications, you’ll want the RAG system to use a large language model (LLM) deployed on your own cloud infrastructure. You should also use a vector store that can be easily deployed, rather than relying solely on SaaS-based vector stores. By keeping the entire stack within your cloud infrastructure in India, you gain the benefits of building data-sovereign AI that adheres to compliance regulations, without the risk of leaking sensitive data to external platform companies.
In this article, we will guide you through the process of creating RAG applications using TIR AI Studio. TIR is a no-code AI development platform that allows you to deploy and perform inference using advanced LLMs without the hassle of managing infrastructure. By the end of this article, you will have all the tools needed to build RAG applications on your company’s data.
Why TIR AI Studio?
When using LLMs, you need advanced cloud GPUs, such as A100, H100, or L4, to lower the latency of your application. As a data scientist or AI developer, you therefore need a workflow that simplifies LLM deployment and inference without requiring any programming effort.
This is where TIR AI Studio excels. It provides an intuitive, no-code interface for deploying any model from Hugging Face, automating training pipelines, building AI workflows, and integrating them seamlessly with vector databases like Qdrant or PGVector. With TIR, you can focus entirely on your AI models and workflows, while the platform manages the complexities of scaling, deployment, and optimization.
Best of all, you can leverage advanced cluster cloud GPUs, like InfiniBand-powered 8xH100. This is especially important when launching applications in production, where high performance is critical. In terms of cost, TIR is far more cost-effective than other AI studios, making it an ideal choice—so give it a spin!
About Llama 3.1-8B
Llama 3.1-8B is part of an advanced family of multilingual large language models (LLMs), which include models with 8 billion, 70 billion, and 405 billion parameters. The Llama 3.1-8B model, in particular, is instruction-tuned, optimized for generating high-quality text, and is suited for tasks involving multilingual dialogue, making it ideal for use cases such as virtual assistants, chatbots, and more.
Key Features of Llama 3.1-8B
Llama 3.1-8B is instruction-tuned, multilingual, and optimized for generating high-quality text. We will use Llama 3.1-8B for this tutorial.
Guide to Building RAG on TIR
In this project, we will use the Llama 3.1-8B Instruct model and the Qdrant vector store to build the RAG application.
Let’s get started.
Step 1: Launch Llama 3.1-8B Endpoint
Head to TIR AI Studio, click on Model Endpoints on the left sidebar, and then click on Create Endpoint.
You’ll need to add your Hugging Face access token (hf_token) in the next step. We will assume that you have already requested and been granted access to the Llama 3.1 model on Hugging Face.
Provide the token in the field shown.
You will also need to choose the GPU node, based on the number of parameters in your model. For Llama 3.1-8B, we will choose the L4 series of cloud GPUs.
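As a rough rule of thumb (a sizing assumption, not a TIR requirement), a model served in 16-bit precision needs about 2 bytes of GPU memory per parameter for the weights alone, plus headroom for the KV cache and activations:
# Back-of-the-envelope VRAM estimate (assumption: FP16/BF16 weights at ~2 bytes per parameter)
params_in_billions = 8                # Llama 3.1-8B
weights_gb = params_in_billions * 2   # ~16 GB for the weights alone
print(f"Approximate VRAM for weights: ~{weights_gb} GB, excluding KV cache and activations")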
Once that’s done, select the Plan Details. Choose a disk replica size of at least 30 GB (we recommend at least 100 GB if you are going to train the model).
Finally, you can set the environment variables if required.
You can now launch the endpoint.
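Once the endpoint is running, you can do a quick sanity check from any Python environment. The endpoint URL, API token, and model name below are placeholders; use the values shown on your TIR endpoint details page:
import openai

openai.api_key = "<your-TIR-api-token>"        # placeholder
openai.base_url = "<your-model-endpoint-url>"  # placeholder, copied from the TIR dashboard

# Assumes the endpoint exposes an OpenAI-compatible chat completions API
resp = openai.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # use the model name shown for your endpoint
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)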
Step 2: Launch Jupyter Notebook
We will launch a Jupyter Notebook to build our RAG application. For that, select ‘Nodes’ from the sidebar.
Now select the CPU or GPU needed according to your preference. Since the model endpoint is launched already, we can go with a CPU node here.
You will see the Jupyter Notebook launched in the list of Nodes.
Select the Python 3 (ipykernel) kernel.
You’re all set with your Jupyter notebook on your node.
Step 3: Building RAG
First, install the required libraries. We will also install PyPDF2 so that you can parse PDF documents.
pip install -q openai PyPDF2 sentence-transformers qdrant-client
Import the libraries.
import openai
import PyPDF2
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct, Distance, VectorParams
from sentence_transformers import SentenceTransformer
embeddings = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
qdrant_client = QdrantClient(":memory:")
Let’s also initialize Qdrant.
def initialize_qdrant(length: int):
    vector_size = length
    # Define the vectors configuration
    vector_params = VectorParams(
        size=vector_size,          # Size of the vectors
        distance=Distance.COSINE   # Distance metric (COSINE, EUCLID, or DOT)
    )
    # Create the collection only if it does not already exist
    existing_collections = [c.name for c in qdrant_client.get_collections().collections]
    if "CHATBOT" not in existing_collections:
        qdrant_client.create_collection(
            collection_name="CHATBOT",
            vectors_config=vector_params  # Specify vector configuration
        )
We are using the all-mpnet-base-v2 embedding model, and Qdrant in :memory: mode for testing. You can also launch a Qdrant node on TIR and point the client at that endpoint, as shown below.
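If you do launch a Qdrant node on TIR, the client connects to it instead of the in-memory store. The URL and API key below are placeholders for your own deployment:
from qdrant_client import QdrantClient

qdrant_client = QdrantClient(
    url="https://<your-qdrant-endpoint>:6333",  # placeholder endpoint URL
    api_key="<your-qdrant-api-key>",            # placeholder API key, if authentication is enabled
)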
Now, we will provide a PDF to the PDF reader and add it to the corpus. You can do the same for multiple PDFs.
pdf_reader = PyPDF2.PdfReader("company_info.pdf")
pdf_corpus = []
for page in pdf_reader.pages:
    pdf_corpus.append(page.extract_text())
This will read the PDF and store its text pagewise in a list. The structure will be the following:
pdf_corpus = [page1_text, page2_text, page3_text, ...]
Next, let’s tokenize the paragraphs in the corpus.
def tokenize_paragraphs(pdf_corpus):
    send = []
    page_no = 1
    for document in pdf_corpus:
        section_no = 1
        paragraphs = document.split(".\n")
        for para in paragraphs:
            send.append([para, {'page_no': page_no, 'section_no': section_no}])
            section_no += 1
        page_no += 1
    return send

# data = [ [raw_text, {page_no, section_no}], [raw_text, {page_no, section_no}], ... ]
Above, we break each page’s content into sections and attach page_no and section_no metadata to every paragraph. This will help us with the retrieval process later, and allow us to create citations.
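For example, on a toy two-page corpus (illustrative strings only), the output looks like this:
sample = tokenize_paragraphs(["Intro.\nDetails about the product.", "Pricing.\nContact us."])
# [['Intro', {'page_no': 1, 'section_no': 1}],
#  ['Details about the product.', {'page_no': 1, 'section_no': 2}],
#  ['Pricing', {'page_no': 2, 'section_no': 1}],
#  ['Contact us.', {'page_no': 2, 'section_no': 2}]]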
Let’s now generate the embeddings.
def generate_embeddings(data_text):
    return embeddings.encode(data_text)
We can also create a convenience function.
def prepare_embeddings(data):
    final_data = []
    for item in data:
        vectors = generate_embeddings(item[0])  # Embed the raw paragraph text
        final_data.append([{"raw_text": item[0],
                            "page_no": item[1]['page_no'],
                            "section_no": item[1]['section_no']},
                           vectors])
    return final_data
This is the structure of our final data, which will be inserted into Qdrant:
final_data = [ [{raw_text, page_no, section_no}, vectors], [{raw_text, page_no, section_no}, vectors], ..., [{raw_text, page_no, section_no}, vectors] ]
Now, we can insert the data into Qdrant.
def qdrant_entry(final_data):
    points = [
        PointStruct(id=i,
                    vector=final_data[i][1].tolist(),  # convert the numpy embedding to a plain list
                    payload={'raw_context': final_data[i][0]['raw_text'],
                             'page_no': final_data[i][0]['page_no'],
                             'section_no': final_data[i][0]['section_no']})
        for i in range(len(final_data))
    ]
    qdrant_client.upsert(collection_name="CHATBOT", points=points)
    print(qdrant_client.get_collections())
Let’s also build a function to query Qdrant. It performs a similarity search: it embeds the query and returns the stored points whose vectors are most similar to it.
def query_qdrant(query, collection_name='CHATBOT', limit=8):
    query_vector = generate_embeddings(query)
    result = qdrant_client.search(
        collection_name=collection_name,
        query_vector=query_vector,
        limit=limit,
        with_vectors=False
    )
    return result
We will now prepare the LLM context from the retrieved results. Each item returned by Qdrant is a scored point whose payload carries raw_context, page_no, and section_no; for the prompt, we only need the raw text.
We have to transform this into a format that the LLM can use.
def prepare_llm_context(result):
    # Each scored point's payload holds the paragraph text under 'raw_context'
    context = []
    for point in result:
        context.append(point.payload['raw_context'])
    return context
Finally, let’s query the LLM with the context and the user query as part of the prompt. We use the Llama 3.1-8B endpoint here; you can choose any other model endpoint you create on TIR.
def query_llm(context, query):
    token = ""            # Your TIR API token
    openai.api_key = token
    openai.base_url = ""  # Your TIR model endpoint URL (from the endpoint details page)
    completion = openai.chat.completions.create(
        model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # should match the model name of your endpoint
        messages=[
            {
                "role": "system",
                "content": "You are an answer generation agent. You will be given context and a query; generate the answer in a human-readable form."
            },
            {
                "role": "user",
                "content": f"Here's the question: {query}\nHere's the context: {'--'.join(context)}"
            },
        ],
    )
    return completion.choices[0].message.content
Let’s bring it all together. This is our ingestion pipeline.
data = tokenize_paragraphs(pdf_corpus)
final_data = prepare_embeddings(data)
initialize_qdrant(len(final_data[0][1]))
qdrant_entry(final_data)
Step 4: Querying the RAG
This is how you query your RAG system.
query = "your RAG query"
result = query_qdrant(query)
llm_context = prepare_llm_context(result)
response = query_llm(llm_context, query)
print(response)
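Because we stored page_no and section_no as payload metadata, you can also surface simple citations alongside the answer. A minimal sketch:
# List the sources the answer was drawn from, with page and section citations
for point in result:
    print(f"(page {point.payload['page_no']}, section {point.payload['section_no']}) "
          f"{point.payload['raw_context'][:80]}...")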
That’s all. You can pass in any query, and the system will respond with context drawn from the PDF. The whole exercise will take you less than 20 minutes on TIR! TIR’s no-code model endpoint launch and integrated notebook environment dramatically simplify the development process.
Conclusion
Building a Retrieval-Augmented Generation (RAG) system can often seem like a complex task, especially when handling large language models (LLMs) and vector stores. However, with TIR AI Studio, the process becomes streamlined and efficient. TIR’s no-code interface allows you to focus on what truly matters—your AI models and workflows—while it manages the intricacies of infrastructure, deployment, and scaling.
By following this guide, you’ve learnt how to launch the Llama 3.1-8B model on TIR, initialize a Qdrant vector store, and create a complete RAG pipeline that draws upon your company’s data to generate accurate, context-rich responses. Whether you're developing AI chatbots, virtual assistants, or knowledge management systems, this approach ensures your applications are not only powerful but also data sovereign—an essential factor in today’s compliance-driven world.
With TIR's advanced cloud GPUs, such as the A100, H100, and L4 series, along with cost-effective pricing, deploying large-scale AI applications has never been easier. Start exploring RAG development with TIR AI Studio and transform your company’s knowledge assets into a strategic advantage.