Top-K Similarity Search - Ask A Book A Question#

In this tutorial we’ll walk through a simple example of basic retrieval via Top-K similarity search.

# pip install langchain --upgrade
# Version: 0.0.164

# !pip install pypdf
# Unzip data folder

import zipfile
with zipfile.ZipFile('../../data.zip', 'r') as zip_ref:
    zip_ref.extractall('..')
# PDF Loaders. If unstructured gives you a hard time, try PyPDFLoader
from langchain.document_loaders import UnstructuredPDFLoader, OnlinePDFLoader, PyPDFLoader, TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from dotenv import load_dotenv
import os

load_dotenv()
True

Load your data#

Next let’s load up some data. I’ve included a few ‘loaders’ below which will load data from different locations. Feel free to use the one that suits you. The default one reads one of Paul Graham’s essays as a simple example. This step only creates the loader; nothing is actually loaded until we call .load().

loader = TextLoader(file_path="../data/PaulGrahamEssays/vb.txt")

## Other options for loaders 
# loader = PyPDFLoader("../data/field-guide-to-data-science.pdf")
# loader = UnstructuredPDFLoader("../data/field-guide-to-data-science.pdf")
# loader = OnlinePDFLoader("https://wolfpaulus.com/wp-content/uploads/2017/05/field-guide-to-data-science.pdf")

Then let’s go ahead and actually load the data.

data = loader.load()

Now let’s check out what was loaded.

# Note: If you're using PyPDFLoader then it will split by page for you already
print (f'You have {len(data)} document(s) in your data')
print (f'There are {len(data[0].page_content)} characters in your sample document')
print (f'Here is a sample: {data[0].page_content[:200]}')
You have 1 document(s) in your data
There are 9155 characters in your sample document
Here is a sample: January 2016Life is short, as everyone knows. When I was a kid I used to wonder
about this. Is life actually short, or are we really complaining
about its finiteness?  Would we be just as likely to fe

Chunk your data up into smaller documents#

While we could pass the entire essay to a model with a long context window, we want to be picky about which information we share with our model. The better our signal-to-noise ratio, the more likely we are to get the right answer.

The first thing we’ll do is chunk up our document into smaller pieces. The goal will be to take only a few of those smaller pieces and pass them to the LLM.

# We'll split our data into chunks around 500 characters each with a 50 character overlap. These are relatively small.

text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
texts = text_splitter.split_documents(data)
# Let's see how many small chunks we have
print (f'Now you have {len(texts)} documents')
Now you have 20 documents
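
If you want to confirm what the splitter did, a quick sketch like this inspects the chunk sizes and the overlap between neighbors. Note the overlap is approximate, since RecursiveCharacterTextSplitter prefers to break on natural boundaries like paragraphs and sentences:

# Inspect the first two chunks to see the chunk size and overlap in action
print (f'Chunk 0 is {len(texts[0].page_content)} characters')
print (f'Chunk 1 is {len(texts[1].page_content)} characters')

# The start of chunk 1 should repeat text from near the end of chunk 0
print (f'End of chunk 0:   ...{texts[0].page_content[-60:]}')
print (f'Start of chunk 1: {texts[1].page_content[:60]}...')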

Option #1: Chroma (for local)#

I like Chroma because it’s local and easy to set up without an account.

First we’ll pass our texts to Chroma via .from_documents. This will 1) embed each document to get a vector and 2) add them to the vectorstore for retrieval later.

# load it into Chroma (embeddings turns each chunk into a vector via OpenAI's embeddings API)
embeddings = OpenAIEmbeddings(openai_api_key=os.getenv('OPENAI_API_KEY', 'YourAPIKey'))
vectorstore = Chroma.from_documents(texts, embeddings)
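
If you’re curious what the ‘embed’ step actually produces, you can call the embedding model directly. A quick sketch (the 1536-dimension figure assumes OpenAIEmbeddings’ default model, text-embedding-ada-002):

# Peek at a raw embedding vector (assumes the default text-embedding-ada-002
# model, which returns 1536-dimensional vectors)
sample_vector = embeddings.embed_query('Life is short')
print (f'Embedding dimension: {len(sample_vector)}')
print (f'First 5 values: {sample_vector[:5]}')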

Let’s test it out. I want to see which documents are most closely related to a query.

query = "What is great about having kids?"
docs = vectorstore.similarity_search(query)

Then we can check them out. In theory, the texts deemed most similar should hold the answer to our question. But keep in mind that our query just happens to be a question; it could be a random statement or sentence and the search would still work (there’s a quick demo of this after the results below).

# Here are the documents that were returned
for doc in docs:
    print (f"{doc.page_content}\n")
jabs into your consciousness like a pin.The things that matter aren't necessarily the ones people would
call "important."  Having coffee with a friend matters.  You won't
feel later like that was a waste of time.One great thing about having small children is that they make you
spend time on things that matter: them. They grab your sleeve as
you're staring at your phone and say "will you play with me?" And

the question, and the answer is that life actually is short.Having kids showed me how to convert a continuous quantity, time,
into discrete quantities. You only get 52 weekends with your 2 year
old.  If Christmas-as-magic lasts from say ages 3 to 10, you only
get to watch your child experience it 8 times.  And while it's
impossible to say what is a lot or a little of a continuous quantity
like time, 8 is not a lot of something.  If you had a handful of 8

January 2016Life is short, as everyone knows. When I was a kid I used to wonder
about this. Is life actually short, or are we really complaining
about its finiteness?  Would we be just as likely to feel life was
short if we lived 10 times as long?Since there didn't seem any way to answer this question, I stopped
wondering about it.  Then I had kids.  That gave me a way to answer

done that we didn't.  My oldest son will be 7 soon.  And while I
miss the 3 year old version of him, I at least don't have any regrets
over what might have been.  We had the best time a daddy and a 3
year old ever had.Relentlessly prune bullshit, don't wait to do things that matter,
and savor the time you have.  That's what you do when life is short.Notes[1]
At first I didn't like it that the word that came to mind was
one that had other meanings.  But then I realized the other meanings
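
By default, similarity_search returns the top 4 matches. You can control that with the k parameter, and as noted above the query doesn’t have to be a question. A quick sketch (the statement below is just an illustrative query):

# Ask for only the top 2 matches, using a plain statement as the query
statement = 'Children make you spend time on things that matter'
top_docs = vectorstore.similarity_search(statement, k=2)
for doc in top_docs:
    print (f'{doc.page_content[:100]}...\n')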

Option #2: Pinecone (for cloud)#

If you want to use Pinecone, run the code below; if not, skip this section and stick with the Chroma vectorstore from above. You’ll need to go to Pinecone.io and set up an account first.

# import pinecone
# from langchain.vectorstores import Pinecone

# PINECONE_API_KEY = os.getenv('PINECONE_API_KEY', 'YourAPIKey')
# PINECONE_API_ENV = os.getenv('PINECONE_API_ENV', 'us-east1-gcp') # You may need to switch this to your environment

# # initialize pinecone
# pinecone.init(
#     api_key=PINECONE_API_KEY,  # find at app.pinecone.io
#     environment=PINECONE_API_ENV  # next to api key in console
# )
# index_name = "langchaintest" # put the name of your pinecone index here

# docsearch = Pinecone.from_texts([t.page_content for t in texts], embeddings, index_name=index_name)

Query those docs to get your answer back#

Great, those are just the docs which should hold our answer. Now we can pass them to a LangChain chain to query the LLM.

We could do this manually, but a chain is a convenient helper for us.
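
For reference, the manual version would look roughly like this. It’s a sketch, not the chain’s exact internals; the prompt wording here is just an assumption, not what load_qa_chain actually uses:

# A rough, self-contained sketch of the 'manual' approach: stuff the
# retrieved chunks into a single prompt and call the chat model directly
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

manual_llm = ChatOpenAI(temperature=0, openai_api_key=os.getenv('OPENAI_API_KEY', 'YourAPIKey'))

# 'docs' and 'query' come from the similarity search above
context = "\n\n".join(doc.page_content for doc in docs)
prompt = f"Use the following context to answer the question.\n\nContext:\n{context}\n\nQuestion: {query}"
print (manual_llm([HumanMessage(content=prompt)]).content)

The chain below does the same thing with less ceremony.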

from langchain.chat_models import ChatOpenAI
from langchain.chains.question_answering import load_qa_chain

llm = ChatOpenAI(temperature=0, openai_api_key=os.getenv('OPENAI_API_KEY', 'YourAPIKey'))
chain = load_qa_chain(llm, chain_type="stuff")
query = "What is great about having kids?"
docs = vectorstore.similarity_search(query)
chain.run(input_documents=docs, question=query)
'One great thing about having kids is that they make you spend time on things that matter. They remind you to prioritize the important things in life, like spending quality time with them. Having kids can also bring a sense of joy and fulfillment as you watch them grow and experience new things.'

Awesome! We just went and queried an external data source!