
Change the model used by LlamaIndex

The steps necessary to make LlamaIndex use a different large language model than text-davinci-003.

Posted: May 9, 2023

By default, LlamaIndex uses the text-davinci-003 model. For improved results, you can switch to a different model by following these instructions.

We are currently using OpenAI models, but LlamaIndex can be connected to any LLM that the LangChain project supports.
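
As a quick illustration of that flexibility (not part of the original script), a hypothetical swap to Cohere's model would only change which LangChain class we instantiate. This sketch assumes the cohere package is installed and a COHERE_API_KEY environment variable is set:

```python
# Hypothetical sketch: any LangChain LLM wrapper can stand in for the
# OpenAI classes used below. Assumes `pip install cohere` and that
# COHERE_API_KEY is set in your environment.
from langchain.llms import Cohere

llm = Cohere(temperature=0)
print(llm("Say hello."))  # LangChain LLMs are callable with a plain prompt
```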

Here, I am going to change from `text-davinci-003` to `gpt-3.5-turbo`.

`gpt-3.5-turbo` is technically a chat model, not a text completion model. LlamaIndex's abstraction still works, but it's worth keeping in mind that things are different under the hood.
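
To make that difference concrete, here is an illustrative sketch of the raw LangChain interfaces (LlamaIndex wraps all of this for us, and the prompts are made up for illustration): a text model completes a plain string, while a chat model exchanges a list of messages.

```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Text completion model: prompt string in, completion string out
text_llm = OpenAI(model_name="text-davinci-003")
print(text_llm("When is my next meeting?"))

# Chat model: list of messages in, an AIMessage out
chat_llm = ChatOpenAI(model_name="gpt-3.5-turbo")
print(chat_llm([HumanMessage(content="When is my next meeting?")]))
```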

We can take our script from Configure LlamaIndex to read from Google Calendar as an example.

We’ll need to pull in some new dependencies that allow us to define the model we’d like to use (namely, LLMPredictor and ServiceContext from llama_index, and ChatOpenAI from langchain).

from llama_index import GPTVectorStoreIndex, download_loader, LLMPredictor, ServiceContext
from langchain.chat_models import ChatOpenAI

Now, we can define the LLM (large language model) we would like to use, then create a service context with that LLM.

> Whenever we define the LLM we'd like to use, we can set its temperature. Temperature is a representation of how "random" or "creative" the model will be. Since we don't want the model to be creative and instead want it to base its answers solely on the data we provide, we set it to 0.
# define LLM
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
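
The same pattern works for other OpenAI chat models; for example, swapping in gpt-4 (assuming your API key has access to it, which the original script does not require) only changes model_name:

```python
# Same pattern with a different chat model (assumes gpt-4 access)
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-4"))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
```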

Finally, we pass that service context to the vector index when we create it.

# Set up the data loader
GoogleCalendarReader = download_loader('GoogleCalendarReader')
loader = GoogleCalendarReader()

# load data
documents = loader.load_data()
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)

# query the index using the configured model
query_engine = index.as_query_engine()
response = query_engine.query('Today is May 9th, 2023. When is my next meeting with Austin?')
print(response)

Now, any queries to the index will use the specified model.
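
If you don't have the Google Calendar loader configured, here is a self-contained sketch of the whole pattern using SimpleDirectoryReader instead; the ./data folder of text files is an assumption for illustration, and you'll need OPENAI_API_KEY set in your environment:

```python
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader, LLMPredictor, ServiceContext
from langchain.chat_models import ChatOpenAI

# Define the LLM and wrap it in a service context
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

# Load local documents (./data is a placeholder path) and build the index
documents = SimpleDirectoryReader("./data").load_data()
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)

# Every query now goes through gpt-3.5-turbo
response = index.as_query_engine().query("What do these documents cover?")
print(response)
```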