By default, LlamaIndex uses the text-davinci-003 model. For improved results, you can change to a different model using these instructions. We are currently using OpenAI models, but LlamaIndex can be connected to any LLM that the LangChain project supports. Here, I am going to change from text-davinci-003 to gpt-3.5-turbo.
We can take our script from Configure LlamaIndex to read from Google Calendar as an example.
We'll need to pull in some new dependencies that allow us to define the model we'd like to use (namely LLMPredictor and ServiceContext from llama_index, and ChatOpenAI from langchain).
from llama_index import GPTVectorStoreIndex, download_loader, LLMPredictor, ServiceContext
from langchain.chat_models import ChatOpenAI
Now, we can define the LLM (large language model) we would like to use, then create a service context with that LLM.
# define LLM
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
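This is also where the earlier claim pays off: LLMPredictor wraps any LangChain LLM, so a different LangChain-supported model plugs in the same way. Here's a rough sketch, assuming LangChain's OpenAI completion wrapper (the import path may vary by LangChain version); the rest of this walkthrough continues with the gpt-3.5-turbo service context above.

# Sketch only: langchain.llms.OpenAI wraps completion models such as
# text-davinci-003; any other LangChain LLM could stand in here.
from langchain.llms import OpenAI

davinci_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="text-davinci-003"))
davinci_context = ServiceContext.from_defaults(llm_predictor=davinci_predictor)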
Finally, we pass that service context to the vector index when we create it.
# Setup data loader
GoogleCalendarReader = download_loader('GoogleCalendarReader')
loader = GoogleCalendarReader()
# load data
documents = loader.load_data()
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
# query model
query_engine = index.as_query_engine()
response = query_engine.query('Today is May 9th, 2023. When is my next meeting with Austin?')
print(response)
Now, any queries to the index will use the specified model.
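For example, a follow-up question of my own (not part of the original script) reuses the same query engine and is therefore answered by gpt-3.5-turbo as well:

# Illustrative follow-up query; the index was built with our service
# context, so this response also comes from gpt-3.5-turbo.
response = query_engine.query('What meetings do I have tomorrow?')
print(response)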