As of version 0.6.0, the llama_index library no longer logs runtime information about its processes by default. You can enable logging by adding the following lines to the beginning of your script:
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
This will show the token counts for embedding and LLM usage.
The level can be changed to logging.DEBUG for even more detail, such as the actual data being processed.
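For example, the same configuration at the more verbose level looks like this (only the level argument changes; the handler setup is identical to the snippet above):
import logging
import sys
# Send all log output, including DEBUG messages, to stdout
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
Keep in mind that DEBUG output can be quite verbose, so it is best enabled only while troubleshooting.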