GPT4All
This page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example.
Installation and Setup
- Install the Python package with
pip install gpt4all
- Download a GPT4All model and place it in your desired directory. In this example, we are using mistral-7b-openorca.Q4_0.gguf (an alternative that lets the wrapper download the model for you is sketched after the commands below):
mkdir models
wget https://gpt4all.io/models/gguf/mistral-7b-openorca.Q4_0.gguf -O models/mistral-7b-openorca.Q4_0.gguf
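If you would rather not download the file by hand, the wrapper exposes an allow_download flag that it forwards to the underlying gpt4all client, letting the client fetch the model on first use. A minimal sketch, assuming the flag behaves as it does in the gpt4all Python package:
from langchain_community.llms import GPT4All
# allow_download is forwarded to the gpt4all client, which fetches the
# model file if it is not already present (assumption: the download
# location and naming follow the gpt4all client's conventions).
model = GPT4All(
    model="mistral-7b-openorca.Q4_0.gguf",
    allow_download=True,
)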
Usage
GPT4All
To use the GPT4All wrapper, you need to provide the path to the pre-trained model file and the model's configuration.
from langchain_community.llms import GPT4All
# Instantiate the model. Callbacks support token-wise streaming
model = GPT4All(model="./models/mistral-7b-openorca.Q4_0.gguf", n_threads=8)
# Generate text
response = model.invoke("Once upon a time, ")
API Reference: GPT4All
You can also customize the generation parameters, such as n_predict, temp, top_p, top_k, and others.
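For example, a minimal sketch with placeholder values (the numbers below are illustrative, not tuned recommendations):
from langchain_community.llms import GPT4All
# The parameter values below are placeholders for illustration only.
model = GPT4All(
    model="./models/mistral-7b-openorca.Q4_0.gguf",
    n_threads=8,
    n_predict=256,  # maximum number of tokens to generate
    temp=0.7,       # sampling temperature
    top_p=0.4,      # nucleus-sampling cutoff
    top_k=40,       # sample only from the k most likely tokens
)
response = model.invoke("Once upon a time, ")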
To stream the model's predictions, pass in a callback handler such as StreamingStdOutCallbackHandler.
from langchain_community.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
# There are many CallbackHandlers supported, such as
# from langchain.callbacks.streamlit import StreamlitCallbackHandler
callbacks = [StreamingStdOutCallbackHandler()]
model = GPT4All(model="./models/mistral-7b-openorca.Q4_0.gguf", n_threads=8)
# Generate text. Tokens are streamed through the callback manager.
model.invoke("Once upon a time, ", callbacks=callbacks)
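Because the wrapper implements LangChain's Runnable interface, you can also stream without wiring up a callback handler by calling stream(), which yields text chunks as they are generated. A sketch, reusing the model instantiated above (the prompt is illustrative):
for chunk in model.stream("Once upon a time, "):
    # Each chunk is a piece of the generated text, printed as it arrives.
    print(chunk, end="", flush=True)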
Model File
You can download model files through the GPT4All desktop client, which is itself available from the GPT4All website.
For a more detailed walkthrough of this, see this notebook.