Teachability addresses these
limitations by persisting user teachings across chat boundaries in
long-term memory (a vector database). Memories (called memos) are
created and saved to disk throughout a conversation, then loaded from
disk later. Instead of copying all the memos into the context window,
which would eat up valuable space, individual memos are retrieved into
context only as needed. This allows the user to teach many facts,
preferences, and skills to the teachable agent just once and have it
remember them in later chats.
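The retrieve-as-needed idea can be sketched with a toy in-memory store. This is illustrative only: Teachability itself persists memos in an on-disk vector database with learned embeddings, not the bag-of-words similarity used here.

```python
# Toy sketch of "retrieve memos into context only as needed".
# Real Teachability uses a persistent vector DB and embedding model.
from collections import Counter
import math

class ToyMemoStore:
    def __init__(self):
        self.memos = []  # list of (input_text, output_text) pairs

    def _vec(self, text):
        return Counter(text.lower().split())

    def _sim(self, a, b):
        dot = sum(a[w] * b.get(w, 0) for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def add(self, input_text, output_text):
        self.memos.append((input_text, output_text))

    def retrieve(self, query, threshold=0.2):
        # Only memos similar enough to the query are pulled into context,
        # so the context window is not filled with every stored memo.
        qv = self._vec(query)
        return [out for inp, out in self.memos
                if self._sim(qv, self._vec(inp)) >= threshold]

store = ToyMemoStore()
store.add("What is the Vicuna model", "Vicuna is a 13B-parameter language model.")
store.add("User prefers bullet points", "Format answers as bullet lists.")
print(store.retrieve("Tell me about the Vicuna model"))
# -> ['Vicuna is a 13B-parameter language model.']
```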
In making decisions about memo storage and retrieval, Teachability
calls an instance of TextAnalyzerAgent to analyze pieces of text in
several different ways. This adds extra LLM calls involving a relatively
small number of tokens. These calls can add a few seconds to the time a
user waits for a response.
This notebook demonstrates how Teachability can be added to an agent
so that it can learn facts, preferences, and skills from users. To chat
with a teachable agent yourself, run
chat_with_teachable_agent.py.
Requirements
Some extra dependencies are needed for this notebook, which can be installed via pip. For more information, please refer to the installation guide.
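For example (the exact package extra may vary by AutoGen version; consult the installation guide for your release):

```shell
pip install "pyautogen[teachable]"
```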
Set your API Endpoint
The config_list_from_json
function loads a list of configurations from an environment variable or
a JSON file.
Construct Agents
For this walkthrough, we start by creating a teachable agent and resetting its memory store. This deletes any memories from prior conversations that may be stored on disk.
Learning new facts
Let’s teach the agent some facts it doesn’t already know, since they are more recent than GPT-4’s training data.
Now let’s start a new chat by passing clear_history=True to
initiate_chat. At this point, a common LLM-based assistant would
forget everything from the last chat. But a teachable agent can retrieve
memories from its vector DB as needed, allowing it to recall and reason
over things that the user taught it in earlier conversations.
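Putting this together, here is a sketch of the full loop, assuming the AutoGen 0.2-style Teachability capability; module paths and parameter names may differ in your version, and a valid OAI_CONFIG_LIST with real API keys is required to actually run it.

```python
import autogen
from autogen.agentchat.contrib.capabilities.teachability import Teachability

config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")
llm_config = {"config_list": config_list, "timeout": 120}

# The agent that will learn from the user.
teachable_agent = autogen.ConversableAgent(
    name="teachable_agent",
    llm_config=llm_config,
)

# Add the Teachability capability. reset_db=True wipes any memos that
# earlier runs saved to the on-disk vector DB.
teachability = Teachability(
    reset_db=True,
    path_to_db_dir="./tmp/teachability_db",
)
teachability.add_to_agent(teachable_agent)

user = autogen.UserProxyAgent(
    name="user",
    human_input_mode="NEVER",
    code_execution_config={"use_docker": False},
)

# First chat: teach the agent a fact.
user.initiate_chat(
    teachable_agent,
    message="The Vicuna model was released in March 2023.",
    max_turns=1,  # keep the sketch to a single exchange
)

# New chat: clear_history=True discards the transcript, but the memo
# store persists, so the agent can still recall the taught fact.
user.initiate_chat(
    teachable_agent,
    message="When was the Vicuna model released?",
    clear_history=True,
    max_turns=1,
)
```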