`autogen` and `ag2` are aliases for the same PyPI package. Any one of the following three lines works: `pip install autogen`, `pip install ag2`, or `pip install pyautogen`. All of them use the same import statement: `import autogen`.
In version 1.0 and later of the `openai` library, OpenAI renamed the `api_base` parameter to `base_url`. So for older versions, use `api_base`, but for newer versions use `base_url`.
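As an illustration, a model configuration entry under the two library versions might look like the following sketch (the endpoint URL and key are placeholders, not real values):

```python
# Hypothetical config entries; the URL and key below are placeholders.
# With openai >= 1.0 (newer autogen versions), use `base_url`:
config_new = {
    "model": "gpt-4",
    "api_key": "sk-placeholder",
    "base_url": "http://localhost:8000/v1",
}

# With openai < 1.0 (older versions), the same field was `api_base`:
config_old = {
    "model": "gpt-4",
    "api_key": "sk-placeholder",
    "api_base": "http://localhost:8000/v1",
}
```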
You can set `max_retries` to handle rate limit errors, and `timeout` to handle timeout errors. They can both be specified in `llm_config` for an agent, which will be used in the OpenAI client for LLM inference. They can be set differently for different clients if they are set in the `config_list`.
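A minimal sketch of such a configuration (the model names, key, and numeric values are illustrative, not recommendations):

```python
# Hypothetical llm_config sketch; keys and values are illustrative.
llm_config = {
    "config_list": [
        {
            "model": "gpt-4",
            "api_key": "sk-placeholder",
            "max_retries": 10,  # retry failed requests up to 10 times for this client
            "timeout": 300,     # per-request timeout (seconds) for this client
        },
        {
            "model": "gpt-3.5-turbo",
            "api_key": "sk-placeholder",
            "timeout": 60,      # a fallback client configured with a shorter timeout
        },
    ],
}
```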
- `max_retries` (int): the total number of times allowed for retrying failed requests for a single client.
- `timeout` (int): the timeout (in seconds) for a single client.

Each time you call `initiate_chat`, the conversation restarts by default. You can use `send` or `initiate_chat(clear_history=False)` to continue the conversation.
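A sketch of the difference, assuming two previously constructed agents named `user_proxy` and `assistant` (the names and messages are illustrative):

```python
# Start a conversation; by default this clears any previous history.
user_proxy.initiate_chat(assistant, message="Plot a chart of stock prices.")

# Continue the same conversation with a follow-up message:
user_proxy.send("Now add a title to the chart.", assistant)

# Or call initiate_chat again while keeping the accumulated history:
user_proxy.initiate_chat(assistant, message="Now add a title.", clear_history=False)
```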
`max_consecutive_auto_reply` vs `max_turns` vs `max_round`:

- `max_consecutive_auto_reply` limits the maximum number of consecutive auto replies (a reply from an agent without human input is considered an auto reply). It plays a role only when `human_input_mode` is not "ALWAYS".
- `max_turns` in `ConversableAgent.initiate_chat` limits the number of conversation turns between two conversable agents (without differentiating auto reply and reply/input from a human).
- `max_round` in `GroupChat` specifies the maximum number of rounds in a group chat session.

To have the code saved in a file before execution, the system message should contain the line "If you want the user to save the code in a file before executing it, put `# filename: <filename>` inside the code block as the first line." This line is already part of the default system message of the `AssistantAgent`.
If `# filename` still doesn't appear in the suggested code, consider adding explicit instructions such as "save the code to disk" in the initial user message passed to `initiate_chat`.
The `AssistantAgent` doesn't save all the code by default, because there are cases in which one would just like to finish a task without saving the code.
You can specify `code_execution_config` in the agent's constructor.
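For example, a sketch of such a configuration (the agent name and directory are illustrative):

```python
# Illustrative code_execution_config; "python:3" selects the full image
# instead of the default "python:3-slim".
code_execution_config = {
    "work_dir": "coding",      # directory where generated code is run
    "use_docker": "python:3",  # Docker image used to execute the code
}

# It would then be passed to the agent's constructor, e.g.:
# user_proxy = UserProxyAgent("user_proxy", code_execution_config=code_execution_config)
```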
Setting `use_docker` to an image name such as `python:3` in `code_execution_config` means the code will be executed in a Docker container with that image. By default, the image name is `python:3-slim` if not specified. The `work_dir` specifies the directory where the code will be executed.
If you have problems with agents running `pip install`, or get errors similar to `Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))`, you can choose `python:3` as the image as shown in the code example above, and that should solve the problem.
By default, code is run in a Docker container. If you want to run code locally (not recommended), then `use_docker` can be set to `False` in `code_execution_config` for each code-execution agent, or the `AUTOGEN_USE_DOCKER` environment variable can be set to `False`.
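Both options can be sketched as follows (the directory name is illustrative):

```python
import os

# Option 1: disable Docker for a single agent's code execution.
code_execution_config = {"work_dir": "coding", "use_docker": False}

# Option 2: disable Docker globally via an environment variable
# (must be set before the code executor is created).
os.environ["AUTOGEN_USE_DOCKER"] = "False"
```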
You can also develop your AG2 application in a Docker container. For example, when developing in a GitHub Codespace, AG2 runs in a Docker container. If you are not developing in GitHub Codespaces, follow the instructions here to install and run AG2 in a Dev Container.
When using `gpt-3.5-turbo`, you may often encounter agents going into a "gratitude loop": when they complete a task, they begin congratulating and thanking each other in a continuous loop. This is a limitation in the performance of `gpt-3.5-turbo`, in contrast to `gpt-4`, which has no problem remembering instructions. It can hinder the experimentation experience when trying to test out your own use case with cheaper models.

A workaround is to add an additional termination notice to the prompt. This acts as a "little nudge" for the LLM to remember that it needs to terminate the conversation when its task is complete. You can do this by appending a string such as the following to your user input string:
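For example, a termination notice along these lines can be appended to the task description (the exact wording is a suggestion, not a fixed API):

```python
# A "little nudge" appended to the user's task description.
termination_notice = (
    "\n\nDo not show appreciation in your responses, say only what is necessary. "
    'if "Thank you" or "You\'re welcome" are said in the conversation, then say TERMINATE '
    "to indicate the conversation is finished and this is your last message."
)

prompt = "Some user task here"  # placeholder task
prompt += termination_notice
```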
To work around chromadb's "unsupported version of sqlite3" error, install a newer, bundled SQLite: `pip install pysqlite3-binary`
mkdir /home/vscode/.local/lib/python3.10/site-packages/google/colab
You can register a hook that runs each time `generate_reply` is called for an agent. For example, a `print_messages` function that is called each time the agent's `generate_reply` is triggered after receiving a message.
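A sketch of such a hook, assuming AG2's `register_reply` mechanism (the function itself is plain Python; the commented registration line shows how it would typically be attached):

```python
def print_messages(recipient, messages=None, sender=None, config=None):
    """Print the latest message each time the agent's generate_reply is triggered."""
    if messages:
        who = sender.name if sender is not None else "unknown"
        print(f"{who} -> {recipient.name}: {messages[-1]['content']}")
    # Returning (False, None) means: no final reply is produced here,
    # so the agent's normal reply generation continues.
    return False, None

# Registration sketch (assumes an existing ConversableAgent `assistant`):
# assistant.register_reply([Agent, None], reply_func=print_messages, config={"callback": None})
```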
Set `cache_path_root` to a location where the application has access. For example:
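A sketch, assuming AG2's `Cache.disk` API (the path and agent names are illustrative):

```python
from autogen import Cache

# Store the LLM cache under a directory the application can write to.
with Cache.disk(cache_path_root="/tmp/autogen_cache") as cache:
    user_proxy.initiate_chat(assistant, message="...", cache=cache)  # agents assumed to exist
```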
- Set `code_execution_config` to `False` for each code-execution agent.
- `use_docker` can be set to `False` in `code_execution_config` for each code-execution agent.
- Set `AUTOGEN_USE_DOCKER` to `False` as an environment variable.

This error occurs when using an `ag2`
version earlier than 0.2.27 in combination with OpenAI library version 1.21 or later. The issue arises because the older version of `ag2` does not support the `file_ids` parameter used by newer versions of the OpenAI API. To resolve this issue, upgrade your `ag2` package to version 0.2.27 or higher (e.g. `pip install --upgrade ag2`), which ensures compatibility between AG2 and the OpenAI library.
replace the `apt-get update` step with the following: