AssistantAgent and
UserProxyAgent to solve a challenging math problem with human
feedback. Here AssistantAgent is an LLM-based agent that can write
Python code (in a Python coding block) for a user to execute for a given
task. UserProxyAgent is an agent which serves as a proxy for a user to
execute the code written by AssistantAgent. By setting
human_input_mode properly, the UserProxyAgent can also prompt the
user for feedback to AssistantAgent. For example, when
human_input_mode is set to “ALWAYS”, the UserProxyAgent will always
prompt the user for feedback. When user feedback is provided, the
UserProxyAgent will directly pass the feedback to AssistantAgent.
When no user feedback is provided, the UserProxyAgent will execute the
code written by AssistantAgent and return the execution results
(success or failure and corresponding outputs) to AssistantAgent.
Requirements
AG2 requires Python>=3.9. To run this notebook example, please
install:
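A minimal install command, assuming the OpenAI extra (the `ag2[openai]` package name follows the AG2 documentation; swap the extra for your model provider if needed):

```shell
pip install "ag2[openai]"
```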
Set your API Endpoint
The config_list_from_json
function loads a list of configurations from an environment variable or
a JSON file.
Construct Agents
We construct the assistant agent and the user proxy agent.
Perform a task
We invoke the initiate_chat() method of the user proxy agent to start
the conversation. When you run the cell below, you will be prompted to
provide feedback after receiving a message from the assistant agent. If
you don’t provide any feedback (by pressing Enter directly), the user
proxy agent will try to execute the code suggested by the assistant
agent on your behalf, or terminate if the assistant agent sends a
“TERMINATE” signal at the end of the message.
Analyze the conversation
The human user can provide feedback at each step. When the user does not provide feedback, the code is executed. The execution results and error messages are returned to the assistant, which can modify the code based on them. In the end, the task is completed and the assistant sends a “TERMINATE” signal. The user skipped feedback at the end, so the conversation finished. Afterwards, we can save the conversation between the two agents; it can be accessed from user_proxy.chat_messages.