Configuring an LLM is crucial for your AG2 agents - it’s what gives them their thinking power! LLM Configuration defines how your agents connect to language models, specifying:
Which language model provider and model to use
How to authenticate with the provider
Parameters that control the model’s behavior
Optional structured output formats for standardized responses
Your agents deserve options! AG2 plays nicely with an impressive lineup of model providers:
Cloud Models: OpenAI, Anthropic, Google (Gemini), Amazon (Bedrock), Mistral AI, Cerebras, Together AI, and Groq
Local Models: Ollama, LiteLLM, and LM Studio
So whether you want to tap into cloud-based intelligence or keep things running on your local machine, AG2 has got you covered. You can find more information about the supported models in the AG2 Models documentation.

!!! note
    Starting with version 0.8, AG2 takes a "bring your own LLM" approach - provider packages aren't included by default, so you'll need to install your favorites explicitly, for example:
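A sketch assuming the OpenAI provider extra (other providers follow the same `ag2[<provider>]` pattern - check the installation docs for the exact extra names):

```shell
pip install ag2[openai]
```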
Once you have installed AG2 with your preferred LLM provider, you can create the LLM configuration object with the API type, model, and key if necessary.

Here are the different ways to create an LLM configuration in AG2:
The simplest approach is to directly specify the model provider, model name, and authentication:
```python
import os

from autogen import LLMConfig

llm_config = LLMConfig(
    api_type="openai",                     # The provider
    model="gpt-4o-mini",                   # The specific model
    api_key=os.environ["OPENAI_API_KEY"],  # Authentication
)
```
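If you want several models with fallback, here is a configuration sketch, assuming `LLMConfig` accepts a `config_list` of per-model entries (as described in AG2's configuration documentation); the specific Anthropic model name is just an illustration:

```python
import os

from autogen import LLMConfig

# The first entry is tried first; later entries act as fallbacks.
llm_config = LLMConfig(
    config_list=[
        {
            "api_type": "openai",
            "model": "gpt-4o-mini",
            "api_key": os.environ["OPENAI_API_KEY"],
        },
        {
            "api_type": "anthropic",
            "model": "claude-3-5-sonnet-20240620",  # illustrative model name
            "api_key": os.environ["ANTHROPIC_API_KEY"],
        },
    ]
)
```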
AG2's LLM configuration offers additional methods to create an LLM configuration, allowing you to specify multiple LLMs for fallback support and to filter them per agent. See the LLM Configuration deep-dive for more details.

!!! danger
    Never hard-code API keys or secrets in your code. Always use environment variables or secure configuration files. For example, you can set your API key in the environment like below:

=== "macOS / Linux"

    ```bash
    export OPENAI_API_KEY="YOUR_API_KEY"
    ```
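In code, it helps to fail fast when a required variable is missing instead of passing `None` into your configuration. A minimal sketch using only the standard library (the `require_env` helper is ours, not part of AG2):

```python
import os


def require_env(name: str) -> str:
    """Return an environment variable's value, failing fast if it is unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value


# api_key = require_env("OPENAI_API_KEY")  # then pass api_key to your LLM config
```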
You then pass this configuration to your agents via the `llm_config` parameter:

```python
from autogen import ConversableAgent

my_agent = ConversableAgent(
    name="helpful_agent",
    system_message="You are a poetic AI assistant",
    llm_config=llm_config,
)
```
Time to put theory into practice! Let’s set up the brains for our financial compliance assistant:
```python
import os

from autogen import LLMConfig

# Basic configuration using an environment variable
llm_config = LLMConfig(
    api_type="openai",
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
    temperature=0.2,  # Lower temperature for more consistent financial analysis
)
```
Code walkthrough:
We’re using OpenAI’s GPT-4o-mini model because our financial bot needs smarts without breaking the bank. You can use a different model if you prefer.
We’ve set temperature to 0.2 because when it comes to financial compliance, creativity is NOT what we want (sorry, creative accountants!)
We’re keeping our API key in an environment variable because security first, folks!
This configuration gives our financial compliance assistant the right balance of intelligence, consistency, and security - exactly what you want when dealing with suspicious transactions.
Now that you’ve got the brains sorted for your AG2 agents, it’s time to give them a body! Head over to ConversableAgent to create actual thinking agents powered by your LLM configuration.