- Which language model provider and model to use
- How to authenticate with the provider
- Parameters that control the model’s behavior
- Optional structured output formats for standardized responses (see the sketch after this list)
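As a taste of that last point, here is a hedged sketch of structured output via a Pydantic model. The `TransactionReview` schema is hypothetical, and the availability of the `response_format` parameter for your chosen provider is an assumption:

```python
from pydantic import BaseModel

from autogen import LLMConfig


# Hypothetical schema: the model's replies are parsed into this structure
class TransactionReview(BaseModel):
    transaction_id: str
    approved: bool
    reason: str


llm_config = LLMConfig(
    api_type="openai",
    model="gpt-4o-mini",
    response_format=TransactionReview,  # assumed: structured output schema
)
```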
Supported LLM Providers
Your agents deserve options! AG2 plays nicely with an impressive lineup of model providers:
- Cloud Models: OpenAI, Anthropic, Google (Gemini), Amazon (Bedrock), Mistral AI, Cerebras, Together AI, and Groq
- Local Models: Ollama, LiteLLM, and LM Studio
Creating an LLM Configuration
Once you have installed AG2 with your preferred LLM provider, you need to create an LLM configuration object with the API type, model, and, if necessary, API key. Here are the different ways to create an LLM configuration in AG2:
Method 1: Using Direct Parameters
The simplest approach is to directly specify the model provider, model name, and authentication:
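Here is a minimal sketch of this approach, assuming AG2’s `LLMConfig` class and an `OPENAI_API_KEY` environment variable:

```python
import os

from autogen import LLMConfig

# Direct parameters: provider (api_type), model, and API key in one object
llm_config = LLMConfig(
    api_type="openai",                     # which provider to use
    model="gpt-4o-mini",                   # which model from that provider
    api_key=os.environ["OPENAI_API_KEY"],  # authentication, kept out of source
)
```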
Method 2: Using the config_list Parameter
For more advanced scenarios, especially when you want to set up fallback models, use the config_list parameter.
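For instance, a sketch of a primary model with a fallback; the specific model names and environment variables here are illustrative assumptions:

```python
import os

from autogen import LLMConfig

# Entries are tried in order: if the first model fails or is
# unavailable, AG2 falls back to the next one in the list
llm_config = LLMConfig(
    config_list=[
        {
            "api_type": "openai",
            "model": "gpt-4o-mini",
            "api_key": os.environ["OPENAI_API_KEY"],
        },
        {
            "api_type": "anthropic",
            "model": "claude-3-5-haiku-latest",  # assumed fallback model
            "api_key": os.environ["ANTHROPIC_API_KEY"],
        },
    ]
)
```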
Integrating LLM Configuration with Agents
Once you’ve created your LLM configuration, there are two ways to apply it to your agents:
Method 1: Passing as a Keyword Argument
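For example, assuming a `ConversableAgent` and the `llm_config` object created earlier (the agent name and system message are illustrative):

```python
from autogen import ConversableAgent

# Pass the configuration directly when constructing the agent
my_agent = ConversableAgent(
    name="helpful_agent",
    system_message="You are a helpful assistant.",
    llm_config=llm_config,  # the configuration created earlier
)
```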
Method 2: Using a Context Manager
The context manager approach applies the LLM configuration to all agents created within its scope:
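A sketch of that pattern, assuming `LLMConfig` works as a context manager (agent names and system messages are illustrative):

```python
from autogen import ConversableAgent

# Every agent created inside the with-block picks up llm_config,
# so it doesn't need to be passed to each constructor
with llm_config:
    finance_agent = ConversableAgent(
        name="finance_agent",
        system_message="You review financial transactions.",
    )
    summary_agent = ConversableAgent(
        name="summary_agent",
        system_message="You summarize conversations.",
    )
```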
Financial Compliance Example: LLM Configuration
Time to put theory into practice! Let’s set up the brains for our financial compliance assistant:
- We’re using OpenAI’s GPT-4o-mini model because our financial bot needs smarts without breaking the bank. You can use a different model if you prefer.
- We’ve set the temperature to 0.2 because when it comes to financial compliance, creativity is NOT what we want (sorry, creative accountants!)
- We’re keeping our API key in an environment variable because security first, folks!
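Putting those choices together, a hedged sketch of the resulting configuration (assuming `LLMConfig` accepts `temperature` alongside the provider settings):

```python
import os

from autogen import LLMConfig

# Low temperature keeps the compliance bot's answers consistent,
# and the API key stays out of the source code
llm_config = LLMConfig(
    api_type="openai",
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
    temperature=0.2,
)
```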