Model settings and credentials are commonly kept out of source code, for example in an `OAI_CONFIG_LIST` file.

An agent's access to LLMs is configured through the `llm_config` argument in its constructor. For example, the following snippet shows a configuration that uses gpt-4o:
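A minimal sketch (assuming the key is available in the `OPENAI_API_KEY` environment variable):

```python
import os

llm_config = {
    "config_list": [
        {
            "api_type": "openai",  # model provider
            "model": "gpt-4o",     # model identifier
            "api_key": os.environ["OPENAI_API_KEY"],
        }
    ],
}
```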
This `llm_config` can then be passed to an agent's constructor to enable it to use the LLM.
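For example (a sketch using `autogen.AssistantAgent`; other conversable agents take the same argument):

```python
import autogen

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
```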
### Introduction to the `config_list`

Different tasks may require different models, and the `config_list` allows specifying the different endpoints and configurations that are to be used. It is a list of dictionaries, each of which contains the following keys, depending on the kind of endpoint being used:
- `api_type` (str, required): The model provider (e.g. "openai" or "azure").
- `model` (str, required): The identifier of the model to be used, such as 'gpt-4o' or 'gpt-4o-mini'.
- `api_key` (str, optional): The API key required for authenticating requests to the model's API endpoint.
- `base_url` (str, optional): The base URL of the API endpoint. This is the root address where API calls are directed.
- `tags` (List[str], optional): Tags which can be used for filtering.

### `OAI_CONFIG_LIST` pattern

A common, useful pattern is to define this `config_list` via JSON (specified as a file or an environment variable set to a JSON-formatted string) and then use the `from_json` method to load it:
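A sketch of the pattern (assuming the JSON list is stored in an `OAI_CONFIG_LIST` environment variable or a file of that name):

```python
import autogen
from autogen import LLMConfig

# LLMConfig API: load the JSON from the OAI_CONFIG_LIST environment
# variable (or the file it points to).
llm_config = LLMConfig.from_json(env="OAI_CONFIG_LIST")

# Equivalent classic helper, which takes the env_or_file argument
# described below.
config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")
```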
The `env_or_file` argument is interpreted as follows:

- If `env_or_file` is an environment variable, it will first try to load the file from the path specified in the environment variable; if there is no file, it will try to interpret the environment variable as a JSON string.
- Otherwise, it will try to open the file at the path specified by `env_or_file`.

If the JSON is held in an environment variable (for example one named `LLM_CONFIG`), the loaded configuration can be narrowed to specific entries with the `where` method, chained directly onto `from_json`:
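For instance (a sketch; the variable name and model filter are illustrative):

```python
from autogen import LLMConfig

# Load the JSON from the LLM_CONFIG environment variable and keep
# only the entries whose model is "gpt-4o".
llm_config = LLMConfig.from_json(env="LLM_CONFIG").where(model="gpt-4o")
```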
Tags offer another way to filter. Model names can differ between providers, and tags, a list of strings in each entry of the `config_list`, are a simple way to smooth over this inconsistency; for example, for the following `config_list`:
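(An illustrative list; the deployment names, URL, and tags are placeholders.)

```python
config_list = [
    {
        "api_type": "openai",
        "model": "my-gpt-4o-deployment",
        "api_key": "",
        "tags": ["gpt4o", "openai"],
    },
    {
        "api_type": "openai",
        "model": "llama-7B",
        "base_url": "http://127.0.0.1:8080",
        "tags": ["llama", "local"],
    },
]
```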
When filtering the `config_list`, you can specify the desired tags. A config is selected if it has at least one of the tags specified in the filter. For example, to just get the llama model, you can use the following filter:
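A sketch using the `filter_config` helper (the second tag just shows that a single match suffices):

```python
import autogen

# A config is kept if it has at least one of the tags in the filter.
filter_dict = {"tags": ["llama", "another_tag"]}
config_list = autogen.filter_config(config_list, filter_dict)
assert len(config_list) == 1  # only the llama entry remains
```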
AutoGen performs a deepcopy on the `llm_config` to ensure that the configuration passed by the user is not modified internally. If the `llm_config` contains objects of a class that does not support deepcopy, you may get an error; to fix this, you need to implement a `__deepcopy__` method for the class.

The example below shows how to implement a `__deepcopy__` method for an HTTP client and add a proxy.
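A sketch using `httpx` (the proxy address and model entry are placeholders):

```python
import httpx


class MyHttpClient(httpx.Client):
    def __deepcopy__(self, memo):
        # Returning self (rather than a real copy) lets AutoGen deepcopy
        # the llm_config without failing on the non-copyable client.
        return self


config_list = [
    {
        "api_type": "openai",
        "model": "my-gpt-4o-deployment",
        "api_key": "",
        "http_client": MyHttpClient(proxy="http://localhost:8030"),
    }
]

llm_config = {"config_list": config_list}
```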
To set up Azure Active Directory (AAD) authentication:

1. Register an application in AAD:
   - Navigate to the Azure portal and go to Azure Active Directory > App registrations.
   - Click New registration, enter a name for your application, set the Redirect URI (optional), and click Register.
2. Configure API permissions:
   - After registration, go to API permissions and click Add a permission.
   - Select Microsoft Graph and then Delegated permissions, and add the permissions you need (e.g., User.Read).
3. Obtain the Client ID and Tenant ID:
   - Go to the Overview of your registered application.
   - Note down the Application (client) ID and Directory (tenant) ID.
4. Use the noted Client ID and Tenant ID in your application configuration. Here's an example of how to do this in your configuration file:
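A sketch of what such a configuration entry might look like (the deployment name, resource URL, and API version are placeholders; with `"azure_ad_token_provider": "DEFAULT"` the default Azure credential is used, which can read the Client ID and Tenant ID from environment variables such as `AZURE_CLIENT_ID` and `AZURE_TENANT_ID`):

```json
[
    {
        "api_type": "azure",
        "model": "YOUR_DEPLOYMENT_NAME",
        "base_url": "https://YOUR_RESOURCE.openai.azure.com/",
        "api_version": "2024-02-01",
        "azure_ad_token_provider": "DEFAULT"
    }
]
```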
To use AAD auth with Azure OpenAI, configure the `llm_config` with the necessary parameters.
Here is an example configuration:
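A sketch (the endpoint and deployment name are placeholders):

```python
llm_config = {
    "config_list": [
        {
            "model": "YOUR_DEPLOYMENT_NAME",
            "base_url": "https://YOUR_RESOURCE.openai.azure.com/",
            "api_type": "azure",
            "api_version": "2024-02-01",
            "azure_ad_token_provider": "DEFAULT",
        }
    ]
}
```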
In this configuration:

- `model`: The Azure OpenAI deployment name.
- `base_url`: The base URL of the Azure OpenAI endpoint.
- `api_type`: Should be set to "azure".
- `api_version`: The API version to use.
- `azure_ad_token_provider`: Set to "DEFAULT" to use the default token provider.

If you run into authentication issues, first verify that your Client ID and Tenant ID are correct.

Besides the `config_list`, there are other parameters that can be used to configure the LLM. These are split between parameters specifically used by AutoGen and those passed into the model client.
AutoGen-specific parameters:

- `cache_seed` - This is a legacy parameter and is not recommended unless you use it to disable the default caching behavior. To disable default caching, set this to `None`. Otherwise, by default or if an int is passed, DiskCache will be used. For the new way of using caching, pass a `Cache` object into `initiate_chat`.

Parameters accepted by the `OpenAI` client or the OpenAI completions create API can also be supplied; they are passed through to the model client. This is commonly used for things like `temperature` or `timeout`.
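For example (a sketch; the values are illustrative and `config_list` is assumed to be defined as above):

```python
llm_config = {
    "config_list": config_list,
    # Passed through to the model client / completions create call:
    "temperature": 0.7,
    "timeout": 300,
}
```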
Other helpers for loading a config list:

- `get_config_list`: Generates configurations for API calls, primarily from provided API keys.
- `config_list_openai_aoai`: Constructs a list of configurations using both Azure OpenAI and OpenAI endpoints, sourcing API keys from environment variables or local files.
- `config_list_from_models`: Creates configurations based on a provided list of models, useful when targeting specific models without manually specifying each configuration.
- `config_list_from_dotenv`: Constructs a configuration list from a `.env` file, offering a consolidated way to manage multiple API configurations and keys from a single file.
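For instance, `get_config_list` can build a minimal list from API keys alone (a sketch; the key value is a placeholder):

```python
import autogen

# One configuration dict is generated per provided API key.
api_keys = ["YOUR_OPENAI_API_KEY"]
config_list = autogen.get_config_list(api_keys, api_type="openai")
```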