MessageTokenLimiter

The minimum tokens threshold (`min_tokens`) is checked first (0 by default). If the total number of tokens in the messages is less than this threshold, the messages are returned as is. Otherwise, the truncation process governed by the parameters below is applied.

Name | Description
---|---
max_tokens_per_message | Type: `int \| None`. Default: `None`
max_tokens | Type: `int \| None`. Default: `None`
min_tokens | Type: `int \| None`. Default: `None`
model | Type: `str`. Default: `'gpt-3.5-turbo-0613'`
filter_dict | Type: `dict[str, typing.Any] \| None`. Default: `None`
exclude_filter | Type: `bool`. Default: `True`
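The threshold behavior described above can be sketched in plain Python. This is an illustration only: the whitespace "tokenizer", the function names, and the oldest-first trimming strategy are simplifications I've introduced, not the library's implementation (the real transform counts tokens with the model's tokenizer and has a more involved truncation procedure).

```python
def count_tokens(messages):
    """Very rough token count: whitespace-split words across all messages.
    Stand-in for a real model tokenizer (illustration only)."""
    return sum(len(str(m.get("content", "")).split()) for m in messages)


def limit_tokens(messages, min_tokens=0, max_tokens=None):
    """Return messages unchanged if the total is below min_tokens;
    otherwise keep the most recent messages that fit within max_tokens."""
    if count_tokens(messages) < min_tokens:
        return list(messages)  # below the threshold: returned as is
    if max_tokens is None:
        return list(messages)  # no overall limit configured
    kept, total = [], 0
    # Walk from the newest message backwards, keeping what still fits.
    for msg in reversed(messages):
        n = len(str(msg.get("content", "")).split())
        if total + n > max_tokens:
            break
        kept.append(msg)
        total += n
    return list(reversed(kept))
```

Note that the function always returns a new list, leaving the caller's message history untouched.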
Name | Description
---|---
messages | The list of messages representing the conversation history. Type: `list[dict[str, typing.Any]]`
Type | Description
---|---
`list[dict[str, typing.Any]]` | A new list containing the truncated messages, up to the specified token limits.
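The per-message limit (`max_tokens_per_message`) can be illustrated the same way. This sketch uses an assumed helper name and a naive whitespace split; it is not the library's code, but it shows the key property of the return value above: a new message is produced and the input is never mutated.

```python
def truncate_message(msg, max_tokens_per_message):
    """Return a copy of msg whose content is clipped to at most
    max_tokens_per_message whitespace 'tokens' (illustration only)."""
    words = str(msg.get("content", "")).split()
    clipped = dict(msg)  # shallow copy so the original dict is untouched
    clipped["content"] = " ".join(words[:max_tokens_per_message])
    return clipped
```

Applying this to each message in the history, then enforcing the overall `max_tokens` budget, yields the new truncated list described in the table above.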
Name | Description
---|---
pre_transform_messages | Type: `list[dict[str, typing.Any]]`
post_transform_messages | Type: `list[dict[str, typing.Any]]`
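A log entry for the transform can be derived by comparing token counts before and after, using the parameter names from the table above. This is a minimal sketch with an assumed function name and a naive whitespace count, not the library's actual logging code; the exact log wording and return shape are assumptions.

```python
def token_limit_log(pre_transform_messages, post_transform_messages):
    """Summarize the effect of truncation: returns (log_message, changed),
    where changed is True only if tokens were actually removed."""
    def count(msgs):
        return sum(len(str(m.get("content", "")).split()) for m in msgs)

    pre = count(pre_transform_messages)
    post = count(post_transform_messages)
    if pre == post:
        return "No messages were truncated.", False
    return f"Truncated {pre - post} tokens: reduced from {pre} to {post}.", True
```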