agentscope.models

Import modules in the models package.

class agentscope.models.ModelWrapperBase(config_name: str, **kwargs: Any)[source]

Bases: object

The base class for model wrapper.

model_type: str

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

model_name: str

The name of the model, which is used when calling the model API.

__init__(config_name: str, **kwargs: Any) None[source]

Base class for model wrapper.

All model wrappers should inherit this class and implement the __call__ function.

Parameters:

config_name (str) – The id of the model, which is used to extract configuration from the config file.

config_name: str

The name of the model configuration.

classmethod get_wrapper(model_type: str) Type[ModelWrapperBase][source]

Get the specific model wrapper class.

format(*args: MessageBase | Sequence[MessageBase]) List[dict] | str[source]

Format the input string or dict into the format that the model API requires.

update_monitor(**kwargs: Any) None[source]

Update the monitor with the given values.

Parameters:

kwargs (dict) – The values to be updated in the monitor.
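All model wrappers inherit this class and implement __call__. The following is a minimal sketch of the pattern (the EchoWrapper class below is hypothetical and for illustration only; a real wrapper would call an actual model API):

from typing import Any

from agentscope.models import ModelResponse, ModelWrapperBase


class EchoWrapper(ModelWrapperBase):
    """A hypothetical wrapper that simply echoes the prompt back."""

    # Identifies this wrapper class in model configurations.
    model_type: str = "echo"

    def __call__(self, prompt: str, **kwargs: Any) -> ModelResponse:
        # A real wrapper would call a model API here and wrap its
        # output; this sketch just echoes the input.
        return ModelResponse(text=prompt, raw={"prompt": prompt})


model = EchoWrapper(config_name="echo-config")
print(model("Hello").text)  # -> "Hello"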

class agentscope.models.ModelResponse(text: str | None = None, embedding: Sequence | None = None, image_urls: Sequence[str] | None = None, raw: Any | None = None, parsed: Any | None = None)[source]

Bases: object

Encapsulation of data returned by the model.

The main purpose of this class is to align the return formats of different models and act as a bridge between models and agents.

__init__(text: str | None = None, embedding: Sequence | None = None, image_urls: Sequence[str] | None = None, raw: Any | None = None, parsed: Any | None = None) None[source]

Initialize the model response.

Parameters:
  • text (str, optional) – The text field.

  • embedding (Sequence, optional) – The embedding returned by the model.

  • image_urls (Sequence[str], optional) – The image URLs returned by the model.

  • raw (Any, optional) – The raw data returned by the model.

  • parsed (Any, optional) – The parsed data returned by the model.

text: str | None = None

embedding: Sequence | None = None

image_urls: Sequence[str] | None = None

raw: Any | None = None

parsed: Any | None = None
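For illustration, a response from a chat model might be constructed and consumed as follows (the field values here are hypothetical):

from agentscope.models import ModelResponse

# Wrap a model API's output in the unified response type.
response = ModelResponse(
    text="Hello! How can I help you today?",
    raw={"finish_reason": "stop"},  # hypothetical raw payload
)

print(response.text)  # the aligned text field, regardless of backend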
class agentscope.models.PostAPIModelWrapperBase(config_name: str, api_url: str, headers: dict | None = None, max_length: int = 2048, timeout: int = 30, json_args: dict | None = None, post_args: dict | None = None, max_retries: int = 3, messages_key: str = 'inputs', retry_interval: int = 1, **kwargs: Any)[source]

Bases: ModelWrapperBase, ABC

The base model wrapper for the model deployed on the POST API.

model_type: str = 'post_api'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

__init__(config_name: str, api_url: str, headers: dict | None = None, max_length: int = 2048, timeout: int = 30, json_args: dict | None = None, post_args: dict | None = None, max_retries: int = 3, messages_key: str = 'inputs', retry_interval: int = 1, **kwargs: Any) None[source]

Initialize the model wrapper.

Parameters:
  • config_name (str) – The id of the model.

  • api_url (str) – The URL of the POST request API.

  • headers (dict, defaults to None) – The headers of the API.

  • max_length (int, defaults to 2048) – The maximum length of the model.

  • timeout (int, defaults to 30) – The timeout of the API call, in seconds.

  • json_args (dict, defaults to None) – The JSON arguments of the API.

  • post_args (dict, defaults to None) – The POST arguments of the API.

  • max_retries (int, defaults to 3) – The maximum number of retries when the parse_func raises an exception.

  • messages_key (str, defaults to 'inputs') – The key of the input messages in the JSON argument.

  • retry_interval (int, defaults to 1) – The interval between retries when a request fails, in seconds.

Note

When an instance of PostAPIModelWrapperBase is called, the arguments are used to construct the POST request as follows:

requests.post(
    url=api_url,
    headers=headers,
    json={
        messages_key: messages,
        **json_args
    },
    **post_args
)
class agentscope.models.PostAPIChatWrapper(config_name: str, api_url: str, headers: dict | None = None, max_length: int = 2048, timeout: int = 30, json_args: dict | None = None, post_args: dict | None = None, max_retries: int = 3, messages_key: str = 'inputs', retry_interval: int = 1, **kwargs: Any)[source]

Bases: PostAPIModelWrapperBase

A POST API model wrapper compatible with the OpenAI chat API, e.g., vLLM, FastChat.

model_type: str = 'post_api_chat'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

format(*args: MessageBase | Sequence[MessageBase]) List[dict][source]

Format the input messages into a list of dicts compatible with the OpenAI Chat API.

Parameters:

args (Union[MessageBase, Sequence[MessageBase]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distributed mode, placeholders are also allowed.

Returns:

The formatted messages.

Return type:

List[dict]
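For illustration, a model configuration selecting this wrapper might look as follows (the URL, header, and name values are placeholders):

# Hypothetical configuration for an OpenAI-compatible POST endpoint,
# e.g. one served by vLLM or FastChat.
{
    "config_name": "my-vllm-model",
    "model_type": "post_api_chat",
    "api_url": "http://localhost:8000/v1/chat/completions",
    "headers": {"Authorization": "Bearer YOUR_TOKEN"},
    "messages_key": "messages",
    "json_args": {"temperature": 0.7},
}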

class agentscope.models.OpenAIWrapperBase(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, budget: float | None = None, **kwargs: Any)[source]

Bases: ModelWrapperBase, ABC

The model wrapper for OpenAI API.

__init__(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, budget: float | None = None, **kwargs: Any) None[source]

Initialize the OpenAI client.

Parameters:
  • config_name (str) – The name of the model config.

  • model_name (str, default None) – The name of the model to use in the OpenAI API.

  • api_key (str, default None) – The API key for the OpenAI API. If not specified, it will be read from the environment variable OPENAI_API_KEY.

  • organization (str, default None) – The organization ID for the OpenAI API. If not specified, it will be read from the environment variable OPENAI_ORGANIZATION.

  • client_args (dict, default None) – The extra keyword arguments used to initialize the OpenAI client.

  • generate_args (dict, default None) – The extra keyword arguments used in OpenAI API generation, e.g. temperature, seed.

  • budget (float, default None) – The total budget for using this model. None means no limit.

format(*args: MessageBase | Sequence[MessageBase]) List[dict] | str[source]

Format the input string or dict into the format that the model API requires.

class agentscope.models.OpenAIChatWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, budget: float | None = None, **kwargs: Any)[source]

Bases: OpenAIWrapperBase

The model wrapper for OpenAI's chat API.

model_type: str = 'openai_chat'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

deprecated_model_type: str = 'openai'

substrings_in_vision_models_names = ['gpt-4-turbo', 'vision', 'gpt-4o']

The substrings in the model names of vision models.

format(*args: MessageBase | Sequence[MessageBase]) List[dict][source]

Format the input strings and dictionaries into the format that the OpenAI Chat API requires.

Parameters:

args (Union[MessageBase, Sequence[MessageBase]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distributed mode, placeholders are also allowed.

Returns:

The formatted messages in the format that the OpenAI Chat API requires.

Return type:

List[dict]
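A minimal end-to-end sketch, assuming OPENAI_API_KEY is set in the environment and agentscope.init is used to register the config (the config and model names are illustrative):

import agentscope
from agentscope.message import Msg
from agentscope.models import load_model_by_config_name

agentscope.init(model_configs=[{
    "config_name": "my-openai-chat",
    "model_type": "openai_chat",
    "model_name": "gpt-4",
}])

model = load_model_by_config_name("my-openai-chat")
prompt = model.format(Msg("user", "What's the date today?", role="user"))
response = model(prompt)  # returns a ModelResponse
print(response.text)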

class agentscope.models.OpenAIDALLEWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, budget: float | None = None, **kwargs: Any)[source]

Bases: OpenAIWrapperBase

The model wrapper for OpenAI's DALL·E API.

model_type: str = 'openai_dall_e'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

class agentscope.models.OpenAIEmbeddingWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, organization: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, budget: float | None = None, **kwargs: Any)[source]

Bases: OpenAIWrapperBase

The model wrapper for the OpenAI embedding API.

model_type: str = 'openai_embedding'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

class agentscope.models.DashScopeChatWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: DashScopeWrapperBase

The model wrapper for DashScope's chat API, refer to https://help.aliyun.com/zh/dashscope/developer-reference/api-details

model_type: str = 'dashscope_chat'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

deprecated_model_type: str = 'tongyi_chat'

format(*args: MessageBase | Sequence[MessageBase]) List[source]

Format the messages for the DashScope Chat API.

In this format function, the input messages are merged into a dialogue history in which each message is rendered as "{name}: {content}". Note this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.

The following is an example:

prompt = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

The prompt will be as follows:

[
    {
        "role": "system",
        "content": "You're a helpful assistant",
    },
    {
        "role": "user",
        "content": (
            "## Dialogue History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]

Parameters:

args (Union[MessageBase, Sequence[MessageBase]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distributed mode, placeholders are also allowed.

Returns:

The formatted messages.

Return type:

List[dict]

class agentscope.models.DashScopeImageSynthesisWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: DashScopeWrapperBase

The model wrapper for the DashScope Image Synthesis API, refer to https://help.aliyun.com/zh/dashscope/developer-reference/quick-start-1

model_type: str = 'dashscope_image_synthesis'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

class agentscope.models.DashScopeTextEmbeddingWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: DashScopeWrapperBase

The model wrapper for the DashScope Text Embedding API.

model_type: str = 'dashscope_text_embedding'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

class agentscope.models.DashScopeMultiModalWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: DashScopeWrapperBase

The model wrapper for the DashScope Multimodal API, refer to https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-qianwen-vl-api

model_type: str = 'dashscope_multimodal'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

format(*args: MessageBase | Sequence[MessageBase]) List[source]

Format the messages for the DashScope Multimodal API.

The multimodal API has the following requirements:

  • The roles of messages must alternate between "user" and "assistant".

  • The message with the role "system" should be the first message in the list.

  • If the system message exists, then the second message must have the role "user".

  • The last message in the list should have the role "user".

  • In each message, more than one figure is allowed.

With the above requirements, we format the messages as follows:

  • If the first message is a system message, then we will keep it as the system prompt.

  • We merge all messages into a dialogue history prompt in a single message with the role "user".

  • When there are multiple figures in the given messages, we attach them to the user message in order. Note that if there are multiple figures, this strategy may confuse the model. For advanced solutions, developers are encouraged to implement their own prompt engineering strategies.

The following is an example:

prompt = model.format(
    Msg(
        "system",
        "You're a helpful assistant",
        role="system", url="figure1"
    ),
    Msg(
        "Bob",
        "How about this picture?",
        role="assistant", url="figure2"
    ),
    Msg(
        "user",
        "It's wonderful! How about mine?",
        role="user", url="figure3"
    )
)

The prompt will be as follows:

[
    {
        "role": "system",
        "content": [
            {"text": "You are a helpful assistant"},
            {"image": "figure1"}
        ]
    },
    {
        "role": "user",
        "content": [
            {"image": "figure2"},
            {"image": "figure3"},
            {
                "text": (
                    "## Dialogue History\n"
                    "Bob: How about this picture?\n"
                    "user: It's wonderful! How about mine?"
                )
            },
        ]
    }
]

Note

In the multimodal API, the URL of a local file should be prefixed with "file://"; this prefix is attached automatically in this format function.

Parameters:

args (Union[MessageBase, Sequence[MessageBase]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distributed mode, placeholders are also allowed.

Returns:

The formatted messages.

Return type:

List[dict]

class agentscope.models.OllamaChatWrapper(config_name: str, model_name: str, options: dict | None = None, keep_alive: str = '5m', **kwargs: Any)[source]

Bases: OllamaWrapperBase

The model wrapper for the Ollama chat API.

model_type: str = 'ollama_chat'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

format(*args: MessageBase | Sequence[MessageBase]) List[dict][source]

Format the messages for the Ollama chat API.

All messages will be merged into a single message containing the system prompt and the dialogue history.

Note:

  1. This strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.

  2. For the Ollama chat API, the content field shouldn't be an empty string.

Example:

prompt = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("Bob", "Hi, how can I help you?", role="assistant"),
    Msg("user", "What's the date today?", role="user")
)

The prompt will be as follows:

[
    {
        "role": "user",
        "content": (
            "You're a helpful assistant\n\n"
            "## Dialogue History\n"
            "Bob: Hi, how can I help you?\n"
            "user: What's the date today?"
        )
    }
]

Parameters:

args (Union[MessageBase, Sequence[MessageBase]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distributed mode, placeholders are also allowed.

Returns:

The formatted messages.

Return type:

List[dict]
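For illustration, a configuration for this wrapper might look as follows (the names are placeholders; options is passed through to Ollama's generation parameters):

# Hypothetical configuration for a locally served Ollama chat model.
{
    "config_name": "my-ollama-chat",
    "model_type": "ollama_chat",
    "model_name": "llama2",
    "options": {"temperature": 0.8},
    "keep_alive": "5m",
}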

class agentscope.models.OllamaEmbeddingWrapper(config_name: str, model_name: str, options: dict | None = None, keep_alive: str = '5m', **kwargs: Any)[source]

Bases: OllamaWrapperBase

The model wrapper for the Ollama embedding API.

model_type: str = 'ollama_embedding'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

format(*args: MessageBase | Sequence[MessageBase]) List[dict] | str[source]

Format the input string or dict into the format that the model API requires.

class agentscope.models.OllamaGenerationWrapper(config_name: str, model_name: str, options: dict | None = None, keep_alive: str = '5m', **kwargs: Any)[source]

Bases: OllamaWrapperBase

The model wrapper for the Ollama generation API.

model_type: str = 'ollama_generate'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

format(*args: MessageBase | Sequence[MessageBase]) str[source]

Format the input messages into a single string prompt.

Parameters:

args (Union[MessageBase, Sequence[MessageBase]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distributed mode, placeholders are also allowed.

Returns:

The formatted string prompt.

Return type:

str

class agentscope.models.GeminiChatWrapper(config_name: str, model_name: str, api_key: str | None = None, **kwargs: Any)[source]

Bases: GeminiWrapperBase

The wrapper for the Google Gemini chat model, e.g. gemini-pro.

model_type: str = 'gemini_chat'

The type of the model, which is used in the model configuration.

generation_method = 'generateContent'

The generation method used in the __call__ function.

__init__(config_name: str, model_name: str, api_key: str | None = None, **kwargs: Any) None[source]

Initialize the wrapper for the Google Gemini model.

Parameters:
  • model_name (str) – The name of the model.

  • api_key (str, defaults to None) – The api_key for the model. If it is not provided, it will be loaded from an environment variable.

format(*args: MessageBase | Sequence[MessageBase]) List[dict][source]

This function provides a basic prompting strategy for the Gemini Chat API in multi-party conversation: it combines all input into a single string and wraps it in a user message.

We make this decision based on the following constraints of the Gemini generate API:

1. In the Gemini generate_content API, the role field must be either user or model.

2. If we pass a list of messages to the generate_content API, the user role must speak at the beginning and end of the messages, and user and model must alternate. This prevents us from building multi-party conversations, where the model may keep speaking under different names.

The above information is accurate as of 2024/03/21. More information about the Gemini generate_content API can be found at https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/gemini

Based on the above considerations, we decided to combine all messages into a single user message. This is a simple and straightforward strategy; if you have any better ideas, pull requests and discussions are welcome in our GitHub repository https://github.com/agentscope/agentscope!

Parameters:

args (Union[MessageBase, Sequence[MessageBase]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distributed mode, placeholders are also allowed.

Returns:

A list with one user message.

Return type:

List[dict]

class agentscope.models.GeminiEmbeddingWrapper(config_name: str, model_name: str, api_key: str | None = None, **kwargs: Any)[source]

Bases: GeminiWrapperBase

The wrapper for the Google Gemini embedding model, e.g. models/embedding-001.

model_type: str = 'gemini_embedding'

The type of the model, which is used in the model configuration.

class agentscope.models.ZhipuAIChatWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: ZhipuAIWrapperBase

The model wrapper for ZhipuAI's chat API.

model_type: str = 'zhipuai_chat'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

format(*args: MessageBase | Sequence[MessageBase]) List[dict][source]

Format the input strings and dictionaries into the format that the ZhipuAI Chat API requires.

In this format function, the input messages are merged into a dialogue history in which each message is rendered as "{name}: {content}". Note this strategy may not be suitable for all scenarios, and developers are encouraged to implement their own prompt engineering strategies.

Parameters:

args (Union[MessageBase, Sequence[MessageBase]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distributed mode, placeholders are also allowed.

Returns:

The formatted messages in the format that the ZhipuAI Chat API requires.

Return type:

List[dict]

class agentscope.models.ZhipuAIEmbeddingWrapper(config_name: str, model_name: str | None = None, api_key: str | None = None, client_args: dict | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: ZhipuAIWrapperBase

The model wrapper for the ZhipuAI embedding API.

model_type: str = 'zhipuai_embedding'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

class agentscope.models.LiteLLMChatWrapper(config_name: str, model_name: str | None = None, generate_args: dict | None = None, **kwargs: Any)[source]

Bases: LiteLLMWrapperBase

The model wrapper based on the litellm chat API. To use the LiteLLM wrapper, environment variables must be set. Different model_name values may require different environment variables. For example:

  • for model_name "gpt-3.5-turbo", you need to set "OPENAI_API_KEY":

    os.environ["OPENAI_API_KEY"] = "your-api-key"

  • for model_name "claude-2", you need to set "ANTHROPIC_API_KEY"

  • for Azure OpenAI, you need to set "AZURE_API_KEY", "AZURE_API_BASE", and "AZURE_API_VERSION"

You should refer to the docs at https://docs.litellm.ai/docs/ .

model_type: str = 'litellm_chat'

The type of the model wrapper, which is used to identify the model wrapper class in the model configuration.

format(*args: MessageBase | Sequence[MessageBase]) List[dict][source]

Format the input strings and dictionaries into the unified format. Note that this format function might not be the optimal way to construct prompts for every model, but it is a common way to do so. Developers are encouraged to implement their own prompt engineering strategies if they have strong performance concerns.

Parameters:

args (Union[MessageBase, Sequence[MessageBase]]) – The input arguments to be formatted, where each argument should be a Msg object, or a list of Msg objects. In distributed mode, placeholders are also allowed.

Returns:

The formatted messages in the unified chat format.

Return type:

List[dict]

agentscope.models.load_model_by_config_name(config_name: str) ModelWrapperBase[source]

Load the model by config name.

agentscope.models.read_model_configs(configs: dict | str | list, clear_existing: bool = False) None[source]

Read model configs from a path or a list of dicts.

Parameters:
  • configs (Union[str, list, dict]) – The path of the model configs | a config dict | a list of model configs.

  • clear_existing (bool, defaults to False) – Whether to clear the loaded model configs before reading.

agentscope.models.clear_model_configs() None[source]

Clear the loaded model configs.
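
For illustration, the three helpers above can be combined as follows (the config values are placeholders):

from agentscope.models import (
    clear_model_configs,
    load_model_by_config_name,
    read_model_configs,
)

# Register a config from an in-memory dict; a path to a JSON file
# or a list of config dicts also works.
read_model_configs({
    "config_name": "my-ollama-chat",
    "model_type": "ollama_chat",
    "model_name": "llama2",
})

model = load_model_by_config_name("my-ollama-chat")

# Drop all loaded configs, e.g. between test runs.
clear_model_configs()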