agentscope.prompt

Prompt engineering module.

class agentscope.prompt.PromptType(value)[source]

Bases: IntEnum

Enum for prompt types.

STRING = 0
LIST = 1
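Because PromptType is an IntEnum, its members compare equal to plain integers. The snippet below is a minimal re-creation for illustration only; the real class ships with agentscope.prompt:

```python
from enum import IntEnum

# Minimal re-creation of PromptType for illustration only;
# the real class is agentscope.prompt.PromptType.
class PromptType(IntEnum):
    STRING = 0
    LIST = 1

# IntEnum members compare equal to plain ints.
assert PromptType.STRING == 0
assert PromptType(1) is PromptType.LIST
print(PromptType.LIST.name)  # LIST
```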
class agentscope.prompt.PromptEngine(model: ModelWrapperBase, shrink_policy: ShrinkPolicy = ShrinkPolicy.TRUNCATE, max_length: int | None = None, prompt_type: PromptType | None = None, max_summary_length: int = 200, summarize_model: ModelWrapperBase | None = None)[source]

Bases: object

Prompt engineering module for both list- and string-style prompts.

__init__(model: ModelWrapperBase, shrink_policy: ShrinkPolicy = ShrinkPolicy.TRUNCATE, max_length: int | None = None, prompt_type: PromptType | None = None, max_summary_length: int = 200, summarize_model: ModelWrapperBase | None = None) → None[source]

Initialize the PromptEngine.

Parameters:
  • model (ModelWrapperBase) – The target model for prompt engineering.

  • shrink_policy (ShrinkPolicy, defaults to ShrinkPolicy.TRUNCATE) – The shrink policy applied when the prompt is too long.

  • max_length (Optional[int], defaults to None) – The max length of context; if it is None, it will be set to the max length of the model.

  • prompt_type (Optional[PromptType], defaults to None) – The type of prompt; if it is None, it will be set according to the model.

  • max_summary_length (int, defaults to 200) – The max length of the generated summary.

  • summarize_model (Optional[ModelWrapperBase], defaults to None) – The model used for summarization; if it is None, the model argument will be used.

Note

  1. TODO: The shrink function is still under development.

  2. If the arguments max_length and prompt_type are not given, they will be set according to the given model.

  3. shrink_policy is used when the prompt is too long; it can be set to ShrinkPolicy.TRUNCATE or ShrinkPolicy.SUMMARIZE.

    a. ShrinkPolicy.TRUNCATE truncates the prompt to the desired length.

    b. ShrinkPolicy.SUMMARIZE summarizes part of the dialog history to save space. The summarization model defaults to model if summarize_model is not given.
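The two policies can be sketched as follows. This is an illustrative approximation, not agentscope's actual implementation: history stands in for a list of message strings, and the summarize callback stands in for a call to the summarization model:

```python
# Illustrative sketch of the two shrink policies; NOT agentscope's
# actual implementation.

def shrink_truncate(history, keep_last):
    # TRUNCATE: keep only the most recent messages.
    return history[-keep_last:]

def shrink_summarize(history, keep_last, summarize):
    # SUMMARIZE: compress the older part of the history into a single
    # summary message, keeping the most recent messages verbatim.
    older, recent = history[:-keep_last], history[-keep_last:]
    if not older:
        return history
    return [summarize(older)] + recent

history = ["msg1", "msg2", "msg3", "msg4"]
print(shrink_truncate(history, 2))  # ['msg3', 'msg4']
print(shrink_summarize(history, 2, lambda msgs: f"summary of {len(msgs)} msgs"))
```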

Example

The prompt engine encapsulates the different operations for string- and list-style prompts and hides the prompt engineering details from the user. As a user, you can simply combine your prompt components as follows.

# prepare the components
system_prompt = "You're a helpful assistant ..."
hint_prompt = "You should respond in JSON format."
prefix = "assistant: "

# initialize the prompt engine and join the prompt
engine = PromptEngine(model)
prompt = engine.join(
    system_prompt, memory.get_memory(), hint_prompt, prefix,
)
join(*args: Any, format_map: dict | None = None) → str | list[dict][source]

Join prompt components according to the configured prompt type. The join function accepts any number and type of arguments. If the prompt type is PromptType.STRING, the arguments are joined with "\n". If the prompt type is PromptType.LIST, string arguments are converted into system Msg objects.

join_to_str(*args: Any, format_map: dict | None) → str[source]

Join prompt components to a string.
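With PromptType.STRING, joining amounts to rendering each component to text and concatenating with newlines. The following is a rough sketch of that behavior, not agentscope's actual code; Msg-like items are assumed here to be dicts with "name" and "content" keys:

```python
# Rough sketch of string-style joining; not agentscope's actual code.
# Msg-like items are assumed to be dicts with "name"/"content" keys.
def join_to_str(*args, format_map=None):
    parts = []
    for arg in args:
        if isinstance(arg, dict):
            parts.append(f"{arg.get('name', 'system')}: {arg['content']}")
        elif isinstance(arg, (list, tuple)):
            parts.extend(str(item) for item in arg)
        else:
            parts.append(str(arg))
    prompt = "\n".join(parts)
    # format_map allows late substitution of placeholders like {user}.
    return prompt.format_map(format_map) if format_map else prompt

print(join_to_str("Hello {user}", {"name": "bot", "content": "hi"},
                  format_map={"user": "Alice"}))
```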

join_to_list(*args: Any, format_map: dict | None) → list[source]

Join prompt components to a list of Msg objects.
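For PromptType.LIST, plain strings become system messages and existing message lists (such as dialog memory) are spliced in. A hypothetical sketch, using plain dicts in place of agentscope's Msg class:

```python
# Hypothetical sketch of list-style joining, using plain dicts in
# place of agentscope's Msg class.
def join_to_list(*args, format_map=None):
    msgs = []
    for arg in args:
        if isinstance(arg, str):
            # Bare strings are wrapped as system messages.
            content = arg.format_map(format_map) if format_map else arg
            msgs.append({"role": "system", "content": content})
        elif isinstance(arg, list):
            # An existing message list (e.g. memory) is spliced in as-is.
            msgs.extend(arg)
        elif isinstance(arg, dict):
            msgs.append(arg)
    return msgs

memory = [{"role": "user", "content": "Hi!"}]
prompt = join_to_list("You're a helpful assistant ...", memory)
print(len(prompt))  # 2
```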