memoryscope.core.chat.api_memory_chat

class memoryscope.core.chat.api_memory_chat.ApiMemoryChat(memory_service: str, generation_model: str, context: MemoryscopeContext, stream: bool = False, **kwargs)[source]

Bases: BaseMemoryChat

__init__(memory_service: str, generation_model: str, context: MemoryscopeContext, stream: bool = False, **kwargs)[source]
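
A hypothetical construction sketch based on the signature above; the string names are placeholders that must match entries registered in the global context, and obtaining the context instance itself is assumed to be handled elsewhere (e.g. by the MemoryScope entry point).

    # Hypothetical construction, per the signature above. The string names are
    # placeholders and must match entries registered in the global context.
    chat = ApiMemoryChat(
        memory_service="memory_service",      # resolved lazily via the memory_service property
        generation_model="generation_model",  # resolved lazily via the generation_model property
        context=context,                      # the singleton global context instance
        stream=True,                          # yield responses incrementally
    )
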
property prompt_handler: PromptHandler

Lazy initialization property for the prompt handler.

This property ensures that the _prompt_handler attribute is only instantiated when it is first accessed. It uses the current file’s path and additional keyword arguments for configuration.

Returns:

An instance of PromptHandler configured for this chat session.

Return type:

PromptHandler
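
A minimal sketch of the lazy-initialization pattern this property describes; the PromptHandler constructor arguments shown (the module's file path plus the stored keyword arguments) follow the description above and are assumptions, not the verified signature.

    @property
    def prompt_handler(self) -> PromptHandler:
        # Instantiate only on first access; reuse the cached instance afterwards.
        if self._prompt_handler is None:
            # Configured from this file's path and the stored keyword arguments
            # (constructor arguments are assumed for illustration).
            self._prompt_handler = PromptHandler(__file__, **self.kwargs)
        return self._prompt_handler
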

property memory_service: BaseMemoryService

Property to access the memory service. If the service was initially set as a string, it is looked up in the context's memory service dictionary, initialized, and returned as an instance of BaseMemoryService. The property ensures the memory service is properly started before use.

Returns:

An active memory service instance.

Return type:

BaseMemoryService

Raises:

ValueError – If the named memory service is not found in the context's memory service dictionary.
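
A sketch of the name-to-instance resolution described above, assuming the context exposes a dictionary of registered memory services and the service exposes a start-up hook; both attribute and method names are illustrative, not the actual API.

    @property
    def memory_service(self) -> BaseMemoryService:
        if isinstance(self._memory_service, str):
            # Resolve the declared name against the context's registry (name assumed).
            if self._memory_service not in self.context.memory_service_dict:
                raise ValueError(f"Memory service '{self._memory_service}' not found in context.")
            service = self.context.memory_service_dict[self._memory_service]
            service.start_service()  # assumed start-up hook: ensure the service is running
            self._memory_service = service
        return self._memory_service
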

property human_name
property assistant_name
property generation_model: BaseModel

Property to get the generation model. If the model is set as a string, it will be resolved from the global context’s model dictionary.

Raises:

ValueError – If the named generation model is not found in the context's model dictionary.

Returns:

The resolved generation model instance.

Return type:

BaseModel
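
Hypothetical usage, given an ApiMemoryChat instance chat and a prepared messages list: the string-to-instance resolution happens transparently on first access, so callers simply read the property. The call signature shown is an assumption for illustration only.

    model = chat.generation_model              # resolved to a BaseModel instance on first access
    response = model.call(messages=messages)   # assumed call signature, for illustration only
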

iter_response(remember_response: bool, resp: Generator[ModelResponse, None, None], memories: str, query_message: Message) → Generator[ModelResponse, None, None][source]
chat_with_memory(query: str, role_name: str | None = None, system_prompt: str | None = None, memory_prompt: str | None = None, temporary_memories: str | None = None, history_message_strategy: Literal['auto', None] | int = 'auto', remember_response: bool = True, **kwargs)[source]

The core function that carries out a conversation with memory. It accepts the user's query through query and returns the conversation result through model_response; the retrieved memories are stored in the response's meta_data under MEMORIES.

Parameters:
  • query (str) – User's query, including the user's question.

  • role_name (str, optional) – User's role name.

  • system_prompt (str, optional) – System prompt. Defaults to the system_prompt in "memory_chat_prompt.yaml".

  • memory_prompt (str, optional) – Memory prompt. It takes effect when memories are retrieved and is placed in front of them. Defaults to the memory_prompt in "memory_chat_prompt.yaml".

  • temporary_memories (str, optional) – User memories added manually in this call.

  • history_message_strategy ("auto", None, or int) –

    • If set to "auto", history messages that have not yet been summarized are retained. Defaults to "auto".

    • If set to None, no conversation history is retained.

    • If set to an integer n, the most recent n [user, assistant] message pairs are retained.

  • remember_response (bool, optional) – Flag indicating whether to save the AI's response to memory. Defaults to True.

Returns:

  • ModelResponse – In non-streaming mode, a complete AI response.

  • ModelResponseGen – In streaming mode, a generator yielding AI response parts.

The retrieved memories can be obtained from model_response.meta_data[MEMORIES].

Return type:

  ModelResponse | ModelResponseGen
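
A hypothetical usage sketch under the signature above, given an ApiMemoryChat instance memory_chat. The literal "memories" key stands in for the MEMORIES constant the docs reference, and the streaming attribute names are assumptions.

    # Non-streaming: a single ModelResponse comes back.
    resp = memory_chat.chat_with_memory(query="What did I say my hobby was?")
    print(resp.message.content)                 # the assistant's reply (attribute assumed)
    print(resp.meta_data["memories"])           # retrieved memories; key stands in for MEMORIES

    # Streaming (stream=True at construction): iterate the generator, keeping
    # only the last three [user, assistant] pairs of history.
    for part in memory_chat.chat_with_memory(query="Summarize my week.",
                                             history_message_strategy=3):
        print(part.delta, end="", flush=True)   # .delta is an assumed partial-text attribute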