memoryscope.core.chat
- class memoryscope.core.chat.ApiMemoryChat(memory_service: str, generation_model: str, context: ~memoryscope.core.utils.singleton.singleton.<locals>._singleton, stream: bool = False, **kwargs)[source]
- __init__(memory_service: str, generation_model: str, context: ~memoryscope.core.utils.singleton.singleton.<locals>._singleton, stream: bool = False, **kwargs)[source]
- property prompt_handler: PromptHandler
Lazy initialization property for the prompt handler.
This property ensures that the _prompt_handler attribute is only instantiated when it is first accessed. It uses the current file's path and additional keyword arguments for configuration.
- Returns:
An instance of PromptHandler configured for this chat session.
- Return type:
PromptHandler
- property memory_service: BaseMemoryService
Property to access the memory service. If the service was initially set as a string, it is looked up in the context's memory service dictionary, initialized, and returned as an instance of BaseMemoryService. Ensures the memory service is properly started before use.
- Returns:
An active memory service instance.
- Return type:
BaseMemoryService
- Raises:
ValueError -- If the named memory service is not found in the context's memory service dictionary.
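The lazy string-to-instance resolution described above is a common pattern. The sketch below is purely illustrative and not the library's actual source; the registry attribute memory_service_dict and the start-up hook init_service are assumed names.

```python
class LazyServiceHolder:
    """Toy illustration of lazily swapping a config key for a live service."""

    def __init__(self, context, memory_service: str):
        self.context = context
        self._memory_service = memory_service  # starts life as a string key

    @property
    def memory_service(self):
        if isinstance(self._memory_service, str):
            registry = self.context.memory_service_dict  # assumed attribute
            if self._memory_service not in registry:
                raise ValueError(f"memory_service [{self._memory_service}] not found in context!")
            # replace the key with a started service instance
            self._memory_service = registry[self._memory_service]
            self._memory_service.init_service()  # assumed start-up hook
        return self._memory_service
```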
- property human_name
- property assistant_name
- property generation_model: BaseModel
Property to get the generation model. If the model is set as a string, it will be resolved from the global context's model dictionary.
- Raises:
ValueError -- If the named generation model is not found in the context's model dictionary.
- Returns:
An actual generation model instance.
- Return type:
BaseModel
- iter_response(remember_response: bool, resp: Generator[ModelResponse, None, None], memories: str, query_message: Message) → Generator[ModelResponse, None, None] [source]
Iterates through the streamed model response, yielding each part; once the stream completes, the query and final response can be written to memory when remember_response is set.
- chat_with_memory(query: str, role_name: str | None = None, system_prompt: str | None = None, memory_prompt: str | None = None, temporary_memories: str | None = None, history_message_strategy: Literal['auto', None] | int = 'auto', remember_response: bool = True, **kwargs)[source]
The core function that carries out conversation with memory: it accepts the user's query and returns the conversation result as a model response. The retrieved memories are stored under MEMORIES in the response's meta_data.
- Parameters:
query (str) -- The user's query, i.e. the user's question.
role_name (str, optional) -- The user's role name.
system_prompt (str, optional) -- System prompt. Defaults to the system_prompt in "memory_chat_prompt.yaml".
memory_prompt (str, optional) -- Memory prompt. It takes effect when memories are retrieved and is placed in front of them. Defaults to the memory_prompt in "memory_chat_prompt.yaml".
temporary_memories (str, optional) -- User memories added manually for this call.
history_message_strategy ("auto", None, int) -- If "auto", the conversation retains the history messages that have not yet been summarized; if None, no conversation history is kept; if an integer n, the n most recent [user, assistant] message pairs are retained. Defaults to "auto".
remember_response (bool, optional) -- Whether to save the AI's response to memory. Defaults to True.
- Returns:
ModelResponse: In non-streaming mode, a complete AI response.
ModelResponseGen: In streaming mode, a generator yielding AI response parts.
Memories: Retrieved memories can be obtained via model_response.meta_data[MEMORIES].
- Return type:
ModelResponse | ModelResponseGen
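A minimal usage sketch follows. The service and model names are hypothetical keys that must match entries in your configuration, and context is assumed to be the framework's pre-built singleton context:

```python
from memoryscope.core.chat import ApiMemoryChat

def build_and_chat(context):
    # "api_memory_service" and "generation_model" are hypothetical config
    # keys; both are resolved lazily from `context` on first property access.
    chat = ApiMemoryChat(
        memory_service="api_memory_service",
        generation_model="generation_model",
        context=context,
        stream=False,  # non-streaming: chat_with_memory returns one ModelResponse
    )
    response = chat.chat_with_memory(
        query="Where did I say I wanted to travel?",
        history_message_strategy=3,  # keep the 3 most recent [user, assistant] pairs
        remember_response=True,      # write the AI's reply back to memory
    )
    # retrieved memories, if any, live under the MEMORIES key of meta_data
    return response
```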
- class memoryscope.core.chat.BaseMemoryChat(**kwargs)[source]
Bases:
object
An abstract base class representing a chat system integrated with memory services. It outlines the method to initiate a chat session leveraging memory data, which concrete subclasses must implement.
- property memory_service: BaseMemoryService
Abstract property to access the memory service.
- Raises:
NotImplementedError -- This property must be implemented by a subclass.
- abstract chat_with_memory(query: str, role_name: str | None = None, system_prompt: str | None = None, memory_prompt: str | None = None, temporary_memories: str | None = None, history_message_strategy: Literal['auto', None] | int = 'auto', remember_response: bool = True, **kwargs)[source]
The core function that carries out conversation with memory: it accepts the user's query and returns the conversation result as a model response. The retrieved memories are stored under MEMORIES in the response's meta_data.
- Parameters:
query (str) -- The user's query, i.e. the user's question.
role_name (str, optional) -- The user's role name.
system_prompt (str, optional) -- System prompt. Defaults to the system_prompt in "memory_chat_prompt.yaml".
memory_prompt (str, optional) -- Memory prompt. It takes effect when memories are retrieved and is placed in front of them. Defaults to the memory_prompt in "memory_chat_prompt.yaml".
temporary_memories (str, optional) -- User memories added manually for this call.
history_message_strategy ("auto", None, int) -- If "auto", the conversation retains the history messages that have not yet been summarized; if None, no conversation history is kept; if an integer n, the n most recent [user, assistant] message pairs are retained. Defaults to "auto".
remember_response (bool, optional) -- Whether to save the AI's response to memory. Defaults to True.
- Returns:
ModelResponse: In non-streaming mode, a complete AI response.
ModelResponseGen: In streaming mode, a generator yielding AI response parts.
Memories: Retrieved memories can be obtained via model_response.meta_data[MEMORIES].
- Return type:
ModelResponse | ModelResponseGen
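Since chat_with_memory is abstract, a concrete subclass must provide it. Below is a toy sketch of the minimum interface; it returns a plain string instead of a real ModelResponse and skips memory retrieval entirely:

```python
from typing import Literal

from memoryscope.core.chat import BaseMemoryChat


class EchoMemoryChat(BaseMemoryChat):
    """Toy subclass satisfying the abstract interface without a model."""

    def chat_with_memory(self,
                         query: str,
                         role_name: str | None = None,
                         system_prompt: str | None = None,
                         memory_prompt: str | None = None,
                         temporary_memories: str | None = None,
                         history_message_strategy: Literal["auto", None] | int = "auto",
                         remember_response: bool = True,
                         **kwargs):
        # A real implementation would retrieve memories, assemble the
        # prompt, and call a generation model, as ApiMemoryChat does.
        return f"echo: {query}"


print(EchoMemoryChat().chat_with_memory(query="hello"))
```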
- class memoryscope.core.chat.CliMemoryChat(**kwargs)[source]
Command-line interface for chatting with an AI that integrates memory functionality. Allows users to interact, manage chat history, adjust streaming settings, and view command help.
- USER_COMMANDS = {'clear': 'Clear the command history.', 'exit': 'Exit the CLI.', 'help': 'Display available CLI commands and their descriptions.', 'stream': 'Toggle between getting streamed responses from the model.'}
- print_logo()[source]
Prints the logo of the CLI application to the console.
The logo is composed of multiple lines, which are iterated through and printed one by one to provide a visual identity for the chat interface.
- chat_with_memory(query: str, role_name: str | None = None, system_prompt: str | None = None, memory_prompt: str | None = None, temporary_memories: str | None = None, history_message_strategy: Literal['auto', None] | int = 'auto', remember_response: bool = True, **kwargs)[source]
The core function that carries out conversation with memory: it accepts the user's query and returns the conversation result as a model response. The retrieved memories are stored under MEMORIES in the response's meta_data.
- Parameters:
query (str) -- The user's query, i.e. the user's question.
role_name (str, optional) -- The user's role name.
system_prompt (str, optional) -- System prompt. Defaults to the system_prompt in "memory_chat_prompt.yaml".
memory_prompt (str, optional) -- Memory prompt. It takes effect when memories are retrieved and is placed in front of them. Defaults to the memory_prompt in "memory_chat_prompt.yaml".
temporary_memories (str, optional) -- User memories added manually for this call.
history_message_strategy ("auto", None, int) -- If "auto", the conversation retains the history messages that have not yet been summarized; if None, no conversation history is kept; if an integer n, the n most recent [user, assistant] message pairs are retained. Defaults to "auto".
remember_response (bool, optional) -- Whether to save the AI's response to memory. Defaults to True.
- Returns:
ModelResponse: In non-streaming mode, a complete AI response.
ModelResponseGen: In streaming mode, a generator yielding AI response parts.
Memories: Retrieved memories can be obtained via model_response.meta_data[MEMORIES].
- Return type:
ModelResponse | ModelResponseGen
- static parse_query_command(query: str)[source]
Parses the user's input query command, separating it into the command and its associated keyword arguments.
- Parameters:
query (str) -- The raw input string from the user, which includes the command and its arguments.
- Returns:
A tuple containing the command (str) as the first element and a dictionary of keyword arguments as the second element.
- Return type:
tuple
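Because the method is static, it can be called without instantiating the CLI. The sketch below shows only the documented (command, kwargs) contract; the exact argument syntax (e.g. whether arguments are given as key=value pairs) is defined by the implementation:

```python
from memoryscope.core.chat import CliMemoryChat

command, kwargs = CliMemoryChat.parse_query_command("stream")
if command in CliMemoryChat.USER_COMMANDS:
    print(f"command={command!r}, kwargs={kwargs!r}")
else:
    print(f"unknown command: {command!r}")
```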
- process_commands(query: str) → bool [source]
Parses and executes commands from user input in the CLI chat interface. Supports operations such as exiting, clearing the screen, showing help, toggling stream mode, executing predefined memory operations, and handling unknown commands.
- Parameters:
query (str) -- The user's input command string.
- Returns:
Whether to continue running the CLI after processing the command.
- Return type:
bool
- run()[source]
Runs the CLI chat loop: it handles user input, processes commands, communicates with the AI model, and manages conversation memory, covering streaming responses, command execution, and error handling.
The loop continues until the user explicitly chooses to exit.
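A minimal launch sketch. CliMemoryChat takes **kwargs; it is assumed here to accept the same construction arguments as ApiMemoryChat, and the service and model names are hypothetical keys from your configuration:

```python
from memoryscope.core.chat import CliMemoryChat

def main(context) -> None:
    # assumed to forward the same keyword arguments as ApiMemoryChat;
    # "cli_memory_service" and "generation_model" are hypothetical keys.
    cli = CliMemoryChat(
        memory_service="cli_memory_service",
        generation_model="generation_model",
        context=context,
    )
    cli.run()  # blocks until the user enters the "exit" command
```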