memoryscope.core.worker.memory_base_worker

class memoryscope.core.worker.memory_base_worker.MemoryBaseWorker(embedding_model: str = '', generation_model: str = '', rank_model: str = '', **kwargs)[source]

Bases: BaseWorker

FILE_PATH: str = 'memoryscope/core/worker/memory_base_worker.py'
__init__(embedding_model: str = '', generation_model: str = '', rank_model: str = '', **kwargs)[source]

Initializes the MemoryBaseWorker with specified models and configurations.

Parameters:
  • embedding_model (str) – Identifier or instance of the embedding model used for transforming text.

  • generation_model (str) – Identifier or instance of the text generation model.

  • rank_model (str) – Identifier or instance of the ranking model used to sort retrieved memories by semantic similarity.

  • **kwargs – Additional keyword arguments passed to the parent class initializer.

The constructor also initializes key attributes related to memory store, monitoring, user and target identification, and a prompt handler, setting them up for later use.
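
For orientation, here is a minimal sketch of a custom worker built on this class. It uses only members documented on this page, except the _run hook, which is assumed to be the BaseWorker entry point that workflows invoke:

    from memoryscope.core.worker.memory_base_worker import MemoryBaseWorker


    class GreetingWorker(MemoryBaseWorker):
        """Toy worker that writes a greeting into the shared workflow context."""

        def _run(self):
            greeting = f"hello {self.user_name}, I am {self.target_name}"
            self.logger.info(greeting)
            # Downstream workers can read this value from workflow_context.
            self.workflow_context["greeting"] = greeting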

property chat_messages: List[List[Message]]

Property to get the chat messages, grouped as a nested list of message lists.

Returns:

Nested list of chat messages.

Return type:

List[List[Message]]

property chat_messages_scatter: List[Message]

Property to get the chat messages as a single flattened list.

Returns:

Flattened list of chat messages.

Return type:

List[Message]

property chat_kwargs: Dict[str, Any]

Retrieves the chat keyword arguments from the context.

This property getter fetches the chat-related parameters stored in the context, which are used to configure how chat interactions are handled.

Returns:

A dictionary containing the chat keyword arguments.

Return type:

Dict[str, Any]
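
Taken together, the chat properties above are typically read inside a worker's run hook. A hedged illustration follows; the "query" key is hypothetical, since the real keys depend on what the surrounding workflow stored in the context:

    from memoryscope.core.worker.memory_base_worker import MemoryBaseWorker


    class InspectChatWorker(MemoryBaseWorker):
        """Toy worker that logs the shape of the current chat context."""

        def _run(self):
            query = self.chat_kwargs.get("query", "")  # hypothetical key
            turns = self.chat_messages                 # List[List[Message]]
            flat = self.chat_messages_scatter          # flattened List[Message]
            self.logger.info(
                f"query={query!r} turns={len(turns)} messages={len(flat)}")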

property user_name: str
property target_name: str
property workflow_name: str
property language: LanguageEnum
property embedding_model: BaseModel

Property to get the embedding model. If the model is currently stored as a string, it will be replaced with the actual model instance from the global context’s model dictionary.

Returns:

The embedding model used for converting text into vector representations.

Return type:

BaseModel
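
The same lazy string-to-instance resolution applies to all three model properties. Roughly, as a sketch of the described behavior rather than the actual source (the _embedding_model attribute and the model_dict name are assumptions):

    @property
    def embedding_model(self) -> BaseModel:
        # Resolve the string identifier against the global model registry
        # once; later accesses return the cached instance.
        if isinstance(self._embedding_model, str):
            self._embedding_model = self.memoryscope_context.model_dict[
                self._embedding_model]
        return self._embedding_model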

property generation_model: BaseModel

Property to access the generation model. If the model is stored as a string, it retrieves the actual model instance from the global context’s model dictionary.

Returns:

The model used for text generation.

Return type:

BaseModel

property rank_model: BaseModel

Property to access the rank model. If the stored rank model is a string, it fetches the actual model instance from the global context’s model dictionary before returning it.

Returns:

The rank model instance used for ranking tasks.

Return type:

BaseModel

property memory_store: BaseMemoryStore

Property to access the memory vector store. If not initialized, it fetches the global memory store.

Returns:

The memory store instance used for insert, update, retrieve, and delete operations.

Return type:

BaseMemoryStore

property monitor: BaseMonitor

Property to access the monitoring component. If not initialized, it fetches the global monitor.

Returns:

The monitoring component instance.

Return type:

BaseMonitor

property prompt_handler: PromptHandler

Lazily initializes and returns the PromptHandler instance.

Returns:

An instance of PromptHandler initialized with this worker's file path and keyword arguments.

Return type:

PromptHandler
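
This is the usual cached-property pattern, which memory_manager below follows as well. A sketch, with the backing attribute and constructor arguments assumed rather than taken from the source:

    @property
    def prompt_handler(self) -> PromptHandler:
        # Built on first access from the worker's FILE_PATH and kwargs,
        # then cached for subsequent calls.
        if self._prompt_handler is None:
            self._prompt_handler = PromptHandler(self.FILE_PATH, **self.kwargs)
        return self._prompt_handler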

property memory_manager: MemoryManager

Lazily initializes and returns the MemoryManager instance.

Returns:

An instance of MemoryManager.

Return type:

MemoryManager

get_language_value(languages: dict | List[dict]) → Any | List[Any][source]

Retrieves the value(s) corresponding to the current language context.

Parameters:

languages (dict | list[dict]) – A dictionary or list of dictionaries containing language-keyed values.

Returns:

The value or list of values matching the current language setting.

Return type:

Any | list[Any]
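
For example, assuming worker is a constructed subclass instance (the "en"/"cn" keys are assumptions about LanguageEnum's values):

    # Picks the value that matches worker.language.
    greeting = worker.get_language_value({"en": "Hello!", "cn": "你好！"})

    # A list of dicts yields a list of values, one per dict:
    parts = worker.get_language_value([
        {"en": "system prompt", "cn": "系统提示"},
        {"en": "few-shot examples", "cn": "少样本示例"},
    ])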

prompt_to_msg(system_prompt: str, few_shot: str, user_query: str, concat_system_prompt: bool = True) → List[Message][source]

Converts input strings into a structured list of message objects suitable for AI interactions.

Parameters:
  • system_prompt (str) – The system-level instruction or context.

  • few_shot (str) – An example or demonstration input, often used to illustrate expected behavior.

  • user_query (str) – The actual user query or prompt to be processed.

  • concat_system_prompt (bool) – Whether to concatenate the system prompt into the user message as well, a simple trick that improves effectiveness for some LLMs. Defaults to True.

Returns:

A list of Message objects, each representing a part of the conversation setup.

Return type:

List[Message]
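
A short usage sketch (all prompt strings are placeholders):

    messages = worker.prompt_to_msg(
        system_prompt="You are a memory assistant.",
        few_shot="Q: example question\nA: example answer",
        user_query="Summarize what you know about the user.",
    )
    # messages is a List[Message]; with concat_system_prompt=True the system
    # prompt is additionally folded into the user message. The result can be
    # passed to the generation model, whose call signature is not shown on
    # this page.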

name: str
workflow_context: Dict[str, Any]
memoryscope_context: MemoryscopeContext
raise_exception: bool
is_multi_thread: bool
thread_pool: ThreadPoolExecutor
enable_parallel: bool
kwargs: dict
continue_run: bool
async_task_list: list
thread_task_list: list
logger: Logger