memoryscope.core.models
- class memoryscope.core.models.BaseModel(model_name: str, module_name: str, timeout: int | None = None, max_retries: int = 3, retry_interval: float = 1.0, kwargs_filter: bool = True, raise_exception: bool = True, **kwargs)[source]
Bases: object
- __init__(model_name: str, module_name: str, timeout: int | None = None, max_retries: int = 3, retry_interval: float = 1.0, kwargs_filter: bool = True, raise_exception: bool = True, **kwargs)[source]
- property model
- abstract before_call(model_response: ModelResponse, **kwargs)[source]
- abstract after_call(model_response: ModelResponse, **kwargs) → ModelResponse | Generator[ModelResponse, None, None][source]
- call(stream: bool = False, **kwargs) → ModelResponse | Generator[ModelResponse, None, None][source]
- async async_call(**kwargs) → ModelResponse[source]
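Concrete models implement the abstract before_call / after_call pair, while call() handles retries, timeouts, and dispatch to the underlying model. The sketch below illustrates only the subclassing contract; the ModelResponse import path and the meta_data / message fields used as scratch space are assumptions, not confirmed API.

```python
# A minimal sketch of the BaseModel subclassing contract. The ModelResponse
# import path and its meta_data/message fields are assumptions for illustration.
from memoryscope.core.models import BaseModel
from memoryscope.scheme.model_response import ModelResponse  # assumed path


class EchoModel(BaseModel):
    """Hypothetical model that echoes the prompt back as its answer."""

    def before_call(self, model_response: ModelResponse, **kwargs):
        # Stage the raw prompt on the response object before the model call.
        model_response.meta_data["prompt"] = kwargs.get("prompt", "")

    def after_call(self, model_response: ModelResponse, **kwargs) -> ModelResponse:
        # Copy the staged prompt into the final message content.
        model_response.message.content = model_response.meta_data["prompt"]
        return model_response
```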
- class memoryscope.core.models.DummyGenerationModel(model_name: str, module_name: str, timeout: int | None = None, max_retries: int = 3, retry_interval: float = 1.0, kwargs_filter: bool = True, raise_exception: bool = True, **kwargs)[source]
Bases: BaseModel
The DummyGenerationModel class serves as a placeholder model for generating responses. It processes input prompts or sequences of messages, adapting them into a structure compatible with chat interfaces. It also facilitates the generation of mock (dummy) responses for testing, supporting both immediate and streamed output.
- before_call(model_response: ModelResponse, **kwargs)[source]
Prepares the input data before making a call to the language model. It accepts either a 'prompt' directly or a list of 'messages'. If 'prompt' is provided, it sets the data accordingly; if 'messages' are provided, it constructs a list of ChatMessage objects from the list. Raises an error if neither 'prompt' nor 'messages' is supplied.
- Parameters:
model_response -- The response object to populate with the prepared input data.
**kwargs -- Arbitrary keyword arguments, including 'prompt' and 'messages'.
- Raises:
RuntimeError -- If neither 'prompt' nor 'messages' is provided.
- after_call(model_response: ModelResponse, stream: bool = False, **kwargs) → ModelResponse | Generator[ModelResponse, None, None][source]
Processes the model's response after the call, either streaming the output or returning it as a whole.
This method modifies the input model_response by resetting its message content and, depending on the stream parameter, either yields the response through a generator or returns the complete response directly.
- Parameters:
model_response (ModelResponse) -- The initial response object to be processed.
stream (bool, optional) -- Whether to stream the response. Defaults to False.
**kwargs -- Additional keyword arguments (unused in this implementation).
- Returns:
If stream is True, a generator yielding updated ModelResponse objects; otherwise, a modified ModelResponse object with the complete content.
- Return type:
ModelResponse | ModelResponseGen
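A hedged usage sketch follows. The module_name value, the dict-based message format, and the message.content / delta fields read from the response are assumptions for illustration.

```python
from memoryscope.core.models import DummyGenerationModel

# 'dummy_generation' is an assumed module_name registry key.
model = DummyGenerationModel(model_name="dummy", module_name="dummy_generation")

# Non-streaming: pass a raw prompt and receive a single ModelResponse.
response = model.call(stream=False, prompt="Hello, world")
print(response.message.content)  # 'message.content' is an assumed field

# Alternatively, pass a chat-style message history instead of a prompt.
response = model.call(
    stream=False,
    messages=[{"role": "user", "content": "Hello, world"}],
)

# Streaming: iterate over a generator of incremental ModelResponse objects.
for chunk in model.call(stream=True, prompt="Hello, world"):
    print(chunk.delta, end="")  # 'delta' is an assumed field name
```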
- class memoryscope.core.models.LlamaIndexEmbeddingModel(*args, **kwargs)[source]
Bases: BaseModel
Manages text embeddings using DashScopeEmbedding within the LlamaIndex framework, supporting both synchronous and asynchronous embedding operations. Inherits from BaseModel.
- classmethod register_model(model_name: str, model_class: type)[source]
Registers a new embedding model class with the model registry.
- Parameters:
model_name (str) -- The name to register the model under.
model_class (type) -- The class of the model to register.
- before_call(model_response: ModelResponse, **kwargs)[source]
- after_call(model_response: ModelResponse, **kwargs) → ModelResponse[source]
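For example, register_model can extend the registry with another LlamaIndex embedding backend. The sketch below pairs it with OllamaEmbedding from the llama-index-embeddings-ollama package; the registry key and the suitability of that particular class are assumptions.

```python
from llama_index.embeddings.ollama import OllamaEmbedding

from memoryscope.core.models import LlamaIndexEmbeddingModel

# Map an assumed model-name key onto the embedding class the registry
# should instantiate when that key is requested.
LlamaIndexEmbeddingModel.register_model(
    model_name="ollama_embedding",  # hypothetical registry key
    model_class=OllamaEmbedding,
)
```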
- class memoryscope.core.models.LlamaIndexGenerationModel(*args, **kwargs)[source]
Bases: BaseModel
A generation model within the LlamaIndex framework. It processes input prompts or message histories, selects an appropriate language model service from a registry, and generates text responses in both streaming and non-streaming modes. It encapsulates the logic for formatting these interactions within the context of a memory scope management system.
- before_call(model_response: ModelResponse, **kwargs)[source]
Prepares the input data before making a call to the language model. It accepts either a 'prompt' directly or a list of 'messages'. If 'prompt' is provided, it sets the data accordingly; if 'messages' are provided, it constructs a list of ChatMessage objects from the list. Raises an error if neither 'prompt' nor 'messages' is supplied.
- Parameters:
model_response -- The response object to populate with the prepared input data.
**kwargs -- Arbitrary keyword arguments, including 'prompt' and 'messages'.
- Raises:
RuntimeError -- If neither 'prompt' nor 'messages' is provided.
- after_call(model_response: ModelResponse, stream: bool = False, **kwargs) → ModelResponse | Generator[ModelResponse, None, None][source]
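A hedged call sketch, assuming a DashScope backend: the model_name and module_name values, the dict-based message format, and the message.content field are all illustrative assumptions.

```python
from memoryscope.core.models import LlamaIndexGenerationModel

llm = LlamaIndexGenerationModel(
    model_name="qwen-max",               # assumed DashScope model id
    module_name="dashscope_generation",  # assumed registry key
)

response = llm.call(
    stream=False,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize today's conversation."},
    ],
)
print(response.message.content)  # 'message.content' is an assumed field
```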
- class memoryscope.core.models.LlamaIndexRankModel(*args, **kwargs)[source]
Bases: BaseModel
The LlamaIndexRankModel class reranks documents according to their relevance to a provided query, using the DashScope Rerank model. It transforms document lists and queries into a format compatible with the ranker, manages the ranking process, and assigns rank scores to individual documents.
- before_call(model_response: ModelResponse, **kwargs)[source]
Prepares the necessary data before the ranking call by extracting the query and documents, validating them, and initializing nodes with dummy scores.
- Parameters:
model_response -- The response object to populate with the prepared input data.
**kwargs -- Keyword arguments containing 'query' and 'documents'.
- after_call(model_response: ModelResponse, **kwargs) → ModelResponse[source]
Processes the model response after ranking, assigning calculated rank scores to each document based on its index in the original document list.
- Parameters:
model_response (ModelResponse) -- The initial response from the ranking model.
**kwargs -- Additional keyword arguments (unused).
- Returns:
Updated response with rank scores assigned to the documents.
- Return type:
ModelResponse
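A hedged reranking sketch: the model_name and module_name values and the rank_scores field are assumptions inferred from the descriptions above.

```python
from memoryscope.core.models import LlamaIndexRankModel

ranker = LlamaIndexRankModel(
    model_name="gte-rerank",       # assumed DashScope rerank model id
    module_name="dashscope_rank",  # assumed registry key
)

response = ranker.call(
    query="How do I reset my password?",
    documents=[
        "Passwords can be reset from the account settings page.",
        "Our office is open Monday through Friday.",
    ],
)
# 'rank_scores' is assumed to map each document's original index to a score.
print(response.rank_scores)
```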