memoryscope.core.models.llama_index_generation_model

class memoryscope.core.models.llama_index_generation_model.LlamaIndexGenerationModel(*args, **kwargs)[source]

Bases: BaseModel

This class represents a generation model within the LlamaIndex framework. It processes input prompts or message histories, selects an appropriate language model service from a registry, and generates text responses, supporting both streaming and non-streaming modes. It encapsulates the formatting logic for these interactions within the context of a memory scope management system.
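The streaming/non-streaming dispatch described above can be sketched with a minimal, self-contained mock. The `ModelResponse` and `TinyGenerationModel` classes below are illustrative stand-ins, not the real MemoryScope API; the field names (`text`, `delta`) and the `call` signature are assumptions for the sketch.

```python
# Hypothetical sketch of a generation model that supports both streaming
# and non-streaming modes. All names here are illustrative mocks.
from dataclasses import dataclass
from typing import Generator, Union


@dataclass
class ModelResponse:
    text: str = ""   # accumulated text so far
    delta: str = ""  # the newest chunk (streaming only)


class TinyGenerationModel:
    def call(
        self, prompt: str, stream: bool = False
    ) -> Union[ModelResponse, Generator[ModelResponse, None, None]]:
        chunks = ["Hello", ", ", "world"]  # stand-in for LLM output

        if stream:
            # Streaming mode: yield one ModelResponse per chunk,
            # carrying both the delta and the accumulated text.
            def gen() -> Generator[ModelResponse, None, None]:
                acc = ""
                for chunk in chunks:
                    acc += chunk
                    yield ModelResponse(text=acc, delta=chunk)
            return gen()

        # Non-streaming mode: return the full response at once.
        return ModelResponse(text="".join(chunks))


model = TinyGenerationModel()
full = model.call("hi")                      # one ModelResponse
deltas = [r.delta for r in model.call("hi", stream=True)]
```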

m_type: ModelEnum = 'generation_model'
__init__(*args, **kwargs)[source]
before_call(model_response: ModelResponse, **kwargs)[source]

Prepares the input data before making a call to the language model. It accepts either a 'prompt' directly or a list of 'messages'. If 'prompt' is provided, it sets the data accordingly; if 'messages' are provided, it converts them into a list of ChatMessage objects. Raises an error if neither 'prompt' nor 'messages' is supplied.

Parameters:
  • model_response -- The ModelResponse object to populate with the prepared input data.

  • **kwargs -- Arbitrary keyword arguments including 'prompt' and 'messages'.

Raises:

RuntimeError -- When neither 'prompt' nor 'messages' is provided.
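The prompt-or-messages validation that before_call performs can be sketched as a standalone function. `ChatMessage` and the `prepare_input` helper below are illustrative mocks, not the real MemoryScope or LlamaIndex types; only the dispatch logic (prompt first, then messages, otherwise raise) follows the description above.

```python
# Hypothetical sketch of the before_call input-preparation logic:
# accept either a raw 'prompt' or a list of 'messages' dicts, and
# raise RuntimeError when neither is supplied. Mock types only.
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class ChatMessage:
    role: str
    content: str


def prepare_input(**kwargs) -> Dict[str, Any]:
    prompt: str = kwargs.get("prompt", "")
    messages: List[Dict[str, str]] = kwargs.get("messages", [])

    if prompt:
        # A raw prompt is passed through as-is.
        return {"prompt": prompt}

    if messages:
        # A message history is converted into ChatMessage objects.
        return {
            "messages": [ChatMessage(m["role"], m["content"]) for m in messages]
        }

    raise RuntimeError("prompt and messages are both empty!")
```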

after_call(model_response: ModelResponse, stream: bool = False, **kwargs) ModelResponse | Generator[ModelResponse, None, None][source]
model_name: str
module_name: str
timeout: int
max_retries: int
retry_interval: float
kwargs_filter: bool
raise_exception: bool
context: MemoryscopeContext
kwargs: dict