trinity.common.models package
Submodules
- trinity.common.models.mm_utils module
- trinity.common.models.model module
  - InferenceModel
  - ModelWrapper
    - ModelWrapper.__init__()
    - ModelWrapper.prepare()
    - ModelWrapper.generate()
    - ModelWrapper.generate_async()
    - ModelWrapper.generate_mm()
    - ModelWrapper.generate_mm_async()
    - ModelWrapper.chat()
    - ModelWrapper.chat_async()
    - ModelWrapper.chat_mm()
    - ModelWrapper.chat_mm_async()
    - ModelWrapper.logprobs()
    - ModelWrapper.logprobs_async()
    - ModelWrapper.convert_messages_to_experience()
    - ModelWrapper.convert_messages_to_experience_async()
    - ModelWrapper.model_version
    - ModelWrapper.model_version_async
    - ModelWrapper.model_path
    - ModelWrapper.model_path_async
    - ModelWrapper.get_lora_request()
    - ModelWrapper.get_lora_request_async()
    - ModelWrapper.get_openai_client()
    - ModelWrapper.get_openai_async_client()
    - ModelWrapper.get_current_load()
    - ModelWrapper.sync_model_weights()
    - ModelWrapper.extract_experience_from_history()
  - convert_api_output_to_experience()
  - extract_logprobs()
- trinity.common.models.utils module
- trinity.common.models.vllm_model module
  - vLLMRolloutModel
    - vLLMRolloutModel.__init__()
    - vLLMRolloutModel.chat()
    - vLLMRolloutModel.generate()
    - vLLMRolloutModel.chat_mm()
    - vLLMRolloutModel.generate_mm()
    - vLLMRolloutModel.logprobs()
    - vLLMRolloutModel.convert_messages_to_experience()
    - vLLMRolloutModel.shutdown()
    - vLLMRolloutModel.sync_model()
    - vLLMRolloutModel.init_process_group()
    - vLLMRolloutModel.run_api_server()
    - vLLMRolloutModel.get_api_server_url()
    - vLLMRolloutModel.reset_prefix_cache()
    - vLLMRolloutModel.get_model_version()
    - vLLMRolloutModel.get_model_path()
    - vLLMRolloutModel.get_lora_request()
    - vLLMRolloutModel.sleep()
    - vLLMRolloutModel.wake_up()
- trinity.common.models.vllm_worker module
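Taken together, the ModelWrapper entries under trinity.common.models.model form the user-facing inference API: prepare a handle, generate or chat (with async and multimodal variants), query log-probabilities, and convert finished conversations into training experiences. The sketch below shows how these pieces might fit together; the message schema, argument shapes, and return values are assumptions inferred from the method names, not confirmed signatures.

```python
# Minimal usage sketch of ModelWrapper; all call signatures here are
# assumptions based on the method names in the listing above.
from trinity.common.models.model import ModelWrapper


def collect_one_experience(model: ModelWrapper):
    model.prepare()  # attach to the underlying inference engine

    # Chat-style generation over role/content messages (assumed schema).
    messages = [{"role": "user", "content": "Explain KL regularization briefly."}]
    responses = model.chat(messages)

    # Plain completion from a raw prompt string.
    completions = model.generate("The policy gradient objective is")

    # For code that speaks the OpenAI API, the wrapper can hand out a
    # compatible client instead.
    client = model.get_openai_client()

    # Fold the finished conversation into a training Experience.
    return model.convert_messages_to_experience(messages)
```

Each synchronous method used above has an async counterpart in the listing (e.g. chat_async, logprobs_async) for use inside asynchronous workflows.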
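The vLLMRolloutModel entries suggest the engine lifecycle behind that wrapper: serve an OpenAI-compatible endpoint, synchronize weights from the trainer, and sleep/wake to release and reclaim GPU memory between steps. A hedged sketch follows, with engine standing in for a constructed vLLMRolloutModel; the call order and argument-free signatures are assumptions.

```python
# Hypothetical lifecycle for a vLLMRolloutModel instance `engine`;
# construction and any required arguments are omitted.
engine.run_api_server()             # expose an OpenAI-compatible endpoint
url = engine.get_api_server_url()   # hand the URL to workflow clients

engine.sync_model()                 # pull the latest trainer weights
engine.reset_prefix_cache()         # cached prefixes are stale after a sync

engine.sleep()                      # release GPU memory between rollouts
engine.wake_up()                    # reclaim it before the next one
engine.shutdown()                   # tear the engine down
```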
Module contents
- trinity.common.models.create_inference_models(config: Config) → Tuple[List[InferenceModel], List[List[InferenceModel]]]
Create `engine_num` rollout models, each with `tensor_parallel_size` workers.
- trinity.common.models.create_debug_inference_model(config: Config) → None
Create inference models for debugging.
- trinity.common.models.get_debug_inference_model(config: Config) → Tuple[InferenceModel, List[InferenceModel]]
Get the inference models for debugging. The models must first be created by create_debug_inference_model in a separate process; see the sketch below.
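A hedged end-to-end sketch of the three factory functions above, assuming a populated Config object and that the second return value of create_inference_models groups each engine's tensor-parallel workers:

```python
from trinity.common.models import (
    create_debug_inference_model,
    create_inference_models,
    get_debug_inference_model,
)

config = ...  # a populated Config; construction is omitted here

# Normal path: engine_num rollout models, each backed by
# tensor_parallel_size workers.
rollout_models, worker_groups = create_inference_models(config)

# Debug path, process A: create and hold the debug models.
create_debug_inference_model(config)

# Debug path, process B (started afterwards, sharing the same config):
# retrieve a handle to the model and its workers.
model, workers = get_debug_inference_model(config)
```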