# trinity.common.models package

## Submodules
- trinity.common.models.mm_utils module
- trinity.common.models.model module
  - InferenceModel: generate(), chat(), logprobs(), convert_messages_to_experience(), prepare(), sync_model(), get_model_version(), get_available_address(), get_api_server_url(), get_model_path()
  - ModelWrapper: __init__(), prepare(), generate(), generate_async(), generate_mm(), generate_mm_async(), chat(), chat_async(), chat_mm(), chat_mm_async(), logprobs(), logprobs_async(), convert_messages_to_experience(), convert_messages_to_experience_async(), model_version, model_version_async, model_path, model_path_async, get_lora_request(), get_lora_request_async(), get_message_token_len(), get_openai_client(), get_openai_async_client(), get_current_load(), sync_model_weights(), extract_experience_from_history(), set_workflow_state(), clean_workflow_state(), get_workflow_state()
  - convert_api_output_to_experience()
  - extract_logprobs()
- trinity.common.models.utils module
  - tokenize_and_mask_messages_hf(), tokenize_and_mask_messages_default(), get_action_mask_method(), get_checkpoint_dir_with_step_num(), get_latest_state_dict(), load_state_dict(), merge_by_placement(), get_verl_checkpoint_info(), load_fsdp_state_dict_from_verl_checkpoint(), load_huggingface_state_dict(), get_megatron_converter()
- trinity.common.models.vllm_model module
- trinity.common.models.vllm_worker module
## Module contents
- trinity.common.models.create_inference_models(config: Config) → Tuple[List[InferenceModel], List[List[InferenceModel]]] [source]
Create `engine_num` rollout models. Each model has `tensor_parallel_size` workers.
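A minimal usage sketch, assuming `config` is an already-populated `trinity.common.config.Config` (how the config is loaded is out of scope here); the variable names for the two returned lists are illustrative, chosen to mirror the return annotation:

```python
# Hedged sketch, not the canonical setup path.
from trinity.common.models import create_inference_models


def setup_rollout(config):
    # Returns engine_num rollout models; each is backed by
    # tensor_parallel_size workers. The second element's nesting follows
    # the return annotation; `auxiliary_models` is an illustrative name.
    rollout_models, auxiliary_models = create_inference_models(config)
    print(f"created {len(rollout_models)} rollout engines")
    return rollout_models, auxiliary_models
```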
- async trinity.common.models.create_debug_inference_model(config: Config) → None [source]
Create inference models for debugging.
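Since this function is a coroutine, it must be awaited (or driven to completion with an event loop). A minimal sketch, again assuming a prepared `config`:

```python
# Hedged sketch: run the debug-model creation coroutine in its own process.
import asyncio

from trinity.common.models import create_debug_inference_model


def launch_debug_models(config):
    # Drives the coroutine to completion in this (separate) process.
    asyncio.run(create_debug_inference_model(config))
```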
- trinity.common.models.get_debug_inference_model(config: Config) → Tuple[InferenceModel, List[InferenceModel]] [source]
Get the inference models for debugging. The models must first be created by `create_debug_inference_model` in another process.
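A matching sketch for the consumer side; the unpacking follows the return annotation, and `auxiliary_models` is again an illustrative name:

```python
# Hedged sketch: fetch the debug models from a second process.
from trinity.common.models import get_debug_inference_model


def attach_debug_models(config):
    # Valid only after create_debug_inference_model has completed
    # in another process; unpacking follows the return annotation.
    model, auxiliary_models = get_debug_inference_model(config)
    # `model` exposes the InferenceModel interface listed above
    # (generate(), chat(), logprobs(), ...).
    return model, auxiliary_models
```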