trinity.trainer.verl_trainer module

veRL Trainer Class

Modified from verl/trainer/ppo/ray_trainer.py

class trinity.trainer.verl_trainer.VerlPPOTrainerWrapper(global_config: Config)[source]

Bases: RayPPOTrainer, TrainEngineWrapper

A wrapper for verl.trainer.ppo.RayPPOTrainer.

__init__(global_config: Config)[source]

Initialize distributed PPO trainer with Ray backend. Note that this trainer runs on the driver process on a single CPU/GPU node.

Parameters:
  • config – Configuration object containing training parameters.

  • tokenizer – Tokenizer used for encoding and decoding text.

  • role_worker_mapping (dict[Role, WorkerType]) – Mapping from roles to worker classes.

  • resource_pool_manager (ResourcePoolManager) – Manager for Ray resource pools.

  • ray_worker_group_cls (RayWorkerGroup, optional) – Class for Ray worker groups. Defaults to RayWorkerGroup.

  • processor – Optional data processor, used for multimodal data.

  • reward_fn – Function for computing rewards during training.

  • val_reward_fn – Function for computing rewards during validation.

  • train_dataset (Optional[Dataset], optional) – Training dataset. Defaults to None.

  • val_dataset (Optional[Dataset], optional) – Validation dataset. Defaults to None.

  • collate_fn – Function to collate data samples into batches.

  • train_sampler (Optional[Sampler], optional) – Sampler for the training dataset. Defaults to None.

  • device_name (str, optional) – Device name for training (e.g., “cuda”, “cpu”). Defaults to None.

init_workers()[source]

Initialize distributed training workers using Ray backend.

Creates:

  1. Ray resource pools from configuration

  2. Worker groups for each role (actor, critic, etc.)
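The methods above form the driver-side lifecycle: initialize workers, prepare, then iterate train_step. The following is a minimal sketch of that loop, assuming only the method names and signatures documented on this page; the _StubTrainer class and its internals are illustrative stand-ins for VerlPPOTrainerWrapper, not the real implementation.

```python
from typing import Dict, Tuple


class _StubTrainer:
    """Illustrative stand-in for VerlPPOTrainerWrapper (not the real class)."""

    def __init__(self, total_steps: int) -> None:
        self._total_steps = total_steps
        self._step = 0

    def init_workers(self) -> None:
        # In the real wrapper this builds Ray resource pools from the
        # configuration and a worker group for each role (actor, critic, etc.).
        pass

    def prepare(self) -> None:
        # Preparation before training starts.
        pass

    @property
    def train_step_num(self) -> int:
        # Current training step number.
        return self._step

    def train_step(self, batch) -> Tuple[bool, Dict]:
        # Returns (continue_training, metrics), matching the documented
        # Tuple[bool, Dict] return type.
        self._step += 1
        metrics = {"step": self._step}
        return self._step < self._total_steps, metrics

    def save_checkpoint(self, block_until_saved: bool = False) -> None:
        pass


trainer = _StubTrainer(total_steps=3)
trainer.init_workers()
trainer.prepare()
keep_going = True
while keep_going:
    keep_going, metrics = trainer.train_step(batch=None)
trainer.save_checkpoint(block_until_saved=True)
print(trainer.train_step_num)  # 3
```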

property train_step_num: int

Get the current training step number.

prepare()[source]

Perform preparation before training starts.

save_state_dict()[source]

Save only the model state dict, for use by the Synchronizer.

upload_state_dict()[source]

Upload the state dict to Synchronizer.

train_step(batch: Experiences) Tuple[bool, Dict][source]

Run one training step.

Parameters:

batch (Experiences) – A batch of experiences to train.

Returns:

A tuple (continue_training, metrics): whether to continue training, and the metrics of this training step.

Return type:

Tuple[bool, Dict]
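A sketch of consuming the two-element return value; fake_train_step and its metric keys are hypothetical stand-ins for illustration, not the real implementation.

```python
from typing import Dict, Tuple


def fake_train_step(batch) -> Tuple[bool, Dict]:
    # Stand-in for train_step(batch: Experiences) -> Tuple[bool, Dict];
    # the metric names below are invented for this example.
    return True, {"actor/loss": 0.12, "critic/loss": 0.05}


continue_training, metrics = fake_train_step(batch=None)
if not continue_training:
    print("stopping")
print(metrics["actor/loss"])  # 0.12
```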

save_checkpoint(block_until_saved: bool = False) None[source]

Save the checkpoint.

sync_weight() None[source]

Sync the model weights.

sft_to_rft() None[source]