trinity.algorithm.policy_loss_fn package

Module contents

class trinity.algorithm.policy_loss_fn.PolicyLossFn(backend: str = 'verl')[source]

Bases: ABC

Abstract base class for policy loss functions.

This class provides the interface for implementing different policy gradient loss functions while handling parameter name mapping between different training frameworks.
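
As an illustration of this interface, here is a minimal sketch of a custom subclass. Only __init__, default_args, and select_keys are documented on this page; the __call__ hook, its tensor arguments, and the key names below are assumptions made for the sketch, not the package's confirmed API.

   import torch
   from typing import Dict, Tuple

   from trinity.algorithm.policy_loss_fn import PolicyLossFn


   class ReinforceLossFn(PolicyLossFn):
       """Hypothetical REINFORCE-style loss, used only to illustrate the interface."""

       def __init__(self, backend: str = "verl") -> None:
           super().__init__(backend=backend)

       @classmethod
       def default_args(cls) -> Dict:
           # The documented classmethod: default init kwargs for this loss.
           return {}

       @property
       def select_keys(self):
           # ASSUMED key names; the real property maps the keys this loss
           # needs to the training backend's naming convention.
           return ("logprob", "advantages", "action_mask")

       def __call__(self, logprob: torch.Tensor, advantages: torch.Tensor,
                    action_mask: torch.Tensor) -> Tuple[torch.Tensor, Dict]:
           # ASSUMPTION: the trainer invokes the loss with per-token
           # log-probs, advantages, and a validity mask.
           loss = -(logprob * advantages * action_mask).sum() / action_mask.sum()
           return loss, {"pg_loss": loss.item()}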

__init__(backend: str = 'verl')[source]

Initialize the policy loss function.

Parameters:

backend – The training framework/backend to use (e.g., “verl”)

abstract classmethod default_args() → Dict[source]

Get default initialization arguments for this loss function.

Returns:

The default init arguments for the policy loss function.

Return type:

Dict

property select_keys

Returns parameter keys mapped to the specific training framework’s naming convention.

class trinity.algorithm.policy_loss_fn.PPOPolicyLossFn(backend: str = 'verl', clip_range: float | None = None, clip_range_low: float | None = None, clip_range_high: float | None = None)[source]

Bases: PolicyLossFn
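
No class docstring is rendered for this loss. For orientation, the clip_range parameters match the standard PPO clipped surrogate, which this class presumably implements (a sketch of the textbook objective, not a confirmed statement about this implementation):

   $$
   \mathcal{L}^{\mathrm{CLIP}}(\theta)
   = -\,\mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\;
     \operatorname{clip}\!\left(r_t(\theta),\, 1-\epsilon_{\mathrm{low}},\, 1+\epsilon_{\mathrm{high}}\right)\hat{A}_t\right)\right],
   \qquad
   r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
   $$

Here ε_low and ε_high correspond to clip_range_low and clip_range_high, and clip_range presumably sets both bounds symmetrically when the one-sided values are not given.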

__init__(backend: str = 'verl', clip_range: float | None = None, clip_range_low: float | None = None, clip_range_high: float | None = None) → None[source]

Initialize the policy loss function.

Parameters:

backend – The training framework/backend to use (e.g., “verl”)

property select_keys

Returns parameter keys mapped to the specific training framework’s naming convention.

classmethod default_args() → Dict[source]

Get default initialization arguments for this loss function.

Returns:

The default init arguments for the policy loss function.

Return type:

Dict
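
A minimal usage sketch based only on the signatures documented above; the printed values are whatever the installed version defines:

   from trinity.algorithm.policy_loss_fn import PPOPolicyLossFn

   # Default init arguments, via the documented classmethod.
   print(PPOPolicyLossFn.default_args())

   # Symmetric clipping at 0.2, a conventional PPO choice.
   loss_fn = PPOPolicyLossFn(backend="verl", clip_range=0.2)

   # Batch keys this loss expects, in verl's naming convention.
   print(loss_fn.select_keys)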

class trinity.algorithm.policy_loss_fn.OPMDPolicyLossFn(backend: str = 'verl', tau: float = 1.0)[source]

Bases: PolicyLossFn

__init__(backend: str = 'verl', tau: float = 1.0) → None[source]

Initialize the policy loss function.

Parameters:

backend – The training framework/backend to use (e.g., “verl”)

classmethod default_args() → Dict[source]

Get default initialization arguments for this loss function.

Returns:

The default init arguments for the policy loss function.

Return type:

Dict

property select_keys

Returns parameter keys mapped to the specific training framework’s naming convention.
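
tau is not described on this page; in mirror-descent-style objectives it usually acts as a regularization temperature, so any reading of its effect is an assumption. Construction itself uses only the documented signature:

   from trinity.algorithm.policy_loss_fn import OPMDPolicyLossFn

   # tau defaults to 1.0; its exact role in the objective is not
   # documented on this page.
   loss_fn = OPMDPolicyLossFn(backend="verl", tau=0.5)
   print(loss_fn.select_keys)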

class trinity.algorithm.policy_loss_fn.DPOLossFn(backend: str = 'verl', beta: float = 0.1, label_smoothing: float = 0.0)[source]

Bases: PolicyLossFn
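
The beta and label_smoothing parameters match the standard (conservative) DPO objective, which this class presumably implements. Sketched below with ε = label_smoothing and (y_w, y_l) the chosen and rejected responses:

   $$
   \mathcal{L}_{\mathrm{DPO}}
   = -(1-\varepsilon)\,\log\sigma(\beta\,\Delta)
     - \varepsilon\,\log\sigma(-\beta\,\Delta),
   \qquad
   \Delta = \log\frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
          - \log\frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
   $$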

__init__(backend: str = 'verl', beta: float = 0.1, label_smoothing: float = 0.0) → None[source]

Initialize the policy loss function.

Parameters:

backend – The training framework/backend to use (e.g., “verl”)

classmethod default_args() → Dict[source]

Get default initialization arguments for this loss function.

Returns:

The default init arguments for the policy loss function.

Return type:

Dict

property select_keys

Returns parameter keys mapped to the specific training framework’s naming convention.

class trinity.algorithm.policy_loss_fn.SFTLossFn(backend: str = 'verl', use_token_level_loss: bool = True)[source]

Bases: PolicyLossFn
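
use_token_level_loss is not explained on this page. A common reading, sketched below with assumed tensor names, is the choice between averaging the masked negative log-likelihood over all valid tokens and averaging per sequence first:

   import torch

   def sft_nll(logprob: torch.Tensor, mask: torch.Tensor,
               use_token_level_loss: bool = True) -> torch.Tensor:
       """Masked NLL under the two averaging conventions; shapes (batch, seq)."""
       nll = -logprob * mask
       if use_token_level_loss:
           # Every valid token counts equally, so longer sequences weigh more.
           return nll.sum() / mask.sum().clamp(min=1)
       # Every sequence counts equally, regardless of its length.
       per_seq = nll.sum(dim=-1) / mask.sum(dim=-1).clamp(min=1)
       return per_seq.mean()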

__init__(backend: str = 'verl', use_token_level_loss: bool = True) → None[source]

Initialize the policy loss function.

Parameters:

backend – The training framework/backend to use (e.g., “verl”)

classmethod default_args()[source]

Get default initialization arguments for this loss function.

Returns:

The default init arguments for the policy loss function.

Return type:

Dict

property select_keys

Returns parameter keys mapped to the specific training framework’s naming convention.

class trinity.algorithm.policy_loss_fn.MIXPolicyLossFn(backend: str = 'verl', mu: float = 0.1, clip_range: float | None = None, clip_range_low: float | None = None, clip_range_high: float | None = None, use_dynamic_bsz: bool | None = None, ppo_mini_batch_size: int = 1, ppo_micro_batch_size_per_gpu: int = 1, ngpus_trainer: int = 1, train_batch_size_usual: int = 1, train_batch_size_expert: int = 1, use_token_level_loss_in_sft: bool = True)[source]

Bases: PolicyLossFn

Implements a mixed policy loss combining GRPO and SFT losses.

This loss function applies different loss components to data based on whether it comes from an expert or not, as indicated by expert_mask. It combines:

  • GRPO loss (self.grpo_loss_fn) for non-expert data

  • SFT loss (self.sft_loss_fn) for expert data

  • A weighting parameter mu that balances the two components

The per-sample weights are normalized using either experience_per_gpu or gradient_accumulation, depending on whether dynamic batch sizing is enabled, to ensure consistent weighting across different batches of experiences of the same type.
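
Schematically, the combination reads as in the sketch below. Whether mu scales only the SFT term or mixes the two terms convexly, and how the per-GPU normalization enters, are assumptions here; the tensor and function names are illustrative:

   import torch

   def mix_loss(grpo_loss_per_sample: torch.Tensor,
                sft_loss_per_sample: torch.Tensor,
                expert_mask: torch.Tensor, mu: float = 0.1) -> torch.Tensor:
       # GRPO on non-expert samples, SFT on expert samples (expert_mask is bool).
       usual = (grpo_loss_per_sample * (~expert_mask)).sum()
       expert = (sft_loss_per_sample * expert_mask).sum()
       # mu trades off the SFT component against the GRPO component.
       return (1.0 - mu) * usual + mu * expert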

__init__(backend: str = 'verl', mu: float = 0.1, clip_range: float | None = None, clip_range_low: float | None = None, clip_range_high: float | None = None, use_dynamic_bsz: bool | None = None, ppo_mini_batch_size: int = 1, ppo_micro_batch_size_per_gpu: int = 1, ngpus_trainer: int = 1, train_batch_size_usual: int = 1, train_batch_size_expert: int = 1, use_token_level_loss_in_sft: bool = True) → None[source]

Initialize the policy loss function.

Parameters:

backend – The training framework/backend to use (e.g., “verl”)

classmethod default_args() → Dict[source]

Get default initialization arguments for this loss function.

Returns:

The default init arguments for the policy loss function.

Return type:

Dict

property select_keys

Returns parameter keys mapped to the specific training framework’s naming convention.

class trinity.algorithm.policy_loss_fn.GSPOLossFn(backend: str = 'verl', clip_range: float | None = None, clip_range_low: float | None = None, clip_range_high: float | None = None)[source]

Bases: PolicyLossFn
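
As with PPOPolicyLossFn, no docstring is rendered here. GSPO (Group Sequence Policy Optimization) is generally defined by a length-normalized sequence-level importance ratio in place of PPO's per-token ratio; the sketch below is the standard form, which the clip_range parameters presumably bound:

   $$
   s_i(\theta) = \exp\!\left(\frac{1}{|y_i|}\sum_{t=1}^{|y_i|}
     \log\frac{\pi_\theta(y_{i,t} \mid x, y_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(y_{i,t} \mid x, y_{i,<t})}\right),
   \qquad
   \mathcal{L}_{\mathrm{GSPO}}
   = -\,\mathbb{E}_i\!\left[\min\!\left(s_i(\theta)\,\hat{A}_i,\;
     \operatorname{clip}\!\left(s_i(\theta),\, 1-\epsilon_{\mathrm{low}},\, 1+\epsilon_{\mathrm{high}}\right)\hat{A}_i\right)\right]
   $$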

__init__(backend: str = 'verl', clip_range: float | None = None, clip_range_low: float | None = None, clip_range_high: float | None = None) → None[source]

Initialize the policy loss function.

Parameters:

backend – The training framework/backend to use (e.g., “verl”)

classmethod default_args() → Dict[source]

Get default initialization arguments for this loss function.

Returns:

The default init arguments for the policy loss function.

Return type:

Dict

property select_keys

Returns parameter keys mapped to the specific training framework’s naming convention.

class trinity.algorithm.policy_loss_fn.MIXCHORDPolicyLossFn(backend: str = 'verl', mu_warmup_steps: int = 0, mu_decay_steps: int = 0, mu_peak: float = 0.1, mu_valley: float = 0.1, enable_phi_function: bool = True, clip_range: float | None = None, clip_range_low: float | None = None, clip_range_high: float | None = None, use_dynamic_bsz: bool | None = None, ppo_mini_batch_size: int = 1, ppo_micro_batch_size_per_gpu: int = 1, ngpus_trainer: int = 1, train_batch_size_usual: int = 1, train_batch_size_expert: int = 1, use_token_level_loss_in_sft: bool = True)[source]

Bases: PolicyLossFn

Implements a mixed policy loss combining GRPO and SFT losses.

This loss function applies different loss components to data based on whether it comes from an expert or not, as indicated by expert_mask. It combines:

  • GRPO loss (self.grpo_loss_fn) for non-expert data

  • SFT loss (self.sft_loss_fn) for expert data

    The weight of the SFT loss is globally controlled by the mu_schedule function, and the tokenwise weights are calculated using different SFT loss formulas.

The per-sample weights are normalized using either experience_per_gpu or gradient_accumulation, depending on whether dynamic batch sizing is enabled, to ensure consistent weighting across different batches of experiences of the same type.
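
The mu_schedule itself is not documented on this page. A plausible reading of the four schedule parameters, warmup to mu_peak followed by decay to mu_valley, is sketched below purely as an assumption:

   def mu_schedule(step: int, mu_warmup_steps: int, mu_decay_steps: int,
                   mu_peak: float, mu_valley: float) -> float:
       """Hypothetical warmup-then-decay schedule for the SFT weight mu."""
       if mu_warmup_steps > 0 and step < mu_warmup_steps:
           # Linear warmup from 0 up to mu_peak.
           return mu_peak * step / mu_warmup_steps
       if mu_decay_steps > 0 and step < mu_warmup_steps + mu_decay_steps:
           # Linear decay from mu_peak down to mu_valley.
           frac = (step - mu_warmup_steps) / mu_decay_steps
           return mu_peak + (mu_valley - mu_peak) * frac
       # After both phases (or with both set to 0), hold at mu_valley.
       return mu_valley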

__init__(backend: str = 'verl', mu_warmup_steps: int = 0, mu_decay_steps: int = 0, mu_peak: float = 0.1, mu_valley: float = 0.1, enable_phi_function: bool = True, clip_range: float | None = None, clip_range_low: float | None = None, clip_range_high: float | None = None, use_dynamic_bsz: bool | None = None, ppo_mini_batch_size: int = 1, ppo_micro_batch_size_per_gpu: int = 1, ngpus_trainer: int = 1, train_batch_size_usual: int = 1, train_batch_size_expert: int = 1, use_token_level_loss_in_sft: bool = True) → None[source]

Initialize the policy loss function.

Parameters:

backend – The training framework/backend to use (e.g., “verl”)

classmethod default_args() → Dict[source]

Get default initialization arguments for this loss function.

Returns:

The default init arguments for the policy loss function, including mu_warmup_steps, mu_decay_steps, mu_peak, and mu_valley.

Return type:

Dict

property select_keys

Returns parameter keys mapped to the specific training framework’s naming convention.

class trinity.algorithm.policy_loss_fn.SFTISLossFn(backend: str = 'verl', use_token_level_loss: bool = True)[source]

Bases: PolicyLossFn

SFT loss with importance sampling
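
Only the one-line docstring above is given. The usual importance-sampled form of an SFT loss reweights each token's negative log-likelihood by a (typically stop-gradient) ratio between the trained policy and the policy that produced the data; the exact behavior distribution used here is an assumption:

   $$
   \mathcal{L}_{\text{SFT-IS}}
   = -\,\mathbb{E}_t\!\left[w_t \,\log\pi_\theta(y_t \mid x, y_{<t})\right],
   \qquad
   w_t = \operatorname{sg}\!\left(
     \frac{\pi_\theta(y_t \mid x, y_{<t})}{\pi_{\mathrm{behavior}}(y_t \mid x, y_{<t})}\right)
   $$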

__init__(backend: str = 'verl', use_token_level_loss: bool = True) → None[source]

Initialize the policy loss function.

Parameters:

backend – The training framework/backend to use (e.g., “verl”)

classmethod default_args()[source]

Get default initialization arguments for this loss function.

Returns:

The default init arguments for the policy loss function.

Return type:

Dict

property select_keys

Returns parameter keys mapped to the specific training framework’s naming convention.

class trinity.algorithm.policy_loss_fn.SFTPhiLossFn(backend: str = 'verl', use_token_level_loss: bool = True, cutoff_prob: float = 1.0)[source]

Bases: PolicyLossFn

SFT loss with transformed phi function
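
The phi transform is not specified on this page. One choice used in this family of mixed-training methods weights each token's negative log-likelihood by φ(p) = p(1 - p) of the token probability, with cutoff_prob presumably capping that probability; both the transform and the role of cutoff_prob are assumptions here:

   $$
   \mathcal{L}_{\text{SFT-}\phi}
   = -\,\mathbb{E}_t\!\left[\phi(p_t)\,\log\pi_\theta(y_t \mid x, y_{<t})\right],
   \qquad
   \phi(p) = p\,(1-p),
   \quad
   p_t = \min\!\left(\pi_\theta(y_t \mid x, y_{<t}),\ \mathrm{cutoff\_prob}\right)
   $$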

__init__(backend: str = 'verl', use_token_level_loss: bool = True, cutoff_prob: float = 1.0) → None[source]

Initialize the policy loss function.

Parameters:

backend – The training framework/backend to use (e.g., “verl”)

classmethod default_args()[source]

Get default initialization arguments for this loss function.

Returns:

The default init arguments for the policy loss function.

Return type:

Dict

property select_keys

Returns parameter keys mapped to the specific training framework’s naming convention.