trinity.common.rewards.naive_dapo_score module

This file contains the naive DAPO reward function for math tasks. Adapted from LLM360/Reasoning360.

trinity.common.rewards.naive_dapo_score.normalize_final_answer(final_answer: str) → str[source]

Normalize a final answer to a quantitative reasoning question.

Parameters:

final_answer – The answer string to normalize

Returns:

Normalized answer string
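
A brief usage sketch. The exact canonical form is implementation-dependent; the output shown assumes the common behavior of stripping well-formatted thousands separators, so treat it as illustrative rather than guaranteed:

>>> from trinity.common.rewards.naive_dapo_score import normalize_final_answer
>>> normalize_final_answer("10,000")  # doctest: +SKIP
'10000'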

trinity.common.rewards.naive_dapo_score.timeout(timeout_seconds: int = 8)[source]
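
This helper carries no description in the source; the name and default suggest it bounds the runtime of potentially slow symbolic checks. A hedged sketch assuming decorator-style usage (the decorator pattern here is an assumption, not documented behavior):

>>> from trinity.common.rewards.naive_dapo_score import timeout
>>> @timeout(timeout_seconds=5)  # doctest: +SKIP
... def slow_check(expr: str):
...     import sympy  # guarded so a pathological expression cannot hang scoring
...     return sympy.simplify(expr)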
trinity.common.rewards.naive_dapo_score.count_unknown_letters_in_expr(expr: str)[source]
trinity.common.rewards.naive_dapo_score.should_allow_eval(expr: str)[source]
trinity.common.rewards.naive_dapo_score.are_equal_under_sympy(ground_truth_normalized: str, given_normalized: str)[source]
trinity.common.rewards.naive_dapo_score.split_tuple(expr: str)[source]

Split the elements of a tuple/interval, handling well-formatted commas (thousands separators) in large numbers.
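
An illustrative call. The expected output is an assumption drawn from the description above (the comma inside 1,234 is a thousands separator, not an element delimiter), not verified behavior:

>>> from trinity.common.rewards.naive_dapo_score import split_tuple
>>> split_tuple("(1,234, 5)")  # doctest: +SKIP
['1234', '5']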

trinity.common.rewards.naive_dapo_score.grade_answer(given_answer: str, ground_truth: str) → tuple[bool, str][source]

The answer is considered correct if: (a) it normalizes to the same string as the ground-truth answer, OR (b) sympy can simplify the difference between the two expressions to 0.
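
Criterion (b) can be illustrated with plain sympy (this is the underlying idea, not the module's internal helper):

>>> import sympy
>>> diff = sympy.sympify("(x + 1)**2") - sympy.sympify("x**2 + 2*x + 1")
>>> sympy.simplify(diff) == 0
True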

trinity.common.rewards.naive_dapo_score.match_answer(response)[source]
trinity.common.rewards.naive_dapo_score.compute_score(solution_str: str, ground_truth: str) → float[source]

Compute the reward score for a solution. This draws heavily from the LLM-as-judge and PRIME reward functions.

Parameters:
  • solution_str – The solution string

  • ground_truth – The ground truth answer

Returns:

Reward score (1.0 for correct, 0.0 for incorrect)
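
A hedged usage sketch. The solution string is illustrative, and the assumption that the final answer appears in a \boxed{} wrapper depends on how match_answer extracts answers:

>>> from trinity.common.rewards.naive_dapo_score import compute_score
>>> compute_score("... so the final answer is \\boxed{10}.", "10")  # doctest: +SKIP
1.0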