
template

BasePromptTemplate

Bases: BaseModel

BasePromptTemplate serves as the abstract base class for all prompt template implementations.

This class provides core functionality for parsing structured templates, formatting output schemas, and validating content against defined field requirements. It implements the fundamental patterns for bidirectional conversion between string representations and structured data models.

Attributes:

    reason (str): A field capturing the reasoning trace for decision-making processes

Source code in rm_gallery/core/reward/template.py
class BasePromptTemplate(BaseModel):
    """
    BasePromptTemplate serves as the abstract base class for all prompt template implementations.

    This class provides core functionality for parsing structured templates, formatting output schemas,
    and validating content against defined field requirements. It implements the fundamental patterns
    for bidirectional conversion between string representations and structured data models.

    Attributes:
        reason (str): A field capturing the reasoning trace for decision-making processes
    """

    model_config = ConfigDict(validate_by_alias=True, validate_by_name=True)
    reason: Optional[str] = Field(
        default=None, description="your reasoning trace", alias="think"
    )

    @classmethod
    def _parse(cls, text: str) -> Dict[str, str]:
        """
        Extracts key-value pairs from XML-style tagged text using regex pattern matching.

        This internal method identifies structured patterns in the format <key>value</key>
        and converts them into a dictionary mapping for further processing.

        Args:
            text (str): Input string containing XML-style tagged content

        Returns:
            Dict[str, str]: Dictionary mapping of tag names to corresponding values
        """
        pattern = r"<([^>]+)>(.*)</\1>"
        matches = re.findall(pattern, text, re.DOTALL)
        contents = {match[0]: match[1].strip() for match in matches}
        return contents

    @classmethod
    def parse(cls, text: str) -> "BasePromptTemplate":
        """
        Converts a structured text string into a validated template instance.

        Processes input text through internal parsing mechanism and constructs
        a model instance with validated field values.

        Args:
            text (str): XML-style formatted string containing template data

        Returns:
            BasePromptTemplate: Constructed instance with parsed field values
        """
        contents = cls._parse(text)
        contents.setdefault("think", "")
        return cls(**contents)

    @classmethod
    def schema(cls, enable_thinking: bool = False, **kwargs) -> str:
        """
        Generates a descriptive schema documentation string for the template structure.

        Creates a human-readable documentation showing required fields, their descriptions,
        and proper output formatting requirements.

        Args:
            enable_thinking (bool): Flag to include/exclude thinking field in schema
            **kwargs: Additional parameters passed to schema generation

        Returns:
            str: Formatted schema documentation string with field descriptions
        """
        schema_str = "Note: Ensure all outputs are placed within the tags like <tag> </tag> as required!!!\n"
        for key, property in cls.model_json_schema(by_alias=True)["properties"].items():
            if key == "model_config":
                continue

            if key == "think" and enable_thinking:
                continue

            if key == "think":
                schema_str += f"<reason>\n{property['description']}\n</reason>\n"
            else:
                schema_str += f"<{key}>\n{property['description']}\n</{key}>\n"
        return schema_str

    @classmethod
    def format(cls, enable_thinking: bool = False, **kwargs) -> str:
        """
        Formats provided content into the template's required output structure.

        Takes arbitrary keyword arguments and formats them into the appropriate
        template structure for response generation.

        Args:
            enable_thinking (bool): Flag to control inclusion of reasoning field
            **kwargs: Content to be formatted into template structure

        Returns:
            str: Formatted string ready for model processing
        """
        ...
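
A minimal usage sketch (the ScoreTemplate subclass and its score field below are hypothetical, invented for illustration):

from pydantic import Field

from rm_gallery.core.reward.template import BasePromptTemplate

# Hypothetical subclass: adds one output field on top of the base template.
class ScoreTemplate(BasePromptTemplate):
    score: str = Field(default=..., description="an integer score from 1 to 5")

# schema() renders the expected output tags from the field descriptions.
print(ScoreTemplate.schema())
# Note: Ensure all outputs are placed within the tags like <tag> </tag> as required!!!
# <reason>
# your reasoning trace
# </reason>
# <score>
# an integer score from 1 to 5
# </score>

# parse() recovers a validated instance from an XML-style tagged model reply.
reply = "<reason>Concise and accurate.</reason>\n<score>4</score>"
parsed = ScoreTemplate.parse(reply)
print(parsed.score)  # 4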

format(enable_thinking=False, **kwargs) classmethod

Formats provided content into the template's required output structure.

Takes arbitrary keyword arguments and formats them into the appropriate template structure for response generation.

Parameters:

    enable_thinking (bool, default False): Flag to control inclusion of reasoning field
    **kwargs: Content to be formatted into template structure

Returns:

    str: Formatted string ready for model processing

Source code in rm_gallery/core/reward/template.py
@classmethod
def format(cls, enable_thinking: bool = False, **kwargs) -> str:
    """
    Formats provided content into the template's required output structure.

    Takes arbitrary keyword arguments and formats them into the appropriate
    template structure for response generation.

    Args:
        enable_thinking (bool): Flag to control inclusion of reasoning field
        **kwargs: Content to be formatted into template structure

    Returns:
        str: Formatted string ready for model processing
    """
    ...
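
format is deliberately left unimplemented on the base class; each concrete template assembles its own prompt and typically embeds cls.schema(**kwargs) so the model knows which tags to emit. A rough sketch of such an override (the SummaryTemplate class, its summary field, and the document parameter are illustrative, not part of the library):

from pydantic import Field

from rm_gallery.core.reward.template import BasePromptTemplate

class SummaryTemplate(BasePromptTemplate):
    summary: str = Field(default=..., description="a one-sentence summary")

    @classmethod
    def format(cls, document: str, **kwargs) -> str:
        # Embed the output schema so the model emits the expected tags.
        return f"""# Task Description
Summarize the document below in one sentence.

# Document
{document}

# Output Requirement
{cls.schema(**kwargs)}
"""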

parse(text) classmethod

Converts a structured text string into a validated template instance.

Processes input text through internal parsing mechanism and constructs a model instance with validated field values.

Parameters:

    text (str, required): XML-style formatted string containing template data

Returns:

    BasePromptTemplate: Constructed instance with parsed field values

Source code in rm_gallery/core/reward/template.py
@classmethod
def parse(cls, text: str) -> "BasePromptTemplate":
    """
    Converts a structured text string into a validated template instance.

    Processes input text through internal parsing mechanism and constructs
    a model instance with validated field values.

    Args:
        text (str): XML-style formatted string containing template data

    Returns:
        BasePromptTemplate: Constructed instance with parsed field values
    """
    contents = cls._parse(text)
    contents.setdefault("think", "")
    return cls(**contents)
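
Two concrete calls against the base class itself illustrate the parsing contract:

from rm_gallery.core.reward.template import BasePromptTemplate

# The <think> tag (the field's alias) maps onto the reason field.
t = BasePromptTemplate.parse("<think>Compared both answers step by step.</think>")
print(t.reason)  # Compared both answers step by step.

# A reply without tags is tolerated: the reasoning field defaults to an empty string.
t = BasePromptTemplate.parse("free-form text without tags")
print(repr(t.reason))  # ''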

schema(enable_thinking=False, **kwargs) classmethod

Generates a descriptive schema documentation string for the template structure.

Creates human-readable documentation showing required fields, their descriptions, and the required output formatting.

Parameters:

    enable_thinking (bool, default False): Flag to include/exclude thinking field in schema
    **kwargs: Additional parameters passed to schema generation

Returns:

    str: Formatted schema documentation string with field descriptions

Source code in rm_gallery/core/reward/template.py
@classmethod
def schema(cls, enable_thinking: bool = False, **kwargs) -> str:
    """
    Generates a descriptive schema documentation string for the template structure.

    Creates a human-readable documentation showing required fields, their descriptions,
    and proper output formatting requirements.

    Args:
        enable_thinking (bool): Flag to include/exclude thinking field in schema
        **kwargs: Additional parameters passed to schema generation

    Returns:
        str: Formatted schema documentation string with field descriptions
    """
    schema_str = "Note: Ensure all outputs are placed within the tags like <tag> </tag> as required!!!\n"
    for key, property in cls.model_json_schema(by_alias=True)["properties"].items():
        if key == "model_config":
            continue

        if key == "think" and enable_thinking:
            continue

        if key == "think":
            schema_str += f"<reason>\n{property['description']}\n</reason>\n"
        else:
            schema_str += f"<{key}>\n{property['description']}\n</{key}>\n"
    return schema_str
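
For the base class alone, the generated schema looks as follows; note that enable_thinking=True drops the explicit <reason> tag (presumably because models with a native thinking channel do not need to echo their reasoning in the output):

from rm_gallery.core.reward.template import BasePromptTemplate

print(BasePromptTemplate.schema())
# Note: Ensure all outputs are placed within the tags like <tag> </tag> as required!!!
# <reason>
# your reasoning trace
# </reason>

print(BasePromptTemplate.schema(enable_thinking=True))
# Note: Ensure all outputs are placed within the tags like <tag> </tag> as required!!!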

PrincipleListWiseTemplate

Bases: BasePromptTemplate

Template implementation for principle-based list-wise evaluation tasks.

Designed for comparative evaluation scenarios where multiple answers need to be assessed against defined principles to determine the optimal choice.

Attributes:

    best (int): Index of the best-performing answer according to principles

Source code in rm_gallery/core/reward/template.py
class PrincipleListWiseTemplate(BasePromptTemplate):
    """
    Template implementation for principle-based list-wise evaluation tasks.

    Designed for comparative evaluation scenarios where multiple answers need
    to be assessed against defined principles to determine the optimal choice.

    Attributes:
        best (int): Index of the best-performing answer according to principles
    """

    best: int = Field(
        default=...,
        description="which answer is the best? just give the number here!!!",
    )

    @classmethod
    def parse(cls, text: str):
        """
        Parses text input containing list-wise evaluation results.

        Converts best answer index from string to integer format
        during template instantiation.

        Args:
            text (str): Input string containing XML-style tagged content

        Returns:
            PrincipleListWiseTemplate: Constructed instance with parsed values
        """
        contents = cls._parse(text)
        contents["best"] = int(contents["best"])
        return cls(**contents)

    @classmethod
    def format(
        cls,
        desc: str,
        scenario: str,
        principles: str,
        examples: str,
        query: str,
        context: str,
        answers: List[str],
        **kwargs,
    ) -> str:
        """
        Formats comparative evaluation components into structured prompt template.

        Combines task description, scenario context, principles, and multiple
        candidate answers into standardized prompt format for list-wise evaluation.

        Args:
            desc (str): Task description text
            scenario (str): Scenario context description
            principles (str): List of relevant principles
            examples (str): Example-based guidance
            query (str): Evaluation query text
            context (str): Additional contextual information
            answers (List[str]): List of candidate answers for comparison
            **kwargs: Additional formatting parameters

        Returns:
            str: Formatted prompt string following template requirements
        """
        answer_str = ""
        for i, answer in enumerate(answers):
            answer_str += f"## Answer {i + 1}\n{answer}\n\n"

        if examples:
            examples = f"# Examples\n{examples}\n"

        if scenario:
            scenario = f"\n# Scenario\n{scenario}\n"

        if context:
            context = f"\n# Context\n{context}\n"

        if principles:
            principles = f"# Principles\n{principles}\n"

        return f"""# Task Description
{desc}
{scenario}

{principles}
{examples}

# Query
{query}
{context}

# Answers
{answer_str}

# Output Requirement
{cls.schema(**kwargs)}
"""

format(desc, scenario, principles, examples, query, context, answers, **kwargs) classmethod

Formats comparative evaluation components into structured prompt template.

Combines task description, scenario context, principles, and multiple candidate answers into standardized prompt format for list-wise evaluation.

Parameters:

    desc (str, required): Task description text
    scenario (str, required): Scenario context description
    principles (str, required): List of relevant principles
    examples (str, required): Example-based guidance
    query (str, required): Evaluation query text
    context (str, required): Additional contextual information
    answers (List[str], required): List of candidate answers for comparison
    **kwargs: Additional formatting parameters

Returns:

    str: Formatted prompt string following template requirements

Source code in rm_gallery/core/reward/template.py
    @classmethod
    def format(
        cls,
        desc: str,
        scenario: str,
        principles: str,
        examples: str,
        query: str,
        context: str,
        answers: List[str],
        **kwargs,
    ) -> str:
        """
        Formats comparative evaluation components into structured prompt template.

        Combines task description, scenario context, principles, and multiple
        candidate answers into standardized prompt format for list-wise evaluation.

        Args:
            desc (str): Task description text
            scenario (str): Scenario context description
            principles (str): List of relevant principles
            examples (str): Example-based guidance
            query (str): Evaluation query text
            context (str): Additional contextual information
            answers (List[str]): List of candidate answers for comparison
            **kwargs: Additional formatting parameters

        Returns:
            str: Formatted prompt string following template requirements
        """
        answer_str = ""
        for i, answer in enumerate(answers):
            answer_str += f"## Answer {i + 1}\n{answer}\n\n"

        if examples:
            examples = f"# Examples\n{examples}\n"

        if scenario:
            scenario = f"\n# Scenario\n{scenario}\n"

        if context:
            context = f"\n# Context\n{context}\n"

        if principles:
            principles = f"# Principles\n{principles}\n"

        return f"""# Task Description
{desc}
{scenario}

{principles}
{examples}

# Query
{query}
{context}

# Answers
{answer_str}

# Output Requirement
{cls.schema(**kwargs)}
"""

parse(text) classmethod

Parses text input containing list-wise evaluation results.

Converts best answer index from string to integer format during template instantiation.

Parameters:

    text (str, required): Input string containing XML-style tagged content

Returns:

    PrincipleListWiseTemplate: Constructed instance with parsed values

Source code in rm_gallery/core/reward/template.py
@classmethod
def parse(cls, text: str):
    """
    Parses text input containing list-wise evaluation results.

    Converts best answer index from string to integer format
    during template instantiation.

    Args:
        text (str): Input string containing XML-style tagged content

    Returns:
        PrincipleListWiseTemplate: Constructed instance with parsed values
    """
    contents = cls._parse(text)
    contents["best"] = int(contents["best"])
    return cls(**contents)
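
Unlike the base class, this override does not supply a default for the reasoning tag (the reason field simply stays None when it is absent), and it requires a parseable <best> tag: a missing tag raises KeyError and a non-numeric one raises ValueError.

from rm_gallery.core.reward.template import PrincipleListWiseTemplate

result = PrincipleListWiseTemplate.parse("<best>2</best>")
print(result.best)    # 2
print(result.reason)  # None

# A reply without a <best> tag fails fast:
try:
    PrincipleListWiseTemplate.parse("<reason>unsure</reason>")
except KeyError:
    print("no <best> tag in the reply")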

PrinciplePointWiseTemplate

Bases: BasePromptTemplate

Template implementation for principle-based point-wise evaluation tasks.

This template structure is designed for scenarios requiring analysis of principle violations in specific contexts, with support for detailed scenario descriptions and example-based guidance.

Attributes:

    violation (List[str]): List of identified principle violations

Source code in rm_gallery/core/reward/template.py
class PrinciplePointWiseTemplate(BasePromptTemplate):
    """
    Template implementation for principle-based point-wise evaluation tasks.

    This template structure is designed for scenarios requiring analysis of principle
    violations in specific contexts, with support for detailed scenario descriptions
    and example-based guidance.

    Attributes:
        violation (List[str]): List of identified principle violations
    """

    violation: List[str] = Field(
        default=..., description="a list of violated principles"
    )

    @classmethod
    def parse(cls, text: str):
        """
        Parses text input containing principle violation information.

        Processes standard template format and converts violation field
        from string representation to Python list.

        Args:
            text (str): Input string containing XML-style tagged content

        Returns:
            PrinciplePointWiseTemplate: Constructed instance with parsed values
        """
        contents = cls._parse(text)
        try:
            contents["violation"] = eval(contents["violation"])
        except Exception:
            contents["violation"] = []
        return cls(**contents)

    @classmethod
    def format(
        cls,
        desc: str,
        scenario: str,
        principles: str,
        examples: str,
        query: str,
        context: str,
        answer: str,
        **kwargs,
    ) -> str:
        """
        Formats evaluation components into structured prompt template.

        Combines task description, scenario context, principles, and response
        requirements into standardized prompt format.

        Args:
            desc (str): Task description text
            scenario (str): Scenario context description
            principles (str): List of relevant principles
            examples (str): Example-based guidance
            query (str): Evaluation query text
            context (str): Additional contextual information
            answer (str): Reference answer text
            **kwargs: Additional formatting parameters

        Returns:
            str: Formatted prompt string following template requirements
        """
        if examples:
            examples = f"\n# Examples\n{examples}\n"

        if scenario:
            scenario = f"\n# Scenario\n{scenario}\n"

        if context:
            context = f"\n# Context\n{context}\n"

        return f"""# Task Description
{desc}
{scenario}

# Principles
{principles}
{examples}

# Query
{query}
{context}

# Answer
{answer}

# Output Requirement
{cls.schema(**kwargs)}
"""

format(desc, scenario, principles, examples, query, context, answer, **kwargs) classmethod

Formats evaluation components into structured prompt template.

Combines task description, scenario context, principles, and response requirements into standardized prompt format.

Parameters:

    desc (str, required): Task description text
    scenario (str, required): Scenario context description
    principles (str, required): List of relevant principles
    examples (str, required): Example-based guidance
    query (str, required): Evaluation query text
    context (str, required): Additional contextual information
    answer (str, required): Reference answer text
    **kwargs: Additional formatting parameters

Returns:

    str: Formatted prompt string following template requirements

Source code in rm_gallery/core/reward/template.py
    @classmethod
    def format(
        cls,
        desc: str,
        scenario: str,
        principles: str,
        examples: str,
        query: str,
        context: str,
        answer: str,
        **kwargs,
    ) -> str:
        """
        Formats evaluation components into structured prompt template.

        Combines task description, scenario context, principles, and response
        requirements into standardized prompt format.

        Args:
            desc (str): Task description text
            scenario (str): Scenario context description
            principles (str): List of relevant principles
            examples (str): Example-based guidance
            query (str): Evaluation query text
            context (str): Additional contextual information
            answer (str): Reference answer text
            **kwargs: Additional formatting parameters

        Returns:
            str: Formatted prompt string following template requirements
        """
        if examples:
            examples = f"\n# Examples\n{examples}\n"

        if scenario:
            scenario = f"\n# Scenario\n{scenario}\n"

        if context:
            context = f"\n# Context\n{context}\n"

        return f"""# Task Description
{desc}
{scenario}

# Principles
{principles}
{examples}

# Query
{query}
{context}

# Answer
{answer}

# Output Requirement
{cls.schema(**kwargs)}
"""

parse(text) classmethod

Parses text input containing principle violation information.

Processes standard template format and converts violation field from string representation to Python list.

Parameters:

    text (str, required): Input string containing XML-style tagged content

Returns:

    PrinciplePointWiseTemplate: Constructed instance with parsed values

Source code in rm_gallery/core/reward/template.py
@classmethod
def parse(cls, text: str):
    """
    Parses text input containing principle violation information.

    Processes standard template format and converts violation field
    from string representation to Python list.

    Args:
        text (str): Input string containing XML-style tagged content

    Returns:
        PrinciplePointWiseTemplate: Constructed instance with parsed values
    """
    contents = cls._parse(text)
    try:
        contents["violation"] = eval(contents["violation"])
    except Exception:
        contents["violation"] = []
    return cls(**contents)
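
The <violation> tag is interpreted with eval, so the model is expected to emit a Python-style list literal; anything that does not evaluate, including a missing tag, falls back to an empty list.

from rm_gallery.core.reward.template import PrinciplePointWiseTemplate

# Well-formed reply: the tag body is a Python list literal.
ok = PrinciplePointWiseTemplate.parse('<violation>["Be concise"]</violation>')
print(ok.violation)  # ['Be concise']

# Malformed tag body (or missing tag): parsing falls back to an empty list.
bad = PrinciplePointWiseTemplate.parse("<violation>Be concise</violation>")
print(bad.violation)  # []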