Welcome to Trinity-RFT’s documentation!
💡 What is Trinity-RFT?
Trinity-RFT is a flexible, general-purpose framework for reinforcement fine-tuning (RFT) of large language models (LLMs). It decouples the RFT process into three key components (Explorer, Trainer, and Buffer; see the conceptual sketch after the list below) and provides functionality for users with different backgrounds and objectives:
🤖 For agent application developers. [tutorial]
Train agent applications to improve their ability to complete tasks in specific environments.
Examples: Multi-Turn Interaction, ReAct Agent
🧠 For RL algorithm researchers. [tutorial]
Design and validate new reinforcement learning algorithms using compact, plug-and-play modules.
Example: Mixture of SFT and GRPO
📊 For data engineers. [tutorial]
Create task-specific datasets and build data pipelines for cleaning, augmentation, and human-in-the-loop scenarios.
Example: Data Processing
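To make the decoupling concrete, here is a minimal, purely illustrative Python sketch of how an Explorer, a Trainer, and a Buffer can interact. The class and method names below (Explorer, Trainer, ExperienceBuffer, Experience) are assumptions chosen for exposition, not Trinity-RFT’s actual API; see the tutorials linked above for the real interfaces.

```python
# Conceptual sketch of the Explorer / Trainer / Buffer decoupling described above.
# NOTE: all names here are illustrative assumptions, NOT Trinity-RFT's actual API.
from dataclasses import dataclass
from collections import deque
import random


@dataclass
class Experience:
    """One rollout step: prompt, model response, and scalar reward."""
    prompt: str
    response: str
    reward: float


class ExperienceBuffer:
    """Decouples rollout generation from training by buffering experiences."""
    def __init__(self, capacity: int = 1024):
        self._queue = deque(maxlen=capacity)

    def put(self, exp: Experience) -> None:
        self._queue.append(exp)

    def get_batch(self, size: int) -> list[Experience]:
        size = min(size, len(self._queue))
        return [self._queue.popleft() for _ in range(size)]


class Explorer:
    """Generates rollouts (here: toy strings) and writes them into the buffer."""
    def __init__(self, buffer: ExperienceBuffer):
        self.buffer = buffer

    def explore(self, prompts: list[str]) -> None:
        for prompt in prompts:
            response = prompt[::-1]      # stand-in for an LLM rollout
            reward = random.random()     # stand-in for an environment/reward signal
            self.buffer.put(Experience(prompt, response, reward))


class Trainer:
    """Consumes batches from the buffer and performs (mock) policy updates."""
    def __init__(self, buffer: ExperienceBuffer):
        self.buffer = buffer

    def train_step(self, batch_size: int = 4) -> float:
        batch = self.buffer.get_batch(batch_size)
        if not batch:
            return 0.0
        mean_reward = sum(e.reward for e in batch) / len(batch)
        return mean_reward               # a real trainer would update model weights here


if __name__ == "__main__":
    buffer = ExperienceBuffer()
    explorer, trainer = Explorer(buffer), Trainer(buffer)
    explorer.explore(["hello", "world"])                         # rollout phase
    print(f"mean reward in batch: {trainer.train_step():.3f}")   # training phase
```

Because the Explorer only writes to the buffer and the Trainer only reads from it, the two can run in the same loop (synchronous) or as separate processes on separate devices (asynchronous), which is the property the features below build on.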
🌟 Key Features
Flexible RFT Modes:
Supports synchronous/asynchronous, on-policy/off-policy, and online/offline training. Rollout and training can run separately and scale independently across devices.
General Agentic-RL Support:
Supports both concatenated and general multi-turn agentic workflows, and can directly train agent applications developed with agent frameworks such as AgentScope.
Full Lifecycle Data Pipelines:
Enables pipeline processing of rollout and experience data, supporting active management (prioritization, cleaning, augmentation) throughout the RFT lifecycle.
User-Friendly Design:
Modular, decoupled architecture for easy adoption and development. Rich graphical user interfaces enable low-code usage.
Acknowledgements
This project is built upon many excellent open-source projects, including:
verl and PyTorch’s FSDP for LLM training;
vLLM for LLM inference;
Data-Juicer for data processing pipelines;
AgentScope for agentic workflow;
Ray for distributed systems;
We have also drawn inspiration from RL frameworks such as OpenRLHF, TRL, and ChatLearn;
……
Citation
@misc{trinity-rft,
      title={Trinity-RFT: A General-Purpose and Unified Framework for Reinforcement Fine-Tuning of Large Language Models},
      author={Xuchen Pan and Yanxi Chen and Yushuo Chen and Yuchang Sun and Daoyuan Chen and Wenhao Zhang and Yuexiang Xie and Yilun Huang and Yilei Zhang and Dawei Gao and Yaliang Li and Bolin Ding and Jingren Zhou},
      year={2025},
      eprint={2505.17826},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2505.17826},
}