trinity.explorer package#
Subpackages#
Submodules#
- trinity.explorer.explorer module
  - Explorer
    - Explorer.__init__()
    - Explorer.setup_weight_sync_group()
    - Explorer.setup_model_level_weight_sync_group()
    - Explorer.prepare()
    - Explorer.get_weight()
    - Explorer.explore()
    - Explorer.explore_step()
    - Explorer.need_sync()
    - Explorer.need_eval()
    - Explorer.eval()
    - Explorer.benchmark()
    - Explorer.save_checkpoint()
    - Explorer.sync_weight()
    - Explorer.shutdown()
    - Explorer.is_alive()
    - Explorer.serve()
    - Explorer.get_actor()
- trinity.explorer.explorer_client module
- trinity.explorer.scheduler module
  - TaskWrapper
  - calculate_task_level_metrics()
  - RunnerWrapper
  - sort_batch_id()
  - Scheduler
    - Scheduler.__init__()
    - Scheduler.task_done_callback()
    - Scheduler.start()
    - Scheduler.stop()
    - Scheduler.schedule()
    - Scheduler.dynamic_timeout()
    - Scheduler.get_results()
    - Scheduler.has_step()
    - Scheduler.wait_all()
    - Scheduler.get_key_state()
    - Scheduler.get_runner_state()
    - Scheduler.get_all_state()
    - Scheduler.print_all_state()
- trinity.explorer.workflow_runner module
Module contents#
- class trinity.explorer.Explorer(config: Config)[source]#

  Bases: object

  Responsible for exploring the taskset.
- async explore() → str[source]#

  The timeline of the exploration process:

  ```
           <--------------------------------- one period --------------------------------->
  explorer | <-- step_1 --> | <-- step_2 --> | ... | <-- step_n --> | <--- eval ---> | <-- sync --> |
           |---------------------------------------------------------------------------------------|
  trainer  | <--- idle ---> | <-- step_1 --> | <-- step_2 --> | ... | <-- step_n --> | <-- sync --> |
  ```
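The period structure above can be illustrated with a small, self-contained asyncio sketch. This is only a schematic simulation of the scheduling order (the step count, the `explorer_period` function, and the `asyncio.sleep(0)` placeholders are hypothetical; the real `explore()` drives rollout workers and synchronizes weights with the trainer):

```python
import asyncio

# Schematic simulation of one explorer period: n exploration steps,
# then an evaluation phase, then a weight sync with the trainer.
# The sleeps are placeholders for real work.
async def explorer_period(n_steps: int = 3) -> list[str]:
    events: list[str] = []
    for i in range(1, n_steps + 1):
        await asyncio.sleep(0)      # stands in for one exploration step
        events.append(f"step_{i}")
    await asyncio.sleep(0)          # stands in for the eval phase
    events.append("eval")
    await asyncio.sleep(0)          # stands in for the explorer/trainer sync
    events.append("sync")
    return events

print(asyncio.run(explorer_period()))
```

Note that in the diagram the trainer lags the explorer by one step: while the explorer generates step *k*, the trainer consumes step *k−1*, and both meet at the sync point that ends the period.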
- async get_weight(name: str) → Tensor[source]#

  Get the weight of the loaded model (for checkpoint weight updates).
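A hedged sketch of the call shape of `get_weight`: the method is a coroutine keyed by parameter name. The `FakeExplorer` stub, its parameter names, and the list standing in for the returned `torch.Tensor` are all hypothetical, shown only to illustrate awaiting the API:

```python
import asyncio

# Stub standing in for the real Explorer; the actual get_weight() returns
# a torch.Tensor read from the loaded model (names here are hypothetical).
class FakeExplorer:
    def __init__(self) -> None:
        self._weights = {"lm_head.weight": [0.1, 0.2, 0.3]}

    async def get_weight(self, name: str):
        # The real implementation looks up the named parameter in the model.
        return self._weights[name]

async def fetch(name: str):
    explorer = FakeExplorer()
    return await explorer.get_weight(name)

print(asyncio.run(fetch("lm_head.weight")))
```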
- async serve() → None[source]#

  Run the explorer in serving mode.

  In serving mode, the explorer starts an OpenAI-compatible server to handle requests. Agent applications can be deployed separately and interact with the explorer via the API:
  ```python
  import openai

  client = openai.OpenAI(
      base_url=f"{explorer_server_url}/v1",
      api_key="EMPTY",
  )
  response = client.chat.completions.create(
      model=config.model.model_path,
      messages=[{"role": "user", "content": "Hello!"}],
  )
  ```
- async setup_model_level_weight_sync_group()[source]#

  Set up a weight-sync process group for each model; only used in serving mode.