trinity.explorer package#
Subpackages#
Submodules#
- trinity.explorer.explorer module
  - Explorer
    - Explorer.__init__()
    - Explorer.setup_weight_sync_group()
    - Explorer.setup_model_level_weight_sync_group()
    - Explorer.prepare()
    - Explorer.get_weight()
    - Explorer.explore()
    - Explorer.explore_step()
    - Explorer.need_sync()
    - Explorer.need_eval()
    - Explorer.eval()
    - Explorer.benchmark()
    - Explorer.save_checkpoint()
    - Explorer.sync_weight()
    - Explorer.shutdown()
    - Explorer.is_alive()
    - Explorer.serve()
    - Explorer.get_actor()
- trinity.explorer.explorer_client module
- trinity.explorer.scheduler module
  - TaskWrapper
  - calculate_task_level_metrics()
  - RunnerWrapper
  - sort_batch_id()
  - Scheduler
    - Scheduler.__init__()
    - Scheduler.task_done_callback()
    - Scheduler.start()
    - Scheduler.stop()
    - Scheduler.schedule()
    - Scheduler.dynamic_timeout()
    - Scheduler.get_results()
    - Scheduler.has_step()
    - Scheduler.wait_all()
    - Scheduler.get_key_state()
    - Scheduler.get_runner_state()
    - Scheduler.get_all_state()
    - Scheduler.print_all_state()
- trinity.explorer.workflow_runner module
Module contents#
- class trinity.explorer.Explorer(config: Config)[source]#
Bases: object
Responsible for exploring the taskset.
- async explore() → str[source]#
  The timeline of the exploration process:

  ```
             | <---------------------------- one period ----------------------------> |
  explorer   | <-- step_1 --> | <-- step_2 --> | ... | <-- step_n --> | <-- eval --> | <- sync -> |
             |------------------------------------------------------------------------------------|
  trainer    | <-- idle --> | <-- step_1 --> | <-- step_2 --> | ... | <-- step_n --> | <- sync -> |
  ```
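The alternation in the timeline above can be sketched with plain asyncio. This is an illustrative simulation only: `run_period`, the step count, and the event labels are hypothetical, not Trinity APIs.

```python
import asyncio


async def run_period(steps_per_period: int, eval_enabled: bool = True) -> list[str]:
    """Simulate one explorer period: n explore steps, an optional eval, then a sync."""
    timeline: list[str] = []
    for step in range(1, steps_per_period + 1):
        await asyncio.sleep(0)       # stand-in for one rollout/explore step
        timeline.append(f"step_{step}")
    if eval_enabled:
        timeline.append("eval")      # evaluation runs before the weight sync
    timeline.append("sync")          # sync weights with the trainer
    return timeline


print(asyncio.run(run_period(3)))
# ['step_1', 'step_2', 'step_3', 'eval', 'sync']
```

While the explorer runs step_k of a period, the trainer consumes the previous step's rollouts, which is why its lane in the diagram starts with an idle slot and ends at the same sync barrier.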
- async get_weight(name: str) → Tensor[source]#
Get a weight of the loaded model by name (used for checkpoint-based weight updates).
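A minimal sketch of the name-keyed lookup this method implies; here a plain dict stands in for the loaded model's parameters, and the parameter names and values are hypothetical (the real method returns a `torch.Tensor`).

```python
import asyncio

# Hypothetical stand-in for the loaded model's named parameters.
_WEIGHTS = {
    "embed_tokens.weight": [0.01, -0.02],
    "lm_head.weight": [0.03],
}


async def get_weight(name: str):
    """Fetch one named weight, e.g. for a checkpoint-based weight update."""
    return _WEIGHTS[name]


print(asyncio.run(get_weight("lm_head.weight")))
# [0.03]
```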
- async serve() → None[source]#
Run the explorer in serving mode.
In serving mode, the explorer starts an OpenAI compatible server to handle requests. Agent applications can be deployed separately and interact with the explorer via the API.
```python
import openai

client = openai.OpenAI(
    base_url=f"{explorer_server_url}/v1",
    api_key="EMPTY",
)
response = client.chat.completions.create(
    model=config.model.model_path,
    messages=[{"role": "user", "content": "Hello!"}],
)
```
- async setup_model_level_weight_sync_group()[source]#
Set up a process group for each model; only used in serve mode.