AgentOhana: Design unified data and training pipeline for effective agent learning

J Zhang, T Lan, R Murthy, Z Liu, W Yao, M Zhu, J Tan, T Hoang, Z Liu, L Yang, Y Feng. arXiv preprint arXiv:2402.15506, 2024. arxiv.org
Autonomous agents powered by large language models (LLMs) have garnered significant research attention. However, fully harnessing the potential of LLMs for agent-based tasks presents inherent challenges due to the heterogeneous nature of diverse data sources featuring multi-turn trajectories. In this paper, we introduce \textbf{AgentOhana} as a comprehensive solution to address these challenges. \textit{AgentOhana} aggregates agent trajectories from distinct environments, spanning a wide array of scenarios. It meticulously standardizes and unifies these trajectories into a consistent format, streamlining the creation of a generic data loader optimized for agent training. Leveraging the data unification, our training pipeline maintains equilibrium across different data sources and preserves independent randomness across devices during dataset partitioning and model training. Additionally, we present \textbf{xLAM-v0.1}, a large action model tailored for AI agents, which demonstrates exceptional performance across various benchmarks. Begin the exploration at \url{https://github.com/SalesforceAIResearch/xLAM}.
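The abstract's central idea, unifying heterogeneous multi-turn agent trajectories into one consistent format so a single data loader can balance them across sources, can be illustrated with a small sketch. The schema below (`Turn`, `UnifiedTrajectory`, `from_webshop`, `balanced_interleave`) is a hypothetical illustration of the approach, not the actual format used by AgentOhana:

```python
from dataclasses import dataclass, field
from itertools import chain, zip_longest
from typing import Any, Dict, List


@dataclass
class Turn:
    role: str      # e.g. "user", "assistant", or "tool"
    content: str


@dataclass
class UnifiedTrajectory:
    source: str                                  # originating environment/dataset
    turns: List[Turn] = field(default_factory=list)


def from_webshop(raw: Dict[str, Any]) -> UnifiedTrajectory:
    # Hypothetical converter: each environment ships its own record layout,
    # so one small adapter per source maps it into the shared schema.
    traj = UnifiedTrajectory(source="webshop")
    for obs, act in zip(raw["observations"], raw["actions"]):
        traj.turns.append(Turn("user", obs))
        traj.turns.append(Turn("assistant", act))
    return traj


def balanced_interleave(*sources: List[UnifiedTrajectory]) -> List[UnifiedTrajectory]:
    # Round-robin across datasets so no single source dominates training,
    # a simple stand-in for the equilibrium the pipeline maintains.
    return [t for t in chain.from_iterable(zip_longest(*sources)) if t is not None]
```

With every source reduced to `UnifiedTrajectory`, the training loop needs only one loader, and per-source sampling weights or interleaving can be tuned independently of any environment's native format.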