Agent


⚠️ All of the summaries below are generated by large language models and may contain errors; they are for reference only, so use them with caution.
🔴 Note: never use them for serious academic work; they are only meant for a first-pass screening before reading the papers!
💗 If you find our project ChatPaperFree helpful, please give us some encouragement! ⭐️ Try it for free on HuggingFace

2025-10-18 Update

Agentic Design of Compositional Machines

Authors:Wenqian Zhang, Weiyang Liu, Zhen Liu

The design of complex machines stands as both a marker of human intelligence and a foundation of engineering practice. Given recent advances in large language models (LLMs), we ask whether they, too, can learn to create. We approach this question through the lens of compositional machine design: a task in which machines are assembled from standardized components to meet functional demands like locomotion or manipulation in a simulated physical environment. To support this investigation, we introduce BesiegeField, a testbed built on the machine-building game Besiege, which enables part-based construction, physical simulation and reward-driven evaluation. Using BesiegeField, we benchmark state-of-the-art LLMs with agentic workflows and identify key capabilities required for success, including spatial reasoning, strategic assembly, and instruction-following. As current open-source models fall short, we explore reinforcement learning (RL) as a path to improvement: we curate a cold-start dataset, conduct RL finetuning experiments, and highlight open challenges at the intersection of language, machine design, and physical reasoning.
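To make the construct-then-evaluate loop concrete, here is a minimal Python sketch of part-based assembly with a reward-driven score. The part names, the `Machine` container, and the reward weights are invented for illustration; they are not the actual BesiegeField API.

```python
from dataclasses import dataclass

@dataclass
class Part:
    kind: str        # e.g. "block", "wheel", "hinge" (illustrative names)
    position: tuple  # integer grid coordinates (x, y, z)

class Machine:
    def __init__(self):
        self.parts = []

    def add(self, kind, position):
        # Standardized components snap to grid cells; no two parts may overlap.
        if any(p.position == position for p in self.parts):
            raise ValueError(f"cell {position} already occupied")
        self.parts.append(Part(kind, position))

def locomotion_reward(machine, distance_travelled):
    # Reward-driven evaluation: score forward progress in simulation,
    # lightly penalizing part count to favour simpler designs.
    return distance_travelled - 0.1 * len(machine.parts)

m = Machine()
m.add("block", (0, 0, 0))
m.add("wheel", (1, 0, 0))
m.add("wheel", (-1, 0, 0))
print(locomotion_reward(m, 5.0))  # distance minus a small complexity penalty
```

The penalty term is one of many possible shaping choices; the paper's actual reward is defined by the simulated task (e.g. locomotion distance or manipulation success).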

Paper & Project Links

PDF 75 pages, 31 figures, Project Page: https://besiegefield.github.io

Summary

Whether large language models (LLMs) can design complex machines is the question this work examines, through the lens of compositional machine design: assembling machines from standardized components to meet functional demands such as locomotion or manipulation in a simulated physical environment. The authors introduce BesiegeField, a testbed built on the machine-building game Besiege that supports part-based construction, physical simulation, and reward-driven evaluation. Using BesiegeField, they benchmark state-of-the-art LLMs with agentic workflows and identify the key capabilities required for success, including spatial reasoning, strategic assembly, and instruction following. As current open-source models fall short, they explore reinforcement learning (RL) as a path to improvement, curating a cold-start dataset for RL finetuning experiments and highlighting open challenges at the intersection of language, machine design, and physical reasoning.

Key Takeaways

  1. Applying large language models (LLMs) to machine design is gaining attention, approached here through the lens of compositional design.
  2. The BesiegeField testbed supports part-based construction, physical simulation, and reward-driven evaluation on top of a machine-building game.
  3. Benchmarking state-of-the-art LLMs identifies the key capabilities for success: spatial reasoning, strategic assembly, and instruction following.
  4. Current open-source models fall short on machine design and need further improvement.
  5. Reinforcement learning (RL) is explored as a path to improvement, with a curated cold-start dataset used for finetuning experiments.
  6. Many open challenges remain at the intersection of language, machine design, and physical reasoning.

Cool Papers

Click here to view paper screenshots

Information Gain-based Policy Optimization: A Simple and Effective Approach for Multi-Turn LLM Agents

Authors:Guoqing Wang, Sunhao Dai, Guangze Ye, Zeyu Gan, Wei Yao, Yong Deng, Xiaofeng Wu, Zhenzhe Ying

Large language model (LLM)-based agents are increasingly trained with reinforcement learning (RL) to enhance their ability to interact with external environments through tool use, particularly in search-based settings that require multi-turn reasoning and knowledge acquisition. However, existing approaches typically rely on outcome-based rewards that are only provided at the final answer. This reward sparsity becomes particularly problematic in multi-turn settings, where long trajectories exacerbate two critical issues: (i) advantage collapse, where all rollouts receive identical rewards and provide no useful learning signals, and (ii) lack of fine-grained credit assignment, where dependencies between turns are obscured, especially in long-horizon tasks. In this paper, we propose Information Gain-based Policy Optimization (IGPO), a simple yet effective RL framework that provides dense and intrinsic supervision for multi-turn agent training. IGPO models each interaction turn as an incremental process of acquiring information about the ground truth, and defines turn-level rewards as the marginal increase in the policy’s probability of producing the correct answer. Unlike prior process-level reward approaches that depend on external reward models or costly Monte Carlo estimation, IGPO derives intrinsic rewards directly from the model’s own belief updates. These intrinsic turn-level rewards are combined with outcome-level supervision to form dense reward trajectories. Extensive experiments on both in-domain and out-of-domain benchmarks demonstrate that IGPO consistently outperforms strong baselines in multi-turn scenarios, achieving higher accuracy and improved sample efficiency.
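The core of IGPO's turn-level reward is easy to state in code: it is the marginal increase, after each interaction turn, in the policy's probability of producing the ground-truth answer. The sketch below is illustrative; the probability list stands in for an actual model's likelihood of the gold answer given the trajectory so far.

```python
def information_gain_rewards(turn_probs):
    """turn_probs[t] = policy's probability of the correct answer after turn t;
    turn_probs[0] is the belief before any tool call."""
    return [turn_probs[t + 1] - turn_probs[t] for t in range(len(turn_probs) - 1)]

# A 4-turn search trajectory: belief in the gold answer rises as evidence
# accumulates, so each turn's intrinsic reward is its probability increment.
probs = [0.05, 0.10, 0.30, 0.35, 0.80]
rewards = information_gain_rewards(probs)
print(rewards)       # per-turn information gain
print(sum(rewards))  # telescopes to (final - initial) belief
```

Because the per-turn rewards telescope, the dense signal stays consistent with the outcome-level supervision it is combined with; no external reward model or Monte Carlo rollout is needed to compute it.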

Paper & Project Links

PDF

Summary

LLM-based agents are increasingly trained with reinforcement learning (RL) to improve their tool-use interaction with external environments, especially in search settings that require multi-turn reasoning and knowledge acquisition. Existing methods rely on outcome-based rewards given only at the final answer, and in multi-turn settings this sparsity causes two problems: advantage collapse and a lack of fine-grained credit assignment. This paper proposes Information Gain-based Policy Optimization (IGPO), a simple yet effective RL framework that provides dense, intrinsic supervision for multi-turn agent training. IGPO models each interaction turn as an incremental process of acquiring information about the ground truth and defines the turn-level reward as the marginal increase in the policy's probability of producing the correct answer. Experiments show that IGPO outperforms strong baselines in multi-turn scenarios, with higher accuracy and better sample efficiency.

Key Takeaways

  1. LLM-based agents use reinforcement learning to improve interaction with external environments, especially in search settings requiring multi-turn reasoning and knowledge acquisition.
  2. Existing methods suffer from reward sparsity in multi-turn settings, causing advantage collapse and poor fine-grained credit assignment.
  3. The proposed IGPO framework addresses these problems by providing dense, intrinsic supervision.
  4. IGPO models each interaction turn as incrementally acquiring information about the ground truth and defines the turn-level reward as the marginal increase in the policy's probability of producing the correct answer.
  5. IGPO derives intrinsic rewards directly from the model's own belief updates, unlike process-level reward methods that depend on external reward models or costly Monte Carlo estimation.
  6. Experiments show IGPO outperforms existing methods in multi-turn scenarios, with higher accuracy and better sample efficiency.

VLA^2: Empowering Vision-Language-Action Models with an Agentic Framework for Unseen Concept Manipulation

Authors:Han Zhao, Jiaxuan Zhang, Wenxuan Song, Pengxiang Ding, Donglin Wang

Current vision-language-action (VLA) models, pre-trained on large-scale robotic data, exhibit strong multi-task capabilities and generalize well to variations in visual and language instructions for manipulation. However, their success rate drops significantly when faced with object concepts outside the training data, such as unseen object descriptions and textures in the dataset. To address this, we propose a novel agentic framework, VLA^2, which leverages OpenVLA as the execution backbone and effectively leverages external modules such as web retrieval and object detection to provide visual and textual knowledge about target objects to the VLA. This approach mitigates generalization failure when handling out-of-distribution objects. Based on the LIBERO simulation environment, we introduced novel objects and object descriptions to construct a new evaluation benchmark with three difficulty levels to test the effectiveness of our method. Our framework successfully outperformed the current state-of-the-art models on our designed hard-level generalization benchmark. Compared to the standalone OpenVLA baseline, VLA^2 achieves a 44.2% improvement in the success rate in the hard-level benchmark and an average improvement of 20.2% in all customized environments without any performance degradation on in-domain tasks. Project website: https://vla-2.github.io.
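The agentic wrapper described above can be sketched in a few lines: when an instruction mentions a concept the policy was never trained on, external modules (web retrieval, object detection) supply knowledge before the VLA backbone acts. The module functions, concept table, and rewriting scheme below are illustrative stand-ins, not the actual VLA^2 interface.

```python
KNOWN_CONCEPTS = {"mug", "plate", "bowl"}  # concepts seen in training (illustrative)

def retrieve_description(concept):
    # Stand-in for web retrieval of textual knowledge about an unseen concept.
    return {"ramekin": "a small ceramic cup, similar to a bowl"}.get(concept, "")

def ground_instruction(instruction):
    grounded = instruction
    for word in instruction.split():
        w = word.strip(".,").lower()
        if w not in KNOWN_CONCEPTS and (desc := retrieve_description(w)):
            # Rewrite the unseen concept in terms the backbone was trained on.
            grounded = grounded.replace(word, f"{word} ({desc})")
    return grounded

print(ground_instruction("pick up the ramekin"))
```

In the paper the retrieved knowledge is both visual and textual and feeds the OpenVLA backbone; the sketch only shows the text-side grounding idea.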

Paper & Project Links

PDF

Summary

This paper introduces VLA^2, a novel agentic vision-language-action framework that uses OpenVLA as its execution backbone and incorporates external modules such as web retrieval and object detection to address the generalization failures of current VLA models on object concepts outside the training data. Experiments show that VLA^2 significantly outperforms current state-of-the-art models on a purpose-built hard-level generalization benchmark.

Key Takeaways

  1. VLA^2 uses OpenVLA as its execution backbone and adds external modules such as web retrieval and object detection to boost performance.
  2. VLA^2 effectively mitigates the generalization failures of current VLA models on object concepts outside their training data.
  3. On the hard-level generalization benchmark, VLA^2 significantly outperforms current state-of-the-art models.
  4. Compared with the OpenVLA baseline, VLA^2 improves the success rate by 44.2% on the hard-level benchmark and by 20.2% on average across the customized environments, with no degradation on in-domain tasks.
  5. VLA^2 is tested and evaluated in the LIBERO simulation environment, demonstrating its practicality and effectiveness.
  6. A new evaluation benchmark with three difficulty levels was built by introducing novel objects and object descriptions.

Agentic NL2SQL to Reduce Computational Costs

Authors:Dominik Jehle, Lennart Purucker, Frank Hutter

Translating natural language queries into SQL queries (NL2SQL or Text-to-SQL) has recently been empowered by large language models (LLMs). Using LLMs to perform NL2SQL methods on a large collection of SQL databases necessitates processing large quantities of meta-information about the databases, which in turn results in lengthy prompts with many tokens and high processing costs. To address this challenge, we introduce Datalake Agent, an agentic system designed to enable an LLM to solve NL2SQL tasks more efficiently. Instead of utilizing direct solvers for NL2SQL that call the LLM once with all meta-information in the prompt, the Datalake Agent employs an interactive loop to reduce the utilized meta-information. Within the loop, the LLM is used in a reasoning framework that selectively requests only the necessary information to solve a table question answering task. We evaluate the Datalake Agent on a collection of 23 databases with 100 table question answering tasks. The Datalake Agent reduces the tokens used by the LLM by up to 87% and thus allows for substantial cost reductions while maintaining competitive performance.
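The interactive loop described above can be sketched as follows: instead of packing every table schema into one prompt, the agent lets the model request schemas on demand and stops once it emits SQL. The catalog, request protocol, and fake "model" are illustrative inventions, not the actual Datalake Agent interface.

```python
CATALOG = {
    "orders": "orders(id, customer_id, total)",
    "customers": "customers(id, name, country)",
    "logs": "logs(ts, msg)",  # never requested, so never sent to the model
}

def fake_model(question, known_schemas):
    # Stand-in for the LLM: it asks for schemas it lacks, then answers.
    for table in ("orders", "customers"):
        if table not in known_schemas:
            return ("REQUEST", table)
    return ("SQL", "SELECT c.name FROM customers c JOIN orders o ON o.customer_id = c.id")

def agent_loop(question, max_turns=10):
    known = {}
    for _ in range(max_turns):
        kind, payload = fake_model(question, known)
        if kind == "REQUEST":
            known[payload] = CATALOG[payload]  # send only what was asked for
        else:
            return payload, known

sql, used = agent_loop("Which customers placed orders?")
print(sorted(used))  # only 2 of the 3 schemas ever entered the context
```

The token savings come from exactly this selectivity: unused meta-information (here, the `logs` schema) never enters the prompt.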

Paper & Project Links

PDF Accepted at the NeurIPS 2025 Workshop on Efficient Reasoning. 10 pages, 11 figures

Summary

Performing NL2SQL with LLMs over a large collection of databases requires processing large amounts of database meta-information, leading to long prompts and high processing costs. To address this, the authors introduce Datalake Agent, which uses an interactive loop to reduce the meta-information consumed: within a reasoning framework, the LLM selectively requests only the information needed to solve a table question answering task. Evaluated on 100 table question answering tasks over 23 databases, the Datalake Agent cuts the tokens used by the LLM by up to 87% while maintaining competitive performance.

Key Takeaways

  1. LLMs performing NL2SQL tasks face the challenge of processing large amounts of database meta-information.
  2. The Datalake Agent uses an interactive loop to reduce the meta-information required.
  3. The Datalake Agent solves table question answering tasks by selectively requesting only the necessary information.
  4. Across table question answering tasks on multiple databases, the Datalake Agent significantly reduces the tokens used by the LLM.
  5. While reducing token usage, the Datalake Agent maintains competitive performance.
  6. The Datalake Agent is designed to make LLM-based NL2SQL more efficient.

LLM Agents for Automated Web Vulnerability Reproduction: Are We There Yet?

Authors:Bin Liu, Yanjie Zhao, Guoai Xu, Haoyu Wang

Large language model (LLM) agents have demonstrated remarkable capabilities in software engineering and cybersecurity tasks, including code generation, vulnerability discovery, and automated testing. One critical but underexplored application is automated web vulnerability reproduction, which transforms vulnerability reports into working exploits. Although recent advances suggest promising potential, challenges remain in applying LLM agents to real-world web vulnerability reproduction scenarios. In this paper, we present the first comprehensive evaluation of state-of-the-art LLM agents for automated web vulnerability reproduction. We systematically assess 20 agents from software engineering, cybersecurity, and general domains across 16 dimensions, including technical capabilities, environment adaptability, and user experience factors, on 3 representative web vulnerabilities. Based on the results, we select three top-performing agents (OpenHands, SWE-agent, and CAI) for in-depth evaluation on our benchmark dataset of 80 real-world CVEs spanning 7 vulnerability types and 6 web technologies. Our results reveal that while LLM agents achieve reasonable success on simple library-based vulnerabilities, they consistently fail on complex service-based vulnerabilities requiring multi-component environments. Complex environment configurations and authentication barriers create a gap where agents can execute exploit code but fail to trigger actual vulnerabilities. We observe high sensitivity to input guidance, with performance degrading by over 33% under incomplete authentication information. Our findings highlight the significant gap between current LLM agent capabilities and the demands of reliable automated vulnerability reproduction, emphasizing the need for advances in environmental adaptation and autonomous problem-solving capabilities.

Paper & Project Links

PDF

Summary

LLM agents show remarkable capabilities in software engineering and cybersecurity tasks, including code generation, vulnerability discovery, and automated testing. One critical but underexplored application is automated web vulnerability reproduction, which turns vulnerability reports into working exploits. This paper presents the first comprehensive evaluation of state-of-the-art LLM agents for this task, systematically assessing 20 agents from the software engineering, cybersecurity, and general domains across 16 dimensions, including technical capability, environment adaptability, and user experience, on 3 representative web vulnerabilities. The three top performers (OpenHands, SWE-agent, and CAI) are then evaluated in depth on a benchmark of 80 real-world CVEs spanning 7 vulnerability types and 6 web technologies. The results show that while LLM agents succeed reasonably often on simple library-based vulnerabilities, they consistently fail on complex service-based vulnerabilities requiring multi-component environments. Complex environment configuration and authentication barriers create a gap in which agents can execute exploit code but fail to trigger the actual vulnerability. The agents are also highly sensitive to input guidance, with performance degrading by over 33% under incomplete authentication information. These findings highlight the significant gap between current LLM agent capabilities and reliable automated vulnerability reproduction, underscoring the need for advances in environment adaptation and autonomous problem solving.

Key Takeaways

  1. LLM agents show strong capabilities in software engineering and cybersecurity tasks, including code generation, vulnerability discovery, and automated testing.
  2. A key application is automated web vulnerability reproduction, which turns vulnerability reports into working exploits.
  3. Existing LLM agents fall short on complex vulnerability reproduction, especially service-based vulnerabilities that require multi-component environments.
  4. Complex environment configuration and authentication barriers leave agents able to execute exploit code but unable to trigger the actual vulnerability.
  5. LLM agents are highly sensitive to input guidance; incomplete authentication information can degrade performance by over 33%.
  6. A significant gap remains between current LLM agent capabilities and reliable automated vulnerability reproduction.

When Planners Meet Reality: How Learned, Reactive Traffic Agents Shift nuPlan Benchmarks

Authors:Steffen Hagedorn, Luka Donkov, Aron Distelzweig, Alexandru P. Condurache

Planner evaluation in closed-loop simulation often uses rule-based traffic agents, whose simplistic and passive behavior can hide planner deficiencies and bias rankings. Widely used IDM agents simply follow a lead vehicle and cannot react to vehicles in adjacent lanes, hindering tests of complex interaction capabilities. We address this issue by integrating the state-of-the-art learned traffic agent model SMART into nuPlan. Thus, we are the first to evaluate planners under more realistic conditions and quantify how conclusions shift when narrowing the sim-to-real gap. Our analysis covers 14 recent planners and established baselines and shows that IDM-based simulation overestimates planning performance: nearly all scores deteriorate. In contrast, many planners interact better than previously assumed and even improve in multi-lane, interaction-heavy scenarios like lane changes or turns. Methods trained in closed-loop demonstrate the best and most stable driving performance. However, when reaching their limits in augmented edge-case scenarios, all learned planners degrade abruptly, whereas rule-based planners maintain reasonable basic behavior. Based on our results, we suggest SMART-reactive simulation as a new standard closed-loop benchmark in nuPlan and release the SMART agents as a drop-in alternative to IDM at https://github.com/shgd95/InteractiveClosedLoop.
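The limitation of IDM agents is visible directly in the model's equation. Below is a standard (textbook) Intelligent Driver Model acceleration update, not code from this paper: the only inputs are own speed, the lead vehicle's speed, and the gap to it, so nothing in the update can react to vehicles in adjacent lanes. Parameter values are typical defaults chosen for illustration.

```python
import math

def idm_accel(v, v_lead, gap, v0=15.0, T=1.5, a_max=1.5, b=2.0, s0=2.0):
    """v: own speed, v_lead: lead-vehicle speed, gap: bumper-to-bumper distance (m).
    v0: desired speed, T: time headway, a_max/b: accel/comfortable decel, s0: min gap."""
    s_star = s0 + v * T + v * (v - v_lead) / (2 * math.sqrt(a_max * b))
    return a_max * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

# A car merging alongside changes nothing until it becomes the lead vehicle:
print(idm_accel(v=10.0, v_lead=10.0, gap=30.0))  # comfortable gap: gentle acceleration
print(idm_accel(v=10.0, v_lead=5.0, gap=15.0))   # slower, closer lead: braking
```

Learned reactive agents such as SMART condition on the full surrounding scene instead, which is what enables the interaction-heavy scenarios (lane changes, turns) discussed above.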

Paper & Project Links

PDF

Summary

To evaluate autonomous-driving planners under realistic conditions, this work replaces the rule-based IDM traffic agents used in closed-loop simulation with the state-of-the-art learned traffic agent model SMART, integrated into nuPlan. The analysis of 14 recent planners shows that IDM-based simulation overestimates planning performance, and that realistic, reactive interaction matters most in multi-lane, interaction-heavy scenarios. The authors propose SMART-reactive simulation as a new standard closed-loop benchmark in nuPlan and release the SMART agents as a drop-in alternative to IDM, enabling more accurate assessment of planner performance.

Key Takeaways

  1. Evaluation with rule-based traffic agents is flawed and fails to reflect a planner's true performance; IDM agents, which merely follow a lead vehicle, cannot test complex interaction capabilities.
  2. Integrating SMART agents brings evaluation closer to real driving conditions, allowing planning performance to be measured more accurately.

ColorBench: Benchmarking Mobile Agents with Graph-Structured Framework for Complex Long-Horizon Tasks

Authors:Yuanyi Song, Heyuan Huang, Qiqiang Lin, Yin Zhao, Xiangmou Qu, Jun Wang, Xingyu Lou, Weiwen Liu, Zhuosheng Zhang, Jun Wang, Yong Yu, Weinan Zhang, Zhaoxiang Wang

The rapid advancement of multimodal large language models has enabled agents to operate mobile devices by directly interacting with graphical user interfaces, opening new possibilities for mobile automation. However, real-world mobile tasks are often complex and allow for multiple valid solutions. This contradicts current mobile agent evaluation standards: offline static benchmarks can only validate a single predefined “golden path”, while online dynamic testing is constrained by the complexity and non-reproducibility of real devices, making both approaches inadequate for comprehensively assessing agent capabilities. To bridge the gap between offline and online evaluation and enhance testing stability, this paper introduces a novel graph-structured benchmarking framework. By modeling the finite states observed during real-device interactions, it achieves static simulation of dynamic behaviors. Building on this, we develop ColorBench, a benchmark focused on complex long-horizon tasks. It supports evaluation of multiple valid solutions, subtask completion rate statistics, and atomic-level capability analysis. ColorBench contains 175 tasks (74 single-app, 101 cross-app) with an average length of over 13 steps. Each task includes at least two correct paths and several typical error paths, enabling quasi-dynamic interaction. By evaluating ColorBench across various baselines, we discover limitations of existing models and propose improvement directions and feasible technical pathways to enhance agents’ performance on complex, long-horizon problems based on experimental results. Code and data are available at: https://github.com/MadeAgents/ColorBench.
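The graph-structured evaluation idea can be sketched in a few lines: model the finite UI states observed on a real device as a transition graph, then statically replay an agent's action trace and accept any path that reaches the goal state. The state and action names below are invented for illustration; ColorBench's actual graphs are built from real-device interactions.

```python
# (state, action) -> next state; missing entries are typical error paths.
GRAPH = {
    ("home", "open_settings"): "settings",
    ("home", "open_search"): "search",
    ("search", "type_query"): "settings",  # an alternative valid route
    ("settings", "toggle_wifi"): "done",
}

def replay(actions, start="home", goal="done"):
    state = start
    for a in actions:
        state = GRAPH.get((state, a))
        if state is None:
            return False  # transition not in the graph: an error path
    return state == goal

print(replay(["open_settings", "toggle_wifi"]))              # golden path
print(replay(["open_search", "type_query", "toggle_wifi"]))  # second valid path
print(replay(["open_search", "toggle_wifi"]))                # error path
```

Because success is "reached the goal node", any correct path counts, which is exactly what a single predefined golden path cannot express; partial traversal depth also gives natural subtask completion statistics.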

Paper & Project Links

PDF

Summary

The rapid progress of multimodal large language models lets agents operate mobile devices by interacting directly with graphical user interfaces, opening new possibilities for mobile automation. Real-world mobile tasks, however, are complex and admit multiple valid solutions, which conflicts with current evaluation standards: offline static benchmarks validate only a single predefined path, while online dynamic testing is limited by the complexity and non-reproducibility of real devices, so neither comprehensively assesses agent capabilities. To bridge offline and online evaluation and improve testing stability, this paper introduces a graph-structured benchmarking framework that statically simulates dynamic behaviors by modeling the finite states observed during real-device interactions. On top of it, the authors build ColorBench, a benchmark for complex long-horizon tasks that supports evaluating multiple valid solutions, subtask completion rate statistics, and atomic-level capability analysis. Evaluating various baselines on ColorBench reveals limitations of existing models, and the authors propose improvement directions and feasible technical pathways for complex long-horizon problems.

Key Takeaways

  1. Advances in multimodal large language models enable agents to interact directly with graphical user interfaces, opening new opportunities for mobile automation.
  2. Current mobile-agent evaluation standards are challenged because real-world mobile tasks admit multiple valid solutions.
  3. Existing evaluation methods (offline static benchmarks and online dynamic testing) are both inadequate for comprehensively assessing agent capabilities.
  4. To address this, a novel graph-structured benchmarking framework statically simulates dynamic behaviors.
  5. ColorBench focuses on complex long-horizon tasks, supporting evaluation of multiple valid solutions, subtask completion rate statistics, and atomic-level capability analysis.
  6. Evaluation on ColorBench reveals the limitations of existing models.

Agentic Entropy-Balanced Policy Optimization

Authors:Guanting Dong, Licheng Bao, Zhongyuan Wang, Kangzhi Zhao, Xiaoxi Li, Jiajie Jin, Jinghan Yang, Hangyu Mao, Fuzheng Zhang, Kun Gai, Guorui Zhou, Yutao Zhu, Ji-Rong Wen, Zhicheng Dou

Recently, Agentic Reinforcement Learning (Agentic RL) has made significant progress in incentivizing the multi-turn, long-horizon tool-use capabilities of web agents. While mainstream agentic RL algorithms autonomously explore high-uncertainty tool-call steps under the guidance of entropy, excessive reliance on entropy signals can impose further constraints, leading to the training collapse. In this paper, we delve into the challenges caused by entropy and propose the Agentic Entropy-Balanced Policy Optimization (AEPO), an agentic RL algorithm designed to balance entropy in both the rollout and policy update phases. AEPO comprises two core components: (1) a dynamic entropy-balanced rollout mechanism that adaptively allocate global and branch sampling budget through entropy pre-monitoring, while imposing a branch penalty on consecutive high-entropy tool-call steps to prevent over-branching issues; and (2) Entropy-Balanced Policy Optimization that inserts a stop-gradient operation into the high-entropy clipping term to preserve and properly rescale gradients on high-entropy tokens, while incorporating entropy-aware advantage estimation to prioritize learning on high-uncertainty tokens. Results across 14 challenging datasets show that AEPO consistently outperforms 7 mainstream RL algorithms. With just 1K RL samples, Qwen3-14B with AEPO achieves impressive results: 47.6% on GAIA, 11.2% on Humanity’s Last Exam, and 43.0% on WebWalker for Pass@1; 65.0% on GAIA, 26.0% on Humanity’s Last Exam, and 70.0% on WebWalker for Pass@5. Further analysis reveals that AEPO improves rollout sampling diversity while maintaining stable policy entropy, facilitating scalable web agent training.
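The rollout half of AEPO can be sketched as follows: pre-monitor the entropy of each tool-call step, grant extra branches only at high-entropy steps, and apply a branch penalty so that *consecutive* high-entropy steps do not over-branch. The threshold, budget, and all-or-nothing penalty below are illustrative simplifications, not the paper's exact mechanism.

```python
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def allocate_branches(step_dists, budget=4, threshold=1.0):
    grants, prev_high = [], False
    for dist in step_dists:
        high = entropy(dist) > threshold  # entropy pre-monitoring
        if high and not prev_high and budget > 0:
            grants.append(1)  # branch at this uncertain tool-call step
            budget -= 1
        else:
            # low entropy, exhausted budget, or branch penalty on a
            # consecutive high-entropy step
            grants.append(0)
        prev_high = high
    return grants

steps = [
    [0.9, 0.05, 0.05],   # confident step: no branch
    [0.4, 0.3, 0.3],     # uncertain step: branch here
    [0.34, 0.33, 0.33],  # uncertain again: suppressed (consecutive)
    [0.95, 0.05],        # confident
    [0.5, 0.5],          # entropy ln 2 ≈ 0.69 < threshold: no branch
]
print(allocate_branches(steps))
```

The policy-update half (stop-gradient inside the high-entropy clipping term, entropy-aware advantage estimation) operates on token-level gradients and is not shown here.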

Paper & Project Links

PDF Working in progress

Summary
Agentic reinforcement learning (Agentic RL) has recently made significant progress in incentivizing the multi-turn, long-horizon tool-use capabilities of web agents. However, mainstream agentic RL algorithms' excessive reliance on entropy signals can lead to training collapse. This paper examines the challenges entropy causes and proposes Agentic Entropy-Balanced Policy Optimization (AEPO), an agentic RL algorithm that balances entropy in both the rollout and policy-update phases. AEPO has two core components: a dynamic entropy-balanced rollout mechanism and entropy-balanced policy optimization. Experiments show that AEPO consistently outperforms 7 mainstream RL algorithms across 14 challenging datasets.

Key Takeaways

  • Agentic RL has advanced the multi-turn, long-horizon tool-use capabilities of web agents.
  • Mainstream agentic RL algorithms' over-reliance on entropy signals can cause training problems.
  • AEPO balances entropy in both the rollout and policy-update phases.
  • AEPO comprises a dynamic entropy-balanced rollout mechanism and entropy-balanced policy optimization.
  • AEPO outperforms other mainstream RL algorithms across multiple datasets.
  • With only 1K RL samples, Qwen3-14B trained with AEPO achieves impressive results.

AOAD-MAT: Transformer-based multi-agent deep reinforcement learning model considering agents’ order of action decisions

Authors:Shota Takayama, Katsuhide Fujita

Multi-agent reinforcement learning focuses on training the behaviors of multiple learning agents that coexist in a shared environment. Recently, MARL models, such as the Multi-Agent Transformer (MAT) and ACtion dEpendent deep Q-learning (ACE), have significantly improved performance by leveraging sequential decision-making processes. Although these models can enhance performance, they do not explicitly consider the importance of the order in which agents make decisions. In this paper, we propose an Agent Order of Action Decisions-MAT (AOAD-MAT), a novel MAT model that considers the order in which agents make decisions. The proposed model explicitly incorporates the sequence of action decisions into the learning process, allowing the model to learn and predict the optimal order of agent actions. The AOAD-MAT model leverages a Transformer-based actor-critic architecture that dynamically adjusts the sequence of agent actions. To achieve this, we introduce a novel MARL architecture that cooperates with a subtask focused on predicting the next agent to act, integrated into a Proximal Policy Optimization based loss function to synergistically maximize the advantage of the sequential decision-making. The proposed method was validated through extensive experiments on the StarCraft Multi-Agent Challenge and Multi-Agent MuJoCo benchmarks. The experimental results show that the proposed AOAD-MAT model outperforms existing MAT and other baseline models, demonstrating the effectiveness of adjusting the AOAD order in MARL.
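The ordering idea can be sketched generically: instead of a fixed agent order, a learned scorer picks which agent decides next, and each later agent conditions on the actions already chosen. The hand-written scorer and action chooser below are stand-ins for the paper's Transformer subtask that predicts the next agent to act.

```python
def decide_in_learned_order(agents, score_next, choose_action):
    """Greedy sequential decision-making: at each step, the highest-scoring
    remaining agent acts, conditioned on all earlier (agent, action) pairs."""
    remaining, ordered_actions = list(agents), []
    while remaining:
        nxt = max(remaining, key=lambda a: score_next(a, ordered_actions))
        remaining.remove(nxt)
        ordered_actions.append((nxt, choose_action(nxt, ordered_actions)))
    return ordered_actions

# Toy example: the "leader" should commit first so followers can react to it.
score = lambda agent, done: 1.0 if agent == "leader" and not done else 0.0
act = lambda agent, done: "advance" if agent == "leader" else f"cover({done[0][0]})"
print(decide_in_learned_order(["f1", "leader", "f2"], score, act))
```

In AOAD-MAT the scorer is trained jointly with the policy via a PPO-based loss, so the order itself is optimized for the advantage of sequential decision-making rather than fixed by hand as here.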

Paper & Project Links

PDF This manuscript is an extended version of the work accepted as a short paper at the 26th International Conference on Principles and Practice of Multi-Agent Systems (PRIMA 2025). The Version of Record of this contribution is published in Springer’s Lecture Notes in Artificial Intelligence series (LNCS/LNAI)

Summary

This paper proposes AOAD-MAT (Agent Order of Action Decisions-MAT), a multi-agent reinforcement learning model that accounts for the order in which agents make decisions and incorporates that sequence into the learning process, so the model learns and predicts the optimal order of agent actions. A Transformer-based actor-critic architecture, combined with a loss function based on Proximal Policy Optimization, dynamically adjusts the sequence of agent actions. Experiments show that AOAD-MAT performs strongly on the StarCraft Multi-Agent Challenge and Multi-Agent MuJoCo benchmarks.

Key Takeaways

  1. Multi-agent reinforcement learning (MARL) trains the behaviors of multiple learning agents that coexist in a shared environment.
  2. Recent MARL models such as the Multi-Agent Transformer (MAT) and ACtion dEpendent deep Q-learning (ACE) improve performance by leveraging sequential decision-making.
  3. AOAD-MAT is a novel MAT model that considers the order of agents' decisions and explicitly incorporates the action-decision sequence into the learning process.
  4. AOAD-MAT uses a Transformer-based actor-critic architecture that dynamically adjusts the sequence of agent actions.
  5. The model works with a Proximal Policy Optimization based loss function to maximize the advantage of sequential decision-making.
  6. On the StarCraft Multi-Agent Challenge and Multi-Agent MuJoCo benchmarks, AOAD-MAT outperforms existing MAT and baseline models.

Ax-Prover: A Deep Reasoning Agentic Framework for Theorem Proving in Mathematics and Quantum Physics

Authors:Marco Del Tredici, Jacob McCarran, Benjamin Breen, Javier Aspuru Mijares, Weichen Winston Yin, Jacob M. Taylor, Frank H. L. Koppens, Dirk Englund

We present Ax-Prover, a multi-agent system for automated theorem proving in Lean that can solve problems across diverse scientific domains and operate either autonomously or collaboratively with human experts. To achieve this, Ax-Prover approaches scientific problem solving through formal proof generation, a process that demands both creative reasoning and strict syntactic rigor. Ax-Prover meets this challenge by equipping Large Language Models (LLMs), which provide knowledge and reasoning, with Lean tools via the Model Context Protocol (MCP), which ensure formal correctness. To evaluate its performance as an autonomous prover, we benchmark our approach against frontier LLMs and specialized prover models on two public math benchmarks and on two Lean benchmarks we introduce in the fields of abstract algebra and quantum theory. On public datasets, Ax-Prover is competitive with state-of-the-art provers, while it largely outperforms them on the new benchmarks. This shows that, unlike specialized systems that struggle to generalize, our tool-based agentic theorem prover approach offers a generalizable methodology for formal verification across diverse scientific domains. Furthermore, we demonstrate Ax-Prover’s assistant capabilities in a practical use case, showing how it enabled an expert mathematician to formalize the proof of a complex cryptography theorem.
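As a toy illustration of the kind of artifact such a prover must emit, here are two short Lean 4 theorems (generic examples, not taken from the paper's benchmarks): producing them requires both the creative step of choosing a proof term and the strict syntactic rigor the Lean checker enforces.

```lean
-- A minimal arithmetic theorem, discharged by a core library lemma.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A small propositional lemma proved from first principles:
-- destructure the hypothesis and rebuild the conjunction swapped.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun h => ⟨h.right, h.left⟩
```

In Ax-Prover, an LLM proposes such proof terms while Lean tools exposed over MCP check them, so only formally correct proofs survive.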

Paper & Project Links

PDF

Summary

Ax-Prover is a multi-agent system for automated theorem proving in Lean that can solve problems across diverse scientific domains and operate autonomously or in collaboration with human experts. It approaches scientific problem solving through formal proof generation, which demands both creative reasoning and strict syntactic rigor; Ax-Prover meets this challenge by equipping large language models (LLMs) with Lean tools via the Model Context Protocol (MCP) to ensure formal correctness. Benchmarked against frontier LLMs and specialized prover models on two public math benchmarks and two newly introduced Lean benchmarks in abstract algebra and quantum theory, Ax-Prover is competitive with state-of-the-art provers on the public datasets and largely outperforms them on the new benchmarks. Unlike specialized systems that struggle to generalize, this tool-based agentic approach offers a generalizable methodology for formal verification across scientific domains. Ax-Prover also demonstrated its assistant capabilities by helping an expert mathematician formalize the proof of a complex cryptography theorem.

Key Takeaways

  1. Ax-Prover is a multi-agent system for automated theorem proving in Lean that can solve problems across multiple scientific domains.
  2. Ax-Prover approaches scientific problem solving through formal proof generation, which demands both creative reasoning and strict syntactic rigor.
  3. Ax-Prover combines large language models with Lean tools via the Model Context Protocol to ensure formal correctness.
  4. Across several benchmarks Ax-Prover is competitive with frontier systems, and it performs especially well on the newly introduced abstract algebra and quantum theory benchmarks.
  5. Ax-Prover generalizes across scientific domains for formal verification, an advantage over specialized systems.
  6. In a practical use case, Ax-Prover assisted a mathematician in formalizing a complex mathematical proof.

A Comprehensive Survey on Benchmarks and Solutions in Software Engineering of LLM-Empowered Agentic System

Authors:Jiale Guo, Suizhi Huang, Mei Li, Dong Huang, Xingsheng Chen, Regina Zhang, Zhijiang Guo, Han Yu, Siu-Ming Yiu, Christian Jensen, Pietro Lio, Kwok-Yan Lam

The integration of Large Language Models (LLMs) into software engineering has driven a transition from traditional rule-based systems to autonomous agentic systems capable of solving complex problems. However, systematic progress is hindered by a lack of comprehensive understanding of how benchmarks and solutions interconnect. This survey addresses this gap by providing the first holistic analysis of LLM-powered software engineering, offering insights into evaluation methodologies and solution paradigms. We review over 150 recent papers and propose a taxonomy along two key dimensions: (1) Solutions, categorized into prompt-based, fine-tuning-based, and agent-based paradigms, and (2) Benchmarks, including tasks such as code generation, translation, and repair. Our analysis highlights the evolution from simple prompt engineering to sophisticated agentic systems incorporating capabilities like planning, reasoning, memory mechanisms, and tool augmentation. To contextualize this progress, we present a unified pipeline illustrating the workflow from task specification to deliverables, detailing how different solution paradigms address various complexity levels. Unlike prior surveys that focus narrowly on specific aspects, this work connects 50+ benchmarks to their corresponding solution strategies, enabling researchers to identify optimal approaches for diverse evaluation criteria. We also identify critical research gaps and propose future directions, including multi-agent collaboration, self-evolving systems, and formal verification integration. This survey serves as a foundational guide for advancing LLM-driven software engineering. We maintain a GitHub repository that continuously updates the reviewed and related papers at https://github.com/lisaGuojl/LLM-Agent-SE-Survey.

Paper & Project Links

PDF 21 pages

Summary
The integration of large language models (LLMs) into software engineering has driven a transition from traditional rule-based systems to autonomous agentic systems capable of solving complex problems. However, systematic progress is hindered by the lack of a comprehensive understanding of how benchmarks and solutions interconnect. This survey fills that gap with the first holistic analysis of LLM-powered software engineering, covering evaluation methodologies and solution paradigms. Reviewing over 150 recent papers, it proposes a taxonomy along two key dimensions: solutions (prompt-based, fine-tuning-based, and agent-based paradigms) and benchmarks (tasks such as code generation, translation, and repair). The analysis traces the evolution from simple prompt engineering to sophisticated agentic systems with planning, reasoning, memory mechanisms, and tool augmentation, and presents a unified pipeline from task specification to deliverables showing how different solution paradigms address varying complexity levels. Unlike prior surveys with a narrow focus, this work connects more than 50 benchmarks to their corresponding solution strategies, identifies critical research gaps, and proposes future directions including multi-agent collaboration, self-evolving systems, and formal verification integration.

Key Takeaways

  1. Integrating LLMs into software engineering has driven the shift toward autonomous agentic systems that can handle complex problems.
  2. Progress is constrained by the lack of a comprehensive understanding of how benchmarks and solutions interconnect.
  3. This survey provides the first holistic analysis of LLM-powered software engineering.
  4. Its taxonomy covers two key dimensions: solutions and benchmarks.
  5. LLM solutions have evolved from simple prompt engineering to sophisticated agentic systems incorporating many capabilities.
  6. A unified pipeline details how different solution paradigms address tasks of varying complexity.

L2M-AID: Autonomous Cyber-Physical Defense by Fusing Semantic Reasoning of Large Language Models with Multi-Agent Reinforcement Learning (Preprint)

Authors:Tianxiang Xu, Zhichao Wen, Xinyu Zhao, Jun Wang, Yan Li, Chang Liu

The increasing integration of Industrial IoT (IIoT) exposes critical cyber-physical systems to sophisticated, multi-stage attacks that elude traditional defenses lacking contextual awareness. This paper introduces L2M-AID, a novel framework for Autonomous Industrial Defense using LLM-empowered, Multi-agent reinforcement learning. L2M-AID orchestrates a team of collaborative agents, each driven by a Large Language Model (LLM), to achieve adaptive and resilient security. The core innovation lies in the deep fusion of two AI paradigms: we leverage an LLM as a semantic bridge to translate vast, unstructured telemetry into a rich, contextual state representation, enabling agents to reason about adversary intent rather than merely matching patterns. This semantically-aware state empowers a Multi-Agent Reinforcement Learning (MARL) algorithm, MAPPO, to learn complex cooperative strategies. The MARL reward function is uniquely engineered to balance security objectives (threat neutralization) with operational imperatives, explicitly penalizing actions that disrupt physical process stability. To validate our approach, we conduct extensive experiments on the benchmark SWaT dataset and a novel synthetic dataset generated based on the MITRE ATT&CK for ICS framework. Results demonstrate that L2M-AID significantly outperforms traditional IDS, deep learning anomaly detectors, and single-agent RL baselines across key metrics, achieving a 97.2% detection rate while reducing false positives by over 80% and improving response times by a factor of four. Crucially, it demonstrates superior performance in maintaining physical process stability, presenting a robust new paradigm for securing critical national infrastructure.
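The reward design described above — rewarding threat neutralization while explicitly penalizing actions that disrupt physical process stability — can be sketched as a simple shaped reward. The weights, arguments, and function name below are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of the MARL reward shaping described in the abstract:
# a positive security term for neutralizing a threat, minus penalties for
# false positives and for destabilizing the physical process.
# All weights and inputs are illustrative, not values from the paper.

def defense_reward(neutralized: bool, false_positive: bool,
                   process_deviation: float,
                   w_sec: float = 1.0, w_fp: float = 0.5,
                   w_stab: float = 2.0) -> float:
    reward = w_sec if neutralized else 0.0
    if false_positive:
        reward -= w_fp
    # Heavily penalize defensive actions that push the physical
    # process away from its stable operating point.
    reward -= w_stab * process_deviation
    return reward
```

With this shape, an action that neutralizes a threat but perturbs the process (e.g., `defense_reward(True, False, 0.1)`) earns less than a clean neutralization, which is how the framework trades security objectives against operational imperatives.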


Paper and project links

PDF This preprint was submitted to IEEE TrustCom 2025. The accepted version will be published under copyright 2025 IEEE

Summary: As Industrial IoT (IIoT) integration deepens, critical cyber-physical systems face increasingly sophisticated multi-stage attacks that traditional, context-unaware defenses struggle to counter. This paper proposes L2M-AID, an autonomous industrial defense framework that combines LLM-empowered agents with multi-agent reinforcement learning (MARL). L2M-AID uses an LLM as a semantic bridge to translate vast unstructured telemetry into a rich contextual state representation, enabling agents to reason about adversary intent rather than merely match patterns. On top of this semantically aware state, the MAPPO algorithm learns complex cooperative strategies; its reward function is uniquely engineered to balance security objectives (threat neutralization) with operational imperatives, explicitly penalizing actions that disrupt physical process stability. Experiments show that L2M-AID outperforms traditional IDS, deep learning anomaly detectors, and single-agent RL baselines on key metrics, achieving a 97.2% detection rate while cutting false positives by over 80% and improving response times by a factor of four. Notably, it excels at maintaining physical process stability, offering a robust new paradigm for securing critical national infrastructure.

Key Takeaways:

  1. IIoT integration increases the multi-stage attack risks facing critical cyber-physical systems.
  2. The L2M-AID framework uses large language models (LLMs) and multi-agent reinforcement learning (MARL) for adaptive, resilient security defense.
  3. The LLM translates unstructured telemetry into context-rich state representations, helping agents reason about adversary intent.
  4. L2M-AID uses the MAPPO algorithm to learn complex cooperative strategies, with a purpose-built reward function that balances security objectives against operational imperatives.
  5. L2M-AID excels on key metrics such as detection rate, false positives, and response time, achieving a 97.2% detection rate.
  6. L2M-AID performs exceptionally well at maintaining physical process stability.


PerfBench: Can Agents Resolve Real-World Performance Bugs?

Authors:Spandan Garg, Roshanak Zilouchian Moghaddam, Neel Sundaresan

Performance bugs are inefficiencies in software that waste computational resources without causing functional failures, making them particularly challenging to detect and fix. While recent advances in software engineering agents have shown promise in automated bug fixing, existing benchmarks primarily focus on functional correctness and fail to evaluate agents’ abilities to identify and resolve non-functional issues like performance bugs. We introduce PerfBench, a benchmark comprising 81 real-world performance bug-fixing tasks from popular .NET repositories on GitHub. Unlike existing benchmarks that rely on pre-existing test suites, PerfBench features a novel evaluation harness that allows agents to generate their own performance benchmarks and validates fixes by comparing execution metrics collected for the developer fix and the agent fix. Each task in PerfBench is derived from an actual developer fix linked to a performance-related issue, which is then verified by human experts, ensuring real-world relevance. Our evaluation reveals that current state-of-the-art coding agents struggle with performance optimization tasks, with the baseline OpenHands agent achieving only a ~3% success rate on our benchmark. We develop OpenHands-Perf-Agent, which incorporates performance-aware tooling and instructions and achieves a ~20% success rate on the benchmark. We show that by ensuring the agent has proper instructions to benchmark its changes and tooling for benchmark output processing, we can improve the agent performance significantly, but room for improvement still remains. PerfBench provides a challenging test set for furthering the capabilities of agents in fixing performance issues.
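The validation step described above — comparing execution metrics collected for the developer fix and the agent fix — can be sketched as a simple acceptance check. The metric (runtime), tolerance, and function name are illustrative assumptions, not PerfBench's actual criterion.

```python
# Hypothetical sketch of a PerfBench-style validation step: the agent's fix
# is accepted if its measured runtime on the agent-generated benchmark is
# within a tolerance of the developer fix's runtime, or faster.
# The 10% tolerance and runtime metric are illustrative assumptions.

def fix_succeeds(agent_runtime_ms: float, developer_runtime_ms: float,
                 tolerance: float = 0.10) -> bool:
    # Compare the agent fix against the developer fix as the reference.
    return agent_runtime_ms <= developer_runtime_ms * (1.0 + tolerance)
```

Using the developer fix as the reference, rather than a pre-existing test suite, is what lets the harness evaluate non-functional improvement rather than functional correctness alone.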


Paper and project links

PDF

Summary
Performance bugs are inefficiencies in software that waste computational resources without causing functional failures, which makes them hard to detect and fix. While software engineering agents have made progress in automated bug fixing, existing benchmarks focus mainly on functional correctness and do not evaluate agents' ability to identify and resolve non-functional issues such as performance bugs. This paper introduces PerfBench, a benchmark of 81 real-world performance bug-fixing tasks drawn from popular .NET repositories on GitHub. Its evaluation harness lets agents generate their own performance benchmarks and validates fixes by comparing execution metrics collected for the developer fix and the agent fix. Each task derives from an actual developer fix linked to a performance-related issue and is verified by human experts, ensuring real-world relevance. Evaluation shows that state-of-the-art coding agents struggle with performance optimization: the baseline OpenHands agent achieves only a ~3% success rate on the benchmark. The authors develop OpenHands-Perf-Agent, which incorporates performance-aware tooling and instructions and reaches a ~20% success rate. Giving the agent proper instructions to benchmark its changes and tooling to process benchmark output improves its performance significantly, though room for improvement remains. PerfBench offers a challenging test set for advancing agents' ability to fix performance issues.

Key Takeaways:

  1. Performance bugs are a challenging class of software defects: they waste computational resources and are hard to detect and fix.
  2. Existing benchmarks focus on functional correctness and neglect non-functional issues such as performance bugs.
  3. PerfBench introduces real-world performance bug-fixing tasks, verified by human experts to ensure real-world relevance.
  4. Current state-of-the-art coding agents perform poorly on performance optimization; the baseline OpenHands agent succeeds only ~3% of the time.
  5. OpenHands-Perf-Agent adds performance-aware tooling and instructions, raising the benchmark success rate to ~20%.
  6. Equipping the agent with tooling and instructions to benchmark its changes significantly improves its performance.


ABMax: A JAX-based Agent-based Modeling Framework

Authors:Siddharth Chaturvedi, Ahmed El-Gazzar, Marcel van Gerven

Agent-based modeling (ABM) is a principal approach for studying complex systems. By decomposing a system into simpler, interacting agents, ABM allows researchers to observe the emergence of complex phenomena. High-performance array computing libraries like JAX can help scale such computational models to a large number of agents through automatic vectorization and just-in-time (JIT) compilation. One caveat of using JAX to achieve such scaling is that the shapes of the arrays used in the computational model must remain immutable throughout the simulation. In the context of ABM, this constrains agent manipulation operations that require flexible data structures, such as updating a dynamically selected number of agents by applying distinct changes to them during a simulation. To this end, we introduce ABMax, an ABM framework based on JAX that implements multiple JIT-compilable algorithms to provide this functionality. On the canonical predation model benchmark, ABMax achieves runtime performance comparable to state-of-the-art implementations. Further, we show that this functionality can itself be vectorized, making it possible to run many similar agent-based models in parallel. We also present two examples, a traffic-flow model and a financial market model, to illustrate the use of ABMax.
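The fixed-shape constraint and the masked-update idiom it forces can be illustrated with a small sketch. ABMax itself builds on JAX (where this pattern uses `jnp.where` inside JIT-compiled functions); the NumPy version below mirrors the same idea for illustration, and the agent fields and update rule are assumptions, not ABMax's API.

```python
import numpy as np

# Sketch of updating a dynamically selected subset of agents without
# changing array shapes, as JIT compilation requires. In JAX this would
# use jnp.where; NumPy is used here purely for illustration.

energy = np.array([5.0, 1.0, 3.0, 0.5])     # one value per agent
alive = np.array([True, True, True, True])  # fixed-size status array

# Dynamically select agents (e.g., agents that starve this step).
starving = energy < 2.0

# Apply a distinct change only to the selected agents; the arrays keep
# their original shapes, so the update stays JIT-friendly.
alive = np.where(starving, False, alive)
energy = np.where(starving, 0.0, energy - 1.0)
```

Because the selection is expressed as a boolean mask rather than by removing or appending rows, the same compiled update can run every step, and many such models can be vectorized and run in parallel.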


Paper and project links

PDF 8 pages, 7 figures, 4 tables, 2 algorithms

Summary
Agent-based modeling (ABM) is a principal approach for studying complex systems. By decomposing a system into simple, interacting agents, ABM lets researchers observe the emergence of complex phenomena. High-performance array computing libraries such as JAX can scale such models to large numbers of agents through automatic vectorization and just-in-time (JIT) compilation. A caveat of this approach is that the array shapes used in the computational model must remain fixed throughout the simulation, which constrains agent operations that require flexible data structures. ABMax, a JAX-based ABM framework, addresses this by implementing several JIT-compilable algorithms for updating dynamically selected agents. On the canonical predation model benchmark, ABMax achieves runtime performance comparable to state-of-the-art implementations, and this functionality can itself be vectorized, making it possible to run many similar agent-based models in parallel. Two examples, a traffic-flow model and a financial market model, demonstrate its use.

Key Takeaways

  1. Agent-based modeling (ABM) is a core method for studying complex systems, observing emergent phenomena through the interactions of decomposed agents.
  2. High-performance computing libraries such as JAX help scale ABM computations.
  3. Simulating ABMs with JAX requires array shapes to stay fixed, which constrains certain agent operations.
  4. ABMax is a JAX-based ABM framework that supports updating dynamically selected agents.
  5. ABMax achieves efficient runtime performance on the predation model benchmark.
  6. ABMax's functionality can be vectorized, allowing many agent-based models to run in parallel.


Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities

Authors:Gheorghe Comanici, Eric Bieber, Mike Schaekermann, Ice Pasupat, Noveen Sachdeva, Inderjit Dhillon, Marcel Blistein, Ori Ram, Dan Zhang, Evan Rosen, Luke Marris, Sam Petulla, Colin Gaffney, Asaf Aharoni, Nathan Lintz, Tiago Cardal Pais, Henrik Jacobsson, Idan Szpektor, Nan-Jiang Jiang, Krishna Haridasan, Ahmed Omran, Nikunj Saunshi, Dara Bahri, Gaurav Mishra, Eric Chu, Toby Boyd, Brad Hekman, Aaron Parisi, Chaoyi Zhang, Kornraphop Kawintiranon, Tania Bedrax-Weiss, Oliver Wang, Ya Xu, Ollie Purkiss, Uri Mendlovic, Ilaï Deutel, Nam Nguyen, Adam Langley, Flip Korn, Lucia Rossazza, Alexandre Ramé, Sagar Waghmare, Helen Miller, Nathan Byrd, Ashrith Sheshan, Raia Hadsell, Sangnie Bhardwaj, Pawel Janus, Tero Rissa, Dan Horgan, Alvin Abdagic, Lior Belenki, James Allingham, Anima Singh, Theo Guidroz, Srivatsan Srinivasan, Herman Schmit, Kristen Chiafullo, Andre Elisseeff, Nilpa Jha, Prateek Kolhar, Leonard Berrada, Frank Ding, Xiance Si, Shrestha Basu Mallick, Franz Och, Sofia Erell, Eric Ni, Tejasi Latkar, Sherry Yang, Petar Sirkovic, Ziqiang Feng, Robert Leland, Rachel Hornung, Gang Wu, Charles Blundell, Hamidreza Alvari, Po-Sen Huang, Cathy Yip, Sanja Deur, Li Liu, Gabriela Surita, Pablo Duque, Dima Damen, Johnson Jia, Arthur Guez, Markus Mircea, Animesh Sinha, Alberto Magni, Paweł Stradomski, Tal Marian, Vlado Galić, Wenhu Chen, Hisham Husain, Achintya Singhal, Dominik Grewe, François-Xavier Aubet, Shuang Song, Lorenzo Blanco, Leland Rechis, Lewis Ho, Rich Munoz, Kelvin Zheng, Jessica Hamrick, Kevin Mather, Hagai Taitelbaum, Eliza Rutherford, Yun Lei, Kuangyuan Chen, Anand Shukla, Erica Moreira, Eric Doi, Berivan Isik, Nir Shabat, Dominika Rogozińska, Kashyap Kolipaka, Jason Chang, Eugen Vušak, Srinivasan Venkatachary, Shadi Noghabi, Tarun Bharti, Younghoon Jun, Aleksandr Zaks, Simon Green, Jeshwanth Challagundla, William Wong, Muqthar Mohammad, Dean Hirsch, Yong Cheng, Iftekhar Naim, Lev Proleev, Damien Vincent, Aayush Singh, Maxim Krikun, Dilip Krishnan, Zoubin 
Ghahramani, Aviel Atias, Rajeev Aggarwal, Christo Kirov, Dimitrios Vytiniotis, Christy Koh, Alexandra Chronopoulou, Pawan Dogra, Vlad-Doru Ion, Gladys Tyen, Jason Lee, Felix Weissenberger, Trevor Strohman, Ashwin Balakrishna, Jack Rae, Marko Velic, Raoul de Liedekerke, Oded Elyada, Wentao Yuan, Canoee Liu, Lior Shani, Sergey Kishchenko, Bea Alessio, Yandong Li, Richard Song, Sam Kwei, Orion Jankowski, Aneesh Pappu, Youhei Namiki, Yenai Ma, Nilesh Tripuraneni, Colin Cherry, Marissa Ikonomidis, Yu-Cheng Ling, Colin Ji, Beka Westberg, Auriel Wright, Da Yu, David Parkinson, Swaroop Ramaswamy, Jerome Connor, Soheil Hassas Yeganeh, Snchit Grover, George Kenwright, Lubo Litchev, Chris Apps, Alex Tomala, Felix Halim, Alex Castro-Ros, Zefei Li, Anudhyan Boral, Pauline Sho, Michal Yarom, Eric Malmi, David Klinghoffer, Rebecca Lin, Alan Ansell, Pradeep Kumar S, Shubin Zhao, Siqi Zuo, Adam Santoro, Heng-Tze Cheng, Solomon Demmessie, Yuchi Liu, Nicole Brichtova, Allie Culp, Nathaniel Braun, Dan Graur, Will Ng, Nikhil Mehta, Aaron Phillips, Patrik Sundberg, Varun Godbole, Fangyu Liu, Yash Katariya, David Rim, Mojtaba Seyedhosseini, Sean Ammirati, Jonas Valfridsson, Mahan Malihi, Timothy Knight, Andeep Toor, Thomas Lampe, Abe Ittycheriah, Lewis Chiang, Chak Yeung, Alexandre Fréchette, Jinmeng Rao, Huisheng Wang, Himanshu Srivastava, Richard Zhang, Rocky Rhodes, Ariel Brand, Dean Weesner, Ilya Figotin, Felix Gimeno, Rachana Fellinger, Pierre Marcenac, José Leal, Eyal Marcus, Victor Cotruta, Rodrigo Cabrera, Sheryl Luo, Dan Garrette, Vera Axelrod, Sorin Baltateanu, David Barker, Dongkai Chen, Horia Toma, Ben Ingram, Jason Riesa, Chinmay Kulkarni, Yujing Zhang, Hongbin Liu, Chao Wang, Martin Polacek, Will Wu, Kai Hui, Adrian N Reyes, Yi Su, Megan Barnes, Ishaan Malhi, Anfal Siddiqui, Qixuan Feng, Mihai Damaschin, Daniele Pighin, Andreas Steiner, Samuel Yang, Ramya Sree Boppana, Simeon Ivanov, Arun Kandoor, Aditya Shah, Asier Mujika, Da Huang, Christopher A. 
Choquette-Choo, Mohak Patel, Tianhe Yu, Toni Creswell, Jerry, Liu, Catarina Barros, Yasaman Razeghi, Aurko Roy, Phil Culliton, Binbin Xiong, Jiaqi Pan, Thomas Strohmann, Tolly Powell, Babi Seal, Doug DeCarlo, Pranav Shyam, Kaan Katircioglu, Xuezhi Wang, Cassidy Hardin, Immanuel Odisho, Josef Broder, Oscar Chang, Arun Nair, Artem Shtefan, Maura O’Brien, Manu Agarwal, Sahitya Potluri, Siddharth Goyal, Amit Jhindal, Saksham Thakur, Yury Stuken, James Lyon, Kristina Toutanova, Fangxiaoyu Feng, Austin Wu, Ben Horn, Alek Wang, Alex Cullum, Gabe Taubman, Disha Shrivastava, Chongyang Shi, Hamish Tomlinson, Roma Patel, Tao Tu, Ada Maksutaj Oflazer, Francesco Pongetti, Mingyao Yang, Adrien Ali Taïga, Vincent Perot, Nuo Wang Pierse, Feng Han, Yoel Drori, Iñaki Iturrate, Ayan Chakrabarti, Legg Yeung, Dave Dopson, Yi-ting Chen, Apoorv Kulshreshtha, Tongfei Guo, Philip Pham, Tal Schuster, Junquan Chen, Alex Polozov, Jinwei Xing, Huanjie Zhou, Praneeth Kacham, Doron Kukliansky, Antoine Miech, Sergey Yaroshenko, Ed Chi, Sholto Douglas, Hongliang Fei, Mathieu Blondel, Preethi Myla, Lior Madmoni, Xing Wu, Daniel Keysers, Kristian Kjems, Isabela Albuquerque, Lijun Yu, Joel D’sa, Michelle Plantan, Vlad Ionescu, Jaume Sanchez Elias, Abhirut Gupta, Manish Reddy Vuyyuru, Fred Alcober, Tong Zhou, Kaiyang Ji, Florian Hartmann, Subha Puttagunta, Hugo Song, Ehsan Amid, Anca Stefanoiu, Andrew Lee, Paul Pucciarelli, Emma Wang, Amit Raul, Slav Petrov, Isaac Tian, Valentin Anklin, Nana Nti, Victor Gomes, Max Schumacher, Grace Vesom, Alex Panagopoulos, Konstantinos Bousmalis, Daniel Andor, Josh Jacob, Yuan Zhang, Bill Rosgen, Matija Kecman, Matthew Tung, Alexandra Belias, Noah Goodman, Paul Covington, Brian Wieder, Nikita Saxena, Elnaz Davoodi, Muhuan Huang, Sharath Maddineni, Vincent Roulet, Folawiyo Campbell-Ajala, Pier Giuseppe Sessa, Xintian, Wu, Guangda Lai, Paul Collins, Alex Haig, Vytenis Sakenas, Xiaowei Xu, Marissa Giustina, Laurent El Shafey, Pichi Charoenpanit, Shefali Garg, Joshua 
Ainslie, Boone Severson, Montse Gonzalez Arenas, Shreya Pathak, Sujee Rajayogam, Jie Feng, Michiel Bakker, Sheng Li, Nevan Wichers, Jamie Rogers, Xinyang Geng, Yeqing Li, Rolf Jagerman, Chao Jia, Nadav Olmert, David Sharon, Matthew Mauger, Sandeep Mariserla, Hongxu Ma, Megha Mohabey, Kyuyeun Kim, Alek Andreev, Scott Pollom, Juliette Love, Vihan Jain, Priyanka Agrawal, Yannick Schroecker, Alisa Fortin, Manfred Warmuth, Ji Liu, Andrew Leach, Irina Blok, Ganesh Poomal Girirajan, Roee Aharoni, Benigno Uria, Andrei Sozanschi, Dan Goldberg, Lucian Ionita, Marco Tulio Ribeiro, Martin Zlocha, Vighnesh Birodkar, Sami Lachgar, Liangzhe Yuan, Himadri Choudhury, Matt Ginsberg, Fei Zheng, Gregory Dibb, Emily Graves, Swachhand Lokhande, Gabriel Rasskin, George-Cristian Muraru, Corbin Quick, Sandeep Tata, Pierre Sermanet, Aditya Chawla, Itay Karo, Yan Wang, Susan Zhang, Orgad Keller, Anca Dragan, Guolong Su, Ian Chou, Xi Liu, Yiqing Tao, Shruthi Prabhakara, Marc Wilson, Ruibo Liu, Shibo Wang, Georgie Evans, David Du, Alfonso Castaño, Gautam Prasad, Mona El Mahdy, Sebastian Gerlach, Machel Reid, Jarrod Kahn, Amir Zait, Thanumalayan Sankaranarayana Pillai, Thatcher Ulrich, Guanyu Wang, Jan Wassenberg, Efrat Farkash, Kiran Yalasangi, Congchao Wang, Maria Bauza, Simon Bucher, Ting Liu, Jun Yan, Gary Leung, Vikas Sindhwani, Parker Barnes, Avi Singh, Ivan Jurin, Jichuan Chang, Niket Kumar Bhumihar, Sivan Eiger, Gui Citovsky, Ben Withbroe, Zhang Li, Siyang Xue, Niccolò Dal Santo, Georgi Stoyanov, Yves Raimond, Steven Zheng, Yilin Gao, Vít Listík, Sławek Kwasiborski, Rachel Saputro, Adnan Ozturel, Ganesh Mallya, Kushal Majmundar, Ross West, Paul Caron, Jinliang Wei, Lluis Castrejon, Sharad Vikram, Deepak Ramachandran, Nikhil Dhawan, Jiho Park, Sara Smoot, George van den Driessche, Yochai Blau, Chase Malik, Wei Liang, Roy Hirsch, Cicero Nogueira dos Santos, Eugene Weinstein, Aäron van den Oord, Sid Lall, Nicholas FitzGerald, Zixuan Jiang, Xuan Yang, Dale Webster, Ali Elqursh, Aedan Pope, 
Georges Rotival, David Raposo, Wanzheng Zhu, Jeff Dean, Sami Alabed, Dustin Tran, Arushi Gupta, Zach Gleicher, Jessica Austin, Edouard Rosseel, Megh Umekar, Dipanjan Das, Yinghao Sun, Kai Chen, Karolis Misiunas, Xiang Zhou, Yixian Di, Alyssa Loo, Josh Newlan, Bo Li, Vinay Ramasesh, Ying Xu, Alex Chen, Sudeep Gandhe, Radu Soricut, Nikita Gupta, Shuguang Hu, Seliem El-Sayed, Xavier Garcia, Idan Brusilovsky, Pu-Chin Chen, Andrew Bolt, Lu Huang, Alex Gurney, Zhiying Zhang, Alexander Pritzel, Jarek Wilkiewicz, Bryan Seybold, Bhargav Kanagal Shamanna, Felix Fischer, Josef Dean, Karan Gill, Ross Mcilroy, Abhishek Bhowmick, Jeremy Selier, Antoine Yang, Derek Cheng, Vladimir Magay, Jie Tan, Dhriti Varma, Christian Walder, Tomas Kocisky, Ryo Nakashima, Paul Natsev, Mike Kwong, Ionel Gog, Chiyuan Zhang, Sander Dieleman, Thomas Jimma, Andrey Ryabtsev, Siddhartha Brahma, David Steiner, Dayou Du, Ante Žužul, Mislav Žanić, Mukund Raghavachari, Willi Gierke, Zeyu Zheng, Dessie Petrova, Yann Dauphin, Yuchuan Liu, Ido Kessler, Steven Hand, Chris Duvarney, Seokhwan Kim, Hyo Lee, Léonard Hussenot, Jeffrey Hui, Josh Smith, Deepali Jain, Jiawei Xia, Gaurav Singh Tomar, Keyvan Amiri, Du Phan, Fabian Fuchs, Tobias Weyand, Nenad Tomasev, Alexandra Cordell, Xin Liu, Jonathan Mallinson, Pankaj Joshi, Andy Crawford, Arun Suggala, Steve Chien, Nick Fernando, Mariella Sanchez-Vargas, Duncan Williams, Phil Crone, Xiyang Luo, Igor Karpov, Jyn Shan, Terry Thurk, Robin Strudel, Paul Voigtlaender, Piyush Patil, Tim Dozat, Ali Khodaei, Sahil Singla, Piotr Ambroszczyk, Qiyin Wu, Yifan Chang, Brian Roark, Chaitra Hegde, Tianli Ding, Angelos Filos, Zhongru Wu, André Susano Pinto, Shuang Liu, Saarthak Khanna, Aditya Pandey, Siobhan Mcloughlin, Qiujia Li, Sam Haves, Allan Zhou, Elena Buchatskaya, Isabel Leal, Peter de Boursac, Nami Akazawa, Nina Anderson, Terry Chen, Krishna Somandepalli, Chen Liang, Sheela Goenka, Stephanie Winkler, Alexander Grushetsky, Yifan Ding, Jamie Smith, Fan Ye, Jordi Pont-Tuset, 
Eric Li, Ruichao Li, Tomer Golany, Dawid Wegner, Tao Jiang, Omer Barak, Yuan Shangguan, Eszter Vértes, Renee Wong, Jörg Bornschein, Alex Tudor, Michele Bevilacqua, Tom Schaul, Ankit Singh Rawat, Yang Zhao, Kyriakos Axiotis, Lei Meng, Cory McLean, Jonathan Lai, Jennifer Beattie, Nate Kushman, Yaxin Liu, Blair Kutzman, Fiona Lang, Jingchen Ye, Praneeth Netrapalli, Pushkar Mishra, Myriam Khan, Megha Goel, Rob Willoughby, David Tian, Honglei Zhuang, JD Chen, Zak Tsai, Tasos Kementsietsidis, Arjun Khare, James Keeling, Keyang Xu, Nathan Waters, Florent Altché, Ashok Popat, Bhavishya Mittal, David Saxton, Dalia El Badawy, Michael Mathieu, Zheng Zheng, Hao Zhou, Nishant Ranka, Richard Shin, Qingnan Duan, Tim Salimans, Ioana Mihailescu, Uri Shaham, Ming-Wei Chang, Yannis Assael, Nishanth Dikkala, Martin Izzard, Vincent Cohen-Addad, Cat Graves, Vlad Feinberg, Grace Chung, DJ Strouse, Danny Karmon, Sahand Sharifzadeh, Zoe Ashwood, Khiem Pham, Jon Blanton, Alex Vasiloff, Jarred Barber, Mark Geller, Aurick Zhou, Fedir Zubach, Tzu-Kuo Huang, Lei Zhang, Himanshu Gupta, Matt Young, Julia Proskurnia, Ronny Votel, Valentin Gabeur, Gabriel Barcik, Aditya Tripathi, Hongkun Yu, Geng Yan, Beer Changpinyo, Filip Pavetić, Amy Coyle, Yasuhisa Fujii, Jorge Gonzalez Mendez, Tianhao Zhou, Harish Rajamani, Blake Hechtman, Eddie Cao, Da-Cheng Juan, Yi-Xuan Tan, Valentin Dalibard, Yilun Du, Natalie Clay, Kaisheng Yao, Wenhao Jia, Dimple Vijaykumar, Yuxiang Zhou, Xinyi Bai, Wei-Chih Hung, Steven Pecht, Georgi Todorov, Nikhil Khadke, Pramod Gupta, Preethi Lahoti, Arnaud Autef, Karthik Duddu, James Lee-Thorp, Alexander Bykovsky, Tautvydas Misiunas, Sebastian Flennerhag, Santhosh Thangaraj, Jed McGiffin, Zack Nado, Markus Kunesch, Andreas Noever, Amir Hertz, Marco Liang, Victor Stone, Evan Palmer, Samira Daruki, Arijit Pramanik, Siim Põder, Austin Kyker, Mina Khan, Evgeny Sluzhaev, Marvin Ritter, Avraham Ruderman, Wenlei Zhou, Chirag Nagpal, Kiran Vodrahalli, George Necula, Paul Barham, Ellie 
Pavlick, Jay Hartford, Izhak Shafran, Long Zhao, Maciej Mikuła, Tom Eccles, Hidetoshi Shimokawa, Kanav Garg, Luke Vilnis, Hanwen Chen, Ilia Shumailov, Kuang-Huei Lee, Abdelrahman Abdelhamed, Meiyan Xie, Vered Cohen, Ester Hlavnova, Dan Malkin, Chawin Sitawarin, James Lottes, Pauline Coquinot, Tianli Yu, Sandeep Kumar, Jingwei Zhang, Aroma Mahendru, Zafarali Ahmed, James Martens, Tao Chen, Aviel Boag, Daiyi Peng, Coline Devin, Arseniy Klimovskiy, Mary Phuong, Danny Vainstein, Jin Xie, Bhuvana Ramabhadran, Nathan Howard, Xinxin Yu, Gitartha Goswami, Jingyu Cui, Sam Shleifer, Mario Pinto, Chih-Kuan Yeh, Ming-Hsuan Yang, Sara Javanmardi, Dan Ethier, Chace Lee, Jordi Orbay, Suyog Kotecha, Carla Bromberg, Pete Shaw, James Thornton, Adi Gerzi Rosenthal, Shane Gu, Matt Thomas, Ian Gemp, Aditya Ayyar, Asahi Ushio, Aarush Selvan, Joel Wee, Chenxi Liu, Maryam Majzoubi, Weiren Yu, Jake Abernethy, Tyler Liechty, Renke Pan, Hoang Nguyen, Qiong, Hu, Sarah Perrin, Abhinav Arora, Emily Pitler, Weiyi Wang, Kaushik Shivakumar, Flavien Prost, Ben Limonchik, Jing Wang, Yi Gao, Timothee Cour, Shyamal Buch, Huan Gui, Maria Ivanova, Philipp Neubeck, Kelvin Chan, Lucy Kim, Huizhong Chen, Naman Goyal, Da-Woon Chung, Lu Liu, Yao Su, Anastasia Petrushkina, Jiajun Shen, Armand Joulin, Yuanzhong Xu, Stein Xudong Lin, Yana Kulizhskaya, Ciprian Chelba, Shobha Vasudevan, Eli Collins, Vasilisa Bashlovkina, Tony Lu, Doug Fritz, Jongbin Park, Yanqi Zhou, Chen Su, Richard Tanburn, Mikhail Sushkov, Mitchelle Rasquinha, Jinning Li, Jennifer Prendki, Yiming Li, Pallavi LV, Shriya Sharma, Hen Fitoussi, Hui Huang, Andrew Dai, Phuong Dao, Mike Burrows, Henry Prior, Danfeng Qin, Golan Pundak, Lars Lowe Sjoesund, Art Khurshudov, Zhenkai Zhu, Albert Webson, Elizabeth Kemp, Tat Tan, Saurabh Agrawal, Susie Sargsyan, Liqun Cheng, Jim Stephan, Tom Kwiatkowski, David Reid, Arunkumar Byravan, Assaf Hurwitz Michaely, Nicolas Heess, Luowei Zhou, Sonam Goenka, Viral Carpenter, Anselm Levskaya, Bo Wang, Reed Roberts, 
Rémi Leblond, Sharat Chikkerur, Stav Ginzburg, Max Chang, Robert Riachi, Chuqiao, Xu, Zalán Borsos, Michael Pliskin, Julia Pawar, Morgane Lustman, Hannah Kirkwood, Ankit Anand, Aditi Chaudhary, Norbert Kalb, Kieran Milan, Sean Augenstein, Anna Goldie, Laurel Prince, Karthik Raman, Yanhua Sun, Vivian Xia, Aaron Cohen, Zhouyuan Huo, Josh Camp, Seher Ellis, Lukas Zilka, David Vilar Torres, Lisa Patel, Sho Arora, Betty Chan, Jonas Adler, Kareem Ayoub, Jacky Liang, Fayaz Jamil, Jiepu Jiang, Simon Baumgartner, Haitian Sun, Yael Karov, Yaroslav Akulov, Hui Zheng, Irene Cai, Claudio Fantacci, James Rubin, Alex Rav Acha, Mengchao Wang, Nina D’Souza, Rohit Sathyanarayana, Shengyang Dai, Simon Rowe, Andrey Simanovsky, Omer Goldman, Yuheng Kuang, Xiaoyue Pan, Andrew Rosenberg, Tania Rojas-Esponda, Praneet Dutta, Amy Zeng, Irina Jurenka, Greg Farquhar, Yamini Bansal, Shariq Iqbal, Becca Roelofs, Ga-Young Joung, Parker Beak, Changwan Ryu, Ryan Poplin, Yan Wu, Jean-Baptiste Alayrac, Senaka Buthpitiya, Olaf Ronneberger, Caleb Habtegebriel, Wei Li, Paul Cavallaro, Aurora Wei, Guy Bensky, Timo Denk, Harish Ganapathy, Jeff Stanway, Pratik Joshi, Francesco Bertolini, Jessica Lo, Olivia Ma, Zachary Charles, Geta Sampemane, Himanshu Sahni, Xu Chen, Harry Askham, David Gaddy, Peter Young, Jiewen Tan, Matan Eyal, Arthur Bražinskas, Li Zhong, Zhichun Wu, Mark Epstein, Kai Bailey, Andrew Hard, Kamyu Lee, Sasha Goldshtein, Alex Ruiz, Mohammed Badawi, Matthias Lochbrunner, JK Kearns, Ashley Brown, Fabio Pardo, Theophane Weber, Haichuan Yang, Pan-Pan Jiang, Berkin Akin, Zhao Fu, Marcus Wainwright, Chi Zou, Meenu Gaba, Pierre-Antoine Manzagol, Wendy Kan, Yang Song, Karina Zainullina, Rui Lin, Jeongwoo Ko, Salil Deshmukh, Apoorv Jindal, James Svensson, Divya Tyam, Heri Zhao, Christine Kaeser-Chen, Scott Baird, Pooya Moradi, Jamie Hall, Qiuchen Guo, Vincent Tsang, Bowen Liang, Fernando Pereira, Suhas Ganesh, Ivan Korotkov, Jakub Adamek, Sridhar Thiagarajan, Vinh Tran, Charles Chen, Chris Tar, 
Sanil Jain, Ishita Dasgupta, Taylan Bilal, David Reitter, Kai Zhao, Giulia Vezzani, Yasmin Gehman, Pulkit Mehta, Lauren Beltrone, Xerxes Dotiwalla, Sergio Guadarrama, Zaheer Abbas, Stefani Karp, Petko Georgiev, Chun-Sung Ferng, Marc Brockschmidt, Liqian Peng, Christoph Hirnschall, Vikas Verma, Yingying Bi, Ying Xiao, Avigail Dabush, Kelvin Xu, Phil Wallis, Randall Parker, Qifei Wang, Yang Xu, Ilkin Safarli, Dinesh Tewari, Yin Zhang, Seungyeon Kim, Andrea Gesmundo, Mackenzie Thomas, Sergey Levi, Ahmed Chowdhury, Kanishka Rao, Peter Garst, Sam Conway-Rahman, Helen Ran, Kay McKinney, Zhisheng Xiao, Wenhao Yu, Rohan Agrawal, Axel Stjerngren, Catalin Ionescu, Jingjing Chen, Vivek Sharma, Justin Chiu, Fei Liu, Ken Franko, Clayton Sanford, Xingyu Cai, Paul Michel, Sanjay Ganapathy, Jane Labanowski, Zachary Garrett, Ben Vargas, Sean Sun, Bryan Gale, Thomas Buschmann, Guillaume Desjardins, Nimesh Ghelani, Palak Jain, Mudit Verma, Chulayuth Asawaroengchai, Julian Eisenschlos, Jitendra Harlalka, Hideto Kazawa, Don Metzler, Joshua Howland, Ying Jian, Jake Ades, Viral Shah, Tynan Gangwani, Seungji Lee, Roman Ring, Steven M. 
Hernandez, Dean Reich, Amer Sinha, Ashutosh Sathe, Joe Kovac, Ashleah Gill, Ajay Kannan, Andrea D’olimpio, Martin Sevenich, Jay Whang, Been Kim, Khe Chai Sim, Jilin Chen, Jiageng Zhang, Shuba Lall, Yossi Matias, Bill Jia, Abe Friesen, Sara Nasso, Ashish Thapliyal, Bryan Perozzi, Ting Yu, Anna Shekhawat, Safeen Huda, Peter Grabowski, Eric Wang, Ashwin Sreevatsa, Hilal Dib, Mehadi Hassen, Parker Schuh, Vedrana Milutinovic, Chris Welty, Michael Quinn, Ali Shah, Bangju Wang, Gabe Barth-Maron, Justin Frye, Natalie Axelsson, Tao Zhu, Yukun Ma, Irene Giannoumis, Hanie Sedghi, Chang Ye, Yi Luan, Kevin Aydin, Bilva Chandra, Vivek Sampathkumar, Ronny Huang, Victor Lavrenko, Ahmed Eleryan, Zhi Hong, Steven Hansen, Sara Mc Carthy, Bidisha Samanta, Domagoj Ćevid, Xin Wang, Fangtao Li, Michael Voznesensky, Matt Hoffman, Andreas Terzis, Vikash Sehwag, Gil Fidel, Luheng He, Mu Cai, Yanzhang He, Alex Feng, Martin Nikoltchev, Samrat Phatale, Jason Chase, Rory Lawton, Ming Zhang, Tom Ouyang, Manuel Tragut, Mehdi Hafezi Manshadi, Arjun Narayanan, Jiaming Shen, Xu Gao, Tolga Bolukbasi, Nick Roy, Xin Li, Daniel Golovin, Liviu Panait, Zhen Qin, Guangxing Han, Thomas Anthony, Sneha Kudugunta, Viorica Patraucean, Aniket Ray, Xinyun Chen, Xiaochen Yang, Tanuj Bhatia, Pranav Talluri, Alex Morris, Andrija Ražnatović, Bethanie Brownfield, James An, Sheng Peng, Patrick Kane, Ce Zheng, Nico Duduta, Joshua Kessinger, James Noraky, Siqi Liu, Keran Rong, Petar Veličković, Keith Rush, Alex Goldin, Fanny Wei, Shiva Mohan Reddy Garlapati, Caroline Pantofaru, Okwan Kwon, Jianmo Ni, Eric Noland, Julia Di Trapani, Françoise Beaufays, Abhijit Guha Roy, Yinlam Chow, Aybuke Turker, Geoffrey Cideron, Lantao Mei, Jon Clark, Qingyun Dou, Matko Bošnjak, Ralph Leith, Yuqing Du, Amir Yazdanbakhsh, Milad Nasr, Chester Kwak, Suraj Satishkumar Sheth, Alex Kaskasoli, Ankesh Anand, Balaji Lakshminarayanan, Sammy Jerome, David Bieber, Chun-Te Chu, Alexandre Senges, Tianxiao Shen, Mukund Sridhar, Ndaba Ndebele, Benjamin 
Beyret, Shakir Mohamed, Mia Chen, Markus Freitag, Jiaxian Guo, Luyang Liu, Paul Roit, Heng Chen, Shen Yan, Tom Stone, JD Co-Reyes, Jeremy Cole, Salvatore Scellato, Shekoofeh Azizi, Hadi Hashemi, Alicia Jin, Anand Iyer, Marcella Valentine, András György, Arun Ahuja, Daniel Hernandez Diaz, Chen-Yu Lee, Nathan Clement, Weize Kong, Drew Garmon, Ishaan Watts, Kush Bhatia, Khyatti Gupta, Matt Miecnikowski, Hugo Vallet, Ankur Taly, Edward Loper, Saket Joshi, James Atwood, Jo Chick, Mark Collier, Fotis Iliopoulos, Ryan Trostle, Beliz Gunel, Ramiro Leal-Cavazos, Arnar Mar Hrafnkelsson, Michael Guzman, Xiaoen Ju, Andy Forbes, Jesse Emond, Kushal Chauhan, Ben Caine, Li Xiao, Wenjun Zeng, Alexandre Moufarek, Daniel Murphy, Maya Meng, Nitish Gupta, Felix Riedel, Anil Das, Elijah Lawal, Shashi Narayan, Tiberiu Sosea, James Swirhun, Linda Friso, Behnam Neyshabur, Jing Lu, Sertan Girgin, Michael Wunder, Edouard Yvinec, Aroonalok Pyne, Victor Carbune, Shruti Rijhwani, Yang Guo, Tulsee Doshi, Anton Briukhov, Max Bain, Ayal Hitron, Xuanhui Wang, Ashish Gupta, Ke Chen, Cosmo Du, Weiyang Zhang, Dhruv Shah, Arjun Akula, Max Dylla, Ashyana Kachra, Weicheng Kuo, Tingting Zou, Lily Wang, Luyao Xu, Jifan Zhu, Justin Snyder, Sachit Menon, Orhan Firat, Igor Mordatch, Yuan Yuan, Natalia Ponomareva, Rory Blevins, Lawrence Moore, Weijun Wang, Phil Chen, Martin Scholz, Artur Dwornik, Jason Lin, Sicheng Li, Diego Antognini, Te I, Xiaodan Song, Matt Miller, Uday Kalra, Adam Raveret, Oscar Akerlund, Felix Wu, Andrew Nystrom, Namrata Godbole, Tianqi Liu, Hannah DeBalsi, Jewel Zhao, Buhuang Liu, Avi Caciularu, Lauren Lax, Urvashi Khandelwal, Victoria Langston, Eric Bailey, Silvio Lattanzi, Yufei Wang, Neel Kovelamudi, Sneha Mondal, Guru Guruganesh, Nan Hua, Ofir Roval, Paweł Wesołowski, Rishikesh Ingale, Jonathan Halcrow, Tim Sohn, Christof Angermueller, Bahram Raad, Eli Stickgold, Eva Lu, Alec Kosik, Jing Xie, Timothy Lillicrap, Austin Huang, Lydia Lihui Zhang, Dominik Paulus, Clement Farabet, Alex 
Wertheim, Bing Wang, Rishabh Joshi, Chu-ling Ko, Yonghui Wu, Shubham Agrawal, Lily Lin, XiangHai Sheng, Peter Sung, Tyler Breland-King, Christina Butterfield, Swapnil Gawde, Sumeet Singh, Qiao Zhang, Raj Apte, Shilpa Shetty, Adrian Hutter, Tao Li, Elizabeth Salesky, Federico Lebron, Jonni Kanerva, Michela Paganini, Arthur Nguyen, Rohith Vallu, Jan-Thorsten Peter, Sarmishta Velury, David Kao, Jay Hoover, Anna Bortsova, Colton Bishop, Shoshana Jakobovits, Alessandro Agostini, Alekh Agarwal, Chang Liu, Charles Kwong, Sasan Tavakkol, Ioana Bica, Alex Greve, Anirudh GP, Jake Marcus, Le Hou, Tom Duerig, Rivka Moroshko, Dave Lacey, Andy Davis, Julien Amelot, Guohui Wang, Frank Kim, Theofilos Strinopoulos, Hui Wan, Charline Le Lan, Shankar Krishnan, Haotian Tang, Peter Humphreys, Junwen Bai, Idan Heimlich Shtacher, Diego Machado, Chenxi Pang, Ken Burke, Dangyi Liu, Renga Aravamudhan, Yue Song, Ed Hirst, Abhimanyu Singh, Brendan Jou, Liang Bai, Francesco Piccinno, Chuyuan Kelly Fu, Robin Alazard, Barak Meiri, Daniel Winter, Charlie Chen, Mingda Zhang, Jens Heitkaemper, John Lambert, Jinhyuk Lee, Alexander Frömmgen, Sergey Rogulenko, Pranav Nair, Paul Niemczyk, Anton Bulyenov, Bibo Xu, Hadar Shemtov, Morteza Zadimoghaddam, Serge Toropov, Mateo Wirth, Hanjun Dai, Sreenivas Gollapudi, Daniel Zheng, Alex Kurakin, Chansoo Lee, Kalesha Bullard, Nicolas Serrano, Ivana Balazevic, Yang Li, Johan Schalkwyk, Mark Murphy, Mingyang Zhang, Kevin Sequeira, Romina Datta, Nishant Agrawal, Charles Sutton, Nithya Attaluri, Mencher Chiang, Wael Farhan, Gregory Thornton, Kate Lin, Travis Choma, Hung Nguyen, Kingshuk Dasgupta, Dirk Robinson, Iulia Comşa, Michael Riley, Arjun Pillai, Basil Mustafa, Ben Golan, Amir Zandieh, Jean-Baptiste Lespiau, Billy Porter, David Ross, Sujeevan Rajayogam, Mohit Agarwal, Subhashini Venugopalan, Bobak Shahriari, Qiqi Yan, Hao Xu, Taylor Tobin, Pavel Dubov, Hongzhi Shi, Adrià Recasens, Anton Kovsharov, Sebastian Borgeaud, Lucio Dery, Shanthal Vasanth, Elena 
Gribovskaya, Linhai Qiu, Mahdis Mahdieh, Wojtek Skut, Elizabeth Nielsen, CJ Zheng, Adams Yu, Carrie Grimes Bostock, Shaleen Gupta, Aaron Archer, Chris Rawles, Elinor Davies, Alexey Svyatkovskiy, Tomy Tsai, Yoni Halpern, Christian Reisswig, Bartek Wydrowski, Bo Chang, Joan Puigcerver, Mor Hazan Taege, Jian Li, Eva Schnider, Xinjian Li, Dragos Dena, Yunhan Xu, Umesh Telang, Tianze Shi, Heiga Zen, Kyle Kastner, Yeongil Ko, Neesha Subramaniam, Aviral Kumar, Pete Blois, Zhuyun Dai, John Wieting, Yifeng Lu, Yoel Zeldes, Tian Xie, Anja Hauth, Alexandru Ţifrea, Yuqi Li, Sam El-Husseini, Dan Abolafia, Howard Zhou, Wen Ding, Sahra Ghalebikesabi, Carlos Guía, Andrii Maksai, Ágoston Weisz, Sercan Arik, Nick Sukhanov, Aga Świetlik, Xuhui Jia, Luo Yu, Weiyue Wang, Mark Brand, Dawn Bloxwich, Sean Kirmani, Zhe Chen, Alec Go, Pablo Sprechmann, Nithish Kannen, Alen Carin, Paramjit Sandhu, Isabel Edkins, Leslie Nooteboom, Jai Gupta, Loren Maggiore, Javad Azizi, Yael Pritch, Pengcheng Yin, Mansi Gupta, Danny Tarlow, Duncan Smith, Desi Ivanov, Mohammad Babaeizadeh, Ankita Goel, Satish Kambala, Grace Chu, Matej Kastelic, Michelle Liu, Hagen Soltau, Austin Stone, Shivani Agrawal, Min Kim, Kedar Soparkar, Srinivas Tadepalli, Oskar Bunyan, Rachel Soh, Arvind Kannan, DY Kim, Blake JianHang Chen, Afief Halumi, Sudeshna Roy, Yulong Wang, Olcan Sercinoglu, Gena Gibson, Sijal Bhatnagar, Motoki Sano, Daniel von Dincklage, Qingchun Ren, Blagoj Mitrevski, Mirek Olšák, Jennifer She, Carl Doersch, Jilei, Wang, Bingyuan Liu, Qijun Tan, Tamar Yakar, Tris Warkentin, Alex Ramirez, Carl Lebsack, Josh Dillon, Rajiv Mathews, Tom Cobley, Zelin Wu, Zhuoyuan Chen, Jon Simon, Swaroop Nath, Tara Sainath, Alexei Bendebury, Ryan Julian, Bharath Mankalale, Daria Ćurko, Paulo Zacchello, Adam R. 
Brown, Kiranbir Sodhia, Heidi Howard, Sergi Caelles, Abhinav Gupta, Gareth Evans, Anna Bulanova, Lesley Katzen, Roman Goldenberg, Anton Tsitsulin, Joe Stanton, Benoit Schillings, Vitaly Kovalev, Corey Fry, Rushin Shah, Kuo Lin, Shyam Upadhyay, Cheng Li, Soroush Radpour, Marcello Maggioni, Jing Xiong, Lukas Haas, Jenny Brennan, Aishwarya Kamath, Nikolay Savinov, Arsha Nagrani, Trevor Yacovone, Ryan Kappedal, Kostas Andriopoulos, Li Lao, YaGuang Li, Grigory Rozhdestvenskiy, Kazuma Hashimoto, Andrew Audibert, Sophia Austin, Daniel Rodriguez, Anian Ruoss, Garrett Honke, Deep Karkhanis, Xi Xiong, Qing Wei, James Huang, Zhaoqi Leng, Vittal Premachandran, Stan Bileschi, Georgios Evangelopoulos, Thomas Mensink, Jay Pavagadhi, Denis Teplyashin, Paul Chang, Linting Xue, Garrett Tanzer, Sally Goldman, Kaushal Patel, Shixin Li, Jeremy Wiesner, Ivy Zheng, Ian Stewart-Binks, Jie Han, Zhi Li, Liangchen Luo, Karel Lenc, Mario Lučić, Fuzhao Xue, Ryan Mullins, Alexey Guseynov, Chung-Ching Chang, Isaac Galatzer-Levy, Adam Zhang, Garrett Bingham, Grace Hu, Ale Hartman, Yue Ma, Jordan Griffith, Alex Irpan, Carey Radebaugh, Summer Yue, Lijie Fan, Victor Ungureanu, Christina Sorokin, Hannah Teufel, Peiran Li, Rohan Anil, Dimitris Paparas, Todd Wang, Chu-Cheng Lin, Hui Peng, Megan Shum, Goran Petrovic, Demetra Brady, Richard Nguyen, Klaus Macherey, Zhihao Li, Harman Singh, Madhavi Yenugula, Mariko Iinuma, Xinyi Chen, Kavya Kopparapu, Alexey Stern, Shachi Dave, Chandu Thekkath, Florence Perot, Anurag Kumar, Fangda Li, Yang Xiao, Matthew Bilotti, Mohammad Hossein Bateni, Isaac Noble, Lisa Lee, Amelio Vázquez-Reina, Julian Salazar, Xiaomeng Yang, Boyu Wang, Ela Gruzewska, Anand Rao, Sindhu Raghuram, Zheng Xu, Eyal Ben-David, Jieru Mei, Sid Dalmia, Zhaoyi Zhang, Yuchen Liu, Gagan Bansal, Helena Pankov, Steven Schwarcz, Andrea Burns, Christine Chan, Sumit Sanghai, Ricky Liang, Ethan Liang, Antoine He, Amy Stuart, Arun Narayanan, Yukun Zhu, Christian Frank, Bahar Fatemi, Amit Sabne, Oran Lang, 
Indro Bhattacharya, Shane Settle, Maria Wang, Brendan McMahan, Andrea Tacchetti, Livio Baldini Soares, Majid Hadian, Serkan Cabi, Timothy Chung, Nikita Putikhin, Gang Li, Jeremy Chen, Austin Tarango, Henryk Michalewski, Mehran Kazemi, Hussain Masoom, Hila Sheftel, Rakesh Shivanna, Archita Vadali, Ramona Comanescu, Doug Reid, Joss Moore, Arvind Neelakantan, Michaël Sander, Jonathan Herzig, Aviv Rosenberg, Mostafa Dehghani, JD Choi, Michael Fink, Reid Hayes, Eric Ge, Shitao Weng, Chia-Hua Ho, John Karro, Kalpesh Krishna, Lam Nguyen Thiet, Amy Skerry-Ryan, Daniel Eppens, Marco Andreetto, Navin Sarma, Silvano Bonacina, Burcu Karagol Ayan, Megha Nawhal, Zhihao Shan, Mike Dusenberry, Shantanu Thakoor, Sagar Gubbi, Duc Dung Nguyen, Reut Tsarfaty, Samuel Albanie, Jovana Mitrović, Meet Gandhi, Bo-Juen Chen, Alessandro Epasto, Georgi Stephanov, Ye Jin, Samuel Gehman, Aida Amini, Jack Weber, Feryal Behbahani, Shawn Xu, Miltos Allamanis, Xi Chen, Myle Ott, Claire Sha, Michal Jastrzebski, Hang Qi, David Greene, Xinyi Wu, Abodunrinwa Toki, Daniel Vlasic, Jane Shapiro, Ragha Kotikalapudi, Zhe Shen, Takaaki Saeki, Sirui Xie, Albin Cassirer, Shikhar Bharadwaj, Tatsuya Kiyono, Srinadh Bhojanapalli, Elan Rosenfeld, Sam Ritter, Jieming Mao, João Gabriel Oliveira, Zoltan Egyed, Bernd Bandemer, Emilio Parisotto, Keisuke Kinoshita, Juliette Pluto, Petros Maniatis, Steve Li, Yaohui Guo, Golnaz Ghiasi, Jean Tarbouriech, Srimon Chatterjee, Julie Jin, Katrina, Xu, Jennimaria Palomaki, Séb Arnold, Madhavi Sewak, Federico Piccinini, Mohit Sharma, Ben Albrecht, Sean Purser-haskell, Ashwin Vaswani, Chongyan Chen, Matheus Wisniewski, Qin Cao, John Aslanides, Nguyet Minh Phu, Maximilian Sieb, Lauren Agubuzu, Anne Zheng, Daniel Sohn, Marco Selvi, Anders Andreassen, Krishan Subudhi, Prem Eruvbetine, Oliver Woodman, Tomas Mery, Sebastian Krause, Xiaoqi Ren, Xiao Ma, Jincheng Luo, Dawn Chen, Wei Fan, Henry Griffiths, Christian Schuler, Alice Li, Shujian Zhang, Jean-Michel Sarr, Shixin Luo, Riccardo 
Patana, Matthew Watson, Dani Naboulsi, Michael Collins, Sailesh Sidhwani, Emiel Hoogeboom, Sharon Silver, Emily Caveness, Xiaokai Zhao, Mikel Rodriguez, Maxine Deines, Libin Bai, Patrick Griffin, Marco Tagliasacchi, Emily Xue, Spandana Raj Babbula, Bo Pang, Nan Ding, Gloria Shen, Elijah Peake, Remi Crocker, Shubha Srinivas Raghvendra, Danny Swisher, Woohyun Han, Richa Singh, Ling Wu, Vladimir Pchelin, Tsendsuren Munkhdalai, Dana Alon, Geoff Bacon, Efren Robles, Jannis Bulian, Melvin Johnson, George Powell, Felipe Tiengo Ferreira, Yaoyiran Li, Frederik Benzing, Mihajlo Velimirović, Hubert Soyer, William Kong, Tony, Nguyên, Zhen Yang, Jeremiah Liu, Joost van Amersfoort, Daniel Gillick, Baochen Sun, Nathalie Rauschmayr, Katie Zhang, Serena Zhan, Tao Zhou, Alexey Frolov, Chengrun Yang, Denis Vnukov, Louis Rouillard, Hongji Li, Amol Mandhane, Nova Fallen, Rajesh Venkataraman, Clara Huiyi Hu, Jennifer Brennan, Jenny Lee, Jerry Chang, Martin Sundermeyer, Zhufeng Pan, Rosemary Ke, Simon Tong, Alex Fabrikant, William Bono, Jindong Gu, Ryan Foley, Yiran Mao, Manolis Delakis, Dhruva Bhaswar, Roy Frostig, Nick Li, Avital Zipori, Cath Hope, Olga Kozlova, Swaroop Mishra, Josip Djolonga, Craig Schiff, Majd Al Merey, Eleftheria Briakou, Peter Morgan, Andy Wan, Avinatan Hassidim, RJ Skerry-Ryan, Kuntal Sengupta, Mary Jasarevic, Praveen Kallakuri, Paige Kunkle, Hannah Brennan, Tom Lieber, Hassan Mansoor, Julian Walker, Bing Zhang, Annie Xie, Goran Žužić, Adaeze Chukwuka, Alex Druinsky, Donghyun Cho, Rui Yao, Ferjad Naeem, Shiraz Butt, Eunyoung Kim, Zhipeng Jia, Mandy Jordan, Adam Lelkes, Mark Kurzeja, Sophie Wang, James Zhao, Andrew Over, Abhishek Chakladar, Marcel Prasetya, Neha Jha, Sriram Ganapathy, Yale Cong, Prakash Shroff, Carl Saroufim, Sobhan Miryoosefi, Mohamed Hammad, Tajwar Nasir, Weijuan Xi, Yang Gao, Young Maeng, Ben Hora, Chin-Yi Cheng, Parisa Haghani, Yoad Lewenberg, Caden Lu, Martin Matysiak, Naina Raisinghani, Huiyu Wang, Lexi Baugher, Rahul Sukthankar, Minh Giang, 
John Schultz, Noah Fiedel, Minmin Chen, Cheng-Chun Lee, Tapomay Dey, Hao Zheng, Shachi Paul, Celine Smith, Andy Ly, Yicheng Wang, Rishabh Bansal, Bartek Perz, Susanna Ricco, Stasha Blank, Vaishakh Keshava, Deepak Sharma, Marvin Chow, Kunal Lad, Komal Jalan, Simon Osindero, Craig Swanson, Jacob Scott, Anastasija Ilić, Xiaowei Li, Siddhartha Reddy Jonnalagadda, Afzal Shama Soudagar, Yan Xiong, Bat-Orgil Batsaikhan, Daniel Jarrett, Naveen Kumar, Maulik Shah, Matt Lawlor, Austin Waters, Mark Graham, Rhys May, Sabela Ramos, Sandra Lefdal, Zeynep Cankara, Nacho Cano, Brendan O’Donoghue, Jed Borovik, Frederick Liu, Jordan Grimstad, Mahmoud Alnahlawi, Katerina Tsihlas, Tom Hudson, Nikolai Grigorev, Yiling Jia, Terry Huang, Tobenna Peter Igwe, Sergei Lebedev, Xiaodan Tang, Igor Krivokon, Frankie Garcia, Melissa Tan, Eric Jia, Peter Stys, Shikhar Vashishth, Yu Liang, Balaji Venkatraman, Chenjie Gu, Anastasios Kementsietsidis, Chen Zhu, Junehyuk Jung, Yunfei Bai, Mohammad Javad Hosseini, Faruk Ahmed, Aditya Gupta, Xin Yuan, Shereen Ashraf, Shitij Nigam, Gautam Vasudevan, Pranjal Awasthi, Adi Mayrav Gilady, Zelda Mariet, Ramy Eskander, Haiguang Li, Hexiang Hu, Guillermo Garrido, Philippe Schlattner, George Zhang, Rohun Saxena, Petar Dević, Kritika Muralidharan, Ashwin Murthy, Yiqian Zhou, Min Choi, Arissa Wongpanich, Zhengdong Wang, Premal Shah, Yuntao Xu, Yiling Huang, Stephen Spencer, Alice Chen, James Cohan, Junjie Wang, Jonathan Tompson, Junru Wu, Ruba Haroun, Haiqiong Li, Blanca Huergo, Fan Yang, Tongxin Yin, James Wendt, Michael Bendersky, Rahma Chaabouni, Javier Snaider, Johan Ferret, Abhishek Jindal, Tara Thompson, Andrew Xue, Will Bishop, Shubham Milind Phal, Archit Sharma, Yunhsuan Sung, Prabakar Radhakrishnan, Mo Shomrat, Reeve Ingle, Roopali Vij, Justin Gilmer, Mihai Dorin Istin, Sam Sobell, Yang Lu, Emily Nottage, Dorsa Sadigh, Jeremiah Willcock, Tingnan Zhang, Steve Xu, Sasha Brown, Katherine Lee, Gary Wang, Yun Zhu, Yi Tay, Cheolmin Kim, Audrey Gutierrez, 
Abhanshu Sharma, Yongqin Xian, Sungyong Seo, Claire Cui, Elena Pochernina, Cip Baetu, Krzysztof Jastrzębski, Mimi Ly, Mohamed Elhawaty, Dan Suh, Eren Sezener, Pidong Wang, Nancy Yuen, George Tucker, Jiahao Cai, Zuguang Yang, Cindy Wang, Alex Muzio, Hai Qian, Jae Yoo, Derek Lockhart, Kevin R. McKee, Mandy Guo, Malika Mehrotra, Artur Mendonça, Sanket Vaibhav Mehta, Sherry Ben, Chetan Tekur, Jiaqi Mu, Muye Zhu, Victoria Krakovna, Hongrae Lee, AJ Maschinot, Sébastien Cevey, HyunJeong Choe, Aijun Bai, Hansa Srinivasan, Derek Gasaway, Nick Young, Patrick Siegler, Dan Holtmann-Rice, Vihari Piratla, Kate Baumli, Roey Yogev, Alex Hofer, Hado van Hasselt, Svetlana Grant, Yuri Chervonyi, David Silver, Andrew Hogue, Ayushi Agarwal, Kathie Wang, Preeti Singh, Four Flynn, Josh Lipschultz, Robert David, Lizzetth Bellot, Yao-Yuan Yang, Long Le, Filippo Graziano, Kate Olszewska, Kevin Hui, Akanksha Maurya, Nikos Parotsidis, Weijie Chen, Tayo Oguntebi, Joe Kelley, Anirudh Baddepudi, Johannes Mauerer, Gregory Shaw, Alex Siegman, Lin Yang, Shravya Shetty, Subhrajit Roy, Yunting Song, Wojciech Stokowiec, Ryan Burnell, Omkar Savant, Robert Busa-Fekete, Jin Miao, Samrat Ghosh, Liam MacDermed, Phillip Lippe, Mikhail Dektiarev, Zach Behrman, Fabian Mentzer, Kelvin Nguyen, Meng Wei, Siddharth Verma, Chris Knutsen, Sudeep Dasari, Zhipeng Yan, Petr Mitrichev, Xingyu Wang, Virat Shejwalkar, Jacob Austin, Srinivas Sunkara, Navneet Potti, Yan Virin, Christian Wright, Gaël Liu, Oriana Riva, Etienne Pot, Greg Kochanski, Quoc Le, Gargi Balasubramaniam, Arka Dhar, Yuguo Liao, Adam Bloniarz, Divyansh Shukla, Elizabeth Cole, Jong Lee, Sheng Zhang, Sushant Kafle, Siddharth Vashishtha, Parsa Mahmoudieh, Grace Chen, Raphael Hoffmann, Pranesh Srinivasan, Agustin Dal Lago, Yoav Ben Shalom, Zi Wang, Michael Elabd, Anuj Sharma, Junhyuk Oh, Suraj Kothawade, Maigo Le, Marianne Monteiro, Shentao Yang, Kaiz Alarakyia, Robert Geirhos, Diana Mincu, Håvard Garnes, Hayato Kobayashi, Soroosh Mariooryad, Kacper 
Krasowiak, Zhixin, Lai, Shibl Mourad, Mingqiu Wang, Fan Bu, Ophir Aharoni, Guanjie Chen, Abhimanyu Goyal, Vadim Zubov, Ankur Bapna, Elahe Dabir, Nisarg Kothari, Kay Lamerigts, Nicola De Cao, Jeremy Shar, Christopher Yew, Nitish Kulkarni, Dre Mahaarachchi, Mandar Joshi, Zhenhai Zhu, Jared Lichtarge, Yichao Zhou, Hannah Muckenhirn, Vittorio Selo, Oriol Vinyals, Peter Chen, Anthony Brohan, Vaibhav Mehta, Sarah Cogan, Ruth Wang, Ty Geri, Wei-Jen Ko, Wei Chen, Fabio Viola, Keshav Shivam, Lisa Wang, Madeleine Clare Elish, Raluca Ada Popa, Sébastien Pereira, Jianqiao Liu, Raphael Koster, Donnie Kim, Gufeng Zhang, Sayna Ebrahimi, Partha Talukdar, Yanyan Zheng, Petra Poklukar, Ales Mikhalap, Dale Johnson, Anitha Vijayakumar, Mark Omernick, Matt Dibb, Ayush Dubey, Qiong Hu, Apurv Suman, Vaibhav Aggarwal, Ilya Kornakov, Fei Xia, Wing Lowe, Alexey Kolganov, Ted Xiao, Vitaly Nikolaev, Steven Hemingray, Bonnie Li, Joana Iljazi, Mikołaj Rybiński, Ballie Sandhu, Peggy Lu, Thang Luong, Rodolphe Jenatton, Vineetha Govindaraj, Hui, Li, Gabriel Dulac-Arnold, Wonpyo Park, Henry Wang, Abhinit Modi, Jean Pouget-Abadie, Kristina Greller, Rahul Gupta, Robert Berry, Prajit Ramachandran, Jinyu Xie, Liam McCafferty, Jianling Wang, Kilol Gupta, Hyeontaek Lim, Blaž Bratanič, Andy Brock, Ilia Akolzin, Jim Sproch, Dan Karliner, Duhyeon Kim, Adrian Goedeckemeyer, Noam Shazeer, Cordelia Schmid, Daniele Calandriello, Parul Bhatia, Krzysztof Choromanski, Ceslee Montgomery, Dheeru Dua, Ana Ramalho, Helen King, Yue Gao, Lynn Nguyen, David Lindner, Divya Pitta, Oleaser Johnson, Khalid Salama, Diego Ardila, Michael Han, Erin Farnese, Seth Odoom, Ziyue Wang, Xiangzhuo Ding, Norman Rink, Ray Smith, Harshal Tushar Lehri, Eden Cohen, Neera Vats, Tong He, Parthasarathy Gopavarapu, Adam Paszke, Miteyan Patel, Wouter Van Gansbeke, Lucia Loher, Luis Castro, Maria Voitovich, Tamara von Glehn, Nelson George, Simon Niklaus, Zach Eaton-Rosen, Nemanja Rakićević, Erik Jue, Sagi Perel, Carrie Zhang, Yuval Bahat, 
Angéline Pouget, Zhi Xing, Fantine Huot, Ashish Shenoy, Taylor Bos, Vincent Coriou, Bryan Richter, Natasha Noy, Yaqing Wang, Santiago Ontanon, Siyang Qin, Gleb Makarchuk, Demis Hassabis, Zhuowan Li, Mandar Sharma, Kumaran Venkatesan, Iurii Kemaev, Roxanne Daniel, Shiyu Huang, Saloni Shah, Octavio Ponce, Warren, Chen, Manaal Faruqui, Jialin Wu, Slavica Andačić, Szabolcs Payrits, Daniel McDuff, Tom Hume, Yuan Cao, MH Tessler, Qingze Wang, Yinan Wang, Ivor Rendulic, Eirikur Agustsson, Matthew Johnson, Tanya Lando, Andrew Howard, Sri Gayatri Sundara Padmanabhan, Mayank Daswani, Andrea Banino, Michael Kilgore, Jonathan Heek, Ziwei Ji, Alvaro Caceres, Conglong Li, Nora Kassner, Alexey Vlaskin, Zeyu Liu, Alex Grills, Yanhan Hou, Roykrong Sukkerd, Gowoon Cheon, Nishita Shetty, Larisa Markeeva, Piotr Stanczyk, Tejas Iyer, Yuan Gong, Shawn Gao, Keerthana Gopalakrishnan, Tim Blyth, Malcolm Reynolds, Avishkar Bhoopchand, Misha Bilenko, Dero Gharibian, Vicky Zayats, Aleksandra Faust, Abhinav Singh, Min Ma, Hongyang Jiao, Sudheendra Vijayanarasimhan, Lora Aroyo, Vikas Yadav, Sarah Chakera, Ashwin Kakarla, Vilobh Meshram, Karol Gregor, Gabriela Botea, Evan Senter, Dawei Jia, Geza Kovacs, Neha Sharma, Sebastien Baur, Kai Kang, Yifan He, Lin Zhuo, Marija Kostelac, Itay Laish, Songyou Peng, Louis O’Bryan, Daniel Kasenberg, Girish Ramchandra Rao, Edouard Leurent, Biao Zhang, Sage Stevens, Ana Salazar, Ye Zhang, Ivan Lobov, Jake Walker, Allen Porter, Morgan Redshaw, Han Ke, Abhishek Rao, Alex Lee, Hoi Lam, Michael Moffitt, Jaeyoun Kim, Siyuan Qiao, Terry Koo, Robert Dadashi, Xinying Song, Mukund Sundararajan, Peng Xu, Chizu Kawamoto, Yan Zhong, Clara Barbu, Apoorv Reddy, Mauro Verzetti, Leon Li, George Papamakarios, Hanna Klimczak-Plucińska, Mary Cassin, Koray Kavukcuoglu, Rigel Swavely, Alain Vaucher, Jeffrey Zhao, Ross Hemsley, Michael Tschannen, Heming Ge, Gaurav Menghani, Yang Yu, Natalie Ha, Wei He, Xiao Wu, Maggie Song, Rachel Sterneck, Stefan Zinke, Dan A. 
Calian, Annie Marsden, Alejandro Cruzado Ruiz, Matteo Hessel, Almog Gueta, Benjamin Lee, Brian Farris, Manish Gupta, Yunjie Li, Mohammad Saleh, Vedant Misra, Kefan Xiao, Piermaria Mendolicchio, Gavin Buttimore, Varvara Krayvanova, Nigamaa Nayakanti, Matthew Wiethoff, Yash Pande, Azalia Mirhoseini, Ni Lao, Jasmine Liu, Yiqing Hua, Angie Chen, Yury Malkov, Dmitry Kalashnikov, Shubham Gupta, Kartik Audhkhasi, Yuexiang Zhai, Sudhindra Kopalle, Prateek Jain, Eran Ofek, Clemens Meyer, Khuslen Baatarsukh, Hana Strejček, Jun Qian, James Freedman, Ricardo Figueira, Michal Sokolik, Olivier Bachem, Raymond Lin, Dia Kharrat, Chris Hidey, Pingmei Xu, Dennis Duan, Yin Li, Muge Ersoy, Richard Everett, Kevin Cen, Rebeca Santamaria-Fernandez, Amir Taubenfeld, Ian Mackinnon, Linda Deng, Polina Zablotskaia, Shashank Viswanadha, Shivanker Goel, Damion Yates, Yunxiao Deng, Peter Choy, Mingqing Chen, Abhishek Sinha, Alex Mossin, Yiming Wang, Arthur Szlam, Susan Hao, Paul Kishan Rubenstein, Metin Toksoz-Exley, Miranda Aperghis, Yin Zhong, Junwhan Ahn, Michael Isard, Olivier Lacombe, Florian Luisier, Chrysovalantis Anastasiou, Yogesh Kalley, Utsav Prabhu, Emma Dunleavy, Shaan Bijwadia, Justin Mao-Jones, Kelly Chen, Rama Pasumarthi, Emily Wood, Adil Dostmohamed, Nate Hurley, Jiri Simsa, Alicia Parrish, Mantas Pajarskas, Matt Harvey, Ondrej Skopek, Yony Kochinski, Javier Rey, Verena Rieser, Denny Zhou, Sun Jae Lee, Trilok Acharya, Guowang Li, Joe Jiang, Xiaofan Zhang, Bryant Gipson, Ethan Mahintorabi, Marco Gelmi, Nima Khajehnouri, Angel Yeh, Kayi Lee, Loic Matthey, Leslie Baker, Trang Pham, Han Fu, Alex Pak, Prakhar Gupta, Cristina Vasconcelos, Adam Sadovsky, Brian Walker, Sissie Hsiao, Patrik Zochbauer, Andreea Marzoca, Noam Velan, Junhao Zeng, Gilles Baechler, Danny Driess, Divya Jain, Yanping Huang, Lizzie Tao, John Maggs, Nir Levine, Jon Schneider, Erika Gemzer, Samuel Petit, Shan Han, Zach Fisher, Dustin Zelle, Courtney Biles, Eugene Ie, Asya Fadeeva, Casper Liu, Juliana Vicente 
Franco, Adrian Collister, Hao Zhang, Renshen Wang, Ruizhe Zhao, Leandro Kieliger, Kurt Shuster, Rui Zhu, Boqing Gong, Lawrence Chan, Ruoxi Sun, Sujoy Basu, Roland Zimmermann, Jamie Hayes, Abhishek Bapna, Jasper Snoek, Weel Yang, Puranjay Datta, Jad Al Abdallah, Kevin Kilgour, Lu Li, SQ Mah, Yennie Jun, Morgane Rivière, Abhijit Karmarkar, Tammo Spalink, Tao Huang, Lucas Gonzalez, Duc-Hieu Tran, Averi Nowak, John Palowitch, Martin Chadwick, Ellie Talius, Harsh Mehta, Thibault Sellam, Philipp Fränken, Massimo Nicosia, Kyle He, Aditya Kini, David Amos, Sugato Basu, Harrison Jobe, Eleni Shaw, Qiantong Xu, Colin Evans, Daisuke Ikeda, Chaochao Yan, Larry Jin, Lun Wang, Sachin Yadav, Ilia Labzovsky, Ramesh Sampath, Ada Ma, Candice Schumann, Aditya Siddhant, Rohin Shah, John Youssef, Rishabh Agarwal, Natalie Dabney, Alessio Tonioni, Moran Ambar, Jing Li, Isabelle Guyon, Benny Li, David Soergel, Boya Fang, Georgi Karadzhov, Cristian Udrescu, Trieu Trinh, Vikas Raunak, Seb Noury, Dee Guo, Sonal Gupta, Mara Finkelstein, Denis Petek, Lihao Liang, Greg Billock, Pei Sun, David Wood, Yiwen Song, Xiaobin Yu, Tatiana Matejovicova, Regev Cohen, Kalyan Andra, David D’Ambrosio, Zhiwei Deng, Vincent Nallatamby, Ebrahim Songhori, Rumen Dangovski, Andrew Lampinen, Pankil Botadra, Adam Hillier, Jiawei Cao, Nagabhushan Baddi, Adhi Kuncoro, Toshihiro Yoshino, Ankit Bhagatwala, Marcáurelio Ranzato, Rylan Schaeffer, Tianlin Liu, Shuai Ye, Obaid Sarvana, John Nham, Chenkai Kuang, Isabel Gao, Jinoo Baek, Shubham Mittal, Ayzaan Wahid, Anita Gergely, Bin Ni, Josh Feldman, Carrie Muir, Pascal Lamblin, Wolfgang Macherey, Ethan Dyer, Logan Kilpatrick, Víctor Campos, Mukul Bhutani, Stanislav Fort, Yanif Ahmad, Aliaksei Severyn, Kleopatra Chatziprimou, Oleksandr Ferludin, Mason Dimarco, Aditya Kusupati, Joe Heyward, Dan Bahir, Kevin Villela, Katie Millican, Dror Marcus, Sanaz Bahargam, Caglar Unlu, Nicholas Roth, Zichuan Wei, Siddharth Gopal, Deepanway Ghoshal, Edward Lee, Sharon Lin, Jennie Lees, 
Dayeong Lee, Anahita Hosseini, Connie Fan, Seth Neel, Marcus Wu, Yasemin Altun, Honglong Cai, Enrique Piqueras, Josh Woodward, Alessandro Bissacco, Salem Haykal, Mahyar Bordbar, Prasha Sundaram, Sarah Hodkinson, Daniel Toyama, George Polovets, Austin Myers, Anu Sinha, Tomer Levinboim, Kashyap Krishnakumar, Rachita Chhaparia, Tatiana Sholokhova, Nitesh Bharadwaj Gundavarapu, Ganesh Jawahar, Haroon Qureshi, Jieru Hu, Nikola Momchev, Matthew Rahtz, Renjie Wu, Aishwarya P S, Kedar Dhamdhere, Meiqi Guo, Umang Gupta, Ali Eslami, Mariano Schain, Michiel Blokzijl, David Welling, Dave Orr, Levent Bolelli, Nicolas Perez-Nieves, Mikhail Sirotenko, Aman Prasad, Arjun Kar, Borja De Balle Pigem, Tayfun Terzi, Gellért Weisz, Dipankar Ghosh, Aditi Mavalankar, Dhruv Madeka, Kaspar Daugaard, Hartwig Adam, Viraj Shah, Dana Berman, Maggie Tran, Steven Baker, Ewa Andrejczuk, Grishma Chole, Ganna Raboshchuk, Mahdi Mirzazadeh, Thais Kagohara, Shimu Wu, Christian Schallhart, Bernett Orlando, Chen Wang, Alban Rrustemi, Hao Xiong, Hao Liu, Arpi Vezer, Nolan Ramsden, Shuo-yiin Chang, Sidharth Mudgal, Yan Li, Nino Vieillard, Yedid Hoshen, Farooq Ahmad, Ambrose Slone, Amy Hua, Natan Potikha, Mirko Rossini, Jon Stritar, Sushant Prakash, Zifeng Wang, Xuanyi Dong, Alireza Nazari, Efrat Nehoran, Kaan Tekelioglu, Yinxiao Li, Kartikeya Badola, Tom Funkhouser, Yuanzhen Li, Varun Yerram, Ramya Ganeshan, Daniel Formoso, Karol Langner, Tian Shi, Huijian Li, Yumeya Yamamori, Amayika Panda, Alaa Saade, Angelo Scorza Scarpati, Chris Breaux, CJ Carey, Zongwei Zhou, Cho-Jui Hsieh, Sophie Bridgers, Alena Butryna, Nishesh Gupta, Vaibhav Tulsyan, Sanghyun Woo, Evgenii Eltyshev, Will Grathwohl, Chanel Parks, Seth Benjamin, Rina Panigrahy, Shenil Dodhia, Daniel De Freitas, Chris Sauer, Will Song, Ferran Alet, Jackson Tolins, Cosmin Paduraru, Xingyi Zhou, Brian Albert, Zizhao Zhang, Lei Shu, Mudit Bansal, Sarah Nguyen, Amir Globerson, Owen Xiao, James Manyika, Tom Hennigan, Rong Rong, Josip Matak, Anton Bakalov, 
Ankur Sharma, Danila Sinopalnikov, Andrew Pierson, Stephen Roller, Geoff Brown, Mingcen Gao, Toshiyuki Fukuzawa, Amin Ghafouri, Kenny Vassigh, Iain Barr, Zhicheng Wang, Anna Korsun, Rajesh Jayaram, Lijie Ren, Tim Zaman, Samira Khan, Yana Lunts, Dan Deutsch, Dave Uthus, Nitzan Katz, Masha Samsikova, Amr Khalifa, Nikhil Sethi, Jiao Sun, Luming Tang, Uri Alon, Xianghong Luo, Dian Yu, Abhishek Nayyar, Bryce Petrini, Will Truong, Vincent Hellendoorn, Nikolai Chinaev, Chris Alberti, Wei Wang, Jingcao Hu, Vahab Mirrokni, Ananth Balashankar, Avia Aharon, Aahil Mehta, Ahmet Iscen, Joseph Kready, Lucas Manning, Anhad Mohananey, Yuankai Chen, Anshuman Tripathi, Allen Wu, Igor Petrovski, Dawsen Hwang, Martin Baeuml, Shreyas Chandrakaladharan, Yuan Liu, Rey Coaguila, Maxwell Chen, Sally Ma, Pouya Tafti, Susheel Tatineni, Terry Spitz, Jiayu Ye, Paul Vicol, Mihaela Rosca, Adrià Puigdomènech, Zohar Yahav, Sanjay Ghemawat, Hanzhao Lin, Phoebe Kirk, Zaid Nabulsi, Sergey Brin, Bernd Bohnet, Ken Caluwaerts, Aditya Srikanth Veerubhotla, Dan Zheng, Zihang Dai, Petre Petrov, Yichong Xu, Ramin Mehran, Zhuo Xu, Luisa Zintgraf, Jiho Choi, Spurthi Amba Hombaiah, Romal Thoppilan, Sashank Reddi, Lukasz Lew, Li Li, Kellie Webster, KP Sawhney, Lampros Lamprou, Siamak Shakeri, Mayank Lunayach, Jianmin Chen, Sumit Bagri, Alex Salcianu, Ying Chen, Yani Donchev, Charlotte Magister, Signe Nørly, Vitor Rodrigues, Tomas Izo, Hila Noga, Joe Zou, Thomas Köppe, Wenxuan Zhou, Kenton Lee, Xiangzhu Long, Danielle Eisenbud, Anthony Chen, Connor Schenck, Chi Ming To, Peilin Zhong, Emanuel Taropa, Minh Truong, Omer Levy, Danilo Martins, Zhiyuan Zhang, Christopher Semturs, Kelvin Zhang, Alex Yakubovich, Pol Moreno, Lara McConnaughey, Di Lu, Sam Redmond, Lotte Weerts, Yonatan Bitton, Tiziana Refice, Nicolas Lacasse, Arthur Conmy, Corentin Tallec, Julian Odell, Hannah Forbes-Pollard, Arkadiusz Socala, Jonathan Hoech, Pushmeet Kohli, Alanna Walton, Rui Wang, Mikita Sazanovich, Kexin Zhu, Andrei Kapishnikov, Rich 
Galt, Matthew Denton, Ben Murdoch, Caitlin Sikora, Kareem Mohamed, Wei Wei, Uri First, Tim McConnell, Luis C. Cobo, James Qin, Thi Avrahami, Daniel Balle, Yu Watanabe, Annie Louis, Adam Kraft, Setareh Ariafar, Yiming Gu, Eugénie Rives, Charles Yoon, Andrei Rusu, James Cobon-Kerr, Chris Hahn, Jiaming Luo, Yuvein, Zhu, Niharika Ahuja, Rodrigo Benenson, Raphaël Lopez Kaufman, Honglin Yu, Lloyd Hightower, Junlin Zhang, Darren Ni, Lisa Anne Hendricks, Gabby Wang, Gal Yona, Lalit Jain, Pablo Barrio, Surya Bhupatiraju, Siva Velusamy, Allan Dafoe, Sebastian Riedel, Tara Thomas, Zhe Yuan, Mathias Bellaiche, Sheena Panthaplackel, Klemen Kloboves, Sarthak Jauhari, Canfer Akbulut, Todor Davchev, Evgeny Gladchenko, David Madras, Aleksandr Chuklin, Tyrone Hill, Quan Yuan, Mukundan Madhavan, Luke Leonhard, Dylan Scandinaro, Qihang Chen, Ning Niu, Arthur Douillard, Bogdan Damoc, Yasumasa Onoe, Fabian Pedregosa, Fred Bertsch, Chas Leichner, Joseph Pagadora, Jonathan Malmaud, Sameera Ponda, Andy Twigg, Oleksii Duzhyi, Jingwei Shen, Miaosen Wang, Roopal Garg, Jing Chen, Utku Evci, Jonathan Lee, Leon Liu, Koji Kojima, Masa Yamaguchi, Arunkumar Rajendran, AJ Piergiovanni, Vinodh Kumar Rajendran, Marco Fornoni, Gabriel Ibagon, Harry Ragan, Sadh MNM Khan, John Blitzer, Andrew Bunner, Guan Sun, Takahiro Kosakai, Scott Lundberg, Ndidi Elue, Kelvin Guu, SK Park, Jane Park, Arunachalam Narayanaswamy, Chengda Wu, Jayaram Mudigonda, Trevor Cohn, Hairong Mu, Ravi Kumar, Laura Graesser, Yichi Zhang, Richard Killam, Vincent Zhuang, Mai Giménez, Wael Al Jishi, Ruy Ley-Wild, Alex Zhai, Kazuki Osawa, Diego Cedillo, Jialu Liu, Mayank Upadhyay, Marcin Sieniek, Roshan Sharma, Tom Paine, Anelia Angelova, Sravanti Addepalli, Carolina Parada, Kingshuk Majumder, Avery Lamp, Sanjiv Kumar, Xiang Deng, Artiom Myaskovsky, Tea Sabolić, Jeffrey Dudek, Sarah York, Félix de Chaumont Quitry, Jiazhong Nie, Dee Cattle, Alok Gunjan, Bilal Piot, Waleed Khawaja, Seojin Bang, Simon Wang, Siavash Khodadadeh, Raghavender 
R, Praynaa Rawlani, Richard Powell, Kevin Lee, Johannes Griesser, GS Oh, Cesar Magalhaes, Yujia Li, Simon Tokumine, Hadas Natalie Vogel, Dennis Hsu, Arturo BC, Disha Jindal, Matan Cohen, Zi Yang, Junwei Yuan, Dario de Cesare, Tony Bruguier, Jun Xu, Monica Roy, Alon Jacovi, Dan Belov, Rahul Arya, Phoenix Meadowlark, Shlomi Cohen-Ganor, Wenting Ye, Patrick Morris-Suzuki, Praseem Banzal, Gan Song, Pranavaraj Ponnuramu, Fred Zhang, George Scrivener, Salah Zaiem, Alif Raditya Rochman, Kehang Han, Badih Ghazi, Kate Lee, Shahar Drath, Daniel Suo, Antonious Girgis, Pradeep Shenoy, Duy Nguyen, Douglas Eck, Somit Gupta, Le Yan, Joao Carreira, Anmol Gulati, Ruoxin Sang, Daniil Mirylenka, Emma Cooney, Edward Chou, Mingyang Ling, Cindy Fan, Ben Coleman, Guilherme Tubone, Ravin Kumar, Jason Baldridge, Felix Hernandez-Campos, Angeliki Lazaridou, James Besley, Itay Yona, Neslihan Bulut, Quentin Wellens, AJ Pierigiovanni, Jasmine George, Richard Green, Pu Han, Connie Tao, Geoff Clark, Chong You, Abbas Abdolmaleki, Justin Fu, Tongzhou Chen, Ashwin Chaugule, Angad Chandorkar, Altaf Rahman, Will Thompson, Penporn Koanantakool, Mike Bernico, Jie Ren, Andrey Vlasov, Sergei Vassilvitskii, Maciej Kula, Yizhong Liang, Dahun Kim, Yangsibo Huang, Chengxi Ye, Dmitry Lepikhin, Wesley Helmholz

In this report, we introduce the Gemini 2.X model family: Gemini 2.5 Pro and Gemini 2.5 Flash, as well as our earlier Gemini 2.0 Flash and Flash-Lite models. Gemini 2.5 Pro is our most capable model yet, achieving SoTA performance on frontier coding and reasoning benchmarks. In addition to its incredible coding and reasoning skills, Gemini 2.5 Pro is a thinking model that excels at multimodal understanding and it is now able to process up to 3 hours of video content. Its unique combination of long context, multimodal and reasoning capabilities can be combined to unlock new agentic workflows. Gemini 2.5 Flash provides excellent reasoning abilities at a fraction of the compute and latency requirements and Gemini 2.0 Flash and Flash-Lite provide high performance at low latency and cost. Taken together, the Gemini 2.X model generation spans the full Pareto frontier of model capability vs cost, allowing users to explore the boundaries of what is possible with complex agentic problem solving.
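The abstract's claim that the 2.X generation "spans the full Pareto frontier of model capability vs cost" can be made concrete with a small sketch. The frontier-selection logic below is standard; the cost and score numbers attached to the model names are invented purely for illustration and are not from the report.

```python
def pareto_frontier(points):
    """Keep the (cost, score) points not dominated by any cheaper,
    higher-scoring point (minimize cost, maximize score)."""
    pts = sorted(points, key=lambda p: (p[0], -p[1]))  # cheapest first
    frontier, best_score = [], float("-inf")
    for cost, score in pts:
        if score > best_score:  # strictly better than everything cheaper
            frontier.append((cost, score))
            best_score = score
    return frontier

# Hypothetical relative cost and benchmark score per model (illustrative only).
models = {
    "2.0-flash-lite": (1, 60),
    "2.0-flash":      (2, 68),
    "2.5-flash":      (4, 78),
    "2.5-pro":        (10, 90),
}
frontier = pareto_frontier(models.values())
```

With these made-up numbers every model is non-dominated, which is exactly the "full Pareto frontier" situation the abstract describes: each tier buys more capability only at more cost.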


Paper and Project Links

PDF 72 pages, 17 figures

Summary

The new-generation Gemini model family includes Gemini 2.5 Pro, Gemini 2.5 Flash, and the earlier Gemini 2.0 Flash and Flash-Lite models. Gemini 2.5 Pro is the most capable model, achieving SoTA performance on frontier coding and reasoning benchmarks; it excels at multimodal understanding, can process up to 3 hours of video content, and unlocks new agentic workflows. The other models, Gemini 2.5 Flash, Gemini 2.0 Flash, and Flash-Lite, strike strong balances among performance, latency, and cost. Together, the family spans the full Pareto frontier of model capability versus cost, letting users explore the boundaries of complex agentic problem solving.

Key Takeaways

  1. Gemini 2.5 Pro is the most capable model in the family, achieving SoTA performance, with multimodal understanding and the ability to process up to 3 hours of video content.
  2. Gemini 2.5 Pro's combination of long-context, multimodal, and reasoning capabilities unlocks new agentic workflows.
  3. Gemini 2.5 Flash offers excellent reasoning abilities at a fraction of the compute and latency requirements.
  4. Gemini 2.0 Flash and Flash-Lite deliver high performance at low latency and cost.
  5. The Gemini 2.X family covers a broad range of model capabilities, serving different user needs.
  6. The series balances performance against cost, giving users wider room for exploration.

Cool Papers

Click here to view paper screenshots

Demonstrating Quantum Scaling Advantage in Approximate Optimization for Energy Coalition Formation with 100+ Agents

Authors:Naeimeh Mohseni, Thomas Morstyn, Corey O’Meara, David Bucher, Jonas Nüßlein, Giorgio Cortiana

The formation of energy communities is pivotal for advancing decentralized and sustainable energy management. Within this context, Coalition Structure Generation (CSG) emerges as a promising framework. The complexity of CSG grows rapidly with the number of agents, making classical solvers impractical for even moderate sizes. This suggests CSG as an ideal candidate for benchmarking quantum algorithms against classical ones. Facing ongoing challenges in attaining computational quantum advantage for exact optimization, we pivot our focus to benchmarking quantum and classical solvers for approximate optimization. Approximate optimization is particularly critical for industrial use cases requiring real-time optimization, where finding high-quality solutions quickly is often more valuable than achieving exact solutions more slowly. Our findings indicate that quantum annealing (QA) on D-Wave can achieve solutions of comparable quality to our best classical solver, but with more favorable runtime scaling, showcasing an advantage. This advantage is observed when compared to solvers such as Tabu search, simulated annealing, and the state-of-the-art solver Gurobi, in finding approximate solutions for energy community formation involving over 100 agents. D-Wave also surpasses 1-round QAOA on IBM hardware. Our findings represent the largest benchmark of quantum approximate optimization for a real-world dense model beyond the hardware’s native topology, where D-Wave demonstrates a scaling advantage.
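As a rough illustration of the approximate-optimization setting benchmarked above, the sketch below runs plain simulated annealing, one of the classical baselines the abstract names, on a toy coalition structure generation instance. The characteristic value function, cooling schedule, and all parameters are invented for illustration and stand in for a real energy-community model.

```python
import math
import random

def coalition_value(size):
    # Toy characteristic function (invented): synergy rewards mid-sized
    # coalitions, while a cubic congestion term punishes very large ones.
    return size * size if size <= 4 else size * size - (size - 4) ** 3

def partition_value(labels):
    """Total value of a coalition structure given agent->coalition labels."""
    sizes = {}
    for lab in labels:
        sizes[lab] = sizes.get(lab, 0) + 1
    return sum(coalition_value(s) for s in sizes.values())

def anneal_csg(n_agents, iters=30000, t0=3.0, seed=0):
    """Simulated annealing over agent-to-coalition assignments."""
    rng = random.Random(seed)
    labels = list(range(n_agents))          # start from singleton coalitions
    cur = best = partition_value(labels)
    best_labels = labels[:]
    for step in range(iters):
        temp = t0 * (1.0 - step / iters) + 1e-9   # linear cooling
        i, new = rng.randrange(n_agents), rng.randrange(n_agents)
        old = labels[i]
        if new == old:
            continue
        labels[i] = new                     # move one agent to another coalition
        cand = partition_value(labels)
        if cand >= cur or rng.random() < math.exp((cand - cur) / temp):
            cur = cand                      # accept (Metropolis criterion)
            if cur > best:
                best, best_labels = cur, labels[:]
        else:
            labels[i] = old                 # reject, roll the move back
    return best, best_labels

best, assignment = anneal_csg(12)
```

The point of the paper's benchmark is precisely how the runtime of heuristics like this one scales as the agent count grows past 100, where quantum annealing's scaling was observed to be more favorable.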


Paper and Project Links

PDF

Summary

The formation of energy communities is pivotal for advancing decentralized and sustainable energy management, and Coalition Structure Generation (CSG) offers a promising framework for it. Because CSG's complexity grows rapidly with the number of agents, classical solvers become impractical even at moderate sizes, making CSG an ideal benchmark for comparing quantum and classical algorithms. Given the ongoing challenge of attaining computational quantum advantage for exact optimization, the study instead benchmarks quantum and classical solvers on approximate optimization, which is especially critical for industrial use cases requiring real-time optimization, where finding high-quality solutions quickly is often more valuable than reaching exact solutions slowly. The findings show that quantum annealing on D-Wave achieves solutions of quality comparable to the best classical solver with more favorable runtime scaling, an advantage observed against Tabu search, simulated annealing, and the state-of-the-art solver Gurobi on energy-community formation involving over 100 agents. D-Wave also surpasses 1-round QAOA on IBM hardware. The work represents the largest benchmark of quantum approximate optimization for a real-world dense model beyond the hardware's native topology, where D-Wave demonstrates a scaling advantage.

Key Takeaways

  1. Forming energy communities is pivotal for advancing decentralized and sustainable energy management.
  2. The complexity of Coalition Structure Generation (CSG) grows rapidly with the number of agents, making classical solvers impractical and making CSG an ideal benchmark for comparing quantum and classical algorithms.
  3. Quantum annealing shows an advantage on energy-community formation problems involving over 100 agents.
  4. Compared with other solvers, D-Wave quantum annealing achieves more favorable runtime scaling on these problems.
  5. D-Wave performs strongly in approximate-optimization settings that demand real-time solutions, especially with large numbers of agents.
  6. This study is the largest benchmark of quantum approximate optimization for a real-world dense model beyond the hardware's native topology.

Cool Papers

Click here to view paper screenshots


Author: Kedreamix
Copyright notice: Unless otherwise stated, all posts on this blog are licensed under CC BY 4.0. Please credit Kedreamix as the source when reposting!