嘘~ 正在从服务器偷取页面 . . .

Few-Shot


⚠️ 以下所有内容总结都来自于 大语言模型的能力,如有错误,仅供参考,谨慎使用
🔴 请注意:千万不要用于严肃的学术场景,只能用于论文阅读前的初筛!
💗 如果您觉得我们的项目对您有帮助 ChatPaperFree ,还请您给我们一些鼓励!⭐️ HuggingFace免费体验

2025-08-22 更新

Tinker: Diffusion’s Gift to 3D–Multi-View Consistent Editing From Sparse Inputs without Per-Scene Optimization

Authors:Canyu Zhao, Xiaoman Li, Tianjian Feng, Zhiyue Zhao, Hao Chen, Chunhua Shen

We introduce Tinker, a versatile framework for high-fidelity 3D editing that operates in both one-shot and few-shot regimes without any per-scene finetuning. Unlike prior techniques that demand extensive per-scene optimization to ensure multi-view consistency or to produce dozens of consistent edited input views, Tinker delivers robust, multi-view consistent edits from as few as one or two images. This capability stems from repurposing pretrained diffusion models, which unlocks their latent 3D awareness. To drive research in this space, we curate the first large-scale multi-view editing dataset and data pipeline, spanning diverse scenes and styles. Building on this dataset, we develop our framework capable of generating multi-view consistent edited views without per-scene training, which consists of two novel components: (1) Referring multi-view editor: Enables precise, reference-driven edits that remain coherent across all viewpoints. (2) Any-view-to-video synthesizer: Leverages spatial-temporal priors from video diffusion to perform high-quality scene completion and novel-view generation even from sparse inputs. Through extensive experiments, Tinker significantly reduces the barrier to generalizable 3D content creation, achieving state-of-the-art performance on editing, novel-view synthesis, and rendering enhancement tasks. We believe that Tinker represents a key step towards truly scalable, zero-shot 3D editing. Project webpage: https://aim-uofa.github.io/Tinker

我们介绍了Tinker,这是一个用于高保真3D编辑的通用框架,它可以在单镜头和少镜头模式下运行,无需针对每个场景进行微调。与需要广泛场景优化的先前技术不同,以确保多视角一致性或生成数十个一致的可编辑输入视角,Tinker仅从一张或两张图像就能实现稳健、多视角一致编辑。这种能力来源于对预训练扩散模型的再利用,这解锁了它们的潜在3D意识。为了推动这一领域的研究,我们整理的第一个大规模多视角编辑数据集和数据流程管道,涵盖了各种场景和风格。基于这个数据集,我们开发了一个框架,能够在无需针对每个场景进行训练的情况下,生成多视角一致的可编辑视角,它包含两个新颖组件:(1)引用多视角编辑器:实现精确、参考驱动编辑,保持所有观点的一致性。(2)任何视角到视频合成器:利用视频扩散的空间时间先验知识,进行高质量场景完成和新颖视角生成,即使是从稀疏输入也能实现。通过广泛实验,Tinker显著降低了通用3D内容创作的障碍,在编辑、新颖视角合成和渲染增强任务上达到了最先进的性能。我们相信,Tinker是朝着真正可扩展的零镜头3D编辑迈出的关键一步。项目网页:https://aim-uofa.github.io/Tinker

论文及项目相关链接

PDF Project webpage: https://aim-uofa.github.io/Tinker

Summary

本文介绍了Tinker框架,该框架用于高保真度的3D编辑,可在单镜头和少镜头模式下操作,无需针对每个场景进行微调。Tinker通过重新利用预训练的扩散模型,实现了从极少图像(一张或两张)就能产生稳健、多视角一致的编辑效果。为推进此领域的研究,团队创建了首个大规模多视角编辑数据集和数据流程管道,涵盖各种场景和风格。基于该数据集,团队开发了无需针对每个场景进行训练的框架,包含两个新颖组件:一是参照多视角编辑器,可实现精确、参考驱动的编辑,在所有观点中保持一致性;二是任何视角到视频合成器,利用视频扩散的空间时间先验信息进行高质量的场景补全和新颖视角生成,即使从稀疏输入也能实现。通过广泛实验,Tinker大大降低了通用3D内容创作的门槛,在编辑、新颖视角合成和渲染增强任务上达到领先水平。我们相信,Tinker是朝着真正可扩展的零镜头3D编辑迈出的关键一步。

Key Takeaways

  1. Tinker是一个用于高保真度3D编辑的通用框架,可在无需为每个场景进行微调的情况下在一镜和少镜模式下操作。
  2. Tinker通过重新利用预训练的扩散模型实现了稳健、多视角一致的编辑,仅需要一张或两张图像。
  3. 为推动这一领域的研究,团队创建了首个大规模多视角编辑数据集。
  4. Tinker包含两个新颖组件:参照多视角编辑器和任何视角到视频合成器。
  5. 参照多视角编辑器可实现精确、参考驱动的编辑,保持所有观点的一致性。
  6. 任何视角到视频合成器能从稀疏输入生成高质量的场景补全和新颖视角。

Cool Papers

点此查看论文截图

D^3-Talker: Dual-Branch Decoupled Deformation Fields for Few-Shot 3D Talking Head Synthesis

Authors:Yuhang Guo, Kaijun Deng, Siyang Song, Jindong Xie, Wenhui Ma, Linlin Shen

A key challenge in 3D talking head synthesis lies in the reliance on a long-duration talking head video to train a new model for each target identity from scratch. Recent methods have attempted to address this issue by extracting general features from audio through pre-training models. However, since audio contains information irrelevant to lip motion, existing approaches typically struggle to map the given audio to realistic lip behaviors in the target face when trained on only a few frames, causing poor lip synchronization and talking head image quality. This paper proposes D^3-Talker, a novel approach that constructs a static 3D Gaussian attribute field and employs audio and Facial Motion signals to independently control two distinct Gaussian attribute deformation fields, effectively decoupling the predictions of general and personalized deformations. We design a novel similarity contrastive loss function during pre-training to achieve more thorough decoupling. Furthermore, we integrate a Coarse-to-Fine module to refine the rendered images, alleviating blurriness caused by head movements and enhancing overall image quality. Extensive experiments demonstrate that D^3-Talker outperforms state-of-the-art methods in both high-fidelity rendering and accurate audio-lip synchronization with limited training data. Our code will be provided upon acceptance.

在3D说话人头部合成中的关键挑战在于,依赖于长时间说话人头视频来从头开始针对每个目标身份训练新模型。最近的方法试图通过预训练模型从音频中提取通用特征来解决这个问题。然而,由于音频包含与嘴唇运动无关的信息,当仅在少量帧上进行训练时,现有方法通常很难将给定音频映射到目标面孔的现实嘴唇行为,从而导致嘴唇同步不良和说话人头部图像质量差。本文提出了D^3-Talker,这是一种构建静态3D高斯属性场的新方法,采用音频和面部运动信号独立控制两个不同的高斯属性变形场,有效解耦通用和个性化变形的预测。我们在预训练过程中设计了一种新型相似性对比损失函数,以实现更彻底的解耦。此外,我们整合了从粗糙到精细的模块来优化渲染图像,减轻头部运动引起的模糊,提高整体图像质量。大量实验表明,在有限的训练数据下,D^3-Talker在高保真渲染和准确的音频-嘴唇同步方面都优于最先进的方法。代码在通过后提供。

论文及项目相关链接

PDF

Summary
新一代三维头部合成技术面临的主要挑战在于需要大量训练数据为每个目标身份从头开始建立模型。本文提出一种名为D^3-Talker的新方法,构建静态三维高斯属性场,并利用音频和面部运动信号独立控制两个高斯属性变形场,实现通用和个性化变形的预测解耦。通过设计预训练期间的相似性对比损失函数,实现了更彻底的解耦。此外,集成粗到细模块,细化渲染图像,减轻头部运动引起的模糊问题,提高整体图像质量。实验表明,在有限训练数据下,D^3-Talker在高保真渲染和准确的音频唇同步方面优于现有技术。

Key Takeaways

  • D^3-Talker解决了三维头部合成中对大量训练数据的需求问题。
  • 通过构建静态三维高斯属性场并利用音频和面部运动信号独立控制变形场,实现了通用和个性化变形的预测解耦。
  • 采用相似性对比损失函数在预训练期间实现更彻底的解耦。
  • 集成粗到细模块以提高图像质量并减轻头部运动引起的模糊问题。
  • 在有限训练数据下,D^3-Talker在音频唇同步和高保真渲染方面表现出优越性能。

Cool Papers

点此查看论文截图

CCFC: Core & Core-Full-Core Dual-Track Defense for LLM Jailbreak Protection

Authors:Jiaming Hu, Haoyu Wang, Debarghya Mukherjee, Ioannis Ch. Paschalidis

Jailbreak attacks pose a serious challenge to the safe deployment of large language models (LLMs). We introduce CCFC (Core & Core-Full-Core), a dual-track, prompt-level defense framework designed to mitigate LLMs’ vulnerabilities from prompt injection and structure-aware jailbreak attacks. CCFC operates by first isolating the semantic core of a user query via few-shot prompting, and then evaluating the query using two complementary tracks: a core-only track to ignore adversarial distractions (e.g., toxic suffixes or prefix injections), and a core-full-core (CFC) track to disrupt the structural patterns exploited by gradient-based or edit-based attacks. The final response is selected based on a safety consistency check across both tracks, ensuring robustness without compromising on response quality. We demonstrate that CCFC cuts attack success rates by 50-75% versus state-of-the-art defenses against strong adversaries (e.g., DeepInception, GCG), without sacrificing fidelity on benign queries. Our method consistently outperforms state-of-the-art prompt-level defenses, offering a practical and effective solution for safer LLM deployment.

越狱攻击对大型语言模型(LLM)的安全部署构成了严峻挑战。我们介绍了CCFC(Core & Core-Full-Core),这是一个双轨制的提示级防御框架,旨在减轻LLM在提示注入和结构感知越狱攻击中的漏洞。CCFC通过少提示的方式首先隔离用户查询的语义核心,然后使用两个互补的轨道对查询进行评估:仅核心轨道,忽略对抗性干扰(例如有毒后缀或前缀注入);以及核心全核心(CFC)轨道,破坏基于梯度或编辑的攻击所利用的结构模式。最终的回应是基于两个轨道上的安全一致性检查而选择的,确保在不影响响应质量的情况下实现稳健性。我们证明,与最新防御技术相比,CCFC在应对强大对手(例如DeepInception、GCG)的攻击时,成功率降低了50-75%,同时在良性查询上的保真度不受损害。我们的方法始终优于最新的提示级防御技术,为更安全地部署LLM提供了实用有效的解决方案。

论文及项目相关链接

PDF 11 pages, 1 figure

Summary

LLM部署面临Jailbreak攻击的挑战。为此,我们提出CCFC(核心与核心全核心)框架,这是一种双轨提示级防御框架,旨在减轻LLM对提示注入和结构感知Jailbreak攻击的脆弱性。CCFC通过少提示的方式隔离用户查询的语义核心,并使用两个互补轨道评估查询:仅核心轨道忽略对抗性干扰(如毒性后缀或前缀注入),以及核心全核心轨道破坏基于梯度或编辑的攻击的结构模式。最终响应是基于两个轨道的安全一致性检查选择的,确保稳健性而不损害响应质量。研究表明,CCFC成功攻击率降低率高达百分之五十至七十五对比最先进技术防御强大的对手(如DeepInception和GCG),并在良性查询中不失真性。该方法在更专业的提示层面防御技术中不断领先,成为更安全部署LLM的有效解决方案。

Key Takeaways

  1. Jailbreak攻击对大型语言模型(LLM)的安全部署构成严峻挑战。
  2. CCFC框架旨在通过双轨提示级防御减轻LLM的脆弱性。
  3. CCFC框架包括仅核心轨道和核心全核心轨道两种评估方式。
  4. 仅核心轨道可忽略对抗性干扰,而核心全核心轨道则能破坏攻击的结构模式。
  5. 最终响应基于两个轨道的安全一致性检查选择,确保稳健性和响应质量。
  6. CCFC框架成功降低了攻击成功率百分之五十至七十五,对比其他先进技术防御效果更佳。

Cool Papers

点此查看论文截图

FMSD-TTS: Few-shot Multi-Speaker Multi-Dialect Text-to-Speech Synthesis for Ü-Tsang, Amdo and Kham Speech Dataset Generation

Authors:Yutong Liu, Ziyue Zhang, Ban Ma-bao, Yuqing Cai, Yongbin Yu, Renzeng Duojie, Xiangxiang Wang, Fan Gao, Cheng Huang, Nyima Tashi

Tibetan is a low-resource language with minimal parallel speech corpora spanning its three major dialects-"U-Tsang, Amdo, and Kham-limiting progress in speech modeling. To address this issue, we propose FMSD-TTS, a few-shot, multi-speaker, multi-dialect text-to-speech framework that synthesizes parallel dialectal speech from limited reference audio and explicit dialect labels. Our method features a novel speaker-dialect fusion module and a Dialect-Specialized Dynamic Routing Network (DSDR-Net) to capture fine-grained acoustic and linguistic variations across dialects while preserving speaker identity. Extensive objective and subjective evaluations demonstrate that FMSD-TTS significantly outperforms baselines in both dialectal expressiveness and speaker similarity. We further validate the quality and utility of the synthesized speech through a challenging speech-to-speech dialect conversion task. Our contributions include: (1) a novel few-shot TTS system tailored for Tibetan multi-dialect speech synthesis, (2) the public release of a large-scale synthetic Tibetan speech corpus generated by FMSD-TTS, and (3) an open-source evaluation toolkit for standardized assessment of speaker similarity, dialect consistency, and audio quality.

藏语是一种资源匮乏的语言,其三大方言区——乌齐、安多和康区——的平行语音语料库极为有限,限制了语音建模的进展。为了解决这一问题,我们提出了FMSD-TTS,这是一个少样本、多发言人、多方言的文本到语音框架,它可以从有限的参考音频和明确的方言标签中合成平行的方言语音。我们的方法具有新颖的发声人-方言融合模块和方言专用动态路由网络(DSDR-Net),能够捕捉不同方言之间精细的声学和语言变化,同时保留发声人的身份。大量的客观和主观评估表明,FMSD-TTS在方言表现力和发声人相似性方面显著优于基准线。我们进一步通过具有挑战性的语音到语音方言转换任务验证了合成语音的质量和实用性。我们的贡献包括:(1)针对藏语多方言语音合成的少样本TTS系统,(2)公开发布由FMSD-TTS生成的大规模合成藏语语音语料库,(3)开放源代码评估工具包,用于标准化评估发声人相似性、方言一致性和音频质量。

论文及项目相关链接

PDF 18 pages

Summary

该文本介绍了针对藏语这一低资源语言,如何利用有限的参考音频和明确的方言标签,构建一个能够合成平行方言语音的少量、多说话者、多方言文本到语音(TTS)框架FMSD-TTS。该框架通过创新的说话者方言融合模块和方言专业化动态路由网络(DSDR-Net),能够在有限的资源下捕捉方言间的细微声学差异和语言变化,同时保持说话者身份。通过客观和主观评估,FMSD-TTS在方言表达力和说话者相似性方面显著优于基准模型。此外,该研究还通过具有挑战性的语音到语音方言转换任务验证了合成语音的质量和实用性。整体而言,该研究为藏语的多方言语音合成提供了创新解决方案。

Key Takeaways

  1. 藏语是一种低资源语言,缺乏跨越其三大方言的平行语音语料库,限制了语音建模的进展。
  2. 提出了一种名为FMSD-TTS的少量、多说话者、多方言的文本到语音合成框架。
  3. FMSD-TTS通过创新的说话者方言融合模块和方言专业化动态路由网络(DSDR-Net),能在有限资源下捕捉方言的细微差异。
  4. FMSD-TTS在方言表达力和说话者相似性方面显著优于基准模型。
  5. 研究通过语音到语音的方言转换任务验证了合成语音的质量和实用性。
  6. 研究贡献包括为藏语多方言语音合成定制的少量TTS系统。

Cool Papers

点此查看论文截图

MEGA: Second-Order Gradient Alignment for Catastrophic Forgetting Mitigation in GFSCIL

Authors:Jinhui Pang, Changqing Lin, Hao Lin, Zhihui Zhang, Weiping Ding, Yu Liu, Xiaoshuai Hao

Graph Few-Shot Class-Incremental Learning (GFSCIL) enables models to continually learn from limited samples of novel tasks after initial training on a large base dataset. Existing GFSCIL approaches typically utilize Prototypical Networks (PNs) for metric-based class representations and fine-tune the model during the incremental learning stage. However, these PN-based methods oversimplify learning via novel query set fine-tuning and fail to integrate Graph Continual Learning (GCL) techniques due to architectural constraints. To address these challenges, we propose a more rigorous and practical setting for GFSCIL that excludes query sets during the incremental training phase. Building on this foundation, we introduce Model-Agnostic Meta Graph Continual Learning (MEGA), aimed at effectively alleviating catastrophic forgetting for GFSCIL. Specifically, by calculating the incremental second-order gradient during the meta-training stage, we endow the model to learn high-quality priors that enhance incremental learning by aligning its behaviors across both the meta-training and incremental learning stages. Extensive experiments on four mainstream graph datasets demonstrate that MEGA achieves state-of-the-art results and enhances the effectiveness of various GCL methods in GFSCIL. We believe that our proposed MEGA serves as a model-agnostic GFSCIL paradigm, paving the way for future research.

图增量学习(Graph Few-Shot Class-Incremental Learning,GFSCIL)允许模型在大量基础数据集初始训练后,从新的任务的有限样本中持续学习。现有的GFSCIL方法通常利用原型网络(PNs)进行基于度量的类表示,并在增量学习阶段微调模型。然而,基于PN的方法通过新的查询集微调来简化学习,并且由于架构约束,无法整合图持续学习(Graph Continual Learning,GCL)技术。为了应对这些挑战,我们为GFSCIL提出了一个更严格、更实用的设置,即在增量训练阶段排除查询集。在此基础上,我们引入了模型无关的元图持续学习(Model-Agnostic Meta Graph Continual Learning,MEGA),旨在有效缓解GFSCIL的灾难性遗忘问题。具体来说,通过计算元训练阶段的增量二阶梯度,我们赋予模型学习高质量先验的能力,这些先验知识通过元训练和增量学习阶段的行为对齐,增强了增量学习。在四个主流图数据集上的大量实验表明,MEGA达到了最新的结果,提高了GFSCIL中各种GCL方法的有效性。我们相信我们提出的MEGA作为一种模型无关的GFSCIL范式,为未来的研究铺平了道路。

论文及项目相关链接

PDF Under Review

Summary

基于原型网络的图少样本类增量学习(GFSCIL)面临挑战,现有方法过于简化学习并忽略查询集的作用。我们提出更严格的GFSCIL设置,并引入模型无关的元图持续学习(MEGA)来缓解灾难性遗忘问题。通过计算元训练阶段的二阶增量梯度,MEGA使模型学习高质量先验,提高增量学习效果。实验证明,MEGA在四个主流图数据集上取得最佳结果。

Key Takeaways

  • GFSCIL面临现有方法过于简化学习和忽略查询集的问题。
  • 提出更严格的GFSCIL设置,以应对挑战。
  • 介绍MEGA方法,旨在缓解灾难性遗忘问题。
  • 通过计算元训练阶段的二阶增量梯度,提高模型学习效果。
  • MEGA在四个主流图数据集上实现最佳性能。

Cool Papers

点此查看论文截图

Action Engine: Automatic Workflow Generation in FaaS

Authors:Akiharu Esashi, Pawissanutt Lertpongrujikorn, Shinji Kato, Mohsen Amini Salehi

Function as a Service (FaaS) is poised to become the foundation of the next generation of cloud systems due to its inherent advantages in scalability, cost-efficiency, and ease of use. However, challenges such as the need for specialized knowledge, platform dependence, and difficulty in scalability in building functional workflows persist for cloud-native application developers. To overcome these challenges and mitigate the burden of developing FaaS-based applications, in this paper, we propose a mechanism called Action Engine, that makes use of tool-augmented large language models (LLMs) at its kernel to interpret human language queries and automates FaaS workflow generation, thereby, reducing the need for specialized expertise and manual design. Action Engine includes modules to identify relevant functions from the FaaS repository and seamlessly manage the data dependency between them, ensuring the developer’s query is processed and resolved. Beyond that, Action Engine can execute the generated workflow by injecting the user-provided arguments. On another front, this work addresses a gap in tool-augmented LLM research via adopting an Automatic FaaS Workflow Generation perspective to systematically evaluate methodologies across four fundamental sub-processes. Through benchmarking various parameters, this research provides critical insights into streamlining workflow automation for real-world applications, specifically in the FaaS continuum. Our evaluations demonstrate that the Action Engine achieves comparable performance to the few-shot learning approach while maintaining platform- and language-agnosticism, thereby, mitigating provider-specific dependencies in workflow generation. We notice that Action Engine can unlock FaaS workflow generation for non-cloud-savvy developers and expedite the development cycles of cloud-native applications.

函数即服务(FaaS)由于其可扩展性、成本效益和易用性等方面的固有优势,正逐渐成为下一代云系统的基础。然而,对于云原生应用开发者来说,仍然存在一些挑战,例如需要专业知识、平台依赖以及构建功能工作流程时的可扩展性困难。为了克服这些挑战并减轻开发基于FaaS的应用的负担,本文提出了一种名为“Action Engine”的机制。该机制利用核心的工具增强大型语言模型(LLM)来解释人类语言查询,并自动化FaaS工作流程生成,从而减少专业知识和技能的需求以及手动设计。Action Engine包括模块,可以从FaaS存储库识别相关功能并无缝管理它们之间的数据依赖性,确保开发者的查询得到处理和解决。此外,Action Engine可以执行通过注入用户提供的参数生成的工作流。另一方面,本工作通过采用自动FaaS工作流程生成的角度,解决了工具增强型LLM研究中的空白,系统地评估了四个基本子过程的方法。通过基准测试各种参数,本研究为简化面向现实世界应用的工作流自动化提供了关键见解,特别是在FaaS领域中。我们的评估表明,Action Engine在达到少样本学习的性能的同时,还保持了平台和语言的独立性,从而减轻了工作流程生成中的特定提供商依赖。我们发现,Action Engine可以解锁非云专业人士的FaaS工作流程生成,并加快云原生应用的发展周期。

论文及项目相关链接

PDF Published in the Future Generation Computer Systems (FGCS) journal; Source code is available at: https://github.com/hpcclab/action_engine

Summary

函数即服务(FaaS)因其可扩展性、成本效益和易用性等优点,有望成为下一代云系统的基础。然而,开发者在构建基于FaaS的应用时面临专业知识需求高、平台依赖性强以及工作流难以扩展等挑战。为了克服这些挑战,本文提出了Action Engine机制,利用工具增强的大型语言模型(LLM)解读人类语言查询并自动生成FaaS工作流,减少专业需求并简化设计。Action Engine模块可识别FaaS存储库中的相关功能并无缝管理它们之间的数据依赖关系,确保开发者查询得到处理并解决。此外,它还能执行生成的工作流并注入用户提供的参数。本研究从自动FaaS工作流生成的角度评估了工具增强型LLM的方法论,为现实应用中的工作流自动化提供了关键见解。评估显示,Action Engine在保持跨平台和语言独立性的同时,实现了与少样本学习相当的性能,为缺乏云经验的开发者解锁了FaaS工作流生成能力并加快了云原生应用的开发周期。
 
**Key Takeaways**
 
1. FaaS因其可扩展性、成本效益和易用性被认为是下一代云系统的基础。
 
2. 开发者在构建基于FaaS的应用时面临专业知识需求高、平台依赖性强等挑战。
 
3. Action Engine机制通过利用工具增强的大型语言模型自动生成FaaS工作流来解决这些挑战。
 
4. Action Engine能识别FaaS存储库中的相关功能并管理数据依赖关系,确保开发者查询得到解决。
 
5. Action Engine能够执行生成的工作流并注入用户参数。
 
6. 研究通过自动FaaS工作流生成的角度评估了工具增强型LLM的方法论,并为现实应用提供了关键见解。

Cool Papers

点此查看论文截图


文章作者: Kedreamix
版权声明: 本博客所有文章除特別声明外,均采用 CC BY 4.0 许可协议。转载请注明来源 Kedreamix !
 上一篇
I2I Translation I2I Translation
I2I Translation 方向最新论文已更新,请持续关注 Update in 2025-08-22 SATURN Autoregressive Image Generation Guided by Scene Graphs
下一篇 
Agent Agent
Agent 方向最新论文已更新,请持续关注 Update in 2025-08-22 Challenges and Opportunities for Participatory Design of Conversational Agents for Young People's Wellbeing
2025-08-22
  目录