
Face Swapping


⚠️ All summaries below are generated by a large language model and may contain errors; they are for reference only, so use them with caution.
🔴 Note: do not rely on them for serious academic work; they are only meant as a first-pass screen before reading the papers!
💗 If you find our project ChatPaperFree helpful, please give us some encouragement! ⭐️ Try it for free on HuggingFace

Updated 2025-10-02

DGM4+: Dataset Extension for Global Scene Inconsistency

Authors:Gagandeep Singh, Samudi Amarsinghe, Priyanka Singh, Xue Li

The rapid advances in generative models have significantly lowered the barrier to producing convincing multimodal disinformation. Fabricated images and manipulated captions increasingly co-occur to create persuasive false narratives. While the Detecting and Grounding Multi-Modal Media Manipulation (DGM4) dataset established a foundation for research in this area, it is restricted to local manipulations such as face swaps, attribute edits, and caption changes. This leaves a critical gap: global inconsistencies, such as mismatched foregrounds and backgrounds, which are now prevalent in real-world forgeries. To address this, we extend DGM4 with 5,000 high-quality samples that introduce Foreground-Background (FG-BG) mismatches and their hybrids with text manipulations. Using OpenAI’s gpt-image-1 and carefully designed prompts, we generate human-centric news-style images where authentic figures are placed into absurd or impossible backdrops (e.g., a teacher calmly addressing students on the surface of Mars). Captions are produced under three conditions: literal, text attribute, and text split, yielding three new manipulation categories: FG-BG, FG-BG+TA, and FG-BG+TS. Quality control pipelines enforce one-to-three visible faces, perceptual hash deduplication, OCR-based text scrubbing, and realistic headline length. By introducing global manipulations, our extension complements existing datasets, creating a benchmark DGM4+ that tests detectors on both local and global reasoning. This resource is intended to strengthen evaluation of multimodal models such as HAMMER, which currently struggle with FG-BG inconsistencies. We release our DGM4+ dataset and generation script at https://github.com/Gaganx0/DGM4plus
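The abstract describes generating the FG-BG samples with OpenAI's gpt-image-1 and carefully designed prompts. As a rough illustration, here is a minimal sketch of such a call with the OpenAI Python SDK; the prompt text, image size, and output handling are illustrative assumptions, not the authors' released generation script.

```python
# Hypothetical sketch: generating one news-style FG-BG mismatch image with gpt-image-1.
# The prompt and file handling below are assumptions for illustration only.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Photorealistic news photo: a teacher calmly addressing students "
    "on the surface of Mars, one to three visible faces, no text overlays."
)

result = client.images.generate(model="gpt-image-1", prompt=prompt, size="1024x1024")
image_bytes = base64.b64decode(result.data[0].b64_json)  # gpt-image-1 returns base64 data

with open("fg_bg_sample.png", "wb") as f:
    f.write(image_bytes)
```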


Paper and project links

PDF 8 pages, 3 figures

Summary

Rapid progress in generative models has made convincing multimodal disinformation much easier to produce. To address this, the paper extends the DGM4 dataset with high-quality samples that introduce foreground-background mismatches and their hybrids with text manipulation. Using OpenAI's gpt-image-1 and carefully designed prompts, it generates news-style images in which real people are placed into absurd or impossible backdrops. A quality-control pipeline enforces rules such as a limited number of visible faces and perceptual-hash deduplication, and the resulting extension, DGM4+, adds global manipulations so that detectors can be tested on both local and global reasoning. The resource is intended to strengthen the evaluation of multimodal models such as HAMMER. The dataset and generation script are released on GitHub.

Key Takeaways

  1. Generative models have lowered the barrier to producing disinformation.
  2. The DGM4 extension introduces high-quality samples featuring foreground-background (FG-BG) mismatches.
  3. News-style images are generated with OpenAI's gpt-image-1, placing real people in absurd backdrops.
  4. The new data covers three new manipulation categories: FG-BG, FG-BG+TA, and FG-BG+TS.
  5. Quality-control steps keep the dataset reliable and realistic (a minimal sketch of such checks follows after this list).
  6. The resulting DGM4+ benchmark tests detectors on both local and global reasoning.
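Item 5 refers to the quality-control pipeline described in the abstract (one to three visible faces, perceptual-hash deduplication, realistic headline length). The sketch below shows how such checks could be wired together; the Haar-cascade face detector, the Hamming-distance threshold, and the word-count bounds are assumptions, and the OCR-based text scrubbing step is omitted.

```python
# Minimal sketch of quality-control checks in the spirit of the DGM4+ pipeline.
# Detector choice and thresholds are assumptions, not the paper's exact settings.
import cv2
import imagehash
from PIL import Image

seen_hashes = set()  # perceptual hashes of already-accepted samples

def passes_quality_control(image_path: str, headline: str) -> bool:
    # 1) Require one to three visible faces.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if not (1 <= len(faces) <= 3):
        return False

    # 2) Perceptual-hash deduplication against previously accepted images.
    phash = imagehash.phash(Image.open(image_path))
    if any(phash - h <= 4 for h in seen_hashes):  # assumed Hamming-distance threshold
        return False
    seen_hashes.add(phash)

    # 3) Realistic headline length (assumed word-count bounds).
    if not (5 <= len(headline.split()) <= 25):
        return False
    return True
```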

Cool Papers


PHASE-Net: Physics-Grounded Harmonic Attention System for Efficient Remote Photoplethysmography Measurement

Authors:Bo Zhao, Dan Guo, Junzhe Cao, Yong Xu, Tao Tan, Yue Sun, Bochao Zou, Jie Zhang, Zitong Yu

Remote photoplethysmography (rPPG) measurement enables non-contact physiological monitoring but suffers from accuracy degradation under head motion and illumination changes. Existing deep learning methods are mostly heuristic and lack theoretical grounding, which limits robustness and interpretability. In this work, we propose a physics-informed rPPG paradigm derived from the Navier-Stokes equations of hemodynamics, showing that the pulse signal follows a second-order dynamical system whose discrete solution naturally leads to a causal convolution. This provides a theoretical justification for using a Temporal Convolutional Network (TCN). Based on this principle, we design PHASE-Net, a lightweight model with three key components: (1) Zero-FLOPs Axial Swapper module, which swaps or transposes a few spatial channels to mix distant facial regions and enhance cross-region feature interaction without breaking temporal order; (2) Adaptive Spatial Filter, which learns a soft spatial mask per frame to highlight signal-rich areas and suppress noise; and (3) Gated TCN, a causal dilated TCN with gating that models long-range temporal dynamics for accurate pulse recovery. Extensive experiments demonstrate that PHASE-Net achieves state-of-the-art performance with strong efficiency, offering a theoretically grounded and deployment-ready rPPG solution.
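The step from a second-order dynamical system to a causal convolution can be sketched with a generic damped, driven oscillator; the paper's actual hemodynamic equation and coefficients are not reproduced here, so the symbols below (ζ, ω₀, u) are placeholders.

```latex
% Generic second-order system standing in for the hemodynamic model
\ddot{x}(t) + 2\zeta\omega_0\,\dot{x}(t) + \omega_0^2\,x(t) = u(t)

% Finite-difference discretization with step \Delta t gives a two-tap recursion
x[n] = a_1\,x[n-1] + a_2\,x[n-2] + b\,u[n]

% Unrolling the recursion expresses the pulse as a causal convolution,
% which is exactly the form a causal (dilated) temporal convolution can learn
x[n] = \sum_{k=0}^{n} h[k]\,u[n-k]
```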


Paper and project links

PDF

Summary
Remote photoplethysmography (rPPG) enables non-contact physiological monitoring, but its accuracy degrades under head motion and illumination changes. Existing deep learning methods largely lack theoretical grounding, which limits robustness and interpretability. This work derives a physics-informed rPPG paradigm from the Navier-Stokes equations of hemodynamics, showing that the pulse signal follows a second-order dynamical system whose discrete solution naturally leads to a causal convolution, giving a theoretical justification for using a temporal convolutional network (TCN). Built on this principle, PHASE-Net combines three key components: a Zero-FLOPs Axial Swapper module, an Adaptive Spatial Filter, and a gated causal dilated TCN. Extensive experiments show that PHASE-Net delivers state-of-the-art performance with strong efficiency, offering a theoretically grounded, deployment-ready rPPG solution.

Key Takeaways

  1. rPPG measurement is challenged by head motion and illumination changes, which degrade accuracy.
  2. Most existing deep learning methods lack theoretical grounding, which hurts robustness and interpretability.
  3. The paper derives a physics-informed rPPG paradigm from the Navier-Stokes equations of hemodynamics, providing a theoretical justification for using a TCN.
  4. PHASE-Net has three key components: a Zero-FLOPs Axial Swapper module that strengthens cross-region feature interaction; an Adaptive Spatial Filter that highlights signal-rich areas and suppresses noise; and a gated causal dilated TCN for accurate pulse recovery (see the sketch after this list).
  5. PHASE-Net achieves state-of-the-art performance with strong efficiency.
  6. PHASE-Net offers a theoretically grounded, deployment-ready rPPG solution.
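For the gated causal dilated TCN mentioned in item 4, a minimal PyTorch sketch of one such block is shown below; the channel width, tanh-sigmoid gating, and residual connection are common TCN choices assumed here, not the released PHASE-Net implementation.

```python
# Minimal sketch of one gated, causal, dilated temporal-convolution block.
# Layer sizes and the gating form are assumptions, not the PHASE-Net release.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedCausalTCNBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        # Left-only padding keeps the convolution causal (no future frames leak in).
        self.pad = (kernel_size - 1) * dilation
        self.filter_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.gate_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, time)
        y = F.pad(x, (self.pad, 0))                           # pad the past only
        z = torch.tanh(self.filter_conv(y)) * torch.sigmoid(self.gate_conv(y))
        return x + z                                          # residual connection

# Usage: stack blocks with growing dilation to widen the temporal receptive field.
blocks = nn.Sequential(*[GatedCausalTCNBlock(16, dilation=2 ** i) for i in range(4)])
out = blocks(torch.randn(2, 16, 160))  # e.g. 2 clips, 16 feature channels, 160 frames
```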

Cool Papers



Author: Kedreamix
Copyright notice: Unless otherwise stated, all posts on this blog are licensed under CC BY 4.0. Please credit Kedreamix when reposting!