
GAN


⚠️ All summaries below are produced by a large language model and may contain errors; they are for reference only, so use them with caution.
🔴 Note: never rely on these for serious academic work; they are only meant as a first-pass screen before actually reading the papers!
💗 If you find our project ChatPaperFree helpful, please give us some encouragement! ⭐️ Try it for free on HuggingFace

Updated 2025-10-22

Is Artificial Intelligence Generated Image Detection a Solved Problem?

Authors:Ziqiang Li, Jiazhen Yan, Ziwen He, Kai Zeng, Weiwei Jiang, Lizhi Xiong, Zhangjie Fu

The rapid advancement of generative models, such as GANs and Diffusion models, has enabled the creation of highly realistic synthetic images, raising serious concerns about misinformation, deepfakes, and copyright infringement. Although numerous Artificial Intelligence Generated Image (AIGI) detectors have been proposed, often reporting high accuracy, their effectiveness in real-world scenarios remains questionable. To bridge this gap, we introduce AIGIBench, a comprehensive benchmark designed to rigorously evaluate the robustness and generalization capabilities of state-of-the-art AIGI detectors. AIGIBench simulates real-world challenges through four core tasks: multi-source generalization, robustness to image degradation, sensitivity to data augmentation, and impact of test-time pre-processing. It includes 23 diverse fake image subsets that span both advanced and widely adopted image generation techniques, along with real-world samples collected from social media and AI art platforms. Extensive experiments on 11 advanced detectors demonstrate that, despite their high reported accuracy in controlled settings, these detectors suffer significant performance drops on real-world data, limited benefits from common augmentations, and nuanced effects of pre-processing, highlighting the need for more robust detection strategies. By providing a unified and realistic evaluation framework, AIGIBench offers valuable insights to guide future research toward dependable and generalizable AIGI detection. Data and code are publicly available at: https://github.com/HorizonTEL/AIGIBench.


Paper and Project Links

PDF Accepted by NeurIPS 2025 Datasets and Benchmarks Track

Summary

The rapid progress of GANs and other generative models (such as Diffusion models) has produced highly realistic synthetic images, raising concerns about misinformation, deepfakes, and copyright infringement. Although many Artificial Intelligence Generated Image (AIGI) detectors have been proposed, often reporting high accuracy, their effectiveness in real-world scenarios remains to be verified. To address this, the authors introduce AIGIBench, a comprehensive benchmark designed to rigorously evaluate the robustness and generalization of state-of-the-art AIGI detectors. AIGIBench simulates real-world challenges through four core tasks: multi-source generalization, robustness to image degradation, sensitivity to data augmentation, and the impact of test-time pre-processing. It includes 23 diverse fake-image subsets covering both advanced and widely adopted image generation techniques, along with real-world samples collected from social media and AI art platforms. Extensive experiments on 11 advanced detectors show that these detectors suffer substantial performance drops on real-world data, gain limited benefit from common augmentations, and are affected by pre-processing in nuanced ways, underscoring the need for more robust detection strategies. By providing a unified and realistic evaluation framework, AIGIBench offers valuable insights for research toward dependable and generalizable AIGI detection.

Key Takeaways

  1. Generative models such as GANs and Diffusion models now produce highly realistic synthetic images.
  2. The real-world effectiveness of Artificial Intelligence Generated Image (AIGI) detectors remains in question.
  3. AIGIBench is introduced as a comprehensive benchmark for evaluating the robustness and generalization of AIGI detectors.
  4. AIGIBench simulates real-world challenges through four core tasks: multi-source generalization, image degradation, data augmentation, and test-time pre-processing (a minimal sketch of the degradation task follows this list).
  5. AIGIBench contains diverse fake-image subsets together with real-world samples.
  6. Advanced detectors still leave considerable room for improvement on real-world data.
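
To make the benchmark's "robustness to image degradation" task concrete, below is a minimal Python sketch of what such an evaluation loop could look like. The degradation set, thresholds, and the `detector` interface are illustrative assumptions, not the actual AIGIBench protocol; see https://github.com/HorizonTEL/AIGIBench for the real code.

```python
# Hypothetical sketch of a robustness-to-degradation evaluation loop,
# in the spirit of AIGIBench's second core task. All names and
# parameters here are illustrative assumptions.
import io

from PIL import Image, ImageFilter


def jpeg_compress(img: Image.Image, quality: int) -> Image.Image:
    """Re-encode the image as JPEG at the given quality level."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()


def gaussian_blur(img: Image.Image, radius: float) -> Image.Image:
    """Apply Gaussian blur with the given radius."""
    return img.filter(ImageFilter.GaussianBlur(radius))


# Example degradation suite (hypothetical choices of strength).
DEGRADATIONS = {
    "clean": lambda im: im,
    "jpeg_q50": lambda im: jpeg_compress(im, 50),
    "blur_r2": lambda im: gaussian_blur(im, 2.0),
}


def evaluate(detector, samples):
    """Report detector accuracy under each degradation.

    detector(img) -> probability the image is AI-generated.
    samples: list of (PIL.Image, label) pairs, label 1 = fake.
    """
    results = {}
    for name, degrade in DEGRADATIONS.items():
        correct = 0
        for img, label in samples:
            prob_fake = detector(degrade(img))
            correct += int((prob_fake > 0.5) == bool(label))
        results[name] = correct / len(samples)
    return results
```

The point of such a loop is that the gap between the `clean` score and the degraded scores, rather than the clean score alone, is what characterizes a detector's real-world robustness.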

Cool Papers

Click here to view paper screenshots

Principled Feature Disentanglement for High-Fidelity Unified Brain MRI Synthesis

Authors:Jihoon Cho, Jonghye Woo, Jinah Park

Multisequence Magnetic Resonance Imaging (MRI) provides a more reliable diagnosis in clinical applications through complementary information across sequences. However, in practice, the absence of certain MR sequences is a common problem that can lead to inconsistent analysis results. In this work, we propose a novel unified framework for synthesizing multisequence MR images, called hybrid-fusion GAN (HF-GAN). The fundamental mechanism of this work is principled feature disentanglement, which aligns the design of the architecture with the complexity of the features. A powerful many-to-one stream is constructed for the extraction of complex complementary features, while utilizing parallel, one-to-one streams to process modality-specific information. These disentangled features are dynamically integrated into a common latent space by a channel attention-based fusion module (CAFF) and then transformed via a modality infuser to generate the target sequence. We validated our framework on public datasets of both healthy and pathological brain MRI. Quantitative and qualitative results show that HF-GAN achieves state-of-the-art performance, with our 2D slice-based framework notably outperforming a leading 3D volumetric model. Furthermore, the utilization of HF-GAN for data imputation substantially improves the performance of the downstream brain tumor segmentation task, demonstrating its clinical relevance.


Paper and Project Links

PDF 14 pages, 9 figures

Summary

This paper presents a unified framework for multisequence magnetic resonance imaging (MRI) synthesis based on a hybrid-fusion generative adversarial network (HF-GAN). Through principled feature disentanglement and dynamic feature integration, the framework exploits complementary information across MRI sequences to address the inconsistent analysis results caused by missing MR sequences in practice. Validation on public datasets shows that HF-GAN achieves state-of-the-art performance, with its 2D slice-based framework notably outperforming a leading 3D volumetric model. Furthermore, using HF-GAN for data imputation substantially improves the downstream brain tumor segmentation task, demonstrating its clinical value.

Key Takeaways

  1. The HF-GAN framework synthesizes multisequence MRI, addressing the common problem of missing sequences in practice.
  2. Built on principled feature disentanglement, it uses a powerful many-to-one stream to extract complex complementary features and parallel one-to-one streams to process modality-specific information.
  3. A channel attention-based fusion module (CAFF) dynamically integrates the disentangled features into a common latent space (a hedged sketch of such a module follows this list).
  4. HF-GAN achieves state-of-the-art performance, with its 2D slice-based framework outperforming a leading 3D volumetric model.
  5. HF-GAN works well for data imputation, improving the downstream brain tumor segmentation task.
  6. The MRI images synthesized by HF-GAN are valuable for clinical diagnosis and treatment.
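
The abstract describes CAFF only at a high level, so below is a minimal PyTorch sketch of what a channel attention-based fusion of a shared (many-to-one) stream and a modality-specific (one-to-one) stream might look like. It follows a standard squeeze-and-excitation pattern; the class name, channel sizes, and reduction ratio are assumptions, not the paper's actual design.

```python
# Minimal sketch of a channel attention-based fusion module (CAFF-like).
# This is a generic squeeze-and-excitation design, assumed for
# illustration; the actual HF-GAN architecture may differ.
import torch
import torch.nn as nn


class ChannelAttentionFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global context per channel
        self.mlp = nn.Sequential(            # excitation: per-channel gates in [0, 1]
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, shared: torch.Tensor, specific: torch.Tensor) -> torch.Tensor:
        # Concatenate the many-to-one (shared) and one-to-one (specific)
        # feature maps along channels, then reweight each channel.
        x = torch.cat([shared, specific], dim=1)  # (B, C, H, W)
        b, c, _, _ = x.shape
        gates = self.mlp(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gates  # dynamically weighted fused features


# Usage: fuse = ChannelAttentionFusion(channels=256)
# fused = fuse(shared_feats, specific_feats)  # each input (B, 128, H, W)
```

The gating lets the network emphasize complementary channels from the shared stream or modality-specific channels depending on which target sequence is being synthesized, which is the intuition behind "dynamic integration into a common latent space".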

Cool Papers

Click here to view paper screenshots


Author: Kedreamix
Copyright notice: Unless otherwise stated, all posts on this blog are licensed under CC BY 4.0. Please credit Kedreamix when reposting!