⚠️ All of the summaries below are generated by a large language model and may contain errors; they are for reference only and should be used with caution.
🔴 Please note: never use them in serious academic settings; they are only intended for initial screening before reading the papers!
💗 If you find our project ChatPaperFree helpful, please give us some encouragement! ⭐️ Try it for free on HuggingFace
2025-11-21 Update
Denoising weak lensing mass maps with diffusion model: systematic comparison with generative adversarial network
Authors: Shohei D. Aoyama, Ken Osato, Masato Shirasaki
Removing the shape noise from the observed weak lensing (WL) field, i.e., denoising, enhances the potential of WL by accessing information at small scales where the shape noise dominates without denoising. We utilise two machine learning (ML) models for denoising: generative adversarial network (GAN) and diffusion model (DM). We evaluate the performance of denoising with GAN and DM utilising the large suite of mock WL observations, which serve as the training and test data sets. We apply denoising to 1,000 noisy mass maps with GAN and DM models trained with 39,000 mock observations. Both models can fairly well reproduce the true convergence map on large scales. Then, we measure cosmological statistics: power spectrum, bispectrum, one-point probability distribution function, peak and minima counts, and scattering transform coefficients. We find that DM outperforms GAN in almost all considered statistics and recovers the correct statistics down to small scales. For example, the angular power spectrum can be recovered with DM up to multipoles $\ell \lesssim 6000$, while the noise power spectrum dominates from $\ell \simeq 2000$. We also conduct stress tests on the trained models: denoising maps whose characteristics, e.g., source redshifts, differ from those of the training data. The performance degrades at small scales, but the statistics can still be recovered at large scales. Though the training of DM is more computationally demanding than that of GAN, it offers several advantages: numerically stable training, higher performance in the reconstruction of cosmological statistics, and sampling of multiple realisations once the model is trained. It has been known that DM can generate higher-quality images than GAN in real-world problems; this superiority is confirmed in the WL denoising problem as well.
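The abstract highlights that, once trained, a diffusion model can sample multiple denoised realisations of the same noisy map. As a rough illustration of that point only, the sketch below shows conditional DDPM-style ancestral sampling with an untrained placeholder network `eps_net`; the linear beta schedule, the channel-concatenation conditioning, and all layer sizes are assumptions for the example, not the authors' architecture.

```python
# Minimal sketch of conditional DDPM sampling: why a trained diffusion model
# can produce multiple denoised realisations of one noisy convergence map.
# `eps_net` is an untrained placeholder for the real noise-prediction network;
# the schedule and conditioning scheme are assumptions, not the paper's setup.
import torch
import torch.nn as nn

T = 1000                                         # number of diffusion steps
betas = torch.linspace(1e-4, 2e-2, T)            # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# Placeholder noise predictor: input = [x_t, noisy map] stacked as 2 channels.
eps_net = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

@torch.no_grad()
def sample_denoised(noisy_map, n_samples=4):
    """Draw `n_samples` denoised realisations conditioned on `noisy_map`
    (shape [1, 1, H, W]) via DDPM ancestral sampling."""
    cond = noisy_map.expand(n_samples, -1, -1, -1)   # repeat the conditioning map
    x = torch.randn_like(cond)                       # start from pure noise
    for t in reversed(range(T)):
        eps = eps_net(torch.cat([x, cond], dim=1))   # predict the added noise
        coef = (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise      # stochastic reverse step
    return x                                         # [n_samples, 1, H, W]

# Usage: repeated calls (or a larger n_samples) give distinct realisations
# consistent with the same noisy input, so realisation-to-realisation scatter
# can be propagated into the measured statistics.
```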
Paper and project links
PDF: Submitted to PASJ, 18 pages, 19 figures, 5 tables
Summary
Generative adversarial networks (GAN) and diffusion models (DM) are applied to denoise weak lensing observations, enhancing access to information on small scales. Both models are trained and tested on a large suite of mock weak lensing observations. The diffusion model outperforms GAN in recovering cosmological statistics and reproduces the true convergence map down to smaller scales. Although training the DM is more computationally demanding than training the GAN, its numerically stable training and higher performance in reconstructing cosmological statistics make it advantageous.
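Since the comparison between denoised and true maps is phrased in terms of the angular power spectrum (recovered with DM up to $\ell \lesssim 6000$), a minimal flat-sky estimator sketch is given below; the 10-degree patch size, the logarithmic binning, and the function name `flat_sky_cl` are illustrative assumptions, not the authors' measurement code.

```python
# Minimal sketch (not the paper's pipeline) of a flat-sky angular power
# spectrum estimate, useful for comparing true, noisy, and denoised maps.
import numpy as np

def flat_sky_cl(kappa_map, patch_deg=10.0, n_bins=30):
    """Binned C_ell of a square convergence map in the flat-sky FFT
    approximation.  Patch size and binning are example choices."""
    n_pix = kappa_map.shape[0]
    patch_rad = np.deg2rad(patch_deg)
    pix_rad = patch_rad / n_pix

    # 2D Fourier transform; normalisation gives the power per steradian.
    kappa_ft = np.fft.fftn(kappa_map) * pix_rad**2
    power_2d = np.abs(kappa_ft)**2 / patch_rad**2

    # Multipole of each Fourier mode: ell = 2*pi*|k| with k in cycles/radian.
    freq = np.fft.fftfreq(n_pix, d=pix_rad)
    lx, ly = np.meshgrid(freq, freq, indexing="ij")
    ell_2d = 2.0 * np.pi * np.sqrt(lx**2 + ly**2)

    # Azimuthal average in logarithmic ell bins.
    bins = np.logspace(np.log10(100.0), np.log10(1.0e4), n_bins + 1)
    ell_eff = np.sqrt(bins[:-1] * bins[1:])
    cl = np.array([
        power_2d[(ell_2d >= lo) & (ell_2d < hi)].mean()
        if np.any((ell_2d >= lo) & (ell_2d < hi)) else np.nan
        for lo, hi in zip(bins[:-1], bins[1:])
    ])
    return ell_eff, cl

# Usage: the ratio of C_ell from a denoised map to that of the true map,
# as a function of ell, quantifies down to which scale denoising works.
```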
**Key Takeaways**
1. GAN and DM are used to denoise weak lensing observation data.
2. Both models are trained and tested on mock data sets (a minimal noise-model sketch follows this list).
3. The diffusion model (DM) outperforms GAN in recovering cosmological statistics.
4. DM recovers the true convergence map down to smaller scales.
5. Training DM is more computationally demanding, but it offers numerically stable training and higher reconstruction performance.
6. When maps with characteristics different from the training data, e.g., different source redshifts, are denoised, the performance degrades at small scales, but the statistics can still be recovered at large scales.
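As referenced in takeaway 2, the sketch below illustrates the standard Gaussian shape-noise model commonly used to build noisy mocks from noiseless convergence maps; the survey parameters (pixel scale, galaxy number density, ellipticity dispersion) and the per-component convention for `sigma_e` are placeholder assumptions, not the values used in the paper.

```python
# Minimal sketch of the standard Gaussian shape-noise model for turning a
# noiseless convergence map into a noisy mock.  All parameter values are
# placeholders, not the paper's survey configuration.
import numpy as np

def add_shape_noise(kappa_true, pix_arcmin=1.0, n_gal_arcmin2=20.0,
                    sigma_e=0.26, seed=None):
    """Add uncorrelated Gaussian shape noise to a convergence map.

    Assuming sigma_e is the per-component intrinsic ellipticity dispersion,
    the per-pixel noise is sigma_e / sqrt(N_gal_per_pixel); the prefactor
    changes if sigma_e denotes the total dispersion."""
    rng = np.random.default_rng(seed)
    n_gal_per_pix = n_gal_arcmin2 * pix_arcmin**2
    sigma_pix = sigma_e / np.sqrt(n_gal_per_pix)
    return kappa_true + rng.normal(0.0, sigma_pix, size=kappa_true.shape)

# Usage: pairs (kappa_true, add_shape_noise(kappa_true)) are the kind of
# training/test pairs a denoising network maps between; a stress test
# corresponds to changing, e.g., n_gal_arcmin2 or the source redshift of
# kappa_true relative to the training configuration.
```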