⚠️ All of the summaries below are generated by a large language model and may contain errors; they are for reference only, so use them with caution.
🔴 Please note: never use these summaries in serious academic settings; they are intended only for initial screening before reading a paper!
💗 If you find our project ChatPaperFree helpful, please give us some encouragement! ⭐️ Try it for free on HuggingFace
Updated 2025-09-17
Enhancement Without Contrast: Stability-Aware Multicenter Machine Learning for Glioma MRI Imaging
Authors: Sajad Amiri, Shahram Taeb, Sara Gharibi, Setareh Dehghanfard, Somayeh Sadat Mehrnia, Mehrdad Oveisi, Ilker Hacihaliloglu, Arman Rahmim, Mohammad R. Salmanpour
Gadolinium-based contrast agents (GBCAs) are central to glioma imaging but raise safety, cost, and accessibility concerns. Predicting contrast enhancement from non-contrast MRI using machine learning (ML) offers a safer alternative, as enhancement reflects tumor aggressiveness and informs treatment planning. Yet scanner and cohort variability hinder robust model selection. We propose a stability-aware framework to identify reproducible ML pipelines for multicenter prediction of glioma MRI contrast enhancement. We analyzed 1,446 glioma cases from four TCIA datasets (UCSF-PDGM, UPENN-GB, BRATS-Africa, BRATS-TCGA-LGG). Non-contrast T1WI served as input, with enhancement labels derived from paired post-contrast T1WI. Using PyRadiomics under IBSI standards, 108 features were extracted and combined with 48 dimensionality reduction methods and 25 classifiers, yielding 1,200 pipelines. In rotational validation, models were trained on three datasets and tested on the fourth. Cross-validation prediction accuracies ranged from 0.91 to 0.96, and external testing achieved 0.87 (UCSF-PDGM), 0.98 (UPENN-GB), and 0.95 (BRATS-Africa), averaging 0.93. F1, precision, and recall were stable (0.87 to 0.96), while ROC-AUC varied more widely (0.50 to 0.82), reflecting cohort heterogeneity. The pipeline coupling MI with ETr consistently ranked highest, balancing accuracy and stability. This framework demonstrates that stability-aware model selection enables reliable prediction of contrast enhancement from non-contrast glioma MRI, reducing reliance on GBCAs and improving generalizability across centers. It provides a scalable template for reproducible ML in neuro-oncology and beyond.
Paper and project links
PDF: 14 pages, 1 figure, and 6 tables
Summary: This study proposes a stability-aware framework for identifying reproducible machine learning pipelines that predict glioma contrast enhancement from non-contrast MRI. Applying the framework to multicenter MRI data yields high prediction accuracy with good stability, offering a new way to reduce reliance on gadolinium-based contrast agents and improve the reliability of cross-center imaging analysis.
Key Takeaways:
- Machine learning is used to predict glioma contrast enhancement from non-contrast MRI; enhancement reflects tumor aggressiveness and informs treatment planning.
- Scanner and cohort variability across centers complicates model selection; a stability-aware framework is proposed to improve the reproducibility of predictions (a runnable sketch of the rotational validation follows this list).
- Across the four datasets, external-test accuracy averaged 0.93 (0.87 on UCSF-PDGM, 0.98 on UPENN-GB, 0.95 on BRATS-Africa), with F1, precision, and recall stable between 0.87 and 0.96, reflecting the models' robustness and ability to generalize across cohorts.
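To make the validation scheme concrete, below is a minimal Python sketch of the rotational (leave-one-dataset-out) protocol with one representative pipeline. Here "MI" is read as mutual-information feature selection and "ETr" as an extra-trees classifier; these expansions, the toy data, and the specific scikit-learn components are illustrative assumptions, not the authors' exact 48 x 25 grid of 1,200 pipelines.

```python
# A minimal sketch (not the authors' code) of rotational leave-one-dataset-out
# validation, assuming features were already extracted with PyRadiomics into
# per-cohort matrices of 108 radiomic features.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score

# Toy stand-ins for the four TCIA cohorts: X is (n_cases, 108), y is the
# binary enhancement label derived from paired post-contrast T1WI.
rng = np.random.default_rng(0)
cohorts = {
    name: (rng.normal(size=(50, 108)), rng.integers(0, 2, size=50))
    for name in ["UCSF-PDGM", "UPENN-GB", "BRATS-Africa", "BRATS-TCGA-LGG"]
}

# One representative pipeline: MI feature selection feeding an ETr classifier.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("mi", SelectKBest(mutual_info_classif, k=20)),
    ("etr", ExtraTreesClassifier(n_estimators=200, random_state=0)),
])

# Rotational validation: train on three cohorts, test on the held-out fourth.
for held_out, (X_test, y_test) in cohorts.items():
    X_train = np.vstack([X for n, (X, _) in cohorts.items() if n != held_out])
    y_train = np.concatenate([y for n, (_, y) in cohorts.items() if n != held_out])
    pipeline.fit(X_train, y_train)
    acc = accuracy_score(y_test, pipeline.predict(X_test))
    print(f"held out {held_out}: accuracy = {acc:.2f}")
```

A stability-aware ranking would then prefer pipelines whose held-out accuracies stay tightly clustered across rotations, rather than the single best score on any one cohort.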



Comparing Conditional Diffusion Models for Synthesizing Contrast-Enhanced Breast MRI from Pre-Contrast Images
Authors: Sebastian Ibarra, Javier del Riego, Alessandro Catanese, Julian Cuba, Julian Cardona, Nataly Leon, Jonathan Infante, Karim Lekadir, Oliver Diaz, Richard Osuala
Dynamic contrast-enhanced (DCE) MRI is essential for breast cancer diagnosis and treatment. However, its reliance on contrast agents introduces safety concerns, contraindications, increased cost, and workflow complexity. To this end, we present pre-contrast conditioned denoising diffusion probabilistic models to synthesize DCE-MRI, introducing, evaluating, and comparing a total of 22 generative model variants in both single-breast and full breast settings. Towards enhancing lesion fidelity, we introduce both tumor-aware loss functions and explicit tumor segmentation mask conditioning. Using a public multicenter dataset and comparing to respective pre-contrast baselines, we observe that subtraction image-based models consistently outperform post-contrast-based models across five complementary evaluation metrics. Apart from assessing the entire image, we also separately evaluate the region of interest, where both tumor-aware losses and segmentation mask inputs improve evaluation metrics. The latter notably enhance qualitative results capturing contrast uptake, albeit assuming access to tumor localization inputs that are not guaranteed to be available in screening settings. A reader study involving 2 radiologists and 4 MRI technologists confirms the high realism of the synthetic images, indicating an emerging clinical potential of generative contrast-enhancement. We share our codebase at https://github.com/sebastibar/conditional-diffusion-breast-MRI.
Paper and project links
PDF: 13 pages, 5 figures; submitted and accepted to the MICCAI Deepbreath workshop 2025
Summary:
This paper presents pre-contrast-conditioned denoising diffusion probabilistic models for synthesizing dynamic contrast-enhanced MRI (DCE-MRI), addressing the safety concerns, contraindications, cost, and workflow complexity that contrast agents introduce in breast cancer diagnosis and treatment. A total of 22 generative model variants are introduced and compared in single-breast and full-breast settings. To enhance lesion fidelity, tumor-aware loss functions and explicit tumor segmentation mask conditioning are introduced. Experiments on a public multicenter dataset show that subtraction-image-based models outperform post-contrast-based models, with the region of interest also evaluated separately. Models combining tumor-aware losses and segmentation mask inputs notably improve the qualitative capture of contrast uptake, although they assume tumor localization inputs that may be unavailable in screening settings. A reader study with two radiologists and four MRI technologists confirms the high realism of the synthetic images, indicating the clinical potential of generative contrast enhancement.
Key Takeaways:
- DCE-MRI is essential for breast cancer diagnosis and treatment, but its reliance on contrast agents raises safety, contraindication, cost, and workflow concerns.
- Pre-contrast-conditioned denoising diffusion probabilistic models are presented for synthesizing DCE-MRI, with 22 generative model variants evaluated and compared.
- Tumor-aware loss functions and explicit tumor segmentation mask conditioning are introduced to improve lesion fidelity.
- Experiments show that subtraction-image-based models outperform post-contrast-based models, with the region of interest also evaluated separately.
- Combining tumor-aware losses and segmentation mask inputs improves the qualitative capture of contrast uptake (see the sketch after this list).
- A reader study confirms the high realism of the synthetic images, indicating the clinical potential of generative contrast enhancement.
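As a rough illustration of the two lesion-fidelity mechanisms, the PyTorch sketch below shows one plausible form of a tumor-aware loss (upweighting lesion voxels in the DDPM noise-prediction objective) and mask conditioning via channel concatenation. The weighting scheme and the `tumor_weight` value are assumptions for illustration, not the paper's exact formulation.

```python
# A rough sketch of a tumor-aware DDPM training loss, assuming the standard
# noise-prediction objective; the upweighting scheme and `tumor_weight` are
# illustrative assumptions, not the paper's exact loss.
import torch

def tumor_aware_loss(pred_noise: torch.Tensor,
                     true_noise: torch.Tensor,
                     tumor_mask: torch.Tensor,
                     tumor_weight: float = 5.0) -> torch.Tensor:
    """Weighted MSE between predicted and sampled noise.

    pred_noise, true_noise: (B, C, H, W) denoiser output and target noise.
    tumor_mask: (B, 1, H, W) binary (0/1 float) lesion segmentation mask.
    """
    per_voxel = (pred_noise - true_noise) ** 2
    # Weight map: 1.0 everywhere, `tumor_weight` inside the lesion.
    weights = (1.0 + (tumor_weight - 1.0) * tumor_mask).expand_as(per_voxel)
    return (weights * per_voxel).sum() / weights.sum()

# Explicit mask conditioning can be as simple as concatenating the mask with
# the pre-contrast image along the channel axis before the denoiser:
#   x_in = torch.cat([x_noisy, pre_contrast, tumor_mask], dim=1)
```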




RealRAG: Retrieval-augmented Realistic Image Generation via Self-reflective Contrastive Learning
Authors: Yuanhuiyi Lyu, Xu Zheng, Lutao Jiang, Yibo Yan, Xin Zou, Huiyu Zhou, Linfeng Zhang, Xuming Hu
Recent text-to-image generative models, e.g., Stable Diffusion V3 and Flux, have achieved notable progress. However, these models are strongly restricted by their limited knowledge, i.e., their own fixed parameters trained on closed datasets. This leads to significant hallucinations or distortions when facing fine-grained and unseen novel real-world objects, e.g., the appearance of the Tesla Cybertruck. To this end, we present the first real-object-based retrieval-augmented generation framework (RealRAG), which augments fine-grained and unseen novel object generation by learning and retrieving real-world images to overcome the knowledge gaps of generative models. Specifically, to integrate missing memory for unseen novel object generation, we train a reflective retriever by self-reflective contrastive learning, which injects the generator’s knowledge into the self-reflective negatives, ensuring that the retrieved augmented images compensate for the model’s missing knowledge. Furthermore, the real-object-based framework integrates fine-grained visual knowledge for the generative models, tackling the distortion problem and improving the realism of fine-grained object generation. Our RealRAG is superior in its modular application to all types of state-of-the-art text-to-image generative models and also delivers remarkable performance boosts with all of them, such as a 16.18% FID gain with the auto-regressive model on the Stanford Car benchmark.
Paper and project links
PDF: Accepted to ICML 2025
Summary:
To address the knowledge limitations of text-to-image generative models, this paper proposes RealRAG, the first real-object-based retrieval-augmented generation framework. RealRAG learns and retrieves real-world images to fill the knowledge gaps of generative models; in particular, a reflective retriever is trained via self-reflective contrastive learning, which injects the generator's knowledge into the self-reflective negatives. RealRAG also integrates fine-grained visual knowledge, mitigating distortion and improving the realism of fine-grained object generation. It applies modularly to all types of state-of-the-art text-to-image generative models and delivers significant gains, e.g., a 16.18% FID improvement with an auto-regressive model on the Stanford Car benchmark.
Key Takeaways
- Text-to-image generative models such as Stable Diffusion V3 and Flux are limited by their fixed, closed-dataset knowledge, producing hallucinations or distortions for fine-grained, unseen real-world objects.
- RealRAG, the first real-object-based retrieval-augmented generation framework, is proposed to overcome this limitation.
- RealRAG learns to retrieve real-world images that strengthen generation of objects the model has not seen.
- A reflective retriever is trained via self-reflective contrastive learning, injecting the generator's knowledge into self-reflective negatives (see the sketch after this list).
- RealRAG integrates fine-grained visual knowledge, improving realism and mitigating distortion.
- RealRAG applies modularly to a wide range of state-of-the-art text-to-image generative models and delivers notable performance gains.
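For intuition, here is a minimal PyTorch sketch of an InfoNCE-style objective in which the generator's own renderings serve as the self-reflective negatives that push the retriever toward real images. The encoder interfaces, temperature, and batch construction are assumptions for illustration, not the RealRAG implementation.

```python
# A minimal sketch of self-reflective contrastive learning for the retriever,
# assuming an InfoNCE-style loss where the generator's own outputs for each
# prompt act as "self-reflective" negatives. All interfaces here are
# illustrative assumptions, not the RealRAG codebase.
import torch
import torch.nn.functional as F

def self_reflective_contrastive_loss(query_emb: torch.Tensor,
                                     real_emb: torch.Tensor,
                                     generated_emb: torch.Tensor,
                                     tau: float = 0.07) -> torch.Tensor:
    """query_emb: (B, D) retriever embeddings of the text prompts.
    real_emb: (B, D) embeddings of matching real-world images (positives).
    generated_emb: (B, D) embeddings of the generator's own renderings for
        the same prompts (self-reflective negatives)."""
    q = F.normalize(query_emb, dim=-1)
    pos = F.normalize(real_emb, dim=-1)
    neg = F.normalize(generated_emb, dim=-1)
    pos_sim = (q * pos).sum(dim=-1, keepdim=True) / tau   # (B, 1)
    neg_sim = (q @ neg.T) / tau                           # (B, B)
    logits = torch.cat([pos_sim, neg_sim], dim=1)         # (B, 1 + B)
    # The positive always sits at index 0 for each query.
    targets = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, targets)
```

Minimizing this loss drives the retriever to score real reference images above the generator's own (possibly hallucinated) renderings, so retrieval compensates exactly where the generator's knowledge is weakest.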



