
Unsupervised / Semi-Supervised / Contrastive Learning


⚠️ All of the summaries below are generated by a large language model and may contain errors; they are for reference only, so use them with caution.
🔴 Note: never use them in serious academic settings; they are only meant as a first-pass screen before reading a paper!
💗 If you find our project ChatPaperFree helpful, please give us some encouragement! ⭐️ Try it for free on HuggingFace

Updated 2025-11-27

Patch-Level Glioblastoma Subregion Classification with a Contrastive Learning-Based Encoder

Authors: Juexin Zhang, Qifeng Zhong, Ying Weng, Ke Chen

The significant molecular and pathological heterogeneity of glioblastoma, an aggressive brain tumor, complicates diagnosis and patient stratification. While traditional histopathological assessment remains the standard, deep learning offers a promising path toward objective and automated analysis of whole slide images. For the BraTS-Path 2025 Challenge, we developed a method that fine-tunes a pre-trained Vision Transformer (ViT) encoder with a dedicated classification head on the official training dataset. Our model’s performance on the online validation set, evaluated via the Synapse platform, yielded a Matthews Correlation Coefficient (MCC) of 0.7064 and an F1-score of 0.7676. On the final test set, the model achieved an MCC of 0.6509 and an F1-score of 0.5330, which secured our team second place in the BraTS-Pathology 2025 Challenge. Our results establish a solid baseline for ViT-based histopathological analysis, and future efforts will focus on bridging the performance gap observed on the unseen validation data.


Paper and Project Links

PDF Accepted by the International Brain Tumor Segmentation (BraTS) challenge organized at MICCAI 2025 conference

Summary
Glioblastoma is an aggressive brain tumor whose marked molecular and pathological heterogeneity complicates diagnosis and patient stratification. For the BraTS-Path 2025 Challenge, the team fine-tuned a pre-trained Vision Transformer (ViT) encoder with a dedicated classification head on the official training dataset. Evaluated on the Synapse platform, the model reached a Matthews Correlation Coefficient (MCC) of 0.7064 and an F1-score of 0.7676 on the online validation set, and an MCC of 0.6509 and an F1-score of 0.5330 on the final test set, earning second place in the BraTS-Pathology 2025 Challenge. The results establish a solid baseline for ViT-based histopathological analysis.

Key Takeaways

  1. Glioblastoma exhibits marked molecular and pathological heterogeneity, making diagnosis and patient stratification challenging.
  2. The team fine-tuned a pre-trained Vision Transformer (ViT) encoder for image analysis.
  3. The model performed well on the BraTS-Path 2025 online validation set, achieving high MCC and F1 scores.
  4. The model secured second place on the final test set.
  5. The work establishes a solid baseline for ViT-based histopathological analysis.
  6. Future work will focus on narrowing the performance gap observed on unseen validation data.
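The reported MCC and F1 scores can be made concrete with a small sketch. The snippet below is an illustrative NumPy implementation of the standard multiclass MCC (Gorodkin's R_K) and macro-averaged F1 computed from a confusion matrix; it is not taken from the authors' code, and the function names are our own.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # cm[i, j] counts samples of true class i predicted as class j
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def mcc(cm):
    # multiclass MCC (Gorodkin's R_K statistic) from the confusion matrix
    c = np.trace(cm)        # correctly classified samples
    s = cm.sum()            # total samples
    t = cm.sum(axis=1)      # true counts per class
    p = cm.sum(axis=0)      # predicted counts per class
    num = c * s - t @ p
    den = np.sqrt((s**2 - p @ p) * (s**2 - t @ t))
    return num / den if den else 0.0

def macro_f1(cm):
    # unweighted mean of per-class F1 scores
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)
    recall = tp / np.maximum(cm.sum(axis=1), 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return f1.mean()
```

A perfect prediction yields MCC = 1.0 and macro F1 = 1.0; both degrade as off-diagonal entries accumulate, with MCC penalizing class-count imbalance between rows and columns.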

Cool Papers

Click here to view paper screenshots

History-Augmented Contrastive Meta-Learning for Unsupervised Blind Super-Resolution of Planetary Remote Sensing Images

Authors: Huijia Zhao, Jie Lu, Yunqing Jiang, Xiao-Ping Lu, Kaichang Di

Planetary remote sensing images are affected by diverse and unknown degradations caused by imaging environments and hardware constraints. These factors limit image quality and hinder supervised blind super-resolution due to the lack of ground-truth images. This work presents History-Augmented Contrastive Blind Super-Resolution (HACBSR), an unsupervised framework for blind super-resolution that operates without ground-truth images and external kernel priors. HACBSR comprises two components: (1) a contrastive kernel sampling mechanism with kernel similarity control to mitigate distribution bias from Gaussian sampling, and (2) a history-augmented contrastive learning that uses historical models to generate negative samples to enable less greedy optimization and to induce strong convexity without ground-truth. A convergence analysis of the history-augmented contrastive learning is given in the Appendix. To support evaluation in planetary applications, we introduce Ceres-50, a dataset with diverse geological features and simulated degradation patterns. Experiments show that HACBSR achieves competitive performance compared with state-of-the-art unsupervised methods across multiple upscaling factors. The code is available at https://github.com/2333repeat/HACBSR, and the dataset is available at https://github.com/2333repeat/Ceres-50.


Paper and Project Links

PDF 13 pages

Summary

This paper proposes History-Augmented Contrastive Blind Super-Resolution (HACBSR), an unsupervised blind super-resolution framework for planetary remote sensing images that requires no ground-truth images or external kernel priors. The framework has two parts: a contrastive kernel sampling mechanism and history-augmented contrastive learning. Kernel similarity control mitigates the distribution bias introduced by Gaussian sampling, while historical models generate negative samples, enabling less greedy optimization and inducing strong convexity without ground truth. Experiments show that HACBSR achieves performance competitive with state-of-the-art unsupervised methods across multiple upscaling factors.

Key Takeaways

  1. Planetary remote sensing images are degraded by imaging environments and hardware constraints, which limits image quality and hinders supervised blind super-resolution.
  2. HACBSR is an unsupervised blind super-resolution framework that needs no ground-truth images or external kernel priors.
  3. Its contrastive kernel sampling mechanism uses kernel similarity control to mitigate distribution bias.
  4. History-augmented contrastive learning uses historical models to generate negative samples, enabling less greedy optimization.
  5. HACBSR achieves performance competitive with state-of-the-art unsupervised methods across multiple upscaling factors.
  6. The Ceres-50 dataset, with diverse geological features and simulated degradation patterns, supports evaluation in planetary applications.
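HACBSR's actual loss and its convergence analysis are in the paper's appendix. As a rough, assumption-laden sketch of the core idea only (drawing contrastive negatives from a bank of outputs produced by historical model snapshots), one can write an InfoNCE-style loss as below; the function name, the temperature value, and the flat embedding shapes are all our illustrative choices, not HACBSR's implementation.

```python
import numpy as np

def info_nce_with_history(anchor, positive, history_bank, tau=0.1):
    """InfoNCE-style loss where the negatives are embeddings taken
    from a bank of outputs produced by historical model snapshots."""
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    a, p = norm(anchor), norm(positive)      # (B, D) L2-normalized
    negs = norm(history_bank)                # (K, D) historical negatives
    pos_sim = np.sum(a * p, axis=1) / tau    # (B,)  anchor-positive similarity
    neg_sim = a @ negs.T / tau               # (B, K) anchor-negative similarities
    # cross-entropy with the positive placed at logit index 0
    logits = np.concatenate([pos_sim[:, None], neg_sim], axis=1)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits[:, 0] - np.log(np.exp(logits).sum(axis=1))
    return -log_prob.mean()
```

When the positive is close to the anchor and the historical negatives are not, the loss is small; as the positive drifts away, the loss grows, which is the pressure that pulls the current model's output toward the target and away from its own past outputs.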

Cool Papers

Click here to view paper screenshots

Cross-Contrastive Clustering for Multimodal Attributed Graphs with Dual Graph Filtering

Authors: Haoran Zheng, Renchi Yang, Hongtao Wang, Jianliang Xu

Multimodal Attributed Graphs (MMAGs) are an expressive data model for representing the complex interconnections among entities that associate attributes from multiple data modalities (text, images, etc.). Clustering over such data finds numerous practical applications in real scenarios, including social community detection, medical data analytics, etc. However, as revealed by our empirical studies, existing multi-view clustering solutions largely rely on the high correlation between attributes across various views and overlook the unique characteristics (e.g., low modality-wise correlation and intense feature-wise noise) of multimodal attributes output by large pre-trained language and vision models in MMAGs, leading to suboptimal clustering performance. Inspired by foregoing empirical observations and our theoretical analyses with graph signal processing, we propose the Dual Graph Filtering (DGF) scheme, which innovatively incorporates a feature-wise denoising component into node representation learning, thereby effectively overcoming the limitations of traditional graph filters adopted in the extant multi-view graph clustering approaches. On top of that, DGF includes a tri-cross contrastive training strategy that employs instance-level contrastive learning across modalities, neighborhoods, and communities for learning robust and discriminative node representations. Our comprehensive experiments on eight benchmark MMAG datasets exhibit that DGF is able to outperform a wide range of state-of-the-art baselines consistently and significantly in terms of clustering quality measured against ground-truth labels.


Paper and Project Links

PDF Accepted by SIGKDD 2026. The code is available at https://github.com/HaoranZ99/DGF

Summary

Multimodal Attributed Graphs (MMAGs) are an expressive data model for the complex interconnections among entities whose attributes come from multiple data modalities (text, images, etc.). To address the shortcomings of existing multi-view clustering solutions on MMAGs, the authors propose the Dual Graph Filtering (DGF) scheme, which incorporates a feature-wise denoising component to overcome the limitations of traditional graph filters. DGF further includes a tri-cross contrastive training strategy that performs instance-level contrastive learning across modalities, neighborhoods, and communities to learn robust node representations. Experiments on eight benchmark MMAG datasets show that DGF consistently and significantly outperforms a wide range of state-of-the-art baselines in clustering quality.

Key Takeaways

  1. MMAGs model the complex interconnections of multimodal data and find wide use in real scenarios such as social community detection and medical data analytics.
  2. Existing multi-view clustering solutions fall short on MMAGs because they overlook the unique characteristics of multimodal attributes, such as low modality-wise correlation and intense feature-wise noise.
  3. DGF incorporates a feature-wise denoising component, overcoming the limitations of traditional graph filters and improving node representations.
  4. DGF's tri-cross contrastive training strategy performs instance-level contrastive learning across modalities, neighborhoods, and communities, strengthening the robustness and discriminability of node representations.
  5. Experiments on eight benchmark MMAG datasets show that DGF significantly outperforms existing state-of-the-art methods in clustering quality.
  6. DGF is particularly suited to multimodal attributes produced by large pre-trained language and vision models.
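To illustrate the general idea of filtering along both the node graph and a feature-similarity graph — not the paper's actual DGF operator — here is a toy NumPy sketch: a low-pass smoothing pass over the symmetrically normalized node adjacency, followed by a low-pass pass over a kNN graph built on feature correlations (the feature-wise denoising direction). The hyperparameters and the kNN construction are our assumptions.

```python
import numpy as np

def sym_norm_adj(A):
    # symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def dual_graph_filter(X, A, k_node=2, k_feat=2, knn=2):
    """Toy dual filtering: smooth node features over the node graph,
    then denoise feature-wise over a correlation-based kNN feature graph."""
    An = sym_norm_adj(A)
    H = X.copy()
    for _ in range(k_node):          # node-wise low-pass: H <- An @ H
        H = An @ H
    # build a feature-feature graph from pairwise feature correlations
    C = np.corrcoef(H.T)
    F = np.zeros_like(C)
    for j in range(C.shape[0]):
        idx = np.argsort(-C[j])[:knn + 1]  # top-k most correlated features
        F[j, idx] = 1.0
    F = np.maximum(F, F.T)           # symmetrize the kNN graph
    Fn = sym_norm_adj(F)
    for _ in range(k_feat):          # feature-wise low-pass: H <- H @ Fn
        H = H @ Fn
    return H
```

The left multiplication averages each node's features over its graph neighborhood, while the right multiplication averages each feature over its most correlated peer features, attenuating per-feature noise that node-wise smoothing alone cannot reach.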

Cool Papers

Click here to view paper screenshots


Author: Kedreamix
Copyright notice: Unless otherwise stated, all posts on this blog are licensed under CC BY 4.0. Please credit Kedreamix when reposting!