⚠️ All summaries below are generated by a large language model and may contain errors. They are provided for reference only; use with caution.
🔴 Note: do not use these summaries for serious academic purposes. They are intended only for initial screening before reading a paper!
💗 If you find our project ChatPaperFree helpful, please give us some encouragement! ⭐️ Try it for free on HuggingFace
Updated 2025-09-28
Multimodal Deep Learning for Phyllodes Tumor Classification from Ultrasound and Clinical Data
Authors:Farhan Fuad Abir, Abigail Elliott Daly, Kyle Anderman, Tolga Ozmen, Laura J. Brattain
Phyllodes tumors (PTs) are rare fibroepithelial breast lesions that are difficult to classify preoperatively due to their radiological similarity to benign fibroadenomas. This often leads to unnecessary surgical excisions. To address this, we propose a multimodal deep learning framework that integrates breast ultrasound (BUS) images with structured clinical data to improve diagnostic accuracy. We developed a dual-branch neural network that extracts and fuses features from ultrasound images and patient metadata from 81 subjects with confirmed PTs. Class-aware sampling and subject-stratified 5-fold cross-validation were applied to prevent class imbalance and data leakage. The results show that our proposed multimodal method outperforms unimodal baselines in classifying benign versus borderline/malignant PTs. Among six image encoders, ConvNeXt and ResNet18 achieved the best performance in the multimodal setting, with AUC-ROC scores of 0.9427 and 0.9349, and F1-scores of 0.6720 and 0.7294, respectively. This study demonstrates the potential of multimodal AI to serve as a non-invasive diagnostic tool, reducing unnecessary biopsies and improving clinical decision-making in breast tumor management.
Paper and project links
PDF IEEE-EMBS International Conference on Body Sensor Networks (IEEE-EMBS BSN 2025)
Summary
A multimodal deep learning framework combining breast ultrasound images with structured clinical data improves preoperative diagnostic accuracy for phyllodes tumors (PTs). The study proposes a dual-branch neural network that extracts and fuses features from ultrasound images and patient metadata. Class-aware sampling and subject-stratified cross-validation address class imbalance and data leakage. Results show that the multimodal method outperforms unimodal baselines in distinguishing benign from borderline or malignant phyllodes tumors. Among the image encoders tested, ConvNeXt and ResNet18 performed best in the multimodal setting, with AUC-ROC scores of 0.9427 and 0.9349 and F1 scores of 0.6720 and 0.7294, respectively. The study demonstrates the potential of multimodal AI as a non-invasive diagnostic tool that could reduce unnecessary biopsies and improve clinical decision-making in breast tumor management.
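The dual-branch design described above can be sketched as follows. This is a minimal illustration, not the authors' code: the class and layer sizes are assumptions, and the small stand-in CNN takes the place of the stronger encoders (e.g. ConvNeXt, ResNet18) the paper evaluates. The key idea shown is late fusion: image features and clinical-metadata features are computed by separate branches and concatenated before a binary classification head.

```python
# Hypothetical sketch of a dual-branch image + clinical-data fusion network.
# Names and dimensions are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class DualBranchFusionNet(nn.Module):
    def __init__(self, num_clinical_features: int,
                 img_feat_dim: int = 64, clin_feat_dim: int = 16):
        super().__init__()
        # Image branch: a tiny stand-in CNN; the paper swaps in pretrained
        # encoders such as ConvNeXt or ResNet18 at this point.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, img_feat_dim), nn.ReLU(),
        )
        # Clinical branch: a small MLP over structured patient metadata.
        self.clinical_branch = nn.Sequential(
            nn.Linear(num_clinical_features, clin_feat_dim), nn.ReLU(),
        )
        # Late fusion by concatenation, then a benign vs. borderline/malignant head.
        self.classifier = nn.Linear(img_feat_dim + clin_feat_dim, 2)

    def forward(self, image, clinical):
        fused = torch.cat([self.image_branch(image),
                           self.clinical_branch(clinical)], dim=1)
        return self.classifier(fused)

# One forward pass on dummy data: a batch of 2 grayscale ultrasound images
# (1x64x64) with 5 clinical features each yields 2 logits per sample.
model = DualBranchFusionNet(num_clinical_features=5)
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 5))
print(tuple(logits.shape))  # (2, 2)
```

Concatenation-based late fusion is one of several plausible fusion strategies; the abstract does not specify the exact fusion operator, so this choice is an assumption.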
Key insights
- Phyllodes tumors are difficult to classify radiologically before surgery and are often confused with benign fibroadenomas, leading to unnecessary surgical excision.
- A multimodal deep learning framework is proposed that combines breast ultrasound images with structured clinical data to improve diagnostic accuracy.
- A dual-branch neural network extracts and fuses features from ultrasound images and patient metadata.
- Class-aware sampling and subject-stratified cross-validation address class imbalance and data leakage.
- The multimodal method outperforms unimodal baselines, giving more accurate discrimination of benign versus borderline or malignant phyllodes tumors.
- ConvNeXt and ResNet18 perform best in the multimodal setting, with AUC-ROC and F1 scores indicating high diagnostic performance.
Click here to view paper screenshots





