⚠️ All of the summaries below are generated by a large language model and may contain errors; they are for reference only, so use them with caution.
🔴 Note: never use these summaries in serious academic settings; they are only meant as a first-pass screen before reading the papers!
💗 If you find our project ChatPaperFree helpful, please give us some encouragement! ⭐️ Try it for free on HuggingFace
Updated 2025-09-18
MSR-Codec: A Low-Bitrate Multi-Stream Residual Codec for High-Fidelity Speech Generation with Information Disentanglement
Authors:Jingyu Li, Guangyan Zhang, Zhen Ye, Yiwen Guo
Audio codecs are a critical component of modern speech generation systems. This paper introduces a low-bitrate, multi-scale residual codec that encodes speech into four distinct streams: semantic, timbre, prosody, and residual. This architecture achieves high-fidelity speech reconstruction at competitive low bitrates while demonstrating an inherent ability for information disentanglement. We construct a two-stage language model for text-to-speech (TTS) synthesis using this codec, which, despite its lightweight design and minimal data requirements, achieves a state-of-the-art Word Error Rate (WER) and superior speaker similarity compared to several larger models. Furthermore, the codec’s design proves highly effective for voice conversion, enabling independent manipulation of speaker timbre and prosody.
Paper and project links
Summary
This paper introduces a low-bitrate, multi-scale residual audio codec that encodes speech into four separate streams: semantic, timbre, prosody, and residual. The architecture achieves high-fidelity speech reconstruction at competitive low bitrates and exhibits an inherent ability for information disentanglement. A two-stage text-to-speech (TTS) language model built on this codec reaches a state-of-the-art Word Error Rate (WER) and excellent speaker similarity despite its lightweight design and minimal data requirements, giving it an edge over several larger models. Moreover, the codec's design is highly effective for voice conversion, enabling independent manipulation of speaker timbre and prosody.
Key Takeaways
- Introduces a low-bitrate, multi-scale residual audio codec.
- The codec separates speech into four distinct streams: semantic, timbre, prosody, and residual (see the sketch after this list).
- Achieves high-fidelity speech reconstruction at competitive low bitrates.
- A TTS model built on this codec reaches a state-of-the-art Word Error Rate (WER) and superior speaker similarity.
- Compared with larger models, it stands out for its lightweight design and minimal data requirements.
- The codec's design is highly effective for voice conversion tasks.
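The four-stream design reads like residual vector quantization with stream-specific roles. The following minimal sketch, assuming PyTorch and frame-level quantization for all four streams, illustrates that decomposition; the class names, dimensions, and the treatment of timbre as a frame-level stream are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of multi-stream residual coding: each stream quantizes
# what the previous streams failed to explain. Names and shapes are
# assumptions for illustration, not the paper's actual architecture.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup (simplified: no training losses)."""
    def __init__(self, codebook_size: int, dim: int):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, x):  # x: (batch, time, dim)
        # Squared distance from every frame to every codeword.
        d = (x.unsqueeze(-2) - self.codebook.weight).pow(2).sum(-1)
        codes = d.argmin(dim=-1)             # discrete token ids, (batch, time)
        return self.codebook(codes), codes

class MultiStreamResidualCodec(nn.Module):
    def __init__(self, dim=256, codebook_size=1024, n_streams=4):
        super().__init__()
        self.quantizers = nn.ModuleList(
            [VectorQuantizer(codebook_size, dim) for _ in range(n_streams)])

    def encode(self, feats):                 # feats: (batch, time, dim)
        residual, streams = feats, []
        for q in self.quantizers:            # semantic, timbre, prosody, residual
            quantized, codes = q(residual)
            streams.append(codes)
            residual = residual - quantized  # pass the leftover downstream
        return streams

    def decode(self, streams):
        # Reconstruction is the sum of per-stream codeword embeddings.
        return sum(q.codebook(codes)
                   for q, codes in zip(self.quantizers, streams))
```

Because each stream only models what earlier streams missed, swapping or regenerating one stream (e.g., the timbre codes) leaves the others intact, which is the property the voice-conversion results exploit.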
Click here to view paper screenshots




LTA-thinker: Latent Thought-Augmented Training Framework for Large Language Models on Complex Reasoning
Authors:Jiaqi Wang, Binquan Ji, Haibo Luo, Yiyang Qi, Ruiting Li, Huiyan Wang, Yuantao Han, Cangyi Yang, jiaxu Zhang, Feiliang Ren
Complex reasoning in Large Language Models can be dynamically optimized using Test-Time Scaling (TTS) to mitigate overthinking. While methods such as Coconut, SoftCoT, and its variants are effective for continuous latent-space inference, the core bottleneck still lies in the efficient generation and utilization of high-quality Latent Thought. Drawing on the theory of SoftCoT++ that a larger variance in the generated Latent Thought distribution more closely approximates the golden-truth distribution, we propose a Latent Thought-Augmented Training Framework, LTA-Thinker, which improves distributional variance and enhances reasoning performance from two perspectives. First, LTA-Thinker constructs a Latent Thought generation architecture based on a learnable prior. This architecture aims to increase the variance of the generated Latent Thought vectors in order to simplify the overall structure and raise the performance ceiling. Second, LTA-Thinker introduces a distribution-based directional optimization paradigm that jointly constrains both distribution locality and distribution scale. This mechanism improves information efficiency and computational cost through a multi-objective co-training strategy that combines the standard Supervised Fine-Tuning (SFT) loss with two novel losses: a Semantic Alignment Loss, which uses KL divergence to ensure that the Latent Thought is highly relevant to the semantics of the question, and a Reasoning Focus Loss, which uses a contrastive learning mechanism to guide the model to focus on the most critical reasoning steps. Experiments show that LTA-Thinker achieves state-of-the-art (SOTA) performance among various baselines and demonstrates a higher performance ceiling and better scaling effects.
Paper and project links
Summary
This paper explores dynamically optimizing large language models with Test-Time Scaling (TTS) to reduce overthinking. It reviews methods such as Coconut, SoftCoT, and their variants for continuous latent-space reasoning, and proposes LTA-Thinker, a Latent Thought-Augmented training framework designed to increase distributional variance and strengthen reasoning performance. The framework builds a Latent Thought generation architecture on a learnable prior and introduces a distribution-based directional optimization paradigm, improving information efficiency and computational cost. Experiments show that LTA-Thinker achieves the best performance among various baselines, with a higher performance ceiling and better scaling effects.
Key Takeaways
- Complex reasoning in large language models can be dynamically optimized with Test-Time Scaling (TTS).
- SoftCoT++ theory suggests that a larger variance in the Latent Thought distribution more closely approximates the golden-truth distribution.
- The LTA-Thinker training framework aims to increase distributional variance and enhance reasoning performance.
- LTA-Thinker builds a Latent Thought generation architecture on a learnable prior to simplify the overall structure and raise the performance ceiling.
- LTA-Thinker introduces a distribution-based directional optimization paradigm that constrains distribution locality and scale through a multi-objective co-training strategy.
- LTA-Thinker combines the standard Supervised Fine-Tuning (SFT) loss with two novel losses (Semantic Alignment Loss and Reasoning Focus Loss) to improve information efficiency and computational cost (see the loss sketch after this list).
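The multi-objective co-training reduces to a weighted sum of three terms. A minimal sketch, assuming answer logits/targets for the SFT term, log-probability vectors for the KL-based Semantic Alignment Loss, and a triplet formulation for the contrastive Reasoning Focus Loss; the weights, argument names, and the triplet choice are assumptions, not the paper's exact objective.

```python
# Hedged sketch of the three-term LTA-Thinker training objective.
import torch
import torch.nn.functional as F

def lta_thinker_loss(logits, targets,               # SFT: (B, T, V) and (B, T)
                     latent_log_q, question_log_p,  # KL: (B, D) log-probs
                     anchor, positive, negative,    # contrastive embeddings
                     w_sem=0.1, w_focus=0.1, margin=1.0):
    # Standard Supervised Fine-Tuning cross-entropy over answer tokens.
    sft = F.cross_entropy(logits.flatten(0, 1), targets.flatten())
    # Semantic Alignment Loss: KL divergence keeping the Latent Thought
    # distribution close to the question's semantic distribution.
    sem = F.kl_div(latent_log_q, question_log_p,
                   log_target=True, reduction="batchmean")
    # Reasoning Focus Loss: a triplet-style contrastive term pulling the
    # latent toward critical reasoning steps and away from distractors.
    focus = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
    return sft + w_sem * sem + w_focus * focus
```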
Click here to view paper screenshots



Building Coding Agents via Entropy-Enhanced Multi-Turn Preference Optimization
Authors:Jiahao Yu, Zelei Cheng, Xian Wu, Xinyu Xing
Software engineering presents complex, multi-step challenges for Large Language Models (LLMs), requiring reasoning over large codebases and coordinated tool use. The difficulty of these tasks is exemplified by benchmarks like SWE-bench, where current LLMs still struggle to resolve real-world issues. A promising approach to enhance performance is test-time scaling (TTS), but its gains are heavily dependent on the diversity of model outputs. While standard alignment methods such as Direct Preference Optimization (DPO) and Kahneman-Tversky Optimization (KTO) are effective at aligning model outputs with human preferences, this process can come at the cost of reduced diversity, limiting the effectiveness of TTS. Additionally, existing preference optimization algorithms are typically designed for single-turn tasks and do not fully address the complexities of multi-turn reasoning and tool integration required for interactive coding agents. To bridge this gap, we introduce \sys, an entropy-enhanced framework that adapts existing preference optimization algorithms to the multi-turn, tool-assisted setting. \sys augments the preference objective to explicitly preserve policy entropy and generalizes learning to optimize over multi-turn interactions rather than single-turn responses. We validate \sys by fine-tuning a diverse suite of models from different families and sizes (up to 106B parameters). To maximize performance gains from TTS, we further propose a hybrid best-trajectory selection scheme combining a learned verifier model with model-free approaches. On the SWE-bench leaderboard, our approach establishes new state-of-the-art results among open-weight models. A 30B parameter model trained with \sys ranks 1st on SWE-bench Lite and 4th on SWE-bench Verified on the open-weight leaderboard, surpassed only by models with over 10x more parameters (e.g., >350B).
Paper and project links
Summary
This paper examines the complex challenges that large language models (LLMs) face in software engineering and their current limitations on real-world issues. It discusses test-time scaling (TTS) as a way to boost performance, noting that its gains depend on the diversity of model outputs. To address this, the paper introduces an entropy-enhanced framework that adapts preference optimization algorithms to optimize model performance in multi-turn, tool-assisted settings. Fine-tuning experiments across multiple models validate the method with new state-of-the-art results.
Key Takeaways
- Large language models (LLMs) face complex, multi-step challenges in software engineering, requiring reasoning over large codebases and coordinated tool use.
- Current LLMs still struggle to resolve real-world issues, especially on benchmarks such as SWE-bench.
- Test-time scaling (TTS) is one way to improve LLM performance, but its gains depend on the diversity of model outputs.
- Standard alignment methods such as Direct Preference Optimization (DPO) and Kahneman-Tversky Optimization (KTO) effectively align model outputs with human preferences but can reduce output diversity, limiting the effectiveness of TTS.
- To close this gap, an entropy-enhanced framework adapts preference optimization algorithms to the complexities of multi-turn, tool-assisted settings.
- The framework strengthens the model by explicitly preserving policy entropy and optimizing over multi-turn interactions rather than single-turn responses (see the sketch after this list).
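Since the abstract names the two changes explicitly, preserving policy entropy and scoring whole multi-turn interactions, a minimal sketch of a DPO-style trajectory loss with an entropy bonus follows. Summing per-turn log-probabilities into a trajectory score, the beta coefficients, and passing a precomputed policy entropy are all assumptions; the paper's actual \sys objective (and its KTO variant) may differ.

```python
# Hedged sketch: DPO lifted to multi-turn trajectories plus an entropy bonus.
import torch
import torch.nn.functional as F

def entropy_dpo_loss(chosen_logps, rejected_logps,          # policy, (B, turns)
                     ref_chosen_logps, ref_rejected_logps,  # frozen reference
                     policy_entropy,                        # (B,), precomputed
                     beta=0.1, beta_h=0.01):
    # A trajectory's log-prob is the sum of its per-turn assistant actions,
    # so the preference is over whole multi-turn interactions rather than
    # single-turn responses.
    pi_margin = chosen_logps.sum(-1) - rejected_logps.sum(-1)
    ref_margin = ref_chosen_logps.sum(-1) - ref_rejected_logps.sum(-1)
    dpo = -F.logsigmoid(beta * (pi_margin - ref_margin)).mean()
    # Entropy bonus: explicitly preserves output diversity so that
    # test-time scaling still has varied trajectories to select from.
    return dpo - beta_h * policy_entropy.mean()
```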
Click here to view paper screenshots


SwinSRGAN: Swin Transformer-based Generative Adversarial Network for High-Fidelity Speech Super-Resolution
Authors:Jiajun Yuan, Xiaochen Wang, Yuhang Xiao, Yulin Wu, Chenhao Hu, Xueyang Lv
Speech super-resolution (SR) reconstructs high-frequency content from low-resolution speech signals. Existing systems often suffer from representation mismatch in two-stage mel-vocoder pipelines and from over-smoothing of hallucinated high-band content by CNN-only generators. Diffusion and flow models are computationally expensive, and their robustness across domains and sampling rates remains limited. We propose SwinSRGAN, an end-to-end framework operating on Modified Discrete Cosine Transform (MDCT) magnitudes. It is a Swin Transformer-based U-Net that captures long-range spectro-temporal dependencies, with a hybrid adversarial scheme that combines time-domain MPD/MSD discriminators with a multi-band MDCT discriminator specialized for the high-frequency band. We employ a sparse-aware regularizer on arcsinh-compressed MDCT to better preserve transient components. The system upsamples inputs at various sampling rates to 48 kHz in a single pass and operates in real time. On standard benchmarks, SwinSRGAN reduces objective error and improves ABX preference scores. In zero-shot tests on HiFi-TTS without fine-tuning, it outperforms NVSR and mdctGAN, demonstrating strong generalization across datasets.
Paper and project links
PDF: 5 pages. This work has been submitted to the IEEE for possible publication.
Summary
Speech super-resolution (SR) reconstructs high-frequency content from low-resolution speech signals. Existing systems suffer from representation mismatch in two-stage mel-vocoder pipelines and from CNN-only generators over-smoothing synthesized high-band content. This paper proposes SwinSRGAN, an end-to-end framework operating on Modified Discrete Cosine Transform (MDCT) magnitudes. It adopts a Swin Transformer-based U-Net that captures long-range spectro-temporal dependencies and a hybrid adversarial scheme combining time-domain MPD/MSD discriminators with a multi-band MDCT discriminator specialized for the high-frequency band. A sparse-aware regularizer on arcsinh-compressed MDCT better preserves transient components. The system upsamples inputs at various sampling rates to 48 kHz in a single real-time pass. On standard benchmarks, SwinSRGAN reduces objective error and improves ABX preference scores; in zero-shot tests on HiFi-TTS without fine-tuning, it outperforms NVSR and mdctGAN, showing strong cross-dataset generalization.
Key Takeaways
- Speech super-resolution (SR) reconstructs high-frequency content from low-resolution speech signals.
- Existing systems suffer from representation mismatch and from over-smoothing of hallucinated high-band content.
- SwinSRGAN is an end-to-end framework operating on Modified Discrete Cosine Transform (MDCT) magnitudes.
- SwinSRGAN adopts a Swin Transformer-based U-Net to capture long-range spectro-temporal dependencies.
- A hybrid adversarial scheme combines time-domain discriminators with an MDCT discriminator specialized for the high-frequency band.
- SwinSRGAN applies a sparse-aware regularizer on arcsinh-compressed MDCT to preserve transient components (see the sketch after this list).
- SwinSRGAN upsamples inputs at various sampling rates to 48 kHz in a single pass, runs in real time, and performs strongly on benchmarks.
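Of the pieces above, the arcsinh compression and the sparse-aware regularizer are concrete enough to sketch. A minimal illustration, assuming an L1-style penalty computed in the compressed MDCT domain; the paper's actual regularizer, weighting, and any scaling factor are not specified in the abstract, so treat all of these as assumptions.

```python
# Hedged sketch of arcsinh compression plus a sparsity-style MDCT penalty.
import torch

def arcsinh_compress(mdct: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # asinh is roughly linear near zero and logarithmic for large values,
    # so it compresses dynamic range without crushing small transient bins.
    return torch.asinh(alpha * mdct)

def sparse_aware_penalty(pred_mdct: torch.Tensor,
                         target_mdct: torch.Tensor,
                         weight: float = 0.1) -> torch.Tensor:
    # An L1 distance in the compressed domain favours sparse, transient-
    # preserving errors over smeared, over-smoothed ones.
    diff = arcsinh_compress(pred_mdct) - arcsinh_compress(target_mdct)
    return weight * diff.abs().mean()
```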
Click here to view paper screenshots



