⚠️ All of the summaries below are generated by a large language model and may contain errors; they are for reference only, so use them with caution.
🔴 Please note: never use them in serious academic settings; they are intended only for initial screening before reading a paper!
💗 If you find our project ChatPaperFree helpful, please give us some encouragement! ⭐️ Try it for free on HuggingFace
Updated on 2025-02-12
GWRF: A Generalizable Wireless Radiance Field for Wireless Signal Propagation Modeling
Authors:Kang Yang, Yuning Chen, Wan Du
We present Generalizable Wireless Radiance Fields (GWRF), a framework for modeling wireless signal propagation at arbitrary 3D transmitter and receiver positions. Unlike previous methods that adapt vanilla Neural Radiance Fields (NeRF) from the optical to the wireless signal domain, requiring extensive per-scene training, GWRF generalizes effectively across scenes. First, a geometry-aware Transformer encoder-based wireless scene representation module incorporates information from geographically proximate transmitters to learn a generalizable wireless radiance field. Second, a neural-driven ray tracing algorithm operates on this field to automatically compute signal reception at the receiver. Experimental results demonstrate that GWRF outperforms existing methods on single scenes and achieves state-of-the-art performance on unseen scenes.
Paper and project links
Summary
This paper proposes the Generalizable Wireless Radiance Fields (GWRF) framework for modeling wireless signal propagation at arbitrary 3D transmitter and receiver positions. Unlike prior methods that directly transplant NeRF from the optical domain to the wireless domain and require extensive per-scene training, GWRF generalizes across scenes by introducing a geometry-aware, Transformer-encoder-based wireless scene representation module and a neural-driven ray tracing algorithm. Experiments demonstrate strong performance on single scenes and state-of-the-art results on unseen scenes.
Key Takeaways
- GWRF is a framework for modeling wireless signal propagation at arbitrary 3D transmitter and receiver positions.
- It learns a wireless radiance field through a geometry-aware, Transformer-encoder-based wireless scene representation module.
- GWRF uses a neural-driven ray tracing algorithm to automatically compute signal reception at the receiver (see the sketch after this list).
- GWRF performs well on both single scenes and unseen scenes.
- Unlike previous methods that directly adapt NeRF, GWRF offers stronger generality and adaptability.
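The abstract describes signal reception as a neural-driven ray tracing computation over the learned field. The following is a minimal sketch of that idea, assuming a NeRF-style accumulation of attenuation and complex emission along the TX-to-RX ray; the `WirelessField` network, its inputs, and the sample count are hypothetical stand-ins, not the authors' actual architecture.

```python
# Minimal sketch: volume-rendering-style accumulation of a wireless signal along
# the TX->RX ray over a learned field. The field parameterization is hypothetical.
import torch
import torch.nn as nn

class WirelessField(nn.Module):
    """Maps a 3D point (plus the TX position) to an attenuation density and a complex emission."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),                    # [attenuation, real part, imag part]
        )

    def forward(self, x, tx):
        out = self.net(torch.cat([x, tx.expand_as(x)], dim=-1))
        sigma = torch.relu(out[..., 0])              # non-negative attenuation density
        emission = torch.complex(out[..., 1], out[..., 2])
        return sigma, emission

def render_signal(field, tx, rx, n_samples=64):
    """Accumulate the received signal along the TX->RX ray (NeRF-style quadrature)."""
    t = torch.linspace(0.0, 1.0, n_samples).unsqueeze(-1)   # sample fractions along the ray
    points = tx + t * (rx - tx)                             # sample points on the ray
    sigma, emission = field(points, tx)
    delta = (rx - tx).norm() / n_samples                    # segment length
    alpha = 1.0 - torch.exp(-sigma * delta)                 # local absorption per segment
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
    return (trans * alpha * emission).sum()                 # complex received signal

field = WirelessField()
tx, rx = torch.tensor([0.0, 0.0, 1.5]), torch.tensor([4.0, 2.0, 1.5])
print(render_signal(field, tx, rx))
```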
Click here to view the paper screenshots





Hyperparameter Optimization and Force Error Correction of Neuroevolution Potential for Predicting Thermal Conductivity of Wurtzite GaN
Authors:Zhuo Chen, Yuejin Yuan, Wenyang Ding, Shouhang Li, Meng An, Gang Zhang
As a representative wide-bandgap semiconductor, wurtzite gallium nitride (GaN) has been widely utilized in high-power devices due to its high breakdown voltage and low specific on-resistance. Accurate prediction of the thermal conductivity of wurtzite GaN is a prerequisite for designing effective thermal management systems for electronic applications. Machine-learning-driven molecular dynamics simulation offers a promising approach to predicting the thermal conductivity of large-scale systems without requiring predefined parameters. However, these methods often underestimate the thermal conductivity of materials with inherently high thermal conductivity because of the large predicted force error compared with first-principles calculations, posing a critical challenge for their broader application. In this study, we successfully developed a neuroevolution potential for wurtzite GaN and accurately predicted its thermal conductivity, 259 W/m-K at room temperature, achieving excellent agreement with reported experimental measurements. The hyperparameters of the neuroevolution potential (NEP) were optimized based on a systematic analysis of the reproduced energies and forces, structural features, and computational efficiency. Furthermore, a force prediction error correction method was implemented, effectively reducing the error caused by the additional force noise of the Langevin thermostat by extrapolating to the zero-force-error limit. This study provides valuable insights and holds significant implications for advancing efficient thermal management technologies in wide-bandgap semiconductor devices.
Paper and project links
PDF 15 pages, 5 figures
Summary
Wurtzite gallium nitride (GaN), a representative wide-bandgap semiconductor, is widely used in high-power devices. Building on machine-learning-driven molecular dynamics, the authors develop a neuroevolution potential (NEP) for wurtzite GaN and accurately predict its room-temperature thermal conductivity as 259 W/m-K, consistent with experimental measurements. The NEP hyperparameters are optimized through a systematic analysis of the reproduced energies and forces, structural features, and computational efficiency, and a force prediction error correction method effectively reduces the error caused by the additional force noise of the Langevin thermostat. The study offers important insights for efficient thermal management technologies in wide-bandgap semiconductor devices.
Key Takeaways
- Wurtzite gallium nitride (GaN) is widely used in high-power devices owing to its high breakdown voltage and low specific on-resistance.
- Accurate prediction of the thermal conductivity of wurtzite GaN is a prerequisite for designing effective thermal management systems for electronic applications.
- Machine-learning-driven molecular dynamics is a promising route to predicting the thermal conductivity of large-scale systems, but it tends to underestimate materials with inherently high thermal conductivity because of large force prediction errors.
- A neuroevolution potential (NEP) for wurtzite GaN was developed and accurately predicts a room-temperature thermal conductivity of 259 W/m-K, in agreement with experimental measurements.
- The NEP hyperparameters were optimized through a systematic analysis of the reproduced energies and forces, structural features, and computational efficiency.
- A force prediction error correction method reduces the error caused by the additional force noise of the Langevin thermostat by extrapolating to the zero-force-error limit (see the sketch after this list).
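The abstract states that the force prediction error is corrected by extrapolating to the zero-force-error limit. Below is a minimal sketch of that idea, assuming a simple linear fit of predicted thermal conductivity against force error; the data points and the linear functional form are illustrative assumptions, not values or choices from the paper.

```python
# Minimal sketch: extrapolate MD-predicted thermal conductivity to the
# zero-force-error limit. All numbers below are made-up placeholders.
import numpy as np

# (force RMSE, predicted thermal conductivity in W/m-K) from potentials with
# different force errors -- hypothetical data for illustration only
force_rmse = np.array([40.0, 60.0, 80.0, 100.0])
kappa_md   = np.array([240.0, 228.0, 215.0, 203.0])

# Linear fit kappa(sigma_F) = a * sigma_F + b, then take the sigma_F -> 0 limit
a, b = np.polyfit(force_rmse, kappa_md, deg=1)
kappa_corrected = b   # intercept = extrapolated zero-force-error thermal conductivity

print(f"extrapolated kappa at zero force error: {kappa_corrected:.1f} W/m-K")
```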
Click here to view the paper screenshots



VistaFlow: Photorealistic Volumetric Reconstruction with Dynamic Resolution Management via Q-Learning
Authors:Jayram Palamadai, William Yu
We introduce VistaFlow, a scalable three-dimensional imaging technique capable of reconstructing fully interactive 3D volumetric images from a set of 2D photographs. Our model synthesizes novel viewpoints through a differentiable rendering system capable of dynamic resolution management on photorealistic 3D scenes. We achieve this through the introduction of QuiQ, a novel intermediate video controller trained through Q-learning to maintain a consistently high framerate by adjusting render resolution with millisecond precision. Notably, VistaFlow runs natively on integrated CPU graphics, making it viable for mobile and entry-level devices while still delivering high-performance rendering. VistaFlow bypasses Neural Radiance Fields (NeRFs), using the PlenOctree data structure to render complex light interactions such as reflection and subsurface scattering with minimal hardware requirements. Our model is capable of outperforming state-of-the-art methods with novel view synthesis at a resolution of 1080p at over 100 frames per second on consumer hardware. By tailoring render quality to the capabilities of each device, VistaFlow has the potential to improve the efficiency and accessibility of photorealistic 3D scene rendering across a wide spectrum of hardware, from high-end workstations to inexpensive microcontrollers.
Paper and project links
Summary
VistaFlow reconstructs fully interactive 3D volumetric images from a set of 2D photographs through a differentiable rendering system with dynamic resolution management. It bypasses Neural Radiance Fields (NeRFs) and instead uses the PlenOctree data structure to render complex light interactions such as reflection and subsurface scattering with minimal hardware requirements. QuiQ, an intermediate video controller trained with Q-learning, adjusts the render resolution with millisecond precision to maintain a high framerate. VistaFlow runs natively on integrated CPU graphics, making it suitable for mobile and entry-level devices while still delivering high-performance rendering: it synthesizes novel views at 1080p at over 100 frames per second on consumer hardware. By tailoring render quality to the capabilities of each device, VistaFlow can improve the efficiency and accessibility of photorealistic 3D scene rendering across a wide range of hardware, from high-end workstations to inexpensive microcontrollers.
Key Takeaways
- VistaFlow reconstructs fully interactive 3D volumetric images from a set of 2D photographs.
- Rendering is performed through a differentiable rendering system with dynamic resolution management.
- The PlenOctree data structure is used to handle complex light interactions such as reflection and subsurface scattering.
- QuiQ, a novel intermediate video controller trained with Q-learning, maintains a stable framerate by adjusting the render resolution (see the sketch after this list).
- VistaFlow runs on integrated CPU graphics, making it viable for mobile and entry-level devices while still delivering high-performance rendering.
- Novel views are synthesized at 1080p at over 100 frames per second on consumer hardware.
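QuiQ is described as an intermediate video controller trained with Q-learning to hold a high framerate by adjusting the render resolution. The sketch below shows a generic tabular Q-learning loop in that spirit; the state discretization, action set, reward shaping, and the `simulate_fps` stand-in are all hypothetical, not the paper's design.

```python
# Minimal sketch: tabular Q-learning controller that trades render resolution
# against framerate. Everything here is an illustrative toy, not QuiQ itself.
import random

RESOLUTIONS = [360, 540, 720, 1080]        # candidate render heights (hypothetical)
ACTIONS = [-1, 0, +1]                      # lower / keep / raise the resolution index
TARGET_FPS = 100.0

def fps_bucket(fps):                       # discretize framerate into a small state space
    return min(int(fps // 20), 9)

Q = {}                                     # Q[(state, action)] -> value
alpha, gamma, eps = 0.1, 0.9, 0.1

def choose_action(state):                  # epsilon-greedy policy over the Q-table
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def simulate_fps(res_idx):                 # stand-in for a real frame-time measurement
    return 170.0 - 40.0 * res_idx + random.uniform(-10.0, 10.0)

res_idx, fps = 1, simulate_fps(1)
for _ in range(2000):
    state = fps_bucket(fps)
    action = choose_action(state)
    res_idx = min(max(res_idx + action, 0), len(RESOLUTIONS) - 1)
    fps = simulate_fps(res_idx)
    # reward: higher resolution is good, but dropping below the FPS target is penalized
    reward = res_idx - (5.0 if fps < TARGET_FPS else 0.0)
    next_state = fps_bucket(fps)
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    Q[(state, action)] = Q.get((state, action), 0.0) + alpha * (
        reward + gamma * best_next - Q.get((state, action), 0.0))

print("learned preferred resolution:", RESOLUTIONS[res_idx])
```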
Click here to view the paper screenshots






TivNe-SLAM: Dynamic Mapping and Tracking via Time-Varying Neural Radiance Fields
Authors:Chengyao Duan, Zhiliu Yang
Previous attempts to integrate Neural Radiance Fields (NeRF) into the Simultaneous Localization and Mapping (SLAM) framework either rely on the assumption of static scenes or require the ground truth camera poses, which impedes their application in real-world scenarios. This paper proposes a time-varying representation to track and reconstruct the dynamic scenes. Firstly, two processes, a tracking process and a mapping process, are maintained simultaneously in our framework. In the tracking process, all input images are uniformly sampled and then progressively trained in a self-supervised paradigm. In the mapping process, we leverage motion masks to distinguish dynamic objects from the static background, and sample more pixels from dynamic areas. Secondly, the parameter optimization for both processes is comprised of two stages: the first stage associates time with 3D positions to convert the deformation field to the canonical field. The second stage associates time with the embeddings of the canonical field to obtain colors and a Signed Distance Function (SDF). Lastly, we propose a novel keyframe selection strategy based on the overlapping rate. Our approach is evaluated on two synthetic datasets and one real-world dataset, and the experiments validate that our method achieves competitive results in both tracking and mapping when compared to existing state-of-the-art NeRF-based dynamic SLAM systems.
Paper and project links
Summary
This paper proposes a time-varying NeRF representation for tracking and reconstructing dynamic scenes. By maintaining a tracking process and a mapping process simultaneously, it performs SLAM in dynamic scenes without assuming static scenes or requiring ground-truth camera poses. Parameter optimization proceeds in two stages, and a novel keyframe selection strategy based on the overlapping rate is adopted. Experiments show that the method achieves results competitive with existing state-of-the-art NeRF-based dynamic SLAM systems in both tracking and mapping.
Key Takeaways
- The method combines NeRF with a time-varying representation to track and reconstruct dynamic scenes.
- Maintaining the tracking and mapping processes simultaneously removes the static-scene and ground-truth-pose limitations of existing NeRF-based SLAM.
- Input images are uniformly sampled and trained progressively in a self-supervised paradigm.
- Motion masks distinguish dynamic objects from the static background, and more pixels are sampled from dynamic areas.
- Parameter optimization has two stages: time is first associated with 3D positions to convert the deformation field to the canonical field, and then with the embeddings of the canonical field to obtain colors and a Signed Distance Function (SDF).
- A novel keyframe selection strategy based on the overlapping rate is proposed (see the sketch after this list).
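The paper proposes a keyframe selection strategy based on the overlapping rate. A minimal sketch of one common way to compute such an overlap is given below, assuming a pinhole camera and camera-to-world poses: current-frame pixels are back-projected with depth, reprojected into each keyframe, and a new keyframe is inserted when the best overlap drops below a threshold. The camera model, threshold, and demo data are illustrative assumptions, not the authors' exact criterion.

```python
# Minimal sketch: overlap-based keyframe insertion using depth reprojection.
# Intrinsics, threshold, and demo data are hypothetical.
import numpy as np

W, H = 640, 480
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])            # pinhole intrinsics (hypothetical)

def overlap_rate(depth, pose_cur, pose_kf, stride=8):
    """Fraction of sampled current-frame pixels that reproject inside the keyframe image."""
    us, vs = np.meshgrid(np.arange(0, W, stride), np.arange(0, H, stride))
    d = depth[vs, us]
    pix = np.stack([us * d, vs * d, d], axis=-1).reshape(-1, 3)
    pts_cam = pix @ np.linalg.inv(K).T                               # back-project to 3D (current camera)
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
    pts_kf = (pts_h @ (np.linalg.inv(pose_kf) @ pose_cur).T)[:, :3]  # into keyframe camera coords
    proj = pts_kf @ K.T
    z = proj[:, 2]
    with np.errstate(divide="ignore", invalid="ignore"):
        u, v = proj[:, 0] / z, proj[:, 1] / z
    visible = (z > 1e-6) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    return visible.mean()

def maybe_add_keyframe(depth, pose_cur, keyframes, thresh=0.85):
    rates = [overlap_rate(depth, pose_cur, pose_kf) for pose_kf in keyframes]
    if not rates or max(rates) < thresh:    # low overlap with every keyframe -> insert a new one
        keyframes.append(pose_cur)

# Toy usage: a flat depth map and a camera shifted 1.5 m sideways triggers insertion.
keyframes = [np.eye(4)]
depth = np.full((H, W), 2.0)
pose = np.eye(4); pose[0, 3] = 1.5
maybe_add_keyframe(depth, pose, keyframes)
print("number of keyframes:", len(keyframes))
```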
Click here to view the paper screenshots





