Journal of Shanghai University (Natural Science Edition) ›› 2025, Vol. 31 ›› Issue (3): 530-542. doi: 10.12066/j.issn.1007-2861.2663

• Communication Engineering •

Dense light field decoupling reconstruction based on multiscale EPI fusion

CAO Jie1,2, WU Yujing3, ZHANG Qian1,2, MENG Chunli1, YAN Tao4

  1. College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, Shanghai 200234, China;
    2. Shanghai Engineering Research Center of Intelligent Education and Big Data, Shanghai 200234, China;
    3. High School Affiliated to Fudan University, Shanghai 200433, China;
    4. School of Artificial Intelligence, Putian University, Putian 351100, Fujian, China
  • Received: 2024-10-22 Online: 2025-06-30 Published: 2025-07-22
  • Corresponding author: ZHANG Qian (1983-), female, associate professor; research interests include video and image information processing. E-mail: qianzhang@shnu.edu.cn
  • Supported by:
    National Natural Science Foundation of China (62301320); Natural Science Foundation of Fujian Province (2023J011009); Intelligent Monitoring and Assessment of Adolescents' Physical and Mental Growth Project (2023YFC3305802); Putian Science and Technology Bureau Project (2021G2001ptxy08); Putian University Talent Research Start-up Fund (2019003)

Dense light field decoupling reconstruction based on multiscale EPI fusion

CAO Jie1,2, WU Yujing3, ZHANG Qian1,2, MENG Chunli1, YAN Tao4   

  1. College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, Shanghai 200234, China;
    2. Shanghai Engineering Research Center of Intelligent Education and Big Data, Shanghai 200234, China;
    3. High School Affiliated to Fudan University, Shanghai 200433, China;
    4. School of Artificial Intelligence, Putian University, Putian 351100, Fujian, China
  • Received: 2024-10-22 Online: 2025-06-30 Published: 2025-07-22

Abstract: To fully exploit the intrinsic correlation between light field epipolar plane images (EPIs) while strengthening the effective capture of spatial information, a dense light field decoupling reconstruction method based on multiscale EPI information fusion is proposed. The method exploits features in the spatial and epipolar-plane dimensions at a deeper level, so that the angular correlations between sub-aperture views are captured more effectively, and it improves the accuracy and quality of light field reconstruction by decoupling and fusing multiple kinds of information. First, a dense spatial dimension is added on top of the four-dimensional light field data, which improves the generalization ability of the network and strengthens its understanding of local image structure and texture. Second, to better complement and enhance the mutual information among epipolar planes, an epipolar plane fusion module is designed, together with a new multiscale convolutional attention mechanism for fusing feature information; through multiscale feature extraction and a global attention mechanism, this attention effectively captures angular information, enhances important features, and suppresses redundant content. Finally, experiments on the HCInew, HCIold, and Stanford light field datasets show that the proposed method outperforms existing state-of-the-art (SOTA) methods in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), achieving better reconstruction results in most test scenes.
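To make the EPI notion used above concrete, the following is a minimal slicing sketch; it is not the authors' code, and the 4D layout L(u, v, y, x) with a 9 x 9 angular grid is only an illustrative assumption.

    import numpy as np

    # Hypothetical 4D light field: U x V angular views, each H x W pixels (grayscale).
    U, V, H, W = 9, 9, 512, 512
    lf = np.random.rand(U, V, H, W).astype(np.float32)  # stands in for real data

    def horizontal_epi(lf, v_fixed, y_fixed):
        """EPI over (u, x): fix the vertical view index v and the image row y."""
        return lf[:, v_fixed, y_fixed, :]                # shape (U, W)

    def vertical_epi(lf, u_fixed, x_fixed):
        """EPI over (v, y): fix the horizontal view index u and the image column x."""
        return lf[u_fixed, :, :, x_fixed]                # shape (V, H)

    epi_h = horizontal_epi(lf, v_fixed=4, y_fixed=256)
    epi_v = vertical_epi(lf, u_fixed=4, x_fixed=256)
    print(epi_h.shape, epi_v.shape)                      # (9, 512) (9, 512)

Scene points trace straight lines in such slices, and the slopes of those lines encode disparity, which is why EPI-domain processing is well suited to recovering the angular correlations exploited by the method.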

Key words: light field, light field decoupling, dense feature extraction, epipolar plane fusion, angular super-resolution

Abstract: To fully exploit the inherent correlation between light field epipolar plane images (EPIs) and to strengthen the effective capture of spatial information, this study proposed a dense light field decoupling reconstruction method based on multiscale EPI information fusion. The method exploited the spatial and epipolar-plane dimensions at a deeper level to better capture the angular correlations between sub-aperture views, and enhanced the accuracy and effectiveness of light field reconstruction by decoupling and fusing multiple types of information. First, on top of the four-dimensional light field data, an additional dense spatial dimension was introduced to improve the generalization capability of the network and to enhance its understanding of local structures and texture details in images. Second, to better complement and enhance the mutual information among epipolar planes, an epipolar plane fusion module was designed along with a novel multiscale convolutional attention mechanism to integrate feature information. Through multiscale feature extraction and a global attention mechanism, this attention mechanism effectively captured angular correlations, enhancing the expression of critical features while suppressing redundant content. Finally, experiments conducted on the HCInew, HCIold, and Stanford light field datasets demonstrated that the proposed method outperformed existing state-of-the-art (SOTA) approaches in terms of evaluation metrics including peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), achieving superior reconstruction performance in most test scenarios.
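As a concrete illustration of the multiscale convolutional attention described above, the following PyTorch sketch combines depth-wise convolutions at several kernel sizes with a globally pooled gating branch. The kernel sizes, channel counts, and gating form are assumptions made for this example and are not taken from the paper's actual module.

    import torch
    import torch.nn as nn

    class MultiScaleConvAttention(nn.Module):
        """Illustrative multiscale convolutional attention: the input is filtered at
        several receptive-field sizes, summarised globally, and used to gate itself."""

        def __init__(self, channels: int, scales=(3, 5, 7)):
            super().__init__()
            # One depth-wise convolution per scale (kernel sizes are an assumption).
            self.branches = nn.ModuleList(
                nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
                for k in scales
            )
            self.fuse = nn.Conv2d(len(scales) * channels, channels, kernel_size=1)
            self.pool = nn.AdaptiveAvgPool2d(1)                  # global context
            self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

        def forward(self, x):                                    # x: (B, C, H, W)
            multi = torch.cat([branch(x) for branch in self.branches], dim=1)
            fused = self.fuse(multi)
            attn = self.gate(self.pool(fused))                   # (B, C, 1, 1) weights
            return x * attn + fused   # emphasise informative features, keep fused detail

    # Quick shape check on dummy EPI-like features (batch 2, 32 channels, 9 x 64 map).
    feat = torch.randn(2, 32, 9, 64)
    print(MultiScaleConvAttention(32)(feat).shape)               # torch.Size([2, 32, 9, 64])

In the paper's setting a block of this kind would sit inside the epipolar plane fusion module and operate on EPI feature maps; here it is only a shape-checked stand-in.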

Key words: light field, light field decoupling, dense feature extraction, epipolar plane image (EPI) fusion, angular super-resolution

CLC number: