Communication Engineering


Funding

National Natural Science Foundation of China (62372284)

Relative position and attitude estimation of satellite solar panels under complex lighting conditions via robust feature-point extraction

  • KUANG Yihan ,
  • LI Guanyi ,
  • WANG Zheng ,
  • CHANG Liang ,
  • ZENG Dan
  • 1. School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China;
    2. Innovation Academy for Microsatellites of Chinese Academy of Sciences, Shanghai 201304, China

Received date: 2025-03-17

  Online published: 2025-07-22


Cite this article

KUANG Yihan, LI Guanyi, WANG Zheng, CHANG Liang, ZENG Dan. Relative position and attitude estimation of satellite solar panels under complex lighting conditions via robust feature-point extraction [J]. Journal of Shanghai University (Natural Science Edition), 2025, 31(3): 516-529. DOI: 10.12066/j.issn.1007-2861.2671

Abstract

As a critical technology in the aerospace field, estimating the relative position and attitude of satellite solar panels is crucial for successfully executing on-orbit satellite maintenance missions. However, under the complex lighting conditions of space, nonuniform illumination and interference from repetitive edge textures complicate the accurate extraction of solar-panel feature points, degrading the precision of relative position and attitude estimation. Therefore, a method for estimating the relative position and attitude of satellite solar panels under complex lighting conditions through robust feature-point extraction is proposed. The method first segments the solar-panel region accurately using a lightweight multiscale edge-guided network. After the segmentation result is preprocessed, straight lines are fitted to the panel edges and their intersection points are computed, efficiently extracting the panel's feature points. Finally, the relative position and attitude parameters of the panel are obtained by matching point pairs across adjacent frames. Experimental results demonstrate that, under complex lighting conditions, as the camera dynamically approaches from 60 m to 15 m, the proposed method keeps the relative attitude error within 2° and reduces the relative position error from 0.38 m to 0.04 m, highlighting its high precision and robustness.
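The edge-line fitting and intersection step described in the abstract can be illustrated with a minimal sketch. Assuming the segmented panel boundary has already been split into per-edge pixel sets (the function names `fit_line` and `intersect` are hypothetical, not from the paper), a corner feature point is recovered as the intersection of two least-squares edge lines:

```python
import numpy as np

def fit_line(points):
    # Least-squares line fit: return a point on the line and a unit direction.
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal direction of the centered points via SVD.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def intersect(p1, d1, p2, d2):
    # Solve p1 + t1*d1 = p2 + t2*d2 for the corner (intersection) point.
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * d1

# Toy example: two panel edges meeting at (4, 2).
edge_a = [(0, 2), (1, 2), (2, 2), (3, 2)]   # horizontal edge, y = 2
edge_b = [(4, 0), (4, 1), (4, 3), (4, 4)]   # vertical edge, x = 4
corner = intersect(*fit_line(edge_a), *fit_line(edge_b))
print(corner)  # → [4. 2.]
```

In practice the per-edge pixel sets would come from the preprocessed segmentation boundary, and the fit would be repeated for each of the panel's four edges.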

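The final step, recovering relative pose from matched point pairs across adjacent frames, is not detailed in the abstract. One standard solver for matched 3-D point pairs is the Kabsch (SVD-based rigid alignment) algorithm, sketched here under that assumption with hypothetical panel-corner data:

```python
import numpy as np

def relative_pose(src, dst):
    # Kabsch: find R, t minimizing ||R @ src_i + t - dst_i|| over matched pairs.
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Hypothetical panel corners, rotated 10 degrees about z and translated.
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
corners = np.array([[0, 0, 0], [2, 0, 0], [2, 1, 0], [0, 1, 0]], float)
moved = corners @ R_true.T + np.array([0.1, -0.2, 0.5])
R, t = relative_pose(corners, moved)
```

This recovers the frame-to-frame rotation and translation exactly for noise-free correspondences; the paper's own solver may differ.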
References

[1] Gao Z L, Sun X F, Liu Y Q, et al. Development status and prospects of on-orbit life-extension services for spacecraft [J]. Spacecraft Engineering, 2022, 31(4): 98-107. (in Chinese)
[2] Tan Q C. Research on visual pose estimation of non-cooperative space targets [D]. Harbin: Harbin Institute of Technology, 2024. (in Chinese)
[3] Liu F C, Han F, Sun Y, et al. Key technologies of guidance, navigation and control for on-orbit servicing spacecraft [J]. Journal of Chinese Inertial Technology, 2023, 31(9): 849-860, 869. (in Chinese)
[4] Guo S J. Research on non-cooperative target recognition and relative pose measurement for on-orbit servicing [D]. Shanghai: University of Chinese Academy of Sciences (Innovation Academy for Microsatellites of Chinese Academy of Sciences), 2023. (in Chinese)
[5] Hu H D, Du H, Wang D Y, et al. Feature extraction and motion measurement methods for non-cooperative space targets [J]. Scientia Sinica Physica, Mechanica & Astronomica, 2022, 52(1): 114-123. (in Chinese)
[6] Zhang D X, Liu C. A binocular-vision-based pose parameter estimation algorithm for CubeSats [J]. Aerospace Control and Application, 2023, 49(6): 28-37. (in Chinese)
[7] De Jongh W C, Jordaan H W, Van Daalen C E. Experiment for pose estimation of uncooperative space debris using stereo vision [J]. Acta Astronautica, 2020, 168: 164-173.
[8] Feng T, Feng Z H, Nan Y M, et al. LiDAR-based attitude measurement of non-cooperative spacecraft [J]. Transducer and Microsystem Technologies, 2024, 43(2): 139-142, 147. (in Chinese)
[9] Simpsi A, Roggerini M, Cannici M, et al. 6 DoF pose regression via differentiable rendering [C]// International Conference on Image Analysis and Processing. 2022: 645-656.
[10] Jin Z M, Wang L, Liu K, et al. Monocular pose estimation of non-cooperative space targets combining EKF and EKPF [J]. Journal of Astronautics, 2021, 42(7): 907-916. (in Chinese)
[11] Guo M, Chen Y, Liang B, et al. Fast recognition and pose estimation algorithm for space cooperative target via mono-vision [J]. Journal of Physics: Conference Series, 2022, 2405(1): 012021.
[12] Long C, Hu Q. Monocular-vision-based relative pose estimation of noncooperative spacecraft using multicircular features [J]. IEEE/ASME Transactions on Mechatronics, 2022, 27(6): 5403-5414.
[13] Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation [C]// Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015): 18th International Conference. 2015: 234-241.
[14] Fan D P, Ji G P, Zhou T, et al. PraNet: parallel reverse attention network for polyp segmentation [C]// International Conference on Medical Image Computing and Computer-Assisted Intervention. 2020: 263-273.
[15] Bui N T, Hoang D H, Nguyen Q T, et al. MEGANet: multi-scale edge-guided attention network for weak boundary polyp segmentation [C]// Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024: 7985-7994.
[16] Von Gioi R G, Randall G. A sub-pixel edge detector: an implementation of the Canny/Devernay algorithm [J]. Image Processing on Line, 2017, 7: 347-372.
[17] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.
[18] Gao S H, Cheng M M, Zhao K, et al. Res2Net: a new multi-scale backbone architecture [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(2): 652-662.
[19] Howard A, Sandler M, Chu G, et al. Searching for MobileNetV3[C]// Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 1314-1324.
[20] Cui Z, Qi G J, Gu L, et al. Multitask AET with orthogonal tangent regularity for dark object detection [C]// Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 2553-2562.
[21] Margolin R, Zelnik-Manor L, Tal A. How to evaluate foreground maps? [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014: 248-255.
[22] Chen L C, Zhu Y, Papandreou G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation [C]// Proceedings of the European Conference on Computer Vision. 2018: 801-818.