TY - GEN
T1 - An Automatically Annotated Spacecraft Intelligent Perception Dataset Based on Segment Anything Model
AU - Chen, Zilong
AU - Zhao, Shengyun
AU - Zhong, Rui
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Spacecraft intelligent perception, including pose estimation and target detection, plays a vital role in the relative navigation systems used for spacecraft rendezvous and docking and for active debris removal. Deep learning-based methods are widely used for spacecraft relative state perception; however, labeled images of on-orbit spacecraft for model training are difficult to obtain. In this paper, we propose the Spacecraft Object Detection and Pose Estimation Dataset (SDPED), generated with Unreal Engine 4 (UE4) and a hardware-in-the-loop (HIL) laboratory setup, respectively. Additionally, we utilize the Segment Anything Model (SAM) to autonomously annotate the UE4 images through small-batch training. The advantages of the proposed SDPED are that it covers a variety of spacecraft types, provides a variety of label information, and uses an autonomous and accurate labeling method. Subsequently, we modify the multi-task network PVSPE, retaining only the pose estimation and target detection heads, to evaluate the effectiveness of SDPED. Extensive experiments are conducted on challenging synthetic and hardware-in-the-loop images. The results show that the average position error, average attitude error, and average IoU on synthetic images are 0.55, 6.11°, and 0.91, respectively. Moreover, the model generalizes well to HIL images through data augmentation and a self-attention mechanism.
AB - Spacecraft intelligent perception, including pose estimation and target detection, plays a vital role in the relative navigation systems used for spacecraft rendezvous and docking and for active debris removal. Deep learning-based methods are widely used for spacecraft relative state perception; however, labeled images of on-orbit spacecraft for model training are difficult to obtain. In this paper, we propose the Spacecraft Object Detection and Pose Estimation Dataset (SDPED), generated with Unreal Engine 4 (UE4) and a hardware-in-the-loop (HIL) laboratory setup, respectively. Additionally, we utilize the Segment Anything Model (SAM) to autonomously annotate the UE4 images through small-batch training. The advantages of the proposed SDPED are that it covers a variety of spacecraft types, provides a variety of label information, and uses an autonomous and accurate labeling method. Subsequently, we modify the multi-task network PVSPE, retaining only the pose estimation and target detection heads, to evaluate the effectiveness of SDPED. Extensive experiments are conducted on challenging synthetic and hardware-in-the-loop images. The results show that the average position error, average attitude error, and average IoU on synthetic images are 0.55, 6.11°, and 0.91, respectively. Moreover, the model generalizes well to HIL images through data augmentation and a self-attention mechanism.
KW - intelligent perception
KW - SDPED
KW - Segment Anything Model
UR - https://www.scopus.com/pages/publications/85219581638
U2 - 10.1109/DICTA63115.2024.00061
DO - 10.1109/DICTA63115.2024.00061
M3 - Conference contribution
AN - SCOPUS:85219581638
T3 - Proceedings - 2024 25th International Conference on Digital Image Computing: Techniques and Applications, DICTA 2024
SP - 367
EP - 373
BT - Proceedings - 2024 25th International Conference on Digital Image Computing: Techniques and Applications, DICTA 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 25th International Conference on Digital Image Computing: Techniques and Applications, DICTA 2024
Y2 - 27 November 2024 through 29 November 2024
ER -