[1]茹祥宇,金 潮,潘成峰,等.单目视觉惯性融合方法在无人机位姿估计中的应用[J].控制与信息技术(原大功率变流技术),2018,(06):50-58.[doi:10.13889/j.issn.2096-5427.2018.06.009]
 RU Xiangyu,JIN Chao,PAN Chengfeng,et al.Monocular Vision-inertial Fusion Approach for MAV State Estimation[J].High Power Converter Technology,2018,(06):50-58.[doi:10.13889/j.issn.2096-5427.2018.06.009]

Monocular Vision-inertial Fusion Approach for MAV State Estimation

Control and Information Technology (formerly High Power Converter Technology) [ISSN: 2095-3631 / CN: 43-1486/U]

卷/Volume:
期数/Issue: 2018, No. 06
页码/Pages: 50-58
栏目/Section: Control Theory and Applications
出版日期/Publication Date: 2018-12-05

文章信息/Info

Title:
Monocular Vision-inertial Fusion Approach for MAV State Estimation
文章编号/Article ID:
2096-5427(2018)06-0050-09
作者/Authors:
茹祥宇1, 金 潮2, 潘成峰2, 许 超1
(1. State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou, Zhejiang 310027, China; 2. Taiping Branch, Feipu Electric Co., Ltd., Wenling, Zhejiang 317500, China)
Author(s):
RU Xiangyu1, JIN Chao2, PAN Chengfeng2, XU Chao1
( 1. State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou, Zhejiang 310027, China; 2. Taiping Branch, Feipu Electric Co., Ltd., Wenling, Zhejiang 317500, China )
关键词/Keywords:
quadrotor; pose estimation; IMU (inertial measurement unit); ARTag; Kalman filter
分类号/CLC Number:
TP273
DOI:
10.13889/j.issn.2096-5427.2018.06.009
文献标志码/Document Code:
A
摘要/Abstract:
To improve the pose estimation accuracy of quadrotor aircraft, this paper proposes a multi-sensor fusion algorithm. A modular Kalman filter framework is designed that fuses the preliminary pose estimate from the vision front end with a dynamic model based on the inertial measurement unit (IMU). A new visual-inertial odometry scheme is proposed in which ARTag markers are used for multi-tag fused pose estimation, and the fused result serves as the observation of the filter. An IMU-based dynamic model is established, and the visual result is added to the state vector for continual iteration. Because the IMU runs at a higher rate than the visual pipeline, the state vector nearest in time is selected whenever a visual update is applied, so the algorithm continues to work well even when the visual result is abnormal. Experiments demonstrate the good performance of the algorithm.
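The multi-tag fused pose estimation used as the filter's observation can be illustrated with a small sketch. This is a hypothetical minimal example, assuming each detected ARTag (with known world pose) independently yields a camera-position estimate with a scalar variance, and that the per-tag estimates are combined by inverse-variance weighting; the tag layout, noise model, and weighting scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_tag_positions(estimates, variances):
    """Fuse per-tag camera-position estimates (N x 3 array) into one
    position, weighting each estimate by the inverse of its variance (N,)."""
    estimates = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)  # inverse-variance weights
    w /= w.sum()                                  # normalize weights
    return w @ estimates                          # weighted mean position
```

For example, fusing two tag-derived positions [1.0, 0, 0] and [1.2, 0, 0] with variances 1.0 and 4.0 weights the nearer (less noisy) tag four times as heavily, so the fused estimate lands much closer to the first.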
Abstract:
To improve the pose estimation accuracy of quadrotor MAVs (micro aerial vehicles), this paper presents a new state estimation algorithm for quadrotors. A modular Kalman filter framework is designed which combines the initial pose estimate from the vision part with an IMU-based dynamic model; this framework constitutes a new visual-inertial odometry. ARTag markers are used to obtain pose estimates, and the result serves as the observation of the algorithm. A dynamic model based on the IMU is established, and the visual result is added to the state vector for iteration. Since the frequency of the IMU is higher than that of the visual result, the most recent state vector is selected when a visual update is performed. The algorithm still works well when the visual result is abnormal, and it shows good performance in the experiments.
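The multi-rate scheme described in the abstract (high-rate IMU prediction, lower-rate visual position updates applied to the buffered state nearest the measurement time, with abnormal vision fixes rejected) can be sketched as follows. This is a minimal single-axis illustration with an assumed state layout, assumed noise values, and a simple 3-sigma innovation gate; it is not the authors' implementation.

```python
import numpy as np

class MultiRateKF:
    """Single-axis sketch: constant-velocity Kalman filter driven by IMU
    acceleration at a high rate, corrected by low-rate visual position fixes."""

    def __init__(self, dt_imu):
        self.dt = dt_imu
        self.x = np.zeros(2)                    # state [position, velocity]
        self.P = np.eye(2)                      # state covariance
        self.F = np.array([[1.0, dt_imu],
                           [0.0, 1.0]])         # state transition
        self.Q = 1e-3 * np.eye(2)               # process noise (assumed)
        self.H = np.array([[1.0, 0.0]])         # vision observes position only
        self.R = np.array([[1e-2]])             # vision noise (assumed)
        self.history = []                       # buffer of (t, x, P)

    def predict(self, t, accel):
        # High-rate IMU step: acceleration enters as a control input.
        B = np.array([0.5 * self.dt ** 2, self.dt])
        self.x = self.F @ self.x + B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q
        self.history.append((t, self.x.copy(), self.P.copy()))

    def vision_update(self, t_meas, z):
        # Pick the buffered state nearest the (possibly delayed) visual
        # timestamp, gate the innovation, then apply a standard KF update;
        # a rejected fix leaves the filter coasting on IMU prediction.
        t, x, P = min(self.history, key=lambda s: abs(s[0] - t_meas))
        y = z - self.H @ x                      # innovation
        S = self.H @ P @ self.H.T + self.R      # innovation covariance
        if abs(y.item()) > 3.0 * np.sqrt(S.item()):
            return                              # abnormal vision result
        K = P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ P
```

A full implementation would re-propagate from the corrected historical state up to the current time using the buffered IMU inputs; the sketch omits that step to keep the multi-rate selection and gating logic in view.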

参考文献/References:

[1] ABDELKRIM N, AOUF N, TSOURDOS A, et al. Robust nonlinear filtering for INS/GPS UAV localization[C]//The 16th Mediterranean Conference on Control and Automation. IEEE, 2008: 695-702.
[2] WITTE T H, WILSON A M. Accuracy of non-differential GPS for the determination of speed over ground[J]. Journal of Biomechanics, 2004, 37(12): 1891-1898.
[3] DEUTSCHER J, BLAKE A, REID I. Articulated body motion capture by annealed particle filtering[C]//Proceedings IEEE Conference on Computer Vision and Pattern Recognition. Hilton Head Island, SC: IEEE, 2000.
[4] HOWARD I P, ROGERS B J. Binocular vision and stereopsis[M]. New York: Oxford University Press, 1995.
[5] GUO T, SILAMU W. A fuzzy logic information fusion algorithm for autonomous mobile robot avoidance based on multi-ultrasonic range sensors[C]//IEEE Piezoelectricity, Acoustic Waves, and Device Applications, 2010: 84.
[6] HUH S, SHIM D H, KIM J. Integrated navigation system using camera and gimbaled laser scanner for indoor and outdoor autonomous flight of UAVs[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Tokyo: IEEE, 2013: 3158-3163.
[7] JIMÉNEZ A R, SECO F, PRIETO J C, et al. Indoor pedestrian navigation using an INS/EKF framework for yaw drift reduction and a foot-mounted IMU[C]//The Workshop on Positioning, Navigation and Communication. Dresden: IEEE, 2010: 135-143.
[8] NISTÉR D, NARODITSKY O, BERGEN J R. Visual odometry[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Washington DC, USA: IEEE, 2004: 652-659.
[9] CHEVIRON T, HAMEL T, MAHONY R E, et al. Robust nonlinear fusion of inertial and visual data for position, velocity and attitude estimation of UAV[C]//IEEE International Conference on Robotics and Automation. Roma, Italy: IEEE, 2007.
[10] LOWE D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
[11] BAY H, TUYTELAARS T, GOOL L V. SURF: Speeded up robust features[C]//European Conference on Computer Vision. [S.l.]: Springer, 2006: 404-417.
[12] ROSTEN E, DRUMMOND T. Machine learning for high-speed corner detection[C]//European Conference on Computer Vision. [S.l.]: Springer, 2006: 430-443.
[13] RUBLEE E, RABAUD V, KONOLIGE K, et al. ORB: An efficient alternative to SIFT or SURF[C]//IEEE International Conference on Computer Vision. Barcelona, Spain: IEEE, 2011: 2564-2571.
[14] FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: Fast semi-direct monocular visual odometry[C]//IEEE International Conference on Robotics and Automation. Hong Kong, China: IEEE, 2014: 15-22.
[15] ENGEL J, SCHÖPS T, CREMERS D. LSD-SLAM: Large-scale direct monocular SLAM[C]//European Conference on Computer Vision. [S.l.]: Springer, 2014: 834-849.
[16] ENGEL J, KOLTUN V, CREMERS D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2018, 40(3): 611-625.
[17] GUI J J, GU D B, WANG S, et al. A review of visual inertial odometry from filtering and optimisation perspectives[J]. Advanced Robotics, 2015, 29(20): 1289-1301.
[18] WEISS S, SIEGWART R. Real-time metric state estimation for modular vision-inertial systems[C]//IEEE International Conference on Robotics and Automation. Shanghai: IEEE, 2011: 4531-4537.
[19] WEISS S, ACHTELIK M W, CHLI M, et al. Versatile distributed pose estimation and sensor self-calibration for an autonomous MAV[C]//IEEE International Conference on Robotics and Automation. Saint Paul, USA: IEEE, 2012: 31-38.
[20] LYNEN S, ACHTELIK M W, WEISS S, et al. A robust and modular multi-sensor fusion approach applied to MAV navigation[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Tokyo, Japan: IEEE, 2013: 3923-3929.
[21] MOURIKIS A I, ROUMELIOTIS S I. A multi-state constraint Kalman filter for vision-aided inertial navigation[C]//IEEE International Conference on Robotics and Automation. Roma, Italy: IEEE, 2007: 3565-3572.
[22] BLOESCH M, OMARI S, HUTTER M, et al. Robust visual inertial odometry using a direct EKF-based approach[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Hamburg, Germany: IEEE, 2015: 298-304.
[23] LEUTENEGGER S, FURGALE P, RABAUD V, et al. Keyframe-based visual-inertial SLAM using nonlinear optimization[C]//Robotics: Science and Systems, 2013: 789-795.
[24] LIU H M, ZHANG G F, BAO H J. Robust keyframe-based monocular SLAM for augmented reality[C]//IEEE International Symposium on Mixed and Augmented Reality. Merida, Mexico: IEEE, 2016: 1-10.

备注/Memo:
收稿日期/Received: 2018-09-16
作者简介/About the first author: RU Xiangyu (b. 1991), male, master's student; his current research focuses on multi-sensor fusion pose estimation methods based on a quadrotor platform.
基金项目/Funding: National Natural Science Foundation of China (61473253); Foundation for Innovative Research Groups of the National Natural Science Foundation of China (61621002)
更新日期/Last Update: 2018-12-25