§ Browsing ETD Metadata
System No. U0002-1508201314045500
Title (in Chinese) 立體視覺里程計演算法之設計與實現
Title (in English) Design and Implementation of a Stereo Visual Odometry Algorithm
Other Title
Institution 淡江大學 (Tamkang University)
Department (in Chinese) 電機工程學系碩士班
Department (in English) Department of Electrical and Computer Engineering
Other Division
Other Division Name
Other Department/Institution
Academic Year 101
Semester 2
PublicationYear 102
Author's name (in Chinese) 黃志弘
Author's name (in English) Chih-Hung Huang
Student ID 600460140
Degree Master's (碩士)
Language Traditional Chinese
Other Language
Date of Oral Defense 2013-07-16
Pagination 62 pages
Committee Member advisor - Chi-Yi Tsai
co-chair - 蘇木春
co-chair - 李世安
co-chair - 蔡奇謚
Keyword (in Chinese) 視覺里程計
Keyword (in English) visual odometry
pose estimation
re-projection error
stereo vision
Other Keywords
Abstract (in Chinese)
Abstract (in English)
This thesis proposes a stereo visual odometry algorithm for two-degree-of-freedom (DOF) pan-tilt platforms and wheeled mobile robot platforms. The proposed algorithm achieves six-DOF camera pose estimation. To improve the accuracy of pose estimation, a simple and efficient calibration method is proposed to estimate the intrinsic parameters of a stereo vision camera. An optimal camera pose estimation algorithm is then designed via a re-projection scheme based on the estimated camera intrinsic parameters. Applying the proposed algorithm to a stereo vision camera system allows a platform equipped with the system to estimate the pose change between two consecutive images, thereby achieving visual odometry. Experimental results on a two-DOF pan-tilt platform and a wheeled mobile robot validate the performance of the proposed algorithm through comparison with an existing method.
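The abstract describes the method only at a high level. As a rough, hypothetical sketch of the core idea it names — estimating the frame-to-frame six-DOF camera pose by minimizing the re-projection error of 3D points against matched 2D features — the following uses a plain Gauss-Newton solver with a numeric Jacobian. The function names, intrinsic values, and solver settings are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector w (3,) -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    S = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * S + (1.0 - np.cos(theta)) * (S @ S)

def project(K, R, t, X):
    """Project 3D points X (N,3) through pose (R, t) and intrinsics K -> (N,2) pixels."""
    Xc = X @ R.T + t                 # world frame -> camera frame
    x = Xc @ K.T                     # homogeneous pixel coordinates
    return x[:, :2] / x[:, 2:3]      # perspective division

def estimate_pose(K, X_prev, x_curr, iters=50):
    """Gauss-Newton minimization of re-projection error over a 6-DOF pose.

    X_prev: 3D points triangulated from the previous stereo frame (N,3).
    x_curr: matched 2D feature locations in the current frame (N,2).
    """
    p = np.zeros(6)                  # [axis-angle rotation | translation]

    def residuals(p):
        return (project(K, rodrigues(p[:3]), p[3:], X_prev) - x_curr).ravel()

    for _ in range(iters):
        r = residuals(p)
        J = np.empty((r.size, 6))    # numeric Jacobian, one column per parameter
        for j in range(6):
            dp = np.zeros(6)
            dp[j] = 1e-6
            J[:, j] = (residuals(p + dp) - r) / 1e-6
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        p = p + step
        if np.linalg.norm(step) < 1e-10:
            break
    return rodrigues(p[:3]), p[3:]
```

In a full visual odometry pipeline, `X_prev` would come from stereo triangulation and `x_curr` from feature matching, with an outlier-removal stage before the optimization; the recovered per-frame (R, t) would then be chained in a pose-update step.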
Other Abstract
Table of Contents (with Page Numbers)
Chinese Abstract	I
Abstract	II
Table of Contents	III
List of Figures	IV
List of Tables	VI
Chapter 1  Introduction	1
1.1  Research Background	1
1.2  Research Motivation and Objectives	4
1.3  Thesis Organization	5
Chapter 2  Experimental System Overview	6
2.1  Hardware Overview	6
2.2  Stereo Vision Software Tools	10
Chapter 3  Related Visual Odometry Algorithms	12
3.1  Essential Matrix Method	12
3.1.1  Fundamental Matrix and Intrinsic Parameter Matrix	12
3.1.2  Essential Matrix Method	14
3.2  RANSAC Motion Estimation Method	16
Chapter 4  Stereo Visual Odometry Algorithm	22
4.1  Visual Odometry Preprocessing	23
4.1.1  Camera Intrinsic Parameter Estimation	23
4.2  Proposed Visual Odometry Algorithm	27
4.2.1  Feature Extraction Algorithm	28
4.2.2  Outlier Removal Algorithm	31
4.2.3  Re-projection Error Optimization	33
4.3  Pose Update	39
4.4  System Architecture	41
Chapter 5  Experimental Results and Analysis	44
5.1  Camera Intrinsic Parameter Estimation	45
5.2  Camera Pose Estimation Experiments	49
References	59

Figure 2.1  Bumblebee2 stereo vision camera.	7
Figure 2.2  Pan-Tilt Unit-D46 control platform.	8
Figure 2.3  FiveBOT 004 wheeled robot platform.	9
Figure 2.4  Hardware and software application flow.	10
Figure 3.1  3D mapping transformation.	13
Figure 3.2  Flowchart of the essential matrix method.	16
Figure 3.3  Flowchart of the RANSAC algorithm.	17
Figure 3.4  Flowchart of the RANSAC motion estimation method.	21
Figure 4.1  Main flowchart of the proposed method.	22
Figure 4.2  2D planar pattern reference.	24
Figure 4.3  Main flowchart of SIFT.	29
Figure 4.4  Illustration of assigning dominant orientations to feature points.	29
Figure 4.5  Illustration of descriptor construction.	30
Figure 4.6  Feature matching and outlier removal results.	33
Figure 4.7  Representation of the wheeled vehicle and camera.	39
Figure 4.8  Illustration of wheeled vehicle motion.	39
Figure 4.9  System architecture of the proposed algorithm.	41
Figure 5.1  Illustration of three-axis rotations and three-axis directions.	46
Figure 5.2  Flowchart of camera intrinsic parameter matrix estimation.	46
Figure 5.3  Scenes for camera intrinsic parameter estimation: (a) scene 1, (b) scene 2, (c) scene 3, (d) scene 4.	47
Figure 5.4  Actual experimental trajectories: (a) Experiment A, (b) Experiment B.	51
Figure 5.5  Mean-square-error (MSE) comparison between the proposed method and the RANSAC motion estimation method: (a) MSE comparison for Experiment A, (b) translation MSE comparison for Experiment B, (c) rotation MSE comparison for Experiment B.	55
Figure 5.6  Trajectory paths of the experimental results for the proposed method and the RANSAC method.	57

Table 2.1	Bumblebee2 specifications	9
Table 2.2	Pan-Tilt Unit-D46 specifications	9
Table 2.3	FiveBOT 004 wheeled robot platform specifications	10
Table 2.4	Computer specifications	10
Table 4.1	Comparison of four feature extraction methods	28
Table 5.1	Camera intrinsic parameter statistics and averages	48
Table 5.2	Experimental error percentage comparison	55
Table 5.3	Experimental deviation comparison	55
Terms of Use
Within Campus
On-campus access to my hard copy thesis/dissertation is open immediately
Agree to authorize disclosure on campus
Release delayed for 5 years.
Outside the Campus
I grant the authorization for the public to view/print my electronic full text with royalty fee and I donate the fee to my school library as a development fund.
Release delayed for 5 years.
