System ID | U0002-0907201118470600 |
---|---|
DOI | 10.6846/TKU.2011.00296 |
Title (Chinese) | 基於影像特徵點之全方位視覺里程計的設計 |
Title (English) | Image Feature-Based Omni-directional Visual Odometry Design |
Title (third language) | |
Institution | Tamkang University (淡江大學) |
Department (Chinese) | 電機工程學系碩士班 (Master's Program, Department of Electrical Engineering) |
Department (English) | Department of Electrical and Computer Engineering |
Foreign degree school | |
Foreign degree college | |
Foreign degree institute | |
Academic year | 99 (ROC calendar) |
Semester | 2 |
Publication year | 100 (ROC calendar; 2011) |
Author (Chinese) | 羅一喬 |
Author (English) | I-Chiao Luo |
Student ID | 698470019 |
Degree | Master's |
Language | Traditional Chinese |
Second language | |
Defense date | 2011-05-16 |
Page count | 87 |
Committee | Advisor - 翁慶昌 (onejoeluo@gmail.com); Member - 龔宗鈞 (cckung@ttu.edu.tw); Member - 許陳鑑 (jhsu@ntnu.edu.tw); Member - 李宜勳 (i_hsum@yahoo.com.tw); Member - 李世安 (lishyhan@gmail.com); Member - 翁慶昌 (wong@ee.tku.edu.tw) |
Keywords (Chinese) | 視覺里程計; 全方位視覺系統; 特徵點匹配; SURF |
Keywords (English) | Visual odometry; Omni-directional vision system; Feature matching; SURF |
Keywords (third language) | |
Subject classification | |
Abstract (Chinese, translated) |
This thesis presents the design and implementation of an image-feature-based omni-directional visual odometry. It comprises four parts: (1) distance model construction, (2) environmental feature extraction, (3) feature matching, and (4) visual odometry output. For the distance model, rational function interpolation replaces the traditional method; it reduces the number of sampling points the omni-directional vision system requires and yields a pixel-accurate omni-directional distance model. For feature extraction, the Speeded-Up Robust Features (SURF) algorithm is used to extract the environmental features of each frame, because it produces a large number of robust feature points. For feature matching, a Main Dimensional Priority Search is proposed to replace the traditional k-dimensional tree (k-d tree). For the odometry output, a motion estimation step computes the robot's motion and relative displacement to produce the visual odometry output. Experimental results show that the proposed visual odometry outperforms wheeled odometry in overall translation and rotation performance, and that it is insensitive to terrain and friction; it can therefore replace wheeled odometry as a sensing source for sensor fusion. |
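The feature-matching step described above pairs SURF descriptors (64-dimensional vectors) between consecutive frames by Euclidean distance. As a minimal stand-in sketch, a plain linear scan with Lowe's ratio test is shown below; this is not the thesis's Main Dimensional Priority Search, and the function name `match_features` is hypothetical:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.7):
    """Match two descriptor sets by nearest neighbour with a ratio test.

    desc_a, desc_b: (N, 64) arrays of SURF-style descriptors.
    Returns a list of (i, j) index pairs, i into desc_a and j into desc_b.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in desc_b.
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up.
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```

Tree-based schemes (k-d tree, best-bin-first) and the thesis's pruning method replace the exhaustive inner scan above with a pruned search; the acceptance criterion stays the same.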
Abstract (English) |
An image-feature-based omni-directional visual odometry is designed and implemented in this thesis. It consists of four parts: (1) distance model construction, (2) feature extraction, (3) feature matching, and (4) visual odometry output. In distance model construction, a rational function interpolation method is used to build the distance model; compared with the traditional calibration method, it reduces the number of sampling points needed to obtain a pixel-accurate distance model for the omni-directional vision system. In feature extraction, the SURF (Speeded-Up Robust Features) algorithm is used to extract environmental features from each frame, because it detects many features and those features are robust. In feature matching, a new method called the Main Dimensional Priority Search is proposed; compared with k-dimensional tree search, it removes the tree-building step, increasing matching speed by 450%. In the visual odometry output, motion estimation is used to compute the relative movement of the robot. Experimental results show that the overall performance of the proposed visual odometry is better than that of traditional wheeled odometry. |
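The distance model maps image radius in pixels to ground distance from a few calibration samples. The rational function interpolation mentioned above can be sketched with the classic Bulirsch–Stoer diagonal rational interpolant (the thesis cites Stoer & Bulirsch for its numerical methods); the sample data in the test is a made-up stand-in, not the thesis's calibration data:

```python
def ratint(xs, ys, x, tiny=1e-25):
    """Evaluate the Bulirsch-Stoer diagonal rational interpolant at x.

    xs, ys: sample points (e.g. pixel radius -> measured ground distance).
    Uses the Neville-type recurrence for rational interpolation.
    """
    n = len(xs)
    # Return exactly at a sample node; otherwise find the node nearest to x.
    ns = 0
    for i, xi in enumerate(xs):
        if x == xi:
            return ys[i]
        if abs(x - xi) < abs(x - xs[ns]):
            ns = i
    c = list(ys)
    d = [y + tiny for y in ys]  # tiny offset guards against 0/0
    y = ys[ns]
    ns -= 1
    for m in range(1, n):
        for i in range(n - m):
            w = c[i + 1] - d[i]
            t = (xs[i] - x) * d[i] / (xs[i + m] - x)
            dd = t - c[i + 1]
            if dd == 0.0:
                raise ZeroDivisionError("interpolant has a pole at x")
            dd = w / dd
            d[i] = c[i + 1] * dd
            c[i] = t * dd
        # Walk down the tableau, choosing the correction nearest the center.
        if 2 * (ns + 1) < n - m:
            y += c[ns + 1]
        else:
            y += d[ns]
            ns -= 1
    return y
```

Because the interpolant is a ratio of polynomials, it can follow the sharply curved pixel-to-distance mapping of a catadioptric mirror with far fewer sample points than piecewise-linear calibration.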
Abstract (third language) | |
Table of contents |
Contents I
List of Figures IV
List of Tables IX
Chapter 1 Introduction 1
1.1 Research background 1
1.2 Research motivation 3
1.3 Thesis organization 4
Chapter 2 Middle-Size Robot Overview 5
2.1 System architecture of the middle-size soccer robot 5
2.2 Locomotion system of the middle-size soccer robot 9
2.3 Omni-directional vision system 13
2.3.1 Omni-directional imaging setup 14
2.3.2 Omni-directional image distance model construction 17
Chapter 3 Feature Point Detection and Description 21
3.1 Introduction 21
3.2 Interest point detection 25
3.2.1 Integral images 25
3.2.2 Hessian-matrix-based interest points 27
3.2.3 Scale-space representation 29
3.2.4 Interest point localization 34
3.3 Feature description and fast matching index 37
3.3.1 Orientation assignment 37
3.3.2 Haar-wavelet-response-based descriptor 39
3.3.3 Fast matching index 42
3.4 Feature matching 44
3.4.1 k-d tree 44
3.4.2 Backtracking branch and bound 48
3.4.3 Best-bin-first search 55
3.4.4 Main dimensional priority search 58
Chapter 4 Visual Odometry 61
4.1 Feature tracking 62
4.2 Motion estimation 67
Chapter 5 Experiments and Results 70
5.1 Main dimensional priority search test 70
5.2 Rational-function-based omni-directional distance model test 73
5.3 Visual odometry translation test 75
5.4 Visual odometry rotation test 79
Chapter 6 Conclusions and Future Work 83
References 84
List of Figures
Fig. 2.1 Hardware appearance of the middle-size soccer robot 5
Fig. 2.2 System architecture diagram of the middle-size soccer robot 6
Fig. 2.3 Hardware configuration of the omni-directional locomotion system 8
Fig. 2.4 System function diagram of the middle-size soccer robot 8
Fig. 2.5 Omni-directional wheel schematic 9
Fig. 2.6 Omni-directional wheel arrangement of the middle-size soccer robot 10
Fig. 2.7 Wheel arrangement vs. velocity relationship 11
Fig. 2.8 Photograph of the conical mirror 13
Fig. 2.9 Light capture in the omni-directional image 14
Fig. 2.10 Omni-directional image frame 14
Fig. 2.11 Geometric relationship between conical mirror and camera 15
Fig. 2.13 Basic setup of the omni-directional image 16
Fig. 2.14 Concept of the omni-directional distance-model construction method 17
Fig. 2.15 Distance model construction result 19
Fig. 2.16 The two distance models, separated 19
Fig. 2.17 The two distance models, combined 20
Fig. 3.1 Integral image 26
Fig. 3.2 Box-filter convolution mask operation 26
Fig. 3.3 Gaussian second-order derivative masks vs. their box approximations 28
Fig. 3.4 SIFT (right) vs. SURF (left) scale-space construction 30
Fig. 3.5 Mask size growth 31
Fig. 3.6 Filter areas of the first three octaves 32
Fig. 3.7 Interest point distribution 33
Fig. 3.8 Non-maximum suppression 34
Fig. 3.9 Haar wavelet masks 38
Fig. 3.10 Interest point orientation assignment 39
Fig. 3.11 Feature vector construction 40
Fig. 3.12 Feature vector properties 41
Fig. 3.13 Feature vector noise robustness comparison 42
Fig. 3.14 Fast index matching scheme 43
Fig. 3.15 2-d tree construction, first split 44
Fig. 3.16 2-d tree construction, second split 45
Fig. 3.17 2-d tree construction, third split 45
Fig. 3.18 2-d tree construction, fourth split 45
Fig. 3.19 2-d tree space partitioning 46
Fig. 3.20 Correct nearest-neighbor selection by the 2-d tree 47
Fig. 3.21 Incorrect nearest-neighbor selection by the 2-d tree 47
Fig. 3.22 Backtracking branch-and-bound search range 49
Fig. 3.23 Backtracking branch-and-bound search, step 1 50
Fig. 3.24 Backtracking branch-and-bound search, step 2 50
Fig. 3.25 Backtracking branch-and-bound search, step 3 50
Fig. 3.26 Backtracking branch-and-bound search, step 4 51
Fig. 3.27 Backtracking branch-and-bound search, step 5 51
Fig. 3.28 Backtracking branch-and-bound search, step 6 51
Fig. 3.29 Backtracking branch-and-bound search, step 7 52
Fig. 3.30 Backtracking branch-and-bound search, step 8 52
Fig. 3.31 Backtracking branch-and-bound search, step 9 52
Fig. 3.32 Backtracking branch-and-bound search, step 10 53
Fig. 3.33 Backtracking branch-and-bound search, step 11 53
Fig. 3.34 Backtracking branch-and-bound search, step 12 53
Fig. 3.35 Backtracking branch-and-bound search, step 13 54
Fig. 3.36 Backtracking branch-and-bound search, step 14 54
Fig. 3.37 Backtracking branch-and-bound search, step 15 54
Fig. 3.38 Best-bin-first search, step 1 56
Fig. 3.39 Best-bin-first search, step 2 56
Fig. 3.40 Best-bin-first search, step 3 57
Fig. 3.41 Best-bin-first search, step 4 57
Fig. 3.42 Main dimensional priority search, first pruning 60
Fig. 3.43 Main dimensional priority search, second pruning 60
Fig. 4.1 Omni-directional image distortion 62
Fig. 4.2 Discontinuity at the boundary between the far and near distance models 63
Fig. 4.3 Rational-function-interpolated omni-directional model 66
Fig. 4.4 Robot movement states 67
Fig. 4.5 Robot motion determination 68
Fig. 4.6 Feature point displacement under robot rotation 69
Fig. 5.1 Distance threshold vs. number of candidate points 70
Fig. 5.2 Distance threshold vs. accuracy 71
Fig. 5.3 Distance threshold vs. computation time 72
Fig. 5.4 Pixel-to-distance relationship: rational vs. linear interpolation 74
Fig. 5.5 Visual odometry translation error 77
Fig. 5.6 Actual vs. estimated travel distance 78
Fig. 5.7 Visual odometry rotation error 81
Fig. 5.8 Actual vs. estimated rotation 82
List of Tables
Table 5.1 Analysis of the omni-directional rational-function interpolation model 73
Table 5.2 Visual odometry translation analysis 76
Table 5.3 Odometry rotation analysis 80 |
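Chapter 4's motion estimation step recovers the robot's rotation and translation from the displacement of tracked features between frames. A common closed-form approach for this kind of problem is 2-D rigid (least-squares) alignment of the matched point sets; the sketch below is a generic illustration of that idea, not the thesis's exact procedure:

```python
import numpy as np

def estimate_rigid_motion(p, q):
    """Least-squares rotation angle and translation mapping points p -> q.

    p, q: (N, 2) arrays of matched feature positions in two frames.
    Returns (theta, t) such that q ~= R(theta) @ p + t for each point.
    """
    # Center both point sets so rotation can be solved independently.
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    P, Q = p - cp, q - cq
    # Sums of 2-D cross and dot products give the optimal angle via atan2.
    s = np.sum(P[:, 0] * Q[:, 1] - P[:, 1] * Q[:, 0])
    c = np.sum(P[:, 0] * Q[:, 0] + P[:, 1] * Q[:, 1])
    theta = np.arctan2(s, c)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = cq - R @ cp
    return theta, t
```

Accumulating the per-frame (theta, t) estimates over time yields the odometry trajectory; in practice outlier matches would be rejected first (e.g. by a robust-fitting step) before the alignment.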
Full-text access rights | |