System ID | U0002-3006200917254500 |
---|---|
DOI | 10.6846/TKU.2009.01134 |
Title (Chinese) | 應用分佈的主動嵌入式視覺網路系統於差速機器人之導引 |
Title (English) | Distributed Active Embedded Vision System for the Navigation of Differential Mobile Robot |
Title (third language) | |
University | Tamkang University |
Department (Chinese) | 電機工程學系碩士班 |
Department (English) | Department of Electrical and Computer Engineering |
Foreign-degree university | |
Foreign-degree college | |
Foreign-degree institute | |
Academic year (ROC) | 97 |
Semester | 2 |
Year of publication (ROC) | 98 (2009 CE) |
Author (Chinese) | 劉家箕 |
Author (English) | Chia-Chi Liu |
Student ID | 695460369 |
Degree | Master's |
Language | Traditional Chinese |
Second language | |
Oral defense date | 2009-06-16 |
Number of pages | 42 |
Oral defense committee |
Advisor: 黃志良 (clhwang@mail.tku.edu.tw)
Member: 施慶隆 (shihcl@mail.ntust.edu.tw)
Member: 洪敏雄 (mhhung@ieee.org)
Member: 楊智旭 (jrsyu@tedns.te.tku.edu.tw)
Member: 王銀添 (ytwang@mail.tku.edu.tw) |
Keywords (Chinese) |
分佈視覺網路空間; 輪型機器人導航; 導航軌跡規劃避障; 模糊可變結構控制; 多處理器控制系統 |
Keywords (English) |
Distributed active-vision pan-tilt-zoom system; Differential mobile robot; Navigation; Trajectory tracking; Fuzzy sliding-mode decentralized control; Multi-processor |
Keywords (third language) | |
Discipline classification | |
Abstract (Chinese, translated) |
This thesis uses the Texas Instruments digital signal processor TMS320DM642EVM, together with its software environment (Code Composer Studio), to implement the navigation of a wheeled robot within a distributed active-vision network system. The color format used is YCrCb, where Y denotes the luminance, Cr the red chrominance, and Cb the blue chrominance. A red rectangular feature is attached to the wheeled robot to ease its recognition and localization. First, the image signal from the pan-tilt platform is fed into the TMS320DM642 for image processing, including segmentation of the red component, binarization, noise removal with a median filter, computation of the feature area, and computation of the coordinates of its center. The obstacle's features are a blue color and a circular shape, and its image processing is similar to that of the wheeled robot.

A distributed network-space system is one that can monitor events occurring in its own space, build its own space model, communicate with its neighbors, and act on its own decisions. For example, the wheeled robot is designed to track polyline trajectories imposed by building constraints; the results show that path planning with polyline trajectories is quite practical. Moreover, when the wheeled robot operates in a distributed sensing network space, many of the problems encountered by a conventional wheeled robot admit alternative solutions. On the other hand, almost all distributed CCDs are fixed, so the region they monitor is limited; enlarging it requires more CCDs, which makes the system more complex. Although an omnidirectional vision system can view a 360-degree region, its large image distortion makes its image processing computationally expensive and its estimation (or calibration) error large.

Therefore, this thesis uses a multilayer neural network to establish the relation (or transformation) between image-plane coordinates and world coordinates. Its inputs are the pan-tilt platform's x-axis angle, its y-axis angle, and the image-plane X and Y coordinates; its outputs are the corresponding world coordinates. By training a multilayer network with two hidden layers on the input and output data, an effective mathematical model is obtained. Experimental results show that the constructed vision system can process images in environments with non-uniform brightness and reflections, and accomplishes trajectory tracking and obstacle avoidance for the wheeled robot. |
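The recognition pipeline described in the abstract (Cr-channel segmentation, binarization, 3x3 median filtering, then area and centroid computation) can be sketched in plain NumPy. The threshold value and window size below are illustrative assumptions, not the thesis's actual parameters:

```python
import numpy as np

def locate_red_feature(ycrcb, cr_threshold=150):
    """Segment the Cr (red-chrominance) channel, binarize, apply a 3x3
    median filter, then return the blob area and its centroid in image
    coordinates. Threshold and window size are illustrative."""
    cr = ycrcb[..., 1]                               # Cr channel of a YCrCb image
    binary = (cr > cr_threshold).astype(np.uint8)    # binarization

    # 3x3 median filter to remove salt-and-pepper noise (plain NumPy).
    padded = np.pad(binary, 1, mode="edge")
    stacked = np.stack([padded[r:r + binary.shape[0], c:c + binary.shape[1]]
                        for r in range(3) for c in range(3)])
    filtered = np.median(stacked, axis=0).astype(np.uint8)

    area = int(filtered.sum())                       # blob area in pixels
    if area == 0:
        return 0, None
    ys, xs = np.nonzero(filtered)
    centroid = (float(xs.mean()), float(ys.mean()))  # (X, Y) image coordinates
    return area, centroid
```

In the thesis this runs on the DSP; the sketch only shows the arithmetic of each stage, with the median filter built from nine shifted views of the padded binary image.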
Abstract (English) |
The navigation of a differential wheeled robot (DWR) in a distributed active-vision pan-tilt-zoom system (DAVPTZS) is developed. The thesis uses the TMS320DM642EVM digital signal processor from Texas Instruments as its main platform. The color-space format is YCrCb, where Y denotes the luminance, Cr the red chrominance, and Cb the blue chrominance. For easy recognition and localization of the DWR, a red rectangular feature is placed on its top. First, the visual information coming from the speed dome is transferred to the TMS320DM642 for the corresponding image processing, including segmentation of the Cr component, binarization, noise removal by a median filter, calculation of the area of the image feature, and computation of the coordinates of the feature's center. Similarly, an obstacle, marked by a blue color and a circular shape, is handled by the same procedure. Recently, distributed control applications within sensor networks have become more important. Many of the problems encountered by classic wheeled robots (e.g., localization, high computational demand, different software for different kinds of mobile robot, interference among sensors) are solved when the robots operate in a distributed network space. However, almost all distributed CCDs are fixed; therefore, the visible region is limited, or the number of CCDs must increase to monitor a larger area. Although an omnidirectional vision system (ODVS) possesses a 360-degree view angle, it has the following disadvantages: owing to image distortion, its image processing is time-consuming and its estimation (or calibration) error is large. In this situation, an MLP with two hidden layers is employed to establish the relation between image-plane coordinates and world coordinates. The inputs to the MLP are the pan and tilt angles of the speed dome and the image-plane X and Y coordinates; the outputs are the corresponding world coordinates. After effective learning, the MLP is applied to navigate the mobile robot to track a specific trajectory and to avoid a known obstacle. |
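The image-plane-to-world mapping described above can be sketched as a small forward-pass MLP with two hidden layers. The layer widths, activation function, and initialization below are assumptions for illustration (the record does not state them); in the thesis the weights are learned from measured input/output pairs:

```python
import numpy as np

class TwoHiddenMLP:
    """MLP with two hidden layers mapping (pan angle, tilt angle,
    image-plane X, image-plane Y) to world coordinates (x_w, y_w).
    Layer widths, tanh activations, and random initialization are
    illustrative assumptions, not the thesis's stated design."""

    def __init__(self, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        sizes = [4, hidden, hidden, 2]   # 4 inputs -> two hidden layers -> 2 outputs
        self.weights = [rng.normal(0.0, 0.5, (a, b))
                        for a, b in zip(sizes, sizes[1:])]
        self.biases = [np.zeros(b) for b in sizes[1:]]

    def forward(self, x):
        h = np.asarray(x, dtype=float)
        for w, b in zip(self.weights[:-1], self.biases[:-1]):
            h = np.tanh(h @ w + b)       # nonlinear hidden activations
        return h @ self.weights[-1] + self.biases[-1]  # linear output layer
```

With weights fitted by backpropagation on calibration data, `forward` replaces an explicit camera model when converting a detected image-plane centroid into a world-frame position.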
Abstract (third language) | |
Table of contents |
Contents
Chinese Abstract ... I
English Abstract ... II
Contents ... III
List of Figures ... V
List of Tables ... VII
Chapter 1 Introduction ... 1
Chapter 2 System Description and Research Tools ... 4
2.1 System description ... 4
2.2 Research tasks ... 7
Chapter 3 Image Processing and Coordinate Transformation ... 9
3.1 Image processing ... 9
3.1.1 Image acquisition ... 11
3.1.2 Image segmentation ... 12
3.1.3 Binarization ... 12
3.1.4 Image area and center ... 12
3.1.5 Median filtering ... 13
3.1.6 Shape recognition ... 14
3.2 Coordinate transformation ... 15
3.2.1 Camera geometric model ... 17
3.2.2 Multilayer perceptron neural network ... 19
Chapter 4 2-D Kinematics and Control of the Differential Robot ... 23
Chapter 5 Experimental Results and Discussion ... 27
5.1 Experimental preparation ... 27
5.2 Experimental results ... 28
Chapter 6 Conclusions ... 38
References ... 40

List of Figures
Fig. 2.1 Schematic of the overall system ... 6
Fig. 2.2 Photograph of the differential mobile robot ... 6
Fig. 3.1 Original image ... 9
Fig. 3.2 Binarized image of the red DMR ... 10
Fig. 3.3 Image of the DMR after median filtering ... 10
Fig. 3.4 Binarized image of the blue obstacle ... 10
Fig. 3.5 Image of the obstacle after median filtering ... 11
Fig. 3.6 Pose computation and image features of the DMR in the image plane ... 13
Fig. 3.7 Flowchart of the image processing ... 15
Fig. 3.8 Transformation between image-plane and world coordinates ... 17
Fig. 3.9 Architecture of the multilayer perceptron neural network ... 19
Fig. 3.10 Acquisition of representative training data for the MLP_NN ... 21
Fig. 3.11 Result of training the MLP_NN for 10000 iterations ... 21
Fig. 3.12 Flowchart of establishing the image-plane-to-world coordinate transformation with the MLP_NN ... 22
Fig. 4.1 Strategy for polyline trajectory tracking ... 25
Fig. 4.2 Fine-tuning-mode method ... 26
Fig. 5.1 Butterfly-style search for a DMR not yet locked by the AEVS ... 28
Fig. 5.2 Trajectory-tracking response, DMR initially on the trajectory and locked ... 29
Fig. 5.3 Trajectory-tracking response, DMR initially off the trajectory and locked ... 30
Fig. 5.4 Trajectory-tracking response, DMR initially off the trajectory and not locked ... 30
Fig. 5.5 Trajectory-tracking response with an initial condition different from Fig. 5.4, off the trajectory and not locked ... 31
Fig. 5.6 DMR trajectory-tracking response for the example of Fig. 5.3 with one obstacle ... 31
Fig. 5.7 DMR trajectory-tracking response for the example of Fig. 5.6 with a different obstacle ... 32
Fig. 5.8 Response with two obstacles ... 32
Fig. 5.9 Response with two obstacles and a different trajectory ... 33 |
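Chapter 4 of the thesis covers the 2-D kinematics of the differential robot. The record does not reproduce its equations, but the standard unicycle model usually used for such robots can be sketched as follows (a generic textbook form with hypothetical parameter names, not the thesis's own derivation):

```python
import math

def ddrive_step(x, y, theta, v_l, v_r, wheel_base, dt):
    """One Euler step of the standard differential-drive (unicycle) model:
    v = (v_r + v_l) / 2, omega = (v_r - v_l) / wheel_base.
    Generic textbook kinematics; parameter names are hypothetical."""
    v = 0.5 * (v_r + v_l)             # forward speed of the axle midpoint
    omega = (v_r - v_l) / wheel_base  # yaw rate from the wheel-speed difference
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```

Equal wheel speeds give straight-line motion; opposite speeds give rotation in place, which is what lets a differential robot follow a polyline trajectory by turning at each vertex.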
References |
[1] T. Matsuyama and N. Ukita, "Real-time multitarget tracking by a cooperative distributed vision system," Proceedings of the IEEE, vol. 90, no. 7, pp. 1136-1150, 2002.
[2] T. Yamaguchi, E. Sato and Y. Takama, "Intelligent space and human centered robotics," IEEE Trans. Ind. Electron., vol. 50, no. 5, pp. 881-889, Oct. 2003.
[3] J. H. Lee and H. Hashimoto, "Controlling mobile robots in distributed intelligent sensor network," IEEE Trans. Ind. Electron., vol. 50, no. 5, pp. 890-902, Oct. 2003.
[4] V. Lippiello, B. Siciliano and L. Villani, "Position-based visual servoing in industrial multirobot cells using a hybrid camera configuration," IEEE Trans. Robotics, vol. 23, no. 1, pp. 73-86, Feb. 2007.
[5] E. Menegatti, G. Cicirelli, C. Simionato, T. D'Orazio, and H. Ishiguro, "Explicit knowledge distribution in an omnidirectional distributed vision system," Proc. of Int. Conf. on Intell. Robots and Syst., Sendai, Japan, pp. 2743-2750, 2004.
[6] M. M. Trivedi, K. S. Huang and I. Mikic, "Dynamic context capture and distributed video arrays for intelligent space," IEEE Trans. Syst. Man & Cybern., Part A, vol. 35, no. 1, pp. 22-27, Jan. 2005.
[7] F. Doctor, H. Hagras and V. Callaghan, "A fuzzy embedded agent-based approach for realizing ambient intelligence in intelligent inhabited environments," IEEE Trans. Syst. Man & Cybern., Part A, vol. 35, no. 1, pp. 55-65, Jan. 2005.
[8] C. L. Hwang and L. J. Chang, "Trajectory tracking and obstacle avoidance of mobile robots in an intelligent space using mixed decentralized control," IEEE/ASME Trans. Mechatronics, vol. 12, no. 3, pp. 345-352, Jun. 2007.
[9] C. L. Hwang and C. Y. Shih, "A distributed active-vision network-space approach for trajectory tracking and obstacle avoidance of a wheeled robot," IEEE Trans. Ind. Electron., vol. 56, no. 3, pp. 846-855, Mar. 2009.
[10] D. Lee and W. Chung, "Discrete-status-based localization for indoor service robots," IEEE Trans. Ind. Electron., vol. 53, no. 5, pp. 1737-1746, Oct. 2006.
[11] S. Han, H. S. Lim and J. M. Lee, "An efficient localization scheme for a differential-driving mobile robot based on RFID system," IEEE Trans. Ind. Electron., vol. 54, no. 6, pp. 3362-3369, Dec. 2007.
[12] K. Briechle and U. D. Hanebeck, "Localization of a mobile robot using relative bearing measurements," IEEE Trans. Robot. & Automat., vol. 20, no. 1, pp. 36-44, Jan. 2004.
[13] S. Se, D. G. Lowe and J. J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Trans. Robotics, vol. 21, no. 3, pp. 364-375, Jun. 2005.
[14] J. Minguez and L. Montano, "Nearness diagram (ND) navigation: collision avoidance in troublesome scenarios," IEEE Trans. Robot. & Automat., vol. 20, no. 1, pp. 45-59, Jan. 2004.
[15] T. H. S. Li and S. J. Chang, "Fuzzy target tracking control of autonomous mobile robots by using infrared sensors," IEEE Trans. Fuzzy Syst., vol. 12, no. 4, pp. 491-501, Aug. 2004.
[16] A. A. Argyros, D. P. Tsakiris and C. Groyer, "Biomimetic centering behavior: mobile robots with panoramic sensors," IEEE Robotics & Automat. Mag., pp. 21-31, Dec. 2004.
[17] E. Menegatti, A. Pretto, A. Scarpa, and E. Pagello, "Omni-directional vision scan matching for robot localization in dynamic environments," IEEE Trans. Robotics, vol. 22, no. 3, pp. 523-535, Jun. 2006.
[18] I. H. Chen and S. J. Wang, "An effective approach for the calibration of multiple PTZ cameras," IEEE Trans. Automat. Sci. & Engr., vol. 4, no. 2, pp. 286-293, Apr. 2007.
[19] K. T. Song and J. C. Tai, "Dynamic calibration of pan-tilt-zoom cameras for traffic monitoring," IEEE Trans. Syst. Man & Cybern., Part B, vol. 36, no. 5, pp. 1091-1103, Oct. 2006.
[20] I. Baturone, F. J. Moreno-Velo, S. Sanchez-Solano and A. Ollero, "Automatic design of fuzzy controllers for car-like autonomous robots," IEEE Trans. Fuzzy Syst., vol. 12, no. 4, pp. 447-465, Aug. 2004.
[21] I. Baturone, F. J. Moreno-Velo, V. Blanco and J. Ferruz, "Design of embedded DSP-based fuzzy controllers for autonomous mobile robots," IEEE Trans. Ind. Electron., vol. 55, no. 2, pp. 928-936, Feb. 2008.
[22] P. Vadakkepat, P. Lim, L. C. De Silva, L. Jing and L. L. Ling, "Multimodal approach to human-face detection and tracking," IEEE Trans. Ind. Electron., vol. 55, no. 3, pp. 1385-1393, Mar. 2008.
[23] D. Xu, Y. F. Li, M. Tan and Y. Shen, "A new active visual system for humanoid robots," IEEE Trans. Syst. Man & Cybern., Part B, vol. 38, no. 2, pp. 320-330, Apr. 2008.
[24] Y. Han, "Imitation of human-eye motion: how to fix gaze of an active vision system," IEEE Trans. Syst. Man & Cybern., Part A, vol. 37, no. 6, pp. 854-863, Nov. 2007.
Full-text availability |