System ID | U0002-0608201213564300 |
---|---|
DOI | 10.6846/TKU.2012.00249 |
Title (Chinese) | A Study on Applying Neural-Network-Localization-Based Active Vision to Humanoid Robot Search and Penalty Kick |
論文名稱(英文) | “Search, Track and Kick to Virtual Target Point” of Humanoid Robots by a Neural-Network-Based Active Embedded Vision System |
Title (Third Language) | |
University | Tamkang University |
Department (Chinese) | Master's Program, Department of Electrical Engineering |
Department (English) | Department of Electrical and Computer Engineering |
Foreign Degree University | |
Foreign Degree College | |
Foreign Degree Graduate Institute | |
Academic Year | 100 (ROC calendar) |
Semester | 2 |
Publication Year | 101 (ROC calendar, i.e., 2012) |
Graduate Student (Chinese Name) | 周尹喆 |
Graduate Student (English Name) | Yin-Che Chou |
Student ID | 699460019 |
Degree | Master |
Language | Traditional Chinese |
Second Language | |
Oral Defense Date | 2012-07-11 |
Number of Pages | 57 |
Oral Defense Committee |
Advisor: 黃志良
Members: 施慶隆, 郝樹聲, 翁慶昌, 黃志良 |
Keywords (Chinese) |
Humanoid robot; Penalty kick; Target search; Image localization; Neural network modeling; Visual guidance; Posture adjustment |
Keywords (English) |
Humanoid robot; Penalty kick; Searching target; Image processing for localization; Modeling using multilayer neural network; Strategy for visual navigation; Posture revision |
Keywords (Third Language) | |
Subject Classification | |
Chinese Abstract |
The experimental platform of this study is a small humanoid robot with 23 degrees of freedom, a height of 65 cm, and a weight of 4 kg. Its core is the RB-100 embedded single-board computer, which drives the relevant motors through our human-machine interface and integration circuits so that the robot performs designated actions (e.g., walking straight, turning, kicking). The embedded vision system consists of a Texas Instruments TMS320C6713 digital signal processor, a VM480CCD vision module, and the associated software (Code Composer Studio), with which neural-network-based active search and localization of the target object is realized to guide the humanoid robot in executing a penalty kick. The research comprises four parts: mechanism design and gait planning of the humanoid robot, visual image processing, neural-network-based active search and localization of the target ball, and the strategy for guiding the humanoid robot. Since one of the most important factors in the penalty-kick task is the localization accuracy of the target ball, this thesis performs the localization with a neural network: the image-plane coordinates formed by visual projection are first transformed into world coordinates, which then guide the robot's actions. Once the vision system finds the target ball, the vision module feeds the captured image to the TMS320C6713 for image processing, including binarization, median filtering to remove noise, image correction, and computation of the ball's center position. The trained neural network then computes, in real time and accurately, the orientation and distance between the ball and the robot, guiding the robot toward the position set by the ball. When the robot reaches a point about 10 cm in front of the ball, the vision system searches for the goal. If the goal is found, its virtual center is localized so that the robot's kicking position can be corrected; once the correction is complete, the kick is executed. If the goal is not found, the robot kicks the ball forward, chases it, and searches again, repeating until the goal is found and the penalty-kick task is completed. Finally, experiments demonstrate the effectiveness and feasibility of the proposed method. |
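To make the processing chain described above concrete, the following is an illustrative sketch of the binarization, median filtering, and ball-center (centroid) steps. This is not the thesis's actual DSP code: the threshold value, image representation, and function names are assumptions for illustration only.

```python
from statistics import median

def binarize(gray, threshold=128):
    """Threshold a 2-D list of pixel intensities into a 0/1 mask."""
    return [[1 if v >= threshold else 0 for v in row] for row in gray]

def median_filter3(mask):
    """3x3 median filter for salt-and-pepper noise; borders are left as-is."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [mask[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = int(median(window))
    return out

def ball_center(mask):
    """Centroid (row, col) of the foreground pixels, or None if none remain."""
    pts = [(y, x) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
```

On a binarized frame, an isolated noise pixel is removed by the median filter while a compact ball blob survives, so the centroid of the filtered mask is a stable estimate of the ball's image position.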
English Abstract |
In this thesis, the Texas Instruments TMS320C6713 digital signal processor, the VM480CCD vision module, and the related software (Code Composer Studio) are employed to achieve "Search, Track and Kick to Virtual Target Point" for humanoid robots via a neural-network-based active embedded vision system. A human-machine interface is also designed on an onboard computer (the RB-100) to implement the various actions, e.g., walking, turning, and kicking, required by the penalty-kick task. The research integrates the following four parts: the humanoid robot's mechanism design and gait planning, visual image processing for the penalty kick, neural-network modeling for searching and positioning, and the strategy for visually navigating a humanoid robot to execute the penalty kick. One of the most important factors for a penalty kick is locating the ball accurately, so the relation between the image-plane coordinates and the earth coordinates of the ball is modeled by a neural network with a suitable learning law. First, the CCD module captures the visual image and sends it to the TMS320C6713 for the corresponding image processing, including binary segmentation, median filtering to remove noise, image correction, and calculation of the ball's center. These ball coordinates and their ground truths are then used to construct a neural-network model between the image-plane and earth coordinates. When the trained neural-network model has navigated the humanoid robot into the vicinity of the ball (about 10 cm away), the vision system starts searching for the goal. If the goal is found, the robot's posture is revised for the execution of the penalty kick; if it is not, the ball is kicked forward and then tracked, and the procedure repeats until the goal is found. Finally, the corresponding experiments confirm the effectiveness and efficiency of the proposed method. |
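The abstract states that the relation between image-plane and earth coordinates is modeled by a neural network with a suitable learning law, but gives no architecture details here. The sketch below is a generic one-hidden-layer multilayer perceptron trained by per-sample backpropagation on a synthetic perspective-like mapping; the layer sizes, learning rate, and training data are assumptions, not the thesis's values.

```python
import math
import random

def mlp_init(n_in=2, n_hidden=8, n_out=2, seed=0):
    """Random small weights; each row carries an extra bias entry."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(n_hidden)]
    w2 = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)] for _ in range(n_out)]
    return w1, w2

def mlp_forward(net, x):
    """tanh hidden layer, linear output layer."""
    w1, w2 = net
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x + [1.0]))) for row in w1]
    y = [sum(w * hi for w, hi in zip(row, h + [1.0])) for row in w2]
    return h, y

def mlp_train_step(net, x, target, lr=0.01):
    """One backpropagation step for the squared-error loss; returns the loss."""
    w1, w2 = net
    h, y = mlp_forward(net, x)
    err = [yi - ti for yi, ti in zip(y, target)]  # dLoss/dy (linear outputs)
    # Hidden deltas must use the pre-update output weights (tanh' = 1 - h^2).
    dh = [(1 - h[j] ** 2) * sum(err[k] * w2[k][j] for k in range(len(err)))
          for j in range(len(h))]
    for k in range(len(err)):
        for j in range(len(h)):
            w2[k][j] -= lr * err[k] * h[j]
        w2[k][-1] -= lr * err[k]
    for j in range(len(h)):
        for i in range(len(x)):
            w1[j][i] -= lr * dh[j] * x[i]
        w1[j][-1] -= lr * dh[j]
    return sum(e * e for e in err)
```

Trained on pairs of (u, v) pixel coordinates and measured ground positions, such a network serves as the image-to-world lookup that steers the robot toward the ball.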
Third-Language Abstract | |
Thesis Table of Contents |
Table of Contents
Chinese Abstract ..... I
English Abstract ..... II
Table of Contents ..... III
List of Figures ..... VI
List of Tables ..... IX
Chapter 1 Introduction ..... 1
1.1 Research Background ..... 1
1.2 Motivation and Objectives ..... 3
1.3 Thesis Organization ..... 4
Chapter 2 System Description and Task Statement ..... 5
2.1 Hardware Overview ..... 5
2.2 Robot Motion Control Interface ..... 8
2.3 Research Task ..... 9
Chapter 3 Basic Architecture of the Humanoid Robot ..... 12
3.1 Robot Architecture ..... 12
3.2 Upper-Body Servo System ..... 13
3.3 Lower-Body Servo System ..... 15
3.4 Mechanical Design of the Robot Body ..... 18
Chapter 4 Image Recognition and Processing ..... 19
4.1 Visual Image Processing and Recognition ..... 19
4.2 Image Capture, Segmentation, and Binarization ..... 20
4.3 Image Noise Processing ..... 22
4.4 Image Correction ..... 23
Chapter 5 Image-Based Localization Using Neural Networks ..... 25
5.1 Overview of Neural Networks ..... 25
5.2 Transformation Between Image and World Coordinates ..... 26
5.3 Neural Network Modeling ..... 27
5.4 Neural Network Error Analysis ..... 30
Chapter 6 Strategy for Target Search and Guided Kicking ..... 33
6.1 Target Search ..... 33
6.2 Computing Target Distance and Orientation ..... 35
6.3 Guidance Strategy and Gait Planning ..... 38
6.4 Goal Localization and Penalty-Kick Position Correction ..... 39
Chapter 7 Experimental Results and Discussion ..... 42
7.1 Experimental Preparation ..... 42
7.2 Experimental Results ..... 43
Chapter 8 Conclusions and Future Work ..... 54
8.1 Conclusions ..... 54
8.2 Future Work ..... 55
References ..... 56
List of Figures
Figure 2.1: Humanoid robot system with 23 degrees of freedom ..... 5
Figure 2.2: System architecture ..... 6
Figure 2.3: RB-100 embedded system ..... 6
Figure 2.4: Embedded vision system module ..... 8
Figure 2.5: Robot motion control interface ..... 9
Figure 2.6: Task flowchart ..... 11
Figure 3.1: Overall view of the humanoid robot ..... 12
Figure 3.2: Degrees of freedom of the robot's limbs and body ..... 13
Figure 3.3: Front view of the upper body ..... 14
Figure 3.4: Upper-body length layout ..... 14
Figure 3.5: AX-12 servo motor ..... 15
Figure 3.6: Lower-body layout ..... 16
Figure 3.7: Lower-body length layout ..... 17
Figure 3.8: RX-28 servo motor ..... 17
Figure 3.9: RX-64 servo motor ..... 18
Figure 3.10: Front view of the robot body ..... 18
Figure 4.1: Original captured image ..... 19
Figure 4.2: Image-processing flowchart ..... 20
Figure 4.3: Color image and binarization result ..... 21
Figure 4.4: Binarized image after median filtering ..... 23
Figure 4.5: Binarized image after image segmentation ..... 23
Figure 4.6: Image correction method ..... 24
Figure 4.7: Target image after image correction ..... 24
Figure 5.1: Visible range of the image tilt angle ..... 27
Figure 5.2: Transformation from image-plane to world coordinates ..... 28
Figure 5.3: Multilayer perceptron neural network architecture ..... 28
Figure 5.4: Representative training data for the multilayer perceptron ..... 29
Figure 5.5: Training results of the far-distance-mode multilayer perceptron ..... 30
Figure 5.6: Error analysis for the far-distance mode ..... 31
Figure 5.7: Error analysis for the mid-far-distance mode ..... 32
Figure 5.8: Error analysis for the mid-near-distance mode ..... 32
Figure 5.9: Error analysis for the near-distance mode ..... 32
Figure 6.1: Search directions ..... 33
Figure 6.2: Window overlap during search ..... 34
Figure 6.3: Search range ..... 35
Figure 6.4: Computing the target's direction and distance relative to the robot ..... 36
Figure 6.5: Guiding the humanoid robot to the target ..... 38
Figure 6.6: Computing the scoring point and correcting the penalty-kick position ..... 40
Figure 7.1: Humanoid robot system integration ..... 42
Figure 7.2: Guiding the humanoid robot ..... 43
Figure 7.3: Experimental results with the blue ball to the robot's left ..... 44
Figure 7.4: Tracking trajectory with the blue ball to the robot's left ..... 45
Figure 7.5: Experimental results with the green ball to the robot's right ..... 46
Figure 7.6: Experimental trajectory with the green ball to the robot's right ..... 47
Figure 7.7: Experimental results with the target ball far from the goal ..... 48
Figure 7.8: Experimental trajectory with the target ball far from the goal ..... 49
Figure 7.9: Experimental results with the target ball not in front of the robot (blue ball) ..... 50
Figure 7.10: Experimental trajectory with the target ball not in front of the robot (blue ball) ..... 51
Figure 7.11: Experimental results with the target ball not in front of the robot (green ball) ..... 52
Figure 7.12: Experimental trajectory with the target ball not in front of the robot (green ball) ..... 53
List of Tables
Table 2.1: RB-100 embedded system specifications ..... 7
Table 2.2: TMS320C6713 digital signal processor specifications ..... 8
Table 3.1: Arm servo specifications ..... 15
Table 3.2: Leg servo specifications ..... 17
Table 3.3: Knee-joint servo specifications ..... 18 |
References |
[1] K. Loffler, M. Gienger, F. Pfeiffer, and H. Ulbrich, "Sensors and control concept of a biped robot," IEEE Trans. Ind. Electron., vol. 51, no. 5, pp. 972-980, Oct. 2004.
[2] Q. Huang and Y. Nakamura, "Sensory reflex control for humanoid walking," IEEE Trans. Robotics, vol. 21, no. 5, pp. 977-984, Oct. 2005.
[3] Y. Guan, E. S. Neo, K. Yokoi, and K. Tanie, "Stepping over obstacles with humanoid robots," IEEE Trans. Robotics, vol. 22, no. 5, pp. 958-973, Oct. 2006.
[4] K. Harada, S. Kajita, F. Kanehiro, K. Fujiwara, K. Kaneko, K. Yokoi, and H. Hirukawa, "Real-time planning of humanoid robot's gait for force-controlled manipulation," IEEE/ASME Trans. Mechatron., vol. 12, no. 1, pp. 53-62, Feb. 2007.
[5] E. S. Neo, K. Yokoi, S. Kajita, and K. Tanie, "Whole-body motion generation integrating operator's intention and robot's autonomy in controlling humanoid robots," IEEE Trans. Robotics, vol. 23, no. 4, pp. 763-775, Aug. 2007.
[6] D. Xu, Y. F. Li, M. Tan, and Y. Shen, "A new active visual system for humanoid robots," IEEE Trans. Syst. Man & Cybern., Part B, vol. 38, no. 2, pp. 320-330, Apr. 2008.
[7] G. Arechavaleta, J. P. Laumond, H. Hicheur, and A. Berthoz, "An optimality principle governing human walking," IEEE Trans. Robotics, vol. 24, no. 1, pp. 5-14, Feb. 2008.
[8] L. Montesano, M. Lopes, A. Bernardino, and J. Santos-Victor, "Learning object affordances: from sensory-motor coordination to imitation," IEEE Trans. Robotics, vol. 24, no. 1, pp. 15-26, Feb. 2008.
[9] T. Nomura, T. Kanda, T. Suzuki, and K. Kato, "Prediction of human behavior in human-robot interaction using psychological scales for anxiety and negative attitudes toward robots," IEEE Trans. Robotics, vol. 24, no. 2, pp. 442-451, Apr. 2008.
[10] C. Fu and K. Chen, "Gait synthesis and sensory control of stair climbing for a humanoid robot," IEEE Trans. Ind. Electron., vol. 55, no. 5, pp. 2111-2120, May 2008.
[11] T. Kanda, T. Miyashita, T. Osada, Y. Haikawa, and H. Ishiguro, "Analysis of humanoid appearances in human-robot interaction," IEEE Trans. Robotics, vol. 24, no. 3, pp. 725-735, Jun. 2008.
[12] E. Yoshida, C. Esteves, I. Belousov, J. P. Laumond, T. Sakaguchi, and K. Yokoi, "Planning 3-D collision-free dynamic robotic motion through iterative reshaping," IEEE Trans. Robotics, vol. 24, no. 3, pp. 1186-1197, Oct. 2008.
[13] C. Chevallereau, J. W. Grizzle, and C. L. Shih, "Asymptotically stable walking of a five-link underactuated 3-D bipedal robot," IEEE Trans. Robotics, vol. 25, no. 1, pp. 37-50, Feb. 2009.
[14] J. Y. Choi, B. R. So, B. J. Yi, W. Kim, and I. H. Suh, "Impact based trajectory planning of a soccer ball in a kicking robot," in Proc. Int. Conf. Robot. Autom., Barcelona, Spain, 2005, pp. 2834-2840.
[15] C. L. Huang, H. C. Shih, and C. Y. Chao, "Semantic analysis of soccer video using dynamic Bayesian network," IEEE Trans. Multimedia, vol. 8, no. 4, pp. 749-760, Aug. 2006.
[16] S. Behnke, M. Schreiber, J. Stuckler, R. Renner, and H. Strasdat, "See, walk, and kick: Humanoid robots start to play soccer," in Proc. 6th IEEE-RAS Int. Conf. on Humanoid Robots, pp. 497-503, Dec. 4-6, 2006.
[17] Z. Chen and M. Hemami, "Sliding mode control of kicking a soccer ball in the sagittal plane," IEEE Trans. Syst. Man & Cybern., Part A, vol. 37, no. 6, Nov. 2007.
[18] C. M. Chang, M. F. Lu, C. Y. Hu, S. W. Lai, S. H. Liu, Y. T. Su, and T. H. S. Li, "Design and implementation of penalty kick function for small-sized humanoid robot by using FPGA," in Proc. IEEE Int. Conf. on Advanced Robotics and its Social Impacts, Taipei, Taiwan, Aug. 23-25, 2008.
[19] E. Menegatti, A. Pretto, A. Scarpa, and E. Pagello, "Omni-directional vision scan matching for robot localization in dynamic environments," IEEE Trans. Robotics, vol. 22, no. 3, pp. 523-535, Jun. 2006.
[20] K. T. Song and J. C. Tai, "Dynamic calibration of pan-tilt-zoom cameras for traffic monitoring," IEEE Trans. Syst. Man & Cybern., Part B, vol. 36, no. 5, pp. 1091-1103, Oct. 2006.
[21] I. H. Chen and S. J. Wang, "An effective approach for the calibration of multiple PTZ cameras," IEEE Trans. Autom. Sci. & Eng., vol. 4, no. 2, pp. 286-293, Apr. 2007.
[22] P. Vadakkepat, P. Lim, L. C. De Silva, L. Jing, and L. L. Ling, "Multimodal approach to human-face detection and tracking," IEEE Trans. Ind. Electron., vol. 55, no. 3, pp. 1385-1393, Mar. 2008.
[23] S. Haykin, Neural Networks and Learning Machines, 3rd ed., Prentice Hall, 2009.
[24] 陸念聞, "A study on neural-network-localization-based active vision for the penalty kick of humanoid robots" (in Chinese), Master's thesis, Department of Electrical Engineering, Tamkang University, June 2010.
[25] 黃俊豪, "Human-machine interface design and motion control of humanoid robots" (in Chinese), Master's thesis, Department of Electrical Engineering (Robotics Program), Tamkang University, June 2010. |
Full-Text Access Rights |