§ Thesis Bibliographic Record

System ID U0002-0507201018231300
DOI 10.6846/TKU.2010.00136
Title (Chinese) 應用類神經網路定位為基礎的主動視覺於人形機器人罰踢之研究
Title (English) Penalty Kick of a Humanoid Robot by a Neural-Network-Based Active Embedded Vision System
Institution 淡江大學 (Tamkang University)
Department (Chinese) 電機工程學系碩士班
Department (English) Department of Electrical and Computer Engineering
Academic year 98 (ROC calendar; 2009-2010)
Semester 2
Year of publication 99 (ROC calendar; 2010)
Author (Chinese) 陸念聞
Author (English) Nien-Wen Lu
Student ID 697470044
Degree Master's
Language Traditional Chinese
Date of oral defense 2010-06-23
Number of pages 61
Committee Advisor - 黃志良 (clhwang@mail.tku.edu.tw)
Member - 施慶隆
Member - 洪敏雄
Member - 蔡奇謚
Member - 黃志良
Keywords (Chinese) 人形機器人
罰踢
定位與影像處理
類神經網路建模
視覺導引策略
姿態修正
Keywords (English) Humanoid robot
Penalty kick
Image processing for localization
Modeling using multilayer neural network
Strategy for visual navigation
Posture revision
Abstract (Chinese)
This thesis applies the Texas Instruments TMS320C6713 digital signal processor, the VM480CCD vision module, and the associated software (Code Composer Studio) to realize an active vision system that localizes a target object with a neural network and guides a humanoid robot to execute a soccer penalty kick. The work integrates four parts: gait planning for the humanoid robot, visual image processing, neural-network localization, and the strategy for guiding the robot, which together accomplish the penalty-kick task.
    First, the image captured by the CCD vision module is fed to the TMS320C6713 for processing, including binarization, median filtering to remove noise, image correction, and computation of the target's area and centroid. A neural network then establishes the relationship (i.e., the transformation) between the image-plane coordinates and the world coordinates, so that the bearing and distance of the target relative to the humanoid robot can be computed accurately and the robot can be guided toward the target's location. Once the robot is within about 10 cm of the target, the vision system searches for the goal and localizes a virtual target position in order to correct the robot's posture for the penalty kick; when the posture correction is complete, the kick is executed.
    Because the accuracy of target localization is the most critical property of a vision system, this thesis adopts a neural network for localization, transforming the image plane formed by the visual projection into world coordinates and thereby guiding the humanoid robot to the planned posture (i.e., bearing and distance). The graphical human-machine interface developed in this work is used to design and plan the various motions needed to guide the robot, and the vision system and the robot's core system exchange data over RS-232 to carry out the task. Finally, the corresponding experiments verify the effectiveness and efficiency of the proposed approach.
Abstract (English)
This thesis uses the Texas Instruments TMS320C6713 digital signal processor, the vision module VM480CCD, and the related software (e.g., Code Composer Studio) to accomplish the penalty kick (PK) of a humanoid robot (HR) using neural-network-based localization. The work consists of four parts: gait planning, image processing, neural-network modeling, and the strategy for visual navigation, which together execute the PK task.
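As a rough illustration of how these four parts could be integrated, the C sketch below shows one possible top-level control loop; every type and function in it (capture_and_process, localize, navigate_step, align_to_goal, kick) is a hypothetical placeholder, not the thesis's actual code, and the hardware-facing modules are assumed to be implemented elsewhere.

/* Hypothetical top-level loop tying together the four parts:
 * image processing, neural-network localization, visual navigation,
 * and the penalty kick itself. */
typedef struct { double u, v; long area; } Target;  /* image-plane result */
typedef struct { double dist, bearing; } Pose;      /* world-frame result */

extern int  capture_and_process(Target *t);  /* CCD capture + image pipeline */
extern Pose localize(const Target *t);       /* neural-network mapping       */
extern void navigate_step(const Pose *p);    /* gait planning + strategy     */
extern void align_to_goal(void);             /* posture revision at the ball */
extern void kick(void);                      /* execute the penalty kick     */

void penalty_kick_task(void)
{
    Target t;
    for (;;) {
        if (capture_and_process(&t) != 0)
            continue;                  /* no target found: keep searching */
        Pose p = localize(&t);
        if (p.dist > 0.10) {           /* farther than about 10 cm away   */
            navigate_step(&p);         /* turn, then walk toward the ball */
        } else {
            align_to_goal();           /* search the goal, fix the posture */
            kick();
            break;
        }
    }
}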
  First, the CCD module captures the visual image, which is transferred to the TMS320C6713 for image processing, including binary segmentation to reduce storage and computational load, median filtering to remove noise, image restoration to improve accuracy, and calculation of the target position. A multilayer neural network is then used to model the relationship between the image-plane coordinates and the world coordinates. When the robot reaches the vicinity of the target (i.e., about 10 cm away), the vision system starts searching for the goal and the virtual target point in order to modify the HR's posture for the PK. After the posture revision, a fine visual window is employed to confirm the posture for the PK.
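A minimal C sketch of this pipeline, assuming the color match has already been reduced to a single 8-bit score per pixel; the frame size, threshold, and function names are illustrative, not taken from the thesis.

#include <stdint.h>
#include <string.h>

#define W 160   /* frame width  (placeholder) */
#define H 120   /* frame height (placeholder) */

/* Binary segmentation: keep pixels whose color score exceeds a threshold. */
static void binarize(const uint8_t *score, uint8_t *mask, uint8_t thresh)
{
    for (int i = 0; i < W * H; i++)
        mask[i] = (score[i] > thresh) ? 1 : 0;
}

/* 3x3 median filter on a binary mask: with 0/1 pixels, the median of the
 * nine neighbors is 1 exactly when at least five of them are 1. */
static void median3x3(const uint8_t *in, uint8_t *out)
{
    memset(out, 0, W * H);             /* border pixels are left at 0 */
    for (int y = 1; y < H - 1; y++)
        for (int x = 1; x < W - 1; x++) {
            int ones = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    ones += in[(y + dy) * W + (x + dx)];
            out[y * W + x] = (ones >= 5) ? 1 : 0;
        }
}

/* Area and centroid of the mask: the target's size and image position. */
static long centroid(const uint8_t *mask, double *u, double *v)
{
    long sx = 0, sy = 0, area = 0;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            if (mask[y * W + x]) { sx += x; sy += y; area++; }
    if (area == 0)
        return 0;                      /* no target in view */
    *u = (double)sx / (double)area;
    *v = (double)sy / (double)area;
    return area;
}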
  The most important requirement for the vision system is the accuracy of localization. Therefore, a neural-network-based active embedded vision system is developed to approximate the relation between the world coordinates and the image-plane coordinates. A human-machine interface is also used to design the desired motions of the HR, to connect the signals between the embedded vision system (TMS320C6713) and the central embedded system (RB-100), and then to navigate the HR to the planned posture for the PK. Finally, the corresponding PK experiments confirm the effectiveness and efficiency of the proposed system.
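The localization step can be pictured as a small multilayer perceptron that maps the target's image-plane centroid (u, v) to world coordinates (x, y), from which the distance and bearing used by the navigation strategy follow. The C sketch below assumes one tanh hidden layer whose sizes and weights are placeholders; the network actually used in the thesis, and its training data, are described in Chapter 6.

#include <math.h>

#define N_IN  2    /* image-plane inputs: u, v        */
#define N_HID 8    /* hidden units (placeholder size) */
#define N_OUT 2    /* world-coordinate outputs: x, y  */

/* Parameters trained offline on representative data (placeholders here). */
static double W1[N_HID][N_IN], b1[N_HID];
static double W2[N_OUT][N_HID], b2[N_OUT];

/* Forward pass: tanh hidden layer, linear output layer. */
static void mlp_localize(double u, double v, double *x, double *y)
{
    double in[N_IN] = { u, v };
    double hid[N_HID], out[N_OUT];

    for (int j = 0; j < N_HID; j++) {
        double s = b1[j];
        for (int i = 0; i < N_IN; i++)
            s += W1[j][i] * in[i];
        hid[j] = tanh(s);
    }
    for (int k = 0; k < N_OUT; k++) {
        double s = b2[k];
        for (int j = 0; j < N_HID; j++)
            s += W2[k][j] * hid[j];
        out[k] = s;
    }
    *x = out[0];
    *y = out[1];
}

/* Distance and bearing of the target in the robot frame (x forward,
 * y to the left); a positive bearing means the target is to the left. */
static void target_polar(double x, double y, double *dist, double *bearing)
{
    *dist = sqrt(x * x + y * y);
    *bearing = atan2(y, x);            /* radians */
}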
Table of Contents
Chinese Abstract.......................................I
English Abstract......................................II
Contents.............................................III
List of Figures.......................................VI
List of Tables........................................IX
Chapter 1 Introduction.................................1
1.1 Background.........................................1
1.2 Motivation and Objectives..........................2
1.3 Organization of the Thesis.........................4
Chapter 2 System Description and Task Statement........5
2.1 System Description.................................5
2.2 Research Task......................................8
Chapter 3 Model and Structure of the Humanoid Robot...10
3.1 Basic Structure of the Humanoid Robot.............10
3.2 Mechanism and Servo System of the 6-DOF Legs......11
3.3 Mechanism and Servo System of the 4-DOF Arms......14
3.4 Mechanism and Servo System of the 1-DOF Body (Including Waist)...16
3.5 Mechanical Design of the Robot Body...............17
Chapter 4 Human-Machine Interface of the Humanoid Robot...19
4.1 Human-Machine Interface...........................19
4.2 Editing Functions of the HMI......................20
4.3 Motor Feedback Interface..........................21
4.4 Combined-Motion Function..........................22
4.5 File Writing and Reading Functions................23
Chapter 5 Visual Image Recognition and Processing.....24
5.1 Image Processing and Recognition Methods..........24
5.2 Image Capture and Binarization....................25
5.3 Image Noise Handling..............................27
5.4 Image Correction..................................28
Chapter 6 Image Localization Using Neural Networks....30
6.1 Overview of Neural Networks.......................30
6.2 Transformation Between Image and World Coordinates...31
6.3 Neural-Network Modeling...........................32
6.4 Error Analysis of the Neural Network..............35
Chapter 7 Strategy for Guiding the Humanoid Robot's Penalty Kick...38
7.1 Computing the Target's Distance and Bearing.......38
7.2 Navigation Strategy and Gait Planning.............41
7.3 Computing the Goal's Bearing and Distance and Correcting the Kick Position...46
Chapter 8 Experimental Results and Discussion.........51
8.1 Experimental Preparation..........................51
8.2 Experimental Results..............................52
Chapter 9 Conclusions and Future Work.................58
9.1 Conclusions.......................................58
9.2 Future Work.......................................59
References............................................60
List of Figures
Fig. 2.1 The 22-DOF humanoid robot system..............6
Fig. 2.2 Embedded vision system module.................7
Fig. 2.3 System architecture...........................7
Fig. 2.4 Embedded system RB-100........................7
Fig. 3.1 Overall view of the humanoid robot...........10
Fig. 3.2 DOFs of the robot's limbs and body...........11
Fig. 3.3 Front view of the robot's legs...............12
Fig. 3.4 Side view of the robot's legs................13
Fig. 3.5 Length layout of the robot's lower body......13
Fig. 3.6 Front view of the robot's arms...............15
Fig. 3.7 Length layout of the robot's upper body......16
Fig. 3.8 Front view of the robot's waist..............17
Fig. 3.9 Front view of the robot's body...............18
Fig. 4.1 HMI architecture.............................19
Fig. 4.2 Editing functions of the HMI.................21
Fig. 4.3 Motor feedback interface.....................22
Fig. 4.4 Combined-motion function.....................23
Fig. 5.1 Original captured image......................24
Fig. 5.2 Image processing flowchart...................25
Fig. 5.3 Color image of the blue ball and its binarized image...26
Fig. 5.4 Color image of the yellow goal and its binarized image...26
Fig. 5.5 Binarized image after median filtering.......28
Fig. 5.6 Binarized image after image segmentation.....28
Fig. 5.7 Image correction method......................29
Fig. 5.8 Target image after image correction..........29
Fig. 6.1 Visible range of the camera tilt angle.......32
Fig. 6.2 Transformation from image-plane to world coordinates...33
Fig. 6.3 Architecture of the multilayer perceptron neural network...33
Fig. 6.4 Representative training data for the multilayer perceptron...34
Fig. 6.5 Training results of the multilayer perceptron for the long-distance mode...35
Fig. 6.6 Error analysis for the long-distance mode....36
Fig. 6.7 Error analysis for the medium-long-distance mode...37
Fig. 6.8 Error analysis for the medium-short-distance mode...37
Fig. 6.9 Error analysis for the short-distance mode...37
Fig. 7.1 Computing the target's bearing and distance relative to the robot...38
Fig. 7.2 Guiding the humanoid robot to the target.....40
Fig. 7.3 Humanoid robot turning left 1 degree.........42
Fig. 7.4 Humanoid robot turning left 5 degrees........43
Fig. 7.5 Humanoid robot turning left 10 degrees.......43
Fig. 7.6 Humanoid robot turning left 30 degrees.......44
Fig. 7.7 Humanoid robot walking straight with a large stride (10-12 cm)...44
Fig. 7.8 Humanoid robot walking straight with a medium stride (6-8 cm)...45
Fig. 7.9 Humanoid robot walking straight with a small stride (3-5 cm)...45
Fig. 7.10 Humanoid robot walking straight with a fine stride (1-3 cm)...46
Fig. 7.11 Computing the scoring point and correcting the penalty-kick position...47
Fig. 7.12 Humanoid robot side-stepping left...........49
Fig. 7.13 Humanoid robot side-stepping right..........49
Fig. 7.14 Humanoid robot kicking at the goal..........50
Fig. 8.1 System integration of the humanoid robot.....51
Fig. 8.2 Guiding the humanoid robot...................53
Fig. 8.3 Searching for the scoring point and correcting the humanoid robot's posture...53
Fig. 8.4 Correcting the penalty-kick position.........54
Fig. 8.5 Experimental results with the target to the robot's left...55
Fig. 8.6 Experimental results with the target in front of the robot...56
Fig. 8.7 Experimental results with the target to the robot's right...57
List of Tables
Table 2.1 Specifications of the embedded system RB-100...8
Table 3.1 Specifications of the knee-joint servos.....14
Table 3.2 Specifications of the leg servos............14
Table 3.3 Specifications of the arm servos............16
References
[1] K. Loffler, M. Gienger, F. Pfeiffer, and H. Ulbrich, “Sensors and control concept of a biped robot,” IEEE Trans. Ind. Electron., vol. 51, no. 5, pp. 972-980, Oct. 2004.
[2] Q. Huang and Y. Nakamura, “Sensory reflex control for humanoid walking,” IEEE Trans. Robotics, vol. 21, no. 5, pp. 977-984, Oct. 2005.
[3] Y. Guan, E. S. Neo, K. Yokoi, and K. Tanie, “Stepping over obstacles with humanoid robots,” IEEE Trans. Robotics, vol. 22, no. 5, pp. 958-973, Oct. 2006.
[4] K. Harada, S. Kajita, F. Kanehiro, K. Fujiwara, K. Kaneko, K. Yokoi, and H. Hirukawa, “Real-time planning of humanoid robot’s gait for force-controlled manipulation,” IEEE/ASME Trans. Mechatron., vol. 12, no. 1, pp. 53-62, Feb. 2007.
[5] E. S. Neo, K. Yokoi, S. Kajita, and K. Tanie, “Whole-body motion generation integrating operator’s intention and robot’s autonomy in controlling humanoid robots,” IEEE Trans. Robotics, vol. 23, no. 4, pp. 763-775, Aug. 2007.
[6] D. Xu, Y. F. Li, M. Tan, and Y. Shen, “A new active visual system for humanoid robots,” IEEE Trans. Syst., Man, Cybern. B, vol. 38, no. 2, pp. 320-330, Apr. 2008.
[7] G. Arechavaleta, J. P. Laumond, H. Hicheur, and A. Berthoz, “An optimality principle governing human walking,” IEEE Trans. Robotics, vol. 24, no. 1, pp. 5-14, Feb. 2008.
[8] L. Montesano, M. Lopes, A. Bernardino, and J. Santos-Victor, “Learning object affordances: from sensory–motor coordination to imitation,” IEEE Trans. Robotics, vol. 24, no. 1, pp. 15-26, Feb. 2008.
[9] T. Nomura, T. Kanda, T. Suzuki, and K. Kato, “Prediction of human behavior in human–robot interaction using psychological scales for anxiety and negative attitudes toward robots,” IEEE Trans. Robotics, vol. 24, no. 2, pp. 442-451, Apr. 2008.
[10] C. Fu and K. Chen, “Gait synthesis and sensory control of stair climbing for a humanoid robot,” IEEE Trans. Ind. Electron., vol. 55, no. 5, pp. 2111-2120, May 2008.
[11] T. Kanda, T. Miyashita, T. Osada, Y. Haikawa, and H. Ishiguro, “Analysis of humanoid appearances in human–robot interaction,” IEEE Trans. Robotics, vol. 24, no. 3, pp. 725-735, Jun. 2008.
[12] E. Yoshida, C. Esteves, I. Belousov, J. P. Laumond, T. Sakaguchi, and K. Yokoi, “Planning 3-D collision-free dynamic robotic motion through iterative reshaping,” IEEE Trans. Robotics, vol. 24, no. 5, pp. 1186-1197, Oct. 2008.
[13] C. Chevallereau, J. W. Grizzle, and C. L. Shih, “Asymptotically stable walking of a five-link underactuated 3-D bipedal robot,” IEEE Trans. Robotics, vol. 25, no. 1, pp. 37-50, Feb. 2009.
[14] J. Y. Choi, B. R. So, B. J. Yi, W. Kim, and I. H. Suh, “Impact based trajectory planning of a soccer ball in a kicking robot,” in Proc. Int. Conf. Robot. Autom., Barcelona, Spain, 2005, pp. 2834-2840.
[15] C. L. Huang, H. C. Shih, and C. Y. Chao, “Semantic analysis of soccer video using dynamic Bayesian network,” IEEE Trans. Multimedia, vol. 8, no. 4, pp. 749-760, Aug. 2006.
[16] S. Behnke, M. Schreiber, J. Stuckler, R. Renner, and H. Strasdat, “See, walk, and kick: Humanoid robots start to play soccer,” in Proc. 6th IEEE-RAS Int. Conf. Humanoid Robots, pp. 497-503, Dec. 4-6, 2006.
[17] Z. Chen and M. Hemami, “Sliding mode control of kicking a soccer ball in the sagittal plane,” IEEE Trans. Syst., Man, Cybern. A, vol. 37, no. 6, Nov. 2007.
[18] C. M. Chang, M. F. Lu, C. Y. Hu, S. W. Lai, S. H. Liu, Y. T. Su, and T. H. S. Li, “Design and implementation of penalty kick function for small-sized humanoid robot by using FPGA,” in Proc. IEEE Int. Conf. Advanced Robotics and Its Social Impacts, Taipei, Taiwan, Aug. 23-25, 2008.
[19] E. Menegatti, A. Pretto, A. Scarpa, and E. Pagello, “Omni-directional vision scan matching for robot localization in dynamic environments,” IEEE Trans. Robotics, vol. 22, no. 3, pp. 523-535, Jun. 2006.
[20] K. T. Song and J. C. Tai, “Dynamic calibration of pan-tilt-zoom cameras for traffic monitoring,” IEEE Trans. Syst., Man, Cybern. B, vol. 36, no. 5, pp. 1091-1103, Oct. 2006.
[21] I. H. Chen and S. J. Wang, “An effective approach for the calibration of multiple PTZ cameras,” IEEE Trans. Autom. Sci. Eng., vol. 4, no. 2, pp. 286-293, Apr. 2007.
[22] P. Vadakkepat, P. Lim, L. C. De Silva, L. Jing, and L. L. Ling, “Multimodal approach to human-face detection and tracking,” IEEE Trans. Ind. Electron., vol. 55, no. 3, pp. 1385-1393, Mar. 2008.
[23] S. Haykin, Neural Networks and Learning Machines, 3rd ed. Upper Saddle River, NJ: Prentice Hall, 2009.
Full-Text Access Authorization
On campus
The print thesis will be made publicly available 5 years after submission of the authorization form.
Electronic full text is authorized for public access on campus.
The on-campus electronic full text will be made publicly available 5 years after submission of the authorization form.
Off campus
Authorization granted.
The off-campus electronic full text will be made publicly available 5 years after submission of the authorization form.
