§ Thesis Bibliographic Record
System ID  U0002-0809202013205800
DOI  10.6846/TKU.2020.00184
Title (Chinese)  基於無標記式擴增實境多人連線機器人導引系統之設計與實現
Title (English)  Design and Implementation of a Robot Guidance System Based on Multi-Person Connection of Markerless Augmented Reality
Title (third language)
Institution  淡江大學 (Tamkang University)
Department (Chinese)  電機工程學系碩士班
Department (English)  Department of Electrical and Computer Engineering
Foreign degree school
Foreign degree college
Foreign degree institute
Academic year  108 (2019-2020)
Semester  2
Year of publication  109 (2020)
Graduate student (Chinese)  呂昀翰
Graduate student (English)  Yun-Han Lu
Student ID  606460078
Degree  Master's
Language  Traditional Chinese
Second language
Oral defense date  2020-07-17
Number of pages  52
Thesis committee  Advisor - 蔡奇謚
Committee member - 蔡奇謚
Committee member - 李世安
Committee member - 許陳鑑
Keywords (Chinese)  無標記式擴增實境
多人連線
Cloud Anchor
機器人導引
Keywords (English)  Markerless Augmented Reality
Multiplayer Connection
Cloud Anchor
Robot Guidance
Keywords (third language)
Subject classification
Chinese Abstract
This thesis presents a robot guidance system based on multi-user connection of markerless augmented reality (AR), which simplifies mobile-robot navigation through an intuitive operating method. As intelligent mobile robots become widespread, users without a professional background, such as children, the elderly, or people with disabilities, may need to operate a robot for interactive applications, and they require a simpler, clearer way of doing so. In the proposed guidance system, the user taps the phone screen to generate AR target points that guide the robot; the operation is simple and intuitive, so anyone can get started quickly. Moreover, guiding a mobile robot to a distant target area would otherwise demand a tedious sequence of control commands before the robot reaches its destination. In contrast, the proposed system allows the user to select multiple target points directly in the real environment, clearly specifying the trajectory the robot should follow to its destination. The proposed architecture consists of two parts: a markerless AR connection system, developed by integrating Unity with ARCore, which places every user in the same AR space; and an AR guidance system, which uses this multi-user connection so that several users can connect to the mobile robot at once, each sending generated AR target points to the robot and thereby jointly guiding its navigation.
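To make the shared-space idea concrete: once every phone has resolved the same Cloud Anchor, a tapped target point can be re-expressed relative to that anchor, and any other device (or the robot) that resolved the same anchor can interpret it in its own frame. The sketch below illustrates this hand-off with a simplified 2D pose transform; the function and variable names (to_anchor_frame, anchor_pose, and so on) and the 2D simplification are illustrative assumptions, not code from the thesis.

```python
import math

def to_anchor_frame(anchor_pose, point_world):
    """Express a tapped point (given in one device's local world frame)
    relative to the shared cloud-anchor frame, so that any device or the
    robot that has resolved the same anchor can interpret it.

    anchor_pose: (x, y, theta) of the anchor in the device's local frame.
    point_world: (x, y) of the tapped AR target in the same local frame.
    The names and the 2D simplification are illustrative assumptions.
    """
    ax, ay, atheta = anchor_pose
    px, py = point_world
    dx, dy = px - ax, py - ay
    cos_t, sin_t = math.cos(-atheta), math.sin(-atheta)
    # Rotate the offset into the anchor's orientation.
    return (cos_t * dx - sin_t * dy, sin_t * dx + cos_t * dy)

# Example: a target tapped at (2.0, 1.0) in phone A's frame, where the shared
# anchor sits at (1.0, 0.0) with a 90-degree yaw, becomes an anchor-relative
# point that phone B and the robot can reuse directly.
target_in_anchor = to_anchor_frame((1.0, 0.0, math.pi / 2), (2.0, 1.0))
print(target_in_anchor)  # approximately (1.0, -1.0)
```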
English Abstract
This thesis proposes a robot guidance system based on a multiplayer connection of markerless augmented reality (AR), which simplifies mobile-robot navigation control through an intuitive control method. With the growing popularity of intelligent mobile robots, users without a professional background, such as children, the elderly, or people with disabilities, may need to control a robot for interactive applications; in such cases a simpler and clearer control method is required. In the proposed guidance system, the user taps the phone screen to generate an AR target point that guides the robot to the target position. This control method is simple and intuitive, allowing users to get started quickly. Furthermore, guiding the mobile robot to a distant target area may otherwise require many control commands before the robot reaches its destination. In contrast, the proposed guidance system lets the user specify the robot's trajectory to the destination directly in the real environment by selecting multiple target points. The proposed system architecture consists of two parts. The first is the markerless AR connection system, developed by integrating Unity with ARCore, which places every user in the same augmented reality space as the robot. The second is the AR guidance system, which uses the markerless AR connection system to connect multiple users with the mobile robot; each user can transmit the generated AR target points to the robot, so that multiple people can guide it together.
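As a rough illustration of the guidance loop described above, the sketch below performs one control step toward the current AR target: it computes the distance and heading errors, applies simple proportional controllers standing in for the thesis's linear- and angular-velocity PID controllers, and converts the result into differential-drive wheel commands. The gains, goal tolerance, and wheel geometry are assumed values for illustration only, not the ones used in the thesis.

```python
import math

# Assumed values for illustration only; the thesis's PID gains, goal tolerance,
# and wheel geometry are not given in this record.
K_LIN, K_ANG = 0.8, 2.0      # proportional gains for linear / angular velocity
GOAL_TOLERANCE = 0.05        # metres
WHEEL_BASE = 0.15            # metres between the two drive wheels

def guidance_step(robot_pose, targets):
    """One control step toward the current AR target point.

    robot_pose: (x, y, theta) of the robot in the shared anchor frame.
    targets:    list of (x, y) AR target points selected by the users.
    Returns (left_wheel_speed, right_wheel_speed, remaining_targets).
    """
    if not targets:
        return 0.0, 0.0, targets
    x, y, theta = robot_pose
    tx, ty = targets[0]
    # Error computation: distance to the target and heading misalignment.
    dist = math.hypot(tx - x, ty - y)
    heading_err = math.atan2(ty - y, tx - x) - theta
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))
    if dist < GOAL_TOLERANCE:
        # Target reached: move on to the next point of the selected trajectory.
        return guidance_step(robot_pose, targets[1:])
    # Proportional control (stand-in for the linear/angular PID controllers).
    v = K_LIN * dist
    w = K_ANG * heading_err
    # Command conversion to differential-drive wheel speeds.
    left = v - w * WHEEL_BASE / 2.0
    right = v + w * WHEEL_BASE / 2.0
    return left, right, targets

left, right, remaining = guidance_step((0.0, 0.0, 0.0), [(1.0, 0.5)])
print(round(left, 3), round(right, 3))
```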
Third-Language Abstract
Thesis Table of Contents
Contents
Chinese Abstract ... I
English Abstract ... II
Contents ... III
List of Figures ... VI
List of Tables ... VIII
Chapter 1  Introduction ... 1
1.1  Research Background ... 1
1.2  Research Motivation and Objectives ... 5
1.3  Thesis Organization ... 7
Chapter 2  Background Knowledge ... 9
2.1  Augmented Reality Technology ... 9
  2.1.1  Marker-Based Augmented Reality ... 10
  2.1.2  Markerless Augmented Reality ... 10
2.2  Multiplayer Networking ... 11
  2.2.1  Client-Server Network Architecture ... 11
  2.2.2  Peer-to-Peer Network Architecture ... 12
2.3  Simultaneous Localization and Mapping ... 12
2.4  Summary ... 14
Chapter 3  Markerless Augmented Reality Multiplayer Connection System ... 15
3.1  ARCore ... 15
3.2  Unity Engine ... 16
3.3  Markerless Augmented Reality Multiplayer Connection System ... 17
  3.3.1  Cloud Anchors ... 17
  3.3.2  Server ... 19
  3.3.3  Network Manager ... 21
  3.3.4  Client ... 22
Chapter 4  Augmented Reality Guidance System ... 23
4.1  Augmented Reality Guidance System ... 23
4.2  Kinematic Model ... 28
4.3  Controller Design ... 30
  4.3.1  Error Computation ... 31
  4.3.2  Linear-Velocity PID Controller ... 32
  4.3.3  Angular-Velocity PID Controller ... 32
  4.3.4  Command Conversion ... 33
Chapter 5  Experimental Procedure and Result Analysis ... 36
5.1  Hardware and Software ... 36
5.2  Experimental Procedure ... 37
5.3  Test Results ... 38
5.4  Result Analysis ... 43
Chapter 6  Conclusions and Future Work ... 47
References ... 48

List of Figures
Figure 1.1  Milgram's reality-virtuality continuum ... 1
Figure 1.2  Thesis organization ... 8
Figure 2.1  Augmented reality examples: (a) marker-based AR, where the red frame indicates the designated marker image; (b) a mobile AR game scene, an example of markerless AR in which virtual characters are placed on detected environmental planes ... 9
Figure 2.2  Client-server network architecture ... 11
Figure 2.3  Peer-to-peer network architecture ... 12
Figure 3.1  Architecture of the markerless augmented reality multiplayer connection system ... 18
Figure 3.2  Phone localization architecture of the markerless augmented reality multiplayer connection system ... 21
Figure 4.1  Overview of the augmented reality guidance system ... 23
Figure 4.2  Architecture of the augmented reality guidance system ... 24
Figure 4.3  Coordinate relationships in the augmented reality guidance system ... 25
Figure 4.4  Flowchart of the augmented reality guidance system ... 26
Figure 4.5  Flowchart of augmented reality guidance control ... 28
Figure 4.6  Coordinate relationships of the mobile robot ... 29
Figure 4.7  Mobile robot controller model ... 31
Figure 4.8  Position and orientation errors of the mobile robot ... 32
Figure 5.1  Mobile phones: (a) Google Pixel 4; (b) Google Pixel XL ... 36
Figure 5.2  Arduino mobile robot ... 37

List of Tables
Table 1.1  Comparison of existing robot control methods that incorporate augmented reality ... 6
Table 3.1  Software functions of the markerless augmented reality multiplayer connection system ... 19
Table 4.1  Robot motion states ... 34
Table 4.2  Parameter units ... 35
Table 5.1  Specifications of the Arduino mobile robot ... 37
Table 5.2  Experimental conditions ... 38
Table 5.3  Results of Experiment 1 ... 39
Table 5.4  Results of Experiment 2 ... 40
Table 5.5  Results of Experiment 3 ... 41
Table 5.6  Results of Experiment 4 ... 42
Table 5.7  Actual error between the mobile robot and the augmented reality target in Experiment 1 ... 43
Table 5.8  Actual error between the mobile robot and the augmented reality target in Experiment 2 ... 43
Table 5.9  Actual error between the mobile robot and the augmented reality target in Experiment 3 ... 43
Table 5.10  Actual error between the mobile robot and the augmented reality target in Experiment 4 ... 43
Thesis Full-Text Access Rights
On campus
On-campus print copy: public release deferred until 2025-09-09
Electronic full text authorized for on-campus access
On-campus electronic copy: public release deferred until 2025-09-09
On-campus bibliographic record: available immediately
Off campus
Authorization granted
Off-campus electronic copy: public release deferred until 2025-09-09
