§ Thesis Bibliographic Record
  
System ID	U0002-0909202411054700
DOI 10.6846/tku202400753
Title (Chinese)	叉式移動機器人之自動棧板對接系統的設計與實現
Title (English)	Design and Implementation of Automatic Pallet Docking System for Forklift Mobile Robots
University	Tamkang University
Department (Chinese)	電機工程學系碩士班
Department (English)	Department of Electrical and Computer Engineering
Academic Year	112 (ROC calendar; 2023-2024)
Semester	2
Publication Year	113 (ROC calendar; 2024)
Author (Chinese)	黃子祐
Author (English)	Zi-You Huang
Student ID	611460063
Degree	Master
Language	Traditional Chinese
Oral Defense Date	2024-07-11
Number of Pages	70
Committee Member - 馮玄明 Hsuan-Ming Feng (hmfeng@nqu.edu.tw)
Committee Member - 蔡奇謚 Chi-Yi Tsai (chiyi_tsai@mail.tku.edu.tw)
Advisor - 翁慶昌 Ching-Chang Wong (wong@ee.tku.edu.tw)
Keywords (Chinese)	自主棧板搬運車
路徑規劃
導航
機器視覺
點雲
物件偵測
物件姿態估測
對接控制
視覺伺服控制
Keywords (English)	Autonomous Pallet Transporter
Path Planning
Navigation
Machine Vision
Point Cloud
Object Detection
Object Pose Estimation
Docking Control
Visual Servo Control
Abstract (Chinese)
This thesis integrates object detection, object pose estimation, and robot control techniques to design and implement an automatic pallet docking system. The work is divided into three parts: (1) a pallet detection and pose estimation system, (2) a navigation system, and (3) an automatic pallet docking system. In the pallet detection and pose estimation system, two instance segmentation models, YOLOv8-seg and Mask R-CNN (Mask Region-based Convolutional Neural Network), are first used to train deep neural network models for pallet detection. Results and comparisons show that both trained models can quickly and accurately identify pallet object masks in images against complex environmental backgrounds. For pallet pose estimation, the RANdom SAmple Consensus (RANSAC) algorithm and Principal Component Analysis (PCA) are used to obtain the pallet pose. The navigation system is researched and developed on the ROS Navigation framework. A 2D LiDAR, an RGB-D camera, and an IMU provide the environmental depth images, color images, and the robot's pose, and data collection, preprocessing, and fusion are applied to ensure data accuracy and stability. For map management, the GMapping algorithm performs Simultaneous Localization and Mapping (SLAM). For autonomous localization, the Adaptive Monte Carlo Localization (AMCL) method is combined with an Extended Kalman Filter (EKF) for data fusion to improve localization accuracy and stability. For global path planning, Dijkstra's algorithm plans a path from the starting position to the goal. For local path planning, the Dynamic Window Approach (DWA) plans a path that avoids obstacles in real time based on dynamic constraints and sensor data. For motion control, control commands are generated from the planned path so that the robot moves smoothly along the planned trajectory. In the automatic pallet docking system, a pallet docking strategy is proposed to integrate all of the systems and techniques so that the forklift mobile robot can successfully dock with a pallet. Finally, simulations and experiments on an actual small forklift mobile robot show that the proposed system can effectively complete automatic pallet docking.
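The detection-plus-pose pipeline in part (1) above can be made concrete with a short sketch. The fragment below is a minimal illustration rather than code from the thesis: the weight file pallet_seg.pt is a hypothetical fine-tuned YOLOv8-seg model, the Ultralytics inference API supplies the pallet mask, and Open3D/NumPy stand in for the RANSAC plane fit and PCA step (the thesis itself builds on the Point Cloud Library [44]).

import numpy as np
import open3d as o3d
from ultralytics import YOLO

model = YOLO("pallet_seg.pt")  # hypothetical fine-tuned YOLOv8-seg weights

def estimate_pallet_pose(color_img, points_xyz):
    """color_img: HxWx3 RGB image; points_xyz: HxWx3 point cloud aligned with it."""
    result = model(color_img)[0]
    if result.masks is None:
        return None                              # no pallet detected
    # First instance mask, assumed resized to the input image resolution.
    mask = result.masks.data[0].cpu().numpy().astype(bool)
    pallet_pts = points_xyz[mask]                # 3D points on the pallet

    # RANSAC plane fit on the masked cloud (front face of the pallet).
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(pallet_pts)
    _, inlier_idx = pcd.segment_plane(distance_threshold=0.01,
                                      ransac_n=3, num_iterations=1000)
    inliers = pallet_pts[inlier_idx]

    # PCA via SVD: the principal axes of the inliers give the pallet
    # orientation and their centroid gives its position; the least-variance
    # axis is the face normal, from which a docking heading follows.
    centroid = inliers.mean(axis=0)
    _, _, vt = np.linalg.svd(inliers - centroid, full_matrices=False)
    normal = vt[2] / np.linalg.norm(vt[2])
    yaw = np.arctan2(normal[1], normal[0])
    return centroid, yaw

The returned centroid and yaw correspond to the pallet pose that the docking system consumes; thresholds such as distance_threshold are placeholders, not values reported in the thesis.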
Abstract (English)
This thesis integrates object detection, object pose estimation, and robot control technologies to design and implement an automatic pallet docking system. The thesis is mainly divided into three parts: (1) a pallet detection and pose estimation system, (2) a navigation system, and (3) an automatic pallet docking system. In the pallet detection and pose estimation system, two instance segmentation models, YOLOv8-seg and Mask R-CNN (Mask Region-based Convolutional Neural Network), are first used to train deep neural network models for pallet detection. Results and comparisons show that both trained models can quickly and accurately identify pallet object masks in images with complex environmental backgrounds. For pallet pose estimation, the RANdom SAmple Consensus (RANSAC) algorithm and Principal Component Analysis (PCA) are used to obtain the pallet pose. In the navigation system, the ROS Navigation framework is used for research and development. This thesis uses a 2D LiDAR, an RGB-D camera, and an IMU to obtain environmental depth images, color images, and the robot's pose, and applies data collection, preprocessing, and fusion to ensure data accuracy and stability. For map management, the GMapping algorithm is used for Simultaneous Localization and Mapping (SLAM). For autonomous localization, the Adaptive Monte Carlo Localization (AMCL) method is used, combined with an Extended Kalman Filter (EKF) for data fusion to improve localization accuracy and stability. For global path planning, Dijkstra's algorithm is used to plan a path from the starting position to the goal. For local path planning, the Dynamic Window Approach (DWA) is used to plan a path that avoids obstacles in real time based on dynamic constraints and sensor data. For motion control, control commands are generated from the planned path so that the robot moves smoothly along the planned trajectory. In the automatic pallet docking system, a pallet docking strategy is proposed to integrate all systems and technologies so that the forklift mobile robot can successfully dock with the pallet. Finally, simulations and experiments on an actual small forklift mobile robot show that the proposed system can effectively complete automatic pallet docking.
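At the code level, parts (2) and (3) meet where the estimated pallet pose becomes a navigation goal. The ROS 1 (Python) sketch below is a hedged illustration of that hand-off, not code from the thesis: the frame names, the action name, and the 0.5 m pre-dock stand-off are assumptions, and a real pre-dock pose would be offset along the estimated pallet normal rather than the map x-axis.

#!/usr/bin/env python
import rospy
import actionlib
import tf2_ros
import tf2_geometry_msgs                     # registers PoseStamped with tf2
from geometry_msgs.msg import PoseStamped
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("pallet_docking_sketch")
tf_buffer = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(tf_buffer)
rospy.sleep(1.0)                             # let the tf buffer fill

# Pallet pose from the perception pipeline, expressed in the camera frame.
pallet_cam = PoseStamped()
pallet_cam.header.frame_id = "camera_link"   # assumed camera frame name
pallet_cam.header.stamp = rospy.Time(0)      # use latest available transform
pallet_cam.pose.position.x = 1.2             # placeholder perception output
pallet_cam.pose.orientation.w = 1.0

# Camera frame -> map frame, via the calibrated camera-to-robot transform.
pallet_map = tf_buffer.transform(pallet_cam, "map", rospy.Duration(1.0))

# Send a pre-dock goal to move_base; the Dijkstra global planner and the
# DWA local planner described above run inside the Navigation stack.
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()
goal = MoveBaseGoal()
goal.target_pose = pallet_map
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x -= 0.5      # assumed 0.5 m stand-off
client.send_goal(goal)
client.wait_for_result()

Once move_base reports success at the stand-off pose, the final fine alignment would be closed-loop on the continuously re-estimated pallet pose, which is the role of the docking strategy in part (3).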
Table of Contents
Chinese Abstract	I
English Abstract	II
Table of Contents	III
List of Figures	V
List of Tables	VII
Chapter 1 Introduction	1
1.1 Research Motivation and Objectives	1
1.2 Literature Review	4
1.3 Thesis Organization	8
Chapter 2 System Architecture and Hardware	10
2.1 Hardware	10
2.2 Open-Source Software	15
2.3 System Architecture	16
Chapter 3 Pallet Detection and Pallet Pose Estimation System	18
3.1 Pallet Detection	19
3.2 Pallet Pose Estimation	24
Chapter 4 Navigation System	29
4.1 Sensor Data and Map Management	30
4.2 Autonomous Localization	32
4.3 Path Planning and Motion Control	33
Chapter 5 Automatic Pallet Docking System	40
5.1 Pallet Docking Strategy	41
5.2 Coordinate Transformation	44
5.3 Eye-in-Hand Calibration	45
Chapter 6 Experimental Results	47
6.1 Experimental Environment 1	47
6.2 Experimental Environment 2	53
Chapter 7 Conclusions and Future Work	56
7.1 Conclusions	56
7.2 Future Work	56
References	58
Appendix 1: List of Symbols	64
Appendix 2: Chinese-English Glossary	68

 
List of Figures
Figure 1.1 Autonomous mobile robot market trends [1]	2
Figure 2.1 Experimental platform	10
Figure 2.2 Turtlebot3 autonomous mobile robot	11
Figure 2.3 Intel® RealSense D435i RGB-D camera	12
Figure 2.4 RPLIDAR A2M12 2D LiDAR	13
Figure 2.5 AERO 15 Classic laptop	14
Figure 2.6 ROS communication architecture	15
Figure 2.7 Overall system architecture	16
Figure 3.1 Architecture of the pallet detection and pallet pose estimation system	18
Figure 3.2 Deep neural network model training architecture	19
Figure 3.3 Pallet training dataset	20
Figure 3.4 Pallet annotation with the Roboflow tool	21
Figure 3.5 Plane fitting with the RANdom SAmple Consensus (RANSAC) algorithm	26
Figure 4.1 ROS Navigation architecture	29
Figure 4.2 2D occupancy grid map of the environment	31
Figure 4.3 Adaptive Monte Carlo Localization	33
Figure 4.4 Global path planning	34
Figure 4.5 Local path planning	36
Figure 4.8 Eye-to-hand calibration	46
Figure 5.1 Flowchart of the automatic pallet docking system for the forklift mobile robot	40
Figure 5.2 Actual pallet docking	43
Figure 6.1 Experimental environment 1	48
Figure 6.2 Pallet mask point cloud and the effective range of the physical environment in experimental environment 1	48
Figure 6.3 Experimental results and data without the docking strategy in environment 1	50
Figure 6.4 Execution images without the docking strategy in environment 1	50
Figure 6.5 Experimental results and data with the docking strategy in environment 1	51
Figure 6.6 Execution images with the docking strategy in environment 1	52
Figure 6.7 Experimental environment 2	54
Figure 6.8 Execution images of the docking strategy under incomplete data in environment 2	55
 
List of Tables
Table 2.1 Hardware specifications of the Turtlebot3 autonomous mobile robot	12
Table 2.2 Specifications of the Intel® RealSense D435i depth camera	12
Table 2.3 Specifications of the RPLIDAR A2M12 2D LiDAR	13
Table 2.4 Specifications of the AERO 15 Classic laptop	14
Table 2.5 Software environment and open-source packages (names and versions) used in this thesis	16
Table 3.1 Annotation file generation and export formats for the pallet dataset	21
Table 3.2 Performance comparison of YOLOv8-seg and Mask R-CNN	23
Table 5.1 Key navigation parameters	41
Table 6.1 Experimental results and data without the docking strategy in environment 1	49
Table 6.2 Experimental results and data with the docking strategy in environment 1	51
Table 6.3 Experimental results of the docking strategy under incomplete data in environment 2	55
 
References
[1]	Mordor Intelligence, Autonomous Mobile Robots Market Share Analysis, Industry Trends & Statistics report, URL:
https://www.mordorintelligence.com/industry-reports/global-mobile-robots-market/market-size
[2]	R. Raj and A. Kos, “A comprehensive study of mobile robot: History, developments, applications, and future research perspectives,” Applied Sciences, vol. 12, no. 14, p. 6951, 2022.
[3]	M. A. Niloy, A. Shama, R. K. Chakrabortty, M. J. Ryan, F. R. Badal, Z. Tasneem, M. H. Ahamed, S. I. Moyeen, S. K. Das, M. F. Ali, et al., “Critical design and control issues of indoor autonomous mobile robots: A review,” IEEE Access, vol. 9, pp. 35338–35370, 2021.
[4]	MAKINO (Japan) Automated Guided Forklift (AGF), URL: https://www.youtube.com/watch?v=M3sm6nbldYU&ab_channel=MAKINOJ
[5]	Mitsubishi Heavy Industries (Japan), first domestic frozen/refrigerated warehouse AGF, URL: https://www.mhi.com/jp/news/210614.html
[6]	X. Chen, J. Chase, and Y. Chen, Mobile Robots - Past, Present and Future. IntechOpen, 2009.
[7]	R. Siegwart, I. R. Nourbakhsh, and D. Scaramuzza, Introduction to Autonomous Mobile Robots. MIT press, 2011.
[8]	H. Chen, H. Cheng, B. Zhang, J. Wang, T. Fuhlbrigge, and J. Liu, “Semi-autonomous industrial mobile manipulation for industrial applications,” in 2013 IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems, pp. 361–366, IEEE, 2013.
[9]	H. Unger, T. Markert, and E. Müller, “Evaluation of use cases of autonomous mobile robots in factory environments,” Procedia Manufacturing, vol. 17, pp. 254–261, 2018.
[10]	C. G. Grlj, N. Krznar, and M. Pranjić, “A decade of UAV docking stations: A brief overview of mobile and fixed landing platforms,” Drones, vol. 6, no. 1, p. 17, 2022.
[11]	J. G. Bellingham, “Autonomous underwater vehicle docking,” Springer Handbook of Ocean Engineering, pp. 387–406, 2016.
[12]	H. Kolvenbach and M. Hutter, “Life extension: An autonomous docking station for recharging quadrupedal robots,” in Field and Service Robotics: Results of the 11th International Conference, pp. 545–557, Springer, 2018.
[13]	G. Song, H. Wang, J. Zhang, and T. Meng, “Automatic docking system for recharging home surveillance robots,” IEEE Transactions on Consumer Electronics, vol. 57, no. 2, pp. 428–435, 2011.
[14]	F. Guangrui and W. Geng, “Vision-based autonomous docking and recharging system for mobile robot in warehouse environment,” in 2017 2nd International Conference on Robotics and Automation Engineering (ICRAE), pp. 79–83, 2017.
[15]	G. Bolanakis, K. Nanos, and E. Papadopoulos, “A QR code-based high-precision docking system for mobile robots exhibiting submillimeter accuracy,” in 2021 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), pp. 830–835, 2021.
[16]	Y. Liu, “A laser intensity based autonomous docking approach for mobile robot recharging in unstructured environments,” IEEE Access, vol. 10, pp. 71165–71176, 2022.
[17]	R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580– 587, 2014.
[18]	R. Girshick, “Fast R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 1440–1448, 2015.
[19]	C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg, “DSSD: Deconvolutional single shot detector,” arXiv preprint arXiv:1701.06659, 2017.
[20]	J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7263–7271, 2017.
[21]	J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440, 2015.
[22]	S. Watanabe, T. Hori, S. Karita, T. Hayashi, J. Nishitoba, Y. Unno, N. E. Y. Soplin, J. Heymann, M. Wiesner, N. Chen, et al., “ESPnet: End-to-end speech processing toolkit,” arXiv preprint arXiv:1804.00015, 2018.
[23]	O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pp. 234–241, Springer, 2015.
[24]	N. Bellomo, E. Marcuzzi, L. Baglivo, M. Pertile, E. Bertolazzi, and M. De Cecco, “Pallet pose estimation with LiDAR and vision for autonomous forklifts,” IFAC Proceedings Volumes, vol. 42, no. 4, pp. 612–617, 2009.
[25]	I. S. Mohamed, A. Capitanelli, F. Mastrogiovanni, S. Rovetta, and R. Zaccaria, “Detection, localisation and tracking of pallets using machine learning techniques and 2D range data,” Neural Computing and Applications, vol. 32, pp. 8811–8828, 2020.
[26]	Z. Zhang, Z. Liang, M. Zhang, X. Zhao, H. Li, M. Yang, W. Tan, and S. Pu, “RangeLVDet: Boosting 3D object detection in LiDAR with range image and RGB image,” IEEE Sensors Journal, vol. 22, no. 2, pp. 1391–1403, 2021.
[27]	J.-L. Syu, H.-T. Li, J.-S. Chiang, C.-H. Hsia, P.-H. Wu, C.-F. Hsieh, and S.-A. Li, “A computer vision assisted system for autonomous forklift vehicles in real factory environment,” Multimedia Tools and Applications, vol. 76, pp. 18387–18407, 2017.
[28]	R. Fan, T.-B. Xu, and Z. Wei, “Estimating 6D aircraft pose from keypoints and structures,” Remote Sensing, vol. 13, no. 4, p. 663, 2021.
[29]	K. Guo, H. Ye, X. Gao, and H. Chen, “An accurate and robust method for absolute pose estimation with UAV using RANSAC,” Sensors, vol. 22, no. 15, p. 5925, 2022.
[30]	C. Zhao, C. F. Lui, S. Du, D. Wang, and Y. Shao, “An earth mover’s distance based multivariate generalized likelihood ratio control chart for effective monitoring of 3D point cloud surface,” Computers & Industrial Engineering, vol. 175, p. 108911, 2023.
[31]	X. Wang, B. Liu, X. Mei, X. Wang, and R. Lian, “A novel method for measuring, collimating, and maintaining the spatial pose of terminal beam in laser processing system based on 3D and 2D hybrid vision,” IEEE Transactions on Industrial Electronics, vol. 69, no. 10, pp. 10634–10643, 2022.
[32]	H. Lee, J.-M. Park, K. H. Kim, D.-H. Lee, and M.-J. Sohn, “Accuracy evaluation of surface registration algorithm using normal distribution transform in stereotactic body radiotherapy/radiosurgery: A phantom study,” Journal of Applied Clinical Medical Physics, vol. 23, no. 3, p. e13521, 2022.
[33]	X. Xie, X. Wang, and Z. Wu, “3D face dense reconstruction based on sparse points using probabilistic principal component analysis,” Multimedia Tools and Applications, pp. 1–21, 2022.
[34]	W. Liu, “LiDAR-IMU time delay calibration based on iterative closest point and iterated sigma point Kalman filter,” Sensors, vol. 17, no. 3, p. 539, 2017.
[35]	F. Zhang, S. Li, S. Yuan, E. Sun, and L. Zhao, “Algorithms analysis of mobile robot slam based on Kalman and particle filter,” in 2017 9th International Conference on Modelling, Identification and Control (ICMIC), pp. 1050– 1055, IEEE, 2017.
[36]	D. Harabor and A. Grastien, “Online graph pruning for pathfinding on grid maps,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 25, pp. 1114–1119, 2011.
[37]	A. Ammar, H. Bennaceur, I. Châari, A. Koubâa, and M. Alajlan, “Relaxed Dijkstra and A* with linear complexity for robot path planning problems in large-scale grid environments,” Soft Computing, vol. 20, pp. 4149–4171, 2016.
[38]	P.-D. Wang, G.-Y. Tang, Y. Li, and X.-X. Yang, “Ant colony algorithm using endpoint approximation for robot path planning,” in Proceedings of the 31st Chinese Control Conference, pp. 4960–4965, IEEE, 2012.
[39]	Docker, URL: https://www.docker.com/
[40]	ROS, URL: https://www.ros.org/
[41]	ROBOFLOW, URL: https://roboflow.com/
[42]	YOLOv8 github, URL: https://github.com/ultralytics/ultralytics
[43]	K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969, 2017.
[44]	R. B. Rusu and S. Cousins, “3D is here: Point cloud library (PCL),” in 2011 IEEE International Conference on Robotics and Automation, pp. 1–4, IEEE, 2011.
[45]	A. Maćkiewicz and W. Ratajczak, “Principal components analysis (PCA),” Computers & Geosciences, vol. 19, no. 3, pp. 303–342, 1993.
[46]	ROS Navigation, URL: http://wiki.ros.org/navigation
[47]	Y. Abdelrasoul, A. B. S. H. Saman, and P. Sebastian, “A quantitative study of tuning ROS GMapping parameters and their effect on performing indoor 2D SLAM,” in 2016 2nd IEEE International Symposium on Robotics and Manufacturing Automation (ROMA), pp. 1–6, IEEE, 2016.
[48]	D. Talwar and S. Jung, “Particle filter-based localization of a mobile robot by using a single LiDAR sensor under SLAM in ROS environment,” in 2019 19th International Conference on Control, Automation and Systems (ICCAS), pp. 1112–1115, 2019.
[49]	E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numerische Mathematik, vol. 1, pp. 269–271, Dec 1959.
[50]	D. Fox, W. Burgard, and S. Thrun, “The dynamic window approach to collision avoidance,” IEEE Robotics & Automation Magazine, vol. 4, no. 1, pp. 23–33, 1997.
[51]	ROS tf tool, URL: http://wiki.ros.org/tf
[52]	ROS hand_eye_calibration tool, URL: https://wiki.ros.org/rc_hand_eye_calibration_client
[53]	AprilTag, URL:
https://hackmd.io/@TPoqXDxdQfS_CcCr6i1q4g/BkfPYnon9#2019-Apriltag-3
Full-Text Usage Authorization
National Central Library
The author agrees to grant the National Central Library a royalty-free license to make the bibliographic record and the electronic full text publicly available on the Internet immediately after the authorization form is submitted.
On campus
The printed thesis is made publicly available on campus immediately.
The author agrees to authorize worldwide public access to the electronic full text.
The electronic thesis is made publicly available on campus immediately.
Off campus
The author agrees to grant authorization to database vendors.
The electronic thesis is made publicly available off campus immediately.
