§ Thesis Bibliographic Record
  
System ID	U0002-0508202010542900
DOI	10.6846/TKU.2020.00107
Title (Chinese)	雙臂機器人之任務導向的夾取與工具操作
Title (English)	Task-Oriented Grasping and Tool Manipulation for Dual-Arm Robot
Title (third language)
University	Tamkang University
Department (Chinese)	機器人博士學位學程
Department (English)	Doctoral Program in Robotics, College of Engineering
Foreign degree - university name
Foreign degree - college name
Foreign degree - graduate institute name
Academic year	108
Semester	2
Year of publication	109 (2020)
Author (Chinese)	賴宥澄
Author (English)	Yu-Cheng Lai
Student ID	805440095
Degree	Doctoral
Language	Traditional Chinese
Second language
Date of oral defense	2020-07-15
Number of pages	162
Defense committee	Advisor - 翁慶昌 (wong@gmail.com)
Co-advisor - 蔡奇謚 (chiyi_tsai@mail.tku.edu.tw)
Member - 陳博現
Member - 王文俊
Member - 李祖聖
Member - 蘇順豐
Member - 李世安
Member - 蔡奇謚
Keywords (Chinese)	Dual-arm robot
Null space
Pose estimation
Task-oriented grasping
Tool manipulation
Keywords (English)	Dual-Arm Robot
Null Space
Pose Estimation
Task-Oriented Grasping
Tool Manipulation
Keywords (third language)
Subject classification
Abstract (Chinese)
The main objective of this dissertation is to integrate deep learning and robotics methods so that a dual-arm robot can pick up everyday objects, use each object's functional point as a new tool center point (TCP) to execute tasks, and avoid hardware limits while doing so. There are three main parts: (1) motion control of the redundant manipulator, (2) task-oriented grasping, and (3) tool manipulation planning for the dual-arm robot. For motion control of the redundant manipulator, this dissertation proposes a null-space motion control method which, together with the proposed objective function, exploits the manipulator's null-space property to avoid the hardware limits of joint limits and singularities in real time during task execution. For task-oriented grasping, this dissertation proposes a grasp pose estimation method that combines two neural networks and uses a designed objective function to simultaneously detect the graspable affordance of an object and its best grasp poses in a multi-manipulator workspace. For tool manipulation planning of the dual-arm robot, this dissertation proposes a vision-based tool center point estimation and calibration method that detects an object's affordance as a tool, computes a suitable functional point, and uses that point as the manipulator's new tool center point when executing the task. In the experiments, the proposed methods are implemented on a dual-arm robot developed in our laboratory, showing that they indeed enable the robot to autonomously grasp the bodies of a plastic bottle and a cup, and then use the bottle mouth and the cup mouth as the new tool center points of the left arm and right arm, respectively, to complete a water-pouring task stably. Some simulation and experimental results show that the proposed methods achieve good control performance.
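The null-space motion control idea summarized above can be illustrated with a minimal velocity-level sketch. This is not the dissertation's controller or its objective function; it only shows the standard gradient-projection form dq = pinv(J) * dx + k * (I - pinv(J) * J) * grad H(q), with a hypothetical joint-limit-avoidance objective H standing in for the proposed one.

import numpy as np

def null_space_step(J, x_dot, q, q_min, q_max, k=1.0):
    """One velocity-level redundancy-resolution step (generic sketch, not the proposed method).

    J            : 6x7 geometric Jacobian of the 7-DOF arm at configuration q
    x_dot        : desired 6-D task-space velocity
    q            : current joint angles, shape (7,)
    q_min, q_max : joint limits, shape (7,)
    k            : gain on the null-space (secondary) motion
    """
    J_pinv = np.linalg.pinv(J)                      # Moore-Penrose pseudoinverse
    # Hypothetical secondary objective: stay near the middle of each joint range.
    q_mid = 0.5 * (q_min + q_max)
    grad_H = -(q - q_mid) / (q_max - q_min) ** 2    # gradient of H(q) = -0.5 * sum(((q - q_mid)/(q_max - q_min))**2)
    # Primary task velocity plus the secondary gradient projected into the null space of J.
    N = np.eye(J.shape[1]) - J_pinv @ J
    return J_pinv @ x_dot + k * (N @ grad_H)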
Abstract (English)
The goal of this dissertation is to integrate deep learning and robotics methods so that a dual-arm robot can grasp everyday objects, use their functional points as new tool center points (TCPs) to execute tasks, and avoid hardware limits while performing those tasks. There are three main parts: (1) motion control of a redundant robot manipulator, (2) task-oriented grasping, and (3) tool manipulation planning for a dual-arm robot. For motion control of the redundant manipulator, a motion control method and an objective function are designed so that the manipulator avoids hardware limits, including joint limits and singularities, by exploiting its null-space characteristic. For task-oriented grasping, a method that combines two neural networks with a designed objective function detects both the graspable affordance of an object and its best grasp poses for multiple robotic arms. For tool manipulation planning of the dual-arm robot, a vision-based method for TCP estimation and TCP calibration is proposed to detect the functional point of a grasped object and calibrate it as a new TCP for task execution. In the experiments, the proposed methods are implemented on a lab-made dual-arm robot to show that they enable the robot to autonomously grasp the bodies of a bottle and a cup, and then use the bottle mouth and the cup mouth as the new tool center points of the left arm and right arm, respectively, to complete a water-pouring task stably. Simulation and experimental results show that the methods proposed in this dissertation achieve good control performance.
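As a rough illustration of the tool-center-point re-assignment described in the abstract: once the functional point of a grasped object (e.g., the bottle mouth) has been estimated as an offset from the gripper flange, commanding the arm about that point reduces to composing homogeneous transforms. The frame names below are hypothetical and do not follow the dissertation's notation.

import numpy as np

def pose_to_matrix(R, p):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and a 3-D position p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# T_base_flange : flange pose from forward kinematics (assumed given)
# T_flange_tip  : estimated offset from the flange to the object's functional point
#                 (e.g., the bottle mouth), here assumed to come from a vision-based estimate
def new_tcp_pose(T_base_flange, T_flange_tip):
    """Pose of the re-assigned tool center point expressed in the robot base frame."""
    return T_base_flange @ T_flange_tip

def flange_target_for_tcp(T_base_tip_desired, T_flange_tip):
    """Flange pose to command so that the new TCP reaches a desired pose."""
    return T_base_tip_desired @ np.linalg.inv(T_flange_tip)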
Abstract (third language)
Table of contents
Contents
Contents	I
List of Figures	V
List of Tables	IX
Chinese-English Glossary of Technical Terms	XI
List of Symbols	XVIII
Chapter 1 Introduction	1
1.1 Research Background	1
1.2 Research Motivation and Objectives	5
1.3 Dissertation Organization	6
Chapter 2 Representation and Transformation of Rigid-Body Poses and Coordinate Frames	8
2.1 Position and Orientation of a Rigid Body in 3-D Space	8
2.2 Representations of Rigid-Body 3-D Orientation	9
2.3 Conversions between Rigid-Body 3-D Orientation Representations	14
2.4 Transformation between Spatial Coordinate Frames	19
2.5 Transformation between the Pixel Coordinate Frame and the World Coordinate Frame	20
Chapter 3 System Architecture and Hardware	23
3.1 System Architecture	23
3.2 Hardware Specifications of the Dual-Arm Robot	25
3.3 Joint and Link Configuration of the Dual-Arm Robot	26
Chapter 4 Motion Control of the Redundant Manipulator	28
4.1 Motion Control Architecture of the Redundant Manipulator	29
4.2 Kinematics of the Redundant Manipulator	30
4.3 Task-Space Trajectory Planning of the Manipulator	50
4.4 Singularity Avoidance	54
4.5 Null-Space Motion Control of the Redundant Manipulator	57
4.6 Joint Overspeed Suppression	63
Chapter 5 Instance Segmentation and Task-Oriented Grasping	67
5.1 Overview of Object Recognition	67
5.2 Region-Proposal-Based Object Detection	78
5.3 Mask Region-Based Convolutional Neural Network (Mask R-CNN)	92
5.4 Grasp Detection	99
5.5 Task-Oriented Grasp Planning	102
Chapter 6 Tool Manipulation Planning of the Dual-Arm Robot	111
6.1 Tool Manipulation Planning Architecture of the Dual-Arm Robot	111
6.2 Kinematics of the Dual-Arm Robot	112
6.3 Tool Manipulation Pose Estimation	118
6.4 Tool Center Point Calibration	119
6.5 Coordinated Trajectory Planning of the Dual-Arm Robot	124
Chapter 7 Experimental Results and Discussion	127
7.1 Experimental Setup	127
7.2 Image Segmentation and Task-Oriented Grasp Planning Experiments	130
7.3 Motion Control Experiments of the Redundant Manipulator	138
7.4 Tool Manipulation Planning Experiments of the Dual-Arm Robot	144
7.5 Tool Manipulation Experiments of the Dual-Arm Robot	149
Chapter 8 Conclusions and Future Work	152
8.1 Conclusions	152
8.2 Future Work	153
References	155
Publications	160
Competitions	162
 
List of Figures
Figure 2.1: Axis-angle rotation	13
Figure 2.2: Coordinate transformation	20
Figure 2.3: Pinhole camera model: transformation between the 2D pixel frame and the 3D world frame	21
Figure 3.1: Proposed system architecture	24
Figure 3.2: Overall system block diagram of the proposed architecture	25
Figure 3.3: Workspace of the dual-arm robot	27
Figure 3.4: Structure of the 7-DOF arm	27
Figure 3.5: Link lengths of the 7-DOF arm	27
Figure 4.1: Motion control architecture proposed for the redundant manipulator	30
Figure 4.2: XYZ fixed angles used in this dissertation	31
Figure 4.3: Tool center point coordinate frame of the manipulator	31
Figure 4.4: Redundant axis of the redundant manipulator [26]	32
Figure 4.5: Redundancy angle of the redundant manipulator	32
Figure 4.6: Case where z_{i-1} and z_i are not coplanar	33
Figure 4.7: Case where z_{i-1} and z_i are parallel	34
Figure 4.8: Intersection of z_{i-1} and z_i, case 1	34
Figure 4.9: Intersection of z_{i-1} and z_i, case 2	35
Figure 4.10: Coordinate frame assignment of the 7-axis manipulator	35
Figure 4.11: Joint and link parameters of the manipulator	37
Figure 4.12: Spherical joint	39
Figure 4.13: Center of the redundancy circle	40
Figure 4.14: Solution for the elbow joint (joint 4)	43
Figure 4.15: Solution for joints 1 and 2	44
Figure 4.16: Solution for joint 3	45
Figure 4.17: Computation of the redundancy angle	49
Figure 4.18: Spherical linear interpolation (slerp)	53
Figure 4.19: Inverse spherical linear interpolation	54
Figure 4.20: Shoulder singularity [35]	55
Figure 4.21: Elbow singularity [35]	56
Figure 4.22: Wrist singularity [35]	57
Figure 4.23: Comparison of the joint-limit estimation functions of [36][37] and the one proposed in this dissertation	59
Figure 4.24: Value distribution of the proposed singularity-degree estimation function	60
Figure 4.25: Null-space motion control flow	62
Figure 5.1: Mainstream categories of deep-learning-based object recognition methods [38]	68
Figure 5.2: Convolutional neural network architecture	70
Figure 5.3: Convolutional neural network computation flow	70
Figure 5.4: Convolution operation [45]	71
Figure 5.5: Pooling operation [45]	72
Figure 5.6: Zero padding	73
Figure 5.7: The two steps of deconvolution: padding and convolution	75
Figure 5.8: Residual block architecture [46]	77
Figure 5.9: ResNet family architectures [46]	77
Figure 5.10: Bottleneck residual block [46]	78
Figure 5.11: Spatial Pyramid Pooling (SPP) [49]	79
Figure 5.12: Faster R-CNN network architecture	80
Figure 5.13: Overall RPN workflow	81
Figure 5.14: Anchor and region proposal generation flow	83
Figure 5.15: Offsets among anchors, predicted bounding boxes, and ground-truth bounding boxes	85
Figure 5.16: NMS result	86
Figure 5.17: Region-of-interest pooling flow [51]	87
Figure 5.18: RPN class vector reset	89
Figure 5.19: Overall Mask R-CNN architecture	92
Figure 5.20: Residual feature pyramid network architecture	93
Figure 5.21: ROI Align operation [54]	96
Figure 5.22: Mask branch architecture of Mask R-CNN	97
Figure 5.23: Mask loss computation for a single ROI and for multiple ROIs	99
Figure 5.24: Encoded parameters of the grasp rectangle [55]	100
Figure 5.25: Grasp planning network architecture	100
Figure 5.26: Grasp rectangle parameters	101
Figure 5.27: Architecture of the proposed TOGP	102
Figure 5.28: Computation of the position and orientation errors between the grasp rectangle and the object	105
Figure 5.29: Concept of the grasp pose transformation process	106
Figure 5.30: Rotation process with angular deviation	108
Figure 5.31: Proposed rotation process that corrects the angular deviation	109
Figure 6.1: Tool manipulation planning of the dual-arm robot	112
Figure 6.2: Base and tool frames of the two manipulators	113
Figure 6.3: Base and tool frames of the dual-arm robot	113
Figure 6.4: Joint and coordinate frame configuration of the dual-arm robot	114
Figure 6.5: Frame transformation relations in the inverse kinematics of the dual-arm robot	117
Figure 6.6: Tool manipulation pose estimation	118
Figure 6.7: The three main stages of TCP calibration	120
Figure 6.8: Tool center point calibration flow	121
Figure 6.9: Coordinated trajectory planning flowchart of the dual-arm robot	124
Figure 6.10: Shared TCP of the dual-arm robot	125
Figure 7.1: Main hardware and experimental environment used in this dissertation	129
Figure 7.2: TOGP experiments: object placement	135
Figure 7.3: Results of the three control methods performing joint-limit and singularity avoidance simultaneously under test command 1	139
Figure 7.4: Results of the three control methods performing joint-limit and singularity avoidance separately under test command 1	139
Figure 7.5: Results of the three control methods performing joint-limit and singularity avoidance simultaneously under test command 2	141
Figure 7.6: Results of the three control methods performing joint-limit and singularity avoidance separately under test command 2	141
Figure 7.7: Joint position, velocity, and acceleration responses without overspeed suppression	143
Figure 7.8: Joint position, velocity, and acceleration responses with overspeed suppression	143
Figure 7.9: Position, velocity, and acceleration responses of the end-effector center point with overspeed suppression	144
Figure 7.10: End-effector position feedback and motion trajectory without tool center point calibration	146
Figure 7.11: End-effector position feedback and motion trajectory with tool center point calibration	146
Figure 7.12: End-effector position feedback and motion trajectory with coordinated planning	147
Figure 7.13: Visual tool center point estimation	148
Figure 7.14: Storyboard of the dual-arm robot performing the water-pouring task	150
 
List of Tables
Table 2.1: Worked example based on Eq. (2.6)	11
Table 2.2: Comparison of the four rigid-body orientation conversion formulas	14
Table 3.1: Hardware specifications of the dual-arm robot	25
Table 3.2: Specifications of the motors used in each joint of one arm of the dual-arm robot	26
Table 4.1: The four D-H parameters and their descriptions	36
Table 4.2: D-H link parameters of the 7-DOF redundant manipulator in this dissertation	36
Table 4.3: Joint and link parameters of the manipulator	38
Table 4.4: Pseudocode for point-to-point trajectory planning in task space	51
Table 4.5: Pseudocode for the null-space motion control flow	63
Table 4.6: Pseudocode for the proposed joint overspeed suppression	66
Table 5.1: Example of determining the target object approached by positive-labeled anchors	84
Table 5.2: Region proposal filtering procedure	87
Table 5.3: Alternating training procedure	91
Table 5.4: RPN anchor parameters of the feature pyramid and Mask R-CNN	95
Table 5.5: ROI Align computation procedure	96
Table 6.1: D-H link parameters of the left arm of the dual-arm robot	115
Table 6.2: D-H link parameters of the right arm of the dual-arm robot	115
Table 6.3: Parameters for TCP calibration	120
Table 7.1: Experimental objects used in this dissertation and their affordances	128
Table 7.2: Results of the two grasp rectangle detection methods	131
Table 7.3: Parameters used in the three grasp pose detection methods	132
Table 7.4: Results of the three grasp pose detection methods	132
Table 7.5: Statistics of the task-oriented grasping experiments with a single manipulator	136
Table 7.6: Statistics of the task-oriented grasping experiments with the dual-arm robot	137
Table 7.7: Input parameters of test command 1	139
Table 7.8: Input parameters of test command 2	141
Table 7.9: Input conditions of the joint overspeed suppression experiment	142
Table 7.10: End-effector position feedback and motion time data without tool center point calibration	145
Table 7.11: End-effector position feedback and motion time data with tool center point calibration	145
Table 7.12: End-effector position feedback and motion time data with coordinated planning	147
Table 7.13: Error computation results of visual tool center point estimation	148
Table 7.14: Step-by-step procedure of the dual-arm robot performing the water-pouring task	149
Table 7.15: Success rate of the dual-arm robot performing the water-pouring task	150
Table 7.16: Summary of the issues considered in recent literature on robot tool manipulation tasks	151
References
[1]	"Industrial AI Deployment / Toward a New Era of Smart Services: AI Makes Services Smarter and More Considerate" (news article, in Chinese), URL: https://udn.com/news/story/6905/3805554
[2]	"Amazon's Warehouse Robot Kiva" (article, in Chinese), URL: https://www.bnext.com.tw/px/article/34579/BN-ARTICLE-34579
[3]	A. Billard and D. Kragic, “Trends and Challenges in Robot Manipulation,” Science Robotics, 2019.
[4]	2016 IROS Robotic Grasping and Manipulation Competition: http://www.rhgm.org/activities/competition_iros2016/
[5]	J. Bohg, A. Morales, T. Asfour, and D. Kragic, “Data-Driven Grasp Synthesis - A Survey,” IEEE Transactions on Robotics, vol. 30, no. 2, pp. 289-309, 2014.
[6]	J. Mahler, M. Matl, X. Liu, A. Li, and D. Gealy, et al., “Dex-Net 3.0: Computing Robust Vacuum Suction Grasp Targets in Point Clouds using a New Analytic Model and Deep Learning,” International Conference on Robotics and Automation (ICRA), pp. 5620-5627, 2018.
[7]	J. Mahler, M. Matl, and V. Satish, “Learning ambidextrous robot grasping policies,” Science Robotics, vol. 4, no. 26, 2019.
[8]	K. Fang, Y. Zhu, A. Garg, A. Kurenkov, and V. Mehta, et al., “Learning Task-oriented Grasping for Tool Manipulation from Simulated Self-Supervision,” International Journal of Robotics Research (IJRR), vol. 39, no. 2-3, pp. 202-216, 2020.
[9]	U. Asif, J. Tang, and S. Harrer, “GraspNet: An Efficient Convolutional Neural Network for Real-time Grasp Detection for Low-powered Devices,” International Joint Conference on Artificial Intelligence (IJCAI), pp. 4875-4882, 2018.
[10]	A. Zeng, S. Song, K.T. Yu, T. Donlon, and F. Hogan, et al. “Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching,” International Conference on Robotics and Automation (ICRA), pp. 3750-3757, 2018.
[11]	U. Asif, J. Tang, and S. Harrer, “Densely Supervised Grasp Detector,” Association for the Advancement of Artificial Intelligence (AAAI), vol. 3, pp. 8085-8093, 2019.
[12]	R. Detry, J. Papon, and L. Matthies, “Task-oriented grasping with Semantic and Geometric Scene Understanding,” 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3266-3273, 2017.
[13]	M. Iizuka and M. Hashimoto, “Detection of Semantic Grasping-Parameter using Part-Affordance Recognition,” 2018 19th International Conference on Research and Education in Mechatronics (REM), pp. 136-140, 2018.
[14]	D. Morrison, P. Corke, and J. Leitner, “Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach,” Robotics: Science and Systems (RSS), 2018.
[15]	F.J. Chu, R. Xu, and P.A. Vela, “Real-world Multi-object, Multi-grasp Detection,” IEEE Robotics and Automation Letters, vol. 3, no. 4, pp. 3355-3362, 2018.
[16]	I. Lenz, H. Lee, and A. Saxena, “Deep Learning for Detecting Robotic Grasps,” The International Journal of Robotics Research (IJRR), vol. 34, no. 4-5, pp. 705-724, 2015.
[17]	Y. Karayiannidis, C. Smith, F.E. Vina, and D. Kragic, “Online Contact Point Estimation for Uncalibrated Tool Use,” International Conference on Robotics and Automation (ICRA), pp. 2488-2494, 2014.
[18]	R. Holladay, T. Lozano-Perez, and A. Rodriguez, “Force-and-Motion Constrained Planning for Tool Use,” International Conference on Intelligent Robots and Systems (IROS), pp. 7409-7416, 2019.
[19]	M. Toussaint, K.R. Allen, K.A. Smith, and J.B. Tenenbaum, “Differentiable Physics and Stable Modes for Tool-Use and Manipulation Planning,” Robotics: Science and Systems (RSS), pp. 6231-6235, 2018.
[20]	K.P. Tee, J. Li, L.T. Pang Chen, K.W. Wan, and G. Ganesh, “Towards Emergence of Tool Use in Robots: Automatic Tool Recognition and Use without Prior Tool Learning,” International Conference on Robotics and Automation (ICRA), 2018.
[21]	S. Brown and C. Sammut, “Tool Use and Learning in Robots,” Encyclopedia of the Sciences of Learning, Springer, pp. 3327-3330, 2012.
[22]	W. Liu, A. Daruna, and S. Chernova, “CAGE: Context-Aware Grasping Engine,” International Conference on Robotics and Automation (ICRA), 2020.
[23]	D. Pavlichenko, D. Rodriguez, C. Lenz, M. Schwarz, and S. Behnke, “Autonomous Bimanual Functional Regrasping of Novel Object Class Instances,” International Conference on Humanoid Robots (Humanoids), pp. 351-358, 2019.
[24]	Z. Qin, K. Fang, Y. Zhu, L. Fei-Fei, and S. Savarese, “KETO: Learning Keypoint Representations for Tool Manipulation,” arXiv preprint arXiv:1910.11977, 2019.
[25]	L. Manuelli, W. Gao, P. Florence, and R. Tedrake, “kPAM: KeyPoint Affordances for Category-Level Robotic Manipulation,” International Symposium on Robotics Research (ISRR), 2019.
[26]	T. Yu, P. Abbeel, S. Levine, and C. Finn, “One-Shot Hierarchical Imitation Learning of Compound Visuomotor Tasks,” International Conference on Intelligent Robots and Systems (IROS), 2019.
[27]	M. Shimizu, H. Kakuya, W.K. Yoon, K. Kitagaki, and K. Kosuge, “Analytical Inverse Kinematic Computation for 7-DOF Redundant Manipulators with Joint Limits and Its Application to Redundancy Resolution,” IEEE Transactions on Robotics, vol. 24, no. 5, pp. 1131-1142, 2008.
[28]	Z. Cui, H. Pan, Y. Peng, and Z. Han, “A Novel Inverse Kinematics Solution for a 7-DOF Humanoid Manipulator,” 2012 IEEE International Conference on Robotics and Automation (ICRA), pp. 2230-2234, 2012.
[29]	H. Moradi and S. Lee, “Joint Limit Analysis and Elbow Movement Minimization for Redundant Manipulators using Closed Form Method,” International Conference on Intelligent Computing (ICIC), vol. 3645, no. 2, pp. 423-432, 2005.
[30]	H. Wang and T. Murakami, “Advanced Observer Design for Multi-task Control in Visual Feedback based Redundant Manipulators,” 2013 IEEE International Conference on Mechatronics (ICM), pp. 231-236, 2013.
[31]	J.J. Craig, Introduction to Robotics: Mechanics and Control, 3rd Ed, New York, NY, USA: Prentice Hall, 2004.
[32]	M. W. Spong, S. Hutchinson, and M. Vidyasagar, Robot Dynamics and Control, 2nd Ed., John Wiley & Sons, 2004.
[33]	G.K. Singh and J. Claassens, “An Analytical Solution for the Inverse Kinematics of a Redundant 7DoF Manipulator with Link Offsets,” 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2976-2982, 2010.
[34]	Z. Cui, H. Pan, Y. Peng, and Z. Han, “A Novel Inverse Kinematics Solution for a 7-DOF Humanoid Manipulator,” 2012 IEEE International Conference on Robotics and Automation (ICRA), pp. 2230-2234, 2012.
[35]	M.J.D. Hayes, M.L. Husty, and P.J. Zsombor-Murray, “Singular Configurations of Wrist-partitioned 6R Serial Robots: A Geometric Perspective for Users,” Transactions of the Canadian Society for Mechanical Engineering, vol. 26, no. 1, pp. 41-55, 2002.
[36]	H. Zghal, R.V. Dubey, and J.A. Euler, “Efficient Gradient Projection Optimization for Manipulators with Multiple Degrees of Redundancy,” International Conference on Robotics and Automation (ICRA), pp. 1006-1011, 1990.
[37]	T.F. Chan and R.V. Dubey, “A Weighted Least-norm Solution Based Scheme for Avoiding Joint Limits for Redundant Joint Manipulators,” IEEE Transactions on Robotics and Automation, vol. 11, no. 2, 1995.
[38]	T.Y. Lin, M. Maire, S. Belongie, J. Hays, and P. Perona, “Microsoft COCO: Common Objects in Context,” European Conference on Computer Vision (ECCV), pp. 740-755, 2014.
[39]	J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779-788, 2016.
[40]	J. Redmon and A. Farhadi, “YOLO9000: Better, Faster, Stronger,” International Conference on Computer Vision and Pattern Recognition  (CVPR), pp. 7263-7271, 2017.
[41]	J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,” arXiv preprint arXiv:1804.02767, 2018.
[42]	W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. Reed, “SSD: Single Shot Multibox Detector,” European Conference on Computer Vision (ECCV), pp. 21-37, 2016.
[43]	R. Girshick, “Fast R-CNN,” International Conference on Computer Vision (ICCV), pp. 1440-1448, 2015.
[44]	S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, 2017.
[45]	"Convolutional Neural Network (CNN): CNN Computation Flow" (article, in Chinese), URL: https://reurl.cc/qD7VZy
[46]	K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
[47]	K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” International Conference on Computer Vision (ICCV), pp. 2961-2969, 2017.
[48]	R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation,” International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 580-587, 2014.
[49]	K. He, X. Zhang, S. Ren, and J. Sun, “Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, 2015. 
[50]	J.R. Uijlings, K.E. Van De Sande, T. Gevers, and A.W. Smeulders, “Selective Search for Object Recognition,” International Journal of Computer Vision, vol. 104, no. 2, pp. 154-171, 2013.
[51]	Lecture of University of Waterloo WAVE Lab, URL: http://wavelab.uwaterloo.ca/wp-content/uploads/2017/04/Lecture_6.pdf
[52]	J. Hosang, R. Benenson, and B. Schiele, “Learning Non-maximum Suppression,” International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4507-4515, 2017.
[53]	T.Y. Lin, P. Dollár, R. Girshick, K. He, and B. Hariharan, “Feature Pyramid Networks for Object Detection,” International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2117-2125, 2017.
[54]	Mask R-CNN: A Perspective on Equivariance, URL: http://kaiminghe.com/iccv17tutorial/maskrcnn_iccv2017_tutorial_kaiminghe.pdf
[55]	S. Kumra and C. Kanan, “Robotic Grasp Detection using Deep Convolutional Neural Networks,” International Conference on Intelligent Robots and Systems (IROS), 2017.
[56]	Robot Learning Lab - Cornell grasping dataset, URL: http://pr.cs.cornell.edu/grasping/rect_data/data.php
[57]	C.M. Lin, C.Y. Tsai, Y.C. Lai, S.A. Li, and C.C. Wong, “Visual Object Recognition and Pose Estimation Based on a Deep Semantic Segmentation Network,” IEEE Sensors Journal, vol. 18, no. 22, pp. 9370-9381, Nov. 2018.
Full-text access rights
On campus
Print copy on campus: release deferred until 2023-07-31
Electronic full text authorized for on-campus release
Electronic thesis on campus: release deferred until 2023-07-31
Bibliographic record on campus: available immediately
Off campus
Authorization granted
Electronic thesis off campus: release deferred until 2023-07-31
