§ Browse Thesis Bibliographic Record
  
System ID U0002-1908202110041700
DOI 10.6846/TKU.2021.00490
Title (Chinese) 以非直觀之角度對用路人意向進行資料擷取
Title (English) Data Acquisition of Pedestrian Intentions from Non-intuitive Angles
University Tamkang University (淡江大學)
Department (Chinese) 電機工程學系碩士班 (Master's Program, Department of Electrical and Computer Engineering)
Department (English) Department of Electrical and Computer Engineering
Academic Year 109
Semester 2
Publication Year 110 (2021)
Author (Chinese) 于建奕
Author (English) Chien-I Yu
Student ID 608460027
Degree Master's
Language Traditional Chinese
Defense Date 2021-07-14
Pages 36
Committee Advisor - 劉寅春
Member - 邱謙松
Member - 江東昇
Keywords (Chinese) OpenPose
K-means clustering
behavioral intention
non-intuitive angle
Keywords (English) OpenPose
K-Means
behavioral intention
non-intuitive angle
Abstract (Chinese, translated)
This study uses OpenPose as the basis for estimating human behavioral intention. By filming pedestrians as their behavioral intention changes, it obtains the skeleton of the human posture and the keypoints of each joint, and builds a database of human behavioral intentions.
This study proposes using the angles formed by the keypoints as the basis for detecting changes in human behavioral intention. A low-pass filter removes high-frequency disturbances in each keypoint that exceed the limits of human motion; 1092 distinct angles are defined from the keypoints; and a mean filter then suppresses noise-induced fluctuations to prevent the system from making misjudgments.
Using unsupervised K-means clustering, this study performs a cluster analysis of pedestrians' walking intentions. The silhouette coefficient and the elbow method verify whether a given clustering yields the optimal number of clusters, and a test set evaluates the trained behavioral-intention estimation model to confirm that its accuracy is the highest among the clustering results. The final clustering model classifies the behavioral intention of a pedestrian in each video frame as one of two intentions: walking or stopping.
This study found that image data must account for variables in different situations, such as insufficient light at night rendering the data unusable; the camera's viewing angle also changes with the subject's position. Since this study analyzes statically filmed road scenes, limitations remain to be overcome. Future work will extend the data processing to a wider variety of images to broaden the database's coverage and thereby optimize the classification results and obtain a better judgment model.
Abstract (English)
This research uses OpenPose as the basis for human behavioral intention estimation. By filming pedestrians as their behavioral intention changes, it obtains the skeleton and keypoints of the pedestrian's posture and constructs a database of pedestrian behavioral intentions.
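As a hedged sketch of this extraction step, the per-frame keypoints could be read from OpenPose's documented JSON output (the two-keypoint sample below is illustrative toy data, not a real frame):

```python
import json
import numpy as np

def load_keypoints(json_text):
    """Parse one OpenPose frame JSON string into an (N, 3) array of
    (x, y, confidence) keypoints for the first detected person."""
    frame = json.loads(json_text)
    people = frame.get("people", [])
    if not people:
        return None  # OpenPose detected no person in this frame
    flat = people[0]["pose_keypoints_2d"]  # flat layout: [x0, y0, c0, x1, y1, c1, ...]
    return np.asarray(flat, dtype=float).reshape(-1, 3)

# Minimal two-keypoint example in OpenPose's output layout.
sample = json.dumps({"people": [{"pose_keypoints_2d": [10, 20, 0.9, 30, 40, 0.8]}]})
kp = load_keypoints(sample)  # array of (x, y, confidence) rows
```

In a full pipeline, one such file per video frame would be stacked into a time series per keypoint before the filtering described below.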
This research proposes using the angles formed by the keypoints as the basis for judging changes in human behavioral intention. A low-pass filter eliminates high-frequency disturbances in each keypoint that exceed the limits of human motion; 1092 different angles are then defined from the keypoints; and a mean filter removes residual noise to prevent the system from making misjudgments.
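A hedged sketch of the angle feature and the mean filter described above (the keypoint coordinates and window size are illustrative assumptions; the low-pass stage is not shown):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at vertex b formed by keypoints a-b-c."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))  # clip guards rounding

def mean_filter(signal, window=5):
    """Moving-average (mean) filter to suppress per-frame jitter in an angle series."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

# A right angle at the vertex keypoint, then a constant angle series smoothed.
theta = joint_angle((0, 1), (0, 0), (1, 0))
smoothed = mean_filter(np.full(10, 90.0), window=3)
```

One angle like this per frame, for each chosen keypoint triple, would yield the time-series angle features that the clustering stage consumes.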
Using unsupervised K-means clustering, this study conducts a cluster analysis of pedestrians' walking intentions. The silhouette coefficient and the elbow method verify the best number of clusters, and a test dataset evaluates the trained behavioral-intention estimation model to confirm that its accuracy is the highest among the clustering results. In the end, this study obtained a clustering model for pedestrian behavioral intentions that classifies the intention in each video frame as one of two intentions: walking or stopping.
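The clustering and elbow verification can be sketched with a minimal NumPy K-means (silhouette scoring is omitted for brevity; the synthetic two-cluster "angle" data below is illustrative, not the thesis dataset):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's K-means; returns (centers, labels, inertia)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # seed from data points
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])  # keep empty clusters put
        if np.allclose(new, centers):
            break
        centers = new
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    inertia = ((X - centers[labels]) ** 2).sum()  # within-cluster sum of squares
    return centers, labels, inertia

# Two well-separated clusters standing in for "walking" vs "stopping" frames.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(10, 1, (30, 2)), rng.normal(50, 1, (30, 2))])
inertias = {k: kmeans(X, k)[2] for k in (1, 2, 3)}  # elbow: sharp drop at k=2
```

The elbow criterion picks the k where inertia stops dropping sharply; here that is k=2, matching the two intended intention classes.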
This study found that image data must account for variables in different situations, such as insufficient light at night making the data unusable. The viewing angle of the image also changes with the subject's location. This research uses statically filmed road scenes as the basis for analysis, and limitations remain at this stage. Future work will therefore address the processing of various kinds of images to broaden the database and thereby optimize the classification results and obtain a better judgment model.
Table of Contents
Chapter 1 Introduction	1
1.1 Background	1
1.1.1 Autonomous vehicles	5
1.2 Research motivation	7
1.3 Problem statement	8
1.3.1 Human skeleton keypoints and angle definitions	8
1.3.2 Exhaustive enumeration and time-series angle data	9
Chapter 2 Human Pose Data Preprocessing	11
2.1 OpenPose	11
2.2 Data preprocessing procedure	12
2.2.1 Missing data and imputation	14
2.2.2 Low-pass filter and the limit frequencies of human motion	15
2.2.3 Mean filter	16
2.3 Preprocessing results	17
2.3.1 Pedestrian data preprocessing results	18
Chapter 3 K-means Cluster Analysis	19
3.1 Definition of behavioral intention	19
3.2 Validation of clustering results	20
3.2.1 Silhouette coefficient	20
3.2.2 Elbow method	20
3.3 Pedestrian walking data measurement	22
3.3.1 K-means model optimization	23
3.3.2 Clustering performance of the old and new models	24
Chapter 4 Experimental Results	31
4.1 Scenario: multiple pedestrians crossing a road	32
Chapter 5 Conclusions and Future Work	34
Chapter 6 References	35

List of Figures
Fig. 1.1 SAE classification of driving automation levels	5
Fig. 1.2 Difference between intuitive and non-intuitive angles	9
Fig. 1.3 Diagram of keypoints and angles	10
Fig. 2.1 OpenPose skeleton	11
Fig. 2.2 Data preprocessing flowchart	13
Fig. 2.3 Example of keypoints returned as null by OpenPose	14
Fig. 2.4 "Stop" intention - angular velocity No. 1908	18
Fig. 2.5 "Walk" intention - angular velocity No. 1908	18
Fig. 2.6 "Stop" intention - angular velocity No. 9955	18
Fig. 2.7 "Walk" intention - angular velocity No. 9955	18
Fig. 2.8 Diagram of angular velocity No. 1908	18
Fig. 2.9 Diagram of angular velocity No. 9955	18
Fig. 3.1 Cluster analysis flowchart	21
Fig. 3.2 K-means cluster analysis - pedestrian walking intention data	22
Fig. 3.3 Intuitive-angle pedestrian intention K-means model	23
Fig. 3.4 Non-intuitive-angle pedestrian intention K-means model	23
Fig. 3.5 Validation video 1	25
Fig. 3.6 Intuitive-angle K-means model validation	25
Fig. 3.7 Non-intuitive-angle K-means model validation	25
Fig. 3.8 Validation video 2	26
Fig. 3.9 Intuitive-angle K-means model validation	26
Fig. 3.10 Non-intuitive-angle K-means model validation	26
Fig. 3.11 Validation video 3	27
Fig. 3.12 Intuitive-angle K-means model validation	27
Fig. 3.13 Non-intuitive-angle K-means model validation	27
Fig. 3.14 Validation video 4	28
Fig. 3.15 Intuitive-angle K-means model validation	28
Fig. 3.16 Non-intuitive-angle K-means model validation	28
Fig. 3.17 Test video 1	29
Fig. 3.18 Intuitive-angle K-means model validation	29
Fig. 3.19 Non-intuitive-angle K-means model validation	29
Fig. 3.20 Test video 2	30
Fig. 3.21 Intuitive-angle K-means model validation	30
Fig. 3.22 Non-intuitive-angle K-means model validation	30
Fig. 4.1 Scenario of multiple pedestrians crossing a road	31
Fig. 4.2 Pedestrian data positions in the K-means model	33
Fig. 4.3 Pedestrians and their corresponding behavioral intentions	33

List of Tables
Table 2.1 Limit frequencies of human joint motion	15
Table 3.1 Video frame counts and behavioral intentions	24
Table 4.1 Pedestrian angle differences between consecutive frames	32
Full-Text Usage Authorization
On campus
Printed thesis available on campus immediately
Electronic full text authorized for on-campus access
On-campus electronic thesis available immediately
Off campus
Authorization granted
Off-campus electronic thesis available immediately
