§ Thesis Bibliographic Record
  
System ID U0002-1308201916260800
DOI 10.6846/TKU.2019.00324
Title (Chinese) 基於模糊系統的自駕車場景中機車騎士意向估測
Title (English) Motor Rider Intention Detecting by Fuzzy System in Autonomous Vehicle Scenario
Institution Tamkang University
Department (Chinese) 電機工程學系機器人工程碩士班
Department (English) Master's Program in Robotics Engineering, Department of Electrical and Computer Engineering
Academic Year 107 (2018–2019)
Semester 2
Publication Year 108 (2019)
Student (Chinese) 季伯諭
Student (English) Po-Yu Chi
Student ID 606470127
Degree Master's
Language Traditional Chinese
Defense Date 2019-06-20
Pages 33
Committee Advisor - 劉寅春
Co-advisor - 易志孝
Member - 邱謙松
Member - 江東昇
Keywords (Chinese) Autonomous Vehicle
OpenPose
Human Posture Estimation
Keywords (English) Autonomous Vehicle
OpenPose
Human Posture Estimation
Abstract (Chinese)
In today's industrial environment, the development of autonomous vehicles is an indispensable part. As automated carriers, self-driving vehicles can sense their environment and navigate without human operation; fully autonomous vehicles have not yet been commercialized, and most remain prototypes or demonstration systems.
    This thesis takes the environmental perception of autonomous vehicles as its research direction, with the motorcycles common in Taiwan as the research subject. Camera images capture the rider's posture while riding, a human skeleton is identified from that posture, and changes in posture allow the system to recognize the behavioral intention of the motorcycle rider ahead.
    In this thesis, we build on OpenPose, developed by CMU-Perceptual-Computing-Lab, as the basis of a human-intention estimation system, and propose using the postural relationships between joints to judge the rider's intention in advance. The video interface displays the current posture judgment of the rider ahead together with a computed probability; as the rider's posture changes, the system estimates whether the rider will next turn left, turn right, or change lanes.
Abstract (English)
In today's industrial environment, the development of autonomous vehicles is an indispensable part. As automated vehicles, they can sense their environment and navigate without human operation; fully autonomous vehicles have not yet been commercialized, and most remain prototypes or demonstration systems.
    This thesis takes the environmental perception of self-driving cars as its research direction, using the motorcycles common in Taiwan as the research subject. Camera images capture the motor rider's posture while riding, a human skeleton is identified from that posture, and changes in posture allow the system to recognize the behavioral intention of the motor rider ahead.
    In this thesis, we use OpenPose, developed by CMU-Perceptual-Computing-Lab, as the basis of the human-intention estimation system, and propose using the angular relationships between joints to judge the motor rider's intention in advance. The video interface displays the current posture judgment of the rider ahead together with a computed probability; as the rider's posture changes, the system estimates whether the rider will next turn left, turn right, or change lanes.
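The pipeline the abstract describes (OpenPose keypoints, a 3D vector angle between the left shoulder and the spine, then fuzzy inference of a turning probability; cf. Sections 3.1.2–3.1.3 and Figures 4.1–4.8) can be illustrated with a minimal Python sketch. Everything specific below is an assumption for illustration only: the joint names, the triangular membership breakpoints, and the singleton weighted-average defuzzification are placeholders, not the thesis's calibrated rule base (Tables 3.1–3.2).

    import numpy as np

    def angle_between(v1, v2):
        # Angle in degrees between two 3D vectors (Section 3.1.3 computes
        # such joint angles; the exact joints used here are assumptions).
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    def tri(x, a, b, c):
        # Triangular membership function peaking at b (placeholder shape).
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def turn_probability(neck, left_shoulder, mid_hip):
        # neck, left_shoulder, mid_hip: 3D joint positions (np.ndarray),
        # e.g. read from OpenPose's pose output (Figure 3.2).
        spine = mid_hip - neck                  # torso axis
        shoulder = left_shoulder - neck         # left-shoulder vector
        theta = angle_between(shoulder, spine)  # crisp fuzzy-system input

        # Fuzzification into small / medium / large lean; breakpoints are
        # illustrative, not the thesis's calibrated values.
        w = [tri(theta, 60, 80, 100),
             tri(theta, 80, 100, 120),
             tri(theta, 100, 120, 140)]

        # Placeholder rule base: larger lean -> higher turning probability,
        # defuzzified as a weighted average of singleton outputs.
        out = [0.1, 0.5, 0.9]
        total = sum(w)
        return sum(wi * oi for wi, oi in zip(w, out)) / total if total else 0.0

Called once per video frame, such a function yields the kind of probability the abstract says the video interface displays; the thesis itself may use different joints, membership functions, and defuzzification.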
Table of Contents
Contents
Acknowledgement I
Abstract in Chinese II
Abstract in English III
Contents IV
List of Figures VI
List of Tables VIII
1 Introduction 1
1.1 Background  . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Autonomous Vehicle  . . . . . . . . . . . . . . . 5
1.2 Motivation  . . . . . . . . . . . . . . . . . . . . 6
1.3 Problem Statement . . . . . . . . . . . . . . . . . 7
2 Human Intention Estimation  . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1 Traditional Theory of Planned Behavior  . . . . . . 9
2.2 Convolutional Neural Network . . . . . . . . . . . 11
2.3 Part Affinity Fields . . . . . . . . . . . . . . . 13
2.3.1 PAF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3.2 Confidence maps used for joint detection . . . . . . . . . . . . . . 13
2.3.3 Use PAF for body part combination . . . . . . . . . . . . . . . 14
2.3.4 Bottom-up method . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4 OpenPose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3 Fuzzy Controller  . . . . . . . . . . . . . . . . . . . . . . . . 18
3.1 Motor Rider Intention Detecting with Fuzzy System  . . . . . . . . . 18
3.1.1 Fuzzy Language . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.1.2 Membership Function . . . . . . . . . . . . . . . . . . . . . . . 19
3.1.3 3D Vector Angle Calculation . . . . . . . . . . . . . . . . . . . 20
4 Experiment Result  . . . . . . . . . . . . . . . . . . . . . . . . 21
4.1 Estimation 1: The front motor rider turns right . . . . . . . . . . . . . 22
4.2 Estimation 2: The front motor rider turns left . . . . . . . . . . . . . . 24
4.3 Estimation 3: Motor rider turns right into the opposite lane . . . . . . 26
4.4 Estimation 4: Motor rider turns left into the opposite lane . . . . . . . 28
5 Conclusion and Future work  . . . . . . . . . . . . . . . . . . . . . . . . 30
5.1 SVM (Support Vector Machine) . . . . . . . . . . . . . . . . . . . . . . 30
5.2 Conclusion and Future work . . . . . . . . . . . . . . . . . . . . . . . . 31
References  . . . . . . . . . . . . . . . . . . . . . . . . 32

List of Figures
1.1 Autonomous Vehicle Levels . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1 Schematic diagram of planned behavior model . . . . . . . . . . . . . . 10
2.2 Hierarchical diagram of behavioral intention reasoning . . . . . . . . . 10
2.3 Convolutional Layer (Feature Detector) . . . . . . . . . . . . . . . . . . 11
2.4 Pooling Layer (Max Pooling) . . . . . . . . . . . . . . . . . . . . . . . . 12
2.5 Fully connected layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.6 Using Part Affinity Field . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.7 Body joint estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.8 Facial estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.9 Gesture estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.10 Facial and Gesture estimation . . . . . . . . . . . . . . . . . . . . . . . 17
3.1 Fuzzy Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2 Pose Output Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.1 Membership function of the angle between the left shoulder and the spine 23
4.2 Membership function of the probability of turning . . . . . . . . . . . . 23
4.3 Membership function of the angle between the left shoulder and the spine 25
4.4 Membership function of the probability of turning . . . . . . . . . . . . 25
4.5 Membership function of the angle between the left shoulder and the spine 27
4.6 Membership function of the probability of turning . . . . . . . . . . . . 27
4.7 Membership function of the angle between the left shoulder and the spine 29
4.8 Membership function of the probability of turning . . . . . . . . . . . . 29
5.1 SVM uses feature coordinates to analyze non-separable feature state spaces 30

List of Tables
3.1 Fuzzy Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Fuzzy Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.1 Table of angle changes when turning right . . . . . . . . . . . . . . . . 22
4.2 Table of angle changes when turning left . . . . . . . . . . . . . . . . . 24
4.3 Table of angle changes when turning right into the opposite lane . . . 26
4.4 Table of angle changes when turning left into the opposite lane . . . . 28
References
[1] "Wikipedia: Self-driving car." [Online]. Available: https://en.wikipedia.org/wiki/Self-driving_car
[2] S. Singh, "Critical reasons for crashes investigated in the National Motor Vehicle Crash Causation Survey," 2015.
[3] A. Tsukahara, Y. Hasegawa, K. Eguchi, and Y. Sankai, "Restoration of gait for spinal cord injury patients using HAL with intention estimator for preferable swing speed," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 23, no. 2, pp. 308–318, March 2015.
[4] J. Huang, W. Huo, W. Xu, S. Mohammed, and Y. Amirat, "Control of upper-limb power-assist exoskeleton using a human-robot interface based on motion intention recognition," IEEE Transactions on Automation Science and Engineering, vol. 12, no. 4, pp. 1257–1270, Oct 2015.
[5] L. Wang, K. Lekadir, S. Lee, R. Merrifield, and G. Yang, "A general framework for context-specific image segmentation using reinforcement learning," IEEE Transactions on Medical Imaging, vol. 32, no. 5, pp. 943–956, May 2013.
[6] S. Sheikhi and J.-M. Odobez, "Combining dynamic head pose-gaze mapping with the robot conversational state for attention recognition in human-robot interactions," Pattern Recognition Letters, vol. 66, pp. 81–90, 2015.
[7] F. Schneemann and I. Gohl, "Analyzing driver-pedestrian interaction at crosswalks: A contribution to autonomous driving in urban environments," in 2016 IEEE Intelligent Vehicles Symposium (IV), June 2016, pp. 38–43.
[8] T. Lagström and V. M. Lundgren, "AVIP - autonomous vehicles' interaction with pedestrians - an investigation of pedestrian-driver communication and development of a vehicle external interface," 2016.
[9] D. Osipychev, D. Tran, W. Sheng, and G. Chowdhary, "Human intention-based collision avoidance for autonomous cars," in 2017 American Control Conference (ACC), May 2017, pp. 2974–2979.
[10] T. Bandyopadhyay, K. S. Won, E. Frazzoli, D. Hsu, W. S. Lee, and D. Rus, "Intention-aware motion planning," in Algorithmic Foundations of Robotics X, E. Frazzoli, T. Lozano-Perez, N. Roy, and D. Rus, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, pp. 475–491.
[11] H. Bai, S. Cai, N. Ye, D. Hsu, and W. S. Lee, "Intention-aware online POMDP planning for autonomous driving in a crowd," in 2015 IEEE International Conference on Robotics and Automation (ICRA), May 2015, pp. 454–460.
[12] "SAE levels." [Online]. Available: https://www.sae.org/news/press-room/2018/12/sae-international-releases-updated-visual-chart-for-its-%E2%80%9Clevels-of-driving-automation%E2%80%9D-standard-for-self-driving-vehicles
[13] "Death rate." [Online]. Available: https://group.dailyview.tw/article/detail/705
[14] "Government data." [Online]. Available: https://www.motc.gov.tw/uploaddowndoc?file=survey/201710311544141.pdf&filedisplay=105%E5%B9%B4%E6%A9%9F%E8%BB%8A%E4%BD%BF%E7%94%A8%E7%8B%80%E6%B3%81%E8%AA%BF%E6%9F%A5%E5%A0%B1%E5%91%8A%28%E5%85%A8%29.pdf&flag=doc
[15] Z. Cao, T. Simon, S. Wei, and Y. Sheikh, "Realtime multi-person 2D pose estimation using part affinity fields," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017, pp. 1302–1310.
[16] S. Qiao, Y. Wang, and J. Li, "Real-time human gesture grading based on OpenPose," in 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Oct 2017, pp. 1–6.
Full-Text Access Rights
On campus
Printed thesis available immediately
Electronic full text authorized for on-campus access
On-campus electronic full text available immediately
Off campus
Authorization granted
Off-campus electronic full text available immediately
