§ Browse Thesis Bibliographic Record
  
System ID	U0002-2606200719161700
DOI	10.6846/TKU.2007.00837
Title (Chinese)	人體運動行為之分類與搜尋
Title (English)	Classification and Retrieval on Human Kinematical Movements
Title (Third Language)	
Institution	Tamkang University (淡江大學)
Department (Chinese)	資訊工程學系博士班
Department (English)	Department of Computer Science and Information Engineering
Foreign Degree: School Name	
Foreign Degree: College Name	
Foreign Degree: Institute Name	
Academic Year	95 (2006-2007)
Semester	2
Year of Publication	96 (2007)
Author (Chinese)	黃俊宏
Author (English)	Chun-Hong Huang
Student ID	892190017
Degree	Doctoral
Language of Thesis	English
Second Language	
Date of Oral Defense	2007-06-06
Number of Pages	97
Committee	Advisor - 施國琛 (tshih@cs.tku.edu.tw)
Member - 吳家麟
Member - 廖弘源
Member - 趙榮耀
Member - 許輝煌
Keywords (Chinese)	骨架辨識樹
人體運動
人體運動搜尋重現
變形動態演算法
虛擬實境模組語言
Keywords (English)	Skeleton Discrimination Tree
human movement
Human movement retrieval
Mutative Dynamic Programming
VRML
Keywords (Third Language)	
Subject Classification	
Chinese Abstract
Motion retrieval is an interesting but challenging research topic, yet most motion retrieval systems are built on 2D video. With 3D motion-capture technology and 3D VRML animation, the trajectories of real human movements can now be represented, analyzed, and recognized with 3D computer techniques. In this dissertation we propose a 3D human movement retrieval system that allows users to find similar 3D human movements. For movement analysis and comparison the system contains two main components. The first component classifies movement types using a Skeleton Discrimination Tree, which performs a preliminary classification based on the distribution of limb energy during movement and is particularly effective for non-jumping movement types; in addition, jumping and non-jumping movement groups are distinguished by the sharp variation of foot energy along the y-axis and by whether both feet leave the floor. This classification component filters out movements of unrelated types. The second component compares movement similarity and timing similarity using a mutative dynamic programming algorithm. Movement similarity is measured by comparing the trajectories of corresponding joints in the two movements. Based on the opinions of several physical-education professors and on the literature, sixteen key joints were chosen for trajectory tracking, including the wrists, elbows, ankles, knees, neck, head, and other important joints. The trajectories used for comparison are extracted from these sixteen feature joints. Each trajectory is a sequence of point coordinates, which is converted into a sequence of vectors; whether two corresponding trajectory elements match is decided by the angle between their vectors, and mutative dynamic programming finds the longest common subsequence of the two trajectories. A longer common subsequence indicates a higher movement similarity. After the longest common subsequence is found, we further compute the timing difference between its corresponding elements, so that, for example, raising both hands at the same time is not mistaken for raising them at different times.
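The abstract does not reproduce the matching algorithm itself; the following is only a minimal Python sketch of the angle-thresholded longest-common-subsequence idea described above. The function names and the 30-degree threshold are illustrative assumptions, not values taken from the thesis.

```python
import math

def angle_deg(u, v):
    """Angle in degrees between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    if nu == 0.0 or nv == 0.0:
        return 0.0
    cos = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.degrees(math.acos(cos))

def to_vectors(track):
    """Turn a list of (x, y, z) joint positions into successive displacement vectors."""
    return [tuple(b - a for a, b in zip(p, q)) for p, q in zip(track, track[1:])]

def longest_common_motion(query_track, target_track, max_angle=30.0):
    """Dynamic-programming LCS over two joint trajectories: two vectors count as
    the same sequence element when the angle between them is below max_angle."""
    q, t = to_vectors(query_track), to_vectors(target_track)
    dp = [[0] * (len(t) + 1) for _ in range(len(q) + 1)]
    for i in range(1, len(q) + 1):
        for j in range(1, len(t) + 1):
            if angle_deg(q[i - 1], t[j - 1]) <= max_angle:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(q)][len(t)]
```

Summing this score over the sixteen tracked joints would give an overall movement similarity, and the frame indices of the matched elements could then be compared to estimate the timing difference mentioned above.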
The extracted feature coordinates are streams of numbers, which by themselves provide little visual value to the user; it is difficult to perceive the spatial and temporal variation of a whole human movement from raw numeric data. The trajectory coordinates are therefore imported into a 3D human model in VRML format. Given a 3D VRML human movement model as a query, the system automatically retrieves similar movements ranked by similarity. Three physical-education teachers and master's students were invited to verify the retrieval results, and the experiments show that the results are satisfactory.
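As an illustration of this visualization step, the sketch below writes one joint trajectory into a VRML97 PositionInterpolator that moves a marker along the captured path. It is not the exporter used in the thesis; the node names and sample coordinates are assumptions for illustration only.

```python
def trajectory_to_vrml(track, cycle_interval=5.0):
    """Emit a minimal VRML97 fragment that animates a small sphere along a joint trajectory.
    `track` is a list of (x, y, z) coordinates; keys are spread evenly over one cycle."""
    n = len(track)
    denom = max(n - 1, 1)
    keys = ", ".join("%.3f" % (i / denom) for i in range(n))
    values = ", ".join("%.3f %.3f %.3f" % p for p in track)
    return f"""#VRML V2.0 utf8
DEF Joint Transform {{
  children [ Shape {{ geometry Sphere {{ radius 0.03 }} }} ]
}}
DEF Clock TimeSensor {{ cycleInterval {cycle_interval} loop TRUE }}
DEF Path PositionInterpolator {{
  key [ {keys} ]
  keyValue [ {values} ]
}}
ROUTE Clock.fraction_changed TO Path.set_fraction
ROUTE Path.value_changed TO Joint.set_translation
"""

if __name__ == "__main__":
    # Three sample frames of a single joint; a real export would loop over all sixteen joints.
    print(trajectory_to_vrml([(0.0, 0.9, 0.0), (0.1, 1.0, 0.0), (0.2, 1.1, 0.1)]))
```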
In addition, the system provides adaptive parameters that let users adjust the retrieval according to their own point of view and find the movements they need. A query movement can also be compared with a standard movement in the database to identify the differences between them. We hope the system can supply animators with existing movements that can be reused instead of being re-created, reducing the time and cost of animation production, and can also serve coaches and athletes as an aid for improving sport techniques.
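A hedged sketch of one way the per-joint comparison against a standard movement could be reported (in the spirit of the per-axis joint differences shown in Figures 12 and 13); the dictionary-based layout is an assumption, not the thesis's data structure.

```python
def per_joint_difference(query, standard):
    """Average absolute difference along x, y, z for each joint.
    `query` and `standard` map joint names to lists of (x, y, z) positions."""
    report = {}
    for joint, q_track in query.items():
        s_track = standard[joint]
        n = min(len(q_track), len(s_track))
        if n == 0:
            continue
        diffs = [0.0, 0.0, 0.0]
        for q, s in zip(q_track[:n], s_track[:n]):
            for axis in range(3):
                diffs[axis] += abs(q[axis] - s[axis])
        report[joint] = tuple(d / n for d in diffs)
    return report
```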
English Abstract
Motion retrieval is an interesting but challenging research topic; however, most motion retrieval systems are designed on the basis of 2D video information. With motion-capture technology and VRML animation, it becomes possible to automatically represent, analyze, and adjust the 3D motions of a real person. In this study we propose a 3D human movement retrieval system that allows users to retrieve 3D kinematical movements. The system includes two major components for movement analysis and comparison. The first is a recognition unit for movement types based on a Skeleton Discrimination Tree, which classifies movements within the non-jump category according to the distribution of limb energy. Using the sharp variation of foot energy along the y-axis and the information on whether both feet remain on the ground, the system further distinguishes the “Jump” and “Non-Jump” movement groups. This recognition component filters out unrelated human movements. The second unit measures movement and synchronization similarity. The comparison is based on mutative dynamic programming, which considers the angles between the vectors that belong to corresponding feature trajectories. Sixteen joints are tracked, including the head, knees, elbows, wrists, and other important joints of the human body. The trajectories used for comparison are extracted from these sixteen feature joints; each trajectory is a sequence of coordinates that is transformed into a sequence of vectors, and these vector sequences are the feature information used to compute similarity with mutative dynamic programming.
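A minimal sketch of the root decision of the Skeleton Discrimination Tree as it is described above, separating “Jump” from “Non-Jump” by the y-axis foot-energy variation and ground contact; the energy definition and the thresholds are illustrative assumptions rather than the thesis's actual criteria.

```python
def is_jump(foot_y, floor_y=0.0, contact_tol=0.02, energy_threshold=0.5):
    """Classify a movement as a jump when the foot height varies strongly along
    the y-axis and both feet leave the floor in at least one frame.
    `foot_y` is a list of (left_y, right_y) foot heights per frame."""
    # Approximate the y-axis "energy" by summed squared frame-to-frame height changes.
    energy = sum((l2 - l1) ** 2 + (r2 - r1) ** 2
                 for (l1, r1), (l2, r2) in zip(foot_y, foot_y[1:]))
    both_feet_off = any(l > floor_y + contact_tol and r > floor_y + contact_tol
                        for l, r in foot_y)
    return energy > energy_threshold and both_feet_off
```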
The feature coordinates are raw numeric data, which cannot satisfy the human sense of sight; from the numbers alone it is difficult to perceive the motion of the whole body in the spatial and temporal domain. To solve this problem, the feature trajectories of the body parts are transformed into a 3D human body model rendered as VRML animation. A user provides a VRML human movement object, and the system automatically retrieves and ranks the similar human movements. The results were evaluated by three physical-education professors and master's students with good satisfaction. Besides, our system provides an adaptive parameter that is dynamically calculated according to the user's perception of motion features, and the query object can also be compared with a standard kinematical movement to find the difference at each joint.
Abstract (Third Language)
Table of Contents
LIST OF FIGURES	III
LIST OF TABLES	V
CHAPTER 1 INTRODUCTION	1
1.1 MOTIVATION	1
1.2 APPLICATION OF 3D MOTION RETRIEVAL	2
1.3 OVERVIEW OF APPROACH TAKEN	3
1.3.1 PRE-WORKS	4
1.3.2 MAIN WORKS	5
CHAPTER 2 RELATED WORK	9
2.1 TRACKING	9
2.1.1 MODEL-BASED TRACKING 	10
2.1.2 FEATURE-BASED TRACKING	14
2.1.3 MULTI-CAMERA TRACKING	16
2.2 BEHAVIOR UNDERSTANDING	16
2.2.1 GENERAL TECHNIQUES	17
2.2.2 ACTION RECOGNITION	20
CHAPTER 3 SYSTEM REQUIREMENT	25
3.1 VRML	25
3.1.1 A WORD ABOUT VRML VERSIONS	26
3.1.2 THE VRML SYNTAX	29
3.2 3D BROWSER	35
3.3 VICON	37
3.4 CORTONA SDK	40
CHAPTER 4 THE PROPOSED SCHEMES	42
4.1 REPRESENTATION OF SKELETON	43
4.2 FEATURE SPACE	44
4.2.1 FEATURE EXTRACTION	44 
4.2.2 RELATIVE COORDINATES TRANSFORMED INTO ABSOLUTE COORDINATES	44
4.2.3 FEATURE REPRESENTATION	46
4.3 CLASSIFICATION OF MOVEMENT TYPES	48
4.3.1 SKELETON DISCRIMINATION TREE	48
4.4 SIMILARITY OF HUMAN MOVEMENT	51
4.4.1 HUMAN MOTION SIMILARITY	52
4.4.2 THE SIMILARITY OF SYNCHRONIZATION	57
4.5 IMPROVING THE SPORT SKILL	60
4.6 GRAPHIC USER INTERFACE	63
CHAPTER 5 EXPERIMENT AND ANALYSIS 	64
5.1 ANALYSIS OF DYNAMIC PROGRAMMING	64
5.2 EXPERIMENT AND COMPARISON	67
CHAPTER 6 CONCLUSION AND FUTURE WORK	71
BIBLIOGRAPHY	73
PAPER LIST	86
APPENDIX	89

LIST OF FIGURES
Figure 1.	3D animation in VRML	31
Figure 2.	The VRML plugin--cortona plug-ins in Internet Explorer which shows the VR scene	37
Figure 3.	(A) and (B) Motion capture, (C) the VICON system, and (D) the initial state of the filming procedure; a subject wearing a retro-reflective marker set in the NCPES Motion Capture Laboratory	38
Figure 4.	Representation of 3D VRML human motion models (A) Standing Board Jump. (B) Throwing Act	40
Figure 5.	Overview of system architecture	42
Figure 6.	A Human Body Skeleton	44
Figure 7.	Feature Points of a Body Skeleton and Animation Tracks	45
Figure 8.	Skeleton Discrimination Tree	50
Figure 9.	Example of query trajectory	55
Figure 10.	The result generated by the Fine-Tuning-Angle algorithm	56
Figure 11.	The result generated by our proposed Rough-Tuning-Angle algorithm	56
Figure 12.	Differences between standard kinematical movement and query object in x, y, z	61
Figure 13.	The GUI shows the difference of 8 joints between the standard human movement and query object	62
Figure 14.	User interface of a 3D kinematical movement retrieval system	63
Figure 15.	The motions have the same track, but with different timer	64
Figure 16.	The motions have the same track, but with different timer	65
Figure 17.	(A) Query object (B) The result generated by the method applied to the “Min” function	66
Figure 18.	The results generated by our proposed DP method applied to the “Max” function	67
Figure 19.	The results of our system. (A) Baseball Swing with right hand, (B) Side Baseball Pitching with left hand	69

LIST OF TABLES
Table 1.	Overview of VRML versions	27
Table 2.	A VRML file sample	32
Table 3.	Overview of 3D Browser Plugins	36
Table 4.	Timer of a motion sequence	  46
Table 5.	Parameters for Similarity Calculation	50
Table 6.	Correspondence relation between q_h and t_h, which share the longest common sequence q'_h and t'_h	58
Table 7.	The experimental result of the DP approach applied to “Max” and “Min”	66
References
[1].Dejan V. and D. Saupe, “3D Model Retrieval”, in B. Falcidieno (Ed.), Proceedings of the Spring Conference on Computer Graphics 2000, Comenius University Press, Bratislava, Slovakia, pp.89-93, May 2000.
[2].K. Arbter, W.E. Snyder, H. Burkhardt, G. Herzinger, “Application of Affine-invariant Fourier descriptors to recognition of 3D objects”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 17, No12, pp.640–647, 1990.
[3].Motofumi T. Suzuki, et al., “A similarity retrieval of 3D Polygonal Models Using Rotation Invariant Shape Descriptors”, IEEE International Conference on Systems, Man, and Cybernetics (SMC 2000), pp.2952-2956, 2000.
[4].Ching-Sheng Wang, Timothy K. Shih, Chun-Hung Huang and Jia-Fu Chen, “Content-Based Information Retrieval for VRML 3D Objects”, Highly Commended Paper Award in the 2003 International Conference on Advanced Information Networking and Applications (AINA'03), Xi'an, China, March 27–9, 2003.
[5].Sangho Park, Jihun Park, and Jake K. Aggarwal, "Video Retrieval of Human Interactions Using Model-Based Motion Tracking and Multi-layer Finite State Automata”, International Conference on Image and Video Retrieval (CIVR 2003), LNCS 2728, pp.394–403, 2003.
[6].Ben-Arie, J., Wang, Z., Pandit, P. and Rajaram, S., “Human Activity Recognition Using Multidimensional Indexing”, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 24, No. 8, pp.1091-1104, August 2002.
[7].Yahya Aydin, Hiroki Takahashi & Masayuki Nakajima, “Database Guided Animation of Grasp Movement for Virtual Actors”, Proceedings the 4th International Conference on Multimedia Modeling of Multimedia Modeling, pp.213-225, 1997.
[8].Akanksha, Z. Huang, B. Prabhakaran , and C. R. Ruiz, Jr.,“Reusing motions and models in animations”, Proceedings of the 6th Eurographics workshop on Multimedia, pp.21-32, September 08-09, 2001.
[9].Jezekiel Ben-Arie, Purvin Pandit, and ShyamSundar Rajaram,” Design of A Digital Library for Human Movement”, Proceedings of the 1st ACM/IEEE-CS joint conference on Digital libraries, pp.300–309, 2001.
[10].J.K. Aggarwal, Q. Cai, W. Liao, B. Sabata, “Non-Rigid motion analysis: articulated & elastic motion”, Computer Vision Image Understanding Vol. 70, No. 2, pp.142–156, 1998.
[11].I.A. Karaulova, P.M. Hall, A.D. Marshall, “A hierarchical model of dynamics for tracking people with a single video camera”, British Machine Vision Conference, pp.352–361, 2000.
[12].Mun Wai Lee.;and Cohen, I., “Human body tracking with auxiliary measurements”, Proceedings of IEEE International Workshop on Analysis and Modeling of Faces and Gestures(AMFG 2003), pp.112–119, 2003.
[13].Y. Guo, G. Xu, S. Tsuji, “Tracking human body motion based on a stick figure model”, Journal of Visual Communication and Image, Vol. 5, No. 1, pp.1–9, 1994.
[14].Y. Guo, G. Xu, S. Tsuji, “Understanding human motion patterns”, Proceedings of the International Conference on Pattern Recognition, pp.325–329, 1994.
[15].C.R. Wren, B.P. Clarkson, A. Pentland, “Understanding purposeful human motion”, Proceedings of the International Conference on Automatic Face and Gesture Recognition, France, March 2000.
[16].Y. Iwai, K. Ogaki, M. Yachida, “Posture estimation using structure and motion models”, Proceedings of the International Conference on Computer Vision, vol. 1, pp.214-219, Greece, September 1999.
[17].Y. Luo, F.J. Perales, J. Villanueva, “An automatic rotoscopy system for human motion based on a biomechanic graphical model”, Computer Graphics Vol. 16, Issue 4, pp.355–362, 1992.
[18].C. Yaniz, J. Rocha, F. Perales, “3D region graph for reconstruction of human motion”, Proceedings of the Workshop on Perception of Human Action at ECCV, 1998.
[19].M. Silaghi, et al., “Local and global skeleton fitting techniques for optical motion capture”, Proceedings of the Workshop on Modeling and Motion Capture Techniques for Virtual Environments, Switzerland, November 1998.
[20].S. Iwasaw, et al., “Real-time estimation of human body posture from monocular thermal images”, Proceedings of the IEEE CS Conference on Computer Vision and Pattern Recognition, 1997.
[21].M.K. Leung, Y.H. Yang, “First sight: a human body outline labeling system”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 17, No. 4, pp.359-377, 1995.
[22].I.-C. Chang, C.-L. Huang, “Ribbon-based motion analysis of human body movements”, Proceedings of the International Conference on Pattern Recognition, pp.436–440, 1996.
[23].S.A. Niyogi, E.H. Adelson, “Analyzing and recognizing walking figures in XYT”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.469–474, 1994.
[24].S. Ju, M. Black, Y. Yaccob, “Cardboard people: a parameterized model of articulated image motion”, Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, pp.38–44, 1996.
[25].Y. Kameda, M. Minoh, K. Ikeda, “Three-dimensional pose estimation of an articulated object from its silhouette image”, Proceedings of the Asian Conference on Computer Vision, pp.612-615, 1993.
[26].Y. Kameda, M. Minoh, K. Ikeda, “Three-dimensional pose estimation of a human body using a difference image sequence”, Proceedings of the Asian Conference on Computer Vision, 1995.
[27].C. Hu, et al., “Extraction of parametric human model for posture recognition using generic algorithm”, Proceedings of the Fourth International Conference on Automatic Face and Gesture Recognition, France, March 2000.
[28].Hiroyuki Segawa, Hiroyuki Shioya, Norikazu Hiraki, and Takashi Totsuka, “Constraint-conscious Smoothing Framework for the Recovery of 3D Articulated Motion from Image Sequences”, Proceedings of the 4th IEEE International Conference on Automatic Face and Gesture Recognition, pp.476-482, March 26-30, 2000.
[29].K. Rohr, “Towards model-based recognition of human movements in image sequences”, CVGIP: Image Understanding, Vol. 59, Issue 1, pp.94–115, 1994.
[30].D. Hogg. “Model-based vision: a program to see a walking person”, Image and Vision Computing, Vol. 1, Issue 1, pp.5-20, 1983.
[31].S. Wachter, H.-H. Nagel, “Tracking persons in monocular image sequences”, Computer Vision and Image Understanding, Vol.74, Issue 3, pp.174–192, 1999.
[32].Akitsugu Sato, Satoshi Kawada, Yoshihiko Osaki, and Masanobu Yamamoto, “3D Model-Based Tracking of Human Actions from Multiple Image Sequences”, Systems and Computers in Japan, Vol. 29, No. 8, pp.48-56,1998.
[33].J.M. Rehg, T. Kanade, “Model-based Tracking of Self-occluding Articulated Objects”, Proceedings of the Fifth International Conference on Computer Vision, Cambridge, pp.612-617, 1995.
[34].Ioannis Kakadiaris, ”Model-Based Estimation of 3D Human Motion”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 12, pp.1453-1459, December 2000.
[35].Lucas Kovar and Michael Gleicher, “Automated Extraction and Parameterization of Motions in Large Data Sets”, ACM Transactions on Graphics (TOG), Vol. 23, Issue 3, pp.559-568, August 2004.
[36].Ankur Agarwal and Bill Triggs, ”Learning to Track 3D Human Motion from Silhouettes” Proceedings of the 21st International Conference on Machine Learning, ACM International Conference Proceeding Series; Vol. 69, page 2, Banff, Canada, 2004.
[37].I.A. Kakadiaris, D. Metaxas, “Model-based estimation of 3-D human motion with occlusion based on active multi-viewpoint selection”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, pp.81–87, 1996.
[38].N. Goddard, “Incremental model-based discrimination of articulated movement from motion features”, Proceedings of the IEEE Workshop on Motion of Non-Rigid and Articulated Objects, Austin, pp. 89–94, 1994.
[39].C. Bregler, J. Malik, “Tracking people with twists and exponential maps", Proceedings of the IEEE CS Conference on Computer Vision and Pattern Recognition, 1998.
[40].O. Munkelt, et al., “A model driven 3D image interpretation system applied to person detection in video images”, Proceedings of the International Conference on Pattern Recognition, 1998.
[41].Q. Delamarre, O. Faugeras, “3D articulated models and multi-view tracking with silhouettes”, Proceedings of the International Conference on Computer Vision, Greece, September 1999.
[42].J.P. Luck, D.E. Small, C.Q. Little, “Real-time tracking of articulated human models using a 3d shape-from-silhouette method”, Proceedings of the Robot Vision Conference, Auckland, New Zealand, 2001.
[43].A. Shio and J. Sklansky, “Segmentation of people in motion”, Proceedings of IEEE Workshop on Visual Motion, IEEE Computer Society, pp.325–332, October 1991.
[44].B. K. P. Horn and B. G. Schunk, “Determining optical flow”, Artificial Intelligence, Vol. 16, No. 1–3, pp. 185–203,August 1981.
[45].C.R. Wren, A. Azarbayejani, T. Darrell, A.P. Pentland, “Pfinder: Real-time tracking of the human body”, IEEE Transactions on Pattern Analysis Machine Intelligence, Vol. 19, Issue 7, pp.780–785, 1997.
[46].R. Polana, R. Nelson, “Low level recognition of human motion”, Proceedings of the IEEE CS Workshop on Motion of Non-Rigid and Articulated Objects, Austin, TX, pp.77-82, 1994.
[47].Daesik Jang; Hyung-Il Cho, “Moving object tracking using active models”, Proceedings of International Conference on Image Processing (ICIP 98), pp.648–652, 4-7 October, 1998.
[48].A. F. Bobick and A. D. Wilson, A state-based technique for the summarization and recognition of gesture, in Proceedings of 5th International Conference on Computer Vision, pp.382–388, 1995.
[49].Yang Ran and Qinfen Zheng, ”Multi moving people detection from binocular sequences”, Proceedings of the 2003 IEEE International Conference on Acoustics, Speech. & Signal Processing, April 6-10. 2003, Hong Kong (cancelled).
[50].Q. Cai, J.K. Aggarwal, “Tracking human motion using multiple cameras”, Proceedings of the 13th International Conference on Pattern Recognition, pp.68–72, 1996.
[51].C. Myers and L.R. Rabiner., “A comparative study of several dynamic time-warping algorithms for connected word recognition”, The Bell System Technical Journal, Vol. 60, No. 7, pp.1389-1409, September 1981.
[52].C. Myers, L. Rabiner, A. Rosenberg, “Performance tradeoffs in dynamic time warping algorithms for isolated word recognition”, IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. 28, Issue 6, pp.623-635, 1980.
[53].K. Takahashi, S. Seki, H. Kojima, and R. Oka, “Recognition of dexterous manipulations from time varying images”, Proceedings of the IEEE Workshop on Motion of Non-Rigid and Articulated Objects, pp. 23–28, Austin 1994.
[54].Takeshi Yabe, and Katsumi Tanaka, “Similarity Retrieval of Human Motion as multi-stream time Series Data”, Proceedings of International Symposium on Database Applications in Non-Traditional Environments (DANTE'99), p.279, 1999.
[55].Yi Lin, “Efficient human motion retrieval in large databases”, Proceedings of the 4th international conference on Computer graphics and interactive techniques in Australasia and Southeast Asia (GRAPHITE '06), p.31-37, 2006.
[56].A.B. Poritz, “Hidden Markov models: a guided tour”, Proceedings of the International Conference on Acoustic, Speech and Signal Processing, pp.7–13, 1988.
[57].L. Rabiner, “A tutorial on hidden Markov models and selected applications in speech recognition”, Proceedings of the IEEE, Vol. 77, No. 2, pp.257-285, 1989.
[58].T. Starner, and A. Pentland, “Real-time American Sign Language recognition from video using hidden Markov models”, Proceedings of the International Symposium on Computer Vision, pp.265–270, 1995.
[59].J. Yamato, J. Ohya, and K. Ishii, “Recognizing human action in time-sequential images using hidden Markov model”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.379–385, 1992.
[60].M. Brand, N. Oliver, and A. Pentland, “Coupled hidden Markov models for complex action recognition”, Proceedings of the IEEE CS Conference on Computer Vision and Pattern Recognition, pp.994-999, 1997.
[61].C. Vogler, and D. Metaxas, “ASL recognition based on a coupling between HMMs and 3D motion analysis”, Proceedings of the International Conference on Computer Vision, pp.363–369, 1998.
[62].M. Rosenblum, Y. Yacoob, and L. Davis, “Human emotion recognition from motion using a radial basis function network architecture”, Proceedings of the IEEE Workshop on Motion of Non-Rigid and Articulated Objects, Austin, pp.43–49, 1994.
[63].J.K. Aggarwal, and Q. Cai, “Human motion analysis: a review”, Computer Vision Image Understanding, Vol. 73, Issue 3, pp.428–440, 1999.
[64].Wang L., Hu W., and Tan T., “Recent developments in human motion analysis”, Pattern Recognition, Vol. 36, No. 3, pp.585-601, March 2003.
[65].Y. Cui, and J.J. Weng, “Hand segmentation using learning-based prediction and verification for hand sign recognition”, Proceedings of the IEEE CS Conference on Computer Vision and Pattern Recognition, pp. 88–93, 1997.
[66].J.E. Boyd, “Global versus structured interpretation of motion: moving light displays”, Proceedings of the IEEE CS Workshop on Motion of Non-Rigid and Articulated Objects, pp. 18–25, 1997.
[67].A.F. Bobick, and J. Davis, “Real-time recognition of activity using temporal templates”, Proceedings of the IEEE CS Workshop on Applications of Computer Vision, pp. 39–42, 1996.
[68].J.W. Davis, and A.F. Bobick, “The representation and recognition of action using temporal templates”, Technical Report 402, MIT Media Lab, Perceptual Computing Group, 1997.
[69].R. Rosales, and S. Sclaroff, “3D trajectory recovery for tracking multiple objects and trajectory guided recognition of actions”, Proceedings of the IEEE CS Conference on Computer Vision and Pattern Recognition, Vol. 2, pp.117-223, June 1999.
[70].W. Freeman, and C. Weissman, “Television control by hand gestures”, Proceedings of the International Conference on Automatic Face and Gesture Recognition, pp.179–183, 1995.
[71].C. Bregler, “Learning and recognizing human dynamics in video sequences”, Proceedings of the IEEE CS Conference on Computer Vision and Pattern Recognition, pp.568–574, 1997.
[72].L. Campbell, and A. Bobick, “Recognition of human body motion using phase space constraints”, Proceedings of the 5th IEEE International Conference on Computer Vision, Cambridge, pp.624–630, 1995.
[73].J. Farmer,M. Casdagli, S. Eubank, and J. Gibson, “State-space reconstruction in the presence of noise”, Physics D 51, Elsevier Science Publishers B.V. (North-Holland), pp.52–98, 1991.
[74].A.M. Elgammal, and L.S. Davis, “Probabilistic framework for segmenting people under occlusion”, Proceedings of the 8th IEEE International Conference on Computer Vision, pp.145-152, 2001.
[75].A. Mohan, C. Papageorgiou, and T. Poggio, “Example-based object detection in images by components”, IEEE Transaction on Pattern Recognition Machine Intelligence, Vol. 23, No. 4, pp.349–361, 2001.
[76].A. Elgammal, D. Harwood, and L.S. Davis, “Nonparametric background model for background subtraction”, Proceedings of the 6th European Conference on Computer Vision, 2000.
[77].L. Zhao, and C. Thorpe, “Recursive context reasoning for human detection and parts identification”, Proceedings of the IEEE Workshop on Human Modeling, Analysis and Synthesis, pp.136-141, June 2000.
[78].S. Ioffe, and D. Forsyth, “Probabilistic methods for finding people”, International Journal of Computer Vision, Vol. 43, Issue 1, pp.45-68, 2001.
[79].G. Welch, and G. Bishop, “An introduction to the Kalman filter”, from http://www.cs.unc.edu, UNC-Chapel Hill, TR95-041, November 2000.
[80].Timothy K. Shih, Ching-Sheng Wang, Yuan-Kai Chiu, Yi-Tsou Hsin, and Chun-Hong Huang, "On Automatic Actions Retrieval of Martial Arts," in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME 2004), pp.281-284, June, 2004.
[81].C. S. Li, J. R. Smith, L. D. Bergman, and V. Castelli, “Sequential processing for content-based retrieval of composite objects”, SPIE Storage and Retrieval of Image and Video Databases, pp.2-13, January, 1998. 
[82].H. Sundaram, and S. F. Chang, “Efficient video sequence retrieval in large repositories”, SPIE Storage and Retrieval of Image and Video Databases, San Jose, CA, January 1999.
[83].Steve H Collins, Martijn Wisse and Andy Ruina, “A Three-dimensional Passive Dynamic Walking Robot with Two Legs and Knees”, The International Journal of Robotics Research, Vol. 20, No. 7, pp. 607, 2001.
[84].Hakkinen, K., PV Komi, M. Alen and H. Kauhanen. “EMG, muscle fibre and force production characteristics during a 1 year training period in elite weightlifters”, European Journal of Applied Physiology, pp.419-427, 1985.
[85].Bosco, C., “Evaluation and control of basic and specific muscle behaviour, Part 1”, Track Technique, pp.3930-3933, 3941, 1992.
[86].Schmidtbleicher, D., “Muscular Mechanics and Neuromuscular Control,” In: B.E. Ungerechts, K. Wilke, and K. Reischle (eds.) Swimming Sci., V Int. Series Sport Sci.. Champaign, IL: Human Kinetics Publishers, pp.131-148, 1988.
[87].Yasuhiko Sakamoto, Shigeru Kuriyama, and Toyohisa Kaneko, ”Motion Map: Image-based Retrieval and Segmentation of Motion Data”, Symposium on Computer Animation, Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation. 
[88].Marks J., W. Ruml, K. Ryall, J. Seims, S. Shieber, B. Andalman, P. A. Beardsley, W. Freeman, S. Gibson,J. Hodgins, T. Kang, B. Mirtich, and H. Pfister “Design galleries: A general approach to setting parameters for computer graphics and animation”, International Conference on Computer Graphics and Interactive Techniques, Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pp. 389-400, 1997.
[89].http://www.w3.org/MarkUp/VRML/
[90].http://www.parallelgraphics.com/products/cortona/download/iexplore/
[91].http://www.xj3d.org/
[92].http://www.vicon.com
Full-Text Usage Permissions
On campus
The printed thesis is openly available on campus immediately.
The author agrees to make the full electronic text openly available on campus.
The electronic thesis is openly available on campus immediately.
Off campus
Authorization granted.
The electronic thesis is openly available off campus immediately.
