Tamkang University Chueh Sheng Memorial Library (TKU Library)


(Electronic full-text download is limited to access via TKU IP addresses)
System ID: U0002-1407200913513200
Thesis title (Chinese): 以非線性內插映射法估算頭部角度
Thesis title (English): Head Pose Estimation Based on Nonlinear Interpolative Mapping
University: Tamkang University
Department (Chinese): 資訊工程學系碩士班
Department (English): Department of Computer Science and Information Engineering
Academic year: 97 (2008-2009)
Semester: 2
Year of publication: 98 (2009)
Student name (Chinese): 張振維
Student name (English): Chen-Wei Chang
Email: 696410314@s96.tku.edu.tw
Student ID: 696410314
Degree: Master's
Language: Chinese
Second language: English
Oral defense date: 2009-06-23
Number of pages: 35
Thesis committee: Advisor: 林慧珍
Member: 徐道義
Member: 顏淑惠
Member: 林慧珍
Keywords (Chinese): face recognition; head pose estimation; radial basis function; manifold; nonlinear interpolative mapping
Keywords (English): Face recognition; Head pose estimation; Isomap; Radial Basis Function (RBF); Nonlinear interpolative mapping
Subject classification: Applied Sciences: Computer Science and Information Engineering
Abstract (Chinese, translated): Most face recognition systems perform well only on near-frontal face images and often fail on head images with large variations in pose angle. However, if the head pose angle of a face image can first be estimated, or the image classified by orientation, and the subsequent search and matching restricted to the corresponding subset of the face database, the recognition rate can be greatly improved.
This thesis proposes a method for head pose estimation and orientation classification of face images. The method applies Radial Basis Functions (RBF) to train, in a supervised manner, a nonlinear interpolative mapping from input images to feature vectors, estimating the pose of face images whose angles range from 0 to 360 degrees. Compared with the unsupervised nonlinear embedding and mapping method of N. Hu et al. [1], experimental results show that the proposed method performs better in both estimation accuracy and time efficiency.
Abstract (English): The performance of face recognition systems depends on conditions being consistent, including lighting, pose, and facial expression. To address the problem caused by pose variation, it is suggested to pre-estimate the pose orientation of a given head image before it is recognized. In this paper, we propose a head pose estimation method that improves on the one proposed by N. Hu et al. [1]. The proposed method trains, in a supervised manner, a nonlinear interpolative mapping function that maps input images to predicted pose angles. This mapping function is a linear combination of Radial Basis Functions (RBFs). The experimental results show that the proposed method outperforms the method of N. Hu et al. in terms of both time efficiency and estimation accuracy.
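The supervised RBF mapping described in the abstracts can be sketched as follows. This is a minimal illustrative sketch, not the thesis's exact implementation: the feature vectors, Gaussian kernel width, ridge regularization, and the (cos, sin) encoding of the pose angle (used so that 0 and 360 degrees coincide) are all assumptions for the example.

```python
# Sketch of an RBF-based nonlinear interpolative mapping for head pose
# estimation. Assumptions (not from the thesis): generic feature vectors,
# a Gaussian kernel, light ridge regularization, and a circular
# (cos, sin) target encoding decoded with arctan2.
import numpy as np

def rbf_kernel(X, C, sigma):
    """Gaussian RBF responses of samples X against centers C."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train(X, angles_deg, sigma=0.1, reg=1e-6):
    """Supervised training: solve for weights W so that
    Phi(X) @ W approximates (cos t, sin t) of each pose angle t."""
    t = np.deg2rad(angles_deg)
    Y = np.stack([np.cos(t), np.sin(t)], axis=1)    # circular encoding
    Phi = rbf_kernel(X, X, sigma)                   # centers = training set
    W = np.linalg.solve(Phi + reg * np.eye(len(X)), Y)
    return W

def predict(X_new, X_train, W, sigma=0.1):
    """Map new feature vectors to pose angles in [0, 360)."""
    Y = rbf_kernel(X_new, X_train, sigma) @ W
    return np.rad2deg(np.arctan2(Y[:, 1], Y[:, 0])) % 360.0

# Toy example: 1-D "features" standing in for image feature vectors,
# with poses spaced every 10 degrees around the full circle.
X = np.linspace(0.0, 1.0, 36)[:, None]
angles = np.linspace(0.0, 350.0, 36)
W = train(X, angles)
est = predict(X, X, W)
```

Because the interpolation centers are the training samples themselves, the mapping reproduces the training poses almost exactly; the kernel width sigma controls how smoothly it interpolates between sampled poses.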
Table of Contents
Chapter 1  Introduction
1.1  Motivation and Objectives
1.2  Thesis Organization
Chapter 2  Related Work
2.1  Methods and Applications
2.2  Discussion of Related Research
Chapter 3  Head Pose Estimation
3.1  Nonlinear Embedding and Mapping
3.2  The Proposed Supervised Learning Mapping Method
3.2.1  Head Pose Image Database Construction and Preprocessing
3.2.2  Uniform Sampling of Head Images
3.2.3  Mapping Function from the Training Sample Space to the Feature Space
Chapter 4  Experimental Results and Analysis
Chapter 5  Conclusions and Future Work
References
Appendix: English Thesis

List of Figures
Fig. 1. System training flowchart
Fig. 2. Blue points: a data set transformed by Isomap; red circle: the desired distribution after normalization
Fig. 3. Actual distribution of head images after normalization
Fig. 4. Horizontal and vertical projection results, shown to the left of and below the image; the blue frame in the image is the cropping boundary
Fig. 5. Red, blue, yellow, and green frames are the cropping frames for the frontal, left, back, and right views, respectively; the spans along the X and Y axes are [Xkmin, Xkmax] and [Ykmin, Ykmax]
Fig. 6. Green frame: the final automatically cropped region
Fig. 7. Schematic of the estimated head pose angle distribution
Fig. 8. Head image sequence, example 1
Fig. 9. Head image sequence, example 2
Fig. 10. Head image sequence, example 3
Fig. 11. Angle distribution of the test results: blue, expected angles; red, results of the proposed method; green, results of the method of N. Hu et al.

List of Tables
Table 1. Angle estimation error statistics for the two methods
Table 2. Average angle error of the two methods over different estimation angle ranges

References
[1]Nan Hu, Weimin Huang, and Surendra Ranganath, “Head Pose Estimation by Non-linear Embedding and Mapping,” in Proc. of IEEE International Conference on Image Processing, Vol. 2, pp. 342-345, September 2005.

[2]Ying Wen and Pengfei Shi, “Image PCA: A New Approach for Face Recognition,” in Proc. of International Conference on Acoustics, Speech, and Signal Processing, Vol. 1, pp. I-1241-I-1244, April 2007.

[3]Wenyu Sun and Qiuqi Ruan, “Two-Dimension PCA for Facial Expression Recognition,” in Proc. of the 8th International Conference on Signal Processing, Vol. 3, November 2007.

[4]R. A. Fisher, “The Statistical Utilization of Multiple Measurements,” Annals of Eugenics, Vol. 8, pp. 376-386, 1938.

[5]Aleix M. Martínez and Avinash C. Kak, “PCA versus LDA,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 2, pp. 228-233, February 2001.

[6]J. Tenenbaum, V. de Silva, and J. Langford, “A Global Geometric Framework for Nonlinear Dimensionality Reduction,” Science, 290(5500), pp. 2319-2323, December 2000.

[7]S. Roweis and L. Saul, “Nonlinear Dimensionality Reduction by Locally Linear Embedding,” Science, 290(5500), pp. 2323-2326, December 2000.

[8]S. H. Jeng and H. Y. Liao, “An Efficient Approach for Facial Feature Detection Using Geometrical Face Model,” in Proc. of IEEE International Conference on Pattern Recognition, pp. 426-430, 1996.

[9]C. A. Waring and X. W. Liu, “Face Detection Using Spectral Histograms and SVMs,” IEEE Transactions on Systems, Man, and Cybernetics, 35(3), pp. 467-476, 2005.

[10]H. A. Rowley, S. Baluja, and T. Kanade, “Neural Network-Based Face Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1), pp. 23-38, 1998.

[11]P. Viola and M. Jones, “Rapid Object Detection Using A Boosted Cascade of Simple Features,” in Proc. of IEEE International Conference on Computer Vision and Pattern Recognition, pp. 511-518, 2001.

[12]Q. Chen, T. Shimada, H. Wu, and T. Shioyama, “Head Pose Estimation Using Both Color and Feature Information,” in Proc. of 15th ICPR, Barcelona, Spain, Vol. 2, pp. 842-845, September 2000.

[13]L. M. Brown and Y.-L. Tian, “Comparative Study of Coarse Head Pose Estimation,” in Proc. of IEEE Workshop Motion Video Computer, pp. 125-130, December 2002.

[14]S. Ba and J.-M. Odobez, “Evaluation of Multiple Cue Head Pose Estimation Algorithms in Natural Environments,” in Proc. of IEEE ICME, Amsterdam, pp. 1330–1333, July 2005.

[15]E. Murphy-Chutorian and M. Trivedi, “Head Pose Estimation in Computer Vision: A survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, pp. 607-626, April 2009.

[16]M. Doi and Y. Aoki, “Real-time Video Surveillance System Using Omni-directional Image Sensor and Controllable Camera,” in Proc. of SPIE, Real-Time Imaging VII, Vol. 5012, pp. 1-9, Apr. 2003.

[17]U. Weidenbacher, G. Layher, P. Bayerl, and H. Neumann, “Detection of head Pose and Gaze Direction for Human-computer Interaction,” Perception and Interactive Technologies, Springer, Berlin, Heidelberg, Vol. 4021/2006, pp. 9-19, June 2006.

[18]B. Yip and J. Jin, “Pose Determination and Viewpoint Determination of Human Head in Video Conferencing Based on Head Movement,” in Proc. of 10th International Conference on Multimedia Modeling, Brisbane, Australia, pp. 130-135, January 2004.

[19]B. Yip and J. Jin, “3D Reconstruction of A Human Face with Monocular Camera Based on Head Movement,” in Proc. of Pan-Sydney Area Workshop VIP, Darlinghurst, Australia, pp. 99-103, 2003.

[20]C. Chien, Y. Chang, and Y. Chen, “Facial Expression Analysis Under Various Head Poses,” in Proc. of 3rd IEEE Pacific Rim Conf. Multimedia, Taiwan, Vol. 2532/2002, pp. 199-212, December 2002.

[21]Ying-Li Tian, Lisa Brown, and Jonathan Connell, “Absolute Head Pose Estimation from Overhead Wide-angle Cameras,” in Proc. of IEEE International Workshop on Analysis and Modeling of Faces and Gestures, pp. 92-99, October 2003.

[22]Longbin Chen, Lei Zhang, Yuxiao Hu, Mingjing Li, and Hongjiang Zhang, “Head Pose Estimation Using Fisher Manifold Learning,” IEEE International Workshop on Analysis and Modeling of Faces and Gestures, pp. 203-207, October 2003.

[23]Zhibo Guo, Huajun Liu, Qiong Wang, and Jingyu Yang, “A Fast Algorithm Face Detection and Head Pose Estimation for Driver Assistant System,” in Proc. of the 8th International Conference on Signal Processing, Vol. 3, pp. 16-20, 2006.

[24]Adel Lablack, Zhongfei Zhang, and Chabane Djeraba, “Supervised Learning for Head Pose Estimation Using SVD and Gabor Wavelets,” in Proc. of 10th IEEE International Symposium on Multimedia, pp. 592-596, December 2008.

[25]Andrew Fitzgibbon, Maurizio Pilu, and Robert B. Fisher, “Direct Least Square Fitting of Ellipses,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, No. 5, pp. 476-480, May 1999.
Thesis usage permissions
  • Agrees to grant, free of charge, the right to reproduce the print copy for in-library readers' academic use, publicly available from 2009-07-20.
  • Agrees to license the browse/print electronic full-text service, publicly available from 2009-07-20.

