Chueh Sheng Memorial Library, Tamkang University (TKU Library)
System ID U0002-1307200821411000
Title (Chinese) 以高斯混合模型為基礎之移動物件追蹤 (Moving Object Tracking Based on the Gaussian Mixture Model)
Title (English) Moving Object Detection and Tracking
University Tamkang University
Department (Chinese) 資訊工程學系碩士班 (Master's Program, Department of Computer Science and Information Engineering)
Department (English) Department of Computer Science and Information Engineering
Academic year 96
Semester 2
Year of publication 97
Author (Chinese name) 梁峰銘
Author (English name) Feng-Ming Liang
Student ID 695410224
Degree Master's
Language Chinese
Second language English
Oral defense date 2007-06-18
Number of pages 56
Committee Advisor: 林慧珍
Member: 顏淑惠
Member: 許秋婷
Keywords (Chinese) object detection/tracking; background subtraction; Gaussian mixture model; particle filter
Keywords (English) detection; tracking; Gaussian distribution; Gaussian mixture model (GMM); particle filters (PF); sequential K-means algorithm; expectation maximization (EM)
Subject classification Applied Sciences: Computer Science and Information Engineering
Abstract (Chinese) Moving object tracking is applied mostly in real-time surveillance systems, where specific moving objects are followed in the foreground of a video. Typically, a previously acquired background is subtracted from each frame to obtain a rough foreground. Because objects move arbitrarily, ambient lighting changes, and moving objects cause occlusion, it is difficult to obtain a clean background during processing. Our system therefore builds the background with a Gaussian Mixture Model (GMM), which exploits the ability of combinations of Gaussians to approximate arbitrary distributions: the color values observed over time at each image point are treated as samples of a random sequence X, so the possible colors at that point can be represented by a mixture of several Gaussian probability distributions. Since background colors generally persist longer along the time axis than foreground colors, a suitable threshold can separate the two, and the background can be constructed. Once the background is available, background subtraction yields the moving-object regions (the foreground) in each frame, and the system then tracks them with a Particle Filter (PF). The particle filter has long been an effective method for object tracking; however, when the feature state space is high-dimensional, tracking tends to drift. When combined with the PF, the GMM needs only a few Gaussian components to reach good precision, so the efficiency of both the GMM and the PF is improved without harming overall accuracy. With the GMM we need only construct the background to localize objects fairly precisely and to greatly reduce the amount of information that must be computed, which in turn improves the PF's tracking results.
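The per-pixel background model described in the abstract can be sketched roughly as follows. This is a simplified, Stauffer-Grimson-style online update, not the thesis's exact formulation; the constants (K, ALPHA, MATCH, T), the single learning rate, and the initialization values are all illustrative assumptions.

```python
import numpy as np

K, ALPHA, MATCH, T = 3, 0.02, 2.5, 0.7  # illustrative constants

def update_pixel(x, w, mu, var):
    """One online update of a K-component Gaussian mixture for one pixel value x.
    Returns the updated mixture and whether x was classified as background."""
    d = np.abs(x - mu) / np.sqrt(var)
    if (d < MATCH).any():                                 # x matches an existing component
        k = int(np.argmin(np.where(d < MATCH, d, np.inf)))
        w = (1 - ALPHA) * w
        w[k] += ALPHA
        mu[k] = (1 - ALPHA) * mu[k] + ALPHA * x           # simplified: one learning rate
        var[k] = (1 - ALPHA) * var[k] + ALPHA * (x - mu[k]) ** 2
    else:                                                 # replace the least-probable component
        k = int(np.argmin(w))
        mu[k], var[k], w[k] = x, 100.0, 0.05
    w = w / w.sum()
    # components with the largest weight/sigma ratio, up to cumulative weight T,
    # form the background model
    order = np.argsort(-(w / np.sqrt(var)))
    n_bg = int(np.searchsorted(np.cumsum(w[order]), T)) + 1
    return w, mu, var, bool(k in order[:n_bg])

# demo: a pixel that repeatedly observes intensity 100 becomes background,
# while a sudden spike to 200 is classified as foreground
w, mu, var = np.full(K, 1 / K), np.array([0.0, 85.0, 170.0]), np.full(K, 100.0)
for _ in range(50):
    w, mu, var, is_bg = update_pixel(100.0, w, mu, var)
steady_is_bg = is_bg
_, _, _, spike_is_bg = update_pixel(200.0, w.copy(), mu.copy(), var.copy())
```

Persistent values accumulate weight and low variance, so they dominate the weight/sigma ordering; transient values land in a low-weight component and fall outside the background set.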
Abstract (English) For object detection and tracking, we first obtain a background from a sequence of frames, and acquire a coarse foreground for each following frame by simply subtracting the background. However, it is difficult to obtain a clean background in practice due to the presence of moving objects, changes in lighting conditions, or object occlusion. To cope with such problems, we use a modified version of the Gaussian Mixture Model (GMM) to perform background construction. The color information of each point in an image sequence may be considered a random variable and can thus be described by a combination of several Gaussian models. Exploiting the fact that background information stays longer than foreground information, the GMM distinguishes background pixels from foreground pixels according to how long similar information stays at a pixel over a sequence of frames. As soon as a coarse foreground is obtained by background subtraction, we perform several operations, including shadow removal, edge detection, and connected component analysis, to localize each moving object in the foreground.
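The background-subtraction and connected-component localization steps mentioned above might look like the following minimal sketch. The threshold, image sizes, and the flood-fill labeling are illustrative; the thesis additionally performs shadow removal and edge detection, which are omitted here.

```python
import numpy as np

def foreground_mask(frame, background, thresh=25):
    """Coarse foreground: pixels deviating from the background by more than thresh."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def bounding_boxes(mask):
    """4-connected component labeling by flood fill;
    returns one (top, left, bottom, right) box per blob."""
    seen = np.zeros(mask.shape, dtype=bool)
    boxes = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, seen[i, j] = [(i, j)], True
                r0 = r1 = i
                c0 = c1 = j
                while stack:                      # grow the component from (i, j)
                    r, c = stack.pop()
                    r0, r1 = min(r0, r), max(r1, r)
                    c0, c1 = min(c0, c), max(c1, c)
                    for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                        if 0 <= rr < h and 0 <= cc < w and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                boxes.append((r0, c0, r1, c1))
    return boxes

# demo: one bright 3x3 object on a dark background
bg = np.zeros((10, 10), dtype=np.uint8)
frame = bg.copy()
frame[2:5, 3:6] = 255
boxes = bounding_boxes(foreground_mask(frame, bg))
```

Each returned box localizes one moving object; these regions are what the tracker is then restricted to.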
Once an object is detected, it is tracked in the following frames with a Particle Filter (PF). The PF is effective, but when the dimension of its state space is high, the tracked objects tend to drift. To reduce this problem, we modify the particle filter to carry out tracking over the foreground portion instead of the whole image. With the modified versions of the GMM and the PF, our system achieves a high detection/tracking accuracy rate with satisfactory time efficiency.
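The predict-weight-resample cycle of a bootstrap particle filter, reduced to one dimension for illustration. The thesis tracks object regions in image coordinates and evaluates only the foreground portion; here the random-walk motion model, the noise levels, and the scalar observation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, observation, motion_std=2.0, obs_std=3.0):
    """One predict-weight-resample cycle of a bootstrap particle filter (1-D position)."""
    # predict: diffuse particles with a random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # weight: Gaussian likelihood of each particle under the observation
    weights = np.exp(-0.5 * ((particles - observation) / obs_std) ** 2)
    weights /= weights.sum()
    # resample: draw particles in proportion to their weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# demo: track a target moving one unit per frame from 0 toward 50
particles = rng.normal(0.0, 5.0, size=200)
for t in range(1, 51):
    particles = particle_filter_step(particles, float(t))
estimate = particles.mean()   # posterior mean, close to the true position 50
```

Resampling concentrates the particle set around likely states; restricting the likelihood to foreground pixels, as the thesis does, plays the role of `observation` here.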
Table of contents Contents I
List of Figures II

Chapter 1 Introduction 1
1.1 Motivation and Objectives 1
1.2 Moving Object Tracking 2
1.3 Other Related Work 3
Chapter 2 System Architecture 9
2.1 System Flowchart 9
2.2 Gaussian Mixture Model 10
2.2.1 Gaussian Mixture Model Flowchart 11
2.2.2 Gaussian Mixture Model 12
2.2.3 Modified Gaussian Mixture Model 16
2.3 Particle Filter 20
2.3.1 Particle Filter Flowchart 21
2.3.2 Particle Filter 22
2.3.3 Modified Particle Filter 26
2.4 Moving Object Detection 31
2.5 Moving Object Tracking 33
Chapter 3 Experimental Results and Analysis 34
3.1 Test Images 34
Chapter 4 Conclusions and Future Work 39
4.1 Conclusions 39
4.2 Future Research Directions 40
References 41
Appendix A English Paper 43

List of Figures
Fig. 1 Experimental results of the method proposed by A. G. Bor et al. 3
Fig. 2 Feature-extraction results of the method proposed by S. J. McKenna et al. 4
Fig. 3 Detection results using an SVM classifier 6
Fig. 4 Contours obtained by H. Fujiyoshi et al. by processing the detected regions with morphology and a low-pass filter 6
Fig. 5 H. Fujiyoshi et al.: distinguishing the motion states of human figures by analyzing the variation frequency of the skeleton 7
Fig. 6 Moving-object contour detection method proposed by H. T. Nguyen et al. 7
Fig. 7 Tracking results of the method proposed by H. T. Nguyen et al. 8
Fig. 8 System flowchart 9
Fig. 9 Gaussian mixture training flowchart 11
Fig. 10 Experimental results of the method proposed by N. Friedman 14
Fig. 11 Experimental results of the method proposed by C. Stauffer et al.: (a) original image; (b) background obtained by the GMM; (c) foreground; (d) background with tracking information 15
Fig. 12 Particle filter flowchart 21
Fig. 13 Illustration of the three steps of particle filtering [18] 25
Fig. 14 The method proposed in this thesis: (a) input image; (b) foreground region of (a); (c)(d) bounding-box sizes of the tracked targets 30
Fig. 15 Illustration of moving object detection: (a) original color image; (b) grayscale image; (c) background of (b) obtained with the Gaussian mixture model; (d) pixels where the difference between (b) and (c) exceeds the threshold; (e) intersection of (d) and (a) 32
Fig. 16 Detection and tracking results for the first image sequence: (a) frame 150; (b) frame 151; (c) frame 152; (d) frame 265; (e) frame 266; (f) frame 267; (g) frame 318; (h) frame 319; (i) frame 320 35
Fig. 17 Detection and tracking results for the second image sequence: (a) frame 65; (b) frame 66; (c) frame 67; (d) frame 114; (e) frame 115; (f) frame 116; (g) frame 146; (h) frame 147; (i) frame 148 36
Fig. 18 Detection and tracking results for the third image sequence: (a) frame 89; (b) frame 90; (c) frame 91; (d) frame 155; (e) frame 156; (f) frame 157; (g) frame 213; (h) frame 214; (i) frame 215 37
Fig. 19 Detection and tracking results for the fourth image sequence: (a) frame 101; (b) frame 102; (c) frame 103; (d) frame 153; (e) frame 154; (f) frame 155; (g) frame 263; (h) frame 264; (i) frame 265 38

References [1] Adrian G. Bor and Ioannis Pitas, “Optical flow estimation and moving object segmentation based on median radial basis function network,” IEEE Transactions on Image Processing, 1998, Vol. 7, No. 5, pp. 693-702.
[2] Alan Lipton, Hironobu Fujiyoshi, and Raju Patil, “Moving target classification and tracking from real-time video,” in Proceedings of the Fourth IEEE Workshop on Applications of Computer Vision (WACV'98), 1998, pp. 8-14.
[3] Berthold K. P. Horn and Brian G. Schunck, “Determining optical flow,” Artificial Intelligence, 1981, Vol. 17, pp. 185-203.
[4] Bijan Shoushtarian and Helmut E. Bez, “A practical adaptive approach for dynamic background subtraction using an invariant colour model and object tracking,” Pattern Recognition Letters, 2005, Vol. 26, No. 1, pp. 5-26.
[5] Caifeng Shan, Tieniu Tan, and Yucheng Wei, “Real-time hand tracking using a mean shift embedded particle filter,” Pattern Recognition, 2007, Vol. 40, No. 7, pp. 1958-1970.
[6] Chris Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1999, pp. 246-252.
[7] N. Friedman and S. Russell, “Image segmentation in video sequences: A probabilistic approach,” in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI), 1997, pp. 175-181.
[8] Guanglei Xiong, Chao Feng, and Liang Ji, “Dynamical Gaussian mixture model for tracking elliptical living objects,” Pattern Recognition Letters, 2006, Vol. 27, No. 7, pp. 838-842.
[9] Hanzi Wang, David Suter, Konrad Schindler, and Chunhua Shen, “Adaptive Object Tracking Based on an Effective Appearance Filter,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, Vol. 29, No. 9, pp. 1661-1667.
[10] Hieu Tat Nguyen, Marcel Worring, Rein van den Boomgaard, and Arnold W. M. Smeulders, “Tracking nonparameterized object contours in video,” IEEE Transactions on Image Processing, 2002, Vol. 11, No. 9, pp. 1081-1091.
[11] Hironobu Fujiyoshi, and Alan J. Lipton, “Real-time human motion analysis by image skeletonization,” in Fourth IEEE Workshop on Applications of Computer Vision (WACV'98), 1998, pp. 15-21.
[12] N. Peterfreund, “Robust tracking of position and velocity with Kalman snakes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999, Vol. 21, No. 6, pp. 564-569.
[13] N. Peterfreund, “The velocity snake,” in Proceedings of IEEE Nonrigid and Articulated Motion Workshop, 1997, pp. 70-79.
[14] P. Wayne Power and Johann A. Schoonees, “Understanding Background Mixture Models for Foreground Segmentation,” in Proceedings of Image and Vision Computing New Zealand, 2002, pp. 266-271.
[15] Sanjeev Arulampalam, Simon Maskell, Neil Gordon, and Tim Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” IEEE Transactions on Signal Processing, 2002, Vol. 50, No. 2, pp. 174-188.
[16] Shai Avidan, “Support vector tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, Vol. 26, No. 8, pp. 1064-1072.
[17] Sian Huang Wu, Ren Jie Sie, Bing Guo Wong, and Ying Yi Wu, “Moving Object Detection With Enhanced Gaussian Mixture Model for Background Subtraction,” in Proceedings of the 2006 Workshop on Consumer Electronics and Signal Processing (WCESP), 2006.
[18] Stephen J. McKenna, Sumer Jabri, Zoran Duric, Harry Wechsler, and Azriel Rosenfeld, “Tracking Groups of People,” Computer Vision and Image Understanding (CVIU), 2000, Vol. 80, No. 1, pp. 42-56.
[19] http://web.engr.oregonstate.edu/~hess/
Thesis access authorization
  • The author agrees to grant the library royalty-free permission to reproduce the print copy for readers' academic use, available from 2013-07-17.
  • The author agrees to authorize the browse/print electronic full-text service, available from 2013-07-17.

