§ Browse Thesis Bibliographic Record
  
System ID U0002-1302201213233900
DOI 10.6846/TKU.2012.00459
Title (Chinese) 以適應性CamShift演算法應用於多攝影機追蹤系統
Title (English) An Improved CamShift Algorithm Based on Adaptive Motion Estimation for Multiple Camera Systems
Title (third language)
University 淡江大學 (Tamkang University)
Department (Chinese) 電機工程學系碩士班
Department (English) Department of Electrical and Computer Engineering
Foreign degree university name
Foreign degree college name
Foreign degree institute name
Academic year 100
Semester 1
Year of publication 101
Author (Chinese) 劉允中
Author (English) Yun-Jung Liou
Student ID 698450094
Degree Master's
Language Traditional Chinese
Second language
Date of oral defense 2011-12-30
Number of pages 95
Committee Advisor - 江正雄
Member - 周建興
Member - 郭景明
Member - 賴永康
Member - 許明華
Keywords (Chinese) 移動物件追蹤 (moving object tracking)
CamShift演算法 (CamShift algorithm)
MeanShift演算法 (MeanShift algorithm)
移動估測法 (motion estimation)
Keywords (English) Moving Object Tracking
Modified CamShift
MeanShift
Motion Estimation
Keywords (third language)
Subject classification
Abstract (Chinese)
In recent years, rising crime rates have forced all sectors of society to take security mechanisms seriously, and unmanned surveillance is one of them. However, current unmanned surveillance systems provide rather limited information, mainly because the cameras perform fixed-point monitoring, which leaves blind spots in the coverage; under such conditions, even when a moving object is captured, further surveillance is difficult to achieve. It is therefore important for today's unmanned surveillance systems to provide both moving object detection and automatic moving object tracking, because the goal is to grasp the complete visual information.
In general, within a monitored site, the image regions that change or that contain moving objects usually carry the more valuable and important information. This system therefore aims to detect automatically whether moving objects are present and then to track them automatically.
In view of the above, this thesis combines moving object detection, automatic tracking of multiple moving objects, and multi-camera cooperation. Multi-camera cooperation is used to overcome the limited field of view of fixed cameras and to handle the overlap and occlusion that arise among multiple moving objects in the scene. The multi-object tracking technique then tracks the different moving objects in the scene simultaneously; when several objects are tracked in the same view, overlap and occlusion occur, and the multi-camera cooperation described above resolves them.
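As an illustration only (not taken from the thesis), the following Python/OpenCV sketch shows the basic idea behind mapping a tracked point from one camera's image plane to another's through a planar homography, which is the kind of coordinate correspondence that multi-camera hand-off relies on; the four ground-plane correspondences and the queried point are hypothetical placeholders, not calibration data from the thesis.

```python
# Minimal sketch of camera-to-camera coordinate mapping via a planar homography.
# The correspondence points below are hypothetical placeholders.
import cv2
import numpy as np

# Four corresponding ground-plane points picked in camera A and camera B
pts_cam_a = np.float32([[102, 310], [518, 295], [560, 460], [80, 475]])
pts_cam_b = np.float32([[60, 250], [470, 240], [600, 430], [30, 440]])

# 3x3 homogeneous transform mapping camera-A image coordinates to camera B
H = cv2.getPerspectiveTransform(pts_cam_a, pts_cam_b)

def map_point(pt, H):
    """Project a single (x, y) point with the 3x3 homogeneous matrix H."""
    src = np.float32([[pt]])                 # shape (1, 1, 2) as OpenCV expects
    dst = cv2.perspectiveTransform(src, H)
    return tuple(dst[0, 0])

# When an object is occluded in camera A, its last known foot point can be
# handed over to camera B's coordinate frame so the track can be continued.
print(map_point((300, 400), H))
```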
Current multi-object tracking methods fall roughly into three categories: the Particle Filter, the Kalman Filter, and CamShift (Continuously Adaptive Mean Shift). Because our system must run in real time across multiple cameras, methods with heavy computational cost are unsuitable, so this thesis adopts CamShift, which offers reasonable accuracy with a simpler and faster algorithm, as the multi-object tracking technique. CamShift is made faster by using fast motion estimation when predicting the motion of an object, and this thesis proposes an adaptive fast motion estimation that raises the tracking accuracy of CamShift, which in turn makes the multi-camera cooperation for resolving overlap and occlusion more accurate and effective. Experiments confirm that the adaptive CamShift tracking algorithm is more accurate than the conventional CamShift algorithm: on both the TKU public dataset and open datasets on the web, even the worst test sequence shows at least a 23% gain in accuracy, most test sequences average about 90% accuracy, and the system still runs at 23 FPS. The accuracy of coordinate transformation in the multi-camera system is also improved, and the system frame rate remains at 12 FPS.
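For readers unfamiliar with CamShift, the following is a minimal sketch of the stock OpenCV CamShift loop (hue back-projection followed by an adaptively resized search window). It is not the adaptive variant proposed in this thesis, and the video file name and initial window are hypothetical placeholders.

```python
# Minimal CamShift tracking loop with OpenCV (stock algorithm, not the adaptive
# variant of this thesis); "video.avi" and the initial window are placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture("video.avi")
ok, frame = cap.read()

x, y, w, h = 200, 150, 60, 120                 # initial window around the target
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

# Hue histogram of the target; low-saturation/low-value pixels are masked out
mask = cv2.inRange(hsv_roi, np.array((0, 60, 32)), np.array((180, 255, 255)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift = MeanShift iterations plus adaptive window size and orientation
    rot_rect, window = cv2.CamShift(back_proj, window, term)
    cv2.polylines(frame, [np.int32(cv2.boxPoints(rot_rect))], True, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) == 27:                  # Esc to quit
        break
```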
Abstract (English)
Smart video surveillance has been under development for a long time, and many approaches to tracking moving objects have been proposed in recent years. The study of good tracking algorithms has become one of the mainstreams of smart video surveillance research. Tracking multiple moving objects is a fundamental task in smart video surveillance systems because it provides a focus of attention for further investigation. Video surveillance using multiple-camera systems has attracted increasing interest in recent years, and resolving the occlusion of moving objects is a key operation that relies on the correspondence between multiple cameras.
In this thesis, the current state of the art in moving object tracking for multiple-camera surveillance is surveyed. An efficient modified adaptive CamShift structure is proposed to further reduce the computational cost and to retain object tracking information in occluded images. A new CamShift approach, directional prediction for the adaptive CamShift algorithm, is proposed to improve the tracking accuracy.
Based on the center-biased characteristic of the motion vector distribution in real-world video sequences, this thesis employs an adaptive search pattern (ASP) to refine the central-area search. Furthermore, for estimation in large-motion situations, the adaptive CamShift search strategy preserves good performance. Experimental results indicate that the accuracy of the adaptive CamShift algorithm is better than that of the conventional CamShift algorithm. The proposed method achieves an average accuracy of 90%, and the operation speed reaches 12 FPS with a frame size of 320
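To make the role of a center-biased search pattern concrete, the sketch below implements plain SAD block matching refined with a small diamond-style pattern. It only illustrates the general idea of fast, center-biased motion search and is not the ASP/adaptive CamShift search actually proposed in the thesis.

```python
# Illustrative SAD block matching with a small, center-biased search pattern
# (a generic diamond-style refinement); NOT the thesis's ASP/adaptive search.
import numpy as np

def sad(block, candidate):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block.astype(np.int32) - candidate.astype(np.int32)).sum()

def small_pattern_search(prev, curr, top, left, size=16, max_steps=8):
    """Estimate one block's motion vector with a 5-point center-biased pattern."""
    block = curr[top:top + size, left:left + size]
    best = (0, 0)
    best_cost = sad(block, prev[top:top + size, left:left + size])
    pattern = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]   # center plus 4 neighbors
    for _ in range(max_steps):
        center = best
        for dy, dx in pattern:
            y, x = top + center[0] + dy, left + center[1] + dx
            if y < 0 or x < 0 or y + size > prev.shape[0] or x + size > prev.shape[1]:
                continue
            cost = sad(block, prev[y:y + size, x:x + size])
            if cost < best_cost:
                best_cost, best = cost, (center[0] + dy, center[1] + dx)
        if best == center:            # no better neighbor: the search has converged
            break
    return best                       # (dy, dx) displacement of the block

# Toy usage with synthetic frames; a greedy small-pattern search may stop at a
# local minimum on arbitrary content, so the printed vector is illustrative only.
prev = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
curr = np.roll(prev, (2, 3), axis=(0, 1))
print(small_pattern_search(prev, curr, 64, 96))
```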
Abstract (third language)
Thesis table of contents
Contents
Chinese Abstract	I
English Abstract	II
Contents	III
List of Figures and Tables	VI

Chapter 1  Introduction	1
1.1 Research Background and Motivation	1
1.2 Research Topic	4
1.3 Thesis Organization	5

Chapter 2  Related Techniques	6
2.1 Conventional CamShift Algorithm	7
2.2 MeanShift Algorithm	11
2.3 Fast Motion Estimation	15
2.3.1 Three-Step Search	16
2.3.2 New Three-Step Search	17
2.3.3 Diamond Search	19
2.3.4 Hexagon Search	20
2.3.5 Flat Hexagon Search	21
2.3.6 Other Search Methods	22

Chapter 3  Adaptive CamShift Algorithm	25
3.1 Algorithm Architecture	25
3.2 Adaptive Motion Estimation Module	27
3.3 Inter-Frame Prediction	32

Chapter 4  Multiple-Camera Tracking System	36
4.1 Homogeneous Coordinate Matrices	36
4.2 Coordinate Transformation between Multiple Cameras	39
4.2.1 Multiple-Camera System Architecture	40
4.2.2 Foreground Object Detection	43
  4.2.2.1 Use of Grayscale and Color Models	47
  4.2.2.2 Choice of Detection Method	50
  4.2.2.3 Morphological Compensation	53
  4.2.2.4 Object Connection and Labeling	60

Chapter 5  Experimental Results	65
5.1 Experiment Description	65
5.2 Single-Camera Object Tracking Results	65
Test Sequence 1	65
Test Sequence 2	68
Test Sequence 3	71
Test Sequence 4	74
Test Sequence 5	77
5.3 Multiple-Camera Object Tracking Results	80
Test Sequence 1	80
Test Sequence 2	83
Test Sequence 3	86

Chapter 6  Conclusion	89
References	90


List of Figures

Fig. 2.1  Flowchart of the conventional CamShift algorithm	8
Fig. 2.2  HSV color model	8
Fig. 2.3  Illustration of statistical classification	13
Fig. 2.4  Illustration of the center-biased model	15
Fig. 2.5  Two-dimensional logarithmic search	16
Fig. 2.6  Three-step search	16
Fig. 2.7  New three-step search	18
Fig. 2.8  Diamond search	19
Fig. 2.9  Hexagon search	21
Fig. 2.10  Flat hexagon search	22
Fig. 3.1  Flowchart of the adaptive CamShift algorithm	26
Fig. 3.2  Adaptive motion estimation module	28
Fig. 3.3  Motion estimation selector (MES)	31
Fig. 4.1  Illustration of projective transformation	35
Fig. 4.2  Flowchart of the multiple-camera system based on the adaptive CamShift algorithm	40
Fig. 4.3  Illustration of the coordinate transformation module	41
Fig. 4.4  Foreground and background of an image	42
Fig. 4.5  Flowchart of the multiple-camera system based on the adaptive CamShift algorithm	43
Fig. 4.6  Tracking result with excessive erroneous color information	44
Fig. 4.7  Flowchart of foreground extraction	45
Fig. 4.8  HSV color model	46
Fig. 4.9  HSI color model	48
Fig. 4.10  Illustration of foreground detection modes	51
Fig. 4.11  (a) Consecutive-frame differencing (b) background subtraction	52
Fig. 4.12  (a) Fragmented objects and noise in the detected foreground (b) binary image after morphological compensation	53
Fig. 4.13  Illustration of pixel adjacency	54
Fig. 4.14 (a)  Original image	55
Fig. 4.14 (b)  Image after dilation	56
Fig. 4.15 (a)  Original image	56
Fig. 4.15 (b)  Image after erosion	56
Fig. 4.16 (a)  Original image	57
Fig. 4.16 (b)  Binary image after opening	58
Fig. 4.17 (a)  Original image	58
Fig. 4.17 (b)  Binary image after closing	58
Fig. 4.18  Illustration of labeling collisions	60
Fig. 4.19  CCL algorithm used in this thesis: (a) in the first scan, when the current pixel P is foreground, the four pixels A, B, C, and D are compared	61
Fig. 4.19 (b)  Algorithm for assigning the label of P	63
Fig. 4.19 (c)  Initialization of the equivalence table and the accelerated Find operation in the second scan	63
Fig. 5.1  PETS2000 public dataset	64
Fig. 5.2  Coordinate error of the conventional vs. adaptive CamShift algorithm on the PETS2000 dataset	66
Fig. 5.3  PETS2001 public dataset	67
Fig. 5.4  Coordinate error of the conventional vs. adaptive CamShift algorithm on the PETS2001 dataset	69
Fig. 5.5  TKU public dataset	70
Fig. 5.6  Coordinate error of the conventional vs. adaptive CamShift algorithm on the TKU dataset	72
Fig. 5.7  TKU public dataset	73
Fig. 5.8  Coordinate error of the conventional vs. adaptive CamShift algorithm on the TKU dataset	75
Fig. 5.9  TKU public dataset	76
Fig. 5.10  Coordinate error of the conventional vs. adaptive CamShift algorithm on the TKU dataset	78
Fig. 5.11  TKU public dataset	80
Fig. 5.12  Comparison on the TKU dataset	82
Fig. 5.13  TKU public dataset	83
Fig. 5.14  Comparison on the TKU dataset	84
Fig. 5.15  PETS2001 public dataset	85
Fig. 5.16  Comparison on the PETS2001 dataset	87


List of Tables

Table 3.1  Tracking results of motion estimation methods at various angles	29
Table 3.2  Tracking results of the AME module with inter-frame prediction	33
Table 3.3  Comparison of AME with other methods	34
Table 5.1  Comparison on the PETS2000 dataset	67
Table 5.2  Comparison on the PETS2001 dataset	70
Table 5.3  Comparison on the TKU dataset	73
Table 5.4  Comparison on the TKU dataset	76
Table 5.5  Comparison on the TKU dataset	79
References
[1]	Taiwan Electrical and Electronic Manufacturers' Association (TEEMA) e-newsletter, no. 117, Apr. 14, 2010.
URL: http://www.teema.org.tw/epaper/20100414/industrial032.html
[2]	W.-M. Hu, T.-N. Tan, L. Wang, and S. Maybank, “A survey on visual surveillance of object motion and behaviors,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 34, no. 3, pp. 334-352, Aug. 2004.
[3]	N. Jacobs and R. Pless, “Time scales in video surveillance,” IEEE Transaction on Circuits and System for Video Technology, vol. 18, no. 8, pp. 1106- 1113, Aug. 2008.
[4]	K. Chamnongthai, "Efficient particle filter using non-stationary Gaussian based model," International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, pp. 468-471, May 2011.
[5]	O. D. Nouar, G. Ali, and C. Raphael, “Improved object tracking with CamShift algorithm,” IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 2, pp. II-657-II-660, May. 2006.
[6]	M.O. Mehmood, “Multi-camera based human tracking with non-overlapping fields of view,” International Conference on Application of Information and Communication Technologie, pp.1-6, 14-16 Oct. 2009.
[7]	C. Zhang and Y. Qiao, “An improved CamShift algorithm for target tracking in video surveillance,” IT&T Conference, 2009.
[8]	A.C. Sankaranarayanan, A. Veeraraghavan, and R. Chellappa, "Object detection, tracking and recognition for multiple smart cameras," Proceedings of the IEEE , vol.96, no.10, pp.1606-1624, Oct. 2008
[9]	F. Fleuret, J. Berclaz, R. Lengagne, and P. Fua, "Multicamera people tracking with a probabilistic occupancy map," IEEE Transactions on Pattern Analysis and Machine Intelligence, , vol.30, no.2, pp.267-282, Feb. 2008.
[10]	 M. Xu, J. Orwell, L. Lowey, and D. Thirde, “Architecture and algorithms for tracking football players with multiple cameras,” Vision, Image and Signal Processing, IEE Proceedings - , vol.152, no.2, pp. 232- 241, 8 April 2005.
[11]	 C.-H. Chen, Y. Yao, D. Page, B. Abidi, A. Koschan, and M. Abidi, "Camera handoff and placement for automated tracking systems with multiple omnidirectional cameras," Computer Vision and Image Understanding, vol.114, issue 2, pp. 179-197, February 2010.
[12]	 M.-S. Rafael, M.-C. R, M.-C. F.J, and C.-P. A, "Multi-camera people tracking using evidential filters," International Journal of Approximate Reasoning, vol.50, issue 5, pp. 732-749, May 2009.
[13]	 W. Hu, M. Hu, X. Zhou, T. Tan, J. Lou, and S. Maybank, "Principal axis-based correspondence between multiple cameras for people tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 4, pp. 663-671, April 2006.
[14]	 T.T. Santos, and C.H. Morimoto, "People detection under occlusion in multiple camera views," Brazilian Symposium on Computer Graphics and Image Processing, pp.53-60, 12-15 Oct. 2008.
[15]	 F. Wang, and Y. Lin, "Improving particle filter with a new sampling strategy," International Conference on Computer Science & Education., pp.408-412, 25-28 July 2009
[16]	 G. R. Bradski, “Computer vision face tracking for use in a perceptual user interface,” Intel Technology Journal, 1998.
[17]	 A.R. Smith, “Color gamut transform pairs,” SIGGRAPH 78, pp. 12-19, 1978.
[18]	 P. Hidayatullah, and H. Konik , “CAMSHIFT improvement on multi-hue object and multi-object tracking,” European Workshop on Visual Information Processing, pp.143-148, 4-6 July 2011.
[19]	 L. Sun, B. Wang, and T. Ikenaga, “Real-time non-rigid object tracking using Camshift with weighted back projection,” International Conference on Computational Science and Its Applications, pp.86-91, 23-26 March 2010.
[20]	 G. Xiao, Y. Chen, J. Chen, and F. Gao, “Automatic Camshift tracking algorithm based on fuzzy inference background difference combining with twice searching,” International Conference on E-Health Networking, Digital Ecosystems and Technologies, vol. 1, pp. 1-4, 17-18 April 2010.
[21]	 Y. Li, X. Shen, and S. Bei, “Real-time tracking method for moving target based on an improved Camshift algorithm,” International Conference on Mechatronic Science, Electric Engineering and Computer, pp.978-981, 19-22 Aug. 2011.
[22]	 Y. Ling, J. Zhang, and J. Xing, “Video object tracking based on position prediction guide CAMSHIFT,” International Conference on Advanced Computer Theory and Engineering, vol.1, pp.V1-159-V1-164, 20-22 Aug. 2010.
[23]	 J. Kovacevic, S. Juric-Kavelj, and I. Petrovic, “An improved CamShift algorithm using stereo vision for object tracking,” MIPRO, 2011 Proceedings of the 34th International Convention, pp.707-710, May 2011.
[24]	 J.G. Allen, R.Y.D. Xu, and J.S. Jin, “Object tracking using CamShift algorithm and multiple quantized feature spaces,” Pan-Sydney Area Workshop on Visual Information Processing, Sydney, Australia. CRPIT, 36. Eds. ACS. 3-7. 2003.
[25]	 X.-Y. Tan and B.-L. Zhu, Improved mean shift algorithm for target tracking: College of Communication Engineering, Chongqing University, 2008.
[26]	 D. Comaniciu, V. Ramesh, P. Meer, “Real-time tracking of non-rigid objects using mean shift,” IEEE Conference on Computer Vision and Pattern Recognition, vol.2, pp.142-149, 2000.
[27]	 A. Djouadi, O. Snorrason and F.D. Garber, "The quality of training-sample estimates of the bhattacharyya coefficient," IEEE Transactions Pattern Analysis Machine Intelligence, vol. 12, pp.92-97, 1990.
[28]	 T. Kailath, “The divergence and bhattacharyya distance measures in signal selection , ” IEEE Transactions Communications Technology, vol. COM-15, pp. 52-60, 1967.
[29]	 J.-P. Jia, Y.-M. Chai, and R.-C. Zhao, "Robust tracking of objects in image sequences using multiple degrees of freedom mean shift algorithm," Journal of Image and Graphics, vol. 5, no. 11, pp. 707-713, 2006.
[30]	 蕭奕弘, "Fast block matching with precise prediction for multi-picture video compression systems" (in Chinese), Master's thesis, Department of Electrical Engineering, National Central University, 2004.
[31]	 J.R. Jain and A.K.Jain, "Displacement measurement and its application in interframe image coding,” IEEE Trans. Comm. COM-29: pp. 1799-1808, 1981.
[32]	 T. Koga et al., "Motion compensated interframe coding for video conferencing," in Proc. NTC '81, 1981.
[33]	 R. Li, B. Zeng and M.L Liou, " A new three-step search algorithm for block motion estimation" IEEE Transactions on Circuits and Systems for Video Technology, Vol. 4,pp.438-442, Aug. 1994.
[34]	 S. Zhu and K.-K. Ma, “A new diamond search algorithm for fast block matching motion estimation,” IEEE International Conference on Information, Communications and Signal Processing, pp. 9-12, Sep. 1997.
[35]	 G. Zhu, X. Lin, and L.-P. Chau, “Hexagon-based search pattern for fast block motion estimation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, no. 5, May 2002.
[36]	 T.-H. Chen, and Y.-F. Li, "A novel flatted hexagon search pattern for fast block motion estimation," International Conference on Image Processing, vol.3, pp. 1477- 1480, 24-27 Oct. 2004.
[37]	 L.-M. Po and W.-C. Ma, "A novel four-step search algorithm for fast block motion estimation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, pp. 313-317, Jun. 1996.
[38]	 L.-K. Liu and E. Feig. “A block-based gradient descent search algorithm for block motion estimation in video coding,” IEEE transactions on circuit and system for video technology, vol. 6, no.4, Aug. 1996.
[39]	 C.R. Wren, A. Azarbayejani, T. Darrell, and A.P. Pentland, “Pfinder: real-time tracking of the human body,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.19, no.7, pp.780-785, Jul. 1997.
[40]	 B.P.L. Lo and S.A. Velastin, “Automatic congestion detection system for underground platforms,” International Symposium on Intelligent Multimedia, Video and Speech Processing, pp. 158-161, 2001.
[41]	 L. Di Stefano, S. Mattoccia, and M. Mola, "A change-detection algorithm based on structure and colour," IEEE Conference on Advanced Video and Signal Based Surveillance, pp. 252-259, 21-22 July 2003.
[42]	 S.-Y. Chien, Y.-W. Huang, B.-Y. Hsieh, S.-Y. Ma, and L.-G. Chen "Fast video segmentation algorithm with shadow cancellation, global motion compensation, and adaptive threshold techniques," IEEE Transactions on Multimedia, vol.6, no.5, pp. 732- 748, Oct. 2004.
[43]	 鄧光喆, "Improved low-storage techniques for visual surveillance systems" (in Chinese), Master's thesis, Department of Electrical Engineering, Tamkang University, 2010.
[44]	 K. Suzuki, I. Horiba, and N. Sugie, “Linear-time connected-component labeling based on sequential local operations,” Computer Vision and Image Understanding, vol. 89, pp. 1-23, Jan. 2003.
[45]	 H. Hedberg, F. Kristensen, and V. Owall, “Implementation of a labeling algorithm based on contour tracing with feature extraction,” IEEE International Symposium on Circuits and Systems, pp.1101-1104, May 2007.
[46]	 L.He, Y. Chao, and K. Suzuki, “A run-based two-scan labeling algorithm,” IEEE Transactions on Image Processing, vol. 17, no. 5, pp.749-756, May 2008.
[47]	 T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein, Introduction to Algorithms, Third Edition, MIT Press, 2009.
[48]	 http://www.cvg.rdg.ac.uk/slides/pets.html
[49]	 http://www.cvg.rdg.ac.uk/slides/pets.html
[50]	 TKU database. Available: http://amos.ee.tku.edu.tw/pattern/tracking/
Full-text access rights
On campus
The printed thesis is available on campus immediately.
Electronic full text is authorized for release on campus.
The on-campus electronic thesis is released 5 years after submission of the authorization form.
Off campus
Authorization granted.
The off-campus electronic thesis is released 5 years after submission of the authorization form.
