§ Browse Thesis Bibliographic Record
  
System ID U0002-1912200808504500
DOI 10.6846/TKU.2009.00694
Title (Chinese) 以對稱小波轉換演算法應用於移動物件偵測與追蹤之研究
Title (English) A Study on Moving Objects Detection and Tracking Using Symmetric Wavelet Transform Scheme
Title (third language)
University Tamkang University
Department (Chinese) 電機工程學系碩士班
Department (English) Department of Electrical and Computer Engineering
Foreign degree school
Foreign degree college
Foreign degree institute
Academic year 97 (ROC)
Semester 1
Publication year 98 (ROC, i.e., 2009)
Researcher (Chinese) 黃丁威
Researcher (English) Ding-Wei Huang
Student ID 695450022
Degree Master's
Language Traditional Chinese
Second language
Oral defense date 2008-12-12
Number of pages 73
Committee Advisor - 江正雄 (chiang@ee.tku.edu.tw)
Member - 呂學坤 (sklu@ee.fju.edu.tw)
Member - 郭景明 (jmguo@seed.net.tw)
Member - 楊維斌 (robin@ee.tku.edu.tw)
Keywords (Chinese) 監視器系統
對稱遮罩小波轉換架構
偽移動量
移動物件偵測與追蹤
Keywords (English) Visual surveillance system
Symmetric Mask-Based Scheme
Fake motion
Moving object detection and tracking
Keywords (third language)
Subject classification
Chinese Abstract
In recent years, surveillance systems have been developed rapidly to meet security needs, and more and more researchers have begun developing intelligent surveillance systems to replace traditional passive ones. The first step of an intelligent surveillance system is to detect moving objects; attention is then focused on the detected regions for various kinds of image post-processing, such as object classification, object tracking, and behavior description. Detecting moving objects is the most basic and most important task in every surveillance application: accurate moving object regions not only give the post-processing stages better information but also reduce redundant computation. However, detecting moving objects correctly in a real-world environment is not easy, because problems in the background such as illumination changes, fake motion, and Gaussian noise can cause the vision system to detect incorrect moving objects.

Traditionally there are three typical methods for detecting moving objects: background subtraction, frame difference, and optical flow. All three are generally very sensitive to illumination changes, noise, and fake motion such as leaves moving on trees. In recent years many object detection and tracking methods have been proposed that use, for example, the discrete wavelet transform or a low-resolution image produced by averaging as the pre-processing stage of a moving object detection and tracking system. However, with the conventional discrete wavelet transform, computing the LL-band image of the original image along two dimensions (rows and columns) imposes a heavy computational burden, while the low-resolution image is blurrier than the LL-band image produced by the discrete wavelet transform, which lowers the precision of the post-processing stages (such as object tracking and object recognition).

To overcome the problems mentioned above, we propose the symmetric mask-based scheme, which detects and tracks objects with the symmetric mask-based discrete wavelet transform in order to reduce the amount of data. In this scheme, only the LL (5×5) mask matrix of the symmetric mask-based discrete wavelet transform is used: unlike the conventional discrete wavelet transform, which processes rows and columns separately through low-pass filtering and downsampling, the symmetric LL mask computes the LL-band image directly. The proposed method reduces the computation of the image transform, removes fake motion that does not belong to real moving objects, and preserves slow object motion, thereby providing effective and complete moving object regions.
English Abstract
In recent years, visual surveillance systems have been developed rapidly for security purposes, and more and more researchers are trying to develop intelligent visual surveillance systems to replace traditional passive video surveillance systems. An intelligent visual surveillance system detects moving objects in the initial stage and subsequently applies functions such as object classification, object tracking, and object behavior description to the detected regions. Detecting moving objects is a basic and significant task in every surveillance application. An accurate moving object location not only provides a focus of attention for post-processing but also reduces the redundant computation spent on incorrectly detected motion. Successful moving object detection in a real environment is difficult, since many kinds of problems in the background, such as illumination changes, fake motion, and Gaussian noise, may lead to incorrect detection of moving objects.

There are three typical approaches to motion detection: background subtraction, frame difference, and optical flow. In general, all three are sensitive to illumination changes, noise, and fake motion such as moving tree leaves. In recent years, several pre-processing approaches for object detection and tracking have been proposed, such as the discrete wavelet transform (DWT) and the low-resolution image generated by replacing each pixel of the original image with the average of itself and its neighbors. However, producing the LL-band image from the full-size image through two-dimensional (row and column) computation with the conventional DWT incurs a high computing cost, while the low-resolution images are fuzzier than the LL-band image generated by the DWT, which may reduce the precision of post-processing (such as object tracking and object identification).
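As a rough illustration of the frame-difference idea mentioned above (a minimal sketch, not the formulation used in the thesis; the function name and the threshold value are illustrative assumptions), a per-pixel difference against the previous frame followed by thresholding can be written as:

```python
import numpy as np

def frame_difference_mask(prev_frame: np.ndarray, curr_frame: np.ndarray,
                          threshold: float = 25.0) -> np.ndarray:
    """Mark a pixel as 'moving' when the absolute grayscale change between
    two consecutive frames exceeds a threshold (illustrative value only;
    the thesis selects its threshold experimentally, see Section 5.2)."""
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    return (diff > threshold).astype(np.uint8)   # 1 = candidate moving pixel
```

The sketch also makes the sensitivity problem visible: illumination changes and waving leaves raise the difference just as real motion does, which is why the abstract treats noise-robust pre-processing as the key step.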

To overcome the above-mentioned problems, we propose a method, the symmetric mask-based scheme (SMBS), for detecting and tracking moving objects by using the symmetric mask-based discrete wavelet transform (SMDWT). In the SMBS, only the LL (5×5) mask band of the SMDWT is used. Unlike the traditional DWT, which processes the row and column dimensions separately with low-pass filtering and downsampling, the LL mask of the SMDWT calculates the LL-band image directly. The proposed method reduces the computing cost of the image transform and removes fake motion that does not belong to the real moving object. Furthermore, it retains slow object motion better than the low-resolution method and provides effective and complete moving object regions.
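The central idea of the SMBS, computing the quarter-size LL-band image in a single pass with one 5×5 mask instead of separable row/column filtering plus downsampling, can be sketched as follows. This is only an assumption-laden illustration: the mask coefficients shown are a stand-in (the outer product of the 5/3 wavelet's low-pass filter), not the verified SMDWT LL coefficients of [8], and the function name is hypothetical.

```python
import numpy as np

# Placeholder 5x5 low-low mask: outer product of the 5/3 wavelet low-pass
# filter [-1, 2, 6, 2, -1]/8. The thesis's SMDWT defines its own LL mask [8];
# these coefficients are illustrative only.
LOWPASS = np.array([-1.0, 2.0, 6.0, 2.0, -1.0]) / 8.0
LL_MASK = np.outer(LOWPASS, LOWPASS)

def ll_band_direct(image: np.ndarray) -> np.ndarray:
    """Produce a half-resolution LL-band image by evaluating one 5x5 mask at
    every second row and column, rather than low-pass filtering rows and
    columns separately and then downsampling."""
    img = image.astype(np.float32)
    h, w = img.shape
    padded = np.pad(img, 2, mode="edge")              # simple border handling
    ll = np.empty(((h + 1) // 2, (w + 1) // 2), dtype=np.float32)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            window = padded[i:i + 5, j:j + 5]         # 5x5 neighbourhood
            ll[i // 2, j // 2] = float(np.sum(window * LL_MASK))
    return ll
```

Because only the LL output is needed for detection and tracking, evaluating this single mask at the subsampled positions avoids computing the discarded LH, HL, and HH coefficients, which is where the claimed reduction in transform computing cost comes from.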
Third-language abstract
Table of Contents
Chinese Abstract	I
English Abstract	III
Table of Contents	V
List of Figures and Tables	IX

Chapter 1  Introduction	1
1.1 Research Background and Motivation	1
1.2 Thesis Topic and Objectives	3
1.3 Thesis Organization	3

Chapter 2  Concepts of Object Detection and Tracking	5
2.1 Overview	5
2.2 Moving Object Detection Techniques	6
2.2.1 Frame Difference Method	6
2.2.2 Background Subtraction	8
2.2.3 Optical Flow	10
2.2.4 Color-Based Segmentation	11
2.2.5 Template-Based Segmentation	13
2.3 Moving Object Tracking Techniques	15
2.3.1 Region-Based Tracking	15
2.3.2 Contour-Based Tracking	16
2.3.3 Feature-Based Tracking	17
2.3.4 Model-Based Tracking	18

Chapter 3  Image Noise Processing and Related Applications	20
3.1 Introduction	20
3.2 Image Noise Processing	21
3.2.1 Smoothing Filter	22
3.2.2 Median Filter	24
3.2.3 Discrete Wavelet Transform	25
3.3 Related Tracking Systems	28
3.3.1 Detection and Tracking Based on Median or Smoothing Filters	28
3.3.2 Detection and Tracking Based on the Discrete Wavelet Transform	31
3.4 Symmetric Mask-Based Scheme	32
3.4.1 Symmetric Mask-Based Discrete Wavelet Transform	33
3.4.2 Computational Complexity	36

Chapter 4  Symmetric Mask-Based Scheme for the Moving Object Detection and Tracking System	40
4.1 System Architecture	40
4.2 Symmetric Mask-Based Scheme	41
4.3 Moving Object Detection	42
4.4 Moving Object Compensation	44
4.4.1 Dilation and Erosion	44
4.4.2 Opening and Closing	46
4.5 Moving Object Tracking	47
4.5.1 Connected Components	47
4.5.2 Bounding Boxes and Tracking	49

Chapter 5  Experimental Results	51
5.1 Experiment Description	51
5.2 Threshold Selection for the Frame Difference Method	51
5.3 Noise Filtering Analysis	53
5.4 Tracking Performance Analysis	57
5.5 Experimental Data Analysis	61

Chapter 6  Conclusion	66

References	68

List of Figures
Figure 1.1  Human behavior analysis and description	2
Figure 2.1  Flowchart of the frame difference method	7
Figure 2.2  Flowchart of the background subtraction method	9
Figure 2.3  Lucas-Kanade optical flow detection	11
Figure 2.4  Detection in the HSV color space	13
Figure 2.5  Template classification	14
Figure 2.6  Multi-template detection results	14
Figure 2.7  Region-based tracking	16
Figure 2.8  Contour tracking of a human figure or a palm	17
Figure 2.9  Contour tracking under object occlusion	17
Figure 2.10  Object tracking trajectories	18
Figure 2.11  Face template construction and tracking	19
Figure 3.1  Outdoor fake-motion noise	20
Figure 3.2  Indoor Gaussian noise	21
Figure 3.3  Smoothing filter mask (3×3)	22
Figure 3.4  Lena image with Gaussian noise	23
Figure 3.5  Figure 3.4 after the 3×3 smoothing filter mask	23
Figure 3.6  Figure 3.4 after the 5×5 smoothing filter mask	24
Figure 3.7  Median filter processing flow	25
Figure 3.8  Figure 3.4 after the 3×3 median filter mask	25
Figure 3.9  Subband decomposition of the 2-D discrete wavelet transform	26
Figure 3.10  Figure 3.4 after 2-level 2-D DWT subband decomposition	27
Figure 3.11  Foreground object detection system	28
Figure 3.12  Diagram of the low-resolution smoothing filter (2×2)	30
Figure 3.13  Outdoor object tracking	31
Figure 3.14  LL-band image	32
Figure 3.15  Outdoor object detection and tracking based on the 3-level 2-D LL-band image	32
Figure 3.16  Symmetric HH-band mask matrix coefficients	34
Figure 3.17  Symmetric HL-band mask matrix coefficients	34
Figure 3.18  Symmetric LH-band mask matrix coefficients	35
Figure 3.19  Symmetric LL-band mask matrix coefficients	35
Figure 3.20  Diagram of the 2-D symmetric mask-based scheme	36
Figure 3.21  Diagram of the conventional 2-D discrete wavelet transform	37
Figure 3.22  Comparison of the averaging filter and the discrete wavelet transform	39
Figure 4.1  Flowchart of moving object detection and tracking based on the symmetric mask-based scheme	41
Figure 4.2  2-level 2-D LL-band image	42
Figure 4.3  Moving object detection	43
Figure 4.4  Pixel neighborhood diagram	44
Figure 4.5  Diagram of image dilation and erosion	46
Figure 4.6  Closing operation process	47
Figure 4.7  8-neighborhood labeling directions	48
Figure 4.8  Labeling results of the binarized image	49
Figure 4.9  Moving object tracking	50
Figure 5.1  Diagram of threshold values	53
Figure 5.2  Indoor noise processing	55
Figure 5.3  Outdoor noise processing	56
Figure 5.4  Object tracking example	57
Figure 5.5  Tracking results for various environments and image levels	59
Figure 5.6  Bar chart of experimental data	62
Figure 5.7  Outdoor moving object tracking results	63
Figure 5.8  Indoor moving object tracking results	65
Figure 6.1  Object occlusion situation	66

List of Tables
Table 3.1  Computational complexity of the conventional 2-D DWT (Daubechies coefficients) and the symmetric mask-based DWT	38
Table 5.1  Noise filtering success rate of the symmetric wavelet at each decomposition level	54
Table 5.2  Object tracking success rate of the symmetric wavelet at each decomposition level	60
Table 5.3  Object tracking success rate of the smoothing filter [41] at each decomposition level	61
References
[1]W.-M. Hu, T.-N. Tan, L. Wang, and S. Maybank, “A survey on visual surveillance of object motion and behaviors,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 34, no. 3, pp. 334-352, August 2004.
[2]R. T. Collins, A. J. Lipton, T. Kanade, H. Fujiyoshi, D. Duggins, Y. Tsin, D. Tolliver, N. Enomoto, O. Hasegawa, P. Burt, and L. Wixson, “A system for video surveillance and monitoring,” Carnegie Mellon University, Technical Report CMU-RI-TR-00-12, 2000.
[3]F.-H. Cheng and Y.-L. Chen, “Real time multiple objects tracking and identification based on discrete wavelet transform,” Pattern Recognition, vol. 39, no. 3, pp. 1126-1139, June 2006.
[4]F. E. Alsaqre and Y. Baozong, “Multiple moving objects tracking for video surveillance system,” International Conference on Signal Processing, vol. 2, pp. 1301-1305, August 2004.
[5]C.-C. Hsieh and S.-S. Hsu, “A simple and fast surveillance system for human tracking and behavior analysis,” IEEE Conference on Signal-Image Technologies and Internet-Based System, pp. 812-828, December 2007.
[6]F. E. Alsaqre and Y. Baozong, “Moving object segmentation from video sequence,” Proceedings of Conference focused on Video/Image Processing and Multimedia Communication, pp. 193-199, July 2003.
[7]J. B. Kim and H. J. Kim, “Efficient region-based motion segmentation for a video monitoring system,” Pattern Recognition Letter, vol. 24, no. 1, pp. 113-128, 2003.
[8]C.-H. Hsia, J.-M. Guo, and J.-S. Chiang, “An improved low complexity algorithm for 2-D integer lifting-based discrete wavelet transform using symmetric mask-based scheme,” IEEE Transactions on Circuits and Systems for Video Technology, Accepted paper.
[9]C.-H. Zhan, X.-H. Duan, S.-Y. Xu, Z. Song, and M. Luo, “An improved moving object detection algorithm based on frame difference and edge detection,” International Conference on Image and Graphics, pp. 519-523, August 2007.
[10]B.-X. Xiao, C. Lu, H. Chen, Y.-F. Yu, and R.-B. Chen, “Moving object detection and recognition based on the frame difference algorithm and moment invariant features,” Chinese Control Conference, pp. 578-581, July 2008.
[11]S. Ribaric, G. Adrinek, and S. Segvic, “Real-time active visual tracking system,” Proceedings of IEEE Mediterranean Electrotechnical Conference, vol.1, pp. 231-234, May 2004.
[12]K. K. Kim, S. H. Cho, H. J. Kim, and J. Y. Lee, “Detecting and tracking moving object using an active camera,” International Conference on Advanced Communication Technology, vol. 2, pp. 817-820, July 2005.
[13]Y.-J. Li, J.-F. Tang, R.-B. Wu, and F.-X. Gong, “Efficient object tracking based on local invariant features,” International Symposium on Communications and Information Technologies, pp. 697-700, September 2006.
[14]N. Zarka, Z. Alhalah, and R. Deeb, “Real-time human motion detection and tracking,” International Conference on Information and Communication Technologies: From Theory to Applications, pp. 1-6, April 2008.
[15]S.-Y. Chien, Y.-W. Huang, B.-Y. Hsieh, S.-Y. Ma, and L.-G. Chen, “Fast video segmentation algorithm with shadow cancellation, global motion compensation, and adaptive threshold techniques,” IEEE Transactions on Multimedia, vol. 6, no. 5, pp. 732-748, October 2004.
[16]C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 246-252, June 1999.
[17]J. L. Barron, D. J. Fleet, S. S. Beauchemin, and T. A. Burkitt, “Performance of optical flow techniques,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 236-242, June 1992.
[18]H. Yamada, T. Tominaga, and M. Ichikawa, “An autonomous flying object navigated by real-time optical flow and visual target detection,” Proceedings of IEEE International Conference on Field-Programmable Technology, pp. 222-227, December 2003.
[19]A. Mitiche and H. Sekkati, “Optical flow 3D segmentation and interpretation: a variational method with active curve evolution and level sets,” IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 11, pp. 1818-1829, November 2006.
[20]S. M. Yoon and H. Kim, “Real-time multiple people detection using skin color, motion and appearance information,” IEEE International Workshop on Robot and Human Interactive Communication, pp. 331-334, September 2004.
[21]J. Yang, M.-J. Zhang, and J.-A. Xu, “A visual tracking method for mobile robot,” World Congress on Intelligent Control and Automation, vol. 2, pp. 9017-9021, June 2006.
[22]J. Pers and S. Kovacic, “Computer vision system for tracking players in sports games,” Proceedings of International Workshop on Image and Signal Processing and Analysis, pp. 177-182, June 2000.
[23]G. Wu, W.-J. Liu, X.-H. Xie, and Q. Wei, “A shape detection method based on the radial symmetry nature and direction-discriminated voting,” IEEE International Conference on Image Processing, vol. 6, pp. VI-169-VI-172, September 2007.
[24]S. J. Mckenna, “Tracking groups of people,” Computer Vision and Image Understanding, vol. 80, no. 1, pp. 42-56, October 2000.
[25]D.-T. Chen and J. Yang, “Robust object tracking via online dynamic spatial bias appearance models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 12, pp. 2157-2169, December 2007.
[26]H. Ki, J. Shin, and J. Paik, “Wavelet transform-based hierarchical active shape model for object tracking,” Proceedings of International Symposium on Intelligent Signal Processing and Communication Systems, pp. 256-261, November 2004.
[27]P. Tissainayagam and D. Suter, “Contour tracking with automatic motion model switching,” Pattern Recognition, vol. 36, no. 10, pp. 2411-2427, October 2003.
[28]N. Peterfreund, “Robust tracking of position and velocity with Kalman snakes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 6, pp. 564-569, June 1999.
[29]B. Castaneda, Y. Luzanov, and J. C. Cockburn, “A modular architecture for real-time feature-based tracking,” IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 5, pp. V-685-8, May 2004.
[30]X.-G. Yu, C.-S. Xu, Q. Tian, and H.-W. Leong, “A ball tracking framework for broadcast soccer video,” IEEE International Conference on Multimedia and Expo, vol. 2, pp. II-273-6, July 2003.
[31]N. Krahnstoever and R. Sharma, “Appearance management and cue fusion for 3D model-based tracking,” IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. II-249-54, June 2003.
[32]A. I. Comport, E. Marchand, and F. Chaumette, “Robust model-based tracking for robot vision,” IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, pp. 692-697, October 2004.
[33]M. Tajana, J. Gaspar, A. Bernardino, and P. Lima, “On the use of perspective catadioptric sensors for 3D model-based tracking with particle filters,” IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2747-2752, November 2007.
[34]O. Javed and M. Shah, “Tracking and object classification for automated surveillance,” Proceedings of the European Conference on Computer Vision, vol. 4, pp. 343-357, 2002.
[35]S. G. Mallat, “A theory for multi-resolution signal decomposition: The wavelet representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674-693, July 1989.
[36]W. Ge, L.-Q. Gao, and Q. Sun, “A method of multi-scale edge detection based on lifting scheme and fusion rule,” International Conference on Wavelet Analysis and Pattern Recognition, vol. 2, pp. 952-955, November 2007.
[37]H.-H. Liu, X.-H. Chen, Y.-G. Chen, and C.-S. Xie, “Double change detection method for moving-object segmentation based on clustering,” IEEE International Symposium on Circuits and Systems, pp. 5027-5030, May 2006.
[38]J. Ahmed, M. N. Jafri, and J. Ahmad, “Target tracking in an image sequence using wavelet features and neural network,” IEEE TENCON, pp. 1-6, November 2005.
[39]F. A. Tab, G. Naghdy, and A. Mertins, “Multiresolution video object extraction fitted to scalable wavelet-based object coding,” IET Image Processing, vol. 1, no. 1, pp. 21-38, March 2007.
[40]R. Rifaat and W. Kinsner, “Experiments with wavelet and other edge detection techniques,” IEEE WESCANEX: Communications, Power and Computing, pp. 322-326, May 1997.
[41]B. Sugandi, H. Kim, J. K. Tan, and S. Ishikawa, “Tracking of moving objects by using a low resolution image,” International Conference on Innovative Computing, Information and Control, pp. 408-408, September 2007.
[42]G. K. Kharate, V. H. Patil, and N. L. Bhale, “Selection of mother wavelet for image compression on basis of nature of image,” Journal of Multimedia, vol. 2, no. 6, November 2007.
[43]郭峰任, 小波轉換應用於人體追蹤之研究 (A study of wavelet transforms applied to human body tracking), Master's thesis, Tamkang University, 2001.
[44]張簡大敬, 監視視訊中有關物件分離與追蹤技術 (Object segmentation and tracking techniques in surveillance video), Master's thesis, I-Shou University, 2005.
[45]賴韋豪, 應用小波技術於動態影像即時追蹤系統 (Applying wavelet techniques to a real-time moving-image tracking system), Master's thesis, Chung Hua University, 2004.
Thesis full-text availability
On campus
Print copy available to the public 2 years after the authorization form is submitted
Electronic full text authorized for on-campus public access
On-campus electronic full text release delayed until 2010-12-28
On-campus bibliographic record available immediately
Off campus
Authorization granted
Off-campus electronic full text release delayed until 2010-12-28
