§ Browse Thesis Bibliographic Record
  
System ID	U0002-0308201023092000
DOI	10.6846/TKU.2010.00067
Title (Chinese)	低儲存容量之技術改良應用於視覺監視系統
Title (English)	Application of an improved low storage capacity technique to surveillance system
Title (Third Language)
School	Tamkang University
Department (Chinese)	電機工程學系碩士班 (Master's Program, Department of Electrical Engineering)
Department (English)	Department of Electrical and Computer Engineering
Foreign Degree School
Foreign Degree College
Foreign Degree Institute
Academic Year	98 (2009-2010)
Semester	2
Publication Year	99 (ROC calendar; 2010)
Graduate Student (Chinese)	鄧光喆
Graduate Student (English)	Kuang-Che Teng
Student ID	697450269
Degree	Master's
Language	Traditional Chinese
Second Language
Defense Date	2010-06-23
Pages	81
Committee	Advisor - 江正雄
Member - 蘇木春
Member - 李佩君
Member - 郭景明
Member - 江正雄
Member - 楊維斌
Keywords (Chinese)	surveillance system
video storage
video synopsis
symmetric wavelet transform
Keywords (English)	symmetric mask-based discrete wavelet transform
video synopsis
video archive
Keywords (Third Language)
Subject Classification
Abstract (Chinese)
With surveillance systems being deployed in ever greater numbers, more and more video must be stored, and saving storage space has become an important issue. Yet existing solutions all focus on video compression. Browsing the stored video is not easy either: much time is wasted searching through footage recorded twenty-four hours a day to find a specific person, object, or event. Video-summarization techniques can shorten browsing time, but they cost extra space to store the condensed video. In short, no existing method solves both problems at once.
This thesis therefore proposes a video-processing flow based on the concept of Video Synopsis. First, the Symmetric Mask-based Discrete Wavelet Transform performs dyadic down-sampling of the frame resolution, and its low-pass characteristic filters out environmental noise and fake motion. Temporal redundancy is then removed by deleting frames in which no event occurs. A self-designed algorithm next extracts the foreground objects, and finally the spatial information of the frame is computed so that consecutive events can be merged into a single frame. Merging consecutive events sharply reduces the number of frames while never altering the original pixel values, achieving substantial lossless compression of the video and shortening its playback time at the same time.
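The down-sampling and event-screening steps described above can be sketched roughly as follows. This is a minimal illustration only: a plain 2x2 averaging stands in for the LL sub-band of the thesis's Symmetric Mask-based DWT, and the `threshold` and `min_ratio` parameters are hypothetical values, not numbers taken from the thesis.

```python
import numpy as np

def ll_band(frame):
    """Dyadic down-sampling: 2x2 averaging approximates the LL sub-band of a
    one-level wavelet transform. Being low-pass, it also suppresses
    high-frequency sensor noise."""
    h, w = frame.shape
    f = frame[:h - h % 2, :w - w % 2].astype(np.float32)
    return (f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2]) / 4.0

def has_event(frame, background, threshold=12.0, min_ratio=0.001):
    """Keep a frame only if enough LL-band pixels differ from the background
    model; frames with no event are temporally redundant and can be dropped."""
    diff = np.abs(ll_band(frame) - ll_band(background))
    return np.mean(diff > threshold) > min_ratio

# Toy usage: a static background and a frame containing a bright moving blob.
bg = np.zeros((64, 64), dtype=np.uint8)
moving = bg.copy()
moving[10:20, 10:20] = 200  # simulated foreground object
print(has_event(bg, bg))      # no event: frame can be dropped
print(has_event(moving, bg))  # event frame: keep it
```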
Analysis of the experimental results shows that the proposed storage solution adapts to natural variations in both indoor and outdoor environments while requiring less than about 15% of the storage space of the original uncompressed video. It also reaches real-time performance of 23 FPS at VGA resolution, and playback time likewise drops to about 15%, so a recorded segment can be reviewed quickly.
Unlike conventional video compression, the condensed video it produces can still be compressed further by any codec to save even more space, making this a very effective low-storage-capacity technique.
Abstract (English)
Since there are more than one million surveillance cameras in the public spaces of a city, we have entered the age of the surveillance system. The most painful tasks are retrieving those volumes of video and finding enough capacity to store them. The conventional way to shrink video is compression; however, this has drawbacks: lossless compression saves little space, while lossy compression weakens the footage as evidence. The conventional way to retrieve video is abstraction and indexing, but it is incompatible with video archiving because it requires additional capacity.
This thesis presents a solution that achieves both goals. The approach, based on Video Synopsis, uses the LL sub-band obtained from the Symmetric Mask-based Discrete Wavelet Transform to down-sample the frame resolution, so that the computing cost of the subsequent steps in the proposed flow is reduced while high-frequency noise is suppressed at the same time. Algorithms are then proposed to extract foreground objects. The key contribution is an algorithm that computes the spatial information needed to display objects simultaneously that originally occurred at different times, much like the stroboscopic effect.
According to the experimental results, the solution has several merits: it handles real-time video archiving at 23 FPS in VGA resolution, records losslessly, needs only about 15% or less of the storage space of the original video, produces a condensed video whose duration is about 15% of the original run-time, and is suitable for outdoor environments.
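The stroboscopic compositing idea in the abstract, in which objects from different times are shown together in one frame with their pixel values copied unchanged, can be sketched as follows. The detection masks are assumed to be given, and this sketch omits the thesis's spatial-layout calculation that prevents objects from colliding.

```python
import numpy as np

def composite_synopsis(background, detections):
    """Paste foreground objects that occurred at different times onto a single
    frame (stroboscopic effect). Each detection is a (frame, mask) pair; pixel
    values are copied verbatim, so the objects remain lossless."""
    out = background.copy()
    for frame, mask in detections:
        out[mask] = frame[mask]  # copy original pixels under the object mask
    return out

# Toy usage: two 8x8 frames, each with one object at a different position.
bg = np.zeros((8, 8), dtype=np.uint8)
f1 = bg.copy(); f1[1:3, 1:3] = 100   # object seen at time t1
f2 = bg.copy(); f2[5:7, 5:7] = 200   # object seen at time t2
m1, m2 = f1 > 0, f2 > 0
synopsis = composite_synopsis(bg, [(f1, m1), (f2, m2)])
print(synopsis[1, 1], synopsis[5, 5])  # both objects now appear in one frame
```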
Abstract (Third Language)
Table of Contents
Chinese Abstract	I
English Abstract	II
Table of Contents	III
List of Figures	VII
List of Tables	IX

Chapter 1	Introduction	1
1.1.	Research Background	1
1.2.	Motivation and Topic	2
1.3.	Thesis Organization	4
Chapter 2	Related Techniques	5
2.1.	Overview	5
2.2.	Conventional Storage Methods for Surveillance Systems	5
2.2.1.	Video Compression Format: MPEG-2	5
2.2.2.	Increasing the Compression Ratio	5
2.3.	Storage Methods Suited to Surveillance Requirements	7
2.3.1.	Video Compression Format: Motion JPEG2000	7
2.3.2.	Dynamic Quality Reduction	8
2.4.	Lossless Video Storage	10
2.4.1.	Fast Video Browsing and Retrieval Techniques	10
2.4.1.1.	Video Abstraction and Indexing	11
2.4.1.2.	Video Synopsis	12
2.4.1.3.	Comparison of Video Summarization Methods	15
2.4.1.4.	Applicability of MPEG-7	16
2.4.2.	Applying Fast Browsing Techniques to Storage	17
Chapter 3	Event Detection Flow	19
3.1.	Overview	19
3.2.	Foreground Object Detection	19
3.2.1.	Use of Grayscale	19
3.2.2.	Applying the Discrete Wavelet Transform	20
3.2.3.	Choice of Detection Method	22
3.2.4.	Threshold	24
3.2.5.	Background Update under Illumination Changes	27
3.3.	Residual Noise Removal	28
3.4.	Object Connection and Labeling	32
3.5.	Computing and Separating Object Boundaries	37
3.6.	System Block Diagram	38
Chapter 4	Building the Condensed Video	40
4.1.	Overview	40
4.2.	Applying the Stroboscopic Effect	40
4.3.	Generating Condensed Video Frames	44
4.4.	Misjudgment Cost Evaluation	48
4.5.	Time Stamps	50
4.6.	Flowchart	53
Chapter 5	Experimental Results	56
5.1.	Experiment Description	56
5.2.	Test Video 1	56
5.3.	Test Video 2	58
5.4.	Test Video 3	60
Chapter 6	Analysis and Conclusion	63
6.1.	Analysis of Experimental Results	63
6.2.	Conclusion	72
References	76

 
List of Figures
Figure 2.1	Interframe coding diagram	6
Figure 2.2	VBR Motion JPEG2000 using motion estimation (ME)	9
Figure 2.3	Bit-rate control using LL-band differences	10
Figure 2.4	Video Synopsis diagram	14
Figure 3.1	Foreground detection scheme	24
Figure 3.2	Camera noise distribution and threshold	26
Figure 3.3	Proposed noise-removal method	29
Figure 3.4	Statistical anti-noise results	31
Figure 3.5	Labeling collisions	33
Figure 3.6	CCL algorithm used in this thesis	36
Figure 3.7	MBB detection algorithm	38
Figure 3.8	Event detection system block diagram	39
Figure 4.1	Video Synopsis with stroboscopic effect	41
Figure 4.2	Frame classification in the proposed condensed video	43
Figure 4.3	Dead Zone construction	46
Figure 4.4	Effect of tracking misjudgment on the Dead Zone	49
Figure 4.5	Time Stamp table built during video processing	52
Figure 4.6	Event times derived from the Time Stamp table	53
Figure 4.7	Condensed video construction flowchart	54
Figure 5.1	Test video 1 and its results	57
Figure 5.2	Test video 2 and its results	59
Figure 5.3	Test video 3 and its results	61
Figure 6.1	Increasing L	64
Figure 6.2	Faster-moving objects	65
Figure 6.3	Multiple objects and tracking misjudgment	67
Figure 6.4	Overlap when paths coincide	68
Figure 6.5	Frame after re-initialization	69
Figure 6.6	Occlusion overlap before object overlap is detected	70
Figure 6.7	Occlusion recording after object overlap is detected	71
Figure 6.8	Frame re-initialized after occlusion	72

 
List of Tables
Table 2.1	Comparison of video summarization techniques	15
Table 4.1	Dead Zone Table	55
Table 4.2	Important members of the object class	55
Table 5.1	Comparison for test video 1	57
Table 5.2	Comparison for test video 2	59
Table 5.3	Comparison for test video 3	61
Table 6.1	Comparison of the proposed solution with other techniques	74
References
[1]Generic Coding of Moving Pictures and Associated Audio Information, ISO/IEC 13818-2: Video (MPEG-2), May 1996.
[2]Draft ITU-T Recommendation and Final Draft International Standard of Joint Video Specification (ITU-T Rec. H.264 | ISO/IEC 14496-10 AVC), Joint Video Team of ISO/IEC and ITU-T, Nov. 2007.
[3]J. Ostermann, J. Bormans, P. List, D. Marpe, M. Narroschke, F. Pereira, T. Stockhammer, and T. Wedi, “Video coding with H.264/AVC: tools, performance, and complexity,” in IEEE Circuits and Systems Magazine, vol. 4, pp. 7-28, First Quarter 2004.
[4]Information technology-JPEG2000 image coding system-Part 3 : Motion JPEG2000, ISO/IEC 15444-3, 2007.
[5]Information technology-JPEG2000 image coding system-Part 1 : Core coding system, ISO/IEC 15444-1, 2004.
[6]A. Skodras, C. Christopoulos, and T. Ebrahimi, “The JPEG 2000 still image compression standard,” IEEE Signal Processing Mag., vol. 18, pp. 36-58, Sept. 2001.
[7]R. Miyamoto, H. Sugita, Y. Hayashi, H. Tsutsui, T. Masuzaki, T. Onoye, and Y. Nakamura, “High quality Motion JPEG2000 coding scheme based on the human visual system,” IEEE International Symposium on Circuits and Systems, vol. 3, pp. 2096-2099, May 2005.
[8]Jong Han Kim, Sang Beom Kim, and Chee Sun Won, “Motion JPEG2000 Coding Scheme Based on Human Visual System for Digital Cinema,” Pacific-Rim Symposium on Image and Video Technology, Lecture Notes in Computer Science vol. 4319, pp. 869-877, Dec. 2006.
[9]Yueting Zhuang, Yong Rui, T.S. Huang, and S. Mehrotra, “Adaptive key frame extraction using unsupervised clustering,” IEEE Proc. Image Processing, vol.1, pp. 866-870, Oct. 1998.
[10]M. A. Smith and T. Kanade, “Video skimming and characterization through the combination of image and language understanding techniques”, IEEE Proc. Computer Vision and Pattern Recognition, pp. 775-781, Jun. 1997.
[11]Xiaofei He, O. King, Wei-Ying Ma, Mingjing Li, and Hong-Jiang Zhang, “Learning a semantic space from user's relevance feedback for image retrieval,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, pp. 39-48, Jan. 2003.
[12]Bo-Wei Chen, Jia-Ching Wang, and Jhing-Fa Wang, “A Novel Video Summarization Based on Mining the Story-Structure and Semantic Relations Among Concept Entities,” IEEE Transactions on Multimedia, vol. 11, pp. 295-312, Feb. 2009.
[13]Y. Pritch, A. Rav-Acha, and S. Peleg, “Nonchronological Video Synopsis and Indexing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, pp. 1971-1984, Nov. 2008.
[14]V. Choudhary, and A.K. Tiwari, “Surveillance Video Synopsis,” Proc. Computer Vision, Graphics & Image, pp. 207-212, Dec. 2008.
[15]Information Technology-Multimedia Content Description Interface - Parts 1 to 7, ISO/IEC 15938-1 to 15938-7, 2002 to 2003.
[16]MPEG-7 Overview (version 10), ISO/IEC JTC1/SC29/WG11N6828, Oct. 2004.
[17]B. Sugandi, Hyoungseop Kim, Joo Kooi Tan, and S. Ishikawa, “Tracking of Moving Objects by Using a Low Resolution Image,” International Conference on Innovative Computing, Information and Control, pp. 408-408, Sept. 2007.
[18]Chih-Hsien Hsia, Jing-Ming Guo, and Jen-Shiun Chiang, “Improved Low-Complexity Algorithm for 2-D Integer Lifting-Based Discrete Wavelet Transform Using Symmetric Mask-Based Scheme,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 19, pp. 1202-1208, Aug. 2009.
[19]F.-H. Cheng, and Y.-L. Chen, “Real time multiple objects tracking and identification based on discrete wavelet transform,” Pattern Recognition, vol. 39, no. 3, pp. 1126-1139, Jun. 2006.
[20]C.R. Wren, A. Azarbayejani, T. Darrell, and A.P. Pentland, “Pfinder: real-time tracking of the human body,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 780-785, Jul. 1997.
[21]B.P.L. Lo and S.A. Velastin, “Automatic congestion detection system for underground platforms,” Proc. ISIMP 2001, pp. 158-161, May 2001.
[22]C. Stauffer and W.E.L. Grimson, “Adaptive background mixture models for real-time tracking,” Proc. IEEE CVPR 1999, pp. 246-252, June 1999.
[23]A. Elgammal, D. Harwood, and L.S. David, “Non-parametric model for background subtraction,” Proc. ECCV 2000, pp. 751-767, June 2000.
[24]D. Comaniciu and P. Meer, “Mean shift: a robust approach toward feature space analysis,” IEEE Trans. On Pattern Anal. And Machine Intell., vol. 24, no. 5, pp. 603-619, 2002.
[25]M. Seki, T. Wada, H. Fujiwara, and K. Sumi, “Background subtraction based on cooccurrence of image variations,” Proc. CVPR 2003, vol. 2, pp. 65-72, 2003.
[26]N.M. Oliver, B. Rosario, and A.P. Pentland, “A Bayesian computer vision system for modeling human interactions,” IEEE Trans. on Pattern Anal. and Machine Intell., vol. 22, no. 8, pp. 831-843, Aug. 2000.
[27]L. Di Stefano, S. Mattoccia, M. Mola, “A change-detection algorithm based on structure and colour,” IEEE Proc. Advanced Video and Signal Based Surveillance, pp. 252-259, July 2003.
[28]S.-Y. Chien, Y.-W. Huang, B.-Y. Hsieh, S.-Y. Ma, and L.-G. Chen, “Fast video segmentation algorithm with shadow cancellation, global motion compensation, and adaptive threshold techniques,” IEEE Transactions on Multimedia, vol. 6, no. 5, pp.732-748, October 2004.
[29]Sheng-Yan Yang, and Chiou-Ting Hsu, “Background Modeling from GMM Likelihood Combined with Spatial and Color Coherency,” IEEE International Conference on Image Processing, pp. 2801-2804, Oct. 2006.
[30]Kenji Suzuki, Isao Horiba, and Noboru Sugie, “Linear-time connected-component labeling based on sequential local operations,” in Computer Vision and Image Understanding, vol. 89, pp. 1-23, Jan. 2003.
[31]H. Hedberg, F. Kristensen, and V. Owall, “Implementation of a Labeling Algorithm based on Contour Tracing with Feature Extraction,” International Symposium on Circuits and Systems, pp. 1101-1104, May 2007.
[32]Lifeng He, Yuyan Chao and K. Suzuki, “A Run-Based Two-Scan Labeling Algorithm,” IEEE Transactions on Image Processing, vol. 17, pp. 749-756, May 2008.
[33]Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, Introduction to Algorithms, Third Edition, Boston : MIT Press, 2009.
[34]Jiangjian Xiao and Mubarak Shah, “Motion Layer Extraction in the Presence of Occlusion Using Graph Cuts,” IEEE Trans. on Pattern Anal. and Machine Intell., vol. 27, no. 10, pp. 1644-1659, Oct. 2005.
[35]J. Sun, W. Zhang, X. Tang, and H. Shum, “Background Cut,” Proc. Ninth European Conference on Computer Vision, pp. 628-641, 2006.
[36]Haifeng Xu, Akmal A. Younis, and Mansur R. Kabuka, “Automatic Moving Object Extraction for Content-Based Applications,” IEEE Trans. on Circuits and System for Video Technology, vol. 14, no. 6, pp. 796-812, June 2004.
[37]Munchurl Kim, Jae Gark Choi, Daehee Kim, Hyung Lee, Myoung Ho Lee, Chieteuk Ahn, and Yo-Sung Ho, “A VOP Generation Tool: Automatic Segmentation of Moving Objects in Image Sequences Based on Spatio-Temporal Information,” IEEE Trans. on Circuits and Systems for Video Technology, vol. 9, no. 8, pp. 1216-1226, Dec. 1999.
[38]Liyuan Li, Weimin Huang, Irene Yu-Hua Gu, Qi Tian, “Statistical modeling of complex backgrounds for foreground object detection,” IEEE Transactions on Image Processing, vol. 13, pp. 1459-1472, Nov. 2004.
[39]Akira Minezawa, Shuma Okazaki, Ichiro Matsuda, Susumu Itoh, Sei Naito, and Atsushi Koike, “Performance Improvement of the Lossless Video Coding Scheme by Adapting the Number of 3D Predictors Frame by Frame,” Proc. International Workshop on Advanced Image Technology 2008, 6 pages, Jan. 2008.
[40]BriefCam, 2010, Fact Sheet: BriefCam VS Online, PDF, http://briefcam.com/wp-content/uploads/2010/03/Product_sheet_VS_Online_V1.3_NEW.pdf
Thesis Full-Text Access Rights
On campus
Print copy released to the public 2 years after submission of the authorization form
Electronic full text authorized for on-campus release
On-campus electronic copy released 1 year after submission of the authorization form
Off campus
Authorization granted
Off-campus electronic copy released 1 year after submission of the authorization form
