Chueh Sheng Memorial Library, Tamkang University (TKU Library)


System ID: U0002-0308201012475200
Thesis Title (Chinese): 適應性濾波器應用於移動物件偵測之積體電路架構設計
Thesis Title (English): The VLSI Architecture Design of Moving Objects Detection with Adaptive Filter
University: Tamkang University
Department (Chinese): 電機工程學系碩士班 (Master's Program, Department of Electrical Engineering)
Department (English): Department of Electrical Engineering
Academic Year: 98 (2009-2010)
Semester: 2
Publication Year: 99 (2010)
Student Name (Chinese): 吳宗政
Student Name (English): Tsung-Cheng Wu
Student ID: 697450020
Degree: Master's
Language: Chinese
Oral Defense Date: 2010-06-23
Pages: 107
Oral Defense Committee: Advisor - 江正雄
Member - 蘇木春
Member - 江正雄
Member - 李佩君
Member - 郭景明
Member - 楊維斌
Keywords (Chinese, translated): moving object tracking; adaptive filter; least mean square (LMS) algorithm
Keywords (English): visual surveillance system; LMS algorithm; moving objects detection
Subject Classification: Applied Sciences - Electrical and Electronic Engineering
Abstract (Chinese, translated): In recent years, frequent international terrorist incidents have drawn growing government attention to surveillance systems for public safety. Crime rates keep rising while security manpower remains limited, so the current trend is to develop intelligent real-time surveillance systems that replace traditional passive monitoring and reduce the manpower burden.
  The first step of an intelligent real-time surveillance system is to detect moving objects. After the detected objects are classified, tracked, and otherwise processed, the system can alert security personnel in time, which greatly helps crime prevention. In real-time surveillance applications, successfully detecting moving objects is the most basic yet most important task, and it is therefore the research topic of this thesis. We apply an adaptive filter to an intelligent surveillance system and propose an architecture that uses the least mean square (LMS) algorithm for moving object detection. The LMS algorithm yields low-pass filter coefficients that adapt to changing environments, and the resulting filter reduces image resolution and noise, raising detection accuracy while lowering the overall computational load. As surveillance video resolution continues to increase, computation time grows and real-time operation may become infeasible; to speed up the overall system, we implement it as a hardware architecture. The detection system follows a cell-based (standard-cell) IC design flow and is realized with automatic placement and routing, with post-layout simulation completed. At a 50 MHz operating frequency, the system processes about 1291 frames per second at QVGA (320x240) resolution; at Full HD (1920x1080), it still processes about 48 frames per second, which remains real-time. In addition, the proposed method was tested in different environments and compared with other methods. The experimental results show that, in environments with more complex external variations and across different test samples, the proposed adaptive-filter-based moving object detection system yields better detection results than other resolution-reduction approaches, achieving a moving object detection success rate of about 84.72%.
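The throughput figures quoted in the abstract can be cross-checked with simple arithmetic. The sketch below is an illustration, not part of the thesis: the cycles-per-pixel value is inferred from the reported numbers (50 MHz clock, ~1291 FPS at QVGA) under the assumption that throughput scales inversely with pixel count.

```python
# Back-of-envelope check of the throughput figures quoted above.
# The cycles-per-pixel value is inferred from the reported numbers
# (50 MHz clock, ~1291 FPS at QVGA); it is not stated in the thesis.

CLOCK_HZ = 50_000_000

def fps(width, height, cycles_per_pixel):
    """Frames per second at a given resolution and per-pixel cost."""
    return CLOCK_HZ / (width * height * cycles_per_pixel)

# Infer the per-pixel cost from the QVGA figure.
cycles_per_pixel = CLOCK_HZ / (320 * 240 * 1291)   # about 0.5 cycles/pixel

qvga_fps = fps(320, 240, cycles_per_pixel)      # 1291 by construction
fullhd_fps = fps(1920, 1080, cycles_per_pixel)  # about 47.8, matching "about 48"

print(f"{cycles_per_pixel:.3f} cycles/pixel")
print(f"QVGA: {qvga_fps:.0f} FPS, Full HD: {fullhd_fps:.1f} FPS")
```

The two reported rates are mutually consistent: both correspond to roughly half a clock cycle per pixel, suggesting the datapath handles about two pixels per cycle.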
Abstract (English): Intelligent surveillance systems have become an important security topic in recent years. The first step of an intelligent surveillance system is moving object detection. Successful detection reduces redundant data and provides useful information for post-processing such as moving object tracking and analysis. Moving object detection is therefore a basic but essential task of an intelligent surveillance system.
This thesis presents a new approach to moving object detection based on the least mean square (LMS) algorithm. The method adapts the coefficients of a low-pass filter to different environments. The low-pass filter reduces image resolution and suppresses noise effects such as Gaussian noise and fake motions. High-resolution surveillance video will become increasingly common, and its longer computing time may prevent a software implementation from running in real time, so we design and implement a hardware architecture to reduce the computing time. The proposed approach is implemented as a VLSI architecture following a cell-based IC design flow. At QVGA (320×240) resolution, the system reaches 1291 frames per second (FPS); with the same architecture, it reaches 48 FPS at Full HD (1920×1080).
The proposed approach is compared with other methods, including direct down-sampling, average filtering, and the symmetric mask-based discrete wavelet transform (SMDWT). The experimental results show that the accuracy of our approach exceeds that of the others in different environments, with a moving object detection accuracy of 84.72%.
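The LMS adaptation described in the abstract can be illustrated with a generic scalar sketch. This is not the thesis's 5×5 mask-training circuit; the tap count, step size `mu`, and signal names are assumptions chosen for a minimal system-identification demo.

```python
import numpy as np

def lms_filter(x, d, n_taps=5, mu=0.01):
    """Generic LMS adaptive FIR filter: the weights w are adapted so
    that the filter output tracks the desired signal d."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        window = x[k - n_taps + 1:k + 1][::-1]  # newest sample first
        y = w @ window                          # filter output
        e[k] = d[k] - y                         # estimation error
        w += 2 * mu * e[k] * window             # LMS weight update
    return w, e

# Toy usage: identify an "unknown" 5-tap low-pass system.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.1, 0.2, 0.4, 0.2, 0.1])  # system to be learned (assumed)
d = np.convolve(x, h)[:len(x)]           # desired (reference) signal
w, e = lms_filter(x, d)
print("learned taps:", np.round(w, 3))   # converges toward h
```

In the thesis the same update rule trains the coefficients of a two-dimensional low-pass mask, so the filter tracks the scene instead of a fixed reference system.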
Table of Contents: Chapter 1 Introduction 1
1.1 Background and Motivation 1
1.2 Research Topic 3
1.3 Thesis Organization 5
Chapter 2 Least Mean Square (LMS) Algorithm 6
2.1 Adaptive Filters 6
2.2 Wiener-Hopf Equations 7
2.3 Steepest Descent Method 13
2.4 LMS Algorithm 16
Chapter 3 Architecture of the LMS-Based Moving Object Detection System 19
3.1 System Architecture 19
3.2 Color Conversion 21
3.2.1 RGB Color Model 23
3.2.2 YIQ and YUV Color Models 24
3.3 Image Resolution Reduction 26
3.3.1 Training Module Built with the LMS Algorithm 27
3.4 Moving Object Detection 31
3.4.1 Frame Difference Method 31
3.5 Morphological Compensation 34
3.5.1 Dilation and Erosion 35
3.5.2 Opening and Closing 39
3.6 Image Labeling 42
3.6.1 Connectivity and Components 42
3.6.2 Labeling 44
Chapter 4 Single-Chip Hardware Architecture Design 48
4.1 Architecture of Each Stage 48
4.2 Color Conversion 52
4.3 Image Resolution Reduction 57
4.4 Moving Object Detection 68
4.5 Detected Object Edge Enhancement 72
4.6 Morphological Compensation 75
4.6.1 Dilation Circuit 75
4.6.2 Erosion Circuit 78
4.7 Image Labeling 82
Chapter 5 Experimental Results 87
5.1 Design Flow 87
5.2 Chip Layout and Specifications 89
5.3 Hardware Simulation Results 91
5.3.1 Functional Simulation 91
5.3.2 Result Analysis 92
5.4 Object Detection Results 94
5.4.1 Direct Down-Sampling 94
5.4.2 Average-Filter Down-Sampling 95
5.4.3 Symmetric Mask-Based Wavelet Transform Resolution Reduction 95
5.4.4 Comparison of Experimental Results 96
Chapter 6 Conclusion 103
References 104
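The detection pipeline outlined in Chapter 3 (frame differencing, §3.4.1, followed by morphological opening and closing, §3.5) can be sketched in a few lines. This is a generic NumPy illustration under an assumed difference threshold, not the thesis's hardware circuits.

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation (cf. thesis Section 3.5.1)."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode(mask):
    """3x3 binary erosion (cf. thesis Section 3.5.1)."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def detect(prev, curr, threshold=25):
    """Frame difference (Section 3.4.1) plus opening/closing (Section 3.5).
    The threshold value is an assumption for illustration."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    mask = diff > threshold
    mask = dilate(erode(mask))   # opening: removes isolated noise pixels
    mask = erode(dilate(mask))   # closing: fills small holes
    return mask

# Toy usage: a 6x6 object appears; a lone noise pixel is suppressed.
prev = np.zeros((40, 40), dtype=np.uint8)
curr = np.zeros((40, 40), dtype=np.uint8)
curr[10:16, 10:16] = 200   # moving object
curr[30, 5] = 200          # isolated noise
result = detect(prev, curr)
print("moving pixels:", result.sum())   # 36 (object kept, noise removed)
```

The final labeling stage (§3.6) would then assign one label per connected component of the cleaned mask; it is omitted here for brevity.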

Figure 2.1 Schematic of an adaptive filter 7
Figure 2.2 Schematic of a linear digital filter 8
Figure 2.3 Linear transversal filter 11
Figure 3.1 Flowchart of the overall system architecture 20
Figure 3.2 Schematic of color conversion 22
Figure 3.3 Color image converted to grayscale 22
Figure 3.4 RGB color cube 23
Figure 3.5 Additive color model 24
Figure 3.6 Mapping between YUV chrominance signals and colors 25
Figure 3.7 Training the resolution-reduction mask module with the LMS algorithm 28
Figure 3.8 Symmetric 5×5 low-pass filter mask 29
Figure 3.9 Schematic of the mask operation 30
Figure 3.10 Schematic of the frame difference method 32
Figure 3.11 Frame difference method 33
Figure 3.12 Union of two binary images 35
Figure 3.13 Schematic of dilation 36
Figure 3.14 Schematic of erosion 37
Figure 3.15 Pixel P and its neighbors 38
Figure 3.16 Result of the opening operation on an image 39
Figure 3.17 Result of the closing operation on an image 41
Figure 3.18 Four-connectivity and eight-connectivity 43
Figure 3.19 Flowchart of image labeling 44
Figure 3.20 Scan direction during labeling 45
Figure 3.21 Neighbor pixels examined during labeling 45
Figure 3.22 Schematic of an image labeling result 47
Figure 4.1 Flowchart of the overall hardware architecture 48
Figure 4.2 Block diagram of the overall system architecture 49
Figure 4.3 Internal schematic of the color conversion circuit 53
Figure 4.4 Block diagram of the color conversion processing unit 54
Figure 4.5 Shift-and-add circuit for the R coefficient 57
Figure 4.6 5×5 mask filter and its coefficient positions 58
Figure 4.7 Block diagram of the resolution-reduction circuit 59
Figure 4.8 Internal schematic of the resolution-reduction circuit 62
Figure 4.9 Schematic of the 5×5 mask operation 65
Figure 4.10 Overlap of results during the mask operation 66
Figure 4.11 States of the control circuit 67
Figure 4.12 Schematic of the two-stage resolution-reduction circuit 68
Figure 4.13 Block diagram of moving object detection 69
Figure 4.14 Internal schematic of the moving object detection circuit 71
Figure 4.15 Block diagram of the object edge enhancement circuit 72
Figure 4.16 Internal schematic of the object edge enhancement circuit 73
Figure 4.17 Block diagram of the dilation circuit 76
Figure 4.18 Internal schematic of the dilation circuit 78
Figure 4.19 Block diagram of the erosion circuit 79
Figure 4.20 Internal schematic of the erosion circuit 81
Figure 4.21 Block diagram of the image labeling circuit 82
Figure 4.22 Schematic of the image labeling circuit 85
Figure 5.1 Cell-based IC design flow 88
Figure 5.2 Chip layout 90
Figure 5.3 RTL-level simulation results 91
Figure 5.4 Gate-level simulation results 92
Figure 5.5 Simulation results after AP&R 92
Figure 5.6 Direct down-sampling method 95
Figure 5.7 Low-low subband matrix coefficients of the symmetric mask 96
Figure 5.8 Line chart of experimental results (1) 98
Figure 5.9 Line chart of experimental results (2) 99
Figure 5.10 Line chart of experimental results (3) 100
Figure 5.11 Computing the moving object detection success rate 102

Table 4.1 Pin descriptions of the overall system architecture 49
Table 4.2 Pin descriptions of the color conversion circuit 54
Table 4.3 Pin descriptions of the resolution-reduction circuit 59
Table 4.4 Pin descriptions of the moving object detection circuit 69
Table 4.5 Pin descriptions of the object edge enhancement circuit 72
Table 4.6 Pin descriptions of the dilation circuit 76
Table 4.7 Pin descriptions of the erosion circuit 80
Table 4.8 Pin descriptions of the image labeling circuit 83
Table 5.1 Chip specifications 89
Table 5.2 Memory specifications 90
Table 5.3 Mask coefficients used in the experiments 97
Table 5.4 Analysis of experimental results (1) 98
Table 5.5 Analysis of experimental results (2) 99
Table 5.6 Analysis of experimental results (3) 100

References: [1] Taiwan Electrical and Electronic Manufacturers' Association e-newsletter, no. 117, April 14, 2010. URL: http://www.teema.org.tw/epaper/20100414/industrial032.html
[2] B. Sugandi, H. Kim, J. K. Tan, and S. Ishikawa, “Tracking of moving objects by using a low resolution image,” International Conference on Innovative Computing, Information and Control, pp. 408-408, September 2007.
[3] F.-H. Cheng and Y.-L. Chen, “Real time multiple objects tracking and identification based on discrete wavelet transform,” Pattern Recognition, vol. 39, no. 3, pp. 1126-1139, June 2006.
[4] C.-H. Hsia, D.-W. Huang, J.-S. Chiang, and Z.-J. Wu, “Moving object tracking using symmetric mask-based scheme,” Fifth International Conference on Information Assurance and Security, vol. 1, pp. 173-176, August 2009.
[5] S. Haykin, Adaptive Filter Theory, 2nd edition, Prentice Hall, 1991.
[6] G. Williams, Linear Algebra with Applications, 5th edition, Jones and Bartlett, 2004.
[7] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd edition, Addison-Wesley, 1992.
[8] C.-H. Hsia, J.-M. Guo, and J.-S. Chiang, “Improved low complexity algorithm for 2-D integer lifting-based discrete wavelet transform using symmetric mask-based scheme,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 19, no. 8, pp. 1-7, August 2009.
[9] URL: http://en.wikipedia.org/wiki/NTSC
[10] URL: http://en.wikipedia.org/wiki/PAL
[11] URL: http://en.wikipedia.org/wiki/YUV
[12] C.-H. Zhan, X.-H. Duan, S.-Y. Xu, Z. Song, and M. Luo, “An improved moving object detection algorithm based on frame difference and edge detection,” International Conference on Image and Graphics, pp. 519-523, August 2007.
[13] B.-X. Xiao, C. Lu, H. Chen, Y.-F. Yu, and R.-B. Chen, “Moving object detection and recognition based on the frame difference algorithm and moment invariant features,” Chinese Control Conference, pp. 578-581, July 2008.
[14] S. Ribaric, G. Adrinek, and S. Segvic, “Real-time active visual tracking system,” Proceedings of IEEE Mediterranean Electrotechnical Conference, vol. 1, pp. 231-234, May 2004.
[15] K. K. Kim, S. H. Cho, H. J. Kim, and J. Y. Lee, “Detecting and tracking moving object using an active camera,” International Conference on Advanced Communication Technology, vol. 2, pp. 817-820, July 2005.
[16] Y.-J. Li, J.-F. Tang, R.-B. Wu, and F.-X. Gong, “Efficient object tracking based on local invariant features,” International Symposium on Communications and Information Technologies, pp. 697-700, September 2006.
[17] N. Zarka, Z. Alhalah, and R. Deeb, “Real-time human motion detection and tracking,” International Conference on Information and Communication Technologies: From Theory to Applications, pp. 1-6, April 2008.
[18] S.-Y. Chien, Y.-W. Huang, B.-Y. Hsieh, S.-Y. Ma, and L.-G. Chen, “Fast video segmentation algorithm with shadow cancellation, global motion compensation, and adaptive threshold techniques,” IEEE Transactions on Multimedia, vol. 6, no. 5, pp. 732-748, October 2004.
[19] C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 246-252, June 1999.
[20] J. L. Barron, D. J. Fleet, S. S. Beauchemin, and T. A. Burkitt, “Performance of optical flow techniques,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 236-242, June 1992.
[21] H. Yamada, T. Tominaga, and M. Ichikawa, “An autonomous flying object navigated by real-time optical flow and visual target detection,” Proceedings of IEEE International Conference on Field-Programmable Technology, pp. 222-227, December 2003.
[22] A. Mitiche and H. Sekkati, “Optical flow 3D segmentation and interpretation: a variational method with active curve evolution and level sets,” IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 11, pp. 1818-1829, November 2006.
[23] S. M. Yoon and H. Kim, “Real-time multiple people detection using skin color, motion and appearance information,” IEEE International Workshop on Robot and Human Interactive Communication, pp. 331-334, September 2004.
[24] J. Yang, M.-J. Zhang, and J.-A. Xu, “A visual tracking method for mobile robot,” World Congress on Intelligent Control and Automation, vol. 2, pp. 9017-9021, June 2006.
[25] J. Pers and S. Kovacic, “Computer vision system for tracking players in sports games,” Proceedings of International Workshop on Image and Signal Processing and Analysis, pp. 177-182, June 2000.
[26] G. Wu, W.-J. Liu, X.-H. Xie, and Q. Wei, “A shape detection method based on the radial symmetry nature and direction-discriminated voting,” IEEE International Conference on Image Processing, vol. 6, pp. VI-169-VI-172, September 2007.
[27] J.-C. Huang and W.-S. Hsieh, “Wavelet-based moving object segmentation,” Electronics Letters, vol. 39, pp. 1380-1382, September 2003.
[28] J.-C. Huang, T.-S. Su, L.-J. Wang, and W.-S. Hsieh, “Double-change-detection method for wavelet-based moving object segmentation,” Electronics Letters, vol. 40, no. 13, pp. 798-799, June 2004.
[29] S.-D. Jean, C.-M. Liu, C.-C. Chang, and Z. Chen, “A new algorithm and its VLSI architecture design for connected component labeling,” International Symposium on Circuits and Systems, vol. 2, pp. 565-568, May 1994.
[30] W.-K. Chan and S.-Y. Chien, “Subword parallel architecture for connected component labeling and morphological operations,” IEEE Asia Pacific Conference on Circuits and Systems, pp. 936-939, December 2006.
[31] K. Suzuki, I. Horiba, and N. Sugie, “Linear-time connected-component labeling based on sequential local operations,” Computer Vision and Image Understanding, vol. 89, pp. 1-23, January 2003.
[32] H. Hedberg, F. Kristensen, and V. Owall, “Implementation of a labeling algorithm based on contour tracing with feature extraction,” International Symposium on Circuits and Systems, pp. 1101-1104, May 2007.
[33] L. He, Y. Chao, and K. Suzuki, “A run-based two-scan labeling algorithm,” IEEE Transactions on Image Processing, vol. 17, pp. 749-756, May 2008.
[34] K. K. Parhi, VLSI Digital Signal Processing Systems, 1st edition, John Wiley, 1999.
Thesis Usage Rights
  • The author agrees to grant the library royalty-free permission for in-library readers to reproduce the printed copy for academic purposes, effective 2011-08-03.
  • The author agrees to authorize browsing/printing of the electronic full text, publicly available from 2011-08-03.

