§ Browse Thesis Bibliographic Record
  
System ID	U0002-2801202608212900
Title (Chinese)	基於車輛行駛軌跡分析之道路CCTV影像資料量降低機制研究
Title (English)	A Study on Reducing Video Data Volume in Road CCTV Systems Based on Vehicle Trajectory Analysis
Title (Third Language)
University	Tamkang University (淡江大學)
Department (Chinese)	資訊工程學系碩士班
Department (English)	Department of Computer Science and Information Engineering
Foreign Degree School
Foreign Degree College
Foreign Degree Institute
Academic Year	114
Semester	1
Publication Year	115
Graduate Student (Chinese)	劉兆宸
Graduate Student (English)	CHAO-CHEN LIU
Student ID	609410203
Degree	Master's
Language	Traditional Chinese
Second Language
Defense Date	2026-01-08
Pages	53
Committee Members	Committee Member - 林偉川 (wayne@takming.edu.tw)
Committee Member - 林其誼 (chiyilin@mail.tku.edu.tw)
Advisor - 陳瑞發 (alpha@mail.tku.edu.tw)
Keywords (Chinese)	CCTV
YOLO
物件偵測 (Object Detection)
遮蔽判斷 (Occlusion Detection)
多項式迴歸 (Polynomial Regression)
儲存節省 (Storage Saving)
資料儲存 (Data Storage)
Keywords (English)	CCTV
YOLO
Object Detection
Occlusion Detection
Polynomial Regression
Energy-Efficient Storage
Keywords (Third Language)
Subject Classification
Abstract (Chinese)
In road surveillance systems, CCTV footage is commonly used to analyze road flooding, potholes, and traffic conditions. In real deployments, however, vehicles often occlude the key recognition region for extended periods, so large volumes of footage remain unusable even after being recorded and stored, wasting storage resources and increasing the image-processing load.
This study implements a data storage and post-processing reduction system based on predicting when a vehicle will leave the ROI, taking road CCTV footage as input. At the front end, a YOLO object detection model identifies vehicles, and multi-object tracking combining a Kalman filter with the Hungarian algorithm produces temporally continuous vehicle trajectories. Based on the camera viewpoint and road layout, an unobstructed recognition region (Region of Interest, ROI) is manually annotated, and ray casting determines whether a vehicle occludes it.
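The tracking stage described above associates each Kalman-predicted box with a fresh detection. A minimal sketch of that association step (function names, box layout, and the IoU threshold are illustrative; an exhaustive optimal matcher stands in here for the Hungarian algorithm, which gives the same optimum but scales to larger object counts):

```python
from itertools import permutations

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

def match(predicted, detected, iou_min=0.3):
    """One-to-one assignment of Kalman-predicted boxes to detections that
    maximizes total IoU; pairs below iou_min are left unmatched.
    Exhaustive search stands in for the Hungarian algorithm (small N only)."""
    n = min(len(predicted), len(detected))
    best, best_score = [], -1.0
    for cols in permutations(range(len(detected)), n):
        pairs = [(r, c) for r, c in zip(range(n), cols)
                 if iou(predicted[r], detected[c]) >= iou_min]
        score = sum(iou(predicted[r], detected[c]) for r, c in pairs)
        if score > best_score:
            best, best_score = pairs, score
    return best
```

Matched pairs carry the existing track ID forward; unmatched detections would start new tracks.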
To keep the prediction model stable and usable, the system cleans the data, removing detection errors, tracking anomalies, overly short trajectories, and temporally discontinuous tracks, and normalizes the coordinate data. A polynomial regression model is then fitted over a data window on the time series of each vehicle's position in the image to predict its position in the next frame, and the frame at which the vehicle leaves the ROI is estimated recursively; this estimate controls when video recording is suspended or resumed.
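The cleaning and normalization rules above can be sketched as follows (the track layout, length and gap thresholds, and normalization by frame size are illustrative assumptions, not the thesis's exact implementation):

```python
def clean_and_normalize(tracks, width, height, min_len=5, max_gap=2):
    """Keep only trajectories long enough and temporally continuous enough
    to fit a regression window, then scale pixel coordinates to [0, 1].
    tracks maps a track ID to a list of (frame, x, y) points."""
    kept = {}
    for tid, pts in tracks.items():
        pts = sorted(pts)                                   # order by frame index
        if len(pts) < min_len:
            continue                                        # trajectory too short
        if any(b[0] - a[0] > max_gap for a, b in zip(pts, pts[1:])):
            continue                                        # frames not continuous
        kept[tid] = [(f, x / width, y / height) for f, x, y in pts]
    return kept
```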
Experimental results show that the system can skip 88.2% of ineffective frames while the ROI is occluded, effectively cutting recording and storage costs; after the ROI clears, only 2.5% of frames go unrecorded, so most analytically valuable footage is preserved while the savings are realized. Overall, the proposed method serves as a mechanism for reducing data storage and processing volume in road video systems, applies to road image analysis tasks such as flooding monitoring and pothole detection, and suits edge computing environments with limited storage and computing resources.
Abstract (English)
In road surveillance systems, CCTV video is commonly used for applications such as road flooding detection, pothole inspection, and traffic condition analysis. However, in real-world deployments, vehicles frequently occlude key regions of interest for extended periods. As a result, a large number of video frames are recorded and stored despite being unsuitable for direct analysis, leading to inefficient storage utilization and increased computational load for subsequent image processing.
This study implements a video data storage and post-processing reduction system based on vehicle ROI exit-time prediction, using road CCTV footage as input. At the front end, a YOLO-based object detection model is employed to detect vehicles, while multi-object tracking is performed by integrating a Kalman filter with the Hungarian algorithm to generate temporally consistent vehicle trajectories. According to the camera viewpoint and road layout, an unobstructed region of interest (ROI) is manually defined, and ray casting is applied to determine whether vehicles are occluding the ROI in real time.
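The ray casting check above is a standard point-in-polygon test: a vehicle's bounding-box reference point is tested against the annotated ROI polygon each frame. A minimal sketch (names are illustrative):

```python
def point_in_polygon(pt, poly):
    """Ray casting test: shoot a horizontal ray to the right from pt and
    count how many polygon edges it crosses; an odd count means inside.
    poly is a list of (x, y) vertices in order."""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):                     # edge spans the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                          # crossing lies to the right
                inside = not inside
    return inside
```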
To ensure the stability and practicality of the prediction model, the system performs data cleaning to remove detection errors, tracking anomalies, short trajectories, and temporally discontinuous tracks, followed by coordinate normalization. Based on the time series of vehicle Y-axis positions in the image, a polynomial regression model is constructed using a sliding window approach to predict the vehicle position in the next frame. The predicted results are recursively propagated to estimate the frame at which a vehicle exits the ROI, which serves as the control criterion for suspending or resuming video recording.
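The sliding-window regression and recursive exit-frame estimate can be sketched as below (the window size, polynomial degree, and ROI-edge threshold are illustrative; the thesis evaluates several window/degree combinations):

```python
import numpy as np

def predict_exit_frame(ys, window=5, degree=2, exit_y=1.0, max_ahead=300):
    """Recursively extrapolate a vehicle's normalized y position: fit a
    polynomial over the last `window` samples, predict one frame ahead,
    feed the prediction back into the series, and repeat until y crosses
    the ROI edge at `exit_y`. Returns the number of frames ahead at which
    the exit is predicted, or None if none occurs within `max_ahead`."""
    ys = list(ys)
    for ahead in range(1, max_ahead + 1):
        recent = ys[-window:]
        t = np.arange(len(recent))
        coef = np.polyfit(t, recent, deg=min(degree, len(recent) - 1))
        y_next = float(np.polyval(coef, len(recent)))   # one frame ahead
        ys.append(y_next)                                # recursive propagation
        if y_next >= exit_y:
            return ahead
    return None
```

The returned frame count is what would drive the suspend/resume decision for recording.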
Experimental results show that the proposed system can skip 88.2% of ineffective frames during ROI occlusion, significantly reducing video recording and storage costs. After the ROI becomes clear, the proportion of missed frames is limited to 2.5%, demonstrating that most analytically valuable frames are preserved while effective storage savings are achieved. Overall, the proposed method can serve as a front-end data reduction and filtering mechanism for road surveillance systems and can be applied to road flooding monitoring, pothole detection, and other road image analysis tasks. The system is particularly suitable for deployment in edge computing environments with limited storage and computational resources.
Abstract (Third Language)
Table of Contents
Contents
Acknowledgements	i
Contents	vi
List of Figures	viii
List of Tables	ix
Chapter 1 Introduction	1
1.1	Research Background and Motivation	1
1.2	Research Objectives	1
Chapter 2 Literature Review	3
2.1	YOLO Object Detection Algorithm	3
2.2	Object Tracking Algorithms	4
2.3	Recognition Region (ROI)	6
2.4	Related Work on Polynomial Regression for Vehicle Trajectory Prediction	7
2.5	Related Work on Edge Image Computing and Low-Cost Approaches	8
Chapter 3 Methodology	10
3.1	Data Collection	11
3.1.1	CCTV and Video Capture	11
3.2	Object Detection	12
3.2.1	Video Frames	12
3.2.2	Detected Object Coordinates	13
3.3	Object Tracking	15
3.3.1	Kalman Prediction	15
3.3.2	Hungarian Matching	16
3.4	Recognition Region	17
3.4.1	ROI Annotation	17
3.4.2	ROI Coordinates	17
3.5	Data Processing	18
3.5.1	Data Cleaning	19
3.5.2	Data Normalization	20
3.6	Regression Model Training	21
3.7	Regression Prediction	23
3.7.1	ROI Exit Determination	23
3.7.2	Exit Frame Prediction	25
3.7.3	System Evaluation	27
Chapter 4 Empirical Validation	29
4.1	Experimental Objectives	29
4.2	Case Study	29
4.3	Object Detection	30
4.4	Object Tracking	31
4.4.1	Kalman Prediction	31
4.4.2	Hungarian Matching	32
4.5	Recognition Region	36
4.6	Data Processing	37
4.6.1	Data Cleaning	37
4.6.2	Data Normalization	39
4.7	Regression Model Training	40
4.7.1	Data Window and Degree Evaluation	40
4.7.2	Polynomial Regression Model Construction	44
4.8	Regression Model Evaluation	45
4.8.1	Practical Evaluation of Model Prediction Performance	45
4.8.2	Practical Evaluation of Data Storage and Processing Reduction	46
Chapter 5 Conclusion	48
5.1	Conclusions	48
5.2	Future Work	49
References	50
Appendix 1	52
 
List of Figures
Figure 2-1 Performance of different models on road detection [2]	4
Figure 2-2 Speed versus accuracy trade-off [3]	5
Figure 2-3 Ray casting illustration [10]	7
Figure 2-4 Vehicle trajectories and polynomial relationship [11]	8
Figure 3-1 Research architecture	10
Figure 3-2 CCTV view from the Taipei City Government website	12
Figure 3-3 Frame sequence	12
Figure 3-4 Common object classes	13
Figure 3-5 Bounding box illustration	14
Figure 3-6 Object tracking flowchart	15
Figure 3-7 ROI example	18
Figure 3-8 Data cleaning workflow	19
Figure 3-9 Regression model training workflow	21
Figure 3-10 Goodness-of-fit example	23
Figure 3-11 Ray casting workflow	24
Figure 3-12 Ray casting example	24
Figure 3-13 Recursive prediction	26
Figure 3-14 Example of video input to the prediction model	27
Figure 4-1 Object detection output	31
Figure 4-2 YOLO + object tracking validation	34
Figure 4-3 ROI coordinates	36
Figure 4-4 Box plot of object trajectories	37
Figure 4-5 Box plot of frame differences	38
Figure 4-6 Normalization	39
Figure 4-7 Window = 1 versus polynomial degree	41
Figure 4-8 Window = 2 versus polynomial degree	41
Figure 4-9 Window = 3 versus polynomial degree	42
Figure 4-10 Window = 4 versus polynomial degree	42
Figure 4-11 Window = 5 versus polynomial degree	43
Figure 4-12 Predicted versus actual trajectories	44
Figure 4-13 Box plot of absolute error in ROI exit time	45
Figure 4-14 Pie chart of ROI-occluded frames	46
Figure 4-15 Pie chart of frames to record after the ROI clears	47

List of Tables

Table 3-1 Bounding box parameters	14
Table 3-2 Object tracking output	17
Table 3-3 Data cleaning rules	20
Table 3-4 Ray casting algorithm	25
Table 3-5 Prediction accuracy evaluation method	27
Table 3-6 Data storage and processing reduction evaluation method	28
Table 4-1 Sample counts of common road objects in the COCO dataset	30
Table 4-2 Historical frames	32
Table 4-3 Kalman-predicted boxes	32
Table 4-4 Actual detection boxes	33
Table 4-5 Object tracking IDs	33
Table 4-6 Best polynomial degree per data window	43
References
[1]	J. Redmon, S. Divvala, R. Girshick and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 779-788, doi: 10.1109/CVPR.2016.91.
[2]	H. He, "Yolo Target Detection Algorithm in Road Scene Based on Computer Vision," 2022 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC), Dalian, China, 2022, pp. 1111-1114, doi: 10.1109/IPEC54454.2022.9777571.
[3]	A. Bewley, Z. Ge, L. Ott, F. Ramos and B. Upcroft, "Simple online and realtime tracking," 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 2016, pp. 3464-3468, doi: 10.1109/ICIP.2016.7533003.
[4]	N. Wojke, A. Bewley and D. Paulus, "Simple online and realtime tracking with a deep association metric," 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 2017, pp. 3645-3649, doi: 10.1109/ICIP.2017.8296962.
[5]	Y. Wu, J. Lim and M. -H. Yang, "Online Object Tracking: A Benchmark," 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 2013, pp. 2411-2418, doi: 10.1109/CVPR.2013.312.
[6]	A. W. M. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan and M. Shah, "Visual Tracking: An Experimental Survey," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 7, pp. 1442-1468, July 2014, doi: 10.1109/TPAMI.2013.230.
[7]	A. C. Cob-Parro, C. Losada-Gutiérrez, M. Marrón-Romera, A. Gardel-Vicente and I. Bravo-Muñoz, "Smart video surveillance system based on edge computing," Sensors, vol. 21, no. 9, p. 2958, 2021, doi: 10.3390/s21092958.
[8]	A. R. Kumar, B. Ravindran and A. Raghunathan, "Pack and detect: Fast object detection in videos using region-of-interest packing," in Proc. ACM India Joint Int. Conf. on Data Science and Management of Data (CODS-COMAD), New York, NY, USA, 2019, pp. 150–156, doi: 10.1145/3297001.3297020.
[9]	Z. Zhang, Z. Pei, Z. Tang and F. Gu, "RoiSeg: An effective moving object segmentation approach based on region-of-interest with unsupervised learning," Applied Sciences, vol. 12, no. 5, p. 2674, 2022, doi: 10.3390/app12052674.
[10]	E. Haines, "Point in polygon strategies," in Graphics Gems IV, P. S. Heckbert, Ed., San Diego, CA, USA: Academic Press, 1994, pp. 24–46, doi: 10.1016/B978-0-12-336156-1.50013-6.
[11]	M. Jin, M. Qu, Q. Gao, Z. Huang, T. Su and Z. Liang, "Advanced trajectory planning and control for autonomous vehicles with quintic polynomials," Sensors, vol. 24, no. 24, p. 7928, 2024, doi: 10.3390/s24247928.
[12]	F. Leon and M. Gavrilescu, "A review of tracking and trajectory prediction methods for autonomous driving," Mathematics, vol. 9, no. 6, p. 660, 2021, doi: 10.3390/math9060660.
[13]	W. Shi, J. Cao, Q. Zhang, Y. Li and L. Xu, "Edge Computing: Vision and Challenges," in IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637-646, Oct. 2016, doi: 10.1109/JIOT.2016.2579198.
[14]	M. Satyanarayanan, "The Emergence of Edge Computing," in Computer, vol. 50, no. 1, pp. 30-39, Jan. 2017, doi: 10.1109/MC.2017.9.
[15]	X. Chen, L. Jiao, W. Li and X. Fu, "Efficient Multi-User Computation Offloading for Mobile-Edge Cloud Computing," in IEEE/ACM Transactions on Networking, vol. 24, no. 5, pp. 2795-2808, October 2016, doi: 10.1109/TNET.2015.2487344. 
[16]	H. Li, K. Ota and M. Dong, "Learning IoT in Edge: Deep Learning for the Internet of Things with Edge Computing," in IEEE Network, vol. 32, no. 1, pp. 96-101, Jan.-Feb. 2018, doi: 10.1109/MNET.2018.1700202.
Full-Text Access Authorization
National Central Library
Agrees to grant the National Central Library a royalty-free license; the bibliographic record and electronic full text are made publicly available on the Internet immediately after the authorization form is submitted.
On campus
The printed thesis is made publicly available on campus immediately.
Agrees to authorize worldwide public access to the electronic full text.
The electronic thesis is made publicly available on campus immediately.
Off campus
Agrees to grant authorization to database vendors.
The electronic thesis is made publicly available off campus immediately.
