§ Browse Thesis Bibliographic Record
  
System ID	U0002-2808202420123900
DOI	10.6846/tku202400729
Title (Chinese)	加油站違規行為偵測
Title (English)	Gas Station Violation Detection
Title (third language)
University	Tamkang University
Department (Chinese)	資訊工程學系碩士在職專班
Department (English)	Department of Computer Science and Information Engineering
Foreign degree university
Foreign degree college
Foreign degree graduate school
Academic year	112
Semester	2
Publication year	113 (ROC calendar; 2024)
Author (Chinese)	王裕益
Author (English)	WANG YU-I
Student ID	711410026
Degree	Master's
Language	Traditional Chinese
Second language
Oral defense date	2024-07-10
Pages	52
Committee members	許哲銓 (tchsu@scu.edu.tw)
	林承賢 (cslin@mail.tku.edu.tw)
Advisor	陳建彰 (ccchen34@mail.tku.edu.tw)
Keywords (Chinese)	異常行為
加油站行為監測
影像辨識
監控系統
深度學習
Keywords (English)	Abnormal behavior
gas station behavior monitoring
image recognition
monitoring system
deep learning
Keywords (third language)
Subject classification
Abstract (Chinese)
The declining birthrate has driven labor costs up sharply, creating a need to make gas stations fully self-service and automated to reduce staffing costs. Motivated by this, we use PU-Learning and a One-Class SVM with partially labeled images of normal refueling behavior to classify video frames into positive and negative samples. For recognition, the YOLO algorithm identifies continuous behaviors in the video and is integrated with the fuel dispenser's IoT data; image-enhancement tools such as OpenCV are then applied so that violations can be detected as reliably as possible (for example, dispensing fuel into objects other than vehicles, such as plastic buckets or 5-liter PET bottles). This reduces the opportunity for fraud when corporate fleet drivers refuel at fully self-service stations, along with the public-safety risks that may follow (such as arson). From gas-station surveillance footage recorded between September 27, 2023 and January 30, 2024, approximately 1,600 refueling images were collected, of which 669 were sampled to train the YOLO model. The experimental results show that by linking actual station footage with the IoT data, 47 of the 60 violation images were detected, raising the violation detection rate above 78%.
Abstract (English)
Due to the declining birthrate, labor costs have increased significantly, and gas stations need to become fully self-service and automated to reduce staffing costs. This motivates our research. Using PU-Learning and a One-Class SVM, partially labeled images of normal refueling behavior are used to classify video frames into positive and negative samples. For recognition, the YOLO algorithm identifies continuous behaviors in the video and is combined with the fuel dispenser's IoT data; image-enhancement tools such as OpenCV are then applied to detect violations as reliably as possible, for example dispensing fuel into objects other than vehicles, such as plastic buckets or 5-liter plastic bottles. This reduces the opportunity for fraud when corporate fleet drivers refuel at fully self-service stations, as well as the public-safety hazards that may follow (such as arson). From gas-station surveillance images recorded between September 27, 2023 and January 30, 2024, approximately 1,600 refueling image files were collected, of which 669 were sampled to train the YOLO model. The experimental results show that by linking actual gas-station images with the IoT data, 47 violations were found among a total of 60 violation images, raising the violation detection rate to more than 78%.
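As a rough illustration of the pre-classification stage described in the abstract, the sketch below fits a One-Class SVM on feature vectors of normal (positive) refueling frames only, then flags unlabeled frames that fall outside the learned boundary as candidate violations. The feature vectors, parameter values, and data here are synthetic placeholders and are not taken from the thesis.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Placeholder features: each row stands in for a descriptor extracted
# from one surveillance frame of normal refueling behavior.
normal_features = rng.normal(loc=0.0, scale=1.0, size=(200, 16))

# Train on positive samples only; nu bounds the fraction of training
# points treated as outliers (the value here is illustrative).
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
clf.fit(normal_features)

# Unlabeled frames: the first five resemble normal behavior, the last
# five lie far from the normal cluster (candidate violations).
unlabeled = np.vstack([
    rng.normal(0.0, 1.0, size=(5, 16)),
    rng.normal(6.0, 1.0, size=(5, 16)),
])

# predict() returns +1 (inlier, likely normal refueling) or -1
# (outlier, a candidate violation to pass to the YOLO stage).
labels = clf.predict(unlabeled)
print(labels.tolist())
```

In the thesis's pipeline the outlier frames would then be checked by the YOLO model together with the dispenser's IoT records; here the classifier alone only narrows down which frames deserve that closer look.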
Abstract (third language)
Table of Contents
Contents
Chapter 1 Introduction	1
1.1 Research Background and Objectives	1
1.2 Thesis Organization	3
Chapter 2 Related Work	4
2.1 Image Object Detection	5
2.2 YOLO	6
2.3 Comparison of YOLO Versions	7
2.4 Main Reference Methods and Comparison	9
Chapter 3 Proposed Method and Procedure	14
3.1 Data Collection Process	14
3.2 Positive/Negative Sample Pre-classification	17
3.3 Proposed Classification of Refueling Violation Data	21
3.4 YOLO Model and Parameters	22
Chapter 4 Experimental Results and Comparison	31
4.1 Data Preprocessing	31
4.2 Classification Methods Combined with YOLO Model Recognition	42
4.3 Summary of the Proposed Violation Detection Method	43
Chapter 5 Conclusion	48
References	49



List of Tables
Table 1 Performance comparison of YOLO versions	7
Table 2 Pros and cons of PU-Learning vs. One-Class SVM	13
Table 3 Description of YOLO evaluation metrics	22
Table 4 Positive-sample YOLO parameters	23
Table 5 Negative-sample YOLO parameters	25
Table 6 Combined positive/negative-sample YOLO parameters	28
Table 7 Hardware and auxiliary software versions	31
Table 8 YOLO model comparison	31
Table 9 Comparison of brightness-augmented samples	37
Table 10 Comparison of the control group and brightened samples	38
Table 11 Comparison of the control group and brightness/contrast-enhanced augmented samples	40
Table 12 YOLO metric comparison for brightness and contrast enhancement	41
List of Figures
Figure 1 Examples: (a)-(b) normal refueling, (c)-(d) refueling violations	2
Figure 2 (a)-(b) normal car refueling behavior, (c)-(d) normal motorcycle refueling behavior	14
Figure 3 (e) actual refueling violation, (f) simulated motorcycle violation, (g) simulated violation with personnel only	15
Figure 4 Data collection flowchart	15
Figure 5 (a)-(b) Original images recorded by the Frigate NVR system	16
Figure 6 Image classification flowchart	18
Figure 7 Visualized classification of unprocessed sample images of normal refueling behavior	19
Figure 8 Visualized classification of unprocessed sample images of violation behavior	21
Figure 9 Annotation example for the custom positive-sample YOLO model	23
Figure 10 Positive samples: relation between confidence and precision	24
Figure 11 Positive samples: loss values for each metric	24
Figure 12 Annotation example for the custom negative-sample YOLO model	26
Figure 13 Negative samples: relation between confidence and precision	26
Figure 14 Negative samples: loss values for each metric	27
Figure 15 Annotation example for the custom positive/negative-sample YOLO model	28
Figure 16 Positive/negative samples: relation between confidence and precision	29
Figure 17 Positive/negative samples: loss values for each metric	29
Figure 18 Confusion matrix validating the positive/negative-sample YOLO model on control-group data	34
Figure 19 (a) without horizontal flip (b) with horizontal flip	35
Figure 20 Confusion matrix validating the positive/negative-sample YOLO model on horizontally flipped test data	36
Figure 21 (a) without brightening (b) with brightening	36
Figure 22 Confusion matrix validating the positive/negative-sample YOLO model on brightened test data	37
Figure 23 YOLO metrics after brightening and sample augmentation	38
Figure 24 (a) without contrast enhancement (b) with contrast enhancement	39
Figure 25 Confusion matrix validating the positive-sample YOLO model on contrast-enhanced test data	40
Figure 26 YOLO metrics after brightening, contrast enhancement, and sample augmentation	41
Figure 27 Combining the classification method with the previous YOLO model to recognize positive/negative samples	43
Figure 28 Independent negative samples (a) (b) (c)	44
Figure 29 Confusion matrix of the positive/negative-sample YOLO model	45
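The augmentation comparisons listed above (horizontal flip, brightening, contrast enhancement) can be sketched with simple pixel transforms. The sketch below uses NumPy equivalents of the OpenCV operations the thesis mentions; the alpha/beta values are illustrative assumptions, not the parameters actually used in the experiments.

```python
import numpy as np

def adjust(img, alpha=1.0, beta=0.0):
    """Linear brightness/contrast transform, equivalent to OpenCV's
    cv2.convertScaleAbs: out = clip(alpha * img + beta, 0, 255)."""
    out = img.astype(np.float32) * alpha + beta
    return np.clip(out, 0, 255).astype(np.uint8)

# Synthetic uniform gray frame standing in for a surveillance image.
frame = np.full((4, 4, 3), 100, dtype=np.uint8)

flipped  = frame[:, ::-1, :]                  # horizontal flip
brighter = adjust(frame, beta=40)             # brightness boost
enhanced = adjust(frame, alpha=1.4, beta=20)  # contrast + brightness

print(int(brighter[0, 0, 0]))  # 100 + 40 = 140
print(int(enhanced[0, 0, 0]))  # 1.4 * 100 + 20 = 160
```

Each variant keeps the original image shape, so the same bounding-box annotations can be reused for the flipped copy after mirroring the x coordinates, and unchanged for the brightness/contrast variants.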
References
[1] Elkan, C., & Noto, K. (2008, August). Learning classifiers from only positive and unlabeled data. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining.
[2] B. Du, C. Liu, W. Zhou, Z. Hou, and H. Xiong, "Catch Me If You Can: Detecting Pickpocket Suspects from Large-Scale Transit Records," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 87-96. 
[3] Lin, T. Y., Goyal, P., Girshick, R., He, K., & Dollár, P. (2017, August). Focal Loss for Dense Object Detection. arXiv preprint arXiv:1708.02002.
[4] Hu, J., Shen, L., Albanie, S., Sun, G., & Wu, E. (2017, September). Squeeze-and-Excitation Networks. arXiv preprint arXiv:1709.01507.
[5] Afsharirad, H., & Seyedin, S. A. (2019, July). Correction to: Salient object detection using the phase information and object model. In *Proceedings of the International Conference on Image Processing* (ICIP), Section 4.2: Phase-Based Detection Models.
[6] Dong, Z., Liu, Y., Feng, Y., Wang, Y., Xu, W., Chen, Y., & Tang, Q. (2022). Object Detection Method for High Resolution Remote Sensing Imagery Based on Convolutional Neural Networks with Optimal Object Anchor Scales. International Journal of Remote Sensing, 4.
[7] Y. Zhan, J. Yu, T. Yu, and D. Tao, "Multi-task Compositional Network for Visual Relationship Detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 
[8] O. C. Koyun, R. K. Keser, I. B. Akkaya, and B. U. Töreyin, "Focus-and-Detect: A small object detection framework for aerial images," *Signal Processing: Image Communication*, vol. 101, pp. 45-58, 2022.
[9] J. U. Kim, J. Kwon, H. G. Kim, and Y. M. Ro, "BBC Net: Bounding-Box Critic Network for Occlusion-Robust Object Detection," IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 6, pp. 1804-1817, Jun. 2019.
[10] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016, May). You only look once: unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.
[11] K. He, X. Zhang, S. Ren, and J. Sun, "Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1904-1916, Sep. 2015.
[12] Lin, T. Y., Dollar, P., Girshick, R., He, K., Hariharan, B., & Belongie, S. (2017). Feature Pyramid Networks for Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
[13] Liu, S., Qi, L., Qin, H., Shi, J., & Jia, J. (2018). Path Aggregation Network (PANet) for Instance Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[14] Wu, J., & Taghavi, Z. (2021). Positive-Unlabeled Recommendation with Generative. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining.
[15] D. M. J. Tax and R. P. W. Duin, "Support Vector Data Description," *Pattern Recognition Group, Faculty of Applied Sciences, Delft University of Technology*, The Netherlands, pp. 10-25, Jan. 2004, Editor: D. Fisher.
[16] Frigate NVR: Detecting specific images such as vehicles and personnel, and recording events. Retrieved from https://github.com/blakeblackshear/frigate.
[17] R. Kiryo, G. Niu, L. Du, and H. Kashima, "Positive-Unlabeled Learning with Non-Negative Risk Estimator," in Advances in Neural Information Processing Systems, 2017, vol. 30, pp. 709-719.
[18] Duan, Y., Lu, J., Shen, Y., & Yang, Q. (2020). Uncertainty-aware Positive-Unlabeled Learning. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
[19] W. Li, Y. Chen, and X. Zhang, "Positive Unlabeled Learning for Image Classification," Pattern Recognition, vol. 76, pp. 1-9, 2018.
[20] Denis, F., Gilleron, R., & Tommasi, M. (2019). Classification with Noisy Labels by Importance Reweighting and Balancing on Positive and Unlabeled Data. In International Conference on Learning Representations (ICLR).
(1)-(15) J. Bekker and J. Davis, "Learning From Positive and Unlabeled Data: A Survey," in Advances in Neural Information Processing Systems, vol. 31, pp. 1-15, 2018.
(16)-(21) Tax, D. M. J., & Duin, R. P. W. (1999). Support vector domain description. Pattern Recognition Letters, 20(11-13), 1191-1199.
(22)-(29) Schölkopf, B., Platt, J. C., Shawe-Taylor, J., Smola, A. J., & Williamson, R. C. (2001). Estimating the support of a high-dimensional distribution. Neural Computation, 13(7), 1443-1471.
Full-Text Usage Authorization
National Central Library
Agrees to grant the National Central Library a royalty-free license; the bibliographic record and electronic full text are made publicly available on the Internet immediately after the authorization form is submitted
On campus
The print copy of the thesis is available on campus immediately
Agrees to authorize worldwide public access to the electronic full text
The electronic thesis is available on campus immediately
Off campus
Agrees to grant authorization to database vendors
The electronic thesis is available off campus immediately

For any questions, please contact us!
Library Digital Information Section, (02) 2621-5656 ext. 2487, or by email