System ID | U0002-0709202312574400 |
---|---|
Title (Chinese) | 監控影片幀偽造偵測與定位 |
Title (English) | Inter-frame forgery detection and localization for surveillance videos |
Title (third language) | |
University | Tamkang University (淡江大學) |
Department (Chinese) | 資訊工程學系碩士班 |
Department (English) | Department of Computer Science and Information Engineering |
Foreign degree school | |
Foreign degree college | |
Foreign degree institute | |
Academic year | 111 |
Semester | 2 |
Publication year | 112 |
Author (Chinese) | 鄭仲婷 |
Author (English) | Chung-Ting Cheng |
Student ID | 611410035 |
Degree | Master's |
Language | Traditional Chinese |
Second language | |
Defense date | 2023-07-17 |
Number of pages | 54 |
Committee members | 蔡耀弘 (tyh@hcu.edu.tw); 許哲銓 (tchsu@scu.edu.tw) |
Advisor | 林承賢 (cslin@mail.tku.edu.tw) |
Keywords (Chinese) | 自動編碼器; 長短期記憶網路; 視訊偽造偵測 |
Keywords (English) | Autoencoder; LSTM (Long Short-Term Memory network); Video forgery detection |
Keywords (third language) | |
Subject classification | |
Abstract (Chinese) |
本研究的主題是影片竄改偵測,旨在檢測影片是否被竄改。隨著科技的快速發展,影像技術變得越來越成熟,同時也有越來越多的技術工具被開發出來,可以對影像進行竄改。這使得使用者可以根據不同的目的任意運用這些工具對一段原始影像進行後續處理。然而,這種廣泛的使用也帶來了一些問題,例如在法院證據中竄改監視錄影,或者在政治選舉中製作虛假的影片來左右民意。在本研究中,我們採用長短期記憶網路(LSTM)與自動編碼器(Autoencoder)兩種深度學習模型建立影片竄改偵測模型,利用LSTM和自動編碼器對影片的特徵進行學習與提取,再透過比較原始影片與竄改影片之間的特徵差異來偵測竄改行為。此模型有助於識別與防止影片竄改,從而維護影像資料的真實性與可信度。 |
Abstract (English) |
The subject of this research is video tampering detection, which aims to determine whether a video has been tampered with. With the rapid development of technology, imaging techniques have become increasingly mature, and more and more tools have been developed that can be used to tamper with video. Users can apply these tools to post-process an original video for a variety of purposes. However, this widespread availability has also created problems, such as tampered surveillance footage submitted as court evidence, or fake videos created to sway public opinion during political elections. In this study, we adopt two deep learning models, a long short-term memory (LSTM) network and an autoencoder, to build a video tampering detection model. The LSTM and autoencoder learn and extract features from the video, and tampering is detected by comparing the feature differences between the original video and the tampered video. Such a model can help identify and prevent video tampering, thereby preserving the authenticity and trustworthiness of video material. |
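To make the detection idea in the abstract concrete, the sketch below shows one way an LSTM-Autoencoder can flag tampered segments by their reconstruction error. It is a minimal illustration only, assuming TensorFlow/Keras, a 16-frame window, 128-dimensional per-frame feature vectors, and a mean-plus-three-standard-deviations threshold; none of these values, layer sizes, or function names are taken from the thesis.

```python
# Minimal sketch (not the thesis's implementation): an LSTM-Autoencoder is
# trained on windows of per-frame features from untampered video, and windows
# with unusually high reconstruction error are flagged as possibly tampered.
import numpy as np
from tensorflow.keras import layers, models

WINDOW, FEAT_DIM = 16, 128  # frames per window and features per frame (assumed values)

def build_lstm_autoencoder():
    inputs = layers.Input(shape=(WINDOW, FEAT_DIM))
    encoded = layers.LSTM(64)(inputs)                        # encoder: sequence -> latent vector
    decoded = layers.RepeatVector(WINDOW)(encoded)           # decoder: latent vector -> sequence
    decoded = layers.LSTM(64, return_sequences=True)(decoded)
    outputs = layers.TimeDistributed(layers.Dense(FEAT_DIM))(decoded)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

def reconstruction_errors(model, windows):
    # Mean squared reconstruction error per window.
    recon = model.predict(windows, verbose=0)
    return np.mean((windows - recon) ** 2, axis=(1, 2))

if __name__ == "__main__":
    # Random arrays stand in for per-frame feature vectors extracted from real video.
    train = np.random.rand(200, WINDOW, FEAT_DIM).astype("float32")  # untampered video
    test = np.random.rand(50, WINDOW, FEAT_DIM).astype("float32")    # video under test

    model = build_lstm_autoencoder()
    model.fit(train, train, epochs=5, batch_size=16, verbose=0)

    train_err = reconstruction_errors(model, train)
    threshold = train_err.mean() + 3 * train_err.std()               # illustrative threshold
    errors = reconstruction_errors(model, test)
    print("Windows flagged as possibly tampered:", np.where(errors > threshold)[0])
```

In this framing the autoencoder only ever sees untampered sequences during training, so inserted, deleted, duplicated, or shuffled frames should break the learned temporal pattern and push the reconstruction error above the threshold.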
Abstract (third language) | |
Table of Contents |
Chapter 1 Introduction 1
  1.1 Research Background and Purpose 1
  1.2 Research Purpose 5
Chapter 2 Literature Review 6
  2.1 Related Research on Video Forgery 6
  2.2 Long Short-Term Memory (LSTM) 10
  2.3 Autoencoder 12
Chapter 3 Methodology 13
  3.1 System Flow Architecture 13
  3.2 Dataset 14
  3.3 LSTM-Autoencoder Model Training 16
    3.3.1 LSTM Model Training 16
    3.3.2 LSTM-Autoencoder Model Training 17
  3.4 Tampering Detection 18
Chapter 4 Experimental Results 19
  4.1 LSTM Experimental Results 19
    4.1.1 LSTM - Insertion forgery detection 19
    4.1.2 LSTM - Deletion forgery detection 22
    4.1.3 LSTM - Duplication forgery detection 25
    4.1.4 LSTM - Shuffling forgery detection 28
  4.2 LSTM-Autoencoder Experimental Results 31
    4.2.1 LSTM-Autoencoder - Insertion forgery detection 31
    4.2.2 LSTM-Autoencoder - Deletion forgery detection 34
    4.2.3 LSTM-Autoencoder - Duplication forgery detection 37
    4.2.4 LSTM-Autoencoder - Shuffling forgery detection 40
  4.3 Performance Evaluation 43
Chapter 5 Conclusion 47
References 48

List of Figures
Figure 1. Classification of video forgery detection 3
Figure 2. Classification of spatial detection methods 3
Figure 3. Classification of temporal detection methods 4
Figure 4. Illustration of temporal tampering methods 5
Figure 5. LSTM RNN structure [41] 10
Figure 6. Detailed LSTM structure [41] 11
Figure 7. Illustration of an autoencoder 12
Figure 8. System flowchart 13
Figure 9. LSTM - Insertion: mean-based detection result for View_003 in Time_13-57 of S1.L1 walking 20
Figure 10. LSTM - Insertion: standard-deviation-based detection result for View_003 in Time_13-57 of S1.L1 walking 21
Figure 11. LSTM - Insertion: mean-based detection result for View_004 in Time_12-34 of S2.L1 walking 22
Figure 12. LSTM - Insertion: standard-deviation-based detection result for View_004 in Time_12-34 of S2.L1 walking 22
Figure 13. LSTM - Deletion: mean-based detection result for View_003 in Time_13-57 of S1.L1 walking 23
Figure 14. LSTM - Deletion: standard-deviation-based detection result for View_003 in Time_13-57 of S1.L1 walking 24
Figure 15. LSTM - Deletion: mean-based detection result for View_001 in Time_12-34 of S2.L1 walking 25
Figure 16. LSTM - Deletion: standard-deviation-based detection result for View_001 in Time_12-34 of S2.L1 walking 25
Figure 17. LSTM - Duplication: mean-based detection result for View_004 in Time_13-57 of S1.L1 walking 26
Figure 18. LSTM - Duplication: standard-deviation-based detection result for View_004 in Time_13-57 of S1.L1 walking 27
Figure 19. LSTM - Duplication: mean-based detection result for View_003 in Time_12-34 of S2.L1 walking 27
Figure 20. LSTM - Duplication: standard-deviation-based detection result for View_003 in Time_12-34 of S2.L1 walking 28
Figure 21. LSTM - Shuffling: mean-based detection result for View_004 in Time_13-57 of S1.L1 walking 29
Figure 22. LSTM - Shuffling: standard-deviation-based detection result for View_004 in Time_13-57 of S1.L1 walking 29
Figure 23. LSTM - Shuffling: mean-based detection result for View_003 in Time_12-34 of S2.L1 walking 30
Figure 24. LSTM - Shuffling: standard-deviation-based detection result for View_003 in Time_12-34 of S2.L1 walking 30
Figure 25. LSTM-Autoencoder - Insertion: mean-based detection result for View_003 in Time_13-57 of S1.L1 walking 32
Figure 26. LSTM-Autoencoder - Insertion: standard-deviation-based detection result for View_003 in Time_13-57 of S1.L1 walking 32
Figure 27. LSTM-Autoencoder - Insertion: mean-based detection result for View_004 in Time_12-34 of S2.L1 walking 33
Figure 28. LSTM-Autoencoder - Insertion: standard-deviation-based detection result for View_004 in Time_12-34 of S2.L1 walking 33
Figure 29. LSTM-Autoencoder - Deletion: mean-based detection result for View_003 in Time_13-57 of S1.L1 walking 35
Figure 30. LSTM-Autoencoder - Deletion: standard-deviation-based detection result for View_003 in Time_13-57 of S1.L1 walking 35
Figure 31. LSTM-Autoencoder - Deletion: mean-based detection result for View_001 in Time_12-34 of S2.L1 walking 36
Figure 32. LSTM-Autoencoder - Deletion: standard-deviation-based detection result for View_001 in Time_12-34 of S2.L1 walking 36
Figure 33. LSTM-Autoencoder - Duplication: mean-based detection result for View_004 in Time_13-57 of S1.L1 walking 38
Figure 34. LSTM-Autoencoder - Duplication: standard-deviation-based detection result for View_004 in Time_13-57 of S1.L1 walking 38
Figure 35. LSTM-Autoencoder - Duplication: mean-based detection result for View_003 in Time_12-34 of S2.L1 walking 39
Figure 36. LSTM-Autoencoder - Duplication: standard-deviation-based detection result for View_003 in Time_12-34 of S2.L1 walking 39
Figure 37. LSTM-Autoencoder - Shuffling: mean-based detection result for View_004 in Time_13-57 of S1.L1 walking 41
Figure 38. LSTM-Autoencoder - Shuffling: standard-deviation-based detection result for View_004 in Time_13-57 of S1.L1 walking 41
Figure 39. LSTM-Autoencoder - Shuffling: mean-based detection result for View_003 in Time_12-34 of S2.L1 walking 42
Figure 40. LSTM-Autoencoder - Shuffling: standard-deviation-based detection result for View_003 in Time_12-34 of S2.L1 walking 42

List of Tables
Table 1. Dataset information 14
Table 2. Tampering information for S1_L1 - Time_13-57 15
Table 3. Tampering information for S2_L1 - Time_12-34 15
Table 4. LSTM model 16
Table 5. LSTM-Autoencoder model 17
Table 6. LSTM insertion forgery information 20
Table 7. Confusion matrix for LSTM - Insertion, View_003 in Time_13-57 of S1.L1 walking 21
Table 8. Confusion matrix for LSTM - Insertion, View_004 in Time_12-34 of S2.L1 walking 22
Table 9. LSTM deletion forgery information 23
Table 10. Confusion matrix for LSTM - Deletion, View_003 in Time_13-57 of S1.L1 walking 24
Table 11. Confusion matrix for LSTM - Deletion, View_001 in Time_12-34 of S2.L1 walking 25
Table 12. LSTM duplication forgery information 26
Table 13. Confusion matrix for LSTM - Duplication, View_004 in Time_13-57 of S1.L1 walking 27
Table 14. Confusion matrix for LSTM - Duplication, View_003 in Time_12-34 of S2.L1 walking 28
Table 15. LSTM shuffling forgery information 28
Table 16. Confusion matrix for LSTM - Shuffling, View_004 in Time_13-57 of S1.L1 walking 29
Table 17. Confusion matrix for LSTM - Shuffling, View_003 in Time_12-34 of S2.L1 walking 30
Table 18. LSTM-Autoencoder insertion forgery information 31
Table 19. Confusion matrix for LSTM-Autoencoder - Insertion, View_003 in Time_13-57 of S1.L1 walking 32
Table 20. Confusion matrix for LSTM-Autoencoder - Insertion, View_004 in Time_12-34 of S2.L1 walking 34
Table 21. LSTM-Autoencoder deletion forgery information 34
Table 22. Confusion matrix for LSTM-Autoencoder - Deletion, View_003 in Time_13-57 of S1.L1 walking 35
Table 23. Confusion matrix for LSTM-Autoencoder - Deletion, View_001 in Time_12-34 of S2.L1 walking 37
Table 24. LSTM-Autoencoder duplication forgery information 37
Table 25. Confusion matrix for LSTM-Autoencoder - Duplication, View_004 in Time_13-57 of S1.L1 walking 38
Table 26. Confusion matrix for LSTM-Autoencoder - Duplication, View_003 in Time_12-34 of S2.L1 walking 39
Table 27. LSTM-Autoencoder shuffling forgery information 40
Table 28. Confusion matrix for LSTM-Autoencoder - Shuffling, View_004 in Time_13-57 of S1.L1 walking 41
Table 29. Confusion matrix for LSTM-Autoencoder - Shuffling, View_003 in Time_12-34 of S2.L1 walking 42
Table 30. LSTM model performance evaluation 44
Table 31. LSTM-Autoencoder model performance evaluation 45 |
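The experimental chapters listed above report a confusion matrix for each forgery type (insertion, deletion, duplication, shuffling) and an overall performance evaluation. As a reference for how such confusion matrices are commonly summarized, here is a small sketch computing accuracy, precision, recall, and F1 from frame-level labels; the example arrays and function names are illustrative assumptions and are not data or code from the thesis.

```python
# Minimal sketch: summarizing a frame-level confusion matrix into the usual
# metrics. The example labels below are made up, not results from the thesis.
import numpy as np

def confusion_counts(y_true, y_pred):
    # Positive class = "tampered frame".
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return tp, tn, fp, fn

def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

if __name__ == "__main__":
    y_true = np.array([0, 0, 1, 1, 1, 0, 0, 1])  # ground-truth tampered frames
    y_pred = np.array([0, 1, 1, 1, 0, 0, 0, 1])  # detector output
    print(metrics(*confusion_counts(y_true, y_pred)))
```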
References |
[1] C.-C. Hsu, T.-Y. Hung, C.-W. Lin and C.-T. Hsu, “Video forgery detection using correlation of noise residue,” 2008 IEEE 10th Workshop on Multimedia Signal Processing, 2008.
[2] A. W. Abdul Wahab, M. A. Bagiwa, M. Y. I. Idris, S. Khan and Z. Razak, “Passive video forgery detection techniques: A survey,” 2014 10th International Conference on Information Assurance and Security, 2014.
[3] Q. Dong, G. Yang and N. Zhu, “A MCEA based passive forensics scheme for detecting frame-based video tampering,” Digital Investigation, vol. 9, no. 2, 2012.
[4] X. Jiang, W. Wang, T. Sun, Y. Q. Shi and S. Wang, “Detection of Double Compression in MPEG-4 Videos Based on Markov Statistics,” IEEE Signal Processing Letters, vol. 20, no. 5, 2013.
[5] J. Chao, X. Jiang and T. Sun, “A novel video inter-frame forgery model detection scheme based on optical flow consistency,” Proceedings of the 11th International Conference on Digital Forensics and Watermarking, 2012.
[6] L. Li, X. Wang, W. Zhang, G. Yang and G. Hu, “Detecting Removed Object from Video with Stationary Background,” Digital Forensics and Watermarking, pp. 242-252, 2012.
[7] T. Sun, W. Wang and X. Jiang, “Exposing video forgeries by detecting MPEG double compression,” 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012.
[8] S.-Y. Liao and T.-Q. Huang, “Video copy-move forgery detection and localization based on Tamura texture features,” 2013 6th International Congress on Image and Signal Processing (CISP), 2013.
[9] A. Subramanyam and S. Emmanuel, “Pixel estimation based video forgery detection,” 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013.
[10] L. Su, T. Huang and J. Yang, “A video forgery detection algorithm based on compressive sensing,” Multimedia Tools and Applications, vol. 74, no. 17, 2014.
[11] R. D. Singh and N. Aggarwal, “Optical flow and prediction residual based hybrid forensic system for inter-frame tampering detection,” Journal of Circuits, Systems and Computers, vol. 26, no. 7, 2016.
[12] Y. Liu and T. Huang, “Exposing video inter-frame forgery by Zernike opponent chromaticity moments and coarseness analysis,” Multimedia Systems, vol. 23, no. 2, 2015.
[13] S. Kingra, N. Aggarwal and R. D. Singh, “Inter-frame forgery detection in H.264 videos using motion and brightness gradients,” Multimedia Tools and Applications, vol. 76, no. 3, pp. 1-20, 2017.
[14] J. A. Aghamaleki and A. Behrad, “Malicious inter-frame video tampering detection in MPEG videos using time and spatial domain analysis of quantization effects,” Multimedia Tools and Applications, vol. 76, no. 20, 2017.
[15] M. R. Oraibi and A. M. Radhi, “Enhancement Digital Forensic Approach for Inter-Frame Video Forgery Detection Using a Deep Learning Technique,” Iraqi Journal of Science, vol. 63, no. 2, pp. 2686-2701, 2022.
[16] Shehnaz and M. Kaur, “Texture Feature Analysis for Inter-Frame Video Tampering Detection,” Proceedings of International Joint Conference on Advances in Computational Intelligence, pp. 305-318, 2022.
[17] J. Bakas, B. A. Kumar and R. Naskar, “MPEG Double Compression Based Intra-Frame Video Forgery Detection using CNN,” 2018 International Conference on Information Technology (ICIT), 2018.
[18] J. A. Aghamaleki and A. Behrad, “Inter-frame video forgery detection and localization using intrinsic effects of double compression on quantization errors of video coding,” Signal Processing: Image Communication, vol. 47, pp. 289-302, 2016.
[19] S. Bian, W. Luo and J. Huang, “Detecting video frame-rate up-conversion based on periodic properties of inter-frame similarity,” Multimedia Tools and Applications, vol. 72, pp. 437-451, 2013.
[20] J. Bakas, R. Naskar and R. Dixit, “Detection and localization of inter-frame video forgeries based on inconsistency in correlation distribution between Haralick coded frames,” Multimedia Tools and Applications, vol. 78, pp. 4905-4935, 2019.
[21] J. Guo, W. Liu, S. Xin, Z. Zhao and B. Zhang, “A Frame Level Feature Aggregation Method for Video Target Detection,” 2021 33rd Chinese Control and Decision Conference (CCDC), 2021.
[22] P. Keerthana, E. Nikita, R. Lakkshmi and R. S. Devi, “Tampering Detection in Video Inter-Frame using Watermarking,” International Journal of Research in Engineering, Science and Management, vol. 2, no. 3, pp. 251-254, 2019.
[23] S. M. Fadl, Q. Han and Q. Li, “Exposing video inter-frame forgery via histogram of oriented gradients and motion energy image,” Multidimensional Systems and Signal Processing, vol. 31, 2020.
[24] Q. Li, R. Wang and D. Xu, “An inter-frame forgery detection algorithm for surveillance video,” Information (Switzerland), vol. 9, pp. 301-316, 2018.
[25] M. Kobayashi, T. Okabe and Y. Sato, “Detecting Forgery From Static-Scene Video Based on Inconsistency in Noise Level Functions,” IEEE Transactions on Information Forensics and Security, 2010.
[26] X. Kang, J. Liu, H. Liu and Z. J. Wang, “Forensics and counter anti-forensics of video inter-frame forgery,” Multimedia Tools and Applications, vol. 75, pp. 13833-13853, 2016.
[27] L. Su, H. Luo and S. Wang, “A Novel Forgery Detection Algorithm for Video Foreground Removal,” IEEE Access, vol. 7, pp. 109719-109728, 2019.
[28] D. D’Avino, D. Cozzolino, G. Poggi and L. Verdoliva, “Autoencoder with recurrent neural networks for video forgery detection,” arXiv:1708.08754, 2017.
[29] K. Xu, T. Sun and X. Jiang, “Video Anomaly Detection and Localization Based on an Adaptive Intra-Frame Classification Network,” IEEE Transactions on Multimedia, vol. 22, no. 2, pp. 394-406, 2020.
[30] R. C. Pandey, S. K. Singh and K. K. Shukla, “Passive copy-move forgery detection in videos,” 2014 International Conference on Computer and Communication Technology (ICCCT), pp. 301-306, 2014.
[31] Q. Wang, Z. Li, Z. Zhang and Q. Ma, “Video Inter-Frame Forgery Identification Based on Consistency of Correlation Coefficients of Gray Values,” Journal of Computer and Communications, vol. 2, pp. 51-57, 2014.
[32] J. Xu, Y. Liang, X. Tian and A. Xie, “A novel video inter-frame forgery detection method based on histogram intersection,” 2016 IEEE/CIC International Conference on Communications in China (ICCC), pp. 1-6, 2016.
[33] C. C. Huang, Y. Zhang and V. L. L. Thing, “Inter-frame video forgery detection based on multi-level subtraction approach for realistic video forensic applications,” 2017 IEEE 2nd International Conference on Signal and Image Processing (ICSIP), pp. 20-24, 2017.
[34] S. M. Fadl, Q. Han and Q. Li, “Inter-Frame Forgery Detection Based on Differential Energy of Residue,” IET Image Processing, vol. 13, 2019.
[35] D.-N. Zhao, R.-K. Wang and Z.-M. Lu, “Inter-frame passive-blind forgery detection for video shot based on similarity analysis,” Multimedia Tools and Applications, vol. 77, pp. 25389-25408, 2018.
[36] R. Parmani, S. Butala, A. Khanvilkar, S. Pawar and N. Pulgam, “Inter Frame Video Forgery Detection using Normalized Multi Scale One Level Subtraction,” Proceedings of International Conference on Communication and Information Processing (ICCIP) 2019, 2019.
[37] M. Aloraini, M. Sharifzadeh and D. Schonfeld, “Sequential and Patch Analyses for Object Removal Video Forgery Detection and Localization,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 3, 2021.
[38] G.-Y. Kang, Y.-P. Feng, R.-K. Wang and Z.-M. Lu, “Edge and Feature Points Based Video Intra-frame Passive-blind Copy-paste Forgery Detection,” Journal of Network Intelligence, vol. 6, no. 3, 2021.
[39] S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Computation, vol. 9, no. 8, 1997.
[40] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink and J. Schmidhuber, “LSTM: A Search Space Odyssey,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 10, pp. 2222-2232, 2016.
[41] F. A. Gers and J. Schmidhuber, “Recurrent nets that time and count,” Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, pp. 189-194, 2000.
[42] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504-507, 2006.
[43] “Xiph.org,” https://media.xiph.org/video/derf/.
[44] “PETS 2009 Benchmark Data,” https://cs.binghamton.edu/~mrldata/pets2009.
[45] “akiyo,” http://meru.cecs.missouri.edu/free_download/videos/. |
Full-text access permission | |