Record ID | U0002-2206202020390200 |
---|---|
DOI | 10.6846/TKU.2020.00637 |
Title (Chinese) | 可逆式資訊隱藏之研究 |
Title (English) | A Study of Reversible Data Hiding |
Title (third language) | |
University | 淡江大學 (Tamkang University) |
Department (Chinese) | 電機工程學系博士班 |
Department (English) | Department of Electrical and Computer Engineering |
Foreign school name | |
Foreign college name | |
Foreign institute name | |
Academic year | 108 |
Semester | 2 |
Publication year | 109 (2020) |
Author (Chinese) | 葉呈祥 |
Author (English) | Cheng-Hsiang Yeh |
Student ID | 800440025 |
Degree | Ph.D. |
Language | English |
Second language | |
Defense date | 2020-06-12 |
Pages | 100 |
Committee | Advisor: 易志孝; Co-advisor: 洪國銘; Members: 林正雄, 林慧珍, 許榮隆, 許志旭 |
Keywords (Chinese) | 可逆式資訊隱藏; 非線性回歸分析; 多方向梯度預測; 模式選擇; 最小平方法 |
Keywords (English) | Reversible Data Hiding; Nonlinear Regression Analysis; Multi-directional Gradient Prediction; Mode Selection; Least Squares Method |
Keywords (third language) | |
Subject classification | |
Abstract (Chinese) |
Reversible data hiding techniques are mainly applied in important scenarios that do not tolerate any distortion, such as military remote-sensing imagery, diagnostic medical imaging, fine-art protection, and any media involved in legal matters. After the hidden message is extracted, such a system can still recover the original image completely. In a reversible data hiding system, a more accurate prediction method produces a more concentrated histogram, which minimizes shifting and thus reduces distortion. This dissertation proposes a new reversible data hiding system comprising four prediction methods. All of them first divide the pixels into groups, take one group as the missing pixels, and generate an image to be predicted. The first and second methods are based on the image inpainting algorithms TV and Fast: the image to be predicted is repaired by the existing TV or Fast inpainting method to obtain the predicted image. The third method is based on the least squares method: with the pixel to be predicted at the center, a data-collection range is set, the data are gathered by a cross-type collection scheme, and the prediction is computed by least squares. The fourth is a prediction method based on multi-directional gradients and mode selection: a rough predicted image is generated by averaging, and this image is used to compute gradient information in four directions and the weights of the eight neighboring pixels. Then edges are produced by an edge-detection method and the edge directions are computed. Finally, the edge directions and the weight information are used to select the most suitable prediction mode. In the embedding stage, this dissertation proposes an embedding method based on nonlinear regression analysis and embedding selection. First, natural images are collected and multiple quadratic curve functions are estimated by nonlinear regression analysis. These quadratic functions classify embedding positions into embedding and non-embedding regions, reducing unnecessary shifting and improving the quality of the embedded image. Finally, this dissertation also proposes an automatic embedding-range decision method, which produces the optimal embedding range before the data are embedded. The method sorts regions by their local standard deviation and embeds into regions with smaller standard deviation first, improving the quality of the embedded image. To evaluate the proposed reversible data hiding techniques, we compare them with existing methods on different images. The results show that the proposed schemes can embed more data with less distortion. |
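The least-squares prediction step described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the dissertation's exact formulation: the function name `lsp_predict`, the cross-shaped window, and the planar model are all hypothetical choices made for the example.

```python
import numpy as np

def lsp_predict(img, y, x, radius=4):
    """Predict img[y, x] from a cross-shaped neighborhood (sketch).

    Hypothetical illustration of least-squares prediction: sample the
    four arms of a cross around the target pixel, fit the planar model
    v = a*dy + b*dx + c by least squares, and evaluate it at the
    center (dy = dx = 0), i.e. return the fitted constant c.
    """
    h, w = img.shape
    rows, vals = [], []
    for d in range(1, radius + 1):
        for dy, dx in ((-d, 0), (d, 0), (0, -d), (0, d)):  # cross arms
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                rows.append([dy, dx, 1.0])
                vals.append(float(img[ny, nx]))
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(vals), rcond=None)
    return float(coef[2])  # the fitted plane evaluated at the center
```

On a perfectly planar region the fit is exact, so the prediction error is zero; the more accurate the predictor, the more the error histogram concentrates around zero, which is exactly the property the dissertation exploits.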
Abstract (English) |
Reversible data hiding (RDH) techniques are mainly used in important situations that do not allow any distortion, such as military remote sensing imagery, diagnostic medical imaging, fine art protection, and any media involving legal issues. An RDH system can recover the original image completely after extracting the hidden information. In an RDH system, an accurate prediction method generates a more concentrated histogram, which minimizes shifting and thus reduces distortion. In this dissertation, we propose a new RDH system that consists of four prediction methods. All of these methods first divide the pixels into groups, take one of the groups as the missing pixels, and generate a preparing predicted image. The first and second methods are based on the image inpainting algorithms TV and Fast: the preparing predicted image is repaired by the existing TV or Fast inpainting method to obtain the predicted image. The third method is based on least squares prediction (LSP): a data collection range centered at the pixel to be predicted is set, the data are gathered with a cross-type collection scheme, and the prediction is made by the least squares method. The fourth method is based on multi-directional gradient and mode selection (MGMS): a roughly predicted image is generated by averaging, and this image is used to calculate gradient information in four directions and the weights of the eight neighboring pixels. Next, an edge image is generated by an edge detection method, and the edge-direction image is computed. Finally, the edge-direction image is used to select the most appropriate prediction mode. In the embedding stage, this dissertation proposes an embedding method based on nonlinear regression analysis and embedding selection.
First, natural images are collected to estimate multiple quadratic curve functions by nonlinear regression analysis. These quadratic functions classify the embedding positions into embedding and non-embedding regions, reducing unnecessary shifting and thus improving the quality of the embedded image. Finally, this dissertation also proposes an automatic embedding-range decision method, which determines the optimal embedding range before the data are embedded. The method sorts regions by local standard deviation and embeds into the regions with smaller standard deviation first, improving the quality of the embedded image. To evaluate the effectiveness of the proposed reversible data hiding techniques, we compare them with existing methods on different images. The results show that the proposed scheme can embed more data with less distortion. |
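The shifting behavior described above can be illustrated with a minimal single-peak histogram-shifting sketch on a list of prediction errors. This is a simplified toy (one peak bin at 0, only positive errors shifted), not the dissertation's full two-sided, regression-guided scheme; the names `hs_embed` and `hs_extract` are hypothetical.

```python
def hs_embed(errors, bits):
    """Single-peak histogram-shifting embedding on prediction errors.

    Errors in the peak bin (value 0) each carry one payload bit
    (0 stays 0, 1 moves to bin 1); every positive error is shifted
    up by one so that bin 1 is free before embedding.
    """
    out = list(errors)
    it = iter(bits)
    used = 0
    for i, e in enumerate(errors):
        if e > 0:
            out[i] = e + 1               # shift right to free bin 1
        elif e == 0:
            b = next(it, None)
            if b is not None:
                out[i] = b               # embed one bit into the peak bin
                used += 1
    return out, used

def hs_extract(marked):
    """Invert hs_embed: recover the payload bits and original errors."""
    bits, orig = [], list(marked)
    for i, e in enumerate(marked):
        if e in (0, 1):
            bits.append(e)               # bit read from the peak bins
            orig[i] = 0                  # restore the peak value
        elif e > 1:
            orig[i] = e - 1              # undo the shift
    return bits, orig
```

The capacity of this toy equals the number of peak-bin errors, and a more concentrated histogram means more zeros to embed into and fewer values shifted, which is why accurate prediction directly reduces distortion. In a real system the payload length is signaled so that unused peak-bin positions are not misread as bits.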
Abstract (third language) | |
Table of contents |
TABLE OF CONTENTS
摘要 (Chinese Abstract) I
ABSTRACT II
TABLE OF CONTENTS IV
LIST OF FIGURES VII
LIST OF TABLES XII
CHAPTER 1 INTRODUCTION 1
1.1 Motivation 1
1.2 Research Objective 1
1.3 Organization of Dissertation 2
CHAPTER 2 LITERATURE REVIEWS 3
2.1 Reversible Data Hiding 3
2.2 Reversible Watermarking Algorithm Using Sorting and Prediction 6
2.3 Image Inpainting 9
2.4 Mathematical Models for Local Non-Texture Inpainting (TV) 10
2.5 Fast Image Inpainting Based on Coherence Transport (Fast) 11
2.6 Least Squares Method 13
CHAPTER 3 PREDICTION METHODS 15
3.1 Prediction via Image Inpainting (TV, Fast) Scheme 15
3.2 Prediction via Least Squares Prediction (LSP) Scheme 18
3.3 Prediction via Multi-directional Gradient and Mode Selection (MGMS) Scheme 20
CHAPTER 4 REVERSIBLE DATA HIDING SYSTEM 28
4.1 Difference Image Generation 30
4.2 Embedding Algorithm 31
4.3 Extracting and Reversing Algorithm 32
4.4 Embedding Selection by Nonlinear Regression Analysis and Self-block Standard Deviation Statistics 33
4.4.1 Estimation Stage 33
4.4.2 Execution Stage 38
4.5 Automatic Embedding Range and Threshold Decision 41
4.6 Overflow and Underflow Problem 43
CHAPTER 5 EXPERIMENTAL RESULTS AND DISCUSSIONS 45
5.1 Comparison of Predictive Performance 47
5.1.1 Predictive Performance for Image Inpainting (TV, Fast) Scheme 47
5.1.2 Predictive Performance for Least Squares Prediction Scheme 49
5.1.3 Predictive Performance for Multi-Directional Gradient and Mode Selection Scheme 52
5.1.4 Comparison 54
5.2 Comparison of Difference Histograms 57
5.2.1 Difference Histograms via Image Inpainting (TV, Fast) Schemes 57
5.2.2 Difference Histogram via Least Squares Prediction Scheme 60
5.2.3 Difference Histogram via Multi-Directional Gradient and Mode Selection Scheme 65
5.2.4 Comparison 67
5.3 Comparison of Hiding Rate Versus Image Quality 70
5.4 Comparison of Data Hiding through Embedding Selection 84
5.5 Comparison of General Embedding, Embedding after Sorting, and Embedding Selection 90
5.6 Comparison of Automatic Embedding Range Decision 93
CHAPTER 6 CONCLUSION AND FUTURE WORK 95
REFERENCES 97
LIST OF FIGURES
Figure 2.1 Sub-sampling example 5
Figure 2.2 An example of the inverse "S" scan of a 3×3 image block 5
Figure 2.3 Depiction of pixels in the red set and gray set 7
Figure 2.4 Difference histogram 7
Figure 3.1 The flow of the prediction via image inpainting 16
Figure 3.2 (a) Two sets (b) Four sets 16
Figure 3.3 (a) The reference pixels for two sets (b) The reference pixels for four sets 16
Figure 3.4 Image classification for four sets 16
Figure 3.5 Preparing predicted image generation 17
Figure 3.6 Predicted image generation for the Boat image 17
Figure 3.7 Predicted image generation for the Peppers image 18
Figure 3.8 The flow of the prediction via least squares prediction 19
Figure 3.9 Data collection using the cross-type collection (9×9) 19
Figure 3.10 The positions of the four weights 20
Figure 3.11 The flow of the prediction via the multi-directional gradient and mode selection scheme 21
Figure 3.12 Original image 21
Figure 3.13 Mirroring image 21
Figure 3.14 Preparing predicted image 22
Figure 3.15 Roughly predicted image 22
Figure 3.16 Mirroring roughly predicted image 22
Figure 3.17 The Sobel masks for four kinds of direction 22
Figure 3.18 The gradient images for four kinds of direction 22
Figure 3.19 Edge image generation by Canny 25
Figure 3.20 Direction classification 26
Figure 3.21 An example of directional histogram statistics (4×4) 26
Figure 3.22 An example of the process for the main direction generation 26
Figure 3.23 The positions of the eight pixels 27
Figure 3.24 The positions of the eight weights 27
Figure 4.1 The block diagram of the embedding process 29
Figure 4.2 The block diagram of the extracting and reversing process 30
Figure 4.3 The statistics of embedding rate (prediction via the MGMS method) 35
Figure 4.4 An example of nonlinear regression analysis by quadratic curve function (threshold td=2) (LSP scheme) 36
Figure 4.5 An example of nonlinear regression analysis by quadratic curve function (threshold td=2) (MGMS scheme) 36
Figure 4.6 Nonlinear regression analysis by nine quadratic curve functions 37
Figure 4.7 Nonlinear regression analysis by eight quadratic curve functions 37
Figure 4.8 Example of hiding data in a 3×3-pixel image with embedding selection 39
Figure 4.9 Example of extraction and recovery from a processed 3×3-pixel image with embedding selection 39
Figure 4.10 The flow of determining the embedding range (ft1 and ft2) 42
Figure 4.11 The flow of determining the best embedding range (t1 and t2) 43
Figure 5.1 Ten original images 46
Figure 5.2 The PSNR versus six images curves using the image inpainting TV and Fast schemes for two sets 48
Figure 5.3 The PSNR versus six images curves using the image inpainting TV and Fast schemes for four sets 49
Figure 5.4 The PSNR versus six images bar graphs using different block sizes for six images 50
Figure 5.5 The average PSNR versus different sizes curve for ten images 50
Figure 5.6 The PSNR versus six images bar graphs using different block sizes for six images 53
Figure 5.7 The average PSNR versus different sizes curve for six images 53
Figure 5.8 The PSNR versus six images bar graphs using the Sachnev, TV and Fast schemes (two sets) 55
Figure 5.9 The average PSNR versus three methods curve for six images (two sets) 56
Figure 5.10 The PSNR versus six images bar graphs using the TV, Fast, LSP and MGMS schemes (four sets) 56
Figure 5.11 The average PSNR versus four methods curve for ten images (four sets) 57
Figure 5.12 The difference histograms on (a) Baboon; (b) Lena; (c) Airplane; (d) Peppers; (e) Boat; (f) Barbara 59
Figure 5.13 The difference histograms on (a) Baboon; (b) Lena; (c) Airplane; (d) Peppers; (e) Boat; (f) Barbara 59
Figure 5.14 Comparison of the sum of six images' difference histograms for the TV scheme and the Fast scheme 60
Figure 5.15 The difference histograms on Baboon for different block sizes 61
Figure 5.16 The difference histograms on Airplane for different block sizes 61
Figure 5.17 The difference histograms on Lena for different block sizes 62
Figure 5.18 The difference histograms on Peppers for different block sizes 62
Figure 5.19 The difference histograms on Boat for different block sizes 63
Figure 5.20 The difference histograms on Barbara for different block sizes 63
Figure 5.21 The sum of six images' difference histograms for different block sizes 64
Figure 5.22 The difference histograms on (a) Baboon; (b) Lena; (c) Airplane; (d) Peppers; (e) Boat; (f) Barbara 65
Figure 5.23 The difference histograms on (a) Baboon; (b) Lena; (c) Airplane; (d) Peppers; (e) Boat; (f) Barbara 66
Figure 5.24 The difference histograms on the Baboon image 67
Figure 5.25 The difference histograms on the Lena image 68
Figure 5.26 The difference histograms on the Airplane image 68
Figure 5.27 The difference histograms on the Peppers image 69
Figure 5.28 The difference histograms on the Boat image 69
Figure 5.29 The difference histograms on the Barbara image 70
Figure 5.30 The sum of six images' difference histograms for four methods 70
Figure 5.31 The PSNR versus capacity curves of the seven compared RDH algorithms for test images (a) Tiffany; (b) Baboon; (c) Lena; (d) Airplane; (e) Peppers; (f) Elaine; (g) Boat; (h) Barbara; (i) Lighthouse; (j) Woman 72
Figure 5.32 The PSNR versus capacity curves of the seven compared RDH algorithms for test images (a) Tiffany; (b) Baboon; (c) Lena; (d) Airplane; (e) Peppers; (f) Elaine; (g) Boat; (h) Barbara; (i) Lighthouse; (j) Woman 74
Figure 5.33 The PSNR versus capacity curves of the six compared RDH algorithms for test images (a) Tiffany; (b) Baboon; (c) Lena; (d) Airplane; (e) Peppers; (f) Elaine; (g) Boat; (h) Barbara; (i) Lighthouse; (j) Woman 76
Figure 5.34 The PSNR versus capacity curves of the six compared RDH algorithms for test images (a) Tiffany; (b) Baboon; (c) Lena; (d) Airplane; (e) Peppers; (f) Elaine; (g) Boat; (h) Barbara; (i) Lighthouse; (j) Woman 78
Figure 5.35 The average PSNR versus capacity curves of the five compared RDH systems and our system (TV, 4 sets) for ten images 79
Figure 5.36 The average PSNR versus capacity curves of the five compared RDH systems and our system (Fast, 4 sets) for ten images 79
Figure 5.37 The average PSNR versus capacity curves of the five compared RDH systems and our system (LSP, 8×8) for ten images 80
Figure 5.38 The average PSNR versus capacity curves of the five compared RDH systems and our system (MGMS, 16×16) for ten images 80
Figure 5.39 The PSNR versus capacity curves of the four compared RDH algorithms for test images (a) Tiffany; (b) Baboon; (c) Lena; (d) Airplane; (e) Peppers; (f) Elaine; (g) Boat; (h) Barbara; (i) Lighthouse; (j) Woman 83
Figure 5.40 The average PSNR versus capacity curves of the four compared RDH algorithms for ten images 83
Figure 5.41 Performance comparison of the original method and the embedding selection method for six images 86
Figure 5.42 The average PSNR versus capacity curves of the two embedding methods for ten images 87
Figure 5.43 Performance comparison of the original method and the embedding selection method for six images 89
Figure 5.44 The average PSNR versus capacity curves of the two embedding methods for ten images 89
LIST OF TABLES
Table 5.1 Predicted images for four sets using the image inpainting (TV) scheme 47
Table 5.2 Predicted images for four sets using the image inpainting (Fast) scheme 48
Table 5.3 Comparison of PSNR for block size = 4, 6, 8, 10, 12 51
Table 5.4 Comparison of PSNR for block size = 16, 18, 20, 24, 30 51
Table 5.5 The execution-time comparison among the different block sizes 52
Table 5.6 Comparison of PSNR for different block sizes 54
Table 5.7 Comparison of our schemes based on LSP for the Baboon image 90
Table 5.8 Comparison of our schemes based on LSP for the Lena image 91
Table 5.9 Comparison of our schemes based on LSP for the Boat image 91
Table 5.10 Comparison of our schemes based on LSP for the Barbara image 91
Table 5.11 Comparison of our schemes based on MGMS for the Baboon image 92
Table 5.12 Comparison of our schemes based on MGMS for the Lena image 92
Table 5.13 Comparison of our schemes based on MGMS for the Boat image 92
Table 5.14 Comparison of our schemes based on MGMS for the Barbara image 93
Table 5.15 Comparison of two stages (RDH based on LSP) (messages = 10,000) 93
Table 5.16 Comparison of two stages (RDH based on MGMS) (messages = 10,000) 94
Table 5.17 Comparison of two stages (RDH based on LSP) (messages = 20,000) 94
Table 5.18 Comparison of two stages (RDH based on MGMS) (messages = 20,000) 94 |
References |
[1] I. J. Cox, M. L. Miller, J. A. Bloom, J. Fridrich, T. Kalker, Digital Watermarking and Steganography, 2nd ed., Morgan Kaufmann, Burlington, USA, 2008.
[2] J. Fridrich, M. Goljan, R. Du, "Invertible authentication," Proc. SPIE, vol. 4314, 2001, pp. 197-208.
[3] J. Fridrich, M. Goljan, R. Du, "Lossless data embedding - new paradigm in digital watermarking," EURASIP J. Appl. Signal Process., vol. 2, 2002, pp. 185-196.
[4] M. U. Celik, G. Sharma, A. M. Tekalp, E. Saber, "Lossless generalized-LSB data embedding," IEEE Trans. Image Process., vol. 14, no. 2, 2005, pp. 253-266.
[5] M. U. Celik, G. Sharma, A. M. Tekalp, "Lossless watermarking for image authentication: A new framework and an implementation," IEEE Trans. Image Process., vol. 15, no. 4, 2006, pp. 1042-1049.
[6] J. Tian, "Reversible data embedding using a difference expansion," IEEE Trans. Circuits Syst. Video Technol., vol. 13, no. 9, 2003, pp. 890-896.
[7] A. M. Alattar, "Reversible watermark using the difference expansion of a generalized integer transform," IEEE Trans. Image Process., vol. 13, no. 8, 2004, pp. 1147-1156.
[8] M. Fallahpour, "Reversible image data hiding based on gradient adjusted prediction," IEICE Electron. Express, vol. 5, no. 20, 2008, pp. 870-876.
[9] X. Li, B. Yang, T. Zeng, "Efficient reversible watermarking based on adaptive prediction-error expansion and pixel selection," IEEE Trans. Image Process., vol. 20, 2011, pp. 3524-3533.
[10] I. C. Dragoi, D. Coltuc, "Local-prediction-based difference expansion reversible watermarking," IEEE Trans. Image Process., vol. 23, no. 4, 2014, pp. 1779-1790.
[11] K. C. Vinoth, V. Natarajan, "Hybrid local prediction error-based difference expansion reversible watermarking for medical images," Computers and Electrical Engineering, vol. 53, 2016, pp. 333-345.
[12] Z. Ni, Y. Q. Shi, N. Ansari, W. Su, "Reversible data hiding," IEEE Trans. Circuits Syst. Video Technol., vol. 16, no. 3, 2006, pp. 354-362.
[13] K. S. Kim, M. J. Lee, H. Y. Lee, H. K. Lee, "Reversible data hiding exploiting spatial correlation between sub-sampled images," Pattern Recognition, vol. 42, no. 11, 2009, pp. 3083-3096.
[14] H. Luo, F. X. Yu, H. Chen, Z. L. Huang, H. Li, P. H. Wang, "Reversible data hiding based on block median preservation," Information Sciences, vol. 181, no. 2, 2011, pp. 308-328.
[15] Z. Zhao, H. Luo, Z. M. Lu, J. S. Pan, "Reversible data hiding based on multilevel histogram modification and sequential recovery," International Journal of Electronics and Communications, vol. 65, no. 10, 2011, pp. 814-826.
[16] Y. C. Li, C. M. Yen, C. C. Chang, "Data hiding based on the similarity between neighboring pixels with reversibility," Digital Signal Processing, vol. 20, no. 4, 2010, pp. 1116-1128.
[17] W. He, G. Xiong, K. Zhou, J. Cai, "Reversible data hiding based on multilevel histogram modification and pixel value grouping," J. Vis. Commun. Image R., vol. 40, 2016, pp. 459-469.
[18] V. Sachnev, H. J. Kim, J. Nam, S. Suresh, Y. Q. Shi, "Reversible watermarking algorithm using sorting and prediction," IEEE Trans. Circuits Syst. Video Technol., vol. 19, no. 7, 2009, pp. 989-999.
[19] W. J. Yang, K. L. Chung, H. Y. M. Liao, W. K. Yu, "Efficient reversible data hiding algorithm based on gradient-based edge direction prediction," The Journal of Systems and Software, vol. 86, 2013, pp. 567-580.
[20] R. M. Rad, K. Wong, J. M. Guo, "Reversible data hiding by adaptive group modification on histogram of prediction errors," Signal Processing, vol. 125, 2016, pp. 315-328.
[21] R. M. Rad, K. Wong, J. M. Guo, "A unified data embedding and scrambling method," IEEE Trans. Image Process., vol. 23, no. 4, 2014, pp. 1463-1475.
[22] C. Qin, C. C. Chang, Y. H. Huang, L. T. Liao, "An inpainting-assisted reversible steganographic scheme using a histogram shifting mechanism," IEEE Trans. Circuits Syst. Video Technol., vol. 23, no. 7, 2013, pp. 1109-1118.
[23] X. Li, B. Li, B. Yang, T. Zeng, "General framework to histogram-shifting-based reversible data hiding," IEEE Trans. Image Process., vol. 22, no. 6, 2013, pp. 2181-2191.
[24] J. Wang, J. Ni, X. Zhang, Y. Q. Shi, "Rate and distortion optimization for reversible data hiding using multiple histogram shifting," IEEE Trans. Cybernetics, vol. 47, no. 2, 2017, pp. 315-326.
[25] C. Guillemot, O. L. Meur, "Image inpainting: overview and recent advances," IEEE Signal Processing Magazine, vol. 31, no. 1, 2014, pp. 127-144.
[26] Z. Chen, C. Dai, Le. Jiang, B. Sheng, J. Zhang, W. Lin, Y. Yuan, "Structure-aware image inpainting using patch scale optimization," Journal of Visual Communication and Image Representation, vol. 40, 2016, pp. 312-323.
[27] M. Bertalmio, G. Sapiro, C. Ballester, V. Caselles, "Image inpainting," in Proc. ACM SIGGRAPH, 2000, pp. 417-424.
[28] T. Chan, J. Shen, "Mathematical models for local non-texture inpainting," SIAM J. Appl. Math., vol. 62, 2002, pp. 1019-1043.
[29] L. I. Rudin, S. Osher, E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60, 1992, pp. 259-268.
[30] M. Bertalmio, L. Vese, G. Sapiro, S. Osher, "Simultaneous structure and texture image inpainting," IEEE Trans. Image Process., vol. 12, no. 8, 2003, pp. 882-889.
[31] F. Bornemann, T. Marz, "Fast image inpainting based on coherence transport," J. Math. Imaging Vision, vol. 28, no. 3, 2007, pp. 259-278.
[32] F. Li, T. Zeng, "A universal variational framework for sparsity-based image inpainting," IEEE Trans. Image Process., vol. 23, no. 10, 2014, pp. 4242-4254.
[33] D. H. Zhai, W. X. Dung, J. Yu, "Image inpainting algorithm based on double cross TV," Journal of University of Electronic Science and Technology of China, vol. 43, no. 3, 2014, pp. 432-436.
[34] T. Marz, "A well-posedness framework for inpainting based on coherence transport," Foundations of Computational Mathematics, vol. 15, 2015, pp. 973-1033.
[35] USC-SIPI Image Database. http://sipi.usc.edu/database/database.html. Accessed Jan 2020.
[36] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-8, no. 6, 1986, pp. 679-698.
[37] S. A. Glantz, B. K. Slinker, Primer of Applied Regression and Analysis of Variance, McGraw-Hill, 2016. |
Full-text usage rights |