§ Thesis Bibliographic Record
  
System ID	U0002-1301200910423100
DOI	10.6846/TKU.2009.00347
Title (Chinese)	新的影像修補技術
Title (English)	Several New Techniques for Image Inpainting
Title (Third Language)
Institution	Tamkang University (淡江大學)
Department (Chinese)	電機工程學系博士班 (Doctoral Program, Department of Electrical Engineering)
Department (English)	Department of Electrical and Computer Engineering
Foreign Degree School
Foreign Degree College
Foreign Degree Institute
Academic Year	97 (2008-2009)
Semester	1
Year of Publication	98 (2009)
Author (Chinese)	陳衍良
Author (English)	Yen-Liang Chen
Student ID	889350046
Degree	Doctorate
Language	English
Second Language
Oral Defense Date	2009-01-05
Number of Pages	136
Committee	Advisor - 謝景棠 (hsieh@ee.tku.edu.tw)
Member - 陳稔
Member - 施國琛
Member - 顏淑惠
Member - 黃仁俊
Member - 謝景棠
Keywords (Chinese)	Image Inpainting
Multi-Resolution
Wavelet Transform
Adaptive Decomposition
Repairing Priority
Watermark
Warping
Self-Similarity Matching
Bandelet Transform
Geometric Flow
Keywords (English)	Image Inpainting
Multi-Resolution
Wavelet Transform
Adaptive Decomposition
Repairing Priority
Watermark
Warp Transform
Affine Matching
Bandelet Transform
Geometrical Flow
Keywords (Third Language)
Subject Classification
Abstract (Chinese)
This dissertation proposes adaptive image inpainting algorithms based on the multi-resolution scales that correspond to the characteristics of human visual perception around the damaged region. We examine how adaptive multi-level decomposition, the ordering of repairing priorities, and different pixel-repair decision rules affect contours and textures, and propose four image inpainting techniques in turn:

1.	Progressive image inpainting: a digital image inpainting algorithm based on the wavelet transform. A two-level wavelet transform decomposes the image to be repaired into wavelet layers of three different frequency components (low, middle, and high). The image contour is first predicted at coarse resolution in the low-frequency layer and repaired using that coarse contour. Based on this result, the repair proceeds gradually to the middle- and high-frequency layers for finer texture repair, so that the outcome is closer to human visual perception.
2.	Adaptive-decomposition image inpainting: to avoid erroneous repair results in large damaged regions, a similarity-based digital image inpainting algorithm with an adaptive hierarchical wavelet transform is proposed. The number of wavelet levels is chosen according to the size of the damaged region, so the decomposition adapts to the damage and the coarse-contour prediction becomes more accurate; in addition, exploiting the fact that similar contours and textures recur within the same image, a self-similarity repair decision rule is used to carry out the repair (a minimal sketch of this adaptive, coarse-to-fine scheme follows the abstract).
3.	Geometric-flow-based bandelet inpainting: although the wavelet transform can adaptively decompose an image into different resolution layers, it cannot effectively decompose texture components with different orientations, so the repair results lack fine detail. To remedy this, we propose an inpainting algorithm based on the bandelet transform: the bandelet transform provides the geometric flow of contours and textures, and repairing along this flow yields finer results.
4.	Watermark-based inpainting: when a large damaged region contains several objects with different contours and texture variations, the limited contour information is insufficient even for a coarse repair. To solve this problem, a robust image-contour watermarking technique provides approximate contour-orientation information about the original image for the repair to follow, and the adaptive multi-resolution layers then carry out the fine repair. This avoids visually severe global repair errors caused by an incorrect coarse contour.

The proposed algorithms were tested on damaged regions of different sizes and texture types. The experimental results show that decomposing the image to be repaired into different resolution layers, and in particular decomposing it adaptively according to the size of the damaged region, reduces the complexity of the information in each layer and allows the progressive repair method to analyze and decide effectively. Moreover, although adaptive wavelet decomposition provides a sufficient number of decomposition levels, the bandelet transform describes contour and texture orientation more effectively than the wavelet transform and concentrates the coefficients of each layer more strongly, so repairing the image components in each layer gives finer results. Finally, using a watermark stored in advance in the original image as a repair reference helps raise the reconstruction rate for images with large damaged regions.
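To make the multi-resolution scheme of items 1 and 2 concrete, the sketch below (Python with NumPy and PyWavelets) chooses the wavelet decomposition depth from the extent of the damaged region, repairs the coarse approximation first, and then repairs the detail subbands before reconstructing. The helper names (`choose_levels`, `shrink_mask`, `repair_band`) and the mean-value fill are illustrative placeholders under assumed parameters, not the thesis's priority-driven block repair.

```python
import numpy as np
import pywt  # PyWavelets


def choose_levels(mask, block=8, max_levels=4):
    """Pick a decomposition depth so the largest damaged extent shrinks to
    roughly one block at the coarsest scale (an assumed rule, for illustration)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 1
    extent = max(ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)
    levels = 1
    while extent / (2 ** levels) > block and levels < max_levels:
        levels += 1
    return levels


def shrink_mask(mask, shape):
    """Nearest-neighbour shrink of the damage mask to a subband's shape."""
    ys = np.linspace(0, mask.shape[0] - 1, shape[0]).astype(int)
    xs = np.linspace(0, mask.shape[1] - 1, shape[1]).astype(int)
    return mask[np.ix_(ys, xs)]


def repair_band(band, band_mask):
    """Placeholder repair of one subband: fill damaged coefficients with the
    mean of the valid ones (a stand-in for the priority-driven block repair)."""
    out = band.copy()
    valid = band[~band_mask]
    out[band_mask] = valid.mean() if valid.size else 0.0
    return out


def coarse_to_fine_inpaint(image, mask, wavelet="haar"):
    """image: 2-D array; mask: boolean array, True where pixels are damaged."""
    levels = choose_levels(mask)
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    # Repair the coarse approximation first (coarse contour), then the detail
    # subbands from the coarsest to the finest level.
    repaired = [repair_band(coeffs[0], shrink_mask(mask, coeffs[0].shape))]
    for detail in coeffs[1:]:  # detail = (cH, cV, cD) at one level
        repaired.append(tuple(
            repair_band(b, shrink_mask(mask, b.shape)) for b in detail))
    return pywt.waverec2(repaired, wavelet)
```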
Abstract (English)
In this thesis, we propose adaptive image inpainting methods based on the multi-resolution characteristics of human visual perception in the neighborhood of the damaged region. We explore how the decomposition scheme, the repairing priority, and the decision-making and pixel-repair techniques affect the inpainting results. On this basis, we propose four methods for the restoration of damaged images.

1.	Progressive image inpainting: digital image inpainting based on the wavelet transform. A two-level wavelet transform is used to decompose the image into three wavelet layers of different frequency components (low, middle, and high) before the inpainting procedure is carried out. First, contour estimation at coarse resolution is conducted on the low-frequency wavelet layer, and the image is repaired according to the obtained coarse contour. Based on these repair results, the wavelet layers are progressively repaired, moving gradually from lower to higher frequencies to carry out finer texture repair and produce results that are more consistent with human visual perception.
2.	Adaptive decomposition inpainting: in order to avoid false repair results in large damaged regions, we propose an adaptive decomposition based on the wavelet transform. The size and extent of the damaged region are evaluated to determine the number of wavelet layers used for the adaptive decomposition of the image. By exploiting the similarity of contour and texture within the same image, self-similarity decision-making rules are then proposed to conduct the repair (a spatial-domain sketch of this matching step follows this list).
3.	Geometric bandelet inpainting: although the wavelet transform allows decomposition of an image into different resolution layers, it cannot achieve an ideal decomposition of two-dimensional structures such as oriented textures. Therefore, if repair is conducted directly on the wavelet coefficients, the resulting image does not reach the desired refined quality. To overcome this, we propose to carry out image repair using the bandelet transform together with the geometric flow of image contours and textures.
4.	Watermark inpainting: when the damaged region contains multiple different objects, the limited contour information does not allow the repair to be carried out correctly. To solve this problem, an image-contour watermark previously embedded in the image is used as a reference to guide the repair; the method for repairing damaged images is thus based on the analysis of the image watermark.
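The self-similarity decision in item 2 can be illustrated with a plain spatial-domain sketch: a damaged block is matched against candidate blocks in a surrounding search window using only its still-valid pixels, and the best match supplies the missing values. This NumPy-only stand-in, with assumed block and window sizes, is a simplification of the wavelet-stage matching actually used in the thesis.

```python
import numpy as np


def fill_block_by_self_similarity(image, mask, top, left, size=8, search=32):
    """Fill the damaged pixels of the size x size block at (top, left) by
    searching a surrounding window for the most similar fully intact block.
    `mask` is boolean, True where pixels are damaged."""
    img = image.astype(float)
    blk = img[top:top + size, left:left + size]
    blk_mask = mask[top:top + size, left:left + size]
    valid = ~blk_mask                      # pixels of the block we can trust
    best_err, best_patch = np.inf, None
    y0, y1 = max(0, top - search), min(img.shape[0] - size, top + search)
    x0, x1 = max(0, left - search), min(img.shape[1] - size, left + search)
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            if mask[y:y + size, x:x + size].any():
                continue                   # candidate must be fully intact
            cand = img[y:y + size, x:x + size]
            err = np.sum((cand[valid] - blk[valid]) ** 2)  # SSD on valid pixels
            if err < best_err:
                best_err, best_patch = err, cand
    out = img.copy()
    if best_patch is not None:             # otherwise leave the block unchanged
        patch = blk.copy()
        patch[blk_mask] = best_patch[blk_mask]   # copy only the missing pixels
        out[top:top + size, left:left + size] = patch
    return out
```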
    
In this thesis, we investigated the restoration of damaged images using the four methods described above, which differ in their applicability to damaged regions of various sizes and textures. The experimental results show that an image with large-scale damage can be decomposed into different resolution layers, and even decomposed adaptively according to the size and extent of the damage; this provides a sufficient number of analysis layers and reduces the complexity of the information in each layer, enabling effective, progressive repair of damaged images. In addition, the bandelet transform lets us decompose damaged images adaptively according to the orientation of their contours and textures, which makes the distribution of coefficients in each layer more concentrated and allows finer repair results. Finally, the reconstruction rate of damaged images increases significantly when the contour watermark of the original image is used as a reference for the repair.
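Item 4 and the closing remark above rely on a contour watermark extracted from the image itself as a repair reference. The following minimal sketch uses OpenCV's Canny detector and plain least-significant-bit embedding to illustrate the idea; the LSB scheme and the Canny thresholds are assumptions for illustration only, since the thesis relies on a robust contour-watermarking method rather than fragile LSBs.

```python
import cv2
import numpy as np


def embed_contour_watermark(gray, low=50, high=150):
    """Embed the binary Canny contour map of a uint8 grayscale image into its
    own least-significant-bit plane (illustrative, not robust)."""
    contour = (cv2.Canny(gray, low, high) > 0).astype(np.uint8)  # 0/1 contour map
    return (gray & 0xFE) | contour                               # overwrite LSBs


def extract_contour_watermark(watermarked):
    """Recover the embedded contour map (1 = contour pixel)."""
    return watermarked & 0x01


# Usage sketch: the extracted contour gives the coarse structure of the original
# image, which the repair step can follow inside a large damaged region before
# the finer multi-resolution repair is applied.
# gray = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)
# watermarked = embed_contour_watermark(gray)
# contour_reference = extract_contour_watermark(watermarked)
```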
Abstract (Third Language)
Table of Contents
CHAPTER 1 Introduction	1
1.1 Research Background	1
1.2 Thesis Contribution	1
1.3 Thesis Framework	3

CHAPTER 2 Progressive Image Inpainting	7
2.1 Introduction	7
2.2 Previous Related Work	9
2.2.1 The Image Multi-resolution Analysis	9
2.2.2 Priorities of the Block Inpainting Sequence	12
2.2.3 Directional Pixel-value Fill-in Algorithm (DPFA)	15
2.3 The Proposed Algorithm	17
2.3.1 The Progressive Image Inpainting Algorithm	18
2.3.2 Flow Chart of the Multi-resolution Analyzing Method	23
2.4 Experimental Results	26
2.4.1 The inpainting results from considering the multi-resolution wavelet coefficients	26
2.4.2 The influence of varied testing area dimensions on inpainting results	27
2.4.3 A comparison of image inpainting results among current inpainting methods	29 
2.4.3.1 The comparison of results derived from various image inpainting algorithms	29
2.4.3.2 The results of utilizing the image inpainting algorithm on photos and paintings	32
2.5 Conclusion	35

CHAPTER 3 Image Inpainting Based on Self Similarity	37
3.1 Introduction	37
3.2 Previous Related Work	41
3.2.1 Adaptive Image Multi-Resolution Analysis	41
3.2.2 Repairing Order of Decision Mechanism	43
3.2.3 A Fractal Geometric Pixel Restoration Method	48
3.3 The Proposed Algorithm	51
3.3.1 Details of the GII Method	51
3.3.2 Explanation of the Entire Process	55
3.4 Experimental Results	57
3.4.1 A comparison of image inpainting results among current inpainting methods	57
3.4.2 A comparison of processing time among current inpainting methods	58
3.4.3 The results of the image inpainting on the geometric images	60
3.4.4 The results of utilizing the image inpainting algorithm on photos	61
3.5 Conclusions	64

CHAPTER 4 Bandelet-Based Image Inpainting	65
4.1 Introduction	65
4.2 Previous Related Work	68
4.2.1 Geometric Flow	68
4.2.2 Bandelet Transform	69
4.3 The Proposed Algorithm	71
4.4 Experimental Results	80
4.5 Conclusions	84

CHAPTER 5 Inpainting Application 1 - Wavelet Stage Best Neighborhood Matching	85
5.1 Introduction	85
5.2 Previous Related Work	88
5.2.1 BNM	88
5.2.2 Directional Texture Reconstruction	90
5.3 The Proposed Algorithm	93
5.3.1 Details of MLBNM	93
5.3.2 Flow Chart of the Proposed Algorithm	97
5.4 Experimental Results	100
5.4.1 Comparison of image repairing results with the best existing methods	101
5.4.2 Results of the image repair on an arbitrary image	107
5.5 Conclusions	109

CHAPTER 6 Inpainting Application 2 – Watermark-Based Image Inpainting	111
6.1 Introduction	111
6.2 Previous Related Work	112
6.2.1 Digital watermarking	112
6.2.2 Canny edge detection	114
6.2.3 Reference image inpainting	116
6.3 The Proposed Algorithm	119
6.4 Experimental Results	121
6.5 Conclusions	123

CHAPTER 7 Summary and Future Development	125
7.1 Summary	125
7.2 Future Development	127
Reference Materials	129
Publication List	135

List of Figures
Fig. 2.1 Dual-Frequency Analysis of Wavelet Transform	11           
Fig. 2.2 Results of the wavelet transformation analysis derived from various layers of a given image	12
Fig. 2.3 The importance of the consideration of textural extensions for image inpainting	14
Fig. 2.4 Within the section of repair Ω, the priority sequence of areas within ΔΩ can be derived from the image textural content of the areas awaiting repair.	15
Fig. 2.5 Three image textural components present between the area under repair and its adjacent areas	17
Fig. 2.6 The results of applying layer 1 wavelet transformation to an image	17
Fig. 2.7 The comparison of various reconstructed images with different wavelet coefficients of frequency layers	18
Fig. 2.8 The “tree structure” correlation of wavelet transformation	21
Fig. 2.9 Flowchart of the proposed inpainting method	25
Fig. 2.10 Experimental results from utilizing the multi-layer wavelet coefficients	27
Fig. 2.11 A set of image inpainting results with various damaged areas	28
Fig. 2.12 Comparison of the PSNR values of the inpainting results for differing heights of the damaged block	29
Fig. 2.13 The tested image and the inpainting results derived from various other methods.	31
Fig. 2.14 Zoom-in repair results derived from various other methods. 	31
Fig. 2.15 The tested image with vast areas of damage and the inpainting results derived from various other methods	32
Fig. 2.16 The inpainting results of a repeated pattern derived from the proposed method	33
Fig. 2.17 The inpainting results of a photo derived from the proposed method	34
Fig. 2.18 The inpainting results of an artistic composition derived from the proposed method	34
Fig. 3.1 The notation diagram of the damaged area	45
Fig. 3.2 Inpainting a damaged image by utilizing the different WT layers	47
Fig. 3.3 Inpainting at different layers of WT: 4th, 3rd and 2nd level layers	52
Fig. 3.4 The repair block includes both valid and invalid pixels	54
Fig. 3.5 The inpainting results derived from various other methods	58
Fig. 3.6 The inpainting results derived from various geometric images	60
Fig. 3.7 Test image 1 – repeated texture	61
Fig. 3.8 Test image 2 – repeated shadows	62
Fig. 3.9 Test image 3 – photos	63
Fig. 4.1 Incorrect reference information leads to an incorrect repair result	69
Fig. 4.2 The repairing directions that may be applied to the damaged region	69
Fig. 4.3 The image can be divided into three categories	71
Fig. 4.4 The texture image transformed using the geometric flow	72
Fig. 4.5 Quad tree of dyadic square image segmentation	73
Fig. 4.6 Bandeletization applied to image regions with different characteristics	74
Fig. 4.7 Flowchart of the proposed inpainting method	76
Fig. 4.8 The binary decomposition image	77
Fig. 4.9 Comparison of the repaired results	79
Fig. 4.10 The inpainting results.	81
Fig. 4.11 The inpainting results. 	81
Fig. 4.12 Experimental results from utilizing different methods 	83
Fig. 4.13 Experimental results from utilizing different methods 	83
Fig. 5.1 Structure of damaged block, range block, searching block, and neighboring information with their default sizes	90
Fig. 5.2 A simple experiment illustrating the problem with Shantanu's repair method	92
Fig. 5.3 The relation of each directional neighboring coefficient used to repair the damaged coefficients of the damaged block	94
Fig. 5.4 The relative positions of the directional texture coefficients in the wavelet resolution layers	94
Fig. 5.5 Comparison of the repair results in terms of directional information	95
Fig. 5.6 The visual adjustment to solve the block effect of the reconstructed image. 	96
Fig. 5.7 Flowchart of the WSBNM	99
Fig. 5.8 The reconstructed results for “Goldhill” with a block loss rate of 10%; block size is 8 x 8	100
Fig. 5.9 Comparison of PSNR repair results for “Lena” achieved by BNM, JBNM, Shantanu’s method, and WSBNM	102
Fig. 5.10 Comparison of PSNR repair results for “Baboon” achieved by BNM, JBNM, Shantanu’s method, and WSBNM	103
Fig. 5.11 Comparison of PSNR repair results for “Goldhill” achieved by BNM, JBNM, Shantanu’s method, and WSBNM	103
Fig. 5.12 Comparison of PSNR repair results for “Barbara” achieved by BNM, JBNM, Shantanu’s method, and WSBNM	104
Fig. 5.13 Comparison of the processing time for “Lena” achieved by BNM, JBNM, Shantanu’s method, and WSBNM	104
Fig. 5.14 Restoration results for “Baboon” with a block loss rate of 5%; block size is 16 x 16	105
Fig. 5.15 Restoration results for “Barbara” with three whole lines lost; block size is 16 x 16	106
Fig. 5.16 Restored results for “repeated stripe pattern” with three kinds of damage conditions and a damage rate of 15%; block size is 8 x 8	107
Fig. 5.17 Restored results for “scenery” with three kinds of damage conditions and a damage rate of 15%; block size is 8 x 8	108
Fig. 5.18 Restored results for “portrait” with three kinds of damage conditions and a damage rate of 15%; block size is 8 x 8	108
Fig. 6.1 The Sobel mask in x-direction and y-direction	115
Fig. 6.2 Using Canny edge detection to obtain the image contour	118
Fig. 6.3 The proposed watermark-based image inpainting	120
Fig. 6.4 The contour image	120
Fig. 6.5 Experimental results from utilizing different methods	122

List of Tables
Table 4.1 The comparison of the repairing time using different methods		59
Full-Text Access Rights
On campus
The printed thesis will be made publicly available 5 years after submission of the authorization form.
The author agrees to make the full electronic text publicly available on campus.
The on-campus electronic thesis will be made publicly available 5 years after submission of the authorization form.
Off campus
Authorization granted.
The off-campus electronic thesis will be made publicly available 5 years after submission of the authorization form.
