System ID | U0002-1807202115080900 |
---|---|
DOI | 10.6846/TKU.2021.00425 |
Title (Chinese) | 結合真實人眼視覺系統的影像處理方法之研究 |
Title (English) | A Study on Image Processing Method Combined with Real Human Visual System |
Title (third language) | |
University | Tamkang University |
Department (Chinese) | 電機工程學系博士班 |
Department (English) | Department of Electrical and Computer Engineering |
Foreign degree school | |
Foreign degree college | |
Foreign degree institute | |
Academic year | 109 |
Semester | 2 |
Publication year | 110 |
Author (Chinese) | 陳力銘 |
Author (English) | Li-Ming Chen |
Student ID | 801440016 |
Degree | PhD |
Language | English |
Second language | |
Defense date | 2021-07-01 |
Pages | 52 |
Committee |
Advisor - 易志孝 (126016@mail.tku.edu.tw)
Co-advisor - 洪國銘 (hkming@mail.knu.edu.tw)
Member - 許榮隆 (srl@gapps.knu.edu.tw)
Member - 洪國銘 (hkming@mail.knu.edu.tw)
Member - 林正雄 (jslin@mail.vnu.edu.tw)
Member - 許志旭 (hsuch@ems.cku.edu.tw)
Member - 林慧珍 (086204@gms.tku.edu.tw) |
Keywords (Chinese) | 人眼視覺系統; 火災影像分析; 深度學習; 完形心理學; 商標設計學; 商標侵權 |
Keywords (English) | Human visual system; Fire image analysis; Deep learning; Gestalt psychology; Trademark design; Trademark infringement |
Keywords (third language) | |
Subject classification | |
Abstract (Chinese) |
In image processing, the human visual system model (HVS model) is a simplification used when technology confronts complex problems in biology and psychology. The HVS model improves as our understanding of the real visual system deepens. However, where that understanding is insufficient, important features are overlooked, and problems the real visual system resolves easily become difficult and complicated for existing techniques. To better understand the real visual system, this dissertation draws on the visual-perception theory of the Gestalt psychologist K. Koffka to propose improvements to existing outdoor fire alarm systems and trademark image retrieval systems.

Existing outdoor fire alarm systems build on image-based flame detection, which is not limited by terrain. However, after the system detects and locates a flame, a firefighter must still judge whether it is a real fire. The reason is that the real human visual system observes not only the fire but also other useful information; combined with human experience, this is enough to predict accurately whether a detected flame will worsen. The relevant experience is the well-known fire triangle: oxygen, fuel, and heat are the three elements required for combustion. In fire research, when the fire triangle is scaled up in time and space, these three elements become climate, vegetation, and ignition. Climate cannot be captured by an ordinary camera, but vegetation and ignition usually can. Existing flame detection systems nevertheless fail to account for vegetation, mainly because its interfering factors far exceed those of the ignition source. This dissertation exploits one property of the convolutional neural network (CNN) in deep learning, namely a certain resistance to environmental variation, and uses it to propose a fire alarm system that behaves like human vision. The proposed system applies transfer learning to the reclassified image data and is then tested. The results show that, with zero false negatives, the proposed method reduces the false positive rate (FPR) of the previous system from 40.47% to 4.15%, confirming that the proposed reclassification brings the existing system's fire alarm judgment closer to that of the real human visual system.

In existing trademark image retrieval systems, mathematical models find similar trademarks effectively and quickly. However, courts in many countries are often unable to accept these systems as criteria in actual trademark judgments, mainly because a mathematical model can show that the features of two trademarks are similar but cannot explain how those features relate to the real human visual system, forcing judges to rely on their own subjective perception. Based on K. Koffka's seven-aspect account of visual perception and the principles of trademark design, this dissertation proposes seven corresponding features, implements them in a new mathematical model, and tests the model on a trademark database. The experiments show that the proposed seven-feature system not only judges similarity as correctly as existing trademark image retrieval systems but also explains which features of two trademarks are close; through the feature distances, it further explains why the real visual system perceives two trademarks as the same or different.

Based on the HVS model, this dissertation proposes two systems in different fields. The experiments confirm that the proposed HVS-model systems effectively link image processing, visual psychology, and trademark design. |
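The reported error-rate comparison rests on standard confusion-matrix quantities. As a minimal illustration of how FPR and the false-negative count are computed (shown in Python; the dissertation's experiments used MATLAB, and the names and toy data below are hypothetical, not the author's code):

```python
def fire_alarm_rates(y_true, y_pred):
    """Confusion-matrix quantities for binary fire-alarm labels.

    y_true / y_pred: sequences of 0 (no real fire) and 1 (real fire).
    Returns (false_positive_rate, false_negative_count).
    """
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fpr = fp / (fp + tn) if (fp + tn) > 0 else 0.0
    return fpr, fn

# Toy data: every real fire is alarmed (FN = 0), 1 of 4 non-fire clips misfires.
labels = [1, 1, 1, 0, 0, 0, 0]
alarms = [1, 1, 1, 1, 0, 0, 0]
fpr, fn = fire_alarm_rates(labels, alarms)
print(fpr, fn)  # 0.25 0
```

The dissertation's headline numbers fit this scheme: the reclassification keeps FN at 0 while driving FPR from 40.47% down to 4.15% on the Bilkent clips.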
Abstract (English) |
In image processing technology, the human visual system model (HVS model) is a simplification used when technology faces complex problems in biology and psychology. The HVS model continues to improve as our understanding of the real visual system grows. However, because that understanding is still insufficient, important features are sometimes neglected, and problems that real vision distinguishes easily become difficult and complicated for the prior art. To further understand the real visual system, this dissertation proposes improvements to existing outdoor fire alarm systems and trademark image retrieval systems based on the visual-perception theory of the Gestalt psychologist K. Koffka.

Existing outdoor fire alarm systems use image-based fire detection, which is not restricted by terrain. However, after the fire detection system detects and locates a fire, a firefighter still needs to confirm whether it is real. The reason is that the real human visual system observes not only the fire but also other useful information, and this information combined with human experience is enough to predict accurately whether the detected fire will deteriorate. That experience is the fire triangle: oxygen, heat, and fuel are the three ingredients required for a fire to burn. In fire-related work, when the fire triangle is enlarged in time and space, the corresponding three elements become climate, vegetation, and ignition. Apart from climate, which ordinary cameras cannot capture, vegetation and ignition are features that can usually be obtained. However, existing fire detection systems cannot take vegetation into consideration, mainly because its disturbing factors are very large. This dissertation exploits one property of the convolutional neural network (CNN) in deep learning, namely a certain resistance to environmental changes, and uses it to propose a fire alarm system similar to human vision. The proposed system performs transfer learning and testing on image data reclassified by a CNN using the proposed method. The test results show that the proposed method reduces the false positive rate (FPR) of the previous system from 40.47% to 4.15% while keeping the false negatives (FN) at 0. This confirms that the proposed reclassification enables the existing system to reach fire alarm accuracy similar to that of the real human visual system.

In existing trademark image retrieval systems, mathematical models of trademark retrieval can find similar trademarks effectively and quickly. However, in the courts of many countries, actual trademark judgments often cannot use these systems as criteria for deciding whether two trademarks are confusingly similar. The main reason is that although a mathematical model can find that the features of two trademarks are similar, it cannot explain how the features it uses relate to the real human visual system. As a result, judges must rely on their own subjective cognition, and judgments may deviate from justice. Based on K. Koffka's seven-aspect explanation of human visual perception and the principles of trademark design, this dissertation proposes seven corresponding features, implements them in a new mathematical model, and tests the model on a trademark database.

The experimental results show that the proposed seven-feature system not only judges similarity as accurately as existing trademark image retrieval systems but also explains which features of the trademarks are similar. Through the feature distances, it further explains why the real visual system considers two trademarks the same or different. Based on the HVS model, this dissertation proposes two systems in different fields, and the experiments show that the proposed HVS-model systems can effectively assist humans by linking image processing, visual psychology, and trademark design to enhance human well-being. |
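The interpretability claim above — reporting which of the seven features are close rather than a single opaque similarity score — can be sketched as follows. This is a hedged illustration only: the feature names, values, and the threshold rule (distance below one standard deviation, echoing the statistic used in Figures 12 and 13) are placeholders, not the dissertation's actual visual-weight model.

```python
# Placeholder names standing in for the seven visual-weight features
# derived from Koffka's Gestalt principles and trademark design.
FEATURES = ("f1", "f2", "f3", "f4", "f5", "f6", "f7")

def feature_distances(a, b):
    """Per-feature absolute distance between two trademark feature vectors."""
    return {f: abs(a[f] - b[f]) for f in FEATURES}

def explain(a, b, sigma):
    """Return the features whose distance falls below its standard deviation,
    i.e. the aspects in which two marks look alike to the modeled viewer."""
    d = feature_distances(a, b)
    return sorted(f for f in FEATURES if d[f] < sigma[f])

# Two hypothetical trademarks and a uniform per-feature deviation.
mark_a = dict(zip(FEATURES, (0.9, 0.2, 0.5, 0.7, 0.1, 0.4, 0.8)))
mark_b = dict(zip(FEATURES, (0.85, 0.6, 0.5, 0.1, 0.1, 0.45, 0.3)))
sigma = {f: 0.2 for f in FEATURES}
print(explain(mark_a, mark_b, sigma))  # ['f1', 'f3', 'f5', 'f6']
```

The point of the design is that the output names the close features, so a court can be told *in which respects* two marks resemble each other instead of receiving only an overall score.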
Abstract (third language) | |
Table of Contents |
Abstract (Chinese) I
Abstract (English) III
TABLE OF CONTENTS V
LIST OF FIGURES VII
LIST OF TABLES VIII
CHAPTER 1 INTRODUCTION 1
1.1 Background and Motivation 1
1.2 Research Objective 1
1.3 Organization of Dissertation 3
CHAPTER 2 LITERATURE REVIEWS 4
2.1 Deep Learning: Convolutional Neural Network 4
2.2 Visual Psychology and Gestalt Theory 7
2.3 Fire Detection Related Works 9
2.4 Trademark Infringement and Related Works 12
CHAPTER 3 FIRE ALARM METHOD BASED ON HUMAN VISUAL SYSTEM 14
3.1 Limitations of Existing Related Work 14
3.2 The Proposed Vegetation Features and Implementation of the Reclassification System 15
CHAPTER 4 EXPERIMENTAL RESULTS OF FIRE ALARM METHOD BASED ON HUMAN VISUAL SYSTEM 19
4.1 Test of CNN Network Architecture in Reclassification System 20
4.2 Experimental Description of the Proposed Reclassification System 22
4.3 Comparison of the Proposed Reclassification System with Existing Related Work 25
CHAPTER 5 TRADEMARK RETRIEVAL WORK BASED ON HUMAN VISUAL SYSTEM 27
5.1 Visual Weight of Trademark Design 27
5.2 Seven Features of Visual Weight 29
5.3 Interpretable Trademark Image Retrieval Assistance System 32
CHAPTER 6 TRADEMARK RECOGNITION WORK EXPERIMENTS BASED ON HUMAN VISUAL SYSTEM 34
6.1 Feasibility Test of the Proposed Trademark Assistance System 34
6.2 Test Results of the Proposed Trademark Assistance System on Actual Cases 37
6.3 Comparison and Explanation of Differences between the Proposed Trademark Assistance System and Existing Related Work 40
CHAPTER 7 CONCLUSION AND FUTURE WORK 43
REFERENCE 44
LIST OF FIGURES
Figure 1. Schematic diagram of features proposed by perceptual organization 2
Figure 2. The four foundations of visual principles 7
Figure 3. Extension of fire triangle 9
Figure 4. System flow chart of Huang and Du's proposed method [58] 9
Figure 5. Spread fire reclassification 16
Figure 6. Non-spread fire reclassification 16
Figure 7. The flowchart of fire alarm method based on human visual system 17
Figure 8. Proposed features tested on fBackYardFire.avi 22
Figure 9. Proposed features tested on forest5.avi 23
Figure 10. Proposed features tested on sBtFence2.avi 24
Figure 11. Implementation flowchart of the proposed trademark assistance system 32
Figure 12. Statistics of the number of features whose distance is less than the standard deviation 35
Figure 13. Statistics of where each feature's distance is less than the standard deviation 36
Figure 14. A hypothetical trademark case that uses visual psychology to confuse human vision 40
LIST OF TABLES
Table 1. AlexNet architecture in MATLAB 5
Table 2. GoogLeNet architecture in MATLAB 6
Table 3. Fire classes defined by the U.S. Fire Administration [73][74] 15
Table 4. Data reclassification in the Bilkent database 19
Table 5. Testing results of the proposed system on the Bilkent database 20
Table 6. Comparison of correct fire alarms on the Bilkent database 25
Table 7. Past trademark litigation cases evaluated with the proposed image similarity resolution system 37
Table 8. Comparison between the proposed method and the existing method in Figure 14 40 |
References |
[1] Gonzalez, R. C. and Woods, R. E., Digital Image Processing, 3rd ed., Pearson Education, 2009. [2] Koffka, K., Principles of Gestalt Psychology, Mimesis International, 2014. [3] Krizhevsky, A., Sutskever, I., Hinton, G. E., "ImageNet Classification with Deep Convolutional Neural Networks," in Neural Information Processing Systems (NIPS), 2012. [4] Nair, V., Hinton, G. E., "Rectified Linear Units Improve Restricted Boltzmann Machines," in International Conference on Machine Learning, Haifa, Israel, 2010. [5] Tüske, Z., Tahir, M. A., Schlüter, R., Ney, H., "Integrating Gaussian mixtures into deep neural networks: Softmax layer with hidden variables," in IEEE International Conference on Acoustics, Speech and Signal Processing, Brisbane, Australia, 2015. [6] Szegedy, C. et al., "Going deeper with convolutions," in IEEE Conference on Computer Vision and Pattern Recognition, 2015. [7] Moritz, M. A., Morais, M. E., Summerell, L. A., Carlson, J. M., Doyle, J., "Wildfires, complexity, and highly optimized tolerance," Proceedings of the National Academy of Sciences of the United States of America, vol. 102, no. 50, pp. 17912-17917, 13 Dec. 2005. [8] Li, Q., "Estimation of Fire Detection Time," in Performance-based Fire and Fire Protection Engineering, 2011. [9] Cheng, C., Sun, F., Zhou, X., "One Fire Detection Method Using Neural Networks," Tsinghua Science and Technology, vol. 16, no. 1, pp. 31-35, Feb. 2011. [10] Liang, Y.-H., Tian, W.-M., "Multi-sensor Fusion Approach for Fire Alarm Using BP Neural Network," in Intelligent Networking and Collaborative Systems (INCoS), Ostrava, Czech Republic, 2016. [11] Fonollosa, J., Solórzano, A., Jiménez-Soto, J. M., Oller-Moreno, S., Marco, S., "Gas sensor array for reliable fire detection," in Procedia Engineering, 2016. [12] Vijayalakshmi, S. R., Muruganand, S., "A survey of Internet of Things in fire detection and fire industries," in IoT in Social, Mobile, Analytics and Cloud (I-SMAC), Palladam, India, 2017.
[13] Malykhina, G. F., Guseva, A. I., Militsyn, A. V., "Early Fire Prevention in the Plant," in Industrial Engineering, Applications and Manufacturing (ICIEAM), St. Petersburg, Russia, 2017. [14] De Iacovo, A., Venettacci, C., Colace, L., Scopa, L., Foglia, S., "PbS Colloidal Quantum Dot Visible-Blind Photodetector for Early Indoor Fire Detection," IEEE Sensors Journal, vol. 17, no. 14, pp. 4454-4459, July 2017. [15] Wang, S., Xiao, X., Deng, T., Chen, A., Zhu, M., "A Sauter mean diameter sensor for fire smoke detection," Sensors and Actuators B: Chemical, vol. 281, pp. 920-932, Feb. 2019. [16] Sowah, R., Ampadu, K. O., Ofoli, A. R., Koumadi, K., Mills, G. A., Norte, J., "A Fire-Detection and Control System in Automobiles," IEEE Industry Applications Magazine, vol. 25, no. 2, pp. 57-67, Jan. 2019. [17] Toreyin, B. U., Dedeoglu, Y., Cetin, A. E., "Contour based Smoke Detection in Video using Wavelets," in European Signal Processing Conference (EUSIPCO), 2006. [18] Xiong, Z., Caballero, R., Wang, H., Finn, M., Lelic, A. M., Peng, P.-Y., "Video-Based Smoke Detection: Possibilities, Techniques, and Challenges," in IFPA-Fire Suppression & Detection Research & Applications, 2007. [19] Li, H., Chang, S., Li, Z., Shao, L., "Color Context Analysis Based Efficient Real-Time Flame Detection Algorithm," in 3rd IEEE Conference on Industrial Electronics and Applications (ICIEA), 2008. [20] Yuan, F., "A fast accumulative motion orientation model based on," Pattern Recognition Letters, vol. 29, no. 7, pp. 925-932, May 2009. [21] Calderara, S., Piccinini, P., Cucchiara, R., "Smoke detection in video surveillance: A MoG model in the wavelet domain," in International Conference on Computer Vision Systems (ICVS), 2008. [22] Luo, Q., Han, N., Kan, J., Wang, Z., "Effective Dynamic Object Detecting for Video-Based Forest Fire Smog Recognition," in 2nd International Congress on Image and Signal Processing (CISP), 2009.
[23] Li, J., Zou, X., Wang, L., "The design and implementation of fire smoke detection system based on FPGA," in Chinese Control and Decision Conference (CCDC), Taiyuan, China, 2012. [24] Lee, C.-Y., Lin, C.-T., Hong, C.-T., Su, M.-T., "Smoke detection using spatial and temporal analyses," International Journal of Innovative Computing, Information and Control, vol. 8, no. 6, pp. 1-11, 2012. [25] Habiboğlu, Y.H., Günay, O., Çetin, A.E., "Covariance matrix-based fire and flame detection method in video," Machine Vision and Applications, vol. 23, no. 6, pp. 1103-1113, Nov. 2012. [26] Mueller, M., Karasev, P., Kolesov, I., Tannenbaum, A., "Optical Flow Estimation for Flame Detection in Videos," IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2786-2797, July 2013. [27] Zhao, Y., Li, Q., Gu, Z., "Early smoke detection of forest fire video using CS Adaboost algorithm," Light and Electron Optics, vol. 126, no. 19, pp. 2121-2124, Oct. 2015. [28] Wang Y., "Smoke Recognition Based on Machine Vision," in International Symposium on Computer, Consumer and Control (IS3C), Xi'an, China, 2016. [29] Favorskaya, M., Pyataeva, A., Popov A., "Spatio-temporal Smoke Clustering in Outdoor Scenes Based on Boosted Random Forests," Procedia Computer Science, vol. 96, pp. 762-771, 2016. [30] Rabeb, K., Sebastien, F., Moez, B., Farhat, F., Eric, M., "Video smoke detection review: State of the art of smoke detection in visible and IR range," in Smart, Monitored and Controlled Cities (SM2C), Sfax, Tunisia, 2017. [31] Chi, R., Lu, Z.-M., Ji, Q.-G., "Real-time multi-feature based fire flame detection in video," IET Image Processing, vol. 11, no. 1, pp. 31-37, April 2017. [32] Vijayalakshmi, S. R., Muruganand, S., "Fire alarm based on spatial temporal analysis of fire in video," in The Second International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 2018. 
[33] Çelik, T., Demirel, H., "Fire detection in video sequences using a generic color model," Fire Safety Journal, vol. 44, no. 2, pp. 147-158, 2009. [34] Wang, W., Zhou, H., "Fire detection based on flame color and area," in IEEE International Conference on Computer Science and Automation Engineering (CSAE), 2012. [35] Chen, J., Bao, Q., "Digital image processing based fire flame color and oscillation," Procedia Engineering, vol. 45, pp. 595-601, 2012. [36] Shidik, G. F., Adnan, F. N., Supriyanto, C., Pramunendar, R. A., Andono, P. N., "Multi Color Feature, Background Subtraction and Time Frame Selection for Fire Detection," in International Conference on Robotics, Biomimetics, Intelligent Computational Systems (ROBIONETICS), Yogyakarta, Indonesia, 2013. [37] Rudz, S., Chetehouna, K., Hafiane, A., Laurent, H., Séro-Guillaume, O., "Investigation of a novel image segmentation method dedicated to forest fire applications," Measurement Science and Technology, vol. 24, no. 7, June 2013. [38] Foggia, P., Saggese, A., Vento, M., "Real-Time Fire Detection for Video-Surveillance Applications Using a Combination of Experts Based on Color, Shape, and Motion," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 9, pp. 1545-1556, Sept. 2015. [39] Poobalan, K., Liew, S. C., "Fire Detection Based on Color Filters and Bag-of-Features Classification," in IEEE Student Conference on Research and Development (IEEE SCOReD), 2015. [40] Dimitropoulos, K., Barmpoutis, P., Grammalidis, N., "Spatio-Temporal Flame Modeling and Dynamic Texture Analysis for Automatic Video-Based Fire Detection," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 2, pp. 339-351, Feb. 2015. [41] Chino, D. Y. T., Avalhais, L. P. S., Rodrigues, J. F., Traina, A. J. M., "BoWFire: Detection of Fire in Still Images by Integrating Pixel Color and Texture Analysis," in 28th SIBGRAPI Conference on Graphics, Patterns and Images, Salvador, Brazil, 2015.
[42] Benjamin, S. G., Radhakrishnan, B., Nidhin, T. G., Padma Suresh, L., "Extraction of Fire Region From Forest Fire Images Using Color Rules and Texture Analysis," in International Conference on Emerging Technological Trends (ICETT), 2016. [43] Zhang, Y., Guan, Y., Xu, X., Li, Y., Han, T., "Research on Auto Extraction of Interested Region in Fire Image Compression," in Networks Security, Wireless Communications and Trusted Computing, 2009. [44] Kim, D., Wang, Y.-F., "Smoke Detection in Video," in World Congress on Computer Science and Information Engineering, Los Angeles, CA, USA, 2009. [45] Wang, L., Ye, M., Zhu, Y., "A Hybrid Fire Detection using Hidden Markov Model and Luminance Map," in International Conference of Medical Image Analysis and Clinical Application (MIACA), 2010. [46] Rossi, L., Akhloufi, M., Tison, Y., "On the use of stereovision to develop a novel instrumentation system to extract geometric fire fronts characteristics," Fire Safety Journal, vol. 46, pp. 9-20, 2011. [47] Wong, K. K., Fong, N. K., "Experimental study of video fire detection and its applications," Procedia Engineering, vol. 71, pp. 316-327, 2014. [48] Polivka, T. N., Wang, J., Ellison, L. T., Hyer, E. J., Ichoku, C. M., "Improving Nocturnal Fire Detection With the VIIRS Day-Night Band," IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 9, pp. 5503-5519, Sept. 2016. [49] Lin, Z., Chen, F., Niu, Z., Li, B., Yu, B., Jia, H., Zhang, M., "An active fire detection algorithm based on multi-temporal FengYun-3C VIRR data," Remote Sensing of Environment, vol. 211, no. 15, pp. 376-387, June 2018. [50] Yu, C., Zhang, Y., Fang, J., Wang, J., "Texture Analysis of Smoke for Real-Time Fire Detection," in Second International Workshop on Computer Science and Engineering, Qingdao, China, 2009.
[51] Chen, S., Luo, C., Chen, Y., Zhang, W., Hou, J., Qian, J., "Design of large space fire alarm controller based on intelligent video surveillance," in Electric Information and Control Engineering, Wuhan, China, 2011. [52] Frizzi, S., Kaabi, R., Bouchouicha, M., Ginoux, J. M., Moreau, E., Fnaiech, F., "Convolutional neural network for video fire and smoke detection," in IEEE Annual Conference of the IEEE Industrial Electronics Society (IECON), 2016. [53] Wu, X., Lu, X., Leung, H., "An adaptive threshold deep learning method for fire and smoke detection," in IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2017. [54] Yin, Z., Wan, B., Yuan, F., "A Deep Normalization and Convolutional Neural Network for Image Smoke Detection," IEEE Access, vol. 5, pp. 18429-18438, Sept. 2017. [55] Zhang, Q. X., Lin, G. H., Zhang, Y. M., Xu, G., Wang, J. J., "Wildland forest fire smoke detection based on faster R-CNN using synthetic smoke images," in Procedia Engineering, 2018. [56] Muhammad, K., Ahmad, J., Mehmood, I., Rho, S., Baik, S. W., "Convolutional Neural Networks Based Fire Detection in Surveillance Videos," IEEE Access, vol. 6, pp. 18174-18183, March 2018. [57] Xu, G., Zhang, Q., Liu, D., Lin, G., Wang, J., Zhang, Y., "Adversarial Adaptation From Synthesis to Reality in Fast Detector for Smoke Detection," IEEE Access, vol. 7, pp. 29471-29483, March 2019. [58] Huang, X. and Du, L., "Fire Detection and Recognition Optimization Based on Virtual Reality Video Image," IEEE Access, vol. 8, pp. 77951-77961, 2020. [59] Garner, B. A., Black's Law Dictionary, 11th ed., West Group, 2019. [60] "Taiwan Intellectual Property Office - Review criteria of 'suspected misunderstanding'," 31 July 2017. [Online]. Available: https://www.tipo.gov.tw/public/Attachment/690b8217-d7da-48f5-9187-a7b9284cdbc0.pdf. [61] Gary, R., "Brand Names Before the Industrial Revolution," National Bureau of Economic Research, 2008.
[62] Callmann, R., The Law of Unfair Competition, Trademarks and Monopolies, 4th ed., Callaghan, 1998. [63] Zhu, S. and Jin, Y., "Empirical analysis of reverse confusion in trademark infringement," Journal of Nanjing University of Posts and Telecommunications, 2019. [64] Tien, D. N., Huu, H. H. N., Thanh, H. L., "Trademark image retrieval based on scale, rotation, translation invariant features," in 2013 RIVF International Conference on Computing & Communication Technologies - Research, Innovation, and Vision for Future (RIVF), Hanoi, 2013. [65] Agrawal, D., Jalal, A. S., Tripathi, R., "Trademark image retrieval by integrating shape with texture feature," in 2013 International Conference on Information Systems and Computer Networks, Mathura, 2013. [66] Joshi, K. D., Bhavsar, S. N., Sanghvi, R. C., "Image Retrieval System using Intuitive Descriptors," Procedia Technology, vol. 14, pp. 535-542, 2014. [67] Meenalochini, M., Saranya, K., Rajkumar, G. V., Mahto, A., "Perceptual Hashing for Content Based Image Retrieval," in 2018 3rd International Conference on Communication and Electronics Systems (ICCES), 2018. [68] Showkatramani, G., Khatri, N., Landicho, A., Layog, D., "Deep Learning Approach to Trademark International Class Identification," in 18th IEEE International Conference on Machine Learning and Applications (ICMLA), Boca Raton, FL, USA, 2019. [69] Abadi, H. H. N. and Pecht, M., "Artificial Intelligence Trends Based on the Patents Granted by the United States Patent and Trademark Office," IEEE Access, vol. 8, pp. 81633-81643, 2020. [70] Yan, C., Gong, B., Wei, Y., Gao, Y., "Deep Multi-View Enhancement Hashing for Image Retrieval," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 4, pp. 1445-1451, April 2021. [71] Trappey, C. V., Trappey, A. J.
and Lin, S. C. C., "Intelligent trademark similarity analysis of image, spelling, and phonetic features using machine learning methodologies," Advanced Engineering Informatics, vol. 45, p. 101120, 2020. [72] Hung, K.-M., Chen, L.-M., Chen, T.-W., "A Novel Hierarchical Wildfire Alarm System based on Vegetation Features," Journal of Computer, accepted, June 2021. [73] Ted, B. et al., Fire Detection and Suppression Systems, 3rd ed., International Fire Service Training Association, 2005, p. 10. [74] "Choosing and using fire extinguishers," U.S. Fire Administration, [Online]. Available: https://www.usfa.fema.gov/prevention/outreach/extinguishers.html. [Accessed 12 Dec. 2017]. [75] Zhao, B., Huang, B., Zhong, Y., "Transfer Learning With Fully Pretrained Deep Convolution Networks for Land-Use Classification," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 9, pp. 1436-1440, Sept. 2017. [76] Jing, P., Su, Y., Nie, L., Gu, H., "Predicting Image Memorability Through Adaptive Transfer Learning From External Sources," IEEE Transactions on Multimedia, vol. 19, no. 5, pp. 1050-1062, May 2017. [77] Huang, J., Zhou, Z., "Transfer metric learning for unsupervised domain adaptation," IET Image Processing, vol. 13, no. 5, pp. 804-810, April 2019. [78] "Simple Fire and Smoke Video Clips," Bilkent EE Signal Processing Group, [Online]. Available: http://signal.ee.bilkent.edu.tr/VisiFire/Demo/SampleClips.html. [79] Kim, T., "Starbucks, a fierce battle to defend the logo," darts-ip, 5 July 2019. [Online]. Available: https://www.darts-ip.com/starbucks-logo-battle/. [80] "7-Eleven Wins Trademark Infringement Suit vs. Super-7 Store," Convenience Store News, 23 June 2015. [Online]. Available: https://csnews.com/7-eleven-wins-trademark-infringement-suit-vs-super-7-store. [81] Li, Z.-W., "Learned the lesson of being defeated by 'Hodilao' in the lawsuit, Haidilao registered 200 trademarks in 2 days (in Traditional Chinese)," udn.com, 5 Nov. 2020. [Online].
Available: https://udn.com/news/story/7332/4991502?from=udn-ch1_breaknews-1-0-news. [82] Li, Y. and Peng, G., "Trademark battle! Starbucks defeated in lawsuit against e-coffee (in Traditional Chinese)," TVBS News, 22 Dec. 2007. [Online]. Available: https://news.tvbs.com.tw/local/300998. [Accessed 16 May 2016]. [83] Hung, K.-M., Chen, L.-M., Chen, T.-W., "Trademark infringement recognition assistance system based on human visual Gestalt psychology and trademark design," EURASIP Journal on Image and Video Processing, accepted, June 2021. [84] Bradley, S., "Design Principles: Visual Weight And Direction," Smashing Magazine, 12 Dec. 2014. [Online]. Available: https://www.smashingmagazine.com/2014/12/design-principles-visual-weight-direction/. [85] Bradley, S., Design Fundamentals - Elements, Attributes, & Principles, 2nd ed., Boulder, Colorado: Vanseo Design, 2018. [86] Bradley, D. and Roth, G., "Adaptive Thresholding using the Integral Image," Journal of Graphics Tools, vol. 12, no. 2, pp. 13-21, 2007. [87] Eagleman, D., "Visual illusions and neurobiology," Nature Reviews Neuroscience, vol. 2, pp. 920-926, 2001. [88] Pedersen, B. M., "Logo Database - Graphis," Graphis Inc., 1986. [Online]. Available: http://www.graphis.com/logos/. [Accessed 2020]. [89] "Japan Platform for Patent Information | J-PlatPat (JPP)," National Center for Industrial Property Information and Training, [Online]. Available: https://www.j-platpat.inpit.go.jp/. [90] "USPTO United States Patent and Trademark Office," Office of the Chief Communications Officer, 16 Oct. 2019. [Online]. Available: https://www.uspto.gov/. [Accessed 31 Dec. 2020]. |
Full-text access rights | |