§ Browse Thesis Bibliographic Record
  
System ID U0002-2907202005071500
DOI 10.6846/TKU.2020.00867
Title (Chinese) 以卷積類神經網路於衛星影像颱風結構與降雨量推估之研究
Title (English) Analyzing the Characteristics of Typhoon Structure from Satellite Imagery to Estimate Rainfall by Convolutional Neural Networks
Title (third language)
Institution Tamkang University (淡江大學)
Department (Chinese) 水資源及環境工程學系碩士班
Department (English) Department of Water Resources and Environmental Engineering
Foreign institution
Foreign college
Foreign graduate school
Academic year 108 (2019–2020, ROC calendar)
Semester 2
Publication year 109 (2020, ROC calendar)
Author (Chinese) 許家宇
Author (English) Chia-Yu Hsu
Student ID 608480090
Degree Master's
Language English
Second language
Oral defense date 2020-07-02
Pages 81
Committee Advisor - 張麗秋 (Li-Chiu Chang)
Member - 張斐章 (Fi-John Chang)
Member - 張麗秋 (Li-Chiu Chang)
Member - 蔡孝忠
Keywords (Chinese) 類神經網路 (artificial neural networks)
卷積類神經網路 (convolutional neural networks)
衛星影像 (satellite imagery)
颱風解構特徵分析 (typhoon dissipation characteristic analysis)
向量化颱風路徑 (vectorized typhoon track)
總降雨量預測 (total rainfall forecast)
時雨量預測 (hourly rainfall estimation)
Keywords (English) Artificial neural networks
convolutional neural networks (CNNs)
typhoon dissipation
satellite imagery
vectorized typhoon track
total rainfall forecast
hourly rainfall estimation
Keywords (third language)
Discipline classification
Chinese Abstract (translated)
Taiwan lies in the subtropical northwestern Pacific, on the main paths of typhoons, and on average 4 to 5 typhoons strike Taiwan each year. At the same time, Taiwan's high mountains and deep valleys run in unbroken succession, its mountain ranges stretch from north to south, and most rivers flow east or west; rivers are therefore short, steep, and fast-flowing, so that intense rainfall during typhoons often floods downstream areas within a few hours, severely damaging the economy and even endangering lives. Reservoirs have thus become Taiwan's most important hydraulic facilities for mitigating flood disasters, and the abundant rainfall brought by typhoons is also one of their major water sources. Because of Taiwan's topography, a phase-locked effect arises when a typhoon crosses the Central Mountain Range, so a typhoon's track position is highly correlated with the rainfall it brings to a given watershed. With advances in meteorological satellites, the resolution and observation capability of satellite cloud imagery have improved, and in recent years such imagery has become an indispensable initial condition for many numerical models that estimate typhoon intensity and total rainfall. Meanwhile, with the progress of artificial intelligence (AI), artificial neural networks (ANNs) have become one of the most popular tools across research fields. This study applies the powerful image-recognition capability of convolutional neural networks (CNNs) to analyze the dissipation characteristics of typhoons on satellite cloud imagery, to estimate the total rainfall a typhoon brings, and to forecast short-lead-time hourly rainfall.
Taking the Shihmen Reservoir watershed as the study case, this study collects infrared satellite cloud imagery (East Asia coverage), typhoon tracks, and Shihmen watershed rainfall data for 18 typhoon events from 2007 to 2018, together with infrared satellite cloud imagery (Taiwan coverage) and QPESUMS data for 15 typhoon events from 2007 to 2016. It analyzes the relationship between the percentage decrease in cloud-top temperature during typhoon dissipation and the rainfall distribution, adopts SOM-based typhoon-track clustering to examine the influence of topography on typhoon dissipation, and discusses the analysis results by SOM track cluster. Two CNN models are then proposed: CNN-TR and CNN-HR. The CNN-TR model takes the East Asia coverage infrared imagery together with the typhoon track as input, and its output is the accumulated rainfall from the issuance to the cancellation of the sea warning for each typhoon. Among the 18 available typhoon events, the CNN-TR model achieves an RMSE of 39.1 mm over 4 test events; cross-validation over all 18 events yields an R2 of 0.74 and an RMSE of 121.3 mm, and removing two outlier events improves the R2 to 0.88 and reduces the RMSE to 71.3 mm, whereas the climatology model under the same conditions yields an R2 of 0.69 and an RMSE of 179.4 mm. The results show that the CNN-TR model provides highly credible total-rainfall forecasts for most SOM track clusters, and for Typhoon Morakot (2009) the total rainfall could be forecast accurately two days in advance. The CNN-HR model uses the Taiwan coverage infrared imagery with QPESUMS data as input and outputs hourly rainfall forecasts 1 to 3 hours ahead; on the test events it achieves an R2 of 0.847 and an RMSE of 3.15 mm at t+1, an R2 of 0.83 and an RMSE of 4.5 mm at t+2, and an R2 of 0.645 and an RMSE of 4.67 mm at t+3.
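The skill scores quoted throughout the abstract can be computed from paired observed and estimated rainfall totals. A minimal sketch of the two metrics in NumPy, assuming R2 denotes the coefficient of determination; the event totals below are hypothetical values for illustration only, not thesis data:

```python
import numpy as np

def rmse(obs, est):
    """Root-mean-square error between observed and estimated rainfall (mm)."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return float(np.sqrt(np.mean((obs - est) ** 2)))

def r_squared(obs, est):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    ss_res = np.sum((obs - est) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical event totals (mm), for illustration only.
observed  = [120.0, 480.0, 260.0, 90.0]
estimated = [140.0, 430.0, 300.0, 70.0]
print(rmse(observed, estimated), r_squared(observed, estimated))
```

Both metrics are computed over whole typhoon events for the total-rainfall model and over hourly time steps for the hourly model.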
English Abstract
Over the past decades, typhoons have caused a great number of natural disasters and severely impacted not only the society but also the economy of Taiwan. Despite the destruction, the abundant precipitation a typhoon brings is one of the most essential water resources for Taiwan. Therefore, reservoirs play an important role in controlling floods and storing floodwater, tasks that can be carried out more efficiently with a highly reliable forecast of typhoon-related rainfall. Due to the topography of Taiwan, the phase-locked effect influences specific watersheds when a typhoon crosses the Central Mountain Range (CMR), so that a relationship exists between the total rainfall and the typhoon track. With the advance of technology, more precise and diverse meteorological observations are available during typhoon periods, making real-time rainfall estimation more accurate. Satellite imagery, whose observation capability and resolution have also been enhanced in recent years, has contributed to disaster preparedness for over 40 years and serves as an initial condition for estimating the intensity and rainfall of a typhoon in many numerical models. On the other hand, with the progress of artificial intelligence (AI), artificial neural networks (ANNs) have become a state-of-the-art and popular approach to data analysis. This meteorological study utilizes convolutional neural networks (CNNs), an ANN-based algorithm with a powerful ability for image identification and object classification, to learn the dissipation characteristics of typhoons on satellite images and to provide an objective technique for estimating typhoon-related rainfall over the Shihmen watershed, including the total rainfall and the hourly rainfall 1 to 3 hours ahead. Moreover, a self-organizing map (SOM) of typhoon tracks is introduced to investigate the effect of the topography of the CMR on typhoon dissipation.
For these two purposes, two CNN-based models are developed to estimate the total rainfall and the hourly rainfall during typhoon events, respectively. The model for total rainfall is trained with satellite infrared brightness temperature (East Asia coverage) and the vectorized typhoon track from 14 typhoon events during 2007 to 2018, achieving an RMSE of 39.1 mm on the 4 typhoon events reserved for testing. To evaluate its performance, a cross-validation over the 18 typhoon events is implemented: the CNN model achieves an R2 of 0.88 and an RMSE of 71.3 mm with outliers removed, while the climatology model achieves an R2 of 0.69 and an RMSE of 179.4 mm under the same conditions. The results show that the CNN model can forecast the total rainfall with high credibility two days in advance. Subsequently, the other CNN model estimates the hourly rainfall 1 to 3 hours ahead and is trained with satellite infrared brightness temperature (Taiwan coverage) and QPESUMS data from 12 events during 2007 to 2018. It achieves an R2 of 0.847, 0.83, and 0.645 and an RMSE of 3.15, 4.5, and 4.67 mm for the 1-h-, 2-h-, and 3-h-ahead estimations, respectively.
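As a rough illustration of the convolution and pooling operations underlying the CNN models described above (detailed in Section 3.2 of the thesis), the sketch below applies one convolution kernel, a ReLU activation, and a 2×2 max-pool to a toy normalized brightness-temperature grid. The grid and kernel values are hypothetical, not the thesis's data or trained weights:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2x2(fmap):
    """Non-overlapping 2x2 max-pooling (truncates odd trailing rows/cols)."""
    h, w = fmap.shape
    return fmap[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.random((8, 8))            # toy normalized IR1 brightness field
edge = np.array([[1., 0., -1.],     # hypothetical edge-detecting kernel
                 [1., 0., -1.],
                 [1., 0., -1.]])
fmap = np.maximum(conv2d_valid(img, edge), 0.0)   # ReLU activation
pooled = maxpool2x2(fmap)
print(fmap.shape, pooled.shape)     # (6, 6) -> (3, 3)
```

Stacking several such convolution/pooling stages and flattening the result into fully connected layers yields a regression head for rainfall, which is the general shape of the CNN-TR and CNN-HR architectures.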
Abstract (third language)
Table of Contents
Acknowledgements		I
Chinese Abstract		III
Abstract		VI
Contents		VIII
List of Figures	X
List of Tables	XIV
Chapter 1	Introduction	1
1.1 Background	1
1.2 Motivation and Purpose	2
Chapter 2	Literature Review	4
2.1 Application of Artificial Neural Network	4
2.2 Application of Convolutional Neural Network	5
2.3 Typhoon analysis through satellite images	7
Chapter 3	Methodology	9
3.1 Artificial neural networks	9
3.2 Convolutional neural networks	10
3.2.1 Architecture of CNNs	11
3.2.2 Fully Connected Layer	14
Chapter 4	Case Study	16
4.1 Study Area	16
4.2 Data Collection	18
4.2.1 Satellite Observations	18
4.2.2 Typhoon events and Rainfall data from rain gauges	21
4.2.3 QPESUMS data	25
4.3 Data Processing and Model Setting	26
4.3.1 Preparation for Total Rainfall Prediction	26
4.3.2 Preparation for Hourly Rainfall Prediction	35
4.4 Climatology Model	39
Chapter 5	Results and Discussions	43
5.1 Total Rainfall Prediction	43
5.1.1 Data Analysis for Cloud Fraction	43
5.1.2 Performance of the CNN-TR	48
5.1.3 Comparison to Climatology Model	50
5.2 Hourly Rainfall Prediction	58
Chapter 6	Conclusions and Suggestions	60
6.1 Conclusions	60
6.2 Suggestions	62
References		63
Appendix A.	Results of Data Analysis for Cloud Fraction	66
Appendix B.	Results of Hourly Rainfall Prediction	70

List of Figures
Figure 3.1 Architecture of Convolutional Neural Networks (CNNs)	11
Figure 3.2 (a) Input of an image “0”. (b) The digitized image of the input. (c) The digitized image after normalization. (d) The feature map produced by a kernel.	13
Figure 4.1 The Location of Shihmen Watershed and 10 rainfall stations.	17
Figure 4.2 IR1 brightness temperature image of Taiwan coverage with a resolution of 0.0115° latitude/longitude at 0932 12 Jul 2013 during Typhoon Soulik.	20
Figure 4.3 IR1 brightness temperature image of East Asia coverage with a resolution of 0.0313° latitude/longitude at 0932 12 Jul 2013 during Typhoon Soulik.	20
Figure 4.4 SOM topological map of typhoon track. The neuron of each cluster is marked on the upper right corner.	23
Figure 4.5 The tracks of 18 typhoon events grouped in the different neuron clusters. The number of neuron (cluster) is presented on the top of each sub-figure.	24
Figure 4.6 The coverage of QPESUMS data with estimated rain displayed at 1330 28 Sep 2008 during Typhoon Jangmi (200815)	25
Figure 4.7 (a) An IR1 image with a coverage of East Asia at 1830 12 Sep 2008 during Typhoon Sinlaku (200813), (b) the boundary extracted from (a), (c) the corresponding image within the boundary, and (d) the cloud fraction under 219 K with the typhoon center marked with a blue “x”. The dark-cyan boundary indicates the location of Shihmen Watershed, and the blue dash-dotted line indicates the radius of 34-kt wind.	28
Figure 4.8 (a) A vectorized typhoon track with weight values plotted in the same coverage as the extracted IR1 data from 1130 11 Sep 2008 to 2030 15 Sep 2008 at 3-hour intervals during Typhoon Sinlaku (200813). (b) The corresponding image within the boundary. The 1-D typhoon track (red line) is excluded when implementing the CNN-TR model.	31
Figure 4.9 The architecture of the CNN-TR model to estimate total rainfall.	34
Figure 4.10 (a) An IR1 image with a coverage of Taiwan at 0532 19 Sep 2010 during Typhoon Fanapi (201011). (b) The corresponding QPE image re-gridded to the same resolution as the IR1 image.	36
Figure 4.11 The architecture of the CNN-HR model to estimate hourly rainfall.	38
Figure 4.12 The climatology map for Shihmen Watershed built from typhoon events from 1980 to 2018.	42
Figure 5.1 The relationship between rainfall temporal distribution and typhoon center location for the typhoon tracks of neuron 3. (a, c, e) The relationship between the amount of cloud fraction change and the rainfall temporal distribution for Typhoons Soudelor, Dujuan, and Megi. The bars represent the rainfall hyetograph against the left Y-axis. The red dashed line marks the time of typhoon landfall, and (b), (d), (f) are the IR1 images at the time of typhoon landfall.	46
Figure 5.2 The relationship between rainfall temporal distribution and typhoon center location for the typhoon tracks of neurons 14 and 15. (a, c, e) The relationship between the amount of cloud fraction change and the rainfall temporal distribution for Typhoons Wipha, Maria, and Fitow. The red dashed line marks the time when the typhoon center was closest to Taiwan, and (b), (d), (f) are the IR1 images at that time.	47
Figure 5.3 Learning curves (i.e., RMSE for each epoch) for CNN-TR with training data (blue line) and testing data (red line).	49
Figure 5.4 Comparison of outputs of Typhoons Sepat (200708), Morakot (200908), Fitow (201323) and Soudelor (201513) by CNN-TR with the observation.	49
Figure 5.5 (a) The result of CM-S1 using 17 events for training and 1 event for testing by cross-validation in turn. The figure presents the estimated total rainfall on the x-axis and the observed total rainfall on the y-axis, with R2 and RMSE shown in the upper-left legend. (b) The scatter plot corresponding to Figure 5.5 (a) with the outliers (Typhoons Fanapi and Nesat) deleted.	54
Figure 5.6 (a) The result of CM-S2 using 116 events for training and 1 event for testing by cross-validation in turn. (b) The scatter plot corresponding to Figure 5.6 (a) with the outliers (Typhoons Fanapi and Nepartak) deleted.	55
Figure 5.7 (a) The result of CNN using 17 events for training and 1 event for testing by cross-validation in turn. (b) The scatter plot corresponding to Figure 5.7 (a) with the outliers (Typhoons Saola and Nesat) deleted.	56
Figure A 1 The relationship between rainfall temporal distribution and typhoon center location in typhoon tracks of neuron 4. (a, c, e) The relationship between the amount of cloud fraction change and rainfall temporal distribution. (b), (d), (f) are the IR1 image at the time of typhoon landfall.	66
Figure A 2 The relationship between rainfall temporal distribution and typhoon center location in typhoon tracks of neuron 5. (a, c, e) The relationship between the amount of cloud fraction change and rainfall temporal distribution. (b), (d), (f) are the IR1 image at the time of typhoon landfall.	67
Figure A 3 The relationship between rainfall temporal distribution and typhoon center location in typhoon tracks of neuron 1, 2 and 7. (a, c, e) The relationship between the amount of cloud fraction change and rainfall temporal distribution. (b), (d), (f) are the IR1 image at the time of typhoon landfall.	68
Figure A 4 The relationship between rainfall temporal distribution and typhoon center location in typhoon tracks of neuron 8, 9 and 13. (a, c, e) The relationship between the amount of cloud fraction change and rainfall temporal distribution. (b), (d), (f) are the IR1 image at the time of typhoon landfall.	69
Figure B 1 Learning curves of CNN-HR for 1 to 3-h-ahead estimation with IR1+QPE.	70
Figure B 2 Scatter plots of 1 to 3-h-ahead estimations (Y-axis) and observations (X-axis) from training events and testing events with IR1+QPE.	71
Figure B 3 Line Charts of 1 to 3-h-ahead estimations (red line) and observations (blue line) from training events with IR1+QPE.	72
Figure B 4 Line Charts of 1 to 3-h-ahead estimations (red line) and observations (blue line) from testing events with IR1+QPE.	73
Figure B 5 Learning curves of CNN-HR for 1 to 3-h-ahead estimation with QPE.	74
Figure B 6 Scatter plots of 1 to 3-h-ahead estimations (Y-axis) and observations (X-axis) from training events and testing events with QPE.	75
Figure B 7 Line Charts of 1 to 3-h-ahead estimations (red line) and observations (blue line) from training events with QPE.	76
Figure B 8 Line Charts of 1 to 3-h-ahead estimations (red line) and observations (blue line) from testing events with QPE.	77
Figure B 9 Learning curves of CNN-HR for 1 to 3-h-ahead estimation with IR1.	78
Figure B 10 Scatter plots of 1 to 3-h-ahead estimations (Y-axis) and observations (X-axis) from training events and testing events with IR1.	79
Figure B 11 Line Charts of 1 to 3-h-ahead estimations (red line) and observations (blue line) from training events with IR1.	80
Figure B 12 Line Charts of 1 to 3-h-ahead estimations (red line) and observations (blue line) from testing events with IR1.	81

List of Tables
Table 4.1 The data collected in this study.	18
Table 4.2 The properties of satellite observation data in this study.	19
Table 4.3 The information of typhoon event used in this study.	22
Table 4.4 The model setting, input and output shape in each layer of CNN-TR. The architecture is shown in Figure 4.9 in detail.	33
Table 4.5 The model setting, input and output shape in each layer of CNN-HR. The architecture is shown in Figure 4.11 in detail.	37
Table 4.6 List of typhoon events used for building climatology map.	40
Table 4.7 List of typhoon events used for building climatology map. (Continued).	41
Table 5.1 Comparison of the estimation of CM-S1, CM-S2 and CNN-TR among the 18 typhoon events by relative error.	57
Table 5.2 Results of hourly rainfall estimation with each input combination.	59
References
1.	Alvey, G. R., Zawislak, J., and Zipser, E. (2015). Precipitation properties observed during tropical cyclone intensity change. Monthly Weather Review, 143(11), 4476–4492.
2.	Cerveny, R. S., and Newman, L. E. (2000). Climatological relationships between tropical cyclones and rainfall. Monthly Weather Review, 128(9), 3329–3336.
3.	Chang, C. P., Yeh, T. C., and Chen, J. M. (1993). Effects of terrain on the surface structure of typhoons over Taiwan. Monthly Weather Review, 121(3), 734–752.
4.	Chang, F. J., Chiang, Y. M., Tsai, M. J., Shieh, M. C., Hsu, K. L., and Sorooshian, S. (2014). Watershed rainfall forecasting using neuro-fuzzy networks with the assimilation of multi-sensor information. Journal of Hydrology, 508, 374–384.
5.	Chang, L. C., Chang, F. J., Yang, S. N., Tsai, F. H., Chang, T. H., and Herricks, E. E. (2020). Self-organizing maps of typhoon tracks allow for flood forecasts up to two days in advance. Nature Communications, 11(1), 1–13.
6.	Chang, S. W. J. (1982). The orographic effects induced by an island mountain range on propagating tropical cyclones. Monthly Weather Review, 110(9), 1255–1270.
7.	Chao, C. C., Liu, G. R., and Liu, C. C. (2011). Estimation of the upper-layer rotation and maximum wind speed of tropical cyclones via satellite imagery. Journal of Applied Meteorology and Climatology, 50(3), 750–766.
8.	Chen, B.-F., Chen, B., Lin, H.-T., and Elsberry, R. L. (2019). Estimating tropical cyclone intensity by satellite imagery utilizing convolutional neural networks. Weather and Forecasting, 34(2), 447–465.
9.	Collobert, R., and Weston, J. (2008). A unified architecture for natural language processing: Deep neural networks with multitask learning. Proceedings of the 25th International Conference on Machine Learning, 160–167.
10.	Dvorak, V. F. (1975). Tropical cyclone intensity analysis and forecasting from satellite imagery. Monthly Weather Review, 103(5), 420–430.
11.	Feng, Y., Shebotnov, S., Brenner, C., and Sester, M. (2018). Ensembled convolutional neural network models for retrieving flood relevant tweets. CEUR Workshop Proceedings, 2283.
12.	He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, 770–778.
13.	Hong, J. S., Fong, C. T., Hsiao, L. F., Yu, Y. C., and Tzeng, C. Y. (2015). Ensemble typhoon quantitative precipitation forecasts model in Taiwan. Weather and Forecasting, 30(1), 217–237.
14.	Hung, N. Q., Babel, M. S., Weesakul, S., and Tripathi, N. K. (2009). An artificial neural network model for rainfall forecasting in Bangkok, Thailand. Hydrology and Earth System Sciences, 13(8), 1413–1425.
15.	Ichim, L., and Popescu, D. (2019). Flooded areas evaluation from aerial images based on convolutional neural network. IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, 9756–9759.
16.	Jiang, H. (2012). The relationship between tropical cyclone intensity change and the strength of inner-core convection. Monthly Weather Review, 140(4), 1164–1176.
17.	Jiang, J., Liu, J., Qin, C. Z., and Wang, D. (2018). Extraction of urban waterlogging depth from video images using transfer learning. Water (Switzerland), 10(10).
18.	Joyce, R. J., Janowiak, J. E., Arkin, P. A., and Xie, P. (2004). CMORPH: A method that produces global precipitation estimates from passive microwave and infrared data at high spatial and temporal resolution. Journal of Hydrometeorology, 5(3), 487–503.
19.	Kidder, S. Q., Kusselson, S. J., Knaff, J. A., Ferraro, R. R., Kuligowski, R. J., and Turk, M. (2005). The tropical rainfall potential (TRaP) technique. Part I: Description and examples. Weather and Forecasting, 20(4), 456–464.
20.	Kingma, D. P., and Ba, J. L. (2015). Adam: A method for stochastic optimization. 3rd International Conference on Learning Representations (ICLR 2015), Conference Track Proceedings, 1–15.
21.	Latifovic, R., Pouliot, D., and Campbell, J. (2018). Assessment of convolution neural networks for surficial geology mapping in the South Rae geological region, Northwest Territories, Canada. Remote Sensing, 10(2).
22.	Lee, C. S., Huang, L. R., Shen, H. S., and Wang, S. T. (2006). A climatology model for forecasting typhoon rainfall in Taiwan. Natural Hazards, 37(1–2), 87–105.
23.	Liu, G. R., Chao, C. C., and Ho, C. Y. (2008). Applying satellite-estimated storm rotation speed to improve typhoon rainfall potential technique. Weather and Forecasting, 23(2), 259–269.
24.	Murao, H., Nishikawa, I., and Kitamura, S. (1993). for the rainfall estimation using satellite imagery. 1211–1214.
25.	Peng, B., Meng, Z., Huang, Q., and Wang, C. (2019). Patch similarity convolutional neural network for urban flood extent mapping using bi-temporal satellite multispectral imagery. Remote Sensing, 11(21).
26.	Valverde Ramírez, M. C., de Campos Velho, H. F., and Ferreira, N. J. (2005). Artificial neural network technique for rainfall forecasting applied to the São Paulo region. Journal of Hydrology, 301(1–4), 146–162.
27.	Wee, G. (2019). Investigating the effect of typhoon track on rainfall spatial distribution in a watershed using artificial neural networks. Tamkang University.
28.	Wei, C., Hung, W. C., and Cheng, K. S. (2006). A multi-spectral spatial convolution approach of rainfall forecasting using weather satellite imagery. Advances in Space Research, 37(4), 747–753.
Thesis Full-Text Authorization
On campus
Print thesis release delayed until 2022-07-31
Electronic full text authorized for on-campus access
On-campus electronic thesis release delayed until 2022-07-31
On-campus bibliographic record available immediately
Off campus
Authorization granted
Off-campus electronic thesis release delayed until 2022-07-31
