
System ID
U00022402201919490400 
Thesis Title (Chinese)

仿生自動化K線圖閱讀系統 
Thesis Title (English)

Let machine read candlestick charts like human beings 
University
淡江大學 (Tamkang University)
Department (Chinese)
資訊工程學系全英語碩士班 
Department (English)
Master's Program, Department of Computer Science and Information Engineering (English-taught Program)
Academic Year
107 
Semester
1 
Year of Publication
108 
Author's Name (Chinese)
郭修志 
Author's Name (English)
Siou-Jhih Guo
Student ID
606780012 
Degree
Master's
Language
English
Date of Oral Defense
2018-12-28
Number of Pages
52
Thesis Committee
Advisor: 洪智傑; Committee Members: 彭文智, 林莊傑

Keywords (Chinese)
深度學習
量化交易
K線圖

Keywords (English)
Deep Learning
Quantitative Trading
Candlestick Chart

Subject Classification
Subjects > Applied Sciences > Information Engineering

Abstract (Chinese)
Since the 18th century, candlestick charts have been regarded as an important auxiliary tool in market analysis. Their principle is to divide a historical price chart into intervals of varying sizes, interpret the pattern in each interval, and then combine the implication of each pattern for the future trend into a final decision. Compared with other methods, candlestick charts succeed because they can digest a larger amount of historical information, extract from it the parts critical to the future trend, and do so with high precision. Building on the deep convolutional networks widely adopted in the recent rise of deep learning, this study constructs an automated decision system that reads candlestick charts and predicts future price trends. Its design imitates the process a human trader follows when reading a candlestick chart: aggregating the bullish or bearish implications of the pattern in each small interval into a final prediction of the price trend. The system consists of three components: a chart decomposer that splits large-interval price information into small intervals; an autoencoder that maps each small-interval chart into a low-dimensional representation and extracts its graphical features; and an RNN that infers the final price trend from the features of the small intervals. The system is trained and tested on six instruments traded on the Taiwan Futures Exchange (TX, MTX, TE, TF, XIF, GTF) and compared with existing methods based on traditional indicators such as SMA and K/D lines. The proposed system achieves higher accuracy, demonstrating the effectiveness and feasibility of this work.
Abstract (English)
Candlestick charts have been a very important tool for human traders making trading decisions since the 18th century. Inspired by how people read candlestick charts to make decisions, this thesis proposes a deep network framework, Deep Candlestick Predictor (DCP), that forecasts price movements by reading candlestick charts rather than the numerical data of financial reports. DCP contains a chart decomposer, which decomposes a given candlestick chart into several subcharts; a CNN-Autoencoder, which derives the best representation of each subchart; and an RNN, which forecasts the price movement of the (k+1)-th day. Extensive experiments are conducted on daily prices from a real dataset of six stock-index futures contracts on the Taiwan Futures Exchange, totaling 21,819 trading days. The experimental results show that the proposed DCP framework achieves higher accuracy than the traditional index-based model, demonstrating the effectiveness of designing a deep network to read candlestick charts like human beings.
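The three-stage pipeline described in the abstract (chart decomposer, then CNN-Autoencoder, then RNN) begins with a sliding-window split of a large chart into small subcharts. The following minimal sketch illustrates only that first, decomposition step; the function name, the tuple representation of candles, and the stride-1 window are illustrative assumptions (the thesis's actual decomposer produces 3-day subchart images, per Figure 9, for the CNN-Autoencoder to encode).

```python
# Hypothetical sketch of the "chart decomposer" stage: a k-day candlestick
# window is split into consecutive m-day subcharts (sliding window, stride 1).
# In DCP, each subchart would then be encoded by the CNN-Autoencoder and the
# sequence of encodings read by the RNN; names here are illustrative only.

def decompose_chart(candles, sub_len=3):
    """Split a list of daily candles, each an (open, high, low, close)
    tuple, into consecutive subcharts of sub_len days each."""
    if len(candles) < sub_len:
        raise ValueError("window is shorter than the subchart length")
    return [candles[i:i + sub_len] for i in range(len(candles) - sub_len + 1)]

# Example: a 5-day window yields three 3-day subcharts.
window = [(100, 103, 99, 102), (102, 104, 101, 103), (103, 105, 102, 101),
          (101, 102, 98, 99), (99, 101, 97, 100)]
subcharts = decompose_chart(window, sub_len=3)
print(len(subcharts))  # → 3
```

Overlapping windows preserve the continuity between adjacent subcharts, which is what lets the downstream RNN aggregate per-interval pattern evidence the way the abstract describes a human trader doing.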
Table of Contents
Abstract I
List of Content IV
List of Figure VI
List of Table VII
List of Formula VIII
Chapter 1 Introduction 1
Chapter 2 Related Works 6
2.1 Candlestick Charts Analysis 6
2.2 Time Series Forecasting 9
2.2.1 RNN 9
2.2.2 GRU 11
2.2.3 CNN 13
2.3 CNN-Autoencoder 15
Chapter 3 Problem Formulation 17
Chapter 4 DCP 19
4.1 Chart Decomposer 19
4.2 CAE 21
4.3 RNN 23
Chapter 5 Experiment Result 25
5.1 Settings and Dataset 25
5.1.1 Dataset 25
5.1.2 Experiment Workflow 30
5.2 Feature Efficiency 31
5.2.1 IEM 31
5.2.2 Performance Evaluation 33
5.3 Model Efficiency 37
5.3.1 1D CNN 38
5.3.2 2D CNN 40
5.3.3 Performance Evaluation 42
Chapter 6 Conclusion 44
References 47
List of Figure
Figure 1 : 20-day candlestick chart 2
Figure 2 : candlesticks 2
Figure 3 : Method summary 4
Figure 4 : Architecture of RNN 10
Figure 5 : General representation of RNN 10
Figure 6 : Calculation overview of GRU 13
Figure 7 : CNN overview 14
Figure 8: Workflow of CAE 16
Figure 9 : An illustrative example of a 3-day subchart 20
Figure 10 : CAE overview 22
Figure 11 : RNN overview 24
Figure 12 : IEM overview 33
Figure 13 : 1D CNN overview 38
Figure 14 : 1D tensor 38
Figure 15 : 1D CNN model 39
Figure 16 : 2D CNN overview 40
Figure 17 : Original subchart 40
Figure 18 : Concatenated subcharts for 2D CNN 40
Figure 19 : 2D CNN Model 41
List of Table
Table 1 : Comparison between different methods 8
Table 2 : Future merchandises 28
Table 3 : Distribution of trend on each year 29
Table 4 : Nested-CV results of the 10 best-performing models of DCP 34
Table 5 : Nested-CV results of 3 different classifiers of IEM 34
Table 6 : Scores of DCP (Experiment 10) and IEM (SVM), tested on the TX merchandise in 2016 34
Table 7 : Details of Nested-CV on each year for DCP (Experiment 10) and IEM (SVM) 35
Table 8 : Details of Nested-CV of the best 10 models of DCP 36
Table 9 : Number of data and cumulative total number of data 37
Table 10 : Nested-CV results of the 10 best-performing models of 1D CNN 43
Table 11 : Nested-CV results of the 10 best-performing models of 2D CNN 43
Table 12 : Performance comparison of the best models of RNN, 1D CNN and 2D CNN 43

References
[1] S. Asur and B. A. Huberman, "Predicting the future with social media," in Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Volume 01, 2010.
[2] D. Charles and R. Julie, Technical Analysis, 2006.
[3] G. C. Cawley and N. L. C. Talbot, "On overfitting in model selection and subsequent selection bias in performance evaluation," Journal of Machine Learning Research, vol. 11, pp. 2079-2107, 2010.
[4] S. Nison, Beyond Candlesticks: New Japanese Charting Techniques Revealed, Wiley, 1994.
[5] S. Nison, Japanese Candlestick Charting Techniques: A Contemporary Guide to the Ancient Investment Techniques of the Far East, New York Institute of Finance, 2001.
[6] H. A. Latane and R. J. Rendleman Jr, "Standard deviations of stock price ratios implied in option prices," The Journal of Finance, vol. 31, pp. 369-381, 1976.
[7] T. Kamo and C. Dagli, "Hybrid approach to the Japanese candlestick method for financial forecasting," Expert Systems with Applications, vol. 36, pp. 5023-5030, 2009.
[8] K. Martiny, "Unsupervised Discovery of Significant Candlestick Patterns for Forecasting Security Price Movements," in KDIR, 2012.
[9] E. Ahmadi, M. H. Abooie, M. Jasemi and Y. Z. Mehrjardi, "A nonlinear autoregressive model with exogenous variables neural network for stock market timing: The candlestick technical analysis," International Journal of Engineering, vol. 29, pp. 1717-1725, 2016.
[10] E. Ahmadi, M. Jasemi, L. Monplaisir, M. A. Nabavi, A. Mahmoodi and P. A. Jam, "New efficient hybrid candlestick technical analysis model for stock market timing on the basis of the Support Vector Machine and Heuristic Algorithms of Imperialist Competition and Genetic," Expert Systems with Applications, vol. 94, pp. 21-31, 2018.
[11] C.-F. Tsai and Z.-Y. Quan, "Stock prediction by searching for similarities in candlestick charts," ACM Transactions on Management Information Systems (TMIS), vol. 5, p. 9, 2014.
[12] Z.-Y. Quan, "Stock prediction by searching similar candlestick charts," in Data Engineering Workshops (ICDEW), 2013 IEEE 29th International Conference on, 2013.
[13] K.-i. Kamijo and T. Tanigawa, "Stock price pattern recognition - a recurrent neural network approach," in Neural Networks, 1990 IJCNN International Joint Conference on, 1990.
[14] K. H. Lee and G. S. Jo, "Expert system for predicting stock market timing using a candlestick chart," Expert Systems with Applications, vol. 16, pp. 357-364, 1999.
[15] J. T. Connor, R. D. Martin and L. E. Atlas, "Recurrent neural networks and robust time series prediction," IEEE Transactions on Neural Networks, vol. 5, pp. 240-254, 1994.
[16] G. Dorffner, "Neural networks for time series processing," in Neural network world, 1996.
[17] P. J. Werbos and others, "Backpropagation through time: what it does and how to do it," Proceedings of the IEEE, vol. 78, pp. 1550-1560, 1990.
[18] S. Hochreiter, "The vanishing gradient problem during learning recurrent neural nets and problem solutions," International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 6, pp. 107-116, 1998.
[19] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk and Y. Bengio, "Learning phrase representations using RNN encoder-decoder for statistical machine translation," arXiv preprint arXiv:1406.1078, 2014.
[20] X. Wang, W. Jiang and Z. Luo, "Combination of convolutional and recurrent neural network for sentiment analysis of short texts," in Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, 2016.
[21] D. Tang, B. Qin and T. Liu, "Document modeling with gated recurrent neural network for sentiment classification," in Proceedings of the 2015 conference on empirical methods in natural language processing, 2015.
[22] K. Tran, A. Bisazza and C. Monz, "Recurrent memory networks for language modeling," arXiv preprint arXiv:1601.01272, 2016.
[23] A. Krizhevsky, I. Sutskever and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in neural information processing systems, 2012.
[24] Y. Kim, "Convolutional neural networks for sentence classification," arXiv preprint arXiv:1408.5882, 2014.
[25] D. Britz, "Understanding Convolutional Neural Networks for NLP," 2018. [Online]. Available: http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/.
[26] A. Krizhevsky and G. E. Hinton, "Using very deep autoencoders for content-based image retrieval," in ESANN, 2011.
[27] P. Vincent, H. Larochelle, Y. Bengio and P.-A. Manzagol, "Extracting and composing robust features with denoising autoencoders," in Proceedings of the 25th international conference on Machine learning, 2008.
[28] H. Noh, S. Hong and B. Han, "Learning deconvolution network for semantic segmentation," in Proceedings of the IEEE international conference on computer vision, 2015.
[29] P. Baldi, "Autoencoders, unsupervised learning, and deep architectures," in Proceedings of ICML workshop on unsupervised and transfer learning, 2012.
[30] M. D. Zeiler and R. Fergus, "Visualizing and understanding convolutional networks," in European conference on computer vision, 2014.
[31] M. D. Zeiler, D. Krishnan, G. W. Taylor and R. Fergus, "Deconvolutional networks," 2010.
[32] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, pp. 1929-1958, 2014.
[33] T. Salimans and D. P. Kingma, "Weight normalization: A simple reparameterization to accelerate training of deep neural networks," in Advances in Neural Information Processing Systems, 2016.
[34] J. Patel, S. Shah, P. Thakkar and K. Kotecha, "Predicting stock and stock price index movement using trend deterministic data preparation and machine learning techniques," Expert Systems with Applications, vol. 42, pp. 259-268, 2015.
[35] C. Sun, A. Shrivastava, S. Singh and A. Gupta, "Revisiting unreasonable effectiveness of data in deep learning era," in Computer Vision (ICCV), 2017 IEEE International Conference on, 2017.
[36] R. Rothe, "Applying deep learning to real-world problems," 2018. [Online]. Available: https://medium.com/merantix/applying-deep-learning-to-real-world-problems-ba2d86ac5837.
[37] I. Sutskever, J. Martens and G. E. Hinton, "Generating text with recurrent neural networks," in Proceedings of the 28th International Conference on Machine Learning (ICML11), 2011.
[38] H. Sak, A. Senior and F. Beaufays, "Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition," arXiv preprint arXiv:1402.1128, 2014.
[39] R. Pascanu, T. Mikolov and Y. Bengio, "On the difficulty of training recurrent neural networks," in International Conference on Machine Learning, 2013.
[40] S. Kombrink, T. Mikolov, M. Karafiát and L. Burget, "Recurrent neural network based language modeling in meeting recognition," in Twelfth annual conference of the international speech communication association, 2011.
[41] Y. Goldberg, "Neural network methods for natural language processing," Synthesis Lectures on Human Language Technologies, vol. 10, pp. 1-309, 2017.
[42] W. S. Cleveland and R. McGill, "Graphical perception: Theory, experimentation, and application to the development of graphical methods," Journal of the American Statistical Association, vol. 79, pp. 531-554, 1984.
[43] D. Bahdanau, K. Cho and Y. Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.

Usage Permissions
The author consents to royalty-free reproduction of the print copy for academic use by in-library readers, available to the public from 2019-02-25, and consents to the browsing/printing service for the electronic full text, available to the public from 2019-02-25.


