Tamkang University Chueh Sheng Memorial Library (TKU Library)


System ID	U0002-1407202022393100
Title (Chinese)	深度學習在多用戶大規模輸入輸出系統中預編碼設計上之研究
Title (English)	Research on Precoding Design in Multi-user Large-scale Input-Output System by Deep Learning
University	Tamkang University
Department (Chinese)	電機工程學系碩士班 (Master's Program, Department of Electrical and Computer Engineering)
Department (English)	Department of Electrical and Computer Engineering
Academic Year	108
Semester	2
Publication Year	109 (2020)
Student Name (Chinese)	林榆澂
Student Name (English)	Yu-Cheng Lin
Student ID	607460051
Degree	Master's
Language	Chinese
Oral Defense Date	2020-07-09
Pages	54
Committee	Advisor: 李光啟
	Member: 陳喬恩
	Member: 何建興
Keywords (Chinese)	深度學習 (Deep Learning), 人工神經網路 (Artificial Neural Network)
Keywords (English)	Deep Learning, Artificial Neural Network
Subject Classification	Applied Sciences: Electrical and Electronic Engineering
Abstract (Chinese, translated)	With the standardization of 5G, the design of multi-user massive multiple-input multiple-output (MU M-MIMO) systems has become a key technology. The zero-forcing precoder is a common, simple precoding scheme: at high signal-to-noise ratio, it rewrites the MU M-MIMO system y = HFx + n as y = x + n, so that y is easily decoded into x. In general, the main difficulty of the zero-forcing precoder lies in inverting the Hermitian Wishart matrix HH^H; with the Chebyshev polynomial acceleration method, however, the eigenvalues of HH^H can be used to estimate x directly, reducing the computational complexity.
Recent studies have often tackled matrix eigenvalue problems with deep learning, but they typically impose many restrictions on the size and properties of the matrix; although they achieve good results, they lack applicability. This study attempts to improve the applicability of deep learning methods to matrix eigenvalue problems through a parallel neural network architecture, and to enhance the features of the data through conditional screening.
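The zero-forcing relation stated in the abstract (y = HFx + n reducing to y = x + n) can be checked numerically. A minimal NumPy sketch, not taken from the thesis; the dimensions (K = 4 users, M = 16 antennas), the QPSK constellation, and the noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

K, M = 4, 16            # K single-antenna users, M base-station antennas
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder: F = H^H (H H^H)^{-1}, so that H F = I.
W = H @ H.conj().T                  # K x K Hermitian Wishart-type matrix
F = H.conj().T @ np.linalg.inv(W)   # the costly step is inverting W

x = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2), size=K)
sigma = 1e-3                        # high-SNR noise level
n = sigma * (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)

y = H @ F @ x + n                   # received signal: y = HFx + n = x + n
print(np.max(np.abs(y - x)))        # on the order of the noise level
```

Power normalization of the precoder is omitted here for clarity; the point is only that HF = I turns the channel into y = x + n.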
Abstract (English)	With the standardization of 5G, the design of multi-user large-scale multiple-input multiple-output (MU M-MIMO) systems has become a key technology, and the zero-forcing precoder is a simple, widely used precoding method: at high signal-to-noise ratio, the received signal y is easily decoded into the transmitted signal x. Generally speaking, the main difficulty of the zero-forcing precoder is the inverse operation of the Hermitian Wishart matrix HH^H. However, with the Chebyshev polynomial acceleration method, the eigenvalues of HH^H can be used to estimate x directly, reducing the computational complexity.

In recent studies, matrix eigenvalue problems have often been solved by deep learning. However, those studies tend to place many restrictions on the size and properties of the matrix; although good performance has been achieved, they lack applicability. This study attempts to improve the applicability of deep learning methods to matrix eigenvalue problems with a parallel neural network architecture, and uses conditional sorting to improve the features of the data.
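The abstract's central point — that the eigenvalues of HH^H allow x to be estimated without explicitly inverting the matrix — can be illustrated with a simpler stand-in for the Chebyshev acceleration: a scaled Neumann series, which needs only (an estimate of) the largest eigenvalue of W = HH^H. This is a sketch under stated assumptions, not the thesis's algorithm; the `eigvalsh` call stands in for the eigenvalue estimate a trained network would supply.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 4, 16
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
W = H @ H.conj().T                  # Hermitian Wishart-type matrix HH^H

# Knowing the largest eigenvalue of W is enough to apply W^{-1} via
#   W^{-1} = (1/lam) * sum_{k>=0} (I - W/lam)^k,   lam >= lambda_max(W),
# using only matrix-vector products instead of an explicit inverse.
lam = np.max(np.linalg.eigvalsh(W))  # stand-in for the network's estimate
B = np.eye(K) - W / lam              # spectral radius of B is below 1

y = rng.standard_normal(K) + 1j * rng.standard_normal(K)
term = y / lam                       # k = 0 term of the series applied to y
z = term.copy()
for _ in range(300):                 # truncated series: z -> W^{-1} y
    term = B @ term
    z = z + term

print(np.linalg.norm(W @ z - y))     # residual shrinks toward zero
```

Each iteration costs one K x K matrix-vector product, so the eigenvalue estimate converts an O(K^3) inversion into a sequence of O(K^2) steps — the same complexity motivation the thesis gives for the Chebyshev approach.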
Table of Contents
Acknowledgments	I
Chinese Abstract	II
English Abstract	III
Table of Contents	IV
List of Figures	VI
List of Tables	VIII
Chapter 1  Introduction	1
1.1  Motivation and Objectives	1
1.2  Research Methods	3
1.3  Thesis Organization	5
Chapter 2  Related Work and Background	6
2.1  Problem Description	6
2.2  Approaches to Estimating the Precoded Signal	7
2.2.1  Matrix Inversion Approximation and Fixed-Point Iteration Methods	8
2.2.2  Chebyshev Acceleration Combined with the Asymptotic Eigenvalue Probability Density Function (AEPDF)	10
2.3  Estimating Matrix Eigenvalues via Deep Learning	14
Chapter 3  The Proposed Neural Network	20
3.1  Input Layer	20
3.2  Classification Conditions and Normalization	20
3.3  Hidden Layers and Activation Functions	24
3.4  Forward and Backward Propagation	25
Chapter 4  Experimental Results	31
4.1  Estimating Matrix Eigenvalues	31
4.1.1  Antenna Correlation ρ = 0	32
4.1.2  Antenna Correlation ρ = 0.2	35
4.1.3  Antenna Correlation ρ = 0.5	38
4.1.4  Antenna Correlation ρ = 0.8	41
4.2  Bit Error Rate	44
Chapter 5  Conclusion and Future Work	49
References	52
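Chapter 3 above outlines the proposed network piece by piece (input layer, classification conditions and normalization, hidden layers and activation functions, forward and backward propagation), with parallel sub-networks each regressing an eigenvalue. As a purely structural sketch — untrained placeholder weights, an assumed 16 x 16 real symmetric input, and a single hidden layer per branch, none of which are taken from the thesis — the parallel forward pass might look like:

```python
import numpy as np

rng = np.random.default_rng(2)

def mlp_forward(v, W1, b1, W2, b2):
    """One hidden layer with ReLU activation, linear scalar output."""
    h = np.maximum(0.0, v @ W1 + b1)
    return h @ W2 + b2

K = 16                        # W is K x K, flattened to K*K real features
d_in, d_h = K * K, 64

# Two sub-networks in parallel, each regressing one eigenvalue (e.g. the
# largest and the smallest). Weights are random placeholders; the thesis
# trains its branches by backpropagation.
branches = [(rng.standard_normal((d_in, d_h)) * 0.01, np.zeros(d_h),
             rng.standard_normal((d_h, 1)) * 0.01, np.zeros(1))
            for _ in range(2)]

A = rng.standard_normal((K, K))
Wmat = A @ A.T                              # symmetric input matrix
v = Wmat.reshape(-1) / np.abs(Wmat).max()   # normalized feature vector

preds = np.concatenate([mlp_forward(v, *b) for b in branches])
print(preds.shape)                          # one estimate per branch
```

Running the branches in parallel, one per eigenvalue, is what lets each sub-network specialize — the motivation the abstract gives for the parallel architecture.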

List of Figures
Figure 1.1  Fitting a parabolic function with 1, 5, 10, and 20 neurons, respectively	4
Figure 2.1  AEPDF of the one-sided Kronecker model under different antenna correlations	13
Figure 2.2  Fully connected neural network with a single output	18
Figure 2.3  Fully connected neural network with multiple outputs	19
Figure 3.1  Structure of the proposed neural network	22
Figure 3.2  Structure of the multi-output neural network	26
Figure 3.3  Structure of the single-output neural network	26
Figure 4.1  Eigenvalue λ_1 of matrix W at ρ = 0	32
Figure 4.2  Eigenvalue λ_2 of matrix W at ρ = 0	32
Figure 4.3  Eigenvalue λ_8 of matrix W at ρ = 0	33
Figure 4.4  Eigenvalue λ_15 of matrix W at ρ = 0	33
Figure 4.5  Eigenvalue λ_16 of matrix W at ρ = 0	34
Figure 4.6  Eigenvalue λ_1 of matrix W at ρ = 0.2	35
Figure 4.7  Eigenvalue λ_2 of matrix W at ρ = 0.2	35
Figure 4.8  Eigenvalue λ_8 of matrix W at ρ = 0.2	36
Figure 4.9  Eigenvalue λ_15 of matrix W at ρ = 0.2	36
Figure 4.10  Eigenvalue λ_16 of matrix W at ρ = 0.2	37
Figure 4.11  Eigenvalue λ_1 of matrix W at ρ = 0.5	38
Figure 4.12  Eigenvalue λ_2 of matrix W at ρ = 0.5	38
Figure 4.13  Eigenvalue λ_8 of matrix W at ρ = 0.5	39
Figure 4.14  Eigenvalue λ_15 of matrix W at ρ = 0.5	39
Figure 4.15  Eigenvalue λ_16 of matrix W at ρ = 0.5	40
Figure 4.16  Eigenvalue λ_1 of matrix W at ρ = 0.8	41
Figure 4.17  Eigenvalue λ_2 of matrix W at ρ = 0.8	41
Figure 4.18  Eigenvalue λ_8 of matrix W at ρ = 0.8	42
Figure 4.19  Eigenvalue λ_15 of matrix W at ρ = 0.8	42
Figure 4.20  Eigenvalue λ_16 of matrix W at ρ = 0.8	43
Figure 4.21  BER versus SNR at ρ = 0	45
Figure 4.22  BER versus SNR at ρ = 0.2	46
Figure 4.23  BER versus SNR at ρ = 0.5	47
Figure 4.24  BER versus SNR at ρ = 0.8	48


List of Tables
Table 2.1  Three representative matrix inversion approximation methods	9
Table 2.2  Three representative fixed-point iteration methods	10
Table 2.3  Common loss functions	15
Table 3.1  Statistics for different antenna correlations	21

References
[1] E. Björnson, J. Hoydis and L. Sanguinetti, "Massive MIMO Has Unlimited Capacity," IEEE Trans. Wireless Commun., vol. 17, no. 1, pp. 574-590, Nov. 2017.
[2] C. Zhang, Z. Li, L. Shen, F. Yan, M. Wu and X. Wang, "A low-complexity massive MIMO precoding algorithm based on Chebyshev iteration," IEEE Access, vol. 5, pp. 22545-22551, Oct. 2017.
[3] F. Jin, Q. Liu, H. Liu and P. Wu, "A Low Complexity Signal Detection Scheme Based on Improved Newton Iteration for Massive MIMO Systems," IEEE Commun. Lett., vol. 23, no. 4, pp. 748-751, Apr. 2019.
[4] W. Shen, An Introduction to Numerical Computation. World Scientific Publishing, Dec. 2015.
[5] Å. Björck, Numerical Methods in Matrix Computations. Springer International Publishing, 2015.
[6] J. W. Demmel, Applied Numerical Linear Algebra. SIAM, 1997.
[7] K. K. Lee and C.-E. Chen, "An improved matrix inversion approximation method for massive MIMO systems with transmit antenna correlation," in Proc. IEEE 18th Int. Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pp. 1-5, July 2017.
[8] K. K. Lee and C.-E. Chen, "An Eigen-based Approach for Enhancing Matrix Inversion Approximation in Massive MIMO Systems," IEEE Trans. Veh. Technol., vol. 66, no. 6, pp. 5480-5484, Oct. 2016.
[9] K. K. Lee, Y.-H. Yang and J.-W. Li, "A Low-Complexity AEPDF-assisted Precoding Scheme for Massive MIMO Systems with Transmit Antenna Correlation," Journal of Signal Processing Systems, 2020.
[10] A. Andoni, R. Panigrahy, G. Valiant and L. Zhang, "Learning polynomials with neural networks," in Proc. ICML, 2014.
[11] K. Goulianas, A. Margaris, I. Refanidis and K. Diamantaras, "Solving polynomial systems using a fast adaptive back propagation-type neural network algorithm," European Journal of Applied Mathematics, pp. 1-37, 2017.
[12] D. Freitas, L. G. Lopes and F. Morgado-Dias, "A Neural Network Based Approach for Approximating Real Roots of Polynomials," in Proc. International Conference on Mathematical Applications (ICMA), July 2018.
[13] Z. Zhao, "Machine Learning and Real Roots of Polynomials," Bachelor's thesis, 2019. Retrieved from https://www.math.ucdavis.edu/files/1415/5249/2664/thesis-ZekaiZhao-Final.pdf
[14] Y. Tang and J. Li, "Another neural network based approach for computing eigenvalues and eigenvectors of real skew-symmetric matrices," Computers & Mathematics with Applications, vol. 60, pp. 1385-1392, 2010.
[15] X. Zou, Y. Tang, S. Bu, Z. Luo and S. Zhong, "Neural-network-based approach for extracting eigenvectors and eigenvalues of real normal matrices and some extension to real matrices," Journal of Applied Mathematics, vol. 2013, Article ID 597628, 2013.
[16] H. Tan, G. Yang, B. Yu, X. Liang and Y. Tang, "Neural Network Based Algorithm for Generalized Eigenvalue Problem," in Proc. 2013 International Conference on Information Science and Cloud Computing Companion, pp. 446-451, 2013.
[17] Q. Zhang and X. Z. Wang, "Complex-valued neural network for Hermitian matrices," Engineering Letters, vol. 25, no. 3, pp. 312-320, 2017.
[18] I. Goodfellow, Y. Bengio and A. Courville, Deep Learning. MIT Press, 2016. Available: https://www.deeplearningbook.org/
Thesis Usage Authorization
  • The author agrees to grant in-library readers a royalty-free license to reproduce the print copy for academic purposes, effective 2020-08-10.
  • The author agrees to license the browsing/printing of the electronic full text, publicly available from 2020-08-10.

