System ID | U0002-0207201312205100 |
---|---|
DOI | 10.6846/TKU.2013.00050 |
Title (Chinese) | HDFS分散式檔案系統容錯管理架構 |
Title (English) | Fault-Tolerant Management Framework for Hadoop Distributed File System |
Title (third language) | |
University | 淡江大學 (Tamkang University) |
Department (Chinese) | 資訊工程學系碩士班 |
Department (English) | Department of Computer Science and Information Engineering |
Foreign degree school | |
Foreign degree college | |
Foreign degree institute | |
Academic year | 101 |
Semester | 2 |
Publication year | 102 |
Graduate student (Chinese) | 廖治凱 |
Graduate student (English) | Jhih-Kai Liao |
Student ID | 600410103 |
Degree | Master's |
Language | Traditional Chinese |
Second language | English |
Oral defense date | 2013-06-28 |
Number of pages | 66 |
Thesis committee |
Advisor - 林其誼 (chiyilin@gmail.com)
Committee member - 林振緯 (jwlin@csie.fju.edu.tw)
Committee member - 蔡智強 (jichiangt@nchu.edu.tw)
Committee member - 林其誼 (chiyilin@gmail.com) |
Keywords (Chinese) | HDFS; Sub_NameNode; Centroid Point; Routing hops |
Keywords (English) | HDFS; Sub_NameNode; Centroid Point; Routing hops |
Keywords (third language) | |
Subject classification | |
Abstract (Chinese) |
With the rapid development of today's Internet, many applications have shifted from single-machine operation to multi-machine operation over the network, which in turn has driven the development of cloud computing technologies, such as the Yahoo-funded open-source Hadoop project and Google's MapReduce, GFS, and BigTable. The Hadoop Distributed File System (HDFS) used by Hadoop adopts a master/slave configuration: a single NameNode manages the entire system, while multiple DataNodes store its data. Under this configuration, one node holds a large amount of critical metadata; if that node fails and its files are corrupted, the whole system stops working. This is the Single Point of Failure (SPOF) problem, and an SPOF can cause enormous losses for the system as a whole. Moreover, in the traditional master/slave HDFS configuration every request and response must pass through the master node, so a large volume of traffic converges on the NameNode, slowing the network, making round-trip data transfers time-consuming, and degrading overall system performance. Therefore, this study takes the job as the unit of management: each job is dynamically assigned a Sub_NameNode responsible for managing it, which relieves network congestion and speeds up communication between master and slaves, while distributing the metadata across different nodes also spreads the risk of data corruption. The nodes at which an SPOF can occur are thereby divided into two kinds, the NameNode and the Sub_NameNode, and for each kind an effective method for resolving the SPOF is proposed to reduce its impact. |
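The per-job scheme described in the abstract (dynamically allocating a Sub_NameNode per job, replicating that job's metadata to it, and rebuilding the NameNode from the replicas after a failure) can be sketched as follows. This is an illustrative model only, not code from the thesis; all names (`NameNode`, `SubNameNode`, `recover_namenode`) are our own.

```python
class SubNameNode:
    """Manages a single job and holds a replica of that job's metadata."""
    def __init__(self, job_id, metadata):
        self.job_id = job_id
        self.metadata = metadata   # replica: limits loss if the NameNode fails

class NameNode:
    def __init__(self):
        self.metadata = {}         # job_id -> block-to-DataNode mapping
        self.sub_namenodes = {}    # job_id -> SubNameNode allocated for it

    def submit_job(self, job_id, blocks):
        """Dynamically allocate a Sub_NameNode for this job and
        replicate the job's metadata to it."""
        self.metadata[job_id] = dict(blocks)
        self.sub_namenodes[job_id] = SubNameNode(job_id, dict(blocks))

def recover_namenode(sub_namenodes):
    """Reconstruct the NameNode's per-job metadata from the surviving
    Sub_NameNode replicas (the SPOF mitigation described above)."""
    nn = NameNode()
    for snn in sub_namenodes:
        nn.metadata[snn.job_id] = dict(snn.metadata)
        nn.sub_namenodes[snn.job_id] = snn
    return nn
```

Because each Sub_NameNode only carries one job's metadata, losing a Sub_NameNode affects one job rather than the whole file system, which is the point of splitting the SPOF into two kinds.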
Abstract (English) |
Due to the rapid development of the modern Internet, the mode of operation of many applications has shifted from a single machine to a cluster of machines connected over a network. This trend also contributed to the development of cloud computing technology: Google invented the MapReduce framework, the Google File System (GFS), and BigTable, and Yahoo invested in the open-source Hadoop project to implement the technologies Google proposed. The Hadoop Distributed File System (HDFS) is based on the master/slave model: a single NameNode acting as the master manages a large number of slaves called DataNodes. Since the NameNode maintains a large amount of important metadata, a NameNode crash can render the entire file system unusable; that is, the NameNode constitutes a Single Point of Failure (SPOF). In addition, in the master/slave model all requests and responses have to go through the master, so without load sharing the NameNode also becomes a performance bottleneck. Therefore, in this research we propose to dynamically allocate a Sub_NameNode for each MapReduce job, in order to relieve network congestion and accelerate communication between the master and the slaves. Our approach also reduces the risk of data loss by replicating metadata to the Sub_NameNodes: once the NameNode fails, its state can be reconstructed from them. The simulation results show significant reductions in both the number of communication hops and the communication time. |
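The Centroid Point named in the keywords and Section 2.4 appears to correspond to the classic tree centroid: the node whose removal leaves the smallest possible largest component (cf. the figures showing the subtrees produced by removing a node). A minimal sketch of that computation, with our own function and variable names, might look like:

```python
def find_centroid(adj):
    """adj: dict mapping each node of a tree to a list of its neighbours.
    Returns the node minimizing the largest component left by its removal."""
    n = len(adj)
    root = next(iter(adj))
    parent = {root: None}
    order = []
    stack = [root]
    while stack:                       # iterative DFS to get a traversal order
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if v != parent[u]:
                parent[v] = u
                stack.append(v)
    sizes = {}
    for u in reversed(order):          # accumulate subtree sizes bottom-up
        sizes[u] = 1 + sum(sizes[v] for v in adj[u] if v != parent[u])
    best, best_val = None, n + 1
    for u in adj:
        # components after removing u: each child subtree, plus everything else
        child_sizes = [sizes[v] for v in adj[u] if v != parent[u]]
        worst = max(child_sizes + [n - sizes[u]])
        if worst < best_val:
            best, best_val = u, worst
    return best
```

Placing a Sub_NameNode at (or near) such a node keeps it balanced with respect to all task nodes in the tree topology, which is consistent with the hop-count reductions the thesis reports.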
Abstract (third language) | |
Table of contents |
Chapter 1 Introduction 1
 1.1 Research background 1
 1.2 Research motivation 2
 1.3 Research objectives and significance 5
 1.4 Thesis organization 6
Chapter 2 Related work 7
 2.1 HDFS fundamentals 7
 2.2 SecondaryNameNode and Backup Node 11
 2.3 HDFS Federation 14
 2.4 Centroid Point 18
Chapter 3 Design of the DASNN mechanism 23
 3.1 Dynamic Allocation Sub_NameNode Algorithm 24
  3.1.1 Finding the Centroid Point 27
  3.1.2 Finding the Sub_NameNode 28
 3.2 SPOF solutions 32
  3.2.1 Failure in NameNode 32
  3.2.2 Failure in Sub_NameNode 33
Chapter 4 Simulation and analysis 34
 4.1 Parameter settings 34
 4.2 Per-job hop count and time in a single experiment 37
 4.3 Total hop count and time over multiple experiments 38
 4.4 Analysis under different parameter settings 40
  4.4.1 Number of tasks per job vs. performance 40
  4.4.2 Effect of nGM on system performance 44
  4.4.3 Effect of nJob on system performance 47
  4.4.4 Effect of nVM on system performance 51
 4.5 Summary of experimental analysis 53
Chapter 5 Conclusions 54
References 55
Appendix - English version of the thesis 57
List of figures:
Fig. 1-1 Illustration of a Single Point of Failure 4
Fig. 2-1 A rack provisioned with many compute nodes 7
Fig. 2-2 NameNode architecture 9
Fig. 2-3 HDFS architecture 10
Fig. 2-4 Federation deployment architecture [7] 15
Fig. 2-5 Federation multi-namespace management [7] 16
Fig. 2-6 Tree network topology 18
Fig. 2-7 The path between two nearby nodes is not necessarily the shortest 19
Fig. 2-8 A tree of 11 nodes 21
Fig. 2-9 The three subtrees produced by removing Node 1 21
Fig. 2-10 The three subtrees produced by removing Node 4 22
Fig. 3-1 Different jobs assigned different Sub_NameNodes 24
Fig. 3-2 Overall flow chart 25
Fig. 3-3 Transfer of NameNode duties to a Sub_NameNode 27
Fig. 3-4 Finding the Centroid Point, example 1 28
Fig. 3-5 Finding the Centroid Point, example 2 30
Fig. 3-6 Flow chart of the DASNN algorithm 31
Fig. 4-1 Computing the hop count between a task node and the Sub_NameNode 36
Fig. 4-2 Comparison of multiple jobs in a single experiment 37
Fig. 4-3 Comparison of results over multiple experiments 38
Fig. 4-4 System performance improvement over multiple experiments 39
Fig. 4-5 A single job split into 5 tasks 41
Fig. 4-6 A single job split into 50 tasks 42
Fig. 4-7 Hop-count comparison of a job split into 5 vs. 50 tasks 43
Fig. 4-8 Effect of nGM on the level of the common ancestor 44
Fig. 4-9 nGM = 4 46
Fig. 4-10 nGM = 32 46
Fig. 4-11 Hop-count reduction under different nGM values 47
Fig. 4-12 nJob = 10 48
Fig. 4-13 nJob = 50 48
Fig. 4-14 Hop-count reduction under different nJob values 49
Fig. 4-15 Average hop-count reduction per job 50
Fig. 4-16 1024 nodes 52
Fig. 4-17 4096 nodes 52
Fig. 4-18 Hop-count reduction under different node counts 53
List of tables:
Table 4-1 Parameter configuration 34
Table 4-2 Parameter configuration 2 40
Table 4-3 5 tasks vs. 50 tasks 40
Table 4-4 nGM=4 vs. nGM=32 44
Table 4-5 10 Jobs vs. 50 Jobs 47
Table 4-6 1024 VMs vs. 4096 VMs 51 |
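The hop-count metric evaluated in Chapter 4 (e.g. Fig. 4-1, the hop count between a task node and the Sub_NameNode) can be illustrated with a small sketch: in a tree topology, the hop distance between two nodes is the length of the path through their lowest common ancestor. The parent/depth representation and all names here are our assumptions, not code from the thesis.

```python
def hops(a, b, parent, depth):
    """Hop distance between nodes a and b in a tree, given a parent map
    (root maps to None) and a depth map (root has depth 0)."""
    d = 0
    while depth[a] > depth[b]:         # lift the deeper node first
        a = parent[a]; d += 1
    while depth[b] > depth[a]:
        b = parent[b]; d += 1
    while a != b:                      # climb together until the LCA
        a, b = parent[a], parent[b]; d += 2
    return d

def total_hops(task_nodes, sub_namenode, parent, depth):
    """Total hops from every task node of a job to the chosen Sub_NameNode;
    the placement in Chapter 3 aims to keep this quantity small."""
    return sum(hops(t, sub_namenode, parent, depth) for t in task_nodes)
```

Summing this metric over all jobs gives the kind of aggregate hop counts the experiments compare against the plain NameNode-only configuration.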
References |
[1] J. Dean and S. Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters," Proc. USENIX Symp. Operating Systems Design & Implementation (OSDI 2004), pp. 137-150, Dec. 2004.
[2] S. Ghemawat, H. Gobioff, and S.-T. Leung, "The Google File System," Proc. 19th ACM Symposium on Operating Systems Principles (SOSP '03), Oct. 2003.
[3] D. Borthakur, HDFS Architecture. http://hadoop.apache.org/common/docs/r0.20.0/hdfs_design.html, Apr. 2009.
[4] G. Shao, F. Berman, and R. Wolski, "Master/Slave Computing on the Grid," Proc. 9th Heterogeneous Computing Workshop (HCW 2000), 2000.
[5] C. Zilles and G. Sohi, "Master/Slave Speculative Parallelization," Proc. 35th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO-35), 2002.
[6] T. White, Hadoop: The Definitive Guide, 2nd ed., Sep. 2010.
[7] HDFS Federation. http://hadoop.apache.org/docs/r2.0.3-alpha/hadoop-project-dist/hadoop-hdfs/Federation.html, Feb. 2013.
[8] Scaling HDFS Cluster Using NameNode Federation. https://issues.apache.org/jira/secure/attachment/12453067/high-level-design.pdf
[9] D. Borthakur, "The High Availability Story for HDFS So Far," presented at ApacheCon, Oakland, California, Nov. 2009.
[10] C. A. Hansen, "Optimizing Hadoop for the Cluster," Institute of Computer Science, University of Tromsø, Norway, http://oss.csie.fju.edu.tw/~tzu98/Optimizing%20Hadoop%20for%20the%20cluster.pdf, retrieved Oct. 2012.
[11] Q. He, Z. Li, and X. Zhang, "Study on Cloud Storage System Based on Distributed Storage Systems," Proc. 2010 International Conference on Computational and Information Sciences (ICCIS), Dec. 2010.
[12] A. Oriani and I. C. Garcia, "From Backup to Hot Standby: High Availability for HDFS," Proc. 31st IEEE Symposium on Reliable Distributed Systems (SRDS), Oct. 2012.
[13] F. Wang, J. Qiu, J. Yang, B. Dong, X. Li, and Y. Li, "Hadoop High Availability through Metadata Replication," Proc. First CIKM Workshop on Cloud Data Management, pp. 37-44, 2009.
[14] S.K.S. Gupta and P.K. Srimani, "Adaptive Core Selection and Migration Method for Multicast Routing in Mobile Ad Hoc Networks," IEEE Transactions on Parallel and Distributed Systems, pp. 27-38, Jan. 2003.
[15] Kariv and S.L. Hakimi, "An Algorithmic Approach to Network Location Problems. II: The p-Medians," SIAM J. Applied Mathematics, vol. 37, no. 3, pp. 539-560, Dec. 1979.
[16] S.K.S. Gupta and P.K. Srimani, "An Adaptive Protocol for Reliable Multicast in Multihop Radio Networks," Proc. 2nd IEEE Workshop on Mobile Computing Systems and Applications (WMCSA '99), pp. 111-122, Feb. 1999. |
Full-text availability |