Tamkang University Chueh Sheng Memorial Library (TKU Library)
System ID: U0002-2008201211144600
Title (Chinese): 雲端運算中一致性協定之探討
Title (English): Research on consensus protocols in cloud computing
University: Tamkang University
Department (Chinese): 電機工程學系碩士班
Department (English): Department of Electrical Engineering
Academic year: 100
Semester: 2
Year of publication: 101
Author (Chinese): 徐偉銘
Author (English): Wei-Ming Hsu
E-mail: daniel06_13@hotmail.com
Student ID: 698450433
Degree: Master's
Language: Chinese
Oral defense date: 2012-06-11
Pages: 70
Committee: Advisor - 莊博任
Member - 陳省隆
Member - 李維聰
Member - 許獻聰
Member - 莊博任
Member - 吳庭育
Keywords (Chinese): 雲端運算, 容錯系統, 備份, 原子廣播, 一致性問題
Keywords (English): Cloud computing, Fault tolerance system, Replication, Atomic broadcast, Consistency, Two-phase commit protocol
Subject classification: Applied Sciences - Electrical and Electronics
Abstract (Chinese): In recent years, user demand for services has grown sharply with the popularity of network services, which motivated the cloud computing architecture. Cloud computing is not fundamentally different from the earlier grid computing: both are extensions of distributed computing. Grid computing focuses on integrating many heterogeneous platforms, whereas cloud computing emphasizes obtaining remote computing resources over the network when local resources are limited, offering convenience to users. Correspondingly, more issues must be considered, such as data migration, information security, and system fault tolerance.
Chubby is a distributed fault-tolerant system, proposed by Google in 2007, implemented within a cloud computing architecture; ZooKeeper, in the Hadoop software released by Yahoo, plays the same role. Both implement "locking" through file-creation operations. Chubby mainly supports several of Google's systems, such as the Google File System (GFS) and Bigtable. It achieves fault tolerance through replication; its architecture is based primarily on the Paxos algorithm, using message passing to maintain the consistency of data across the servers in the cloud.
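The "locking through file creation" idea the abstract attributes to Chubby and ZooKeeper can be illustrated with a minimal local sketch: exclusive file creation either succeeds for exactly one caller or fails because the file already exists. All names below are illustrative and are not the real Chubby or ZooKeeper API.

```python
import os
import tempfile

class FileLock:
    """Acquire a lock by creating a file with O_EXCL; at most one holder wins."""
    def __init__(self, path):
        self.path = path
        self.held = False

    def try_acquire(self):
        try:
            # O_CREAT | O_EXCL fails atomically if the file already exists,
            # so only one process can create it and thereby hold the lock.
            fd = os.open(self.path, os.O_CREAT | os.O_EXCL)
            os.close(fd)
            self.held = True
        except FileExistsError:
            self.held = False
        return self.held

    def release(self):
        # Deleting the file frees the lock for the next caller.
        if self.held:
            os.remove(self.path)
            self.held = False

lock_path = os.path.join(tempfile.mkdtemp(), "chubby-style.lock")
a, b = FileLock(lock_path), FileLock(lock_path)
first = a.try_acquire()   # succeeds: the file did not exist
second = b.try_acquire()  # fails: the file already exists
a.release()
third = b.try_acquire()   # succeeds after the holder releases
```

In Chubby and ZooKeeper the same create-or-fail semantics are provided by a replicated service rather than a local filesystem, which is what makes the lock fault tolerant.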
This thesis proposes a new mechanism based on Paxos, intended for cloud computing architectures. We partition the hash space into regions to distribute the tasks each server must handle, and rotate the coordinating server to balance the load; this effectively improves the mechanism's performance. We also use consistent hashing to maintain the server configuration, making our mechanism more practical in a cloud architecture.
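The abstract says consistent hashing is used to maintain the server configuration. A minimal sketch of the standard technique follows; the class and function names are illustrative, not taken from the thesis, and the virtual-node count is an assumed tuning parameter.

```python
import bisect
import hashlib

def ring_hash(key: str) -> int:
    # Stable hash mapped onto a 2^32 ring position.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (1 << 32)

class ConsistentHashRing:
    def __init__(self, servers, vnodes=100):
        # Each server contributes several virtual nodes to smooth the load.
        self.ring = sorted((ring_hash(f"{s}#{i}"), s)
                           for s in servers for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def server_for(self, item: str) -> str:
        # Walk clockwise to the first virtual node at or after the item's hash.
        i = bisect.bisect_left(self.keys, ring_hash(item)) % len(self.keys)
        return self.ring[i][1]

ring = ConsistentHashRing(["s1", "s2", "s3"])
before = {k: ring.server_for(k) for k in ("msg-%d" % n for n in range(1000))}
ring2 = ConsistentHashRing(["s1", "s2", "s3", "s4"])
after = {k: ring2.server_for(k) for k in before}
# Adding a server only remaps the keys that now land on the new server's arcs;
# every other key keeps its previous assignment.
moved = sum(1 for k in before if before[k] != after[k])
```

This locality of reassignment is why consistent hashing suits a cloud setting where servers join and leave: most of the configuration survives each membership change.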
Experiments show that our mechanism processes messages more efficiently than the other mechanisms while maintaining low latency; with respect to server scalability, it sustains higher performance than the other mechanisms and makes effective use of the resources each added server provides.
Abstract (English): Internet usage has grown rapidly, and Internet service providers must handle big data; as a result, cloud computing has been widely discussed in recent years. How to maintain high availability in cloud computing is therefore a main challenge.
Cloud computing maintains high availability for large numbers of users through replication: a service is replicated on multiple servers so that if some replicas fail, the service can still be performed by the operational nodes. However, once a service is replicated, ensuring the consistency of all replicas is not easy. Conflicting requests submitted simultaneously to different nodes can lead to inconsistent replica states and meaningless replies to clients.
Atomic broadcast can ensure that requests are delivered to the service consistently. libPaxos is one atomic broadcast algorithm; it is a consistency algorithm based on a message-passing model.
In this thesis, we compare libPaxos, Mencius, and RingPaxos. libPaxos uses a two-phase commit protocol (2PC) to ensure consistency. Mencius improves on libPaxos by distributing each server's load through a rotating mechanism. RingPaxos replaces libPaxos' tree topology with a ring topology.
The proposed scheme improves on libPaxos: first, we divide the hash space into regions so that messages can be processed in a distributed fashion, and then we use a rotating mechanism to balance each server's load. These features effectively improve the protocol's performance.
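The rotating load-balancing idea described above (and attributed earlier to Mencius) can be sketched as a round-robin assignment of consensus instances to servers, so that no single leader coordinates every request. The function and names below are illustrative, not the thesis's actual protocol.

```python
def coordinator_for(instance: int, servers):
    # Instance i is coordinated by server i mod n, Mencius-style:
    # leadership rotates through the servers in a fixed order.
    return servers[instance % len(servers)]

servers = ["s1", "s2", "s3"]
schedule = [coordinator_for(i, servers) for i in range(7)]
# Each server leads every len(servers)-th instance, so under a uniform
# client load each server coordinates roughly 1/n of the messages.
```

The design appeal of rotation is that the coordination cost of consensus is spread across all servers instead of concentrating on one leader, which is what sustains throughput as load grows.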
Experiments show that our scheme sustains high throughput under high client load. With respect to protocol scalability, our scheme also sustains high throughput and low latency.
Table of Contents
Acknowledgements I
Chinese Abstract II
English Abstract IV
Table of Contents VI
List of Figures X
List of Tables XII
Chapter 1: Introduction 1
1.1 Preface 1
1.2 Chapter outline 5
Chapter 2: Background and Related Work 6
2.1 Background 6
2.2 Related work 7
2.3 libPaxos 8
2.3.1 Actor 9
2.3.2 Phase 1 11
2.3.3 Phase 2 13
2.3.4 Server failure 15
2.3.5 Consensus algorithm example 16
2.4 Mencius 18
2.4.1 Actor 18
2.4.2 Phase 1 20
2.4.3 Phase 2 21
2.4.4 Special cases 22
2.4.5 Server failure 24
2.5 RingPaxos 25
2.5.1 Actor 25
2.5.2 Phase 1 27
2.5.3 Phase 2 29
2.5.4 Server failure 32
Chapter 3: The Proposed Mechanism 33
3.1 Consistent hashing 35
3.1.1 Server configuration 37
3.1.2 Protocol topology maintenance 38
3.2 Actor 38
3.2.1 Acceptor 39
3.2.2 Proposer 39
3.2.3 Leader-Proposer 40
3.3 Phase 1 41
3.4 Phase 2 44
3.5 Server failure 46
3.6 Optimizing the number of regions 47
Chapter 4: Simulation 49
4.1 Simulation environment 49
4.2 Performance of each mechanism 50
4.2.1 Message-throughput comparison 52
4.2.2 Message-size comparison 53
4.3 Server scalability of each mechanism 54
4.4 Average latency of each mechanism 57
4.4.1 Average latency vs. message volume 57
4.4.2 Average latency vs. number of added servers 59
4.5 Simulation conclusions 62
Chapter 5: Conclusions and Future Work 64
References 67
List of Figures
Fig. 2.1 Message-handling architecture of Zookeeper [11] 7
Fig. 2.2 libPaxos Phase 1 flow 11
Fig. 2.3 libPaxos Phase 2 flow 13
Fig. 2.4 libPaxos message-processing flow 15
Fig. 2.5 Mencius Phase 1 flow 20
Fig. 2.6 Mencius Phase 2 flow 21
Fig. 2.7 Mencius exception handling 22
Fig. 2.8 Mencius message-processing flow 23
Fig. 2.9 RingPaxos Phase 1 flow 27
Fig. 2.10 RingPaxos network topology 29
Fig. 2.11 RingPaxos Phase 2 flow 30
Fig. 2.12 RingPaxos message-processing flow 31
Fig. 3.1 ARP topology construction 33
Fig. 3.2 Consistent hashing [17] topology construction 35
Fig. 3.3 ARP Phase 1 flow 41
Fig. 3.4 ARP message-processing flow 43
Fig. 3.5 ARP Phase 2 flow 44
Fig. 3.6 ARP region-optimization comparison 47
Fig. 4.1 Performance comparison of mechanisms: vps 52
Fig. 4.2 Performance comparison of mechanisms: KBps 53
Fig. 4.3 vps comparison of mechanisms with added acceptors 56
Fig. 4.4 Average latency vs. message volume for each mechanism 59
Fig. 4.5 Average latency vs. number of added servers for each mechanism 61

List of Tables
Table 4.1 Simulation parameters for performance 50
Table 4.2 Scalability comparison of mechanisms 54
Table 4.3 Environment settings for average latency vs. message volume 57
Table 4.4 Environment settings for average latency vs. added servers 59
Table 4.5 Comparison of the protocols 62
References
[1] http://big5.buynow.com.cn/gate/big5/robbin.javaeye.com/blog/524977
[2] http://adam.heroku.com/past/2009/7/6/關聯式資料_databases_dont_scale
[3] NoSQL, http://nosql-database.org/
[4] G. DeCandia, D. Hastorun, M. Jampani, G. Kakulapati, A. Lakshman, A. Pilchin, S. Sivasubramanian, P. Vosshall, and W. Vogels, “Dynamo: Amazon’s highly available key-value store,” Proc. of the 21st ACM SIGOPS Symposium on Operating Systems Principles (SOSP ’07), Dec. 2007, pp. 205–220.
[5] S. Ghemawat, H. Gobioff, and S. Leung, “The Google File System,” Proc. of the 19th ACM SIGOPS Symposium on Operating Systems Principles (SOSP ’03), Dec. 2003, pp. 29-43.
[6] F. Chang, J. Dean, S. Ghemawat, W. Hsieh, D. Wallach, M. Burrows, T. Chandra, A. Fikes, and R. Gruber, “Bigtable: A Distributed Storage System for Structured Data,” Proc. of the 7th Symposium on Operating System Design and Implementation (OSDI ’06), May 2006, pp. 205-218.
[7] M. Burrows, “The Chubby lock service for loosely-coupled distributed systems,” Proc. of the 7th Symposium on Operating System Design and Implementation (OSDI ’06), May 2006, pp. 335–350.
[8] S. Ghemawat and J. Dean, “MapReduce: Simplified data processing on large clusters,” Proc. of the 6th USENIX Symposium on Operating Systems Design and Implementation (OSDI ’04), Sep. 2004, pp. 10-10.
[9] K. Shvachko, H. Huang, S. Radia, and R. Chansler, “The hadoop distributed file system,” In 26th IEEE Symposium on Massive Storage Systems and Technologies, (MSST ’10), May. 2010, pp. 1-10.
[10] P. Hunt, M. Konar, F. Junqueira, and B. Reed, “Zookeeper: Wait-free coordination for Internet-scale services,” Proc. of the USENIX Annual Technical Conference (ATC ’10), June 2010, pp. 145-158.
[11] B. Reed and F. P. Junqueira, “A simple totally ordered broadcast protocol,” Proc. of the 2nd Workshop on Large-Scale Distributed Systems and Middleware (LADIS ’08), 2008, pp. 1–6.
[12] L. Lamport, “The part-time parliament,” ACM Transactions on Computer Systems (TOCS), Vol. 16, Issue 2, May. 1998, pp.133-169.
[13] L. Lamport, “Paxos made simple,” ACM SIGACT News, 32(4), Dec. 2001, pp.18–25.
[14] Y. Mao, F. Junqueira, and K. Marzullo, “Mencius: Building efficient replicated state machines for WANs,” Proc. of the 8th USENIX Symposium on Operating Systems Design&Implementation, (OSDI ’08), 2008, pp. 369-384.
[15] L. Lamport, A. Hydrie, and D. Achlioptas, “Multi-leader distributed system,” U.S. patent 7 260 611 B2, Aug. 2007.
[16] P.J. Marandi, M. Primi, N. Schiper, and F. Pedone, “Ring Paxos: A high-throughput atomic broadcast protocol,” Proc. of the Dependable Systems and Networks, (DSN ’10), 2010, pp. 527-536.
[17] http://www.haogongju.net/art/1067868
[18] D. Karger, A. Sherman, A. Berkheimer, B. Bogstad, R. Dhanidina, K. Iwamoto, B. Kim, L. Matkins, Y. Yerushalmi, “Web Caching with Consistent Hashing,” Proc. of the 8th international conference on World Wide Web, (WWW ’99), May. 1999, pp. 1203-1213.
[19] D. Karger, E. Lehman, T. Leighton, M. Levine, D. Lewin, and R. Panigrahy, “Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the world wide web,” In ACM Symposium on Theory of Computing, 1997, pp. 654–663.
[20] Microsoft ISN, “Cache Array Routing Protocol(CARP),” http://www.microsoft.com/ISN/whitepapers/cache_array_routing.asp.
[21] The Network Simulator-NS2, http://www.isi.edu/nsnam/ns/.
[22] T. Benzel, R. Braden, D. Kim, C. Neuman, A. Joseph, K. Sklower, R. Ostrenga, and S. Schwab, “Design, deployment, and use of the DETER testbed,” Proc. of the DETER Community Workshop on Cyber-Security and Test, Aug. 2007, pp. 1-1.
[23] M. Primi, “Paxos made code,” Master’s thesis, Faculty of Informatics, University of Lugano, 2009.
[24] A. Lakshman and P. Malik, “Cassandra: a decentralized structured storage system,” ACM SIGOPS Operating Systems Review, Vol. 44, Issue 2, Apr. 2010, pp. 35-40.
[25] L. Lamport, R. Shostak, and M. Pease, “The Byzantine Generals Problem,” ACM Transactions on Programming Languages and Systems (TOPLAS), Vol. 4, Issue 3, July 1982, pp. 382-401.
[26] GeoIP, http://www.maxmind.com/app/geolite
[27] X. Li, J. Misra, and C. G. Plaxton, “Concurrent maintenance of rings,” Proc. of the ACM Symposium on Principles of Distributed Computing (PODC ’04), 2004, pp. 126–148.
[28] X. Li, J. Misra, and C. G. Plaxton, “Maintaining the Ranch topology,” Journal of Parallel and Distributed Computing, Vol. 70, Issue 11, Nov. 2010, pp. 1142-1158.
Thesis usage permissions
  • Grants royalty-free permission to library readers to reproduce the printed copy for academic purposes, available from 2012-08-28.
  • Grants permission for the browse/print electronic full-text service, available from 2012-08-28.

