
The Google File System (GFS)


Presentation Transcript


  1. The Google File System (GFS) Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung

  2. Introduction • Design constraints • Component failures are the norm • 1000s of components • Bugs, human errors, failures of memory, disk, connectors, networking, and power supplies • Monitoring, error detection, fault tolerance, and automatic recovery are built in • Files are huge by traditional standards • Multi-GB files are common • Billions of objects

  3. Introduction • Design constraints • Most modifications are appends • Random writes are practically nonexistent • Many files are written once and read sequentially • Two types of reads • Large streaming reads • Small random reads (in the forward direction) • Sustained bandwidth is more important than low latency • File system APIs are open to change

  4. Interface Design • Not POSIX compliant • Additional operations • Snapshot • Record append

  5. Architectural Design • A GFS cluster • A single master (logically one master) • Multiple chunkservers per master • Accessed by multiple clients • Running on commodity Linux machines • A file is stored as data on chunkservers • Represented as fixed-size chunks • Labeled with 64-bit globally unique IDs • Stored at chunkservers • 3-way mirrored across chunkservers

  6. Architectural Design (2) • [Figure: an application uses the GFS client library, which asks the GFS master for chunk locations and then exchanges chunk data directly with GFS chunkservers, each storing chunks in its local Linux file system]

  7. Architectural Design (3) • Master server • Maintains all metadata • Namespace, access control, file-to-chunk mappings, garbage collection, chunk migration • GFS clients • Consult the master for metadata • Access data directly from chunkservers • Do not go through VFS • No file-data caching at clients or chunkservers: clients mostly stream sequentially through huge files, so cached data is rarely reused, and chunkservers gain little because the Linux buffer cache already keeps frequently accessed data in memory

  8. Single-Master Design • Simple • The master answers only chunk-location queries • A client typically asks for multiple chunk locations in a single request • The master also proactively returns the locations of chunks immediately following those requested

  9. Chunk Size • 64 MB • Fewer chunk-location requests to the master • Reduced overhead to access a chunk • Fewer metadata entries • Kept in memory • Some potential problems with internal fragmentation
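
A quick illustration (in Python, not from the paper) of how the 64 MB chunk size translates into fewer master requests: mapping byte offsets to chunk indices shows that even a large sequential read touches only a handful of chunks.

    # Illustrative arithmetic only: mapping a byte offset within a file to a chunk index.
    CHUNK_SIZE = 64 * 2**20  # 64 MB

    def chunk_index(byte_offset: int) -> int:
        """Which chunk of the file holds this byte."""
        return byte_offset // CHUNK_SIZE

    # A 1 GB sequential read touches only 16 chunks, so a client can ask the
    # master for all the chunk locations it needs in one or two requests.
    touched = {chunk_index(off) for off in range(0, 2**30, CHUNK_SIZE)}
    assert len(touched) == 16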

  10. Metadata • Three major types • File and chunk namespaces • File-to-chunk mappings • Locations of each chunk's replicas
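
A minimal sketch of how these three kinds of metadata might be organized in the master's memory; the class and field names are illustrative assumptions, not GFS's actual data structures.

    from dataclasses import dataclass, field
    from typing import Dict, List

    ChunkHandle = int  # 64-bit globally unique chunk ID

    @dataclass
    class MasterMetadata:
        # File and chunk namespaces: full path name -> per-file attributes
        namespace: Dict[str, dict] = field(default_factory=dict)
        # File-to-chunk mappings: full path name -> ordered list of chunk handles
        file_chunks: Dict[str, List[ChunkHandle]] = field(default_factory=dict)
        # Replica locations: chunk handle -> chunkserver addresses
        # (not persisted; rebuilt from chunkserver reports, see the later slides)
        chunk_locations: Dict[ChunkHandle, List[str]] = field(default_factory=dict)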

  11. Metadata • All kept in memory • Fast! • Quick global scans • Garbage collection • Reorganization • About 64 bytes of metadata per 64 MB of data • Prefix compression keeps file names compact
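
A back-of-the-envelope check (illustrative numbers, using only the 64-bytes-per-64-MB ratio from the slide) of why keeping all metadata in memory is affordable:

    CHUNK_SIZE = 64 * 2**20          # 64 MB per chunk
    METADATA_PER_CHUNK = 64          # ~64 bytes of chunk metadata

    data_stored = 2**50              # suppose the cluster stores 1 PB of file data
    num_chunks = data_stored // CHUNK_SIZE
    metadata_bytes = num_chunks * METADATA_PER_CHUNK

    print(num_chunks)                # 16,777,216 chunks
    print(metadata_bytes / 2**30)    # 1.0 -> about 1 GB of chunk metadata fits easily in RAM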

  12. Chunk Locations • No persistent state • The master polls chunkservers at startup • Heartbeat messages keep the information current • Simplicity • On-demand approach vs. constant coordination • On-demand wins when changes (failures, restarts) are frequent
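
A rough sketch of the on-demand approach: the master simply rebuilds its replica-location map from whatever the chunkservers report. The message shape and function names here are assumptions for illustration.

    from collections import defaultdict
    from typing import Dict, List, Set

    chunk_locations: Dict[int, Set[str]] = defaultdict(set)

    def handle_report(server_addr: str, chunks_held: List[int]) -> None:
        """A startup poll or heartbeat tells the master which chunks a server holds."""
        for handle in chunks_held:
            chunk_locations[handle].add(server_addr)

    # No persistent copy of this table is kept; if a chunkserver fails or
    # restarts, the next reports simply describe the new reality.
    handle_report("cs1:7000", [0x1A2B, 0x3C4D])
    handle_report("cs2:7000", [0x1A2B])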

  13. Operation Logs • Metadata updates are logged • e.g., <old value, new value> pairs • The log is replicated on multiple remote machines • Global snapshots (checkpoints) are taken to truncate the log • Checkpoints are in a memory-mappable form (no serialization/deserialization needed) • Checkpoints can be created while updates keep arriving • Recovery • Load the latest checkpoint + replay the subsequent log files
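
A minimal sketch of checkpoint-plus-log recovery, assuming a toy JSON format for both; the real checkpoint and log formats are not those shown here.

    import json

    def recover(checkpoint_path: str, log_paths: list) -> dict:
        """Rebuild master state: load the latest checkpoint, then replay newer logs."""
        with open(checkpoint_path) as f:
            state = json.load(f)                # metadata as of checkpoint time
        for path in log_paths:                  # log files written after that checkpoint
            with open(path) as f:
                for line in f:
                    record = json.loads(line)   # e.g. {"key": ..., "old": ..., "new": ...}
                    state[record["key"]] = record["new"]
        return state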

  14. Consistency Model • Relaxed consistency • Concurrent changes leave a region consistent but undefined • A record append is atomically committed at least once • Occasional duplicate records are possible • All changes to a chunk are applied in the same order on all of its replicas • Version numbers detect replicas that missed updates while their chunkserver was down
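
A small sketch of the staleness check that chunk version numbers make possible; the structures and names are illustrative.

    current_versions = {0x1A2B: 7}          # master's view: chunk handle -> latest version

    def is_stale(chunk_handle: int, replica_version: int) -> bool:
        """A replica that missed mutations while its server was down reports an old version."""
        return replica_version < current_versions[chunk_handle]

    assert is_stale(0x1A2B, 6)              # missed an update -> ignored and garbage collected
    assert not is_stale(0x1A2B, 7)          # up to date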

  15. System Interactions • The master grants a lease for one replica of a chunk; that replica becomes the primary • The primary picks a serial order for all mutations to the chunk; every replica applies mutations in that order • Lease • The initial lease timeout is 60 seconds. However, as long as the chunk is being mutated, the primary can request extensions and typically receives them from the master. These extension requests and grants are piggybacked on the heartbeat messages exchanged between the master and the chunkservers. The master may sometimes try to revoke a lease before it expires (e.g., to disable mutations on a file that has been renamed). Even if the master loses communication with the primary, it can safely grant a new lease to another replica once the old lease has expired
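
A minimal sketch of lease bookkeeping on the master side, using the 60-second timeout from the slide; the class and method names are illustrative.

    import time

    LEASE_SECONDS = 60

    class Lease:
        def __init__(self, chunk_handle: int, primary: str):
            self.chunk_handle = chunk_handle
            self.primary = primary                       # the replica acting as primary
            self.expires_at = time.time() + LEASE_SECONDS

        def extend(self) -> None:
            """Granted while mutations continue; piggybacked on heartbeat messages."""
            self.expires_at = time.time() + LEASE_SECONDS

        def expired(self) -> bool:
            return time.time() >= self.expires_at

    # Once expired() is true, the master can safely grant a new lease to another
    # replica even if it never managed to contact the old primary again.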

  16. Data Flow • Separation of control flow and data flow • Avoids network bottlenecks • Updates are pushed linearly along a chain of replicas • Pipelined transfers: data is pushed, in order, along a carefully chosen chain of chunkservers • Each machine forwards data at roughly its full 100 Mbps link rate (about 12.5 MB/second)
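
A rough timing model of why the pipelined, linear push helps; the 1 ms per-hop latency and the variable names are assumptions for illustration, not measurements.

    # Sending B bytes to R replicas one after another costs about R * B/T;
    # pipelining along a chain costs about B/T + R*L, where T is per-link
    # throughput and L the per-hop forwarding latency.
    B = 2**20               # 1 MB of data to push
    R = 3                   # three replicas in the chain
    T = 12.5 * 2**20        # ~12.5 MB/s on a 100 Mbps link
    L = 0.001               # assume ~1 ms to start forwarding at each hop

    sequential = R * (B / T)
    pipelined = B / T + R * L
    print(round(sequential * 1000), "ms vs", round(pipelined * 1000), "ms")   # 240 ms vs 83 ms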

  17. Snapshot • Copy-on-write approach • First revoke all outstanding leases on the chunks of the files being snapshotted • New updates are logged while the snapshot is taken • Commit the log record to disk • Apply it to a duplicated copy of the metadata • A chunk is not actually copied until the next update to it
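
A metadata-level copy-on-write sketch; the tables, reference counts, and function names are illustrative, not the master's real structures.

    from collections import Counter

    file_chunks = {"/logs/a": [101, 102]}        # file -> chunk handles
    refcount = Counter({101: 1, 102: 1})         # how many files reference each chunk

    def snapshot(src: str, dst: str) -> None:
        """Duplicate only the metadata; both files share the same chunks for now."""
        file_chunks[dst] = list(file_chunks[src])
        for handle in file_chunks[dst]:
            refcount[handle] += 1

    def before_write(path: str, index: int, new_handle: int) -> None:
        """Copy a shared chunk lazily, only when it is about to be modified."""
        handle = file_chunks[path][index]
        if refcount[handle] > 1:                 # still shared with a snapshot
            refcount[handle] -= 1
            refcount[new_handle] += 1
            file_chunks[path][index] = new_handle   # chunkservers copy the data locally

    snapshot("/logs/a", "/snapshots/a")
    before_write("/logs/a", 0, 201)              # chunk 101 is duplicated as 201 before the write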

  18. Master Operation • GFS has no per-directory data structure that lists all the files in that directory, and it does not support links (aliases) for files or directories • Logically, the GFS namespace is a lookup table mapping full path names to metadata. With prefix compression, this table can be stored efficiently in memory

  19. Locking Operations • Each master operation acquires a set of locks before it runs. Typically, if it involves /d1/d2/…/dn/leaf, it acquires read locks on the directory names /d1, /d1/d2, …, /d1/d2/…/dn, and either a read or a write lock on the full path /d1/d2/…/dn/leaf. Depending on the operation, leaf may be a file or a directory • One advantage of this scheme is that it allows concurrent operations in the same directory. For example, multiple files can be created concurrently in the same directory: each creation takes a read lock on the directory name and a write lock on the file name. The read lock on the directory name is enough to prevent the directory from being deleted, renamed, or snapshotted, while the write locks on file names serialize creations so the same file name cannot be created twice
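
A small sketch of the prefix-locking rule just described: read locks on every directory prefix, and a read or write lock (write here, as for file creation) on the leaf. The helper name is illustrative.

    def paths_to_lock(full_path: str):
        """For /d1/d2/.../dn/leaf: read locks on each directory prefix,
        plus a write lock on the full path for a mutating operation."""
        parts = full_path.strip("/").split("/")
        prefixes = ["/" + "/".join(parts[:i]) for i in range(1, len(parts))]
        return [(p, "read") for p in prefixes] + [(full_path, "write")]

    # Creating /home/user/foo and /home/user/bar concurrently is fine: both take
    # read locks on /home and /home/user, and write locks on different leaves.
    print(paths_to_lock("/home/user/foo"))
    # [('/home', 'read'), ('/home/user', 'read'), ('/home/user/foo', 'write')]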

  20. Replica Placement • Two goals: maximize data reliability and availability, and maximize network bandwidth utilization • Chunk replicas must be spread across multiple racks. This ensures that some replicas of a chunk survive and remain available even if an entire rack is damaged or offline. It also means that traffic for a chunk, especially reads, can exploit the aggregate bandwidth of multiple racks • Chunks that need to be re-replicated are prioritized by several factors. One is how far a chunk is from its replication goal: for example, a chunk that has lost two replicas gets higher priority than a chunk that has lost only one
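
A sketch of how re-replication candidates could be ordered, showing only the "how far below the replication goal" factor mentioned above; the scoring and structures are illustrative.

    REPLICATION_GOAL = 3

    chunks = [
        {"handle": 0x1A2B, "live_replicas": 1},   # lost two replicas
        {"handle": 0x3C4D, "live_replicas": 2},   # lost one replica
    ]

    def missing(chunk: dict) -> int:
        return REPLICATION_GOAL - chunk["live_replicas"]

    # Chunks furthest below their replication goal are re-replicated first.
    for chunk in sorted(chunks, key=missing, reverse=True):
        print(hex(chunk["handle"]), "is missing", missing(chunk), "replica(s)")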

  21. Garbage Collection • When a file is deleted by the application, the master logs the deletion immediately, just like any other change. However, instead of reclaiming resources right away, it renames the file to a hidden name that includes the deletion timestamp. During its regular scan of the file system namespace, the master removes any such hidden files that are more than three days old (the interval is configurable). Until then, the file can still be read under its new, special name and can be undeleted by renaming it back to a normal name. Only when the hidden file is removed from the namespace is its in-memory metadata erased, which effectively severs the file's links to all of its chunks
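
A sketch of this deferred-deletion scheme; the hidden-name format, helper names, and the way the grace period is checked are illustrative assumptions.

    import time

    GRACE_SECONDS = 3 * 24 * 3600          # the slide's three-day default; configurable

    namespace = {"/logs/a": {"chunks": [101, 102]}}

    def delete(path: str) -> None:
        """Log the deletion, then rename to a hidden, timestamped name."""
        hidden = f"{path}.deleted.{int(time.time())}"
        namespace[hidden] = namespace.pop(path)

    def namespace_scan(now: float) -> None:
        """Regular scan: drop hidden files older than the grace period."""
        for name in list(namespace):
            if ".deleted." in name:
                deleted_at = int(name.rsplit(".", 1)[1])
                if now - deleted_at > GRACE_SECONDS:
                    namespace.pop(name)    # metadata gone -> the chunks become orphans

    delete("/logs/a")                      # still readable / undeletable under the hidden name
    namespace_scan(time.time() + GRACE_SECONDS + 1)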

  22. Fault Tolerance and Diagnosis • Fast recovery • No matter how the master or a chunkserver was shut down, it is designed to restore its state and restart within seconds • Chunk replication • Master replication • GFS also has "shadow" master servers that provide read-only access to the file system while the primary master is down • A monitoring process outside GFS starts a new master process on another machine that holds a complete copy of the operation log

  23. Fault Tolerance and Diagnosis • Data integrity • Each chunk is broken into 64 KB blocks, and each block has a 32-bit checksum. Like other metadata, checksums are kept separate from user data: they are held in memory, stored persistently on disk, and covered by the operation log • During idle periods, a chunkserver scans and verifies the contents of inactive chunks. This lets corruption be detected even in chunks that are rarely read
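
A sketch of per-block checksumming; CRC32 is used here only because it is a convenient 32-bit checksum, the slide does not say which algorithm GFS uses.

    import zlib

    BLOCK_SIZE = 64 * 1024                 # 64 KB blocks within a chunk

    def block_checksums(chunk_data: bytes) -> list:
        """One 32-bit checksum per 64 KB block, stored apart from the user data."""
        return [zlib.crc32(chunk_data[i:i + BLOCK_SIZE])
                for i in range(0, len(chunk_data), BLOCK_SIZE)]

    def verify(chunk_data: bytes, stored: list) -> bool:
        """Run on reads, and during idle scans of rarely read, inactive chunks."""
        return block_checksums(chunk_data) == stored

    data = b"x" * (3 * BLOCK_SIZE)
    sums = block_checksums(data)
    assert verify(data, sums)
    assert not verify(data[:-1] + b"y", sums)   # a corrupted byte is detected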
