
Mirror File System: A Multiple Server File System


Presentation Transcript


  1. Mirror File System: A Multiple Server File System John Wong, CTO, John.Wong@TwinPeakSoft.com, Twin Peaks Software Inc.

  2. Multiple Server File System • Conventional file systems – UFS, EXT3 and NFS – manage and store files on a single server and its storage devices • A multiple server file system manages and stores files on multiple servers and their storage devices

  3. Problems • A single resource is vulnerable • Redundancy provides a safety net • Disk level => RAID • Storage level => storage replication • TCP/IP level => SNDR • File system level => CFS, MFS • System level => clustering • Application level => database replication

  4. Why MFS? • Many advantages over existing technologies

  5. Unix/Linux File System [Diagram: Applications 1 and 2 run in user space; their file operations pass through UFS/EXT3 in kernel space, then the disk driver, to the on-disk data.]

  6. Network File System [Diagram: applications on the client access files through an NFS client mount; requests travel to the NFS daemon (NFSD) on the server, which stores the data through its local UFS/EXT3.]

  7. UFS | NFS [Diagram: applications reach one copy of Data B through the local UFS/EXT3 and the other copy through an NFS client mount, served by NFSD and the remote server's UFS/EXT3.]

  8. UFS + NFS • UFS manages data on the local server's storage devices • NFS manages data on a remote server's storage devices • Combining these two file systems manages data on both the local and remote servers' storage devices

  9. MFS = UFS + NFS [Diagram: on the active MFS server, applications go through MFS, which dispatches each operation to the local UFS/EXT3 and, over NFS, to the passive MFS server's UFS/EXT3, so both servers hold the same data.]

  10. Building Block Approach • MFS is a kernel loadable module • MFS is loaded on top of UFS and NFS • Standard VFS interface • No change to UFS and NFS
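  (A hedged aside, not from the slides: since MFS is described as a loadable kernel module, on Solaris one could check for it and load it by hand roughly as follows; the module name mfs and the path /kernel/fs/mfs are assumptions for illustration.)

  # modinfo | grep mfs
  # modload /kernel/fs/mfs

  The first command lists loaded kernel modules and filters for mfs; the second loads the module manually, although the first mfs mount would normally auto-load it.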

  11. File System Framework [Diagram: file operation system calls – open(), creat(), read(), write(), close(), lseek(), mkdir(), rmdir(), link(), ioctl(), sync(), statfs(), mount(), umount() – enter the kernel through the VFS and vnode interfaces, which dispatch them to the underlying file systems (UFS, NFS, VxFS, QFS, HSFS, PCFS) backed by disks, optical drives, or the network. Source: Solaris Internals: Core Kernel Architecture, Jim Mauro and Richard McDougall, Prentice Hall.]

  12. MFS Framework [Diagram: the same system call and VFS/vnode path as the previous slide, with MFS inserted at the vnode/VFS interface layer; MFS presents a single vnode interface upward and dispatches each operation to two underlying file systems, such as a local UFS and a remote NFS.]

  13. Transparency • Transparent to users and applications • - No recompilation or relinking needed • Transparent to existing file structures • - Same pathname access • Transparent to the underlying file systems • - UFS, NFS

  14. Mount Mechanism • Conventional Mount • - One directory, one file system • MFS Mount • - One directory, two or more file systems

  15. Mount Mechanism • # mount -F mfs host:/ndir1/ndir2 /udir1/udir2 • First the NFS file system is mounted on a UFS directory • Then MFS is mounted on top of both UFS and NFS • The existing UFS tree structure /udir1/udir2 becomes the local copy of MFS • The newly mounted host:/ndir1/ndir2 becomes the remote copy of MFS • Same mount options as NFS, except no '-o hard' option
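  (A hedged end-to-end sketch of the mount described on this slide; the host name backup1 and the path /export/home are hypothetical.)

  # mount -F mfs backup1:/export/home /export/home

  After this one command, the existing local UFS tree /export/home is the local copy, the NFS-mounted backup1:/export/home is the remote copy, and applications keep using the same /export/home pathnames.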

  16. MFS mfsck Command • # /usr/lib/fs/mfs/mfsck mfs_dir • After the MFS mount succeeds, the local copy may not be identical to the remote copy • Use mfsck (the MFS fsck) to synchronize them • mfs_dir can be any directory under the MFS mount point • Multiple mfsck commands can be invoked at the same time
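  (A hedged usage sketch, reusing the hypothetical /export/home mount point from the previous slide; two mfsck instances synchronize disjoint subdirectories in parallel, which the slide says is allowed.)

  # /usr/lib/fs/mfs/mfsck /export/home/alice &
  # /usr/lib/fs/mfs/mfsck /export/home/bob &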

  17. READ/WRITE Vnode Operations • All VFS/vnode operations are received by MFS • READ-related operations (read, getattr, …) go only to the local copy (UFS) • WRITE-related operations (write, setattr, …) go to both the local (UFS) and remote (NFS) copies simultaneously, using threads
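  (A hedged illustration of the read/write split as it would look from a shell; active# and passive# are prompts on the two servers, and the paths reuse the hypothetical mount above.)

  active#  echo hello > /export/home/alice/f1
  active#  cat /export/home/alice/f1
  passive# cat /export/home/alice/f1

  The write on the active server fans out to the local UFS and the remote NFS copy; the read on the active server is served from the local UFS copy only; the read on the passive server sees the mirrored file through its own local file system.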

  18. Mirroring Granularity • Directory level • - Mirror any UFS directory instead of the entire UFS file system • - Directory A mirrored to server A • - Directory B mirrored to server B • Block-level update • - Only the changed blocks are mirrored

  19. MFS msync Command • # /usr/lib/fs/mfs/msync mfs_root_dir • A daemon that resynchronizes an MFS pair after the remote MFS partner fails • Upon a write failure, MFS: • - Logs the name of the file whose write failed • - Starts a heartbeat thread to detect when the remote MFS server is back online • Once the remote MFS server is back online, msync uses the log to copy the missing files to it
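  (A hedged recovery sketch, reusing the hypothetical mount point: once the remote partner is back online, msync replays the logged files, after which MFS resumes mirroring in real time.)

  # /usr/lib/fs/mfs/msync /export/home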

  20. Active/Active Configuration [Diagram: two active MFS servers; applications on each server write through MFS to the local UFS and, over NFS, to the other server, so Data A and Data B are mirrored in both directions.]

  21. MFS Locking Mechanism MFS uses the UFS and NFS file record locks. Locking is required for the active-active configuration: it makes write-related vnode operations atomic. Locking is enabled by default, and it is not necessary in the active-passive configuration.
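  (A hedged illustration of why record locking matters in the active-active configuration; serverA# and serverB# are hypothetical prompts on the two active nodes.)

  serverA# echo "entry from A" >> /export/home/shared.log
  serverB# echo "entry from B" >> /export/home/shared.log

  The record lock serializes the two mirrored appends, so both copies of shared.log end up with the same two lines in the same order.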

  22. Real-Time and Scheduled • Real-time • -- Replicate files in real time • Scheduled • -- Log the file path, offset and size of each change • -- Replicate only the changed portion of a file
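  (A hedged sketch of what a scheduled-mode change log entry could record; this format is an assumption for illustration, not the actual MFS log layout.)

  /export/home/alice/f1  offset=8192  size=4096

  At the next scheduled replication, only that 4 KB region of the file is sent to the remote copy, not the whole file.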

  23. Applications • Online file backup • Server file backup (active → passive) • Server/NAS clustering (active ↔ active)

  24. MFS = NTFS + CIFS [Diagram: on a Windows desktop/laptop, applications go through MFS, which writes to the local NTFS and, over CIFS, to the remote server's NTFS.]

  25. Online File Backup, Real-Time or Scheduled [Diagram: MFS on a user desktop/laptop replicates folders over a LAN or WAN to an MFS-equipped ISP server.]

  26. Server Replication [Diagram: primary and secondary servers connected by a heartbeat; applications and email on the primary are replicated by the Mirror File System along the mirroring paths /home and /var/spool/mail.]

  27. Enterprise Clusters [Diagram: multiple application servers, each running the Mirror File System, replicate along a central mirroring path.]

  28. Advantages • Building block approach • -- Builds upon the existing UFS, EXT3, NFS and CIFS infrastructures • No metadata is replicated • -- The superblock, cylinder groups and file allocation maps are not replicated • Every file write operation is checked by the file system • -- File consistency and integrity • Live file, not raw data, replication • -- The primary and backup copies are both live files

  29. Advantages • Interoperability • -- Two nodes can be different systems • -- Storage systems can be different • Small granularity • -- Directory level, not entire file system • One to many or many to one replication

  30. Advantages • Fast replication • -- Replication happens in the kernel file system module • Immediate failover • -- No fsck or mount operation needed • Geographically dispersed clustering • -- The two nodes can be separated by hundreds of miles • Easy to deploy and manage • -- Only one copy of MFS, running on the primary server, is needed for replication

  31. Why MFS? • Better Data Protection • Better Disaster Recovery • Better RAS • Better Scalability • Better Performance • Better Resources Utilization

  32. Q & A [Diagram: two MFS servers mirroring Data A and Data B to each other.]
