
Coda System Administration



  1. Coda System Administration Part 1

  2. Aims
  • Overview of Coda subsystems
  • Subsystems' use of configuration data
  • Sharing of configuration data by subsystems
  • Distribution of configuration data to servers
  • Scripts manipulating configuration data

  3. Client subsystems [architecture diagram]. Processes shown: the in-kernel module (KERNEL), Venus, advice_srv, auth2, and the file server (srv). Tools shown: vutil (signals Venus), codacon (connects to Venus's console), cfs/repair/hoard (talk to Venus via pioctl), clog and au (talk to auth2 via rpc2), and cmon (talks to srv via rpc2/udp). The kernel talks to Venus through /dev/cfs0 (DeviceIoControl on Windows); Venus talks to srv via rpc2. Ports in the figure: udp 1355 coda_opcons, udp 1357 coda_auth, udp 1361 coda_filesrv, udp 1363 coda_venus, tcp 1423 coda_console; the advice_srv udp port is marked "???".

  4. Client Configuration
  • Depends on:
    • dirs /usr/coda/{etc,spool,venus.cache}
    • device /dev/cfs0
    • file /usr/coda/etc/vstab with a few auth2/srv servers in it (for auth2 & rootvol)
    • entries in /etc/services (sketched below)
  • venus-setup sets these up halfway decently
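  As an illustration, the /etc/services entries implied by the diagrams on slides 3 and 6 would look roughly like this; the port numbers and service names are taken from the figures, but the exact set venus-setup installs is an assumption:

      # Coda services (ports as shown in the slide 3 and 6 diagrams)
      coda_opcons   1355/udp
      coda_auth     1357/udp
      coda_updsrv   1359/udp
      coda_filesrv  1361/udp
      coda_venus    1363/udp
      coda_console  1423/tcp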

  5. Config-related work for clients
  • Weed out hard-wired paths and host names
  • Choose UDP ports (the 10?? range is too low)
  • Register a device number for BSD /dev/cfs
  • GetRootVol and clog/au have problems with vstab

  6. Server subsystems [architecture diagram]. Processes shown: srv, auth2, update (updatesrv/updateclnt), backup, and volutil. Recoverable connections: volutil and backup talk to srv over rpc2/udp 1361, authenticated with the volutil.tk token; updateclnt contacts the update server on udp 1359 (coda_updsrv); auth2 holds the auth2.tk token.

  7. Server Config Files
  • About 36 config files, some of the form ".file"
  • The files are manipulated most heavily by some 15 shell scripts
  • Some data is not in config files but in "serverinfo" files
  • vice-server-setup sets up a super-simple standalone server in one blow

  8. Configuration files
  • Server depends on:
    • dirs: /vice/{db,vol,bin,auth2,backup} & /vice/vol/remote (see the sketch after this list)
    • RVM-related information
    • Volume/Cluster/Host information
    • Root volume information
    • backup information
    • authentication, protection information
    • update server information
    • rc scripts, pid files, lock files
    • server partitions: "dynamically discovered"
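  A minimal sketch of the directory skeleton this implies, roughly what vice-server-setup would create (modes, ownership, and the individual files are not shown):

      #!/bin/sh
      # Create the server directory skeleton listed above
      # (a sketch, not the actual vice-server-setup script).
      mkdir -p /vice/db /vice/vol /vice/bin /vice/auth2 /vice/backup
      mkdir -p /vice/vol/remote   # fetched remote VolumeLists land here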

  9. RVM
  • RVM configuration:
    • RVM data & log device
    • their lengths, RVM start address, heap/static length, nlists, chunksize (an illustrative record follows the list)
  • Problems:
    • server doesn't come up easily with non-standard parameters
    • the RVM config data is not kept on the server (serverinfo files live in /afs/cs/project/coda-braam)
  • Setup: vice-rvmsetup
  • Related scripts: startserver/norton
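  The parameters listed above map naturally onto a small per-server record. A hypothetical serverinfo-style fragment (field names and values are illustrative; this is not an actual Coda file format):

      # RVM parameters for one server (illustrative names and values)
      rvm_log       = /dev/sdb1     # log device
      rvm_data      = /dev/sdb2     # data device
      log_length    = 30M
      data_length   = 315M
      rvm_start     = 0x50000000    # address the data segment maps at
      heap_length   = 0x10000000
      static_length = 0x100000
      nlists        = 80
      chunksize     = 32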

  10. Proposals for RVM config
  • RVM info for a server held in a server database
  • startserver & norton use this
  • vice-rvmsetup offers a collection of "known good" setup parameters for size parms etc.

  11. Cluster information
  • /vice/db/servers: contains pairs (hostname, serverid)
  • /vice/db/hosts: hostnames & their IP addresses (example contents for both files are sketched below)
  • /.host, /.scm: only used by scripts
  • /ROOTVOLUME: used by the GetRootVol RPC
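  For concreteness, a sketch of both files using the host names from the slide-13 example (the server ids and IP addresses are made up):

      # /vice/db/servers -- (hostname, serverid) pairs
      scarlatti   1
      puccini     2
      rossini     3

      # /vice/db/hosts -- hostnames & their IP addresses
      10.0.0.1    scarlatti
      10.0.0.2    puccini
      10.0.0.3    rossini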

  12. Proposals for cluster info
  • Obsolete: /vice/db/hosts, /.host (they are kludges)
  • /.scm should go in a server database
  • /ROOTVOLUME should go in a server db

  13. Proposed Coda DB

      [global]
      ROOTVOLUME = coda.root.readonly
      cell = microsoft.com
      scm = scarlatti
      servers = scarlatti, puccini, rossini, ….

      [puccini]
        [partitions]
        /vicepa raw_type
        /vicepb tree_type depth=5,width=4
        [rvm]
        data = /dev/sda5
        datasize = 10000000

      [rossini]
      ……….

  14. Update
  • Crucial databases are in /vice/db
  • /vice/db is kept identical on all servers in the cluster
  • /vice/db/files is the list of files to be kept up to date by the update client, contacting the update server on the SCM
  • /vice/db/files contains (written out below):
    • auth2.tk, volutil.tk, auth.pw, vice.pdb, vice.pcf, pro.db
    • servers, hosts, VSGDB, VLDB, VRDB
    • dumplist
    • files
  • AFS has migrated this into ubik
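  Written out literally, /vice/db/files would then read something like the following; the names come straight from the list above, but one name per line is an assumption about the format:

      auth2.tk
      volutil.tk
      auth.pw
      vice.pdb
      vice.pcf
      pro.db
      servers
      hosts
      VSGDB
      VLDB
      VRDB
      dumplist
      files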

  15. Volume databases
  • Dichotomy:
    • almost all RPCs to srv invoke only the RVM-resident volume information
    • GetVolInfo uses the external databases
  • Here our concern is with volume information held in files on the servers, not in RVM.

  16. Volume creation
  • Input: name, groupid, vsgid, partition, full-backup day
  • Find the servers from the vsgid & VSGDB
  • volutil -h server$i createvol name.$i groupid
  • as a result, server$i updates its /vice/vol/VolumeList
  • fetch VolumeList from all servers: make BigVolumeList
  • rebuild the VLDB from BigVolumeList
  • parse /vice/vol/AllVolumes, rebuild the VRDB
  • enter backup information in /vice/db/dumplist (the whole flow is sketched below)
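  Put together as a script, the flow looks roughly like this. The volutil createvol/bldvldb/mkvrdb invocations come from these slides; the VSGDB lookup, the variable names, and the glue are assumptions, and error handling, the AllVolumes-to-VRList step, and the dumplist update are omitted:

      #!/bin/sh
      # Sketch: create a replicated volume NAME with group id GROUPID
      # on the servers of volume storage group VSGID.
      NAME=$1; GROUPID=$2; VSGID=$3

      # Look up the server list for this VSG; assumes the
      # (vsgid, serverlist) line format described on slide 17.
      SERVERS=`awk -v id="$VSGID" \
          '$1 == id { for (i = 2; i <= NF; i++) printf "%s ", $i }' \
          /vice/db/VSGDB`

      # Create one replica per server; each server appends the new
      # replica to its own /vice/vol/VolumeList.
      i=0
      for S in $SERVERS; do
          volutil -h $S createvol $NAME.$i $GROUPID
          i=`expr $i + 1`
      done

      # Gather the servers' VolumeLists (already fetched into
      # /vice/vol/remote) into BigVolumeList.
      rm -f /vice/vol/BigVolumeList
      for S in $SERVERS; do
          cat /vice/vol/remote/VolumeList.$S >> /vice/vol/BigVolumeList
      done

      # Rebuild the binary VLDB; side effect: the text version
      # /vice/vol/AllVolumes is (re)written.
      volutil bldvldb /vice/vol/BigVolumeList

      # Rebuild the VRDB from the VRList derived from AllVolumes
      # (the AllVolumes -> VRList parse is not shown here).
      volutil mkvrdb /vice/vol/VRList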

  17. Volume Databases: VSGDB
  • VSGDB: the volume storage group database
  • simple structure: (vsgid, serverlist)
  • text format (a hypothetical entry follows)
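  A hypothetical VSGDB entry, reusing the host names from the slide-13 example (the vsgid value is invented):

      # /vice/db/VSGDB -- one (vsgid, serverlist) pair per line
      E0000100  scarlatti puccini rossini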

  18. Volume databases: VRDB
  • Only rebuilt inside the createvol_rep shell script
  • Binary, made on the SCM from /vice/vol/VRList through "volutil mkvrdb /vice/vol/VRList"
  • /vice/vol/VRList: text, structure: (name, groupid, noservers, vol1, vol2, … vol8, vsgid) (an illustrative entry follows)
  • VRList is built from /vice/vol/AllVolumes
  • AllVolumes is a parsed version of BigVolumeList
  • /vice/vol/BigVolumeList on the SCM is the union of /vice/vol/VolumeList on all the servers
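  An illustrative VRList entry following that structure; the volume ids and vsgid are invented, and padding unused replica slots with 0 is an assumption:

      # name  groupid  noservers  vol1 ... vol8 (padded)  vsgid
      coda.root  7f000001  3  1000001 2000001 3000001 0 0 0 0 0  E0000100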

  19. Volume Databases: VLDB
  • Binary, made on the SCM by bldvldb.sh from /vice/vol/BigVolumeList through "volutil bldvldb /vice/vol/BigVolumeList"
  • Side effect: /vice/vol/AllVolumes is created, which is the text version of the VLDB
  • BigVolumeList: for S in $SERVERS; do cat /vice/vol/remote/VolumeList.$S >> BigVolumeList; done
  • /vice/vol/remote is populated through (ks)rcp or ftp
  • Contains lots of info: backup volumes, quota, space, creation dates, etc.

  20. VLDB (ctd)
  • bldvldb.sh is called inside the createvol{_rep} scripts
  • Trouble maker: used to hang in >50% of cases
  • Fix (mostly implemented):
    • run the update server on all servers
    • use updatefetch (a new client of the update server), not rfs, ftp or {ks}rcp
    • fetch VolumeList only from the hosts where the volume was created (sketched below)
  • Separate script for updating the VRDB
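  The fixed fetch step then reduces to a loop like the one below. updatefetch is named on this slide, but the argument syntax shown is an assumption, not the tool's documented interface:

      #!/bin/sh
      # Sketch: fetch VolumeList only from the servers where the
      # volume was created, via updatefetch rather than ftp/(ks)rcp.
      for S in $SERVERS; do    # $SERVERS: hosts that got a replica
          # hypothetical argument order: host, remote file, local file
          updatefetch -h $S /vice/vol/VolumeList \
              /vice/vol/remote/VolumeList.$S
      done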
