
Status of Florida Tier2 Center


Presentation Transcript


  1. Status of Florida Tier2 Center
     A mini tutorial on ROCKS appliances
     Jorge L. Rodriguez, February 2003

  2. Future UF System Architecture
     From the previous talk:
     • Keep the multiple-headnode / pool-of-compute-nodes architecture
     • Move the user file systems off the multiple headnodes and onto grinhead
       • Simplify user maintenance
       • Implement this with NIS or LDAP directory services
       • ROCKS is developing an LDAP implementation of user directory services for their clusters
     • Base cluster management on the latest version of ROCKS
       • The grinhead node will be the ROCKS server
       • Define the current configuration as appliances and modules
       • Integrate the new 3ware RAID fileserver as an appliance
     • Integrate with FNAL's ROCKS-based Tier1/Tier2 system
       • Must be able to handle local requirements, including Grid and local projects of interest
       • Must be able to implement an update scheme

  3. System Architecture
     Frontend node (grinhead): user file system; ROCKS database, dhcp, NIS, NFS server, Ganglia, ...
     Compute nodes: grinux computational nodes
     • Head and compute nodes, etc. (CE, WN, SE) are expressed as ROCKS appliances
     • Complete description, including installation of the VDT and the local scheduler configuration
     • Good way of communicating installation procedures within the Tier1/Tier2 centers

  4. Building ROCKS appliances
     (slide shows a dot graph of the Florida cluster)
     • Graphs and nodes
       • Nodes are collections of packages that provide a function; they include software and configuration
       • Graphs are the collection of all nodes and edges defining a cluster
     • Appliances are collections of nodes and edges that define a machine
       • The leftmost node in a graph (the root)
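To make the graph/node picture concrete, here is a minimal sketch of what a tiny graph fragment could look like, written from the shell since a graph file is just XML on the frontend. The node names "my-appliance", "my-extras" and "some-shared-node" are purely illustrative and are not part of the Florida configuration; a real appliance gets wired into the default graph as shown on the next slide.

# Toy illustration only: write a throwaway graph fragment to /tmp to show
# the shape of a ROCKS graph file (root appliance on the left, edges
# pulling in shared nodes). All names are made up.
cat > /tmp/toy-graph.xml << 'EOF'
<?xml version="1.0" standalone="no"?>
<graph>
  <!-- "my-appliance" is the leftmost (root) node: it defines a machine type -->
  <edge from="my-appliance" to="my-extras"/>
  <!-- "my-extras" would itself be a node file bundling packages and config -->
  <edge from="my-extras" to="some-shared-node"/>
</graph>
EOF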

  5. Appliances: graphs, nodes and XML
     • Modify the graph (the default graph is in defaults.xml): add edge statements in XML to /home/install/profiles/2.3/graphs/defaults.xml
       <edge from="frontend-dgt" to="frontend-florida"/>
       <edge from="frontend-dgt" to="vdt"/>
       <edge from="frontend-dgt" to="VOMS"/>
       <edge from="frontend-dgt" to="Mona-Lisa"/>
       Note: ROCKS provides a site-nodes/ directory where one can insert nodes. It allows one to modify or replace existing nodes, but I'm not sure exactly how to use it …
     • Add the node (.xml) files
     An appliance is just a node at the far left of the graph (the root)
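For the "add the node (.xml) files" step, here is a hedged sketch of what such a file might look like, assuming the standard ROCKS node layout (a <kickstart> element with <description>, <package> and <post> sections) and assuming site-nodes/ under profiles/2.3 is the right place for local nodes, which the slide itself is unsure about. The node name "uf-example", the package and the post script are illustrative, not one of the actual Florida nodes, and it would also need a matching <edge> in the graph.

# Hedged sketch: create a hypothetical local node file. Everything written
# by this heredoc is illustrative content, not the real UF configuration.
cat > /home/install/profiles/2.3/site-nodes/uf-example.xml << 'EOF'
<?xml version="1.0" standalone="no"?>
<kickstart>
  <description>
    Example local node: an extra package plus a post-install step
  </description>
  <!-- RPMs this node adds to any appliance that includes it -->
  <package>openssh-clients</package>
  <!-- shell fragment run at the end of the kickstart install -->
  <post>
    echo "uf-example node applied" >> /root/uf-example.log
  </post>
</kickstart>
EOF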

  6. Appliances: ROCKS Database
     • Add the appliance to the ROCKS database
     • Add an entry to the appliance table:
       id        = primary key (auto increment)
       name      = appliance name (e.g. frontend-igt)
       shortname = appliance short name (e.g. f-igt)
       graph     = xml graph filename where the appliance lives
       node      = root of the graph (e.g. frontend-igt)
     • Add an entry to the memberships table:
       id           = primary key (auto increment)
       name         = human-readable name (used by insert-ethers)
       appliance    = primary key (id) from the appliance table
       distribution = primary key from the distribution table
       compute      = "yes": pbs jobs will be scheduled here; "no": they won't
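A hedged sketch of what the two inserts could look like from the frontend, assuming the ROCKS MySQL database is named "cluster" and the tables are named appliances and memberships with the columns listed above (verify against the live schema and copy the conventions of an existing row, e.g. for the graph filename). The appliance name follows the slide's frontend-igt example; the membership name, the distribution id and the compute flag are illustrative.

# Hedged sketch: hand-insert the appliance and membership rows.
# Table/column names follow the slide; check them against the real schema.
mysql cluster << 'EOF'
-- 'default' is a guess for the graph filename; copy the value existing rows use
INSERT INTO appliances (name, shortname, graph, node)
  VALUES ('frontend-igt', 'f-igt', 'default', 'frontend-igt');

-- reuse the auto-increment id of that insert for the membership row;
-- 1 stands in for an existing distribution id, and compute='no' keeps
-- pbs jobs off this appliance
INSERT INTO memberships (name, appliance, distribution, compute)
  VALUES ('Frontend IGT', LAST_INSERT_ID(), 1, 'no');
EOF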

  7. Some useful commands
     • Building the dot .gif files, for your viewing pleasure (on the ROCKS frontend node):
       cd .../profiles/2.3
       kpp [-g default-meet.xml] | dot -Tgif > default-meet.gif
       kpp is the kickstart pre-processor; without an argument it produces a directed-graph file that can be read by dot. dot draws the directed graph and outputs it in various formats.
     • Building the kickstart file:
       cd .../profiles/2.3
       kpp <appliance name> | kgen > foo.kickstart
       kpp with an argument generates more xml, which is read by kgen and translated into a kickstart file. See section 4 of the ROCKS paper "Leveraging Standard Core Technologies to Programmatically Build Linux Cluster Appliances".
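Putting the two commands together, a short usage sketch for one hypothetical appliance (the name frontend-igt follows the earlier example; the output filenames are arbitrary):

# Run on the ROCKS frontend: draw the full default graph, then build and
# inspect the kickstart file for a single appliance.
cd /home/install/profiles/2.3
kpp | dot -Tgif > full-graph.gif                  # whole graph as a .gif
kpp frontend-igt | kgen > frontend-igt.kickstart  # kickstart for one appliance
less frontend-igt.kickstart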

  8. Installing the machine
     The final step: install the appliance on a machine
     • Use the usual insert-ethers method:
       • Start up the insert-ethers program on the frontend node
       • Select the new appliance type from the menu
       • Crank up the blank machine
     • Mucking with tables (no power cycling):
       • Modify the nodes table, replacing the old appliance
       • insert-ethers --update
         This regenerates all configuration files with the new DB info
       • shoot-node: reinstalls the node as the new appliance
       • This hasn't been fully tested yet
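A hedged sketch of the "mucking with tables" path for a single node, assuming the nodes table points at the memberships table through a membership column (the column name is a guess; check the schema first). The hostname grinux42 and the membership id 7 are illustrative.

# Point an existing node at the new appliance's membership row.
mysql cluster -e "UPDATE nodes SET membership = 7 WHERE name = 'grinux42';"
# Regenerate dhcp/hosts/etc. configuration files from the updated database.
insert-ethers --update
# Reboot and reinstall the box so it comes up as the new appliance.
shoot-node grinux42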

  9. Status
     • Appliances defined:
       • headnodes (CE): derived from slave-node-auto plus UF-specific nodes + VDT installation + …
       • computenodes (WN): straight from ROCKS
       • fileservers (SE): derived from slave-node plus UF-specific nodes
     • Status:
       • WN are done, installation tested, good to go!
       • CE appliance has been installed on one machine; more xml is needed
       • SE appliance is defined; need to write the xml and test
       • Need to test installation without insert-ethers
       • Hope to finish before the end of next week
     • Major problems with the new RAID fileserver:
       • 3ware + WD 180 GB drive problem!!!!
       • Sensitive to loose connectors? Giving them one last chance!
