
Jefferson Lab Site Report



Presentation Transcript


  1. Jefferson Lab Site Report
  Sandy Philpott
  Thomas Jefferson National Accelerator Facility
  Newport News, Virginia USA
  Sandy.Philpott@jlab.org
  757-269-7152
  http://cc.jlab.org
  HEPiX -RZTUBS, 4/6/00

  2. Jefferson Lab Common User Environment (CUE)
  [Architecture diagram; recoverable labels:]
  • Mass Storage: JLABH/S Central Computing, SUN E4000, OSM, Symbios /stage, STK Redwoods, DLT jukebox; jput/jget, jexport
  • Central Fileservers: RAServer CIFS (NetApp); NetApp /home, /apps, /group, /site; Metastor /work
  • JLABN* NT Domain
  • Network: batch farm (LSF), interactive farm (IFARMH/L/S), farm cache (CACHEL /cache, FARML/S)
  • OS key: H = HP-UX 10.20, L = Linux RH5.2, N = NT 4.0, S = Solaris 2.6

  3. Central Computing
  • UNIX
    • Solaris - stable; 2.6
    • HP-UX - no new systems; 10.20 (still a lot around)
    • AIX - decommissioned
  • Windows
    • No physics computing; desktop only
    • Currently NT 4.0; Single Master Domain
    • Testing 2000 client; no immediate plans for server
  • Network Appliance - concurrent NFS & CIFS
    • Guarantees 3 years full support, 5 years "gold" (new!)
    • /home, /group, /site, /apps, /mail

  4. Experimental Physics Computing
  • File Servers
    • /work - 5 TB total - Metastor - RAID 5 - NFS
    • New 800 GB Linux /cache server in testing - RAID 0
  • Batch Farming
    • 75 RH Linux dual-processor nodes - 2700 SPECint95
    • LSF licensing/pricing issue being resolved
  • Mass Storage
    • CA OSM - now on 2 SUN servers (new)
    • STK Powderhorn silo - /mss
    • 8 Redwoods
    • 10 new 9840s working

  5. Current Configuration
  [Diagram: batch farm nodes, work/cache file servers, mss nodes, and ifarm systems on Cisco 2900 switches around a central CISCO 5500; legend: 100 Mbit, 1000 Mbit, FCAL (100 MByte)]
  • Central Cisco 5500 switch.
  • Cisco 2900 switches with gigabit uplinks to some farm nodes.
  • Work and cache on the same file servers, with 100 Mbit Ethernet.
  • Local stage disk on mss nodes.
  • Direct access to ifarm systems.

  6. First Stage
  [Diagram: batch farm nodes, work and cache file servers, mss nodes, and ifarm systems on Cisco 2900 switches around a central CISCO 5500; legend: 100 Mbit, 1000 Mbit, FCAL (100 MByte)]
  • Add separate cache file servers.
  • Cache servers are divided into groups.
  • TapeServer will copy to cache servers via our in-house protocol (not NFS).
  • Allow NFS access only to interactive cache systems.
  • Data flow should become MSS -> Cache -> Farm -> Work -> MSS.
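The "cache servers are divided into groups" step above implies some deterministic mapping from a file to the group that caches it. A minimal sketch of one such mapping, assuming a hash-based scheme; the group names and hashing are illustrative assumptions, not JLab's actual in-house protocol:

```python
import hashlib

# Hypothetical cache server groups (names are assumptions for illustration).
CACHE_GROUPS = ["cache1", "cache2", "cache3"]

def cache_group_for(path: str) -> str:
    """Deterministically map an /mss file path to one cache group,
    so every copy of the same file lands on the same group."""
    digest = hashlib.md5(path.encode()).hexdigest()
    return CACHE_GROUPS[int(digest, 16) % len(CACHE_GROUPS)]
```

Because the mapping is a pure function of the path, the TapeServer and the farm nodes can independently compute which group holds a given file without a central lookup.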

  7. Second Stage
  [Diagram: work, cache, and mss servers, analysis farm nodes, and batch farm nodes on Cisco 2900 switches around a Foundry BigIron 8000, plus an FCAL switch; legend: 100 Mbit, 1000 Mbit, FCAL (100 MByte)]
  • Replace the central switch with a Foundry BigIron 8000 (faster backplane).
  • Farm nodes on Cisco 2900 switches.
  • Separate analysis farm.
  • Upgrade work file servers and use gigabit Ethernet.
  • More cache file servers.
  • Increased staging disk space on mss nodes.
  • Future tape drives may be on FCAL.

  8. Third Stage
  [Diagram: work, cache, mss, and stager servers, analysis farm nodes, and batch farm nodes on Cisco 2900 switches around a Foundry BigIron 8000, plus an FCAL switch; legend: 100 Mbit, 1000 Mbit, FCAL (100 MByte)]
  • More farm nodes, cache, and work servers.
  • More mss nodes sharing file systems on a SAN.
  • More staging disk space.
  • Stager nodes copy data to and from stage disks.
  • OSM is replaced.

  9. Projects Status
  • Distributed Web Servers - Linux, Apache - done
  • Distributed Systems Management - no recent changes
    • Mon, Jman - JLab management database - Linux/MySQL
  • User Support/HelpDesk - CCPR implemented (was GNATS)
    • Linux MySQL database
    • NT ColdFusion/IIS interface
  • Desktop Support
  • Security
    • SSH, Secure IMAP - get rid of clear-text passwords!
    • Telnet open only to 1 Internet-accessible machine
    • Secure IMAP protocol only, after April 2000
    • DMZ - DNS, Secure IMAP (no Web mirror or secure FTP yet)
  • Grid…

  10. Projects Status (cont)
  • Windows
    • 2000 Pro in eval; wait to eval Server
    • 95/98 support ends June 30 (no domain authentication)
    • SMS - beginning widespread use
  • Linux: announcing CC desktop support this week
    • 2 levels - standard CUE configuration
      • 1 - no root, /home
      • 2 - root, no /home (Samba 2.6?)
    • /site, 2 /apps versions exported read-only to the entire site
    • Kickstart installs with floppy
    • Nightly autoRPM security updates
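The floppy-based Kickstart installs above are driven by a ks.cfg answer file. A minimal sketch of what such a file looks like; the server name, export path, and package selection are assumptions for illustration, not the lab's actual configuration:

```
# ks.cfg -- hypothetical minimal Kickstart file for an unattended
# Red Hat desktop install (all site-specific values are assumed)
lang en_US
keyboard us
install
nfs --server installserver.jlab.org --dir /export/redhat
rootpw --iscrypted $1$examplehashonly
%packages
@ Base
```

Booting from the Kickstart floppy points the installer at this file, so every desktop comes up with the same standard CUE configuration without interactive prompts.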

  11. Projects Status (cont)
  • UNIX/NT Integration - Passwords - target June 30
  • Why we need it:
    • Even just 2 accounts (UNIX & NT) confuse users! (They have at least 10 different names based on use.) We recently added a third, calendar, account, which has to be requested.
    • Remote and UNIX-only users can't change their NT password, but it expires, and then they can't access services based on NT authentication (dial-in, MIS).
  • Next phases:
    • User Accounts
    • Groups/Netgroups
  • Common Password Implementation
    • Java
    • MySQL databases - user, pending actions
    • Public/private key authentication
    • Web & command-line user interface
    • User interface -> Master controller
      • Input from web interface, command line
      • Outputs passwords to NT, NIS, DB
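The master-controller idea above amounts to fanning one password change out to every backend (NT, NIS, the MySQL user database). A minimal sketch of that dispatch pattern; the backend functions are illustrative stubs, since the real system is described as Java with MySQL:

```python
# Hypothetical stubs: the real versions would call the NT domain API,
# push a new NIS map entry, and update the MySQL user table.
def update_nt(user: str, pw: str) -> bool:
    return True

def update_nis(user: str, pw: str) -> bool:
    return True

def update_db(user: str, pw: str) -> bool:
    return True

BACKENDS = {"NT": update_nt, "NIS": update_nis, "DB": update_db}

def change_password(user: str, pw: str) -> dict:
    """Apply one password change to every backend and
    report per-backend success, so partial failures are visible."""
    return {name: fn(user, pw) for name, fn in BACKENDS.items()}
```

Returning a per-backend status (rather than a single flag) lets the controller retry or queue a pending action for just the backend that failed, matching the "pending actions" table mentioned above.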
