
Creating a Campus NT Network using NT4 and OpenVMS






Presentation Transcript


  1. Creating a Campus NT Network using NT4 and OpenVMS ES166

  2. Presenters David Lyon (dclyon@csupomona.edu) Systems Analyst Cal Poly Pomona University Hari Singh (hsingh@csupomona.edu) Information Technology Consultant Cal Poly Pomona University

  3. Introduction • This session presents a case study of the implementation of a mixed Windows NT/OpenVMS environment campus wide. We will discuss the details related to the implementation including design, testing, deployment and technical issues. We will also discuss security, policy/procedures and plans for future growth.

  4. Rationale • The project grew out of the idea that campus users and technicians would benefit from a unified campus computing environment. • Linking the OpenVMS and NT environments was a real possibility, with the benefit of a unified environment and single sign-on for users. • Mail, database, interactive, Web and NT access could all be reached from a single account, with the details hidden from the user.

  5. Motivators • Client license hassles with PWV5 • Disparate network environments • Duplication of effort was prevalent • Threat of more islands forming • Central IT department becoming isolated from departments (a problem of trust) • Multiple authentications needed for VMS (mail, etc.) and file sharing • Many spare DEC Alpha servers existed (64-bit VMS) • NT applications due to be deployed (Citrix, PeopleSoft, etc.)

  6. Status Quo • There were 20-30 DEC Alpha and VAX servers scattered about campus running OpenVMS and PATHWORKS. • Departments were implementing Windows NT and had few management privileges on the central-IT-provided DEC servers. • Each server was stand-alone and required separate account management. The servers were getting little to no use due to lack of collaboration and resources. • The servers became a nightmare to access due to license problems (PATHWORKS client).

  7. Proposed Environment (diagram) - departmental VMS BDCs; a central IT department containing the NT PDC, an NT BDC, a VMS utility server and the central backup server; a VMS cluster providing authentication and group shares; all connected through a switch.

  8. Initial Design and Technical Approach • Design based on a single campus wide NT domain • PDC would reside on a server maintained by the central IT department • Campus technicians would share in the administration • Environment would run over TCP/IP and span subnets • Existing hardware and software would be used with minimal expenditures needed

  9. Initial Design and Technical Approach • Large spare parts cache existed and servers could still be upgraded (64 to 256MB) • Cooperation and collaboration needed for success • Planned downtime of servers, services and network hardware was essential • A stable network infrastructure was key

  10. Initial Design and Technical Approach • Domain - a single domain, campus wide. Trusting domains would eventually be phased out. • Naming - multiple WINS servers would be configured as replication partners. • PDC - the PDC would be a DEC Alpha box running PATHWORKS Advanced Server. A BDC would be deployed in the same subnet. It was decided later that the PDC/BDC should be Windows NT based.

  11. Initial Design and Technical Approach • Alphas - DEC Alpha servers would be loaded with OpenVMS 7.1/7.2 and Advanced Server, acting as department file servers and as a repository of "group" personal shares and other data. Advanced Server would enable them to be BDCs, manageable with the NT GUI tools (Server Manager, etc.). • DFS - Microsoft Distributed File System would be installed on the PDC and main BDC and used for mapping common and personal shares.

  12. Initial Design and Technical Approach • Backups - data that needed to be included in the central IT backup rotation (off site, heavily monitored and managed) would be stored on the DEC Alphas. A backup system was already in place that used DECnet to back up files across the network to a central server with a high capacity tape drive. Central IT staff would handle those backups.

  13. Initial Design and Technical Approach • Single Signon - Advanced Server would be installed on the central DEC Cluster (central IT department). The software would be tuned to have few users and would primarily be used for single signon. • VMS logins would validate passwords against Windows NT.

  14. Authentication Flow

  15. Initial Testing • The proposed implementation would be tested on a subset of hardware to determine workability. The testing would be done by the team that created the proposal.

  16. Initial Testing - Preparation • Early versions of PATHWORKS Advanced Server used (Version 6.0) • Existing DEC 3000 servers upgraded to OpenVMS 7.1 (needed to run Advanced Server s/w) • An Intel box running NT4/SP3 was prepared to act as PDC • DFS was installed on the PDC.

  17. Initial Testing - Results • Advanced Server - functioned as desired with no major issues. • Performance - some testing was done on the DEC 3000s but no performance issues were identified. • Single Signon - the feature worked, though issues arose later that were ultimately resolved. • Dave Client - (from www.thursby.com) worked with Advanced Server. Dave is an NT client for the Macintosh.

  18. Initial Testing - Results • DFS - DFS (Distributed File System) was tested to determine whether it would interoperate with Advanced Server. No issues. • PW Conversion Utility - this utility, designed to merge existing PWV5 shares/users into the domain, functioned well. • PDC/BDC - communication (synchronization, etc.) between the NT and VMS boxes worked fine (same subnet). A remote VMS BDC also worked fine.

  19. Initial Testing - Report • The results were reported to management. Testing participants were impressed with the results and concluded that this could go forward campus wide.

  20. System Configuration • System configuration (H/W and software) changed as the project evolved but we report the current baseline here.

  21. System Configuration - DEC Alpha 2000/3000 • OpenVMS 7.2, TCP/IP 5.0-9, TNT (OpenVMS management station) • PATHWORKS Advanced Server 6.0B • DECnet over TCP/IP (for backups) • DFO (DEC File Optimizer 2.4) • Perl/UNIX utilities (easier management) • Backup utilities

  22. System Configuration - Alpha 1000 Backup/Data Server • Same as DEC Alpha 3000 • DEC Scheduler • Pathworks for OpenVMS (Macintosh) • High capacity tape, disk farm

  23. System Configuration - Cluster • OpenVMS 7.1, TCP/IP 4.2 • PATHWORKS Advanced Server 6.0B • Perl/UNIX utilities • OSU 3.0a Web Server (for Web management of NT/Advanced Server) • SSLeay 0.8.1 for secure connections to the Web server • Note - we had to stay with 7.1 and TCP/IP 4.2; this is a mainframe system running many other applications.

  24. System Configuration - NT DC • Windows NT4/SP6 • Services for UNIX (SFU 1.0A) • Timbuktu Pro 32 (2.0) • Microsoft DFS (4.1) • WINS (Windows Internet Name Service) • Diskeeper 4

  25. Redundancy • PDC has dual drives which will be mirrored • BDC can assume PDC role in 30 minutes • Hot spare maintained for DEC 3000s • Utility server (alpha 1000) under Compaq maintenance • Data stores can be moved quickly to spare servers in the event of serious hardware failure

  26. Configuration Procedure - DEC Alpha servers • Install OpenVMS, TCP/IP, DECnet over IP, Perl, Advanced Server • Configure TCP/IP common services, including PWIP drivers • Configure Advanced Server as follows... @sys$update:pwrk$config
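As a sketch, the per-server sequence above might look like the following DCL session; the procedure locations follow common OpenVMS conventions and should be verified on your system:

```
$! Hedged sketch of the per-server setup sequence; procedure
$! locations are the usual defaults - verify them locally
$ @SYS$MANAGER:TCPIP$CONFIG     ! enable core TCP/IP services and the PWIP driver
$ @SYS$UPDATE:PWRK$CONFIG       ! configure Advanced Server (role, transports)
$ @SYS$STARTUP:PWRK$STARTUP     ! start Advanced Server
```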

  27. Basic Configuration

  28. Transport Configuration

  29. Main Configuration Menu

  30. Configuration Procedure - DEC Cluster • Install Advanced Server and configure as BDC (previous slides) • Cluster already configured except for Advanced Server • VMS accounts could be set to authenticate against NT using the following command… $ mcr authorize modify <user>/flags=extauth
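In an interactive AUTHORIZE session, the same flag can be applied to several accounts at once; the usernames below are placeholders:

```
$ SET DEFAULT SYS$SYSTEM
$ MCR AUTHORIZE
UAF> MODIFY SMITH/FLAGS=EXTAUTH   ! SMITH now validates against the NT domain
UAF> MODIFY JONES/FLAGS=EXTAUTH
UAF> EXIT
```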

  31. Configuration Procedure - PDC/BDC • Install Windows NT SP6, SFU, Timbuktu, DFS, Diskeeper and WINS • Configure DFS • Configure Timbuktu • Configure WINS replication partners • Close security holes

  32. Deployment • Implementation would be phased in. The first order of business was to move the PDC from an outside department to the central IT department where it would reside in the main campus computer room along with other central servers.

  33. Implementation Phases • VAX 6440 Conversion - transition the old VAX 6440 running PATHWORKS V5 to an Alpha 1000 running Advanced Server. The PATHWORKS upgrade utility was used. Some serious planning was required but the end result was quite successful. • Alpha 1000 PDC - promote the 1000 to PDC. There was no WINS server (at the time) in the subnet and we saw this as a significant problem, so an Intel PDC was used instead. (See Technical Issues)

  34. Implementation Phases • Upgrade IT central server to 7.1/Advanced server (BDC) • Install Advanced Server on main cluster • Create mechanism to merge-in existing Cluster accounts and enable single signon • Create tools that Help Desk can use to monitor network and to change passwords • Convert VMS servers to run DECnet over IP

  35. Implementation Phases • Transition PDC to Intel/NT in the central IT computing room • Publish management policies • Install DFS (Distributed File System) on the central PDC • Enable management team to add NT accounts via the Web • Transition to the new Intel/NT PDC (the first was a loaner)

  36. Implementation Phases • Configure other DEC 3000 servers as BDCs across campus. The DEC Alphas are configured quickly and uniformly via a VMS command procedure. • Upgrade Advanced Server to 6.0B on all DEC servers • Create an account audit tool • Set up group directory structure for personal and common share areas • Install Services for UNIX, Timbuktu and Diskeeper on the PDC

  37. Implementation Phases • Relocate off-site DEC 3000 servers to the main computer room, utilizing Cisco VLANs • DEC 3000s remain dedicated data servers for their departments but would be easily accessible • DEC 3000 Alphas are self-maintained thanks to a variety of spare parts • Configure an Intel/NT BDC in the same subnet as the PDC (NT PDC hot spare)

  38. Technical Issues • Advanced Server vs. Intel as PDC - we felt it was wise to ultimately use Intel/NT as the PDC; we had mixed recommendations. Advanced Server is listed as NT 3.51 in Server Manager. • DCE/DFS traffic became an issue. • Single Signon Failure - required Advanced Server 6.0-ECO2. Also, in a cluster where Advanced Server is not running on all nodes, you need "def/sys/exec pwrk$acme_server node1,node2,nodex" in startup, excluding the nodes Advanced Server is not running on.
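Written out as a startup command, the logical definition looks roughly like this; the node names are placeholders for the cluster members actually running Advanced Server:

```
$! In the system startup, before Advanced Server starts:
$! list only the nodes that run Advanced Server
$ DEFINE/SYSTEM/EXECUTIVE_MODE PWRK$ACME_SERVER NODE1,NODE2
```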

  39. Technical Issues • Administrator Notifications - a dial-in WinNT user complained of getting CPP Administrator broadcasts while at home. • Advanced Server License - the existing PAK was only good for PATHWORKS Advanced Server 6, not the Advanced Server bundled into OpenVMS 7.2. We opted to stay with version 6, which did not include a registry or long file names. • Downgrading - it was difficult to downgrade from Advanced Server 7.2 to 6.0B. We had to deassign logicals and delete files before the 6.0B installation would work.

  40. Technical Issues • Admin tools "set file" command - this tool generally worked well for setting NT security, but we uncovered a bug with a simple workaround. • There is seemingly no way to remove a rights holder (e.g., Everyone -> Change). You can change Everyone to "NONE" but that prevents access. The workaround is to make sure the parent directory has the desired permissions before creating a subdirectory.

  41. Technical Issues • Admin Show File - there was an issue with the security view listing inaccurate information. Compaq suggested removing V5 security (pwrk$deleteace) but this was of no help. It appears that later patches to Advanced Server may have corrected this. • The advice was to trust security as viewed from Windows NT Properties.

  42. Technical Issues • Directory Caching Bug - a bug in Advanced Server prevents large directories from being cached accurately. The workaround is to disable caching, at the cost of performance; this is done in lanman.ini. • VMS System Disk Sharing - VMS disks can all be mapped (disk$); this needs to be disabled in lanman.ini! The noautoshare keyword in lanman.ini was a challenge to set due to the many drives on the cluster. • Performance - DFO (defrag) on OpenVMS made a noticeable improvement.
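A lanman.ini fragment for suppressing automatic disk shares might look like the following sketch; the section name, keyword syntax and device list are assumptions, so check the PATHWORKS Advanced Server documentation before using it:

```
; Hypothetical lanman.ini fragment - section name and syntax assumed
[vmsserver]
; devices listed here are placeholders; autosharing is suppressed for them
noautoshare = DKA0:,DKA100:,DSA1:
```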

  43. Technical Issues • PW6 Naming Problem - a name cached somewhere in the local subnet prevented its use. Clearing caches (WINS, etc.) made no apparent difference; the name was fine outside the subnet. • Start Pending - this state for the browser in Advanced Server is normal if another machine is browse master. • FTP/RSH Passwords - a bug causes FTP/RSH to use an uppercased password. The workaround is to set the NT password to uppercase.

  44. Technical Issues • WINS Corruption - browse problems appeared to be caused by WINS. Remote users could not "find" servers until WINS was rebuilt. Local browsing was also broken due to a network mask problem; cluster members did not show up in Network Neighborhood. • Win95 Access - Win95 clients needed to join the domain to get access to a member server. • Timbuktu Pro - client characters are not sent to the login screen properly, so users cannot log in.

  45. Technical Issues • Renaming VMS Server - there were no serious issues in renaming a server. One does need to clear all DECnet caches and node registrations (@sys$manager:net$configure). The ncl tool can be used to clear the cache. This applies if using DECnet over IP. • PWIP and DECnet - the PWIP drivers must be loaded (TCPIP$CONFIG) for DECnet over IP.
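With DECnet-Plus, the cache can be flushed from NCL; a hedged sketch, where the wildcard clears all cached name entries:

```
$! Flush the DECnet-Plus session control naming cache after a rename
$ MCR NCL
NCL> FLUSH SESSION CONTROL NAMING CACHE ENTRY "*"
NCL> EXIT
```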

  46. Technical Issues • Single Signon Failure - according to Compaq technical support, if NT authentication fails, you must set the VMS authorize flag (/flags=noextauth); there is no other way to get OpenVMS logins working without privileges. • Tech support also said that a BDC must take over the domain if the PDC fails in order for single signon to work without hanging. Your NT network MUST be stable.
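The fallback is applied the same way the flag was originally set; a sketch with a placeholder username:

```
$ SET DEFAULT SYS$SYSTEM
$ MCR AUTHORIZE
UAF> MODIFY SMITH/FLAGS=NOEXTAUTH  ! fall back to local SYSUAF password validation
UAF> EXIT
```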

  47. Technical Issues • Admin Password Changes - the issue came up, but there is apparently no way to limit which Administrators can modify a password. Compaq suggested using other groups for local administration. This turned into a BIG issue.

  48. Technical Issues • Conversion Utility - the PATHWORKS V5 to V6 conversion utility functioned well. Make sure you read the log file and make the necessary corrections. Don't forget to remove V5 security after you deem the conversion successful. • Extraneous Shares - you may wish to remove extraneous shares (PWLIC, etc.) after Advanced Server installation.
