Hardware
SGI Origin 2000
The SGI Origin 2000 is a scalable distributed shared memory architecture. Currently the system's 64 MIPS R10000 64-bit superscalar processors are shared between batch users and interactive graphics users. There are 8 Onyx2 InfiniteReality graphics pipes for high performance visualization. The system is housed in 6 towers in the machine room in the Merrill Engineering Building. An associated graphics lab contains a number of 24-inch high-resolution monitors.
During the week of May 24th, 1999, an additional 32 processors are scheduled to be added to the system. The machine will initially be configured as a 64-processor batch partition and a 32-processor partition, with the graphics pipes, for graphics users. We will run the system from time to time with all 96 processors together.
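The distinguishing feature of this architecture is that, although memory is physically distributed among the nodes, all processors see a single shared address space. As a rough illustration of the shared-memory programming style such a machine supports (a minimal sketch assuming an OpenMP-capable C compiler; this is illustrative only, not a CHPC-supplied code):

    /* Shared-memory parallel sum: all processors operate on the same
       array in a single address space; the directive splits the loop
       iterations among them and combines the partial sums. */
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N];
        double sum = 0.0;
        int i;

    #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < N; i++) {
            a[i] = (double) i;
            sum += a[i];
        }

        printf("sum = %.0f\n", sum);
        return 0;
    }

The same loop compiles and runs serially if the directive is ignored, which is one reason directive-based parallelism is popular on shared-memory systems.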
IBM SP
The SP system (IBM's scalable distributed memory system) at CHPC consists of a 64-node production system and a 10-node test system (12 processors, since two of the nodes are 2-way SMPs). The production system has nodes ranging in speed from 66 to 120 MHz; the test system has nodes from 160 to 332 MHz. These systems contain POWER2, POWER2 Super Chip, and PowerPC 604e processors.
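In contrast to the shared-memory SGI systems, each SP node has its own private memory, so parallel programs exchange data explicitly by passing messages, most commonly with MPI. A minimal sketch of this style (assuming an MPI library and a compiler wrapper such as mpicc; illustrative only, not a CHPC-specific code):

    /* Distributed-memory message passing: each node runs its own copy of
       the program with a private address space, and data moves between
       nodes only through explicit send/receive calls. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size, token;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size >= 2) {
            if (rank == 0) {
                token = 42;
                MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
                printf("node %d of %d received %d\n", rank, size, token);
            }
        }

        MPI_Finalize();
        return 0;
    }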
SGI PowerChallenge
The Power Challenge is a 20-processor shared memory computer complex consisting of a 16-processor batch machine and a 4-processor interactive machine. It uses the MIPS R8000 chip, with 64-bit registers and a clock speed of 75 MHz. Each processor has 16 Kbyte on-chip data and instruction caches and a 4 Mbyte off-chip secondary data/instruction cache. The complex has 2 Gbytes of 4-way interleaved main memory (RAM) and 12 Gbytes of disk space.
Intel Cluster
The Intel Cluster consists of 32 Pentium II batch processors at 350 MHz and a dual 450 MHz Pentium II master node. This system is not yet available for general production, but initial tests have shown some very promising results. We expect to expand this system in the near future.
INSCC Building
The Intermountain Network and Scientific Computation Center facilitates multidisciplinary research activities and collaboration among researchers from different academic departments. By locating these research activities in the same space and sharing computer and network infrastructures, the University intends to accelerate research in solving complex problems that require large research teams with expertise in a variety of disciplines and access to advanced computer facilities.
INSCC provides:
- Space for multidisciplinary research activities addressing computational modeling to solve problems of national and global importance.
- State-of-the-art, high-performance computing, networking, and visualization resources.
- Model infrastructure to support a distributed computing research environment.
The center's participants have access to a number of state-of-the-art computational tools, including the hardware described above and an archival storage system. A network backbone connects these systems using fiber/ATM technology at speeds of up to 622 Mbits/sec. The building's network can provide these advanced services to any location within the center. The building also connects through the Utah Educational Network gateway to the most advanced national computer networks. This connectivity gives researchers in the center access to both local and remote computing resources. The Center for High Performance Computing is responsible for the operation, maintenance, and continual upgrade of these resources.
The INSCC building provides approximately 49,000 assignable square feet of computer labs, physics labs, offices, and shared support areas. Approximately 36 percent of the space is dedicated to computer labs, 16 percent to physics labs, and 31 percent to faculty, staff, and postdoctoral offices and shared spaces: conference rooms, teleconference resources, and computer visualization facilities.
Research
The following research groups are among those using CHPC systems and doing collaborative research with CHPC staff:
Please note: these are all external links maintained by the various research departments. They should have a long expected life, but we have no way to guarantee this. If you find that any of them no longer work, please try the CHPC Research page, which has a long expected lifetime and is maintained with links to our various research activities.
CHPC Staff
CHPC consists of the following groups:
- Administrative: The administrative staff at CHPC consists of our Director, who reports to the Vice President of Research; three Assistant Directors (Systems, Networking, and User Services); and two secretarial staff.
- Staff Scientists: We currently have 5 staff scientists, one in each of the following areas: Statistics, Molecular Sciences, Parallel Computing, Visualization, and Numerical Methods. The staff scientists report to our Director, conduct research, and provide high-level user support.
- Systems: Our systems staff currently consists of 8 full-time positions reporting to the Assistant Director of Systems. Four FTEs are devoted to our high performance computing systems, and four FTEs are primarily responsible for desktop and server support within the INSCC building.
- Networking: Our network staff consists of 3 full-time positions reporting to the Assistant Director of Networking. They are primarily responsible for all networking in the INSCC building and also do research on advanced networks.
- User Services: The user services staff consists of 4 full-time and from 5 to 7 part-time positions, reporting to the Assistant Director of User Services.
User Services
Applying high performance computing to scientific, statistical, engineering, and visualization research requires a broad base of user support services. Continued growth as a regional center for high performance computing and networking requires addressing education, training, consulting, help desk staffing, a central location for vendor documentation, short courses, and site-specific user manuals.
The consultants provide general support for both the high performance computing platforms and the computing needs of occupants of the INSCC building. Each consultant also supports applications and research groups particular to their own field; the current consultants support numerical and mathematical simulation of the heart, high energy physics, parallel numerical methods, and visualization. The consultants gain a high degree of expertise by working on this research, and also gain much experience from their interaction with other users.
Problem Tracking
For the last two months, CHPC has been migrating to the GNATS problem tracking system. We are still in the early stages of this project and have run into a number of problems. Many of our staff still do not understand our configuration, and both staff and users sometimes receive cryptic messages that they do not know how to interpret.
Prior to the implementation of GNATS, we used a combination of three systems: consult, request, and problems. User services used an email box, "consult", to track correspondence with our users and outstanding problems. If we required our HPC systems group to fix something or answer a question, we used a command line system called "request". Finally, for INSCC building problems, we had an email list, "problems", to which tenants of the building submitted questions and problems; this list was routed to the help desk and the INSCC systems support group. The GNATS system has now replaced these three problem tracking mechanisms.
The user services group started testing the GNATS system in February and March of 1999. During March we had all email sent to the Help Desk entered into the system and tracked. On April 1st we started trying to track all "requests" and "problems" within GNATS, eliminating use of the request system and copying GNATS on the problems email list. We recently eliminated the "problems" email list and replaced it with an alias to GNATS. We trained the entire staff in early April, and follow-up training will be scheduled in June 1999.
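For reference, the usual GNATS mail intake works roughly as follows (a sketch based on the GNATS documentation; the alias name and installation paths shown here are hypothetical and not necessarily what CHPC uses): incoming mail is piped into GNATS's queue-pr program, and a periodic job later files the queued messages into the problem database.

    # /etc/aliases entry (hypothetical name and path): pipe incoming mail
    # into the GNATS queue
    gnats-bugs: "|/usr/lib/gnats/queue-pr -q"

    # cron entry: every 10 minutes, file queued messages into the database
    */10 * * * * /usr/lib/gnats/queue-pr --run

Replacing the old "problems" list with an alias of this kind means building tenants can keep mailing the address they already know while every message is captured and tracked in GNATS.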
User Documentation
All CHPC documentation is on our web site: www.chpc.utah.edu. We have User's Guides for each of our HPC platforms, instructions on how to get help and subscribe to our various email lists, policy information, a FAQ and much more. We recently re-designed the entire site, and are now in the process of updating the content. There is still much work to be done in this area. We have some tutorials online and plan to expand them as well.
We currently provide interactive means for users to apply for accounts, apply for allocations, request that an account be deleted, and request a data port in the INSCC building, and we are working on an interactive interface to the GNATS problem tracking system.
Summary and Future Plans
CHPC is committed to keeping up with the cutting edge of current high performance computing technologies and software. The Consulting Center staff members regularly attend seminars and workshops on advanced high performance computing topics. As high performance computing technology advances, the need for a knowledgeable user support facility also grows. Future plans include expanding the problem tracking and reporting system, and creating a database to track items in the Library.