Abstracts

Session day Session time Session number Presenter(s) Title Abstract
Monday 8:30 AM I Jeff Brooks (CRAY) TUTORIAL I
Cray SV1 Performance
This tutorial will help users get the most performance out of their Cray SV1 systems. The system architecture will be discussed, particularly those features that differ from previous Cray systems. The tutorial will cover cache use techniques with vector processing and parallel performance techniques using autotasking and streaming. Several examples will be provided and a tuning guide will be distributed.
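As a hedged illustration of the cache-reuse tuning style the tutorial describes (a generic C sketch, not an excerpt from the tutorial or its tuning guide), long vectors can be processed in cache-sized strips so that consecutive vector loops reuse data while it is still cache-resident:

    /* Generic strip-mining sketch; STRIP and the routine are illustrative
     * choices, not SV1-specific values. */
    #define STRIP 256                        /* strip sized to fit in cache */

    void update(int n, const double *x, double *y, double *z, double a)
    {
        for (int is = 0; is < n; is += STRIP) {
            int ie = (is + STRIP < n) ? is + STRIP : n;
            for (int i = is; i < ie; i++)    /* vector loop 1 loads x into cache */
                y[i] = a * x[i];
            for (int i = is; i < ie; i++)    /* vector loop 2 reuses cached x and y */
                z[i] += x[i] * y[i];
        }
    }

Each strip of x and y is touched twice while still in cache, instead of streaming the full-length vectors through memory twice.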
Monday 8:30 AM II Vince Schuster (The Portland Group) TUTORIAL II
High Performance Fortran: Practice and Experience on the CRAY T3E, SGI Origin, and Cluster Systems
High Performance Fortran (HPF) is a high-level directive-based set of extensions to Fortran 95 which enable parallelization of Fortran applications for shared-, distributed-, or hybrid-memory computing systems. MPI is a low-level library-based standard for message-passing that enables very fine control of parallelization on these same types of systems. The renewed emphasis on distributed- and hybrid-memory systems for high performance computing makes HPF attractive for use in combination with MPI to simplify application porting, tuning, and maintenance. Several production applications in geophysical processing, ocean modeling, astrophysics, and high-energy physics have been developed using HPF, or HPF in combination with MPI, on the CRAY T3E. This tutorial will review coding strategies that have made these applications effective from both a performance and maintenance standpoint. It will also address issues involved in porting these applications from the CRAY T3E to later generation high-performance computing systems.

Objective of the tutorial:
The objective of this tutorial will be to give participants a working knowledge of HPF. Furthermore, attendees will get a first-hand glimpse of how HPF can be used in combination with MPI. Participants will be given guidance on making informed decisions when choosing a programming model (MPI, HPF, OpenMP, or some combination of these).
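A minimal hybrid sketch, assuming MPI ranks across nodes and OpenMP threads within each rank (one of the model combinations the tutorial weighs; the code itself is illustrative, not tutorial material):

    /* Hybrid MPI + OpenMP: message passing between ranks, threading within. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;
        /* FUNNELED: only the master thread makes MPI calls */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        double local = 0.0;
        #pragma omp parallel for reduction(+:local)  /* threads split the rank's work */
        for (int i = rank; i < 1000000; i += nranks)
            local += 1.0 / (1.0 + (double)i);

        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %.6f\n", total);
        MPI_Finalize();
        return 0;
    }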
Monday 8:30 AM III Virginia Bedford and Liam Forbes (ARSC) TUTORIAL III
Configuring a Secure System
This tutorial will provide suggestions for configuring a secure system, including how to use some of the open source security tools. It will also cover the presenters' experiences and recommendations for specific issues within the Unicos, Unicos/mk and IRIX operating systems. We hope that participants will learn several new ideas that they can implement in their own environment.
Monday 11:00 AM 1 G S.W. de Leeuw, LAC Co-Chair (DELFT) Introduction to the Conference
Monday 11:15 AM 1 G Guus S. Stelling (DELFT) Water Control by High Performance Computing
Monday 12:00 PM 1 G CUG Board of Directors CUG Update
Monday 2:00 PM 2 G Jim Rottsolk (CRAY) Cray Inc. Corporate Direction
Monday 3:00 PM 2 G Dave Kiefer (CRAY) Cray Inc. Product Roadmap
Monday 4:00 PM 3 A Vito Bongiorno (CRAY) T90
Monday 4:00 PM 3 B William White (CRAY) T3E
Monday 4:00 PM 3 C R.H. Leary, W. Pfeifer, L. Carter, and A. Snavely (SDSC) Evaluation of the Tera Multithreaded Architecture Computer Multithreading has received considerable attention in recent years as a promising way to hide memory latency in high-performance computers. Tera Computer of Seattle has designed and built a state-of-the-art multithreaded computer called the MTA. Its intended benefits are high processor utilization, scalable performance on applications that are difficult to parallelize, and reduced programming effort.

The largest MTA (and the only one outside of Seattle) is at the San Diego Supercomputer Center on the campus of the University of California, San Diego. Over the past two years numerous kernels and full applications have been ported from other high-end parallel computers to the Tera MTA and tuned for optimal performance by researchers at UCSD and other collaborating institutions. This paper summarizes results and conclusions obtained to date from this experience base.
Monday 4:30 PM 3 C TBA (CRAY) MTA
Monday 4:00 PM 3 D (Available for BoF)
Tuesday 8:30 AM 4 G Dave Kiefer (CRAY) Cray Inc. Hardware Overview
Tuesday 9:15 AM 4 G John Dawson (CRAY) Cray Inc. Software Overview
Tuesday 10:00 AM 4 G Roger Dagitz (CRAY) Cray Inc. Customer Service Overview
Tuesday 11:00 AM 5 G Jim Harrell (CRAY) Operating System Direction
Tuesday 11:30 AM 5 G John Dawson (CRAY) Cray Programming Environment
Tuesday 12:00 PM 5 G Don Mason (CRAY) System View of the SV2
Tuesday 2:00 PM 6 A David Gigrich (BCS) Avoiding Megaword Memory Leaks on a Cray T90 This paper will address the inherent problem with the use of dynamic memory on a Cray T90. We will illustrate how multi-million word blocks of memory can become lost to the system as the analyst's memory management scheme (allocatable arrays) competes against both the compiler and system routines for valuable heap-space. The solution used to avoid this problem will be discussed in detail.
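A hedged sketch of the fragmentation pattern the paper targets (names and sizes are illustrative; on the T90 the competing allocations come from compiler and system routines rather than the explicit calls shown here):

    /* Interleaved long- and short-lived allocations leave the heap full of
     * holes that no single large request can use. Illustrative only. */
    #include <stdlib.h>

    #define NBLOCKS 64

    void fragment_heap(void)
    {
        void *big[NBLOCKS], *pin[NBLOCKS];

        for (int i = 0; i < NBLOCKS; i++) {
            big[i] = malloc(1 << 20);   /* analyst's large work arrays */
            pin[i] = malloc(64);        /* library/system allocations */
        }
        for (int i = 0; i < NBLOCKS; i++)
            free(big[i]);               /* large holes, kept apart by pin[] */

        /* Depending on the allocator, this request can fail even though
         * the total free heap space would be sufficient. */
        void *huge = malloc((size_t)NBLOCKS << 20);
        if (huge)
            free(huge);
        for (int i = 0; i < NBLOCKS; i++)
            free(pin[i]);
    }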
Tuesday 2:30 PM 6 A Hans-Hermann Frese (ZIB) Performance of Fortran Programming Models on the Cray T3E The Cray T3E Programming Environment provides different programming models for Fortran applications using implicit or explicit parallel programming. In this paper we shall investigate performance results for the different parallel programming models with respect to application kernels. The time-to-solution in the development process of parallel applications will be considered, too.
Tuesday 3:00 PM 6 A James Giuliani and David Robertson (OSC) Performance Tuning for Cray's SV1 Architecture, The introduction of cached vector operations and multistreaming processors in Cray's SV1 architecture will result in new design and performance tuning issues for developers of many scientific applications. In particular, code developed for Y-MP/C90/T90 series machines may require a significant additional tuning effort to achieve efficient performance on the SV1. Following an overview of the relevant SV1 architectural features and their theoretical performance implications, we describe several real-world research applications which were profiled and tuned on both Cray T90 and SV1 systems at OSC. We analyze performance results in detail, highlighting tuning techniques that prove beneficial on the SV1. Finally, we discuss the insights gained into the process of migrating applications to Cray's new architecture roadmap.
Tuesday 2:00 PM 6 B Sergei Maurits and Jeff McAllister (ARSC) Applications of Vis5D in the Cray T3E MPP Environment The volumetric visualization package Vis5D has gained noticeable popularity in recent years. Designed primarily as a tool for a single-processor environment, the package does not apply straightforwardly to MPP settings, quite apart from the porting challenges. A suite of MPP C routines, "Almost_Vis5D", was developed at ARSC to bridge this gap and to provide convenient run-time output directly in a Vis5D-compatible format. By writing output directly from each PE in a partition to an individual file, the package effectively eliminates interprocessor data transfers for I/O and boosts computational performance. The "Almost_Vis5D" package provides a number of options for run-time or post-processing visualization of the entire domain or a portion of it, at full or reduced resolution.
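The per-PE output pattern the abstract describes might look like the following sketch (the file name and routine are hypothetical, not the actual Almost_Vis5D interface):

    /* Each PE writes its own slab to its own file, so no interprocessor
     * data transfer is needed for I/O. Names are illustrative. */
    #include <mpi.h>
    #include <stdio.h>

    void write_slab(const float *slab, int nvals)
    {
        int pe;
        char fname[64];

        MPI_Comm_rank(MPI_COMM_WORLD, &pe);
        snprintf(fname, sizeof fname, "slab_%04d.dat", pe);
        FILE *f = fopen(fname, "wb");
        if (f != NULL) {
            fwrite(slab, sizeof(float), (size_t)nvals, f);
            fclose(f);
        }
    }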
Tuesday 2:30 PM 6 B Joanna Leng, John Brooke, and Terry Hewitt (MCC), and Huw Davies (U. of London) Visualization of Spherical Geometries Produced by Large Scale Simulation on the Cray T3E at Manchester, UK, We discuss the problems arising from the visualization of data from a geophysical application running on 512 processors of the Cray T3E. The computational domain is a spherical shell and the user wishes to cut through the computational domain in a variety of ways and visualize the data, thus compounding the problems caused by the sheer size of the dataset. We present solutions designed to allow the user to make informed decisions about data management and discuss how this can be extended to monitoring and steering the simulation on the T3E.
Tuesday 3:00 PM 6 B Stephen Pickles, Stephen Ord, Fumie Costen, John Brooke, and Terry Hewitt (MCC) Probing the Universe via an Intercontinental Cluster of T3Es, We describe how signals from the Jodrell Bank Mark II radio telescope were processed on a metacomputer consisting of three T3E machines linked via an intercontinental network. This work was part of a metacomputing demonstration that won an award at SC99. We describe the problems of processing experimental data on this metacomputer and show how problems with low intercontinental bandwidth were overcome.
Tuesday 2:00 PM 6 C Tina Butler (NERSC) NERSC Experiences with Security Tools and Monitoring The US DOE's National Energy Research Scientific Computing Center (NERSC) is responsible for providing an open high performance computing environment for the DOE research community. We will describe how NERSC uses Cray MLS-based security tools, third-party tools like ssh and tripwire, and locally developed tools to secure and monitor its T3E and SV1 systems. Supported by the US Department of Energy under Contract No. DE-AC03-76SF00098.
Tuesday 2:30 PM 6 C Kurt Carlson (ARSC) Reporting Unicos and Unicos/mk System Utilization, This talk addresses measuring system utilization under Unicos[/mk] using CSA accounting and other data reduction and reporting tools and techniques. It covers how to "account" for system usage and how to produce reasonable metrics on responsiveness to users. The myth of expansion factors will be debunked. Use of accounting data for isolating problems will also be discussed.
Tuesday 3:00 PM 6 C Bruno Loepfe (ETHZ) and Olivier Byrde (CRAY) SuperCluster SV1: The Next Step, In the first two phases of the project, we dealt primarily with the user aspects of a SuperCluster. Now in phase three, the main focus is operational aspects. Three subjects will be presented based on our experience: NQE versus LSF, clusterwide dump/restore, and clusterwide resiliency features.
Tuesday 4:00 PM 7 A Patricia Langer (CRAY) SV1
Tuesday 4:00 PM 7 B Charlie Clark (CRAY) Customer Service and Operations Q&A Representatives of Cray Hardware Engineering, Software Development, and Service organizations will discuss issues and questions on all aspects of Cray Service, Hardware, and Software. The questions will come from the following:
- The CUG Cray Computer Services Survey
- Questions submitted to the Computer Services SIG chair by email (Leslie Southern, leslie@osc.edu) any time before the CUG meeting
- Questions placed in the OSC site folder during the CUG meeting. NOTE: These need to be submitted by the end of day Monday, May 22nd
- Questions will also be taken from the floor
This tends to be a lively session that addresses a wide variety of issues.
Tuesday 4:00 PM 7 C Erv Kuhnke, Steve Finn, and Chris Macneill (DTRA) ACE Agent for the Cray We are a small installation with limited system administration staff. We wanted to increase system security by implementing single-use passcodes. We will discuss our experience in integrating the SecurID ACE Server and the ACE Client with an SV1 running Unicos 10.0.0.6. We will discuss our selection of the SecurID product and our implementation options. We considered alternatives such as Kerberos and the use of a front-end machine to perform the authentication and feed pre-authenticated users to the SV1. Other options included the use of VPN hardware/software to perform the authentication. In this short panel we hope to discuss these topics, what we chose to do, and the challenges of making it work.
Tuesday 4:00 PM 7 D TBA (Available for BoF)
Wednesday 8:30 AM 8 A Chair, Jeff Terstriep (UIUCNCSA) SIG Meeting: Communications & Data Management Group
- Mass Storage Focus Area, Chair: Kevin Wohlever (OSC)
- Networking Focus Area, Chair: *open*
Wednesday 9:30 AM 8 A Chair, Chuck Keagle (BCS) SIG Meeting: Operating Systems Group
- UNICOS Focus Area, Chair: Ingeborg Weidl (MPG)
- IRIX Focus Area, Chair: Cheryl Wampler (LANL)
- Security Focus Area, Chair: Virginia Bedford (ARSC)
Wednesday 8:30 AM 8 B Chair, Hans-Hermann Frese (ZIB) SIG Meeting: Programming Environments Group
- Compilers & Libraries Focus Area, Chair: David Gigrich (Boeing)
- Software Tools Focus Area, Chair: Guy Robinson (ARSC)
Wednesday 9:30 AM 8 B Chair, Eric Greenwade (INEEL) SIG Meeting: High Performance Solutions Group
- Applications Focus Area, Chair: Larry Eversole (JPL)
- Visualization Focus Area, Chair: John Clyne (NCAR)
- Performance Focus Area, Chair: Michael Resch (RUS)
Wednesday 8:30 AM 8 C Chair, Leslie Southern (OSC) SIG Meeting: Computer Services Group
- Operations Focus Area, Chair: Brian Kucic (UIUC-NCSA)
Wednesday 9:30 AM 8 C Chair, Leslie Southern (OSC) SIG Meeting: Computer Services Group
- User Services Focus Area, Chair: Chuck Niggley (NAS)
Wednesday 11:00 AM 9 G Prof. B.P.T. Veltman, Chairman of the Board, Advisory Council for Science and Technology Policy
Wednesday 11:15 AM 9 G Henk Dijkstra (University of Utrecht) Thirty Years of Simulation of Ocean Circulation: Status and Future Ocean models are an essential part of global climate models, since ocean currents carry about half of the total poleward heat transport. Simulation of the global ocean circulation started about thirty years ago with the development of the Geophysical Fluid Dynamics Laboratory (GFDL) ocean model. Since then, many more models have been developed, and currently many aspects of the mean paths and variability of ocean currents, such as the Gulf Stream, can be simulated in considerable detail. In this presentation, an overview will be given of what has been achieved, which challenges lie ahead, and what obstacles have to be overcome.
Wednesday 11:45 AM 9 G Werner Krotz-Vogel (Pallas) Portable MPI Tools at Work—Cracking Performance Problems Vampir, the leading MPI performance analysis tool, is now available in a new and improved version. Vampir 2.5 features a streamlined user interface, additional displays, and a source-code display, while keeping all the unique features of previous Vampir releases. This presentation will cover
- a brief introduction to Pallas, a leading European vendor of software tools for parallel computing
- Vampir 2.5, visualization and analysis of MPI programs, focus on 'news'
- Vampirtrace 2.0, low-overhead MPI profiling library, news on T3E 'shmem'
- Dimemas, graphical performance prediction tool
Wednesday 2:00 PM 10 A Guy Robinson (ARSC) Experiences in Getting Researchers Started on Parallel Systems: Rapid Prototyping One problem often facing users new to high performance computing is that the actual scientific computation is still maturing and being developed. This dual complexity can be eased by the use of high-level parallel languages such as HPF, Co-Array Fortran, UPC, and ZPL. Examples will describe how these languages have helped researchers acquire the understanding of both parallel systems and their algorithms needed to make a success of their projects.
Wednesday 2:30 PM 10 A Mathilde Romberg (KFA) UNICORE: Beyond Web-based Job-Submission UNICORE (Uniform Interface to Computer Resources) is a software infrastructure to support uniform, secure Web-based access to distributed resources. The talk will give an overview of the architecture, its security features, the user functions, and the current implementation status.
Wednesday 3:00 PM 10 A Norbert Meyer, Pawel Wolniewicz, and Miroslaw Kupczyk (Poznan) Simplifying Administration and Management Processes in the Polish National Cluster The Polish National Cluster was built mostly on SGI and Cray computer systems and uses the LSF (Load Sharing Facility) and NQE (Network Queuing Environment) batch systems to run jobs in a distributed environment. The aim was to greatly simplify the administration of user accounts across distributed, independent sites (different supercomputing centres). To this end, a mechanism layered above the batch queuing system, the Virtual User Account System, was created; it allows load balancing across machines installed at different computing sites without the overhead of creating and maintaining additional user accounts.
Wednesday 2:00 PM 10 B Rolf Rabenseifner (RUS) Automatic MPI Counter Profiling This paper presents an automatic counter instrumentation and profiling module added to the MPI library on Cray T3E and SGI Origin2000 systems. A detailed summary of the hardware performance counters and the timings of all MPI calls of any MPI production program is gathered during execution and written to a special syslog file and, optionally, to a user file. Weekly and monthly, a statistical summary is computed and the user-specific part is sent by mail to each user.
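A minimal sketch of counter instrumentation through the standard PMPI profiling interface (a generic illustration, not the authors' module; the const qualifiers follow MPI-3 and should be dropped for older MPI headers):

    /* Intercept MPI_Send: count calls and accumulate time, then forward
     * to the real implementation via PMPI_Send. */
    #include <mpi.h>
    #include <stdio.h>

    static long   send_calls = 0;
    static double send_secs  = 0.0;

    int MPI_Send(const void *buf, int count, MPI_Datatype type,
                 int dest, int tag, MPI_Comm comm)
    {
        double t0 = MPI_Wtime();
        int rc = PMPI_Send(buf, count, type, dest, tag, comm);
        send_secs += MPI_Wtime() - t0;
        send_calls++;
        return rc;
    }

    int MPI_Finalize(void)
    {
        int rank;
        PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
        fprintf(stderr, "rank %d: %ld MPI_Send calls, %.3f s total\n",
                rank, send_calls, send_secs);
        return PMPI_Finalize();
    }

Linking such a wrapper ahead of the MPI library instruments every MPI_Send in an unmodified application; a production module would wrap the full set of MPI calls the same way.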
Wednesday 2:30 PM 10 B Piotr Bala (WARSAWU) and Terry W. Clark (U. Chicago) Pfortran and Co-Array Fortran—Tools for Parallelization of a Large Scale Scientific Application In this study two similar parallelization tools, Pfortran and Co-Array Fortran, are discussed in the parallelization of Quantum Dynamics, a non-trivial scientific application. We found good performance results with both, which we discuss relative to an HPF implementation.
Wednesday 3:00 PM 10 B Hector Eduardo Gonzalez, Enrique Cruz, and Jorge Carrillo (UNAM) Exact Solution of Linear System Equations Using Chinese Remainder Theorem A parallel code, using automatic and OpenMP implementations, for solving linear systems of equations with integer coefficients (A and b) is presented. The solution vector is obtained via the Chinese remainder theorem without floating-point operations. The algorithm can be extended to real numbers using integer operations.
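For reference, the reconstruction step rests on the classical theorem: solving the system modulo pairwise coprime moduli $m_1,\dots,m_k$ yields residues $x_i$ that determine each integer solution component exactly. With $M = \prod_{i=1}^{k} m_i$, $M_i = M/m_i$, and $y_i \equiv M_i^{-1} \pmod{m_i}$,

    x \equiv \sum_{i=1}^{k} x_i M_i y_i \pmod{M},

so any component $x$ with $|x| < M/2$ is recovered exactly from its residues using integer arithmetic only.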
Wednesday 2:00 PM 10 C Bruce Loftis, John Towns, and Scott Koranda (UIUCNCSA) User Support and the Virtual Machine Room NCSA and the National Computational Science Alliance are moving toward a Virtual Machine Room (VMR) which will seamlessly connect many distributed computational facilities, data archives, virtual reality facilities, and even large scientific instruments using high-speed networks and the new "grid" technologies. The grid and the Virtual Machine Room will radically transform the way computational scientists get their work done. The deployment of the VMR offers new challenges and will require new models and new technologies for supporting users.
Wednesday 2:30 PM 10 C Terry Hewitt (MCC) Pay as you go Supercomputing: Does it work? UK academia now gets its supercomputer service from a consortium of Silicon Graphics, Computer Sciences Corporation, and the University of Manchester under the private finance initiative of the UK government. The service operates on an essentially pay-per-use basis. This talk will review the mechanism in use for UK academia.
Wednesday 3:00 PM 10 C TBA
Wednesday 4:00 PM 11 G Sally Haerer, CUG President (OR-ST) CUG Report
Wednesday 4:15 PM 11 G Margaret Simmons, CUG Secretary (SDSC) CUG Bylaws
Wednesday 4:45 PM 11 G Leslie Southern (OSC) CUG Elections
Wednesday 5:05 PM 11 G Gary Jensen (UIUCNCSA) CUG Preview: Next Workshops and Conferences
Wednesday 5:25 PM 11 G Leslie Southern (OSC) CUG Election Results
Thursday 8:30 AM IV Paul Ernst (SGI) TUTORIAL IV
UNICOS/IRIX Differences in DMF
This tutorial presents the differences between the UNICOS and IRIX implementations of the Data Migration Facility. Topics covered will include feature differences, installation and configuration changes, DMF tape interface information (TMF and OpenVault), and conversion information for sites considering changing platforms from UNICOS to IRIX.
Thursday 8:30 AM V Gustavo Galimberti and Dan Higgins (SGI) TUTORIAL V
Configuration and Management of Large Origin and SN Mips Systems
This tutorial will cover some of the aspects that are the most relevant in terms of configuring and managing a large system. It will focus on areas that are key to efficient job administration and good job performance in large Origin and SN MIPS systems such as accounting, job limits, scheduling, work load management, ccNUMA support, and partitioning.
Thursday 8:30 AM VI Dave Ellis (SGI) TUTORIAL VI
Programming for Optimal Use of I/O
This tutorial will address I/O optimization for both SCSI and Fibre Channel attached disk subsystems. While the intended audience is the user community, some system specifics will be covered. These include file system block size determination for striped volumes, striped vs. concatenated volumes, appropriate selection of stripe sizes for XLV and XVM volumes, systune parameters that affect disk I/O performance, journal placement options, and optimal allocation group placement. This talk will be IRIX specific.
Thursday 8:30 AM VII Brian Gaffey (SGI) TUTORIAL VII
XFS and CXFS
This tutorial will discuss SGI's new cluster file system. It will cover the features of CXFS, the hardware required to run it and how CXFS has been implemented as an extension to XFS. XVM, a new volume manager required for CXFS, will also be presented. Installation and early customer experiences will be included.
Thursday 11:00 AM 12 G SGI Corporate Direction
Thursday 11:45 AM 12 G SGI Service Report
Thursday 2:00 PM 13 A Bill Harrod and Louis Hackerman (SGI) SGI Tensor Processor Unit The SGI Tensor Processor Unit (TPU) is a unique, high-performance, advanced Digital Signal Processor (DSP). The TPU functions as a shared memory co-processor that provides order-of-magnitude improvements in time-to-solution for signal and image processing applications and related algorithms. The TPU is a standard XTalk I/O card that connects to Octane, Origin, or Onyx2 host systems via an XIO slot. Combined with the scalability of SGI's cc-NUMA architecture, the TPU gives Octane, Onyx, and Origin systems performance and price/performance levels that competing alternatives in the High Performance Computing DSP (HPC-DSP) market cannot match. This presentation will give an overview of the TPU hardware and software, including performance results for various signal processing algorithms.
Thursday 2:30 PM 13 A Michael Pettipher, Michael Bane, and Ian Smith (MCC) and Rainer Keller (RUS) A Comparison of MPI and OpenMP Implementations of a Finite Element Analysis Code In this paper we describe the steps involved, the effort required and the performance achieved in both MPI and OpenMP implementations of a Fortran 90 finite element analysis code on an SGI Origin 2000 using the MIPSpro compiler. We demonstrate that a working OpenMP version is easier to write, and then explain how to overcome some restrictions of the first version of the API (including the MIPSpro compiler) to obtain better performance.
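To see why the OpenMP version is easier to write (a generic sketch, not code from the paper), compare a typical reduction kernel in the two models: OpenMP needs one directive on the serial loop, while MPI needs an explicit data decomposition plus a reduction message:

    #include <mpi.h>

    /* OpenMP: one directive parallelizes the existing serial loop. */
    double dot_omp(int n, const double *a, const double *b)
    {
        double s = 0.0;
        #pragma omp parallel for reduction(+:s)
        for (int i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }

    /* MPI: each rank owns a slice and partial sums must be combined. */
    double dot_mpi(int nlocal, const double *a, const double *b)
    {
        double s = 0.0, total = 0.0;
        for (int i = 0; i < nlocal; i++)
            s += a[i] * b[i];
        MPI_Allreduce(&s, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        return total;
    }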
Thursday 3:00 PM 13 A Bob Ciotti, Jim Taft, and Jens Petersohn (NAS) Early Experiences with the 512 Processor Single System Image Origin2000 This paper covers issues and modifications made to the IRIX operating system, performance of key applications (such as a sustained 60 gigaflops on a production CFD code), application techniques for utilizing shared memory, and operational issues such as reliability with the 512 processor single system image Origin2000 system installed at the NAS facility in October 1999.
Thursday 2:00 PM 13 B Bernd Hetze and Manfred Buchroithner (DRESDENU) Aspects of 3D Visualization of a Complex Cave System In cooperation with the Visualization Group of the Computer Center, the Institute of Cartography has produced a digital visualization of the Dachstein Southface Cave (Austria) using an SGI Onyx2. The aim of the project was to model the cave and the surrounding Alpine landscape panorama as realistically as possible. The paper covers both the subterranean data acquisition and the data processing, and ends with a virtual flight out of the cave and high above the synthetic Dachstein Mountains.
Thursday 2:30 PM 13 B Ron Arnett (INEEL) Parallel Processing of a Groundwater Contaminant Code The U.S. Department of Energy's Idaho National Engineering and Environmental Laboratory is conducting a field test of experimental enhanced bioremediation of trichloroethylene (TCE) contaminated groundwater. The MT3DMS groundwater transport code with a particle tracking option was used to simulate and evaluate the field test. Total wall-clock simulation time was about 5 days per run, projecting a model calibration period of several months. Since profiling showed that the code was a good candidate for parallel processing, parallel directives were added and the code was rerun on a multiple-processor, shared memory machine. The total wall-clock time was reduced to about two days, which allowed the model to be calibrated in a matter of weeks rather than months.
Thursday 3:00 PM 13 B TBA
Thursday 2:00 PM 13 C Jim Sherburne (SGI) LINUX and IRIX Development Environments This talk covers the status of the SGI development environment, including compilers and development tools for IRIX, and a look at current and future compiler and development environments for SGI's LINUX platforms.
Thursday 2:30 PM 13 C Gabriel Broner (SGI) SGI's Operating System Plans for HPC This talk will cover the work SGI is doing on IRIX and LINUX (both single system image and clusters) in support of high performance computing.
Thursday 3:00 PM 13 C Lynne Johnson (SGI) HPC LINUX SGI Intel-based systems will utilize LINUX-based solutions. At SGI we are developing, jointly with other members of the HPC community, systems that take advantage of the standardization of LINUX and offer additional features needed in the HPC space.
Thursday 4:00 PM 14 A Dave Morton (SGI) Origin2000
Thursday 4:45 PM 14 A Alexander Morton (SGI) SN Mips System and Performance Information and Discussion This talk and discussion will cover the upcoming SN Mips system status, details, and a look into performance.
Thursday 4:00 PM 14 B Michael Brown (SGI) Visualization
Thursday 4:45 PM 14 B John Clyne (NCAR) Visualization Theatre: Bring Your Videos!! This is a highly informal session that invites CUG attendees to bring in short (2-3 minute) video tapes that showcase the use of computer graphics and scientific visualization at their institution.
Thursday 4:00 PM 14 C (Available for BoF)
Thursday 4:45 PM 14 D (Available for BoF)
Friday 8:30 AM 15 A LaNet Merrill (SGI) Anatomy of a SAN This paper will cover SGI-supported hardware and software and the roadmap for SAN.
Friday 9:00 AM 15 A Neil Bannister and Laraine Mackenzie (SGI) Data Migration Facility, Tape Management Facility and Tape Device Driver Update This paper will review status and plans for SGI DMF, TMF and the IRIX tape device driver. It will cover both existing and future strategies for the above three product lines.
Friday 9:30 AM 15 A Alan Powers (NAS) SGI's DMF with Failsafe The Numerical Aerospace Simulation Facility (NAS) at NASA Ames Research Center (AMES) has installed DMF on several Origin 2000 production platforms. The largest system is managing over 100 TB of archival storage. The failsafe version will need to integrate both FC (STK 9840) and SCSI (STK SD3 & 9490) tape drives. An overview of the system configurations, disk and tape performance, and benefits will be discussed.
Friday 10:00 AM 15 A SGI's DMF with Failsafe (continued)
Friday 8:30 AM 15 B Thomas R. Elken (SGI) Performance of the 12000-shrink Microprocessor on the SGI 2000 Series
Friday 9:00 AM 15 B Gustavo Galimberti (SGI) SN MIPS Partitions This capability allows an SN MIPS machine to be partitioned into multiple machines. This provides configuration flexibility, good reliability characteristics, and good performance, as inter-partition communication utilizes the fast hardware interconnect. Current status and plans will be presented.
Friday 9:30 AM 15 B Sergio E. Zarantonello (SGI) SGI IA-32 and IA-64 Cluster Systems, Performance and Applications This talk will focus on the SGI 1200 and SGI 1400 Linux clusters. We will talk about recommended configurations, development and runtime environments, and performance on kernel benchmarks and selected applications. Comparisons between different cluster networks (e.g. Myrinet and Giganet) and different flavors of MPI (e.g. MPICH, MVICH, MPI-SOFTECH) will be reported. Next generation IA-64 product cluster solutions will also be presented and discussed.
Friday 10:00 AM 15 B Steve Reinhardt (SGI) SGI Plans for IA-64-based Supercomputers Two years ago SGI announced its adoption of the IA-64 architecture for its future microprocessors, and the first-generation Itanium(tm) processor is nearing customer availability. SGI will deliver highly scalable systems based on IA-64, coupling a next-generation interconnection network with a highly scalable OS to deliver production-worthy high-end computing at near-commodity costs. The OS will be based on LINUX. This talk will cover directions and plans for system deliveries through the next 4 years.
Friday 8:30 AM 15 C Dan Higgins and Gustavo Galimberti (SGI) IRIX Resource Management Plans and Status New developments in the area of resource management provide IRIX with much better support for large systems. Some of the enhancements available today include job limits, accounting, and increased repeatability.
Friday 9:00 AM 15 C TBA
Friday 9:30 AM 15 C Mike Pflugmacher (UIUCNCSA) Integration of Maui Scheduler with LSF at NCSA NCSA is working with Maui Scheduler developers to integrate the scheduler with Platform's LSF batch system on a multiple Origin 2000 machine cluster. The talk will cover the basic interfaces to make the system function as well as enhancements made to the scheduler to facilitate NCSA's batch system requirements.
Friday 10:00 AM 15 C TBA
Friday 11:00 AM 16 G Kees Nieuwenhuis, LAC Co-Chair (SARA) Introduction
Friday 11:15 AM 16 G Vincent Icke (University of Leiden) Radiation Hydrodynamics in Astrophysics All of physics is astrophysics, a truism that is particularly apparent when one considers the behavior of gases under astrophysical conditions. Great progress has been made in two-dimensional hydrodynamics, and 3D hydro is beginning to come into its own. Magnetic fields, too, are being included. However, it is well known that radiation plays a big and often dominant role in the behavior of astrophysical gases. The problem is formidable in all respects: theoretically, because it involves so many different processes, and computationally because the use of simplifying symmetries is almost never warranted. Thus we are facing the prospect of having to compute the evolution of gas under extreme physical conditions, in three dimensions, with non-local interactions, and with an enormous range of scales. This talk will review some of the relevant problems and applications, and the computational implications thereof.
Friday 12:00 PM 16 G CUG Board of Directors CUG Update
Friday 2:00 PM 17 G SGI Hardware Report
Friday 2:45 PM 17 G SGI Software Report
Friday 4:00 PM 18 A TBA (SGI) Customer Service: Operations Q&A Panel
Friday 4:00 PM 18 B Constantinos S. Lerotheou, S.P. Johnson, P.F. Leggett, E.W. Evans, and M. Cross (U. Greenwich) An Interactive Environment for the Rapid Parallelization of Fortran Mesh-based Codes The Computer Aided Parallelization Tools (CAPTools) can be used to transform serial Fortran programs to a parallel form, and in doing so, can exploit high performance parallel systems. Whole program, interprocedural dependence analysis is essential to produce effective directive or message passing based parallel source code. The tools are initially targeted at structured and unstructured mesh-based codes using an appropriate partitioning strategy.
Friday 4:00 PM 18 C (Available for BoF)
Friday 4:00 PM 18 D (Available for BoF)