The Cray User Group welcomes Thomas Schulthess, Director of the Swiss National Supercomputing Center, as the keynote speaker on Tuesday, May 6, 2014.
“Supercomputers: instruments for science or dinosaurs that haven’t gone extinct yet?”
High-performance computing has dramatically improved scientific productivity over the past 50 years. It has turned simulation into a commodity that all scientists can use to produce knowledge and understanding about the world and the universe, drawing on data from experiments and on theoretical models that can be solved numerically. Since the beginnings of electronic computing, supercomputing – loosely defined as the most powerful scientific computing at any given time – has led the way in technology development. Yet the way we interact with supercomputers today has not changed much since the days we stopped using punch cards. I do not claim to understand why, but I would nevertheless like to propose a change in how we develop the models and applications that run on supercomputers.
Thomas Schulthess received his PhD in physics from ETH Zurich in 1994. He is a professor of computational physics at ETH Zurich and Director of the Swiss National Supercomputing Center in Lugano, Switzerland. Thomas holds a visiting distinguished professor appointment at ORNL, where he was a researcher and group leader in computational materials science for over a decade before moving to ETH Zurich in 2008. His current research interests are in the development of efficient and scalable algorithms for the study of strongly correlated quantum systems, as well as in electronic structure methods in general. He is also engaged in the development of efficient tools and simulation systems for other domains, such as meteorology/climate and geophysics.
Wednesday, May 7th – Dr. Oliver Fuhrer, Senior Scientist at the Federal Office of Meteorology and Climatology MeteoSwiss, Zurich
Higher grid resolution, larger ensembles, and the growing complexity of weather and climate models demand ever-increasing compute power. Since 2013, several large hybrid high-performance computers, which combine traditional CPUs with some type of accelerator (e.g. GPUs), have come online and are available to the user community. Early adopters of this technology trend may gain considerable advantages in terms of available resources and energy to solution. On the downside, a substantial investment is required to adapt applications to such accelerator-based supercomputers.
Within the COSMO Consortium and the Swiss HP2C Initiative, a version of the COSMO weather and regional climate prediction model able to run on GPUs is being developed. This contribution will give an overview of the status of this version and present a roadmap of further plans. The adaptations that have been made to the model (and why these adaptations will also benefit CPU-based hardware architectures) will be presented. While the physical parameterizations have been ported to GPUs using OpenACC compiler directives, the dynamical core was refactored with a C++-based domain-specific language for structured grids that provides both CUDA and OpenMP back ends. We will discuss our experience with these two porting approaches and their respective advantages and disadvantages. This contribution will give a detailed description of the challenges posed by such a large refactoring effort using different languages on Cray systems, along with performance results on three Cray systems at CSCS: Rosa (XE6), Todi (XK7), and Daint (XC30).
Dr. Oliver Fuhrer is a senior scientist in the modeling group of the Federal Office of Meteorology and Climatology MeteoSwiss, Zurich. He has over 12 years' experience in high-performance computing, regional climate simulation, and numerical weather prediction. He has developed and applied parallel software and conducted research first on vector machines and later on massively parallel architectures at the Swiss National Supercomputing Centre. Recently, Fuhrer served as PI or co-PI on three projects within the Swiss High Performance and High Productivity Computing (HP2C) initiative and the Platform for Advanced Scientific Computing (PASC), and was the scientific lead for developing a hardware-oblivious, performance-portable implementation of the dynamical core of the COSMO model. These efforts have resulted in an implementation of COSMO capable of running production simulations on hybrid architectures.