Highlights

    The PASC16 Conference will offer plenary sessions, a public lecture, minisymposium presentations, contributed talks, a paper session, and a poster session. The program will also provide time for discussion within the PASC scientific disciplines (inter-PASC Networks discussions) and will include an exhibition space.

    Additionally, the conference includes an information event dedicated to users of the Swiss National Supercomputing Centre (CSCS).

    Plenary Presentations

    Please check the list of plenary presentations periodically for updates.

    Big Data Visual Analysis; Chris Johnson (University of Utah, USA)

    We live in an era in which the creation of new data is growing exponentially, such that every two days we create as much new data as we did from the beginning of mankind until the year 2003. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools for understanding such large and often complex data. In this talk, I will present state-of-the-art visualization techniques applied to important Big Data problems in science, engineering, and medicine.

    Chris Johnson is the founding director of the Scientific Computing and Imaging (SCI) Institute at the University of Utah, where he is a Distinguished Professor of Computer Science and holds faculty appointments in the Departments of Physics and Bioengineering. His research interests are in the areas of scientific computing and scientific visualization. Professor Johnson founded the SCI research group in 1992; it has since grown into the SCI Institute, employing over 200 faculty, staff, and students. He serves on several international journal editorial boards, as well as on advisory boards to national research centers. He has received a number of awards, including the NSF Presidential Faculty Fellow (PFF) award from President Clinton in 1995 and the Governor's Medal for Science and Technology from Governor Michael Leavitt in 1999. He is a Fellow of the American Institute for Medical and Biological Engineering and a Fellow of the American Association for the Advancement of Science; in 2009 he was elected a Fellow of the Society for Industrial and Applied Mathematics (SIAM) and received the Utah Cyber Pioneer Award. In 2010 he received the Rosenblatt Award from the University of Utah and the IEEE Visualization Career Award, in 2012 the IEEE IPDPS Charles Babbage Award, and in 2013 the IEEE Sidney Fernbach Award. In 2014, Professor Johnson was elected an IEEE Fellow.


    Large Scale Computation in Seismic Exploration; Dave Nichols (Schlumberger Limited, USA)

    Seismic exploration for oil and gas has been one of the largest commercial users of high performance computing for more than 50 years. We record petabyte datasets and use analysis and imaging techniques that require large-scale wave equation modelling. The problem size, and ambition level, of this modelling has increased steadily over the years. However, despite the many changes in computer architectures, the methods used have remained quite stable for at least 15 years: we have chosen to use explicit time-domain finite-difference methods on many different hardware platforms. I will discuss why this has been so successful and whether any current and future trends will require us to move to other computational methods.
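
    To make the approach concrete, the sketch below implements an explicit time-domain finite-difference scheme, second order in space and time, for the 1D constant-density acoustic wave equation. The grid, velocity model, and Ricker source are illustrative assumptions rather than details from the talk; production seismic codes solve 3D problems with higher-order stencils and absorbing boundaries.

    # Minimal explicit time-domain finite-difference solver for the
    # 1D acoustic wave equation p_tt = c^2 p_xx (illustrative only).
    import numpy as np

    nx, nt = 400, 1000            # grid points, time steps
    dx = 10.0                     # grid spacing (m)
    c = np.full(nx, 2000.0)       # velocity model (m/s), homogeneous here
    dt = 0.5 * dx / c.max()       # time step within the CFL stability limit
    f0 = 10.0                     # dominant source frequency (Hz)
    src = nx // 2                 # source grid index

    p_prev = np.zeros(nx)         # pressure at time level n-1
    p_curr = np.zeros(nx)         # pressure at time level n

    for n in range(nt):
        # Second-order centered approximation to p_xx on interior points
        lap = (p_curr[:-2] - 2.0 * p_curr[1:-1] + p_curr[2:]) / dx**2
        p_next = np.zeros_like(p_curr)
        # Explicit leapfrog update in time
        p_next[1:-1] = (2.0 * p_curr[1:-1] - p_prev[1:-1]
                        + (c[1:-1] * dt)**2 * lap)
        # Inject a Ricker wavelet as the source term
        a = (np.pi * f0 * (n * dt - 1.0 / f0))**2
        p_next[src] += (1.0 - 2.0 * a) * np.exp(-a) * dt**2
        p_prev, p_curr = p_curr, p_next

    The appeal of such schemes, as the abstract suggests, is that the update is a local stencil applied uniformly across the grid, which maps naturally onto vector units, GPUs, and distributed-memory machines alike.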

    Dave Nichols is a research advisor at Schlumberger. His work crosses the boundaries between geophysics, applied mathematics, and high performance computing. He has worked for Schlumberger for 21 years in a variety of roles, including:

    • Geophysical research.
    • Research management at the long-term research labs in Cambridge, UK.
    • Completions marketing at the engineering centers in Texas.
    • Building tools and processes to evaluate new technology funding opportunities across all internal R&D groups.

    From 2010 to 2014 he worked for Schlumberger at Stanford University, where he was a consulting professor in the geophysics department. He currently advises the Schlumberger R&D organization on a range of topics associated with seismic imaging. Before joining Schlumberger he worked on novel imaging algorithms and optimization tools. He received his PhD in Geophysics from Stanford University in 1994.


    Random Explorations of Material Structure Space; Chris J. Pickard (University of Cambridge, UK)

    The use of stochastic optimisation strategies for first principles structure prediction is now well established, and there are many examples of these techniques making genuine discoveries. Ab Initio Random Structure Searching (AIRSS), in which randomly generated starting structures are repeatedly relaxed to nearby local energy minima, is extremely simple, reliable, and well suited to high throughput computation. Typical functional materials are ternary or quaternary compounds, so it is important to search over compositional space as thoroughly and broadly as possible. I will discuss how AIRSS may be used to do this, paying particular attention to pulling apart structures we have already found to make new, random ones.
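
    As a schematic illustration of the search loop, the sketch below generates random starting structures and keeps the lowest-energy relaxed result. The relax_structure function is a hypothetical stand-in for a first-principles relaxation (in practice a DFT code such as CASTEP performs this step), so the structures and energies here are dummy values.

    # Schematic AIRSS-style loop: random starting structures are
    # generated and relaxed, and the lowest-energy result is kept.
    import random

    def random_structure(n_atoms):
        # Random fractional coordinates in a unit cell (toy model)
        return [(random.random(), random.random(), random.random())
                for _ in range(n_atoms)]

    def relax_structure(structure):
        # Hypothetical stand-in: a real search would relax the structure
        # with a first-principles code and return its total energy.
        energy = sum(x + y + z for x, y, z in structure)  # dummy objective
        return structure, energy

    best_structure, best_energy = None, float("inf")
    for trial in range(100):                 # independent random samples
        candidate = random_structure(n_atoms=8)
        relaxed, energy = relax_structure(candidate)
        if energy < best_energy:
            best_structure, best_energy = relaxed, energy

    print("lowest energy found:", best_energy)

    Because every trial is independent, the loop parallelizes trivially, which is part of what makes the method so well suited to high throughput computation.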

    Chris Pickard is the inaugural Sir Alan Cottrell Professor of Materials Science in the Department of Materials Science and Metallurgy, University of Cambridge. Previously he was Professor of Physics, University College London (2009-2015), and Reader in Physics, University of St Andrews (2006-2008). He has held both EPSRC Advanced and Leadership Research Fellowships, and is currently a Royal Society Wolfson Research Merit Award holder (2015). He is a lead developer of the widely used CASTEP code, and introduced both the GIPAW approach to the prediction of magnetic resonance parameters and Ab Initio Random Structure Searching. In 2015 he won the Rayleigh Medal and Prize of the Institute of Physics.


    Exascale Computing and Beyond; Marc Snir (Argonne National Laboratory and University of Illinois at Urbana-Champaign, USA)

    The US, like the EU and other countries, is engaged in a national initiative that aims to deploy exascale computing platforms early in the next decade. The outlines of such platforms are starting to emerge. In this talk, we shall survey the current roadmap for exascale computing and the main challenges it entails. We shall also discuss the likely evolution of HPC beyond exascale, in the "post-Moore" era.

    Marc Snir is Director of the Mathematics and Computer Science Division at Argonne National Laboratory and Michael Faiman and Saburo Muroga Professor in the Department of Computer Science at UIUC. He received a Ph.D. in Mathematics from the Hebrew University of Jerusalem in 1979 and spent time at NYU, the Hebrew University, and IBM Research before joining UIUC. He has published numerous papers on computational complexity, parallel algorithms, parallel architectures, interconnection networks, parallel languages and libraries, and resilience. He is an Argonne Distinguished Fellow and a Fellow of the AAAS, the ACM, and the IEEE. He recently won the IEEE Award for Excellence in Scalable Computing and the IEEE Seymour Cray Computer Engineering Award.

    Challenges for Climate and Weather Prediction in the Era of Heterogeneous Computer Architectures: Oscillatory Stiffness, Time-Parallelism, and the Slow Manifold; Beth Wingate (University of Exeter, UK)

    For weather or climate models to achieve exascale performance on next-generation heterogeneous computer architectures, they will be required to exploit on the order of million- or billion-way parallelism. This degree of parallelism far exceeds anything possible in today's models, even though they are highly optimized. In this talk I will discuss the mathematical issue that limits space- and time-parallelism in climate and weather prediction models: oscillatory stiffness in the underlying PDEs. I will go on to discuss recent successful time-parallel algorithms, including the fast-converging asymptotic parareal method and a time-parallel matrix exponential.
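
    The sketch below shows the structure of a classical parareal iteration on the scalar test problem du/dt = -u; it is meant only to convey where the time-parallelism lies, not to reproduce the asymptotic parareal method discussed in the talk. Both propagators are simple explicit Euler integrators chosen purely for illustration.

    # Classical parareal iteration for du/dt = -u on [0, T].
    # The fine solves in each iteration are independent across time
    # slices and would run in parallel in a real implementation.
    import numpy as np

    def propagate(u, t0, t1, n_steps):
        # Explicit Euler integrator for du/dt = -u over [t0, t1]
        dt = (t1 - t0) / n_steps
        for _ in range(n_steps):
            u = u + dt * (-u)
        return u

    coarse = lambda u, t0, t1: propagate(u, t0, t1, 1)    # cheap G
    fine   = lambda u, t0, t1: propagate(u, t0, t1, 100)  # accurate F

    T, N = 2.0, 10                       # horizon, number of time slices
    ts = np.linspace(0.0, T, N + 1)
    U = np.zeros(N + 1)
    U[0] = 1.0                           # initial condition u(0) = 1

    for n in range(N):                   # initial guess: coarse sweep
        U[n + 1] = coarse(U[n], ts[n], ts[n + 1])

    for k in range(5):                   # parareal corrections
        # Fine and coarse solves on the previous iterate (parallel part)
        F_prev = [fine(U[n], ts[n], ts[n + 1]) for n in range(N)]
        G_prev = [coarse(U[n], ts[n], ts[n + 1]) for n in range(N)]
        for n in range(N):               # serial corrected coarse sweep
            U[n + 1] = coarse(U[n], ts[n], ts[n + 1]) + F_prev[n] - G_prev[n]

    print("parareal u(T):", U[-1], "exact:", np.exp(-T))

    The coarse sweep remains serial but cheap; it is the expensive fine solves that the method distributes across processors, which is the source of the additional parallelism in time referred to above.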

    Beth Wingate is a Professor of Mathematics at the University of Exeter. Prior to this she was a Senior Scientist at Los Alamos National Laboratory. Her recent research focuses on dynamics in the Arctic Ocean, the slow/fast dynamics of the air-sea interface, and time-parallel methods for climate modeling intended to take advantage of the increased parallelism available with heterogeneous computer architectures.

    Simulations of Hydrogen Ingestion Flashes in Giant Stars; Paul Woodward (University of Minnesota, USA)

    My team at the University of Minnesota has been collaborating with the team of Falk Herwig at the University of Victoria to simulate brief events in the lives of stars that can greatly affect the heavy elements they synthesize in their interiors and subsequently expel into the interstellar medium. These events are caused by the ingestion of highly combustible hydrogen-rich fuel into the convection zone above a helium burning shell in the deeper interior. Although these events are brief, it can take millions of time steps to simulate the dynamics in sufficient detail to capture subtle aspects of the hydrogen ingestion. To address the computational challenge, we exploit modern multicore and many-core processors and also scale the simulations to run efficiently on over 13,000 nodes of NSF's Blue Waters machine at NCSA. I will describe results of these simulations, along with some of the key numerical and computational techniques that make them practical.

    Paul Woodward received his Ph.D. in physics from the University of California, Berkeley in 1973. He has focused his research on simulations of compressible flows in astrophysics, studying problems in star formation, supersonic jet propagation, convection in stars, and astrophysical turbulence. He is a Fellow of the Minnesota Supercomputing Institute and directs the Laboratory for Computational Science & Engineering (LCSE) within the University of Minnesota's Digital Technology Center. The LCSE concentrates on high performance parallel computation and the data analysis and visualization that this requires. Woodward received the IEEE Sidney Fernbach Award in large-scale computing in 1995 and, with 12 collaborators at Livermore, Minnesota, and IBM, received the Gordon Bell Prize in the performance category in 1999. His recent work combines simulation of convection, turbulence, and combustion in stellar interiors with the development of specialized techniques for exploiting the power of large networks of many-core computing devices.