Sessions

Instead of long talks delivered to an audience busy with their laptops, this symposium is meant to be interactive and to encourage genuine discussion among experts in the field. The idea is for a session lead to give a very brief talk (10 to 15 minutes) or an introduction to a topic and then to initiate and motivate a discussion among all participants. All attendees are expected to be active participants. To keep the discussion going, a session lead may have to take on the role of a teacher, devil's advocate, or interviewer.
Each session has, in addition to the session lead, a wingmate. The wingmate's task is to help the session lead promote an active discussion among all participants. In that role, the wingmate may play devil's advocate to the session lead, pose additional questions to the session lead or the participants, or support the session lead by providing additional information or answers to questions and challenges. In short, the wingmate helps the session lead ensure that the discussion does not run dry and remains interesting for all participants.

Currently planned sessions

  • Session

    • Title: Next steps
    • Session lead: Jeffrey Nichols, Oak Ridge National Laboratory
    • Session wingmate: Thomas Sterling, Indiana University
    • We are on the road to exascale and have a slightly better idea of what these systems will look like than we did five years ago. Is ongoing research on track to make these systems scalable, less power hungry, usable for the intended application domains, and more resilient to faults? Where do we stand, and are any course corrections necessary? How should we address the remaining challenges?
  • Session

    • Title: Expanding the scope of traditional HPC systems
    • Session lead: Kathryne O'Brien, IBM
    • Session wingmate: Duncan Roweth, Cray
    • Traditional large-scale scientific applications are not enough to drive the market. New areas, such as analytics, may be served by exascale systems or by smaller systems using exascale technologies. In which market areas can exascale-capable machines play a role? What compromises have to be made to enable this broader spectrum of uses? What technologies, not specifically designed for supercomputing, can be leveraged to reach exascale sooner and more cheaply? Conversely, how can HPC technologies and methods help the larger data-center/cloud space?
  • Session

    • Title: HPC runtime opportunities and challenges
    • Session lead: Thomas Sterling, Indiana University
    • Session wingmate: TBD
    • This session will discuss the need to expose and exploit information about system execution state on a continuing basis, applying it to task scheduling and resource management as well as to discovering new parallelism on the fly. The objective of runtime system software for HPC is to deliver dramatic improvements in efficiency and scalability, but such software imposes additional overheads that can themselves degrade performance. This session will consider the balance between these contending influences.
  • Session

    • Title: Common community APIs
    • Session lead: Larry Kaplan, Cray
    • Session wingmate: Dimitrios S. Nikolopoulos, Queen's University of Belfast
    • A variety of vendor-specific hardware and software is expected to be developed for exascale and HPC systems. This variety may pose a challenge to both application and other software designers and limit portability. In which areas could common APIs help bridge the portability gap while providing access to new features? What is a good way to create the necessary APIs and standardize them with vendor and user participation? How soon should this be done? What standardization challenges exist?
  • Session

    • Title: The programmer's burden
    • Session lead: Ron Brightwell, Sandia National Laboratories
    • Session wingmate: Turlough Downes, Dublin City University
    • What high-level changes are going to be required for applications to reach exascale? How important will communication avoidance be? Will programmer awareness of power and reliability be required? To what level of detail? Will bulk synchronous parallel (BSP) programming survive? Must it?
  • Session

    • Title: Fault tolerance
    • Session lead: Christian Engelmann, Oak Ridge National Laboratory
    • Session wingmate: Larry Kaplan, Cray
    • Permanent and transient faults may occur continuously in an exascale system due to decreased component reliability and increased component counts. What fault types and frequencies should be expected? Can evolutionary fault tolerance approaches provide resilience at exascale, or are more revolutionary concepts needed? Which layer (OS, runtime, and/or application) is responsible for ensuring resilience?
  • Session

    • Title: Co-design
    • Session lead: Sudip Dosanjh, LBL/NERSC
    • Session wingmate: Aidan Thompson and Simon Hammond, Sandia National Laboratories
    • What has been learned so far? Is it working? What are the implications for system software? How far down the software stack can and should co-design go?
  • Session

    • Title: Research challenges
    • Session lead: Barney Maccabe, Oak Ridge National Laboratory
    • Session wingmate: Vladimir Getov, University of Westminster
    • How can research help address challenges expected to arise with the advent of exascale systems? Predicted issues, such as fault tolerance, power dissipation, usability, programmability, and scalability, are hot topics in research laboratories. Are there issues not being addressed? Is progress in these areas advancing fast enough?
  • Session

    • Title: Exascale simulations
    • Session lead: Sudhakar Yalamanchili, Georgia Institute of Technology
    • Session wingmate: Arun Rodrigues, Sandia National Laboratories
    • This session will discuss the need for, and the challenges of, simulating exascale systems.
  • Session

    • Title: Application perspective
    • Session lead: Thomas Schulthess, Swiss National Supercomputing Center (CSCS)
    • Session wingmate: Henry Tufo, University of Colorado at Boulder
    • Are hybrid systems here to stay? If so, how will they be programmed? Physical constraints of computer hardware are forcing a rethinking of our programming models. Which models are finding acceptance among scientific programmers, and how can they contribute to the design of future exascale supercomputing systems?