XSEDE16 has ended


Tutorials
Monday, July 18

8:00am EDT

Student Tutorial: Supercomputing in Plain English
Prerequisites: Recent experience amounting to one semester of programming in C, C++, or Fortran is encouraged but not required (a modest number of brief code examples will be examined). Recent basic experience with any Unix-like operating system (could be Linux but doesn't have to be) is encouraged but not required (an introductory Linux commands tutorial will be included). No previous HPC experience is required.

Requirements: A laptop (Windows, MacOS or Linux) or a tablet (iOS, Android or Windows Mobile) is strongly recommended but not required.


* Lecture: Overview: What the Heck is Supercomputing?

This session provides a broad overview of High Performance Computing (HPC). Topics include: what is supercomputing?; the fundamental issues of HPC (storage hierarchy, parallelism); hardware primer; introduction to the storage hierarchy; introduction to parallelism via an analogy (multiple people working on a jigsaw puzzle); Moore's Law; the motivation for using HPC.

* Lab: Running A Job on a Supercomputer

In this hands-on lab session, you’ll get an account on one or more supercomputers, and you’ll get a chance to run a job. If you’re new to Unix-like operating systems, this will be a great introduction; if your Unix/Linux skills have gotten a little rusty, this will be a great refresher.

* Lecture: The Tyranny of the Storage Hierarchy

This session focuses on the implications of a fundamental reality: fast implies expensive implies small, and slow implies cheap implies large. Topics include: registers; cache, RAM, and the relationship between them; cache hits and misses; cache lines; cache mapping strategies (direct, fully associative, set associative); cache conflicts; write-through vs. write-back; locality; tiling; hard disk; virtual memory. A key point: Parallel performance can be hard to predict or achieve without understanding the storage hierarchy.
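The payoff of locality and tiling can be sketched in a few lines. This is an illustrative Python version (the tutorial's own examples are in C, C++, or Fortran) of a blocked matrix multiply: each small tile of the operands is reused many times while it is still resident in cache, which is exactly the point of tiling. The block size is a tunable assumption.

```python
def matmul_tiled(A, B, n, bs=32):
    """Blocked (tiled) multiply: C = A * B for n x n lists of lists.
    Each (bs x bs) tile of A and B is touched repeatedly while it is
    hot in cache, instead of streaming the whole matrix every pass."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, n, bs):
            for jj in range(0, n, bs):
                # Update one tile of C using one tile of A and one of B.
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a = A[i][k]
                        row_b = B[k]
                        row_c = C[i]
                        for j in range(jj, min(jj + bs, n)):
                            row_c[j] += a * row_b[j]
    return C
```

The result is identical to an untiled multiply; only the order in which memory is visited changes, which is why the storage hierarchy, not the arithmetic, determines the speedup.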

* Lab: Running Benchmarks on a Supercomputer

In this hands-on lab session, you’ll benchmark a matrix-matrix multiply code to discover the configuration that gets the best performance.
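The lab works with a supplied matrix-multiply code; as a minimal stand-in, the sketch below benchmarks the same computation in Python under two loop orders. This is the kind of configuration change whose performance impact the lab explores: the arithmetic is identical, but the memory access pattern differs.

```python
import time

def matmul(A, B, n, order="ijk"):
    """Naive n x n matrix multiply with a selectable loop order. The 'ikj'
    order streams through rows of B and C in memory order, which is
    friendlier to cache lines than 'ijk' for row-major storage."""
    C = [[0.0] * n for _ in range(n)]
    if order == "ijk":
        for i in range(n):
            for j in range(n):
                s = 0.0
                for k in range(n):
                    s += A[i][k] * B[k][j]
                C[i][j] = s
    else:  # "ikj"
        for i in range(n):
            for k in range(n):
                a = A[i][k]
                for j in range(n):
                    C[i][j] += a * B[k][j]
    return C

def bench(order, n=64):
    """Time one multiply of two n x n matrices with the given loop order."""
    A = [[float(i + j) for j in range(n)] for i in range(n)]
    B = [[float(i - j) for j in range(n)] for i in range(n)]
    start = time.perf_counter()
    matmul(A, B, n, order)
    return time.perf_counter() - start

for order in ("ijk", "ikj"):
    print(f"{order}: {bench(order):.4f} s")
```

In a compiled language on real hardware the gap between orders grows with matrix size, once the working set no longer fits in cache.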

* Other topics may be introduced if time permits.

Content: Older versions of the lecture slides and exercise descriptions (which will be updated) may be found as follows:

* Lecture: Overview: What the Heck is Supercomputing?

* Lab: Running A Job on a Supercomputer

* Lecture: The Tyranny of the Storage Hierarchy


* Lab: Running Benchmarks on a Supercomputer

Monday July 18, 2016 8:00am - 12:00pm EDT
Chopin Ballroom

8:00am EDT

Tutorial: Building a Campus Cyberinfrastructure Hub on the Cloud with HUBzero
HUBzero is a powerful, open source software platform for creating dynamic web sites that support scientific research and educational activities (hubzero.org). Used by communities ranging from nanotechnology and earthquake engineering to earth sciences, data curation, and healthcare, it is a proven framework for building a science gateway and a key part of many organizations' cyberinfrastructure.

The HUBzero platform provides an application framework for developing and deploying interactive computational tools. It has modules to aid in data sharing and publication as well as a complete user management system and robust group management system to facilitate collaboration across the entire site. The HUBzero Application also offers a comprehensive project management module.

HUBzero can serve as a complete cyberinfrastructure solution for an organization, managing tens of thousands of users and terabytes of data, and can be a one-stop gateway to numerous preconfigured grid computing resources. Its highly integrated services support collaborative research and the publication of data and results, and the platform can archive research and data for long-term access and analysis.

Today, HUBzero is available as a cloud appliance (AMI) on the Amazon Web Services Marketplace, so it is easier than ever to deploy and maintain a hub for your organization.

This tutorial will provide an overview of this integrated cyberinfrastructure, describe a selected set of common usage patterns for HUBzero hubs, and present example hubs from campus CI and science domains. The second portion of the tutorial will provide examples and hands-on exercises on using Amazon Web Services to deploy your own instance of the HUBzero platform and on configuring your AWS HUBzero instance to connect to XSEDE resources, building a web platform to support collaboration, data sharing, and computation at your own institution.

Monday July 18, 2016 8:00am - 12:00pm EDT
Windsor InterContinental Miami

8:00am EDT

Tutorial: High Performance Modeling and Simulation with Eclipse ICE
The Eclipse Integrated Computational Environment (ICE) is a scientific workbench and workflow environment designed to improve the user experience for modeling and simulation applications. ICE makes it possible for developers to deploy rich, graphical, interactive capabilities for their science codes, in a common, cross-platform user environment. This tutorial will teach attendees how to extend ICE to add custom plugins for tailoring the environment to their specific high-performance modeling and simulation applications.

The tutorial will begin with an overview of the architecture ICE uses for managing modeling and simulation workflows. Attendees will then:

- Learn how to create a simple, multicore/multithreaded high-performance simulator from scratch
- Learn how to extend the workbench to generate input files
- Learn the various ways that jobs can be launched in parallel from Eclipse
- Learn how to visualize data in 3D and analyze results locally or remotely
- Learn to generate custom UI widgets for a unique, domain-specific user experience
- Learn how to write and execute EASE scripts that modify the workbench
- Learn how to enhance the workbench to include links to source code repositories which developers can download and configure in ICE

This tutorial will include extensive documentation and exercises for attendees, and will focus on each attendee developing a custom high-performance modeling and simulation tool from scratch utilizing all the main features available in ICE. Attendees will be expected to have at least a moderate level of Java programming skills and some familiarity with Eclipse programming. Sample simulators will be available for those without an idea in mind, and a science background is not required. Docker containers will be provided for Windows users.

Videos of applications similar to those that attendees will create are available on the Eclipse ICE YouTube Channel: https://www.youtube.com/channel/UCfmcuxKkDBPRmhbC5GoMwSw

Monday July 18, 2016 8:00am - 12:00pm EDT

8:00am EDT

Tutorial: How to Boost the Performance of your MPI and PGAS Applications with MVAPICH2 Libraries?
MVAPICH2 software, supporting the latest MPI 3.1 standard, delivers high performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, 10/40 GigE/iWARP, and RoCE (v1 and v2) networking technologies. The MVAPICH2-X software library provides support for hybrid MPI+PGAS (UPC, OpenSHMEM, CAF, and UPC++) programming models with a unified communication runtime. The MVAPICH2 and MVAPICH2-X software libraries (http://mvapich.cse.ohio-state.edu) power several supercomputers in the XSEDE program, including Gordon, Comet, Lonestar4, and Stampede. These libraries are used by more than 2,550 organizations world-wide in 79 countries to extract the potential of these emerging networking technologies for modern systems; as of March '16, more than 358,000 downloads have taken place from the project's site. They also power several supercomputers in the TOP500 list, such as Stampede, Tsubame 2.5, and Pleiades.

A large number of XSEDE users run their MPI and PGAS applications with these libraries on a daily basis. However, many of these users and the corresponding system administrators are not fully aware of all the features, optimizations, and tuning techniques the libraries provide. This tutorial aims to address these concerns and to provide a set of concrete guidelines XSEDE users can follow to boost the performance of their applications. Further, as accelerators such as GPUs and MICs are commonly available on XSEDE resources, extracting the best performance and scalability for user applications on such systems is an increasingly challenging task, so we will also present tuning and optimization techniques for these systems.

We will start with an overview of the MVAPICH2 libraries and their features. Next, we will focus in depth on installation guidelines, runtime optimizations, and tuning flexibility. An overview of configuration and debugging support in the MVAPICH2 libraries will be presented, along with support for GPU- and MIC-enabled systems. The impact of the various features and optimization techniques on performance will be discussed in an integrated fashion, and 'best practices' for a set of common XSEDE applications will be presented. Advanced optimization and tuning of MPI applications using the new MPI-T feature (defined by the MPI-3 standard) in MVAPICH2 will also be discussed. A set of case studies on redesigning applications to take advantage of hybrid MPI+PGAS programming models will be presented. Finally, the MVAPICH2-EA library, aimed at reducing the energy footprint of HPC applications, will be explained.

Monday July 18, 2016 8:00am - 12:00pm EDT

8:00am EDT

Tutorial: Introduction to CUDA Programming in C and Fortran
This tutorial is a beginning/intermediate course on programming NVIDIA GPUs with CUDA. After a short segment on why we are using accelerators in high performance computing and on accelerator hardware, we will describe all of the pieces necessary to write a CUDA program in C and Fortran. The example will be a stencil update, which is simple enough to be written in a few lines of code. The code design will be guided by the hardware; we will put emphasis on motivating common design principles by the desire to write fast code for GPU accelerators. In the second part of the presentation, we will focus on two common optimization strategies: using shared memory and overlapping computation with data transfer using CUDA streams. Experience with writing serial code in C or Fortran will be helpful to follow the examples.
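Before writing the CUDA kernel, one typically starts from a serial reference version of the stencil update. The numpy sketch below is such a baseline (the tutorial's own examples are in C and Fortran); in the CUDA port, each interior grid point would be computed by its own GPU thread.

```python
import numpy as np

def stencil_step(u):
    """One Jacobi-style 5-point stencil update on the interior of a 2-D
    grid: each interior point becomes the average of its four neighbors.
    Boundary values are left unchanged. This serial loop-free form is the
    reference that a CUDA kernel would parallelize one point per thread."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v
```

Because every output point depends only on the previous iteration's values, all interior points can be updated independently, which is what makes the stencil such a natural first GPU example.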

Monday July 18, 2016 8:00am - 12:00pm EDT

8:00am EDT

Tutorial: Introduction to Python Pandas for Data Analytics
This tutorial is a beginner level course on tackling data analytics using the Python pandas module. Python is a high-level object oriented language that has found wide acceptance in the scientific computing community. Ease of use and an abundance of software packages are some of the few reasons for this extensive adoption. Pandas is a high-level open-source library that provides data analysis tools for Python. We will also introduce necessary modules such as numpy for fast numeric computation and matplotlib/bokeh for plotting to supplement the data analysis process. Experience with a programming language such as Python, C, Java or R is recommended but not necessary.
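As a small taste of the workflow this tutorial covers, the sketch below builds a DataFrame, derives a column, and computes a grouped aggregate. The sample data and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical sample data: per-job runtimes (seconds) on two systems.
df = pd.DataFrame({
    "system": ["comet", "comet", "stampede", "stampede"],
    "cores":  [24, 48, 16, 32],
    "runtime": [120.0, 65.0, 150.0, 80.0],
})

# Two everyday pandas idioms: a derived column and a grouped aggregation.
df["core_hours"] = df["cores"] * df["runtime"] / 3600.0
summary = df.groupby("system")["runtime"].mean()
print(summary)
```

From here the tutorial's supporting modules take over: numpy for fast numeric work on the underlying arrays, and matplotlib or bokeh for plotting the aggregates.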

Monday July 18, 2016 8:00am - 12:00pm EDT

8:00am EDT

Tutorial: Introduction to Scientific Visualization and Data Sharing
Visualization is widely understood and used by researchers as an excellent communication tool. This narrow view often keeps scientists from fully using and developing their visualization skill set. This tutorial provides an understanding of visualization and its utility in error diagnosis and in exploring data for scientific insight. When used effectively, visualization provides a complementary and effective toolset for data analysis, one of the most challenging problems in computational domains. In this tutorial we plan to bridge these gaps by providing end users with fundamental visualization concepts, execution tools, customization, and usage examples. Finally, short hands-on tutorials on data sharing using SeedMe.org will be provided. The tutorial comprises four closely related sessions:
1. Visualization fundamentals: lecture; assay of standard techniques and their utility (45 min)
2. Hands-on visualization with the VisIt software on your computer/laptop (80 min)
3. Remote visualization with VisIt on the Comet/Gordon clusters at SDSC; we will provide training accounts (30 min)
4. A swift introduction to SeedMe.org; VisIt ships with a SeedMe Python module for sharing visualizations rapidly (20 min)

INSTRUCTOR: Amit Chourasia, San Diego Supercomputer Center, UCSD

Duration: Half Day


INTENDED AUDIENCE LEVEL: Beginner for visualization; the data sharing session will be useful for all attendees

The tutorial aims to jump-start attendees with visualization by providing a rapid background in the concepts and in how to use a tool. Attendees will gain an understanding of standard visualization techniques, application scenarios, and best practices. They will get a hands-on introduction to the VisIt visualization software and apply the standard techniques to sample data. By the end of the tutorial, attendees will be able to create sophisticated visualizations and to pursue more advanced concepts on their own. Attendees will also learn how to use HPC resources such as Comet/Gordon to conduct remote visualization. Finally, they will be introduced to the SeedMe platform and how to use it for data and visualization sharing.

Requirements:
1. Computer + mouse with a scroll wheel (laptop trackpads are very difficult to use for 3D navigation, so a mouse is recommended)
2. VisIt software (version 2.9.2; please install this specific version, not the latest)
3. Sample data: https://wci.llnl.gov/content/assets/docs/simulation/computer-codes/visit/visit_data_files.tar.gz
4. Account on SeedMe.org (you may create an account during the tutorial)


Session 1 (Lecture): Visualization Fundamentals
In this session we will provide a rapid introduction to fundamental visualization concepts. We will provide an assay of visualization techniques available accompanied by example application scenarios. We will also discuss best practices and shortcomings of visualization techniques. These fundamentals will help attendees to apply and innovate existing techniques for their own research.
• Introduction to Visualization
• Perception overview with eye color sensitivity
• Visualization Techniques
• Application Examples
• Best Practices

Session 2 (Hands on): Visualization with VisIt
This session will open with a quick overview of VisIt; the bulk of the session will be devoted to hands-on experience with the VisIt application. Attendees will create several visualizations on their laptops by following the instructor's guidance.
• VisIt Introduction
• VisIt basics (how VisIt works, one plot & 2 operators)
• VisIt plot survey
• Expressions
• Commands and Scripting
• Moviemaking

Session 3 (Hands on): Remote Interactive Visualization
This session will provide instructions on how to create a system host profile, connect to an XSEDE host such as Comet, and perform remote interactive visualization.
• Remote Visualization (network permitting)

Session 4 (Hands on): Data Sharing using SeedMe.org
This session will provide instructions on how to leverage the SeedMe infrastructure to share visualizations within and outside your research group.
• SeedMe overview
• Command line interaction with SeedMe.org
• SeedMe integration with VisIt


Monday July 18, 2016 8:00am - 12:00pm EDT

8:00am EDT

Tutorial: Secure Coding Practices & Automated Assessment Tools
This is a newly updated tutorial retaining key material from our previous tutorials at XSEDE (secure programming) with the addition of new material on the use of automated analysis tools. This tutorial would be a benefit not only to new attendees but also to attendees of the previous tutorials. Our tutorial focuses on the programming practices that can lead to security vulnerabilities, and on automated tools for finding security weaknesses.
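One classic weakness family covered by secure-coding tutorials of this kind is command injection. The Python sketch below contrasts an injectable pattern with a safer one; it is an illustration in the spirit of the tutorial, not material from it.

```python
import subprocess

# Vulnerable pattern: building a shell command by string concatenation.
# A crafted filename such as "x; rm -rf ~" would be interpreted by the
# shell as a second command.
def count_lines_unsafe(filename):
    return subprocess.run("wc -l " + filename, shell=True,
                          capture_output=True, text=True).stdout

# Safer pattern: pass an argument vector and avoid the shell entirely,
# so the filename is treated as data, never as command syntax.
def count_lines_safe(filename):
    return subprocess.run(["wc", "-l", filename],
                          capture_output=True, text=True).stdout
```

Automated assessment tools of the kind the tutorial surveys typically flag the string-concatenated `shell=True` call automatically.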

Monday July 18, 2016 8:00am - 12:00pm EDT

8:00am EDT

Tutorial: Using Comet’s Virtual Clusters
Comet is an XSEDE HPC resource hosted and operated at SDSC. This tutorial introduces Comet's virtual cluster capability, a unique feature that gives research groups, projects, and campuses the ability to fully define their own software environment with a set of dynamically allocated virtual machines, and includes hands-on material covering the different modes of usage anticipated for virtual clusters. We begin with detailed information on the Comet system architecture, the design and architecture of the virtual cluster capability, and how it compares to other virtualized and cloud services. The high performance of the virtualized clusters, which combine the full AVX2 feature set of the Haswell processors with InfiniBand HCAs using SR-IOV for MPI, will be covered. We then follow with information on how to build, configure, and manage virtual clusters using the Cloudmesh client, a tool for easily interfacing with multiple clouds from the command line and a command shell. The hands-on section of the tutorial is divided into three parts: installing and configuring a virtual cluster; running MPI applications within the virtual cluster; and simple automation to start and stop virtual machines dynamically. SDSC and IU staff will be available at the conclusion of the tutorial to meet with individual users and further discuss usage of Comet.

INTENDED AUDIENCE LEVEL: Intermediate. This tutorial is appropriate for people with some Linux system administration or management experience.

Monday July 18, 2016 8:00am - 12:00pm EDT

8:00am EDT

Tutorial: XSEDE New User Tutorial: Using Science Gateways
Overview: This tutorial builds upon the well-attended, well-reviewed XSEDE15 tutorial (http://sched.co/3YdG). Its purpose is to supplement the standard XSEDE new user tutorial with overviews of how to use science gateways, so that new users can start using XSEDE for scientific research right away, at the conference, and continue at their home institution, without getting bogged down in the allocation process at the beginning. The tutorial is also appropriate for XSEDE Campus and Domain Champions who are interested in using science gateways to support their outreach and support work on their local campuses. The target audience members are scientists in particular domains (chemistry, neuroscience, atmospheric science) who are new to XSEDE, who may be familiar with common software packages in their field, but who do not have deep experience with using supercomputers and clusters. Campus Champions who work closely with new users are also encouraged to attend. The tutorial will provide a brief overview of XSEDE and the science gateway program, including a list of other available gateways not covered by the tutorial. The bulk of the tutorial will be a sequence of hands-on activities that introduce attendees to domain-specific gateways. The tutorial organizers will work with the XSEDE conference organizers and the outreach team to recruit new user attendees from the selected domains.

Attendees will not need to be researchers in a specific science domain to participate in a specific gateway exercise. The organizers will provide all required input files and data for the exercises. Attendees will be encouraged to work with gateway providers on their specific research problems. Each hands on session will demonstrate how to create an account on the science gateway; how to create, submit, and monitor a submission of a specific application code; and how to retrieve final results. Each session will also include a feedback opportunity to help new users understand the additional capabilities of each gateway and optionally try out attendee-provided input data. The following are participating gateways.

CIPRES (http://www.phylo.org/sub_sections/portal/) is a public resource for inference of large phylogenetic trees. It is designed to provide all researchers with access to the large computational resources of XSEDE through a simple browser interface. The CIPRES Science Gateway provides access to a number of important parallel phylogenetics codes to ensure the fastest possible run times for submitted jobs. Users configure job runs by completing a web form; on submission, the user-entered information is used to configure the runs efficiently on XSEDE resources. CIPRES has supported 17,000+ users and 2,000+ publications in all areas of biology since 2009. Example tutorial material: http://www.phylo.org/tools/flash/cipresportal2_data_management.htm and http://www.phylo.org/tools/flash/cipresportal2_task_management.htm

SEAGrid (http://seagrid.org) enables researchers to run commonly used computational chemistry codes such as Gaussian, LAMMPS, Tinker, and other applications on XSEDE resources. GridChem has been used by about 600 users since 2005, supporting more than 100 research papers and more than 10 graduate theses. SEAGrid, a science gateway registered with XSEDE, provides a Java desktop client called GridChem; user registration, jobs, and related data are managed through a virtual organization and provided transparently through the client. Example tutorial information: https://scigap.atlassian.net/wiki/display/SEAGrid/SEAGrid+Documentation

The Neuroscience Gateway (NSG - http://www.nsgportal.org) facilitates access to and use of NSF's High Performance Computing (HPC) resources by neuroscientists. NSG offers computational neuroscientists free supercomputer time acquired through an NSG-managed allocation. Through a simple web-based portal, the NSG provides an administratively and technologically streamlined environment for uploading models, specifying HPC job parameters, querying running job status, receiving job completion notices, and storing and retrieving output data. Neuronal simulation tools provided by NSG include NEURON, GENESIS, Brian, PyNN, NEST, and FreeSurfer. More tools are added based on users' requests. Example tutorial information: http://www.nsgportal.org/tutorial.html

The Apache Airavata Test Drive Gateway is a general-purpose gateway that can be used to execute a wide range of codes, including computational chemistry, bioinformatics, computational fluid dynamics, and weather modeling applications. The Test Drive gateway can also be used to quickly make new XSEDE applications available through a web interface. This module of the tutorial will allow users to configure an advanced, non-hydrostatic numerical weather prediction model (WRF-ARW) through a graphical user interface, specifying physics and numerics options and selecting analyses for initial/boundary conditions. The generated model outputs will be displayed as plots generated by the NCAR Graphics library. Example tutorial information: a walk-through in which users perform a generic WRF tutorial while utilizing XSEDE resources (http://www2.mmm.ucar.edu/wrf/OnLineTutorial/)

Suresh Marru

Member, Indiana University
Suresh Marru is a Member of the Apache Software Foundation and the current PMC chair of the Apache Airavata project. He is the deputy director of the Science Gateways Research Center at Indiana University. Suresh focuses on research topics at the intersection of application domain...

Monday July 18, 2016 8:00am - 12:00pm EDT
Bayfront B

8:00am EDT

Tutorial: Building the Modern Research Data Portal Using the Globus Platform
New Globus REST APIs, combined with high-speed networks and Science DMZs, create a research data platform on which developers can create entirely new classes of scientific applications, portals, and gateways. Globus is an established service that is widely used for managing research data on XSEDE and campus computing resources, and it continues to evolve with the addition of data publication capabilities, and improvements to the core data transfer and sharing functions. Over the past year we have added new identity and access management functionality that will simplify access to Globus using campus logins, and facilitate the integration of Globus, XSEDE, and other research services into web and mobile applications.

In this tutorial, we use real-world examples to show how these new technologies can be applied to realize immediately useful capabilities. We explain how the Globus APIs provide intuitive access to authentication, authorization, sharing, transfer, and synchronization capabilities. Companion code (supplemented by iPython/Jupyter notebooks) will provide application skeletons that workshop participants can adapt to realize their own research data portals, science gateways, and other web applications that support research data workflows.
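To give a feel for the kind of building block such a portal composes, the sketch below assembles a JSON document describing a batch transfer in the general style of a REST transfer API. The field names and endpoint identifiers are illustrative assumptions, not the authoritative Globus schema; the tutorial's companion notebooks cover the real request format.

```python
import json

def build_transfer_document(source_endpoint, dest_endpoint, items):
    """Assemble a JSON-serializable document describing a batch file
    transfer between two endpoints. Field names here are illustrative
    placeholders, not the actual Globus Transfer API schema."""
    return {
        "source_endpoint": source_endpoint,
        "destination_endpoint": dest_endpoint,
        "items": [
            {"source_path": src, "destination_path": dst}
            for src, dst in items
        ],
    }

# Hypothetical endpoints and paths, for illustration only.
doc = build_transfer_document(
    "campus-dtn", "xsede-archive",
    [("/data/run01/output.h5", "/archive/run01/output.h5")],
)
print(json.dumps(doc, indent=2))
```

In a real portal this document would be submitted over HTTPS with an access token obtained through the platform's authentication flow, and the service would return a task ID the portal can poll.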

Monday July 18, 2016 8:00am - 5:00pm EDT
Merrick I

8:00am EDT

Tutorial: Introduction to Quantum Computing 2016
Introduction to Quantum Computing 2016, a full day tutorial, will review potential approaches to quantum computing and explore adiabatic quantum computing in particular.

Quantum computing has progressed from ideas and research to implementation and product development. The original gate or circuit model of quantum computing has now been supplemented with several novel architectures that promise a quicker path to implementation and scaling. Additionally, there are multiple physical devices capable of providing controllable evolution of a quantum wavefunction which could form the basis for a quantum computer.

Real quantum computers implement a specific architecture using a particular choice of underlying devices. The combination of architecture and device gives rise to the programming model and operating characteristics of the system. The programming model in turn determines what kinds of algorithms the system can execute.

This introductory course includes a hands-on laboratory in which students will formulate and execute programs on a live quantum computer in Canada.
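Programs for an adiabatic machine of this kind are typically posed as QUBO (quadratic unconstrained binary optimization) problems. The sketch below, a classical brute-force solver for a toy two-variable QUBO, illustrates the shape of such a formulation; the real annealer samples low-energy states rather than enumerating them, and the matrix here is an invented example.

```python
from itertools import product

def qubo_energy(x, Q):
    """Energy of binary vector x under QUBO matrix Q:
    E(x) = sum_ij Q[i][j] * x_i * x_j. Since x_i is 0 or 1,
    the diagonal entries act as linear biases on each bit."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_minimum(Q):
    """Enumerate all 2^n binary assignments; feasible only for tiny
    problems. An adiabatic annealer instead evolves a quantum state
    toward low-energy configurations of the same objective."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(x, Q))

# Toy objective: reward turning each bit on (-1 on the diagonal), but
# penalize turning both on (+2 coupler), so the optimum sets exactly one.
Q = [[-1.0, 2.0],
     [0.0, -1.0]]
best = brute_force_minimum(Q)
print(best, qubo_energy(best, Q))
```

Formulating a problem as such a Q matrix, then mapping it onto the machine's qubit connectivity, is exactly the skill the hands-on laboratory practices.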

The attendees should:
- gain a high level understanding of the different types of potential quantum computers
- become familiar with adiabatic quantum computing architecture and programming model
- be able to run quantum algorithms live on a D-Wave quantum computer.

The class will be in a lecture/lab format; labs will have step-by-step instructions, with support staff to guide attendees and answer questions as they develop their own quantum algorithms. Attendees will be exposed to the types of problems and applications suitable for today's quantum technology. It is our hope that attendees will leave this tutorial inspired to create new algorithms and applications, and to explore the adiabatic or other paradigms of quantum computing. A class website will be created allowing attendees to access the presentation materials.

It is recommended that attendees are working toward or hold a degree in Computer Science, Math, or Physics, or have a degree of mathematical maturity and/or familiarity with algorithms and data structures.

Logistics & Requirements:
- Length of the tutorial: full day (7 hours)
- Content split: beginner 30%, intermediate 50%, advanced 20%
- Requirements for attendees:
  - Class size: 20-30 participants
  - Participants will need to provide their own laptop and be able to connect via RDP (the Microsoft Remote Desktop client is recommended for Windows and OS X devices) to individual AWS-hosted VM instances running Windows 2012 R2 Server, which contain our software. Participants with non-Windows desktops should install one of the following clients:
    - Mac OS X/iOS: https://itunes.apple.com/ca/app/microsoft-remote-desktop/id715768417?mt=12 (may require upgrading the OS to the latest patch level)
    - Android: the same client, available on the Google Play store at https://play.google.com/store/apps/details?id=com.microsoft.rdc.android&hl=en (may require OS upgrades to the latest patch level)
    - Linux: rdesktop, found at http://www.rdesktop.org/ (may require installing dependent packages; distributed as a tar.gz)

Monday July 18, 2016 8:00am - 5:00pm EDT
Sandringham InterContinental Miami

8:00am EDT

Tutorial: Programming Intel's 2nd Generation Xeon Phi (Knights Landing)
Intel's next generation Xeon Phi, Knights Landing (KNL), brings many changes from the first generation, Knights Corner (KNC). The new processor supports self-hosted nodes, connects cores via a mesh topology rather than a ring, and uses a new memory technology, MCDRAM. It is based on Intel's x86 technology with wide vector units and hardware threads. Many of the lessons learned from using KNC still apply, such as efficient multi-threading, optimized vectorization, and attention to strided memory access.
This tutorial is designed for experienced programmers familiar with MPI and OpenMP. We’ll review the KNL architecture, and discuss the differences between KNC and KNL. We'll discuss the impact of the different MCDRAM memory configurations and the different modes of cluster configuration. Recommendations regarding MPI task layout when using KNL with the Intel OmniPath fabric will be provided.
As in past tutorials, we will focus on the use of reports and directives to improve vectorization and the implementation of proper memory access and alignment. We will also showcase new Intel VTune Amplifier XE capabilities that allow for in-depth memory access analysis and hybrid code profiling.

Monday July 18, 2016 8:00am - 5:00pm EDT
Merrick II

8:00am EDT

Tutorial: Using the novel features of Bridges and optimizing for its Intel processors
Pittsburgh Supercomputing Center (PSC)'s Bridges is a new XSEDE resource that enables the convergence of HPC and Big Data within a highly flexible environment. This hands-on tutorial will demonstrate how to best leverage the unique features and capabilities of Bridges: extremely large shared memory (up to 12 TB per node); virtual machines; GPU computing; Spark and Hadoop environments; interactivity; in-memory databases; rich data collections; graph analytics capabilities; and a powerful programming environment. The tutorial will also explain how to get a successful allocation on Bridges.

Monday July 18, 2016 8:00am - 5:00pm EDT

8:00am EDT

Tutorial: The many faces of data management, interaction, and analysis using Wrangler.
Link to slides
The goal of this tutorial is to provide guidance to participants on large-scale data services and analysis support with the newest XSEDE data research system, Wrangler. Because Wrangler is largely a first-of-its-kind XSEDE resource, both user and XSEDE staff training is needed to enable the novel research opportunities it presents. The tutorial consists of two major components. The morning sessions focus on helping users become familiar with the unique architecture and characteristics of the Wrangler system and with the set of data services Wrangler supports, including large-scale file-based data management, database services, and data sharing services. The morning presentation includes an introduction to the Wrangler system and its user environment, the use of reservations for computing, data systems for structured and unstructured data, and data access layers using both Wrangler's replicated long-term storage system and its high-speed flash storage system. We will also introduce the Wrangler graphical interfaces, including the Wrangler portal, web-based tools served by Wrangler such as Jupyter notebooks and RStudio, and the iDrop web interface for iRODS. The afternoon session will focus on data-driven analysis support on Wrangler. The presentations center on the use of the dynamically provisioned Hadoop ecosystem on Wrangler and include an introduction to the core Hadoop cluster for big data analysis, using existing analysis routines through Hadoop Streaming, interactive analysis with Spark, and using Hadoop/Spark through the Python and R interfaces that are often more familiar to researchers.
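To make the Hadoop Streaming portion concrete: Streaming runs any executable that reads lines on stdin and writes key/value lines on stdout, with Hadoop handling the shuffle/sort between phases. The sketch below is a minimal word-count mapper/reducer pair in that style, with the shuffle simulated in-process; it is an illustration, not the tutorial's own material.

```python
from itertools import groupby

def mapper(lines):
    """Map phase: emit one 'word<TAB>1' record per word. Under Hadoop
    Streaming these records go to stdout and are shuffled/sorted by key."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(sorted_lines):
    """Reduce phase: input arrives sorted by key, so consecutive records
    for the same word can be summed with groupby."""
    pairs = (line.rstrip("\n").split("\t") for line in sorted_lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

# Simulate the shuffle/sort that Hadoop performs between the two phases.
text = ["big data big compute", "big data"]
shuffled = sorted(mapper(text))
print(list(reducer(shuffled)))
```

The same two functions, wrapped in small stdin-reading scripts, could be handed to Hadoop Streaming unchanged, which is why the model is a gentle on-ramp for existing analysis routines.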

Monday July 18, 2016 8:00am - 5:00pm EDT

1:00pm EDT

Tutorial: Hands on with Jetstream
Jetstream is a first-of-its-kind production cloud resource intended to support science and engineering research. As part of XSEDE, its goal is to aid researchers across the United States who need modest amounts of interactive computing power. Part of the goal in implementing Jetstream is to increase the disciplinary diversity of the XD ecosystem as well as to reach non-traditional researchers who may have HPC needs but have not had adequate access or have faced other barriers to HPC usage. In our session, we will spend the first thirty minutes discussing the architecture and use of Jetstream, followed by a short question-and-answer session, and then spend the remainder of the time showing Jetstream in use as well as allowing attendees to get on and try the system.

Intended Audience Level: All (especially for researchers, Champions, and any other interested users)
Prerequisites: Laptops, XSEDE Portal accounts, ssh keys

Monday July 18, 2016 1:00pm - 3:00pm EDT
Windsor InterContinental Miami

1:00pm EDT

Tutorial: Building Parallel Scientific Applications Using GenASiS
In this half-day tutorial, intermediate to advanced users will be exposed to GenASiS Basics, a collection of Fortran 2003 classes furnishing extensible object-oriented implementations of some rudimentary functionality common to many large-scale scientific applications. Use of these classes allows application scientists to focus more of their time and effort on numerical algorithms, problem setup and testing, shepherding production runs to completion, and data analysis and interpretation. By running fluid dynamics and molecular dynamics examples---fundamentally different models, requiring the solution of different equations, using different techniques and different parallelization strategies---participants will gain a top-down perspective and appreciation for how GenASiS Basics facilitates development of a variety of scientific applications. By running selected unit test programs that exercise and exemplify the usage of selected GenASiS Basics classes, participants will get a bottom-up overview of the range of functionality provided by GenASiS Basics.

Monday July 18, 2016 1:00pm - 5:00pm EDT

1:00pm EDT

Tutorial: Deploying and Using JupyterHub in an HPC environment
JupyterHub, when properly configured and integrated with third-party code, can support Jupyter Notebooks and parallel IPython clusters dispatched directly and automatically in HPC compute-cluster environments. This greatly simplifies access to HPC resources while providing a common interface that can be deployed in any environment.

In this tutorial we will

- Introduce the concept of the Jupyter notebook
- Teach visualization and data analytics using the Jupyter notebook
- Demonstrate parallelization of Python code using ipyparallel and mpi4py
- Demonstrate using Spark from the CU-Boulder JupyterHub implementation
- Explain the implementation details of our JupyterHub HPC environment
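As a flavor of the parallelization topic above: ipyparallel dispatches a function over a set of remote IPython engines much as the standard library's executors dispatch it over local workers. A stand-in sketch that runs anywhere (this is not the ipyparallel API itself):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def parallel_map(fn, values, workers=4):
    # ipyparallel's view.map plays the same role, with IPython engines
    # on the cluster standing in for these local workers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, values))

print(parallel_map(square, range(5)))  # [0, 1, 4, 9, 16]
```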

A few example tutorials, previously offered by Research Computing at the University of Colorado Boulder, that will provide the basis for this proposal are:
- Python_notebook
- Python_DataAnalysis
- Intro_Spark

We have also successfully run tutorials at XSEDE in 2014 and 2015.

INTENDED AUDIENCE LEVEL: Beginner-Intermediate

Monday July 18, 2016 1:00pm - 5:00pm EDT

1:00pm EDT

Tutorial: Introduction to Brown Dog: An Elastic Data Cyberinfrastructure for Autocuration and Digital Preservation
In modern-day “Big Data” science, the diversity of data (unstructured, uncurated, and of different formats) and software poses major challenges for scientific research, especially for the reproducibility of results. The NSF DIBBs Brown Dog Project[1] aims to build cyberinfrastructure to aid autocuration, indexing, and search of unstructured and uncurated digital data. It focuses on an initial set of science use cases (green infrastructure, critical zone studies, ecology, social science) to guide the overarching design, with user-accessible extensibility as an important driving requirement for the project's development. Brown Dog is composed of two highly extensible services, the Data Access Proxy (DAP) and the Data Tilling Service (DTS). These services aim to leverage and reuse any existing pieces of code, libraries, services, or standalone software (past or present), accessible through an easy-to-use and programmable interface. The DAP focuses on file format conversions; the DTS performs content-based analysis and extraction on files. These services wrap relevant conversion and extraction operations within arbitrary software for reuse, manage their deployment in an elastic manner, and manage job execution from behind a deliberately compact REST API. Underpinning these core services are the foundational tools, which do the actual work of conversion or extraction. These tools are integrated into the Brown Dog services via the Brown Dog Tools Catalogue.

This tutorial aims to give the attendee the knowledge to be able to add a tool to the Brown Dog Tools Catalogue, and to be able to integrate Brown Dog capabilities into an application via the API.

Specifically, the two components to the tutorial will cover:

* Adding a conversion or extraction tool to the Tools Catalogue - the user creates a Brown Dog wrapper script around a 3rd-party tool, exposing some data transformation functionality within that tool. We will walk attendees through the process of creating the wrapper and adding the tool to the Tools Catalogue.

* Utilizing Brown Dog transformation services, through the Brown Dog API, from a new or pre-existing application.

We will provide attendees with stub code for integration of a tool into the Tools Catalogue, as well as a stub client application that will be used as an example of how to program against Brown Dog.
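To make the shape of such a wrapper concrete, here is a toy extractor-style stub in Python; the function name and JSON layout are hypothetical illustrations, not the actual Brown Dog interface or the stub code provided in the tutorial:

```python
import json

def extract_metadata(path, text):
    """Toy content-based extraction: line and word counts for a text file.
    A real wrapper would invoke the wrapped 3rd-party tool here."""
    return json.dumps({
        "file": path,
        "metadata": {
            "lines": len(text.splitlines()),
            "words": len(text.split()),
        },
    })
```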


Monday July 18, 2016 1:00pm - 5:00pm EDT

1:00pm EDT

Tutorial: Introduction to Python
This tutorial covers the Python programming language including all the information needed to participate in the XSEDE16 Modeling Day event on Tuesday. Topics covered will be variables, types, operators, input/output, control structures, lists, functions, math libraries, and plotting libraries. To participate fully in the hands-on exercises, attendees should come with the Anaconda Python 2.7 package downloaded and installed on their computer. You can get Anaconda Python at https://www.continuum.io/downloads. The tutorial is intended for Python beginners.
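A taste of what those topics look like together (variables, a loop, a conditional, a list, a function, and the math library); this is illustrative, not the actual course material:

```python
import math

def circle_areas(radii):
    """Return the areas of circles with positive radii."""
    areas = []                        # a list to collect results
    for r in radii:                   # a for loop over a list
        if r > 0:                     # a conditional
            areas.append(math.pi * r ** 2)
    return areas

print(circle_areas([1.0, 2.0, -1.0]))
```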

Monday July 18, 2016 1:00pm - 5:00pm EDT

1:00pm EDT

Tutorial: Introduction to Scientific Workflow Technologies on XSEDE
This is a proposal for a joint tutorial between the ECSS workflows team, and the teams from the following workflow technologies: Swift, Makeflow/Work Queue, RADICAL-Pilot, and Pegasus. The goal is that attendees will leave the tutorial with an understanding of the workflow related services and tools available, that they will understand how to use them on XSEDE through hands-on exercises, and that they will be able to apply this knowledge to their own workloads when using XSEDE and other computing resources. The tutorial will be based on the successful XSEDE15 tutorial (http://sched.co/3YdC). All tutorial material is available from https://sites.google.com/site/xsedeworkflows/. The tutorial format will be primarily hands on and interactive.

One major obstacle when running workflows on XSEDE is where to run the workflow engine. Larger projects and groups might have their own submit hosts, but users commonly struggle to find a home for their workflow runs. For this reason, one effort the ECSS workflows team has set up, based on feedback from the XSEDE14 workflow birds-of-a-feather session, is an IU Quarry-hosted submit host. The host is a clone of the login.xsede.org single sign-on host; thus, just like login.xsede.org, any XSEDE user with an active allocation automatically has access. For the tutorial, we will also provide Jetstream-packaged VMs of the workflow software, as appropriate. Alongside the host, we are assembling content for a website highlighting tested workflow systems with XSEDE-specific examples that users can use to try out the different tools. These examples will form the basis of the hands-on exercises in the proposed tutorial.

Swift is a simple language for writing parallel scripts that run many copies of ordinary programs concurrently as soon as their inputs are available, reducing the need for complex parallel programming. The same script runs on multi-core computers, clusters, clouds, grids, and supercomputers, and is thus a useful tool for moving computations from a laptop or workstation to any XSEDE resource. Swift can run a million programs, thousands at a time, launching hundreds per second. This hands-on tutorial will give participants a taste of running simple parallel scripts on XSEDE systems and provide pointers for applying it to their own scientific work.

Makeflow is a workflow engine for executing large complex workflows, scaling to thousands of tasks and hundreds of gigabytes of data. In this section of the tutorial, users will learn the basics of writing a Makeflow, which is based on the traditional Make construct. In the hands-on example, users will learn to write Makeflow rules and to run a Makeflow locally as well as run its tasks on XSEDE resources. Users will be introduced to Work Queue, a scalable master/worker framework, and will create workers on XSEDE resources and connect them to the Makeflow. Users will learn to use Work Queue to monitor workflows, along with the basics of debugging Makeflows.
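Makeflow rules take the traditional Make shape the paragraph describes: a target, its dependencies, and the command that produces it. An illustrative fragment (file and program names are placeholders):

```
# result.dat is rebuilt whenever the input or the program changes
result.dat: input.dat simulate.py
    python simulate.py input.dat > result.dat
```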

The Pegasus Workflow Management System sits on top of HTCondor DAGMan. In this section of the tutorial, users will learn how to create abstract workflows and how to plan, execute, and monitor the resulting executable workflow. The first workflow will run locally on the submit host, while the two other hands-on examples will run workflows on XSEDE resources. One workflow will run jobs across resources and highlight the workflow system's data management capabilities in such setups. The other will cover using the pegasus-mpi-cluster tool to execute a high-throughput workload in an efficient and well-behaved manner on one of the XSEDE high-performance computing resources.

RADICAL-Pilot allows a user to run large numbers of tasks concurrently on a multitude of distributed computing resources. A task can be a large parallel simulation or a single-core analysis routine. RADICAL-Pilot is a “programmable” Pilot-Job system developed in Python. After discussing the concept of “Pilot Jobs”, we introduce how to use RADICAL-Pilot to support task-level parallelism. We then demonstrate how to write simple Python applications that use RADICAL-Pilot to execute coupled tasks on distributed computing resources. Additionally, the user can specify input and output data for the tasks, which the system handles transparently.

Attendee prerequisites: The participants will be expected to bring in their own laptops with the following software installed: SSH client, Web Browser, PDF reader. We assume basic familiarity with working in a Linux environment.

Special Needs: Even though the submit host enables users with existing allocations to use their own accounts, we would like to have access to a set of XSEDE training accounts for users who currently do not have active allocations.

Monday July 18, 2016 1:00pm - 5:00pm EDT

1:00pm EDT

Tutorial: Parallel I/O - for Reading and Writing Large Files in Parallel
Link for Presentation
This half-day tutorial will provide an overview of the practices and strategies for the efficient utilization of parallel file systems through parallel I/O. The target audience of this tutorial is analysts and application developers who do not have prior experience with MPI I/O, HDF5, and T3PIO. However, they should be familiar with C/C++/Fortran programming and basic MPI. A brief overview of the related basic concepts will be included in the tutorial where needed. All the concepts related to the tutorial will be explained with examples, and there will be a hands-on session. By the end of the tutorial, the audience will have learned how to implement parallel I/O (through MPI I/O and the high-level libraries discussed in this tutorial) and will be motivated to apply the knowledge gained to improve the I/O performance of their applications.

Monday July 18, 2016 1:00pm - 5:00pm EDT

1:00pm EDT

Tutorial: Scientific Computing without the Command-line: An Intro to the Agave API
High performance computing has become a critical resource in an increasing number of scientific disciplines. For many of these “nontraditional” computing communities, the learning curve to become proficient at the command-line is a burden, but it also constrains the computational side of that scientific domain to a subset of specialists. Conversely, bringing computationally demanding apps upon which a scientific community relies to a more accessible interface can provide a platform for broader adoption, collaboration, and ultimately accelerated discovery.
This workshop demonstrates how to use the Agave API to run complete scientific workflows using XSEDE high performance computing resources without touching the command-line. Through hands-on sessions, participants will look at managing and sharing data across XSEDE resources, and we will tackle how to orchestrate job execution across systems and capture metadata on the results (and the process) so that parameters and methodologies are not lost. We will guide users on how to leverage software modules on the system as well as custom code installed in user space. Finally, we will discuss examples of how both individual labs and large projects are using XSEDE and Agave to provide web interfaces for everything from crop modeling to molecular simulations. The Agave API and all the tools used are open-source software.
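Driving jobs "without the command-line" comes down to authenticated HTTPS calls. A sketch using only the Python standard library (the base URL and token are placeholders; the `/jobs/v2` path follows Agave's documented style, but consult the current API docs):

```python
import json
import urllib.request

def build_job_submission(base_url, token, job_description):
    """Build (but do not send) an authenticated POST to the jobs endpoint."""
    return urllib.request.Request(
        base_url + "/jobs/v2",
        data=json.dumps(job_description).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + token,  # OAuth2 bearer token
            "Content-Type": "application/json",
        },
        method="POST",
    )

# urllib.request.urlopen(build_job_submission(...)) would submit the job
```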

Monday July 18, 2016 1:00pm - 5:00pm EDT

1:00pm EDT

Tutorial: SciGaP Tutorial: Developing Science Gateways using Apache Airavata
Science gateways, or Web portals, are an important mechanism for broadening and simplifying access to computational grids, clouds, and campus resources. Gateways provide science-specific user interfaces to end users who are unfamiliar with command-line interfaces or who need more capabilities than they provide. In this tutorial, we present SciGaP, which includes participation from the CIPRES, UltraScan, Neuroscience, and SEAGrid Gateways combined with the Apache Airavata middleware for managing jobs and data. Our goal is to show participants how to build and run gateways using both software and collected experience from some of the most heavily used XSEDE science gateways. This tutorial will build on the well attended and well reviewed XSEDE14 and XSEDE15 tutorials. Extensive tutorial material is available from https://s.apache.org/scigap-xsede14 and https://s.apache.org/scigap-xsede15.

Suresh Marru

Member, Indiana University
Suresh Marru is a Member of the Apache Software Foundation and is the current PMC chair of the Apache Airavata project. He is the deputy director of the Science Gateways Research Center at Indiana University. Suresh focuses on research topics at the intersection of application domain...

Monday July 18, 2016 1:00pm - 5:00pm EDT

1:00pm EDT

Tutorial: Using the XDMoD Job Viewer to Improve Job Performance
XDMoD has recently been enhanced with a Job Viewer feature that provides users, HPC support specialists, and others with access to detailed job-level information. This information includes: accounting data, details about the application being run, summary and detailed job performance metrics, and time series plots describing CPU usage, memory usage, memory bandwidth, network interconnect, parallel file system activity, and flops for individual jobs. Virtually all XSEDE-affiliated resources have been instrumented with TACC_stats, and as a result are now supplying job-level performance data directly to XDMoD. Users and support personnel can use this information to determine the efficiency of a given job, and to guide the improvement of job efficiency for subsequent jobs. This tutorial will instruct the user in how to use the XDMoD Job Viewer, describe the type of job-level information that it includes, and guide participants in the use of the Job Viewer as a user support and analysis tool. This tutorial will be beneficial to all levels of users, from beginners looking to understand how to launch parallel HPC jobs properly to advanced users trying to optimize job performance.

Tentative Outline for XSEDE16 Job Viewer Tutorial:

1. Overview of XDMoD (Thomas Furlani)
a. Focus on impact to improve code efficiency/Performance
b. Examples
c. Introduce Job Viewer as a tool

2. Brief Demo of XDMoD (Matthew Jones or Robert DeLeon)
a. Overview
b. Application Kernels
c. Job level monitoring (SUPReMM)
d. Job Viewer Introduction

3. Detailed Demo of XDMoD Job Viewer (Joseph White or Matthew Jones)
a. How to use the Job Viewer
b. Information provided by the Job Viewer
c. How to use the Job Viewer as a user support tool

4. Hands-on use of XDMoD and the Job Viewer (CCR personnel)
a. Pick a few example cases to show in greater detail
b. Emphasis on Job Viewer workflow
c. Secondary emphasis on using XDMoD for efficiency and quality of service

5. Summary presentation—Take home lessons (All)

Monday July 18, 2016 1:00pm - 5:00pm EDT
Chopin Ballroom

1:00pm EDT

Tutorial: XSEDE New User Tutorial
This tutorial will provide training and hands-on activities to help new users learn and become comfortable with the basic steps necessary to first obtain, and then successfully employ, an XSEDE allocation to accomplish their research or educational goals. The tutorial will consist of three sections: The first part of the tutorial will explain the XSEDE allocations process and how to write and submit successful allocation proposals. The instructor will describe the contents of an outstanding proposal and the process for generating each part. Topics covered will include the scientific justification, the justification of the request for resources, techniques for producing meaningful performance and scaling benchmarks, and navigating the XRAS system through the XSEDE Portal for electronic submission of proposals. The second section, "Information Security Training for XSEDE Researchers," will review basic information security principles for XSEDE users, including: how to protect yourself from on-line threats and risks, how to secure your desktop/laptop, safe practices for social networking, email, and instant messaging, how to choose a secure password, and what to do if your account or machine has been compromised. The third part of the tutorial will cover the New User Training material that has been delivered remotely each quarter, but will delve deeper into these topics. New topics will be covered, including how to troubleshoot a job that has not run, and how to improve job turnaround by understanding differences in batch job schedulers on different platforms. We anticipate significant interest from Campus Champions, and therefore we will explain how attendees can assist others, as well as briefly describe projects currently being carried out in non-traditional HPC disciplines.
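Scheduler differences of the kind the third section covers show up mostly in the batch script header. A minimal Slurm-style sketch (directive names, module names, and launch commands vary by site; PBS/Torque sites use `#PBS` lines instead):

```bash
#!/bin/bash
#SBATCH --job-name=demo      # name shown in the queue
#SBATCH --nodes=1            # node count
#SBATCH --ntasks=16          # MPI ranks
#SBATCH --time=00:30:00      # wall-clock limit; overly long limits slow scheduling

module load mpi              # site-specific module name
mpirun ./my_app input.dat    # launcher also varies (mpirun, srun, ibrun, ...)
```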

Monday July 18, 2016 1:00pm - 5:00pm EDT
Bayfront B

3:00pm EDT

Tutorial: Teach GPU Accelerated Computing - Hands-on with NVIDIA Teaching Kit for University Educators
As performance and functionality requirements of interdisciplinary computing applications rise, industry demand for new graduates familiar with accelerated computing with GPUs grows. In the future, many mass-market applications will be what are considered "supercomputing applications" by today's standards. This hands-on tutorial introduces a comprehensive set of academic labs and university teaching material for use in introductory and advanced parallel programming courses. The teaching materials start with the basics and focus on programming GPUs, and include advanced topics such as optimization, advanced architectural enhancements, and integration of a variety of programming languages.

PRE-REQUISITES: Laptop to participate in hands-on aspect

Monday July 18, 2016 3:00pm - 5:00pm EDT
Windsor InterContinental Miami