Tutorials
Monday, July 18
 

8:00am EDT

Tutorial: Building the Modern Research Data Portal Using the Globus Platform
New Globus REST APIs, combined with high-speed networks and Science DMZs, create a research data platform on which developers can create entirely new classes of scientific applications, portals, and gateways. Globus is an established service that is widely used for managing research data on XSEDE and campus computing resources, and it continues to evolve with the addition of data publication capabilities, and improvements to the core data transfer and sharing functions. Over the past year we have added new identity and access management functionality that will simplify access to Globus using campus logins, and facilitate the integration of Globus, XSEDE, and other research services into web and mobile applications.

In this tutorial, we use real-world examples to show how these new technologies can be applied to realize immediately useful capabilities. We explain how the Globus APIs provide intuitive access to authentication, authorization, sharing, transfer, and synchronization capabilities. Companion code (supplemented by iPython/Jupyter notebooks) will provide application skeletons that workshop participants can adapt to realize their own research data portals, science gateways, and other web applications that support research data workflows.
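
For a flavor of what the companion notebooks cover, the following is a minimal sketch, assuming the Globus Python SDK (globus_sdk), of authenticating with Globus Auth and submitting a synchronized transfer; the client ID, endpoint UUIDs, and paths are placeholders rather than values from the tutorial.

```python
# Minimal sketch using the Globus Python SDK (globus_sdk); the client ID,
# endpoint UUIDs, and paths below are placeholders to be replaced with your own.
import globus_sdk

CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"
SRC_ENDPOINT = "SOURCE-ENDPOINT-UUID"
DST_ENDPOINT = "DESTINATION-ENDPOINT-UUID"

# 1. Authenticate against Globus Auth using the native-app OAuth2 flow.
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow()
print("Log in at:", auth_client.oauth2_get_authorize_url())
tokens = auth_client.oauth2_exchange_code_for_tokens(input("Authorization code: "))
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

# 2. Build a TransferClient from the resulting access token.
tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token))

# 3. Describe and submit a transfer; sync_level="checksum" gives synchronization.
tdata = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT,
                                label="portal demo", sync_level="checksum")
tdata.add_item("/~/source_data/", "/~/portal_copy/", recursive=True)
task = tc.submit_transfer(tdata)
print("Submitted transfer, task_id =", task["task_id"])
```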


Monday July 18, 2016 8:00am - 5:00pm EDT
Merrick I

8:00am EDT

Tutorial: Introduction to Quantum Computing 2016
Introduction to Quantum Computing 2016, a full-day tutorial, will review potential approaches to quantum computing and explore adiabatic quantum computing in particular.

Quantum computing has progressed from ideas and research to implementation and product development. The original gate or circuit model of quantum computing has now been supplemented with several novel architectures that promise a quicker path to implementation and scaling. Additionally, there are multiple physical devices capable of providing controllable evolution of a quantum wavefunction which could form the basis for a quantum computer.

Real quantum computers implement a specific architecture using a particular choice of underlying devices. The combination of architecture and device gives rise to the programming model and operating characteristics of the system. The programming model in turn determines what kinds of algorithms the system can execute.

This introductory course includes a hands-on laboratory in which students will formulate and execute programs on a live quantum computer in Canada.

The attendees should:
- gain a high-level understanding of the different types of potential quantum computers
- become familiar with the adiabatic quantum computing architecture and programming model
- be able to run quantum algorithms live on a D-Wave quantum computer

The class will use a lecture/lab format; labs will have step-by-step instructions, with support staff on hand to guide attendees and answer questions as they develop their own quantum algorithms. Attendees will be exposed to the types of problems and applications suitable for today's quantum technology. It is our hope that attendees will leave this tutorial inspired to create new algorithms and applications, and to explore the adiabatic or other paradigms of quantum computing. A class website will be created allowing attendees to access the presentation materials.
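
To illustrate the adiabatic programming model, here is a minimal, vendor-agnostic sketch (it does not use D-Wave's actual API): problems are encoded as a QUBO, a quadratic cost function over binary variables, which an annealer then minimizes. A tiny max-cut instance is formulated as a QUBO and checked by brute force.

```python
# Vendor-agnostic illustration of QUBO formulation; not D-Wave's actual API.
# A quantum annealer minimizes E(x) = sum_{i<=j} Q[i][j] * x[i] * x[j]
# over binary variables x; here we build Q for max-cut on a triangle graph.
from itertools import product

edges = [(0, 1), (1, 2), (0, 2)]
n = 3
Q = {}
for i, j in edges:
    # Each cut edge contributes x_i + x_j - 2*x_i*x_j; negate to minimize.
    Q[(i, i)] = Q.get((i, i), 0) - 1
    Q[(j, j)] = Q.get((j, j), 0) - 1
    Q[(i, j)] = Q.get((i, j), 0) + 2

def energy(x):
    """QUBO energy of a binary assignment x."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Brute-force the ground state; an annealer searches this landscape physically.
best = min(product((0, 1), repeat=n), key=energy)
print("lowest-energy assignment:", best, "energy:", energy(best))
```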

It is recommended that attendees are working toward or have a degree in Computer Science, Math, or Physics, or otherwise have a degree of mathematical maturity and/or familiarity with algorithms and data structures.

Logistics & Requirements:
- Length of the tutorial: full-day (7 hours)
- Percentage of content split: beginner 30%, intermediate 50%, advanced 20%
- Requirements for attendees:
  - Class size: 20-30 participants
  - Participants will need to provide their own laptop and be able to connect via RDP (the Microsoft Remote Desktop client is recommended for Windows and OS X devices) to individual AWS-hosted VM instances running Windows Server 2012 R2, which contain our software. Participants with non-Windows desktops should install one of the following clients:
    - Mac OS X/iOS: https://itunes.apple.com/ca/app/microsoft-remote-desktop/id715768417?mt=12 (may require upgrading the OS to the latest patch level)
    - Android: the same client, available on the Google Play store at https://play.google.com/store/apps/details?id=com.microsoft.rdc.android&hl=en (may require OS upgrades to the latest patch levels)
    - Linux: rdesktop, found at http://www.rdesktop.org/ (may require dependent packages to be installed; distributed as a tar.gz archive)


Monday July 18, 2016 8:00am - 5:00pm EDT
Sandringham InterContinental Miami

8:00am EDT

Tutorial: Programming Intel's 2nd Generation Xeon Phi (Knights Landing)
Intel's next-generation Xeon Phi, Knights Landing (KNL), brings many changes from the first generation, Knights Corner (KNC). The new processor supports self-hosted nodes, connects cores via a mesh topology rather than a ring, and uses a new memory technology, MCDRAM. It is based on Intel's x86 technology with wide vector units and hardware threads. Many of the lessons learned from using KNC still apply, such as efficient multi-threading, optimized vectorization, and attention to strided memory access.
This tutorial is designed for experienced programmers familiar with MPI and OpenMP. We'll review the KNL architecture and discuss the differences between KNC and KNL. We'll discuss the impact of the different MCDRAM memory configurations and the different modes of cluster configuration. We will also provide recommendations regarding MPI task layout when using KNL with the Intel Omni-Path fabric.
As in past tutorials, we will focus on the use of reports and directives to improve vectorization and the implementation of proper memory access and alignment. We will also showcase new Intel VTune Amplifier XE capabilities that allow for in-depth memory access analysis and hybrid code profiling.
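
As a hardware-agnostic illustration of the strided-access point above (the tutorial itself works with Intel compilers and VTune rather than Python), the following NumPy sketch contrasts summing the same number of elements with unit stride versus a stride of eight, which streams roughly eight times as many cache lines.

```python
# Illustration only: unit-stride vs. strided reads over the same element count.
import time
import numpy as np

a = np.random.rand(64000000)              # ~512 MB of doubles
contiguous = a[:len(a) // 8].copy()       # 8M elements, unit stride
strided = a[::8]                          # 8M elements, stride of 8 doubles

for label, data in (("contiguous", contiguous), ("strided", strided)):
    start = time.perf_counter()
    for _ in range(10):
        data.sum()
    print(label, "time:", round(time.perf_counter() - start, 3), "s")
```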


Monday July 18, 2016 8:00am - 5:00pm EDT
Merrick II

8:00am EDT

Tutorial: Using the novel features of Bridges and optimizing for its Intel processors
Pittsburgh Supercomputing Center (PSC)'s Bridges is a new XSEDE resource that will enable the convergence of HPC and Big Data within a highly flexible environment. This hands-on tutorial will demonstrate how to best leverage the unique features and capabilities of Bridges: extremely large shared memory (up to 12 TB per node); virtual machines; GPU computing; Spark and Hadoop environments; interactivity; in-memory databases; rich data collections; graph analytics capabilities; and a powerful programming environment. The tutorial will also explain how to obtain a successful allocation on Bridges.


Monday July 18, 2016 8:00am - 5:00pm EDT
Alhambra

8:00am EDT

Tutorial: The many faces of data management, interaction, and analysis using Wrangler
Link to slides
The goal of this tutorial is to provide guidance to participants on large-scale data services and analysis support with the newest XSEDE data research system, Wrangler. Because Wrangler is largely a first-of-its-kind XSEDE resource, both user and XSEDE staff training is needed to enable the novel research opportunities it presents. The tutorial consists of two major components.

The morning sessions focus on helping users become familiar with the unique architecture and characteristics of the Wrangler system and the set of data services it supports, including large-scale file-based data management, database services, and data sharing services. The morning presentations include an introduction to the Wrangler system and its user environment, use of reservations for computing, data systems for structured and unstructured data, and data access layers using both Wrangler's replicated long-term storage system and its high-speed flash storage system. We will also introduce the Wrangler graphical interfaces, including the Wrangler Portal, web-based tools served by Wrangler such as Jupyter notebooks and RStudio, and the iDrop web interface for iRODS.

The afternoon session will focus on data-driven analysis support on Wrangler. The presentations center around use of the dynamically provisioned Hadoop ecosystem on Wrangler, and include an introduction to the core Hadoop cluster for big data analysis, using existing analysis routines through Hadoop Streaming, interactive analysis with Spark, and using Hadoop/Spark through the Python and R interfaces that are often more familiar to researchers.
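
For a concrete flavor of the afternoon Spark material, here is a minimal PySpark word-count sketch (not taken from the tutorial materials); the application name and HDFS path are hypothetical placeholders.

```python
# Minimal PySpark sketch; the application name and HDFS path are placeholders.
from pyspark import SparkConf, SparkContext

sc = SparkContext(conf=SparkConf().setAppName("wrangler-demo"))

# Classic word count over data staged into the dynamically provisioned HDFS.
counts = (sc.textFile("hdfs:///user/demo/input.txt")
            .flatMap(lambda line: line.split())
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))

# Print the ten most frequent words.
for word, n in counts.takeOrdered(10, key=lambda kv: -kv[1]):
    print(word, n)

sc.stop()
```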


Monday July 18, 2016 8:00am - 5:00pm EDT
Dupont
 