XSEDE16 has ended


Accelerating Discovery
Wednesday, July 20

8:30am EDT

AD: A Parallel Evolutionary Algorithm for Subset Selection in Causal Inference Models
Science is concerned with identifying causal relationships. To move beyond simple observed relationships and associational inferences, researchers may employ randomized experimental designs to isolate a treatment effect, which then permits causal inferences. When experiments are not practical, a researcher is relegated to analyzing observational data. To make causal inferences from observational data, one must adjust the data so that they resemble data that might have emerged from an experiment. Traditionally, this has been done with statistical models known as matching methods. We claim that matching methods are unnecessarily constraining and propose instead that the goal is better achieved via a subset selection procedure that identifies statistically indistinguishable treatment and control groups. This reformulation as a search for optimal subsets leads to a computationally complex model. We develop an evolutionary algorithm that is more efficient and identifies empirically better solutions than existing causal inference methods. To gain further efficiency, we also develop a scalable algorithm for a parallel computing environment, enlisting additional processors to search a greater range of the solution space and to aid other processors at particularly difficult peaks.
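The idea of evolving subsets until the treatment and control groups are statistically indistinguishable can be sketched with a toy genetic algorithm. This is a minimal illustration, not the authors' implementation: the data, the single covariate, and the balance-gap fitness function are all hypothetical.

```python
import random

random.seed(42)

# Toy data: one covariate; a fixed treatment group and a larger control pool.
treated = [2.1, 2.4, 1.9, 2.2, 2.0]
pool    = [0.5, 2.0, 3.1, 2.3, 1.8, 0.2, 2.5, 4.0, 1.9, 2.1, 0.9, 2.2]

t_mean = sum(treated) / len(treated)

def fitness(mask):
    """Gap between the treated mean and the selected-control mean
    (lower is better); empty subsets are penalized with infinity."""
    chosen = [x for x, m in zip(pool, mask) if m]
    if not chosen:
        return float("inf")
    return abs(t_mean - sum(chosen) / len(chosen))

def mutate(mask, rate=0.1):
    # Flip each inclusion bit with a small probability.
    return [1 - b if random.random() < rate else b for b in mask]

def crossover(a, b):
    # Single-point crossover of two subset masks.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Evolve a population of subset masks toward covariate balance.
pop = [[random.randint(0, 1) for _ in pool] for _ in range(30)]
for gen in range(200):
    pop.sort(key=fitness)
    elite = pop[:10]                      # keep the best masks
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]

best = min(pop, key=fitness)
```

In the paper's parallel version, multiple processors would each run such a search over different regions of the mask space and exchange good candidates; here a single serial loop suffices to show the mechanism.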

Wednesday July 20, 2016 8:30am - 9:00am EDT
Chopin Ballroom

9:00am EDT

AD: Three Dimensional Simulations of Fluid Flow and Heat Transfer with Spectral Element Method
This paper presents a computational approach for simulating three-dimensional fluid flow and convective heat transfer, including viscous heating and a Boussinesq approximation for the buoyancy term. The algorithm was implemented with a modal spectral element method for accurate resolution of the coupled nonlinear partial differential equations. High-order time integration schemes were used for the time derivatives. Simulation results were analyzed and verified. They indicate that this approach is viable for investigating convective heat transfer subject to complex thermal and flow boundary conditions in three-dimensional irregular domains.
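The Boussinesq approximation mentioned in the abstract treats density as constant everywhere except in the buoyancy term, which reduces to a g·β·(T − T0) body force. A minimal illustration, with purely illustrative values for g, β, and T0:

```python
# Boussinesq buoyancy term: density variation enters the momentum equation
# only through a g*beta*(T - T0) body force. Values below are illustrative.
g = 9.81        # gravitational acceleration, m/s^2
beta = 2.0e-4   # thermal expansion coefficient, 1/K
T0 = 300.0      # reference temperature, K

def buoyancy_accel(T):
    """Vertical acceleration per unit mass of a fluid parcel at temperature T."""
    return g * beta * (T - T0)

# A parcel warmer than the reference experiences a small upward acceleration;
# a cooler parcel sinks.
up = buoyancy_accel(310.0)
down = buoyancy_accel(290.0)
```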

Wednesday July 20, 2016 9:00am - 9:30am EDT
Chopin Ballroom

9:30am EDT

AD: Performance and Scalability Analysis for Parallel Reservoir Simulations on Three Supercomputer Architectures
In this work, we tested performance and scalability on three supercomputers of different architectures, SDSC’s Comet, SciNet’s GPC, and an IBM Blue Gene/Q system, by benchmarking parallel reservoir simulations. The Comet and GPC systems adopt a fat-tree network and are connected with InfiniBand interconnect technology. The Blue Gene/Q uses a 5-dimensional toroidal network with custom interconnects. In terms of supercomputer architectures, these systems represent the two main interconnect families: fat-tree and torus. To demonstrate application scalability on today’s diversified supercomputer architectures, we benchmark a parallel black oil simulator that is extensively used in the petroleum industry. Our implementation of this simulator is coded in C and MPI, and it offers grid, data, linear solver, preconditioner, distributed matrix and vector, and modeling modules. Load balancing is based on the Hilbert space-filling curve (HSFC) method. Krylov subspace and AMG solvers are implemented, including restarted GMRES, BiCGSTAB, and AMG solvers from Hypre. The results show that Comet is the fastest of the tested systems and that the Blue Gene/Q has the best parallel efficiency. The scalability analysis helps identify the performance barriers of the different supercomputer architectures, and the performance study provides insight for carrying out parallel reservoir simulations on large-scale computers of different architectures.
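The HSFC load-balancing method the abstract mentions can be sketched as follows: map each 2D grid cell to its index along a Hilbert curve, sort the cells by that index, and hand each process a contiguous chunk, which tends to be spatially compact. The `xy2d` routine below is the standard Hilbert-curve mapping, not the simulator's own code, and the grid and rank count are illustrative.

```python
def xy2d(n, x, y):
    """Convert (x, y) on an n-by-n grid (n a power of two) to its Hilbert index."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

n, ranks = 8, 4
cells = [(x, y) for x in range(n) for y in range(n)]
cells.sort(key=lambda c: xy2d(n, *c))     # order cells along the curve

chunk = len(cells) // ranks
parts = [cells[i * chunk:(i + 1) * chunk] for i in range(ranks)]
# Each rank now owns a contiguous run of the curve: 16 spatially clustered cells.
```

Because consecutive Hilbert indices are spatial neighbors, each rank's partition has a small boundary, which keeps MPI halo-exchange traffic low.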

Wednesday July 20, 2016 9:30am - 10:00am EDT
Chopin Ballroom

10:30am EDT

AD: A Scalable High-performance Topographic Flow Direction Algorithm for Hydrological Information Analysis
Hydrological information analysis based on Digital Elevation Models (DEMs) provides hydrological properties derived from high-resolution topographic data represented as an elevation grid. Flow direction detection is one of the most computationally intensive functions. As DEM resolution becomes higher, the computational bottleneck of this function hinders the use of these data in large-scale studies. Because computing flow directions over the study extent requires global information, parallelization involves iterative communication. This paper presents an efficient parallel flow direction detection algorithm that identifies which spatial features (e.g., flats) can or cannot be computed locally. An efficient sequential algorithm is then applied to resolve the local features, while communication is used to compute the non-local features. This strategy significantly reduces the number of iterations needed in the parallel algorithm. Experiments show that our algorithm outperformed the best existing parallel implementation (the D8 algorithm in TauDEM) by two orders of magnitude. The parallel algorithm exhibited desirable scalability on the Stampede and ROGER supercomputers.
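The D8 flow direction model that underlies this work assigns each cell the direction of its steepest downslope neighbor among the eight surrounding cells. A minimal serial sketch on a toy elevation grid (the direction codes and helper names are illustrative, and the real contribution of the paper is the parallel handling of non-local features such as flats, which this sketch omits):

```python
import math

# Toy 4x4 elevation grid, tilted so water drains toward the southeast corner.
dem = [
    [9.0, 8.0, 7.0, 6.0],
    [8.0, 7.0, 6.0, 5.0],
    [7.0, 6.0, 5.0, 4.0],
    [6.0, 5.0, 4.0, 3.0],
]

# D8 neighbor offsets (E, NE, N, NW, W, SW, S, SE), codes 0-7.
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def d8_direction(dem, r, c):
    """Return the D8 code (0-7) of the steepest downslope neighbor of (r, c),
    or -1 if the cell has no lower neighbor (a pit or flat)."""
    best_code, best_slope = -1, 0.0
    for code, (dr, dc) in enumerate(OFFSETS):
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(dem) and 0 <= cc < len(dem[0]):
            dist = math.sqrt(2.0) if dr and dc else 1.0   # diagonal step is longer
            slope = (dem[r][c] - dem[rr][cc]) / dist
            if slope > best_slope:
                best_code, best_slope = code, slope
    return best_code

flow = [[d8_direction(dem, r, c) for c in range(4)] for r in range(4)]
```

On this tilted plane every interior cell drains to its southeast neighbor (code 7), while the lowest corner is a pit (−1); resolving such pits and flats consistently across partition boundaries is exactly the non-local step that forces communication in the parallel algorithm.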

Wednesday July 20, 2016 10:30am - 11:00am EDT
Chopin Ballroom

11:00am EDT

AD: Implementation of Simple XSEDE-Like Clusters: Science Enabled and Lessons Learned
The Extreme Science and Engineering Discovery Environment (XSEDE) has created a suite of software that is collectively known as the XSEDE-Compatible Basic Cluster (XCBC). It is designed to enable smaller, resource-constrained research groups or universities to quickly and easily implement a computing environment similar to XSEDE computing resources. The XCBC acts as both an enabler of local research and as a springboard for seamlessly moving researchers onto XSEDE resources when the time comes to scale up their efforts to larger hardware. The XCBC system consists of the Rocks Cluster Manager, developed at the San Diego Supercomputer Center for use on Gordon and Comet, and an XSEDE-specific “Rocks Roll”, containing a selection of libraries, compilers, and scientific software curated by the Campus Bridging (CB) group. The versions of software included in the roll are kept up to date with those implemented on XSEDE resources.
The Campus Bridging team has helped several universities implement the XCBC and finds the design to be extremely useful for research groups or institutions with limited time, administrator knowledge, or money. Here, we detail our recent experiences implementing the XCBC design at university campuses across the country. These implementations were carried out with Campus Bridging staff traveling on-site to the partner institutions to assist directly with the cluster build. Results from site visits at partner institutions show how the Campus Bridging team helped accelerate cluster implementation and research by providing expertise and hands-on assistance during cluster building. We also describe how, following a visit from Campus Bridging staff, the XCBC has accelerated research and discovery at our partner institutions.

Wednesday July 20, 2016 11:00am - 11:30am EDT
Chopin Ballroom

11:30am EDT

AD: Using High Performance Computing To Model Cellular Embryogenesis
C. elegans is a primitive multicellular organism (a worm) that shares many important biological characteristics with human beings. It begins as a single cell and then undergoes a complex embryogenesis to form a complete animal. Using experimental data, the early stages of the cells' development are simulated on computers. The goal of this project is to use this simulation to compare the embryogenesis of C. elegans cells with that of human cells. Since the simulation involves manipulating many files and large amounts of data, the power provided by supercomputers and parallel programming is required. The serial agent-based simulation program NetLogo completed the simulation in roughly six minutes. By comparison, the parallel agent-based simulation toolkit RepastHPC completed the simulation in under a minute when executing on four processors of a small cluster. Unlike NetLogo, RepastHPC does not contain a visual element; therefore, a visualization program, VisIt, was used to graphically display the data produced by the RepastHPC simulation.
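The agent-based style of simulation described here treats each cell as an agent that divides on a schedule. A minimal sketch of the idea, in plain Python rather than the NetLogo or RepastHPC APIs (the `Cell` class and synchronous division rounds are illustrative, not the project's actual model):

```python
# Minimal agent-based sketch of synchronous cell division in early embryogenesis.
class Cell:
    def __init__(self, gen):
        self.gen = gen          # which division round produced this cell

    def divide(self):
        # Each division produces two daughter cells of the next generation.
        return [Cell(self.gen + 1), Cell(self.gen + 1)]

cells = [Cell(0)]               # start from a single zygote
for step in range(5):           # five synchronous division rounds
    cells = [child for c in cells for child in c.divide()]
```

In a RepastHPC-style parallel run, the agent population would be distributed across MPI processes, with each process stepping its local agents and the toolkit handling agents that interact across process boundaries.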

Wednesday July 20, 2016 11:30am - 12:00pm EDT
Chopin Ballroom