High Performance Computing / Research Computing

Sewanee SDI provides limited access to a small high-performance computing cluster for computationally intensive projects.

The primary research computing paths at Sewanee run through services provided by the National Science Foundation (NSF). Through its grants and funding programs, the NSF offers several research and computing resources that are available to Sewanee faculty and students.

National and Regional Supercomputing Resources

Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) is an NSF-funded consortium that supports a network of shared supercomputing resources housed at major U.S. research institutions. Through ACCESS, member institutions gain access to flexible pools of time on large supercomputers. These systems let researchers finish in an hour computations that would take a single desktop computer decades to complete.

ACCESS is the primary login method for many of the services discussed throughout this page. Sewanee researchers must request an ACCESS ID before they can submit requests for any NSF-funded project.

Researchers can request an ACCESS ID here: GETTING STARTED

Systems available on ACCESS include:

  • Jetstream2, a cloud-based virtual machine service hosted by Indiana University that can provide a variety of systems, including large-memory and GPU instances, for researchers and classes. *Requires ACCESS ID.

  • Open Science Pool (OSPool), a network of more than 60,000 computers available to run serial jobs with short queue times. *Requires ACCESS ID.

  • Stampede3, a supercomputer at the Texas Advanced Computing Center with 1,858 compute nodes (more than 140,000 cores), over 330 terabytes of RAM, 13 petabytes of storage, and almost 10 petaflops of peak capability. *Requires ACCESS ID.

  • ACCESS CloudBank for Research, a service-based platform that enables access to commercial cloud resources using a flexible, multi-cloud infrastructure. The platform currently supports the following commercial clouds: AWS, Google Cloud, IBM Cloud, and Microsoft Azure.

High Throughput Computing Resources

If you have a large job that can be broken down into small, independent pieces, high throughput computing (HTC) may be a way to reduce the time needed for your calculation. Instead of running a program on one large computer, you can create hundreds or thousands of small jobs that are sent to the Open Science Pool (OSPool), a set of thousands of computers across the country. Anyone can create an account at OSG Connect and start submitting jobs using the HTCondor system.
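
As a rough sketch of what an HTCondor submission looks like, the Python example below queues 100 independent copies of a job through HTCondor's Python bindings; the executable name, resource requests, and job count are illustrative placeholders, not OSPool requirements.

    import htcondor  # HTCondor Python bindings, available on OSPool access points

    # Describe one job; $(ProcId) gives each copy distinct arguments and output files.
    # "analyze.sh" is a hypothetical user script, not something OSPool provides.
    job = htcondor.Submit({
        "executable": "analyze.sh",
        "arguments": "$(ProcId)",
        "output": "out/job.$(ProcId).out",
        "error": "out/job.$(ProcId).err",
        "log": "jobs.log",
        "request_cpus": "1",
        "request_memory": "1GB",
        "request_disk": "1GB",
    })

    # Submit 100 independent copies of the job to the local scheduler in one call.
    schedd = htcondor.Schedd()
    result = schedd.submit(job, count=100)
    print(f"Submitted cluster {result.cluster()} with 100 jobs")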

MATCH Services

MATCH Services pairs researchers with expert support matched to their needs, helping with improvements like expanding code functionality, transitioning from lab computers to HPC, or introducing new technologies into a workflow.

*Requires ACCESS ID.

Open Storage Network (OSN)

The Open Storage Network (OSN) is an NSF-funded cloud storage resource, geographically distributed among storage pods. OSN is a collaboration between MGHPCC, SDSC, NCSA, Rice, JHU, and RENCI, with a federation of pod-owning sites and contributions from other advanced computing centers. Each OSN pod currently hosts 1.5 PB or more of storage and is connected to research and education (R&E) networks at speeds between 40 and 100 Gbit/s. OSN storage is allocated in buckets and is accessible through S3 interfaces, using tools such as Rclone, Cyberduck, and the AWS CLI, or via REST APIs.
Allocations and Getting Started: ALLOCATIONS
*Requires ACCESS ID.
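
Because OSN buckets speak the S3 protocol, any generic S3 client can reach them once an allocation is in place. The sketch below uses Python's boto3; the endpoint URL, bucket name, and credentials are placeholders standing in for the pod-specific values issued with a real allocation.

    import boto3

    # Endpoint, bucket, and credentials below are placeholders; the real values
    # (a pod's S3 endpoint and your bucket name) come with an OSN allocation.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://example-pod.osn.example.org",
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
    )

    # List the objects in the allocated bucket, then fetch one locally.
    for obj in s3.list_objects_v2(Bucket="my-osn-bucket").get("Contents", []):
        print(obj["Key"], obj["Size"])

    s3.download_file("my-osn-bucket", "data/input.csv", "input.csv")
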
FABRIC

FABRIC (FABRIC is Adaptive ProgrammaBle Research Infrastructure for Computer Science and Science Applications) is an international infrastructure that enables cutting-edge experimentation and research at scale in the areas of networking, cybersecurity, distributed computing, storage, virtual reality, 5G, machine learning, and science applications.

The FABRIC infrastructure is a distributed set of equipment at commercial colocation spaces, national labs, and campuses. Each of the 29 FABRIC sites has large amounts of compute and storage, interconnected by high-speed, dedicated optical links. It also connects to specialized testbeds (5G/IoT PAWR, NSF Clouds), the Internet, and high-performance computing facilities to create a rich environment for a wide variety of experimental activities.

FABRIC Across Borders (FAB) extends the network to 4 additional nodes in Asia and Europe.
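
To give a flavor of how experiments are built, FABRIC publishes a Python library, FABlib, for reserving "slices" of the testbed. The sketch below requests a single VM at one site and runs a command on it; the slice name, site, and node sizing are illustrative choices, and a configured FABRIC account (CILogon login, tokens, and SSH keys) is assumed.

    from fabrictestbed_extensions.fablib.fablib import FablibManager

    # Assumes FABRIC credentials are already configured for this environment.
    fablib = FablibManager()

    # Request a one-node slice; name, site, and sizing are placeholders.
    my_slice = fablib.new_slice(name="demo-slice")
    node = my_slice.add_node(name="node1", site="TACC", cores=2, ram=8, disk=10)
    my_slice.submit()  # reserve resources and wait for the slice to become active

    # Run a command on the reserved VM over SSH.
    stdout, stderr = node.execute("uname -a")
    print(stdout)
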
FABRIC PORTAL
*Requires CILogon / ACCESS ID