HPC Clusters

The University of South Carolina campus High Performance Computing (HPC) clusters are available to researchers requiring specialized hardware resources for research applications. The clusters are managed by Research Computing (RC) in the Division of Information Technology.

Research Computing resources at the University of South Carolina include the high-performance computing cluster Hyperion, which consists of 407 individual nodes and provides a total of 15,524 CPU cores. The cluster is a heterogeneous configuration consisting of 343 compute nodes, 8 large-memory nodes, 53 GPU nodes, a large SMP system, and 2 IBM Power8 quad-GPU servers. All nodes are connected via a high-speed, low-latency InfiniBand network at 100 Gb/s. All nodes are also connected to a 1.4-petabyte high-performance GPFS scratch filesystem and 450 terabytes of home directory storage. The cluster, managed by the Research Computing group under the Division of Information Technology, is housed in the university data center, which provides enterprise-level monitoring, cooling, power backup, and Internet2 connectivity.

RC clusters are accessed through job queues under the Bright Cluster Manager system. Bright provides a robust software environment to deploy, monitor, and manage HPC clusters.
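As a sketch of what batch submission typically looks like, assuming the clusters run the Slurm scheduler commonly deployed under Bright (the job parameters and module name below are illustrative assumptions, not taken from USC documentation — check `sinfo` and `module avail` on your cluster):

```shell
#!/bin/bash
# Hypothetical Slurm batch script; resource values and module names
# are illustrative assumptions, not from USC documentation.
#SBATCH --job-name=example
#SBATCH --nodes=1
#SBATCH --ntasks=28          # e.g., one task per core on a Gen1 compute node
#SBATCH --time=01:00:00
#SBATCH --output=example_%j.out

# Load software through environment modules (module name is assumed)
module load gcc

# Run from the submission directory; I/O-heavy jobs should use GPFS scratch
cd "$SLURM_SUBMIT_DIR"
srun ./my_program
```

A script like this would be submitted with `sbatch example.sh`, and its queue status checked with `squeue -u $USER`.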

Hyperion

Hyperion is our flagship cluster, intended for large parallel jobs. It consists of 407 compute, GPU, and Big Data nodes, providing 15,524 CPU cores. Compute and GPU nodes have 128 GB of RAM; Big Data nodes have 1.5 TB. All nodes have EDR InfiniBand (100 Gb/s) interconnects and access to 1.4 PB of GPFS storage.
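If Hyperion's queues are managed by Slurm, a job targeting one of its dual-V100 GPU nodes might request resources along these lines (the GRES syntax shown is standard Slurm, but the specific names and limits are assumptions — confirm against local documentation):

```shell
#!/bin/bash
# Hypothetical GPU job for a dual-V100 Hyperion node; the gres and
# module names below are illustrative assumptions.
#SBATCH --job-name=gpu-example
#SBATCH --nodes=1
#SBATCH --gres=gpu:2         # both GPUs on a dual-V100 node
#SBATCH --mem=120G           # stays within the 128 GB per node
#SBATCH --time=02:00:00

module load cuda             # module name is an assumption
srun ./my_gpu_program
```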

Bolden

This cluster is intended for teaching purposes only and consists of 20 compute nodes providing 400 CPU cores (20 cores per node). All nodes have FDR InfiniBand (54 Gb/s) interconnects and access to 300 TB of Lustre storage.

Maxwell

This cluster is available for teaching purposes only. It has 55 compute nodes with 2.8 GHz and 2.4 GHz CPUs, each with 24 GB of RAM.

Thoth

This cluster is used for special projects, prototyping and evaluating new software environments.  Please contact RC (rc@sc.edu) for more information.

 

Summary of HPC Clusters

| Name | Number of nodes | Cores per node | Total cores | Processor speeds | Memory per node | Disk storage | GPU nodes | Big Data nodes | Interconnect |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Hyperion | 407 | Gen1: 28 (compute), 28 (GPU), 40 (Big Data); Gen2: 48 | 15,524 | Gen1: 2.8 GHz (compute, GPU), 2.1 GHz (Big Data); Gen2: 3.0 GHz | Compute/GPU: 128 GB; Big Data: 1.5 TB | 450 TB home (10 Gb/s Ethernet); 1.4 PB scratch (100 Gb/s InfiniBand) | 9 (dual P100), 44 (dual V100) | 8 | EDR InfiniBand, 100 Gb/s |
| Bolden | 20 | 20 | 400 | 2.8 GHz | 64 GB | 300 TB | 1 | 1 | FDR InfiniBand, 54 Gb/s |
| Maxwell * | 55 | 12 | 660 | 2.4 GHz/2.8 GHz | 24 GB | 20 TB | 15 (M1060) | None | QDR InfiniBand, 40 Gb/s |
| Thoth * | 41 | 8-12 | 820 | 2.5 GHz | 128 GB | 4 TB | None | None | Ethernet, 1 Gb/s |

Hyperion is under active vendor service contracts. * Bolden, Maxwell, and Thoth are not under service contracts but will remain operational for teaching, testing, or prototyping until decommissioned.

