
High Performance Computing Clusters

The University of South Carolina High Performance Computing (HPC) clusters are available to researchers requiring specialized hardware resources for computational research applications. The clusters are managed by Research Computing (RC) in the Division of Information Technology.

High Performance Computing resources at the University of South Carolina include the USC flagship Hyperion HPC cluster, which consists of 356 nodes providing a total of 16,616 CPU cores. The cluster has a heterogeneous configuration of 291 compute nodes, 8 Big Memory nodes, 53 GPU nodes, a large SMP system, and 2 IBM Power8 quad-GPU servers. All nodes are connected by a high-speed, low-latency 100 Gb/s InfiniBand network and have access to a 1.4 petabyte high-performance GPFS scratch filesystem, along with 450 terabytes of home directory storage served over 1 Gb/s Ethernet. Hyperion is housed in the USC data center, which provides enterprise-level monitoring, cooling, power backup, and Internet2 connectivity.

Research Computing clusters are deployed under Bright Cluster Manager, and compute resources are made available to users through job queues. Bright provides a robust software environment to deploy, monitor, and manage HPC clusters.
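
For reference, the sketch below shows one way to submit work to a cluster job queue from a login node. It is a minimal example only: it assumes a Slurm-style scheduler that provides the sbatch command, and the partition name "defq", the resource limits, and the application command are hypothetical placeholders rather than the actual queue names configured on Hyperion or Bolden.

import subprocess
import tempfile

# Minimal sketch of submitting a batch job from a cluster login node.
# Assumptions (not taken from this page): the scheduler is Slurm-style and
# provides the sbatch command, and a partition named "defq" exists.
JOB_SCRIPT = """#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=defq
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --time=00:10:00

# "defq" above is a hypothetical queue name; replace `srun hostname`
# with the real application.
srun hostname
"""

def submit(script_text: str) -> str:
    """Write the job script to a temporary file and hand it to sbatch."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as handle:
        handle.write(script_text)
        script_path = handle.name
    result = subprocess.run(
        ["sbatch", script_path], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

if __name__ == "__main__":
    # Prints the scheduler's response, e.g. "Submitted batch job 12345".
    print(submit(JOB_SCRIPT))

The scheduler typically responds with the assigned job ID, which can then be used to monitor or cancel the job.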

Hyperion

Hyperion is our flagship cluster, intended for large, parallel jobs. It consists of 356 compute, GPU, and Big Memory nodes providing 16,616 CPU cores. Compute and GPU nodes have 128-256 GB of RAM, and Big Memory nodes have 2 TB. All nodes have EDR InfiniBand (100 Gb/s) interconnects and access to 1.4 PB of GPFS scratch storage.

Bolden

This cluster is intended for teaching purposes only and consists of 20 compute nodes providing 400 CPU cores. All nodes have FDR InfiniBand (54 Gb/s) interconnects and access to 300 TB of Lustre storage.

Maxwell (Retired)  

This cluster was available for teaching purposes only. It consisted of 55 compute nodes with 2.4 GHz and 2.8 GHz CPUs, each with 24 GB of RAM.

 

Historical Summary of HPC Clusters

Hyperion Phase III (Active)
Number of nodes: 356
Cores per node: 64 or 48 (Compute); 48 or 28 (GPU); 64 (Big Memory)
Total cores: 16,616
Processor speeds: 3.0 GHz
Memory per node: Compute 256 or 192 GB; GPU 192 or 128 GB; Big Memory 2.0 TB
Disk storage: 450 TB home (1 Gb/s Ethernet); 1.4 PB scratch (100 Gb/s InfiniBand)
GPU nodes: 9 (dual P100); 44 (dual V100)
Big Memory nodes: 8
Interconnect: EDR InfiniBand, 100 Gb/s

Hyperion Phase II (Retired)
Number of nodes: 407
Cores per node: 48 or 28 (Compute); 48 or 28 (GPU); 40 (Big Memory)
Total cores: 15,524
Processor speeds: 3.0 GHz
Memory per node: Compute 128 GB; GPU 128 GB; Big Memory 1.5 TB
Disk storage: 450 TB home (1 Gb/s Ethernet); 1.4 PB scratch (100 Gb/s InfiniBand)
GPU nodes: 9 (dual P100); 44 (dual V100)
Big Memory nodes: 8
Interconnect: EDR InfiniBand, 100 Gb/s

Hyperion Phase I (Retired)
Number of nodes: 224
Cores per node: 28 (Compute); 28 (GPU); 40 (Big Memory)
Total cores: 6,760
Processor speeds: 2.8 GHz (Compute, GPU); 2.1 GHz (Big Memory)
Memory per node: Compute 128 GB; GPU 128 GB; Big Memory 1.5 TB
Disk storage: 300 TB Lustre; 50 TB NFS; 1.5 PB scratch (100 Gb/s InfiniBand)
GPU nodes: 8
Big Memory nodes: 8
Interconnect: EDR InfiniBand, 100 Gb/s

Bolden (Active)
Number of nodes: 20
Cores per node: 20
Total cores: 400
Processor speeds: 2.8 GHz
Memory per node: 64 GB
Disk storage: 300 TB
GPU nodes: 1
Big Memory nodes: 1
Interconnect: FDR InfiniBand, 54 Gb/s

Maxwell (Retired)
Number of nodes: 55
Cores per node: 12
Total cores: 660
Processor speeds: 2.4 GHz / 2.8 GHz
Memory per node: 24 GB
Disk storage: 20 TB
GPU nodes: 15 (M1060)
Big Memory nodes: None
Interconnect: QDR InfiniBand, 40 Gb/s

 

