
Computing Hardware

CRCD Hardware Specifications

The CRC provides different hardware types to target different computing use cases. These hardware profiles are grouped under a common cluster name and further divided into partitions that reflect differences in architecture or usage mode.

Cluster Types and Use Cases
Cluster Acronym | Full Form of Acronym | Description of Use Cases
mpi | Message Passing Interface | For tightly coupled parallel codes that use the Message Passing Interface APIs to distribute computation across multiple nodes, each with its own memory space
htc | High Throughput Computing | For genomics and other health sciences-related workflows that can run on a single node
smp | Shared Memory Processing | For jobs that can run on a single node where the CPU cores share a common memory space
gpu | Graphics Processing Unit | For AI/ML applications and physics-based simulation codes written to take advantage of accelerated computing on GPU cores

[Diagram: CRCD hardware landscape showing the four cluster types (MPI, HTC, SMP, GPU), login nodes, VIZ nodes, and their interconnections]
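
Jobs are typically directed to one of these clusters, and to a partition within it, through Slurm directives in the batch script. The following is a minimal sketch, assuming the clusters are addressed with Slurm's --clusters option; the job and program names are hypothetical, and your allocation may require additional options (for example an account).

    #!/bin/bash
    #SBATCH --clusters=smp        # cluster acronym from the table above
    #SBATCH --partition=smp       # partition within that cluster
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --time=01:00:00

    ./my_program                  # hypothetical executable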


GPU Cluster Overview

The GPU cluster is optimized for computational tasks requiring GPU acceleration, such as artificial intelligence and machine learning workflows, molecular dynamics simulations, and large-scale data analysis.

Key Features

  • Designed for high-performance GPU workloads.
  • Supports CUDA, TensorFlow, PyTorch, and other GPU-accelerated frameworks.

Specifications

Partition | Nodes | GPU | VRAM | GPU/Node | --constraint | CPU | Cores/Node | Mem/Node | Scratch | Network | Node Names
a100 | 12 | NVIDIA A100-PCIE-40GB | 40 GB | 4 | a100,40g,amd | AMD EPYC 7742 | 61 | 512 GB | 2 TB NVMe | HDR200 IB | gpu-n[33-44]
a100_multi | 10 | NVIDIA A100-PCIE-40GB | 40 GB | 4 | a100,40g,amd | AMD EPYC 7742 | 64 | 512 GB | 2 TB NVMe | HDR200 IB | gpu-n[45-54]
a100_nvlink | 3 | NVIDIA A100-SXM4-40GB | 40 GB | 8 | a100,40g,amd | AMD EPYC 7742 | 128 | 1 TB | 12 TB NVMe | HDR200 IB | gpu-n[28-30]
a100_nvlink | 2 | NVIDIA A100-SXM4-80GB | 80 GB | 8 | a100,80g,amd | AMD EPYC 7742 | 128 | 1 TB | 2 TB NVMe | HDR200 IB | gpu-n[31-32]
l40s | 19 | NVIDIA L40S | 48 GB | 4 | l40s,48g,intel | Intel Xeon Platinum 8462Y+ | 64 | 512 GB | 7 TB NVMe | 10GbE | gpu-n[55-73]
rtx6k | 9 | NVIDIA RTX PRO 6000 Blackwell Server Edition | 96 GB | 8 | rtx6k,96g,amd | AMD EPYC 9555 | 128 | 1 TB | 7 TB NVMe | HDR200 IB | gpu-n[74-82]
h200 | 2 | NVIDIA H200 | 141 GB | 8 | h200,141g,intel | Intel Xeon Platinum 8592+ | 128 | 3 TB | 7 TB NVMe | HDR200 IB | gpu-n[89-90]
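
As a sketch of how the --constraint column is used, the following hypothetical batch script requests one A100 on the a100 partition; the partition and feature names come from the table above, while the module and script names are placeholders that may differ on CRCD.

    #!/bin/bash
    #SBATCH --clusters=gpu
    #SBATCH --partition=a100
    #SBATCH --constraint=40g      # feature tag from the --constraint column
    #SBATCH --gres=gpu:1          # request one GPU on the node
    #SBATCH --cpus-per-task=8
    #SBATCH --time=04:00:00

    module load python            # hypothetical module name
    python train.py               # hypothetical training script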

HTC Cluster Overview

The HTC cluster is designed to handle data-intensive health sciences workflows (genomics, neuroimaging, etc.) that can run on a single node.

Key Features

  • Dedicated Open OnDemand web portal instance

Specifications

Partition | Nodes | GPU | VRAM | GPU/Node | --constraint | CPU | Cores/Node | Mem/Node | Scratch | Network | Node Names
htc | 72 | N/A | N/A | N/A | intel,ice_lake | Intel Xeon Platinum 8352Y | 81 | 768 GB | 3 TB NVMe | 10GbE | htc-1024-n[0-3],htc-n[24-91]
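
A typical HTC job is a single-node, single-task script that requests only the cores and memory it needs. The sketch below uses hypothetical resource amounts and program names.

    #!/bin/bash
    #SBATCH --clusters=htc
    #SBATCH --partition=htc
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=16
    #SBATCH --mem=64G
    #SBATCH --time=12:00:00

    ./run_pipeline.sh             # hypothetical genomics workflow step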

MPI Cluster Overview

The MPI cluster enables jobs with tightly coupled parallel codes using Message Passing Interface APIs for distributing computation across multiple nodes, each with its own memory space.

Key Features

  • InfiniBand and Omni-Path networking
  • Minimum of 2 Nodes per Job

Specifications

Partition | Nodes | GPU | VRAM | GPU/Node | --constraint | CPU | Cores/Node | Mem/Node | Scratch | Network | Node Names
mpi | 136 | N/A | N/A | N/A | hdr | Intel Xeon Gold 6342 | 48 | 512 GB | 2 TB NVMe | HDR200 IB | mpi-n[0-135]
ndr | 18 | N/A | N/A | N/A | ndr | AMD EPYC 9575F | 128 | 1 TB | 3 TB NVMe | HDR200 IB | mpi-n[136-153]
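
A minimal multi-node sketch, assuming an MPI module is available and using hypothetical names; the task count per node should not exceed the Cores/Node value for the chosen partition.

    #!/bin/bash
    #SBATCH --clusters=mpi
    #SBATCH --partition=mpi
    #SBATCH --nodes=2             # the MPI cluster requires at least 2 nodes per job
    #SBATCH --ntasks-per-node=48  # matches Cores/Node on the mpi partition
    #SBATCH --time=08:00:00

    module load openmpi           # hypothetical module name
    srun ./my_mpi_app             # hypothetical MPI executable launched across the nodes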

SMP Cluster Overview

The SMP nodes are appropriate for programs parallelized using the shared memory framework. These nodes are similar in architecture to a laptop or workstation, but with many more CPU cores sharing a common memory space.

Key Features

  • High-memory partition for nodes with up to 3 TB of shared memory

Specifications

Partition | Nodes | GPU | VRAM | GPU/Node | --constraint | CPU | Cores/Node | Mem/Node | Scratch | Network | Node Names
smp | 94 | N/A | N/A | N/A | amd,rome | AMD EPYC 7302 | 47 | 256 GB | 960 GB NVMe | 10GbE | smp-n[156-210,214-251,266]
high-mem | 11 | N/A | N/A | N/A | intel,ice_lake | Intel Xeon Platinum 8352Y | 61 | 1 TB | 10 TB NVMe | 10GbE | smp-1024-n[0-8],smp-2048-n[0-1]
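
Shared-memory jobs usually request a single task with multiple CPUs and point the threading runtime at the allocated core count. A minimal OpenMP-style sketch with hypothetical names:

    #!/bin/bash
    #SBATCH --clusters=smp
    #SBATCH --partition=smp
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=16    # threads sharing one node's memory
    #SBATCH --time=06:00:00

    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    ./my_openmp_app               # hypothetical shared-memory executable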

TEACH Overview

The TEACH cluster makes a subset of the hardware on the CRCD system available for students and teachers to develop computational workflows around course materials without competing with research-oriented jobs.

Key Features

  • Consists of both CPU and GPU hardware

Specifications

Partition | Nodes | GPU | VRAM | GPU/Node | --constraint | CPU | Cores/Node | Mem/Node | Scratch | Network | Node Names
cpu | 48 | N/A | N/A | N/A | N/A | Intel Xeon Gold 6126 | 24 | 192 GB | 480 GB NVMe | 1GbE | teach-cpu-n[0-47]
gpu | 1 | N/A | N/A | N/A | gtx1080 | unknown | 19 | 128 GB | ? | 10GbE | teach-gpu-n12
gpu | 9 | NVIDIA GeForce GTX 1080 Ti | 11 GB | 4 | gtx1080 | Intel Xeon Silver 4112 | 19 | 96 GB | 480 GB NVMe | 1GbE | teach-gpu-n[17-25]
gpu | 7 | NVIDIA GeForce GTX TITAN X | 12 GB | 4 | titanx | Intel Xeon E5-2620 v3 | 19 | 128 GB | 960 GB NVMe | 1GbE | teach-gpu-n[0-6]
gpu | 2 | NVIDIA L4 | 22 GB | 8 | l4 | Intel Xeon Platinum 8592+ | 19 | 512 GB | 733 GB NVMe | 10GbE | teach-gpu-n[15-16]
gpu | 7 | NVIDIA GeForce GTX 1080 | 8 GB | 4 | gtx1080 | Intel Xeon E5-2620 v3 | 19 | 128 GB | 822 GB NVMe | 1GbE | teach-gpu-n[7-11,13-14]
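
A course job on the TEACH GPU hardware can target a specific card generation through the --constraint column. The sketch below uses hypothetical module and script names.

    #!/bin/bash
    #SBATCH --clusters=teach
    #SBATCH --partition=gpu
    #SBATCH --constraint=gtx1080  # feature tag from the table above
    #SBATCH --gres=gpu:1
    #SBATCH --time=02:00:00

    module load pytorch           # hypothetical module name
    python assignment.py          # hypothetical course script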

Login Nodes Overview

The Login Nodes provide access to a Linux command-line interface on the CRCD system via the Secure Shell protocol (SSH).

Key Features

  • Load balancing between login nodes to better address usage demands
  • Cgroup-based management of system resources

Specifications

Login Node Specifications
Hostname | Backend Hostname | Architecture | Cores/Node | Mem | Mem/Core | OS Drive | Network
h2p.crc.pitt.edu | login0.crc.pitt.edu | Intel Xeon Gold 6326 (Ice Lake) | 32 | 256 GB | 8 GB | 2x 480 GB NVMe (RAID 1) | 25GbE
h2p.crc.pitt.edu | login1.crc.pitt.edu | Intel Xeon Gold 6326 (Ice Lake) | 32 | 256 GB | 8 GB | 2x 480 GB NVMe (RAID 1) | 25GbE
htc.crc.pitt.edu | login3.crc.pitt.edu | Intel Xeon Gold 6326 (Ice Lake) | 32 | 256 GB | 8 GB | 2x 480 GB NVMe (RAID 1) | 25GbE
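
For example, connections are made with a standard SSH client; pittID below is a placeholder for your university account name.

    # general-purpose login nodes (load-balanced across login0 and login1)
    ssh pittID@h2p.crc.pitt.edu

    # login node dedicated to the HTC cluster
    ssh pittID@htc.crc.pitt.edu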

VIZ Overview

The VIZ Login Nodes enable access to an in-browser Linux Desktop environment on the CRCD system.

Key Features

  • Load balancing between login nodes to better address usage demands
  • Cgroup-based management of system resources

Specifications

VIZ Node Specifications
Web URL | Backend Hostname | GPU Type | # GPUs | Host Architecture | Cores | Mem | Mem/Core | Scratch | Network
https://viz.crc.pitt.edu | viz-n0.crc.pitt.edu | GTX 1080 8GB | 2 | Intel Xeon E5-2680v4 (Broadwell) | 28 | 256 GB | 9.1 GB | 1.6 TB SSD | 10GbE
https://viz.crc.pitt.edu | viz-n1.crc.pitt.edu | RTX 2080 Ti 11GB | 2 | Intel Xeon Gold 6226 (Cascade Lake) | 24 | 192 GB | 8 GB | 1.9 TB SSD | 10GbE