High Performance Computing (HPC)

The High Performance Computing systems for structural calculations consist of two symmetric multiprocessor (SMP) systems with GPU capability and two Beowulf clusters, all running the Linux operating system.

GPU/SMP Systems

Archer:  This 48-core computer consists of two 24-core Intel Xeon CPUs and has 256GB of shared memory for processing that needs to use large amounts of memory and GPU-accelerated resources.  It has two NVIDIA Tesla K40 GPUs, for a total of 5,760 CUDA cores running at 875MHz and 24GB of GPU memory.  The machine also has 1TB of ultra-fast solid-state local scratch space.
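
As a minimal illustrative sketch (not a prescribed workflow for this machine), the following host-only program uses the standard CUDA runtime API to confirm which GPUs a job can see before launching CUDA-accelerated work; it assumes the CUDA toolkit is installed and the hypothetical file name is only for the example:

    /* query_gpus.cu -- compile with: nvcc query_gpus.cu -o query_gpus
       Lists every CUDA device visible to the process and its memory. */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "No CUDA devices visible\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            /* On Archer this should report two Tesla K40s, ~12GB each. */
            printf("Device %d: %s, %.1f GB global memory, %d multiprocessors\n",
                   i, prop.name, prop.totalGlobalMem / 1e9, prop.multiProcessorCount);
        }
        return 0;
    }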

Executor:  This 64-core computer consists of two 32-core AMD EPYC CPUs running at a boost clock of up to 3.0GHz.  It has 512GB of fast DDR4 shared memory for processing that needs to use large blocks of memory at once.  The machine also has 20TB of high-performance local scratch disk space for ultra-high-speed processing of large datasets.  It contains two NVIDIA RTX 2080 Ti GPU co-processors for CUDA-accelerated applications, giving it a total of 8,704 CUDA cores and 22GB of GDDR6 memory.
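
The shared-memory (SMP) model on these machines means a single multithreaded process can address the whole memory directly.  The sketch below is only an illustration of that idea, assuming GCC with OpenMP; array size and file name are placeholders for the example:

    /* smp_sum.c -- compile with: gcc -fopenmp smp_sum.c -o smp_sum
       Illustration only: fills a large in-memory array and reduces it in
       parallel across the shared-memory cores. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void) {
        /* ~8GB of doubles; the 512GB of shared DDR4 allows far larger
           arrays to be held entirely in memory (assumed example size). */
        size_t n = (size_t)1 << 30;
        double *a = malloc(n * sizeof *a);
        if (!a) { perror("malloc"); return 1; }

        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (size_t i = 0; i < n; ++i) {
            a[i] = (double)i;
            sum += a[i];
        }
        printf("threads=%d sum=%.3e\n", omp_get_max_threads(), sum);
        free(a);
        return 0;
    }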

Beowulf Clusters

Ultron:  Ultron consists of twelve 1U compute nodes and a 3U head node with solid-state storage.  Each compute node has dual 14-core 2.4GHz Intel Xeon E5-2680 CPUs with 256GB RAM.  All compute nodes have 512GB SSD drives as local scratch space.  Three nodes contain a total of 20 NVIDIA Tesla K80 processors for CUDA-accelerated applications.  Cluster communication is via 56 Gb/s FDR InfiniBand.  In total there are 320 Xeon CPU cores with 2.8TB of RAM, and 49,920 GPU/CUDA cores with 240GB of GDDR5.  The cluster has a theoretical combined CPU/GPU peak performance of 108.212 TFLOPS.
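
Theoretical peak figures like this are normally estimated as cores × clock × floating-point operations per cycle, summed over CPUs and GPUs.  The sketch below only illustrates that arithmetic; the clocks and per-cycle throughputs it assumes (AVX2 FMA on the Xeons at base clock, single-precision FMA on the K80s at boost clock) are not the published inputs behind the quoted 108.212 TFLOPS, so its total differs from that figure:

    /* peak_flops.c -- compile with: gcc peak_flops.c -o peak_flops
       Illustrative peak estimate: cores * clock * FLOPs-per-cycle.
       The clock and FLOPs-per-cycle values below are assumptions. */
    #include <stdio.h>

    int main(void) {
        double cpu = 320.0   * 2.4e9   * 16;  /* Xeon cores, base clock, AVX2 FMA (assumed) */
        double gpu = 49920.0 * 0.875e9 * 2;   /* CUDA cores, boost clock, FMA = 2 FLOPs (assumed) */
        printf("CPU %.2f TFLOPS + GPU %.2f TFLOPS = %.2f TFLOPS\n",
               cpu / 1e12, gpu / 1e12, (cpu + gpu) / 1e12);
        return 0;
    }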

Vision:  Vision consists of four 1U compute nodes and a 3U head node with 26TB of solid-state storage.  It utilizes a clustered, parallelized, scalable storage system, currently at 140TB.  Compute nodes each have 32-core 2.5GHz AMD EPYC CPUs with 512GB RAM.  The compute nodes have 2TB of SSD local scratch space.  All compute nodes contain two NVIDIA RTX 2080 Ti GPUs for CUDA-accelerated applications.  Cluster communication is via 100 Gb/s EDR InfiniBand.  In total there are 128 EPYC CPU cores with 2,048GB of RAM, and 34,816 CUDA cores.
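
Jobs on the Beowulf clusters typically span compute nodes by passing messages over the InfiniBand fabric, most commonly with MPI.  The sketch below is a minimal, illustrative MPI program; it assumes an MPI implementation such as Open MPI or MPICH is available, and the launcher invocation shown is generic rather than the clusters' specific job-submission procedure:

    /* mpi_hello.c -- compile with: mpicc mpi_hello.c -o mpi_hello
       run with e.g.:               mpirun -np 8 ./mpi_hello
       Each rank reports which node it landed on; ranks communicate over
       the cluster interconnect. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(host, &len);

        printf("rank %d of %d running on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }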