High Performance Computing (HPC)

High Performance Computing systems for structural calculations consist of two symmetric multiprocessor (SMP) systems, one dedicated GPU system, and two Beowulf clusters, all running the Linux operating system.

SMP Systems

Executor:  This 64-core computer consists of two 32-core AMD EPYC CPUs running at a boost clock of up to 3.0GHz.  It has 512GB of fast DDR4 shared memory for jobs that need to use large blocks of memory at once.  The machine also has 24TB of high-performance local scratch disk space for high-speed processing of large datasets.  It also contains two NVIDIA Titan V GPU co-processors for CUDA-accelerated applications, giving it a total of 10,240 CUDA cores.

Chimera:  This 48-core computer consists of four 12-core 2.1GHz AMD Opteron 6172 CPUs and has 192GB of shared memory for jobs that need to use large blocks of memory at once.  The machine also has 400GB of local disk space.
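
The defining feature of these SMP machines is that all cores see one large pool of RAM, so jobs can be parallelized with threads rather than message passing.  Below is a minimal, illustrative OpenMP sketch of that model; the array size and compiler invocation are assumptions for illustration, not settings tuned to either machine.

/* Shared-memory parallel sum: every thread works on a slice of one
 * large array that lives in the machine's shared RAM.
 * Build (assumed invocation):  gcc -O2 -fopenmp sum.c -o sum  */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    size_t n = 1UL << 28;               /* ~268M doubles, ~2GB (illustrative) */
    double *x = malloc(n * sizeof *x);
    if (!x) { perror("malloc"); return 1; }

    double sum = 0.0;
    /* Each thread fills and sums its own chunk; OpenMP combines the
     * per-thread partial sums into one shared result. */
    #pragma omp parallel for reduction(+:sum)
    for (size_t i = 0; i < n; i++) {
        x[i] = (double)i;
        sum += x[i];
    }

    printf("threads=%d  sum=%.0f\n", omp_get_max_threads(), sum);
    free(x);
    return 0;
}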

GPU Systems

Archer:  This 48-core computer consists of two 24-core Intel Xeon CPUs and has 256GB of shared memory for jobs that need both large amounts of memory and GPU-accelerated resources.  It has two NVIDIA Tesla K40 GPUs, for a total of 5,760 CUDA cores running at 875MHz with 24GB of GPU memory.  The machine also has 1TB of fast solid-state local scratch space.
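
Before queueing CUDA work on Archer (or on Executor's Titan V cards), it can help to confirm which GPUs a node actually exposes.  The sketch below is generic CUDA C against the standard runtime API, not a site-specific tool; compile it with nvcc (e.g. nvcc query.cu -o query).

/* Device query via the CUDA runtime API: prints each GPU's name,
 * memory size, multiprocessor count, and compute capability. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
        fprintf(stderr, "no CUDA devices found\n");
        return 1;
    }
    for (int i = 0; i < n; i++) {
        struct cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        printf("GPU %d: %s, %zu MB, %d SMs, compute capability %d.%d\n",
               i, p.name, p.totalGlobalMem >> 20,
               p.multiProcessorCount, p.major, p.minor);
    }
    return 0;
}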

Beowulf Clusters

Ultron:  Ultron consists of 12 1U compute nodes and a 3U head node with solid-state storage.  Each compute node has dual 14-core 2.4GHz Intel Xeon E5-2680 CPUs with 256GB of RAM.  All compute nodes have 512GB SSDs as local scratch space.  One node contains four NVIDIA Tesla K80 GPUs and another contains four NVIDIA Titan V GPUs for CUDA-accelerated applications.  Cluster communication is over 56Gb/s FDR InfiniBand.  In total there are 320 Xeon CPU cores with 2.56TB of RAM, and 20,224 GPU/CUDA cores with 96GB of GDDR5.  The cluster has a theoretical peak performance of 28.212 TFLOPS for CPUs and GPUs combined.
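
Unlike the SMP machines, the clusters are distributed-memory systems: each node has its own RAM, and processes coordinate over the interconnect, typically with MPI.  The sketch below is a minimal, generic MPI example (a midpoint-rule estimate of pi); the build line and rank count are assumptions, and the problem itself is purely illustrative.

/* Each rank integrates its own strips of 4/(1+x^2) on [0,1]; the
 * partial sums are combined over the network with MPI_Reduce.
 * Build/run (assumed):  mpicc -O2 pi.c -o pi && mpirun -np 28 ./pi */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long n = 100000000;                 /* number of strips (illustrative) */
    double h = 1.0 / n, local = 0.0;
    for (long i = rank; i < n; i += size) {
        double x = (i + 0.5) * h;
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    double pi = 0.0;
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("ranks=%d  pi~=%.12f\n", size, pi);

    MPI_Finalize();
    return 0;
}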

DS2 (Deathstar II):  DS2 consists of 15 1U compute nodes and a 3U head node. Ten compute nodes have dual 12-core 2.6GHz AMD Opteron 6344 CPUs with 64GB of RAM, while the other five have quad-socket 12-core 2.1GHz AMD Opteron 6172 CPUs with 192GB of RAM. All compute nodes have 400GB hard drives as system/scratch space. Two of the 24-core nodes also contain one NVIDIA GTX 670 GPU card for CUDA calculations. The cluster is tied together with 10-gigabit Ethernet running at full duplex with jumbo frames enabled. In total there are 480 Opteron cores and 1.6TB of RAM. The cluster has a theoretical peak performance of 2.256 TFLOPS for the CPUs alone.
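
Because DS2's nodes are heterogeneous (24 cores on the dual-socket nodes, 48 on the quad-socket ones), a common pattern is hybrid MPI+OpenMP: one MPI rank per node, with OpenMP threads filling however many cores that node has.  The sketch below shows only the skeleton of that pattern, with placeholder work; the build line is an assumption.

/* Hybrid skeleton.  Build (assumed):  mpicc -O2 -fopenmp hybrid.c -o hybrid
 * Launch one rank per node; OpenMP adapts to each node's core count. */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int provided;
    /* FUNNELED: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    long local = 0;
    #pragma omp parallel reduction(+:local)
    {
        /* ... this node's threaded share of the work goes here ... */
        local += 1;                     /* placeholder: one unit per thread */
    }

    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("threads across all ranks: %ld\n", total);

    MPI_Finalize();
    return 0;
}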