As a leader in high performance computing and networking, OSC is a vital resource for Ohio's scientists and engineers. OSC’s cluster computing capabilities make it a fully scalable center, with mid-range machines that match those found at National Science Foundation centers and other national laboratories.
OSC provides statewide resources to help researchers make discoveries in a vast array of scientific disciplines. Beyond providing shared statewide resources, OSC works to create a user-focused, user-friendly environment for our clients.
Collectively, OSC supercomputers provide a peak computing performance of 7.5 Petaflops (PF). The center also offers approximately 16 Petabytes (PB) of disk storage capacity distributed over several file systems, plus more than 14 PB of available backup tape storage (with the ability to easily expand to over 23 PB).
Technical Specifications
- Pitzer Cluster: A 10,240-core Dell Intel Xeon Gold 6148 + 19,104-core dual Intel Xeon 8268 machine
  - 224 nodes have 40 cores per node and 192 GB of memory per node
  - 340 nodes have 48 cores per node and 192 GB of memory per node
  - 32 nodes have 40 cores, 384 GB of memory, and 2 NVIDIA Volta V100 GPUs
  - 42 nodes have 48 cores, 384 GB of memory, and 2 NVIDIA Volta V100 GPUs
  - 4 nodes have 48 cores, 768 GB of memory, and 4 NVIDIA Volta V100s with 32 GB GPU memory and NVLink
  - 4 nodes have 80 cores and 3.0 TB of memory for large Symmetric Multiprocessing (SMP) style jobs
  - Theoretical system peak performance of 3.9 petaflops
- Owens Cluster: A 23,392-core Dell Intel Xeon E5-2680 v4 machine
  - 648 nodes have 28 cores per node and 128 GB of memory per node
  - 16 nodes have 48 cores and 1.5 TB of memory for large Symmetric Multiprocessing (SMP) style jobs
  - 160 nodes have 28 cores, 128 GB of memory, and 1 NVIDIA Pascal P100 GPU
  - Theoretical system peak performance of 1.5 petaflops
- Ascend Cluster: A 2,304-core Dell AMD EPYC™ machine
  - 24 nodes have 88 usable cores, 921 GB of usable memory, and 4 NVIDIA A100 GPUs per node
  - Theoretical system peak performance of 2.0 petaflops
- Cardinal Cluster: A 39,312-core, 378-node Dell Intel CPU Max 9470 HBM machine
  - 326 nodes have 104 cores (96 usable), 128 GB HBM, and 512 GB DDR5 of memory per node
  - 32 nodes have 104 cores (96 usable), 1 TB of memory, and 4 NVIDIA Hopper H100 GPUs with 94 GB memory and NVLink per node
  - 16 nodes have 104 cores (96 usable), 128 GB HBM, and 2 TB DDR5 of memory for large Symmetric Multiprocessing (SMP) style jobs per node
  - 4 login nodes have 104 cores (96 usable), 128 GB HBM, and 1 TB DDR5 of memory per node
  - Theoretical system peak performance of 10.5 petaflops
- GPU Computing: All OSC systems now support GPU computing. Specific information is given on each cluster's page, and a brief device-query sketch follows this list.
  - Owens: 160 NVIDIA Tesla P100s (one each on 160 nodes)
  - Pitzer: 64 NVIDIA Volta V100s (two each on 32 nodes); 84 NVIDIA Volta V100s (two each on 42 nodes); 16 NVIDIA Volta V100s and NVLink (four each on 4 nodes)
  - Ascend: 96 NVIDIA A100s and NVLink (four each on 24 nodes)
  - Cardinal: 128 NVIDIA H100s and NVLink (four each on 32 nodes)
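For users new to these systems, a small CUDA device query is a convenient way to confirm what a GPU node actually exposes once a job lands on it. The sketch below is illustrative only: the file name, the compile command, and the assumption that a CUDA toolkit is already available in your environment (for example through the center's software modules) are our assumptions, not OSC-specific instructions; see each cluster's page for the supported toolchain and batch details.

```cuda
// gpu_query.cu -- minimal sketch: list the GPUs visible on the current node.
// Assumes a CUDA toolkit is available in the environment; the file name and
// compile command are illustrative, not OSC-prescribed.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceCount failed: %s\n",
                     cudaGetErrorString(err));
        return 1;
    }
    std::printf("GPUs visible on this node: %d\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // totalGlobalMem is reported in bytes; convert to GiB for readability.
        std::printf("  GPU %d: %s, %.1f GiB memory, compute capability %d.%d\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    prop.major, prop.minor);
    }
    return 0;
}
```

Compiled with, for example, `nvcc gpu_query.cu -o gpu_query` and run inside a GPU job, it should report four devices on a Cardinal or Ascend GPU node, two on most Pitzer GPU nodes, and one on an Owens GPU node, matching the counts listed above.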
Getting Started
To request a fully subsidized account, you must be a full-time faculty member or research scientist at an Ohio college or university. Graduate students, postdoctoral researchers, lab members and other colleagues seeking access to OSC resources must be working with an eligible principal investigator.
- To get an account, check out our Applying for Academic Accounts page.
- Review OSC’s available software information.
- Review OSC’s user policies.