High Performance Computing Center
Attention: Hrothgar will be offline from 10/02/2014 to 10/16/2014 for storage system replacement and file migration.
HROTHGAR -- TTU HPCC's NEWEST COMPUTATIONAL RESOURCE
Texas Tech University's High Performance Computing Center (HPCC) was established in 1999 to promote research and teaching on campus by integrating leading-edge high performance computing and visualization resources for the faculty, staff, and students of Texas Tech University.
The HPCC supports research computing: our facilities are designed for research computing equipment, not for production computing. The HPCC provides consulting and assistance to campus researchers with experimental software and/or hardware needs, as well as training in parallel and grid computing (as used at the facility) and administration of local high performance systems. The HPCC also serves as a liaison among the various teams engaged in research, and works to support, configure, and port applications to HPCC resources.
The HPCC's hardware is distributed between two primary locations. The main production cluster, Hrothgar, along with its associated storage systems and several smaller resources, is located on campus in the Experimental Sciences Building. Other clusters are located off campus. Public nodes are available to any TTU researcher. Private nodes are owned by individual researchers and administered by the HPCC; Antaeus private nodes are also available for public non-priority use.
The main Hrothgar cluster is a node-based system. It has 640 nodes (7,680 cores) for parallel jobs, 128 nodes (1,024 cores) for serial jobs, and 46 private nodes. Each of the parallel nodes contains two Westmere 2.8 GHz 6-core processors with 24 GB of main memory. The serial nodes contain two Nehalem 3.0 GHz 4-core processors with 16 GB of main memory. The parallel and private nodes are connected through double-data-rate InfiniBand networking that provides full cross-section bandwidth among the parallel nodes. The parallel nodes have a peak rating of 86 teraflops and a recorded high performance LINPACK rating of 68 teraflops. The serial nodes are interconnected with Gigabit Ethernet.
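The quoted 86-teraflop peak rating follows directly from the node count, cores per node, and clock speed. A minimal sketch of the arithmetic, assuming 4 double-precision floating-point operations per cycle per core (the standard figure for SSE-era Intel processors such as Westmere):

```python
# Back-of-the-envelope check of Hrothgar's quoted peak rating.
# Assumption: 4 double-precision flops per cycle per core (SSE-era Intel).

def peak_teraflops(nodes, cores_per_node, clock_ghz, flops_per_cycle=4):
    """Theoretical peak performance of a homogeneous cluster, in teraflops."""
    return nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0

# Parallel partition: 640 nodes x 12 cores x 2.8 GHz
parallel_peak = peak_teraflops(nodes=640, cores_per_node=12, clock_ghz=2.8)
print(f"Parallel partition peak: {parallel_peak:.1f} TFLOPS")  # ~86.0 TFLOPS
```

The measured LINPACK result of 68 teraflops corresponds to roughly 79% of this theoretical peak, a typical efficiency for an InfiniBand-connected cluster of this era.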
The main Hrothgar cluster has a DataDirect Network storage system capable of providing storage for up to one petabyte of data. The cluster file system is based on Lustre and provides a shared file system connected to most of the central clusters run by the HPCC. The file system uses Infiniband to connect the parallel nodes on Hrothgar, while using Gigabit Ethernet to connect to the rest of the systems.
The HPCC central resources also include Janus, a 22 node cluster running Windows HPC server. Eighteen of these nodes are the same model as the serial nodes on Hrothgar and use Gigabit Ethernet.
The Antaeus and Weland clusters at the off-campus Reese data center support local, regional, and international grid computing. Antaeus has 24 public nodes and 40 private nodes, for a total of 512 processing cores; all nodes are the same model as the serial nodes on Hrothgar. An additional local resource is TechGrid, an opportunistic-processing resource based on Condor. In summer 2013, the TTU Library contributed 1,000 3.2 GHz cores of computing power to TechGrid, so TechGrid now consists of over 2,000 campus processors, supplemented by the Weland cluster, which has a total of 128 cores running at 2.53 GHz.
TTU also participates in a range of local, regional, national, and international grid and cloud computing projects, both to provide our user community with access to and training on these resources and to contribute resources and expertise to the operation of these projects.
For more information about services, equipment, or grid computing efforts at Texas Tech University, please email email@example.com or contact our office directly at (806) 742-4350.