Tizard supercomputer

The Tizard supercomputer can be used for complex data processing and analysis jobs that standard desktop computers would find difficult or impossible to perform, and it enables users to run many processing jobs with different parameters or input files more quickly.

Researchers from diverse fields use our supercomputing resources to analyse vast genomic databases, study the interactions of complex molecular structures, model the interactions of fundamental particles in the nucleus, and run complicated statistical queries.

Since the supercomputer is a shared resource with many users, access is controlled by a queueing system. To have a job processed by the supercomputer, you need to submit it to the processing queue using a batch file that specifies the program you want to run; a minimal example is sketched below. We can show you how to produce this file and talk with you about which supercomputer is best suited to your processing needs.
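
By way of illustration, the sketch below shows what such a batch file might look like. It assumes a PBS/Torque-style scheduler, and the job name, resource requests and program name are placeholders rather than Tizard-specific values, so please check the user guide or ask us before submitting.

    #!/bin/bash
    # Minimal PBS-style batch file (illustrative only -- the job name,
    # resource requests and program below are placeholders).
    #PBS -N example_job            # name shown in the queue
    #PBS -l nodes=1:ppn=4          # request one node with four cores
    #PBS -l walltime=02:00:00      # maximum run time (hh:mm:ss)
    #PBS -l mem=8gb                # total memory for the job

    cd $PBS_O_WORKDIR              # start where the job was submitted from
    ./my_program input.dat         # the program you want the queue to run

A file like this is normally submitted with the qsub command, and the scheduler starts the job once the requested resources become free.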

Tizard user guide

Technical specifications

CPU cluster

  • 48 SGI compute nodes connected by a high-speed QDR InfiniBand network
  • Each node has 48 cores (4 AMD Opteron 6238 12-core 2.6 GHz CPUs) and 128 GB memory (2.7 GB per core)
  • A total of 2304 cores with a peak performance of 24 TFLOPS

For general-purpose computing. The cluster supports single-processor jobs, multi-core applications that need to run on a single node, and parallel programs that can utilise many cores across multiple compute nodes (a brief sketch follows below). If you require more than 4 GB of memory per core, you should use the big memory nodes. If you require only 8 cores or fewer, the Australian Research Cloud is your best option.
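
As a hypothetical illustration of such a multi-node parallel program, the short C example below uses MPI (the standard message-passing interface on clusters like this): each process simply reports its rank. It is a minimal sketch rather than Tizard-specific code; mpicc and mpirun are the conventional MPI build and launch tools, and the exact modules and commands on our systems may differ.

    /* Minimal MPI example: every process reports its rank.
       A generic sketch, not Tizard-specific code. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                        /* shut down cleanly */
        return 0;
    }

Compiled with mpicc and launched from a batch file with, for example, mpirun -np 96, the same binary would run across all the cores of two 48-core nodes.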

Big memory nodes

  • 1 Dell R910 server with 4 Intel Xeon E7-8837 8-core 2.66 GHz processors, 1 TB memory and 3 TB of local scratch disk
  • 1 Dell R810 server with 4 Intel Xeon E7-4830 8-core 2.13 GHz processors, 512 GB memory and 1.7 TB of local scratch disk

For applications that require relatively small numbers of cores and large memory per core.

Tesla GPU nodes

  • Each node has 4 NVIDIA Tesla M2090 GPUs (6 GB GPU memory per card), 2 x Intel Xeon L5640 6-core CPUs @ 2.26 GHz, and 96 GB memory
  • Each node provides 2.7 TFLOPS (single precision) from the GPUs (half of this for double precision)
  • 5 nodes, giving 13.5 TFLOPS total single precision (7 TFLOPS double precision)

For applications that have been ported to run on GPUs and need good double-precision performance, large GPU memory and error-correcting (ECC) GPU memory.

Consumer (GTX 580) GPU nodes

  • Each node has 4 GeForce GTX 580 GPUs (3 GB GPU memory per card), 2 x Intel Xeon L5630 4-core CPUs @ 2.13 GHz, and 24 GB memory
  • Each node provides 2.7 TFLOPS (single precision) from the GPUs (a quarter of this for double precision)
  • 12 nodes, giving 32 TFLOPS total single precision (8 TFLOPS double precision)

For applications that have been ported to run on GPUs, perform mostly single-precision calculations, and do not need large GPU memory or error-correcting (ECC) GPU memory.

Virtualisation server

  • 1 Dell R815 server with 4 AMD Opteron 6128 8-core processors, 256 GB memory and 3.6 TB of disk.

For hosting virtual machines supporting applications that require interactive access (e.g. using a GUI) and/or do not run on the operating systems used on the eRSA HPC systems.

History

The $700,000 machine was purchased with funds from an ARC Linkage Infrastructure, Equipment and Facilities grant. Tizard represents a big win for the South Australian research community.

The Tizard machine is named in memory of James Tizard, the founding CEO of SABRENet (2007-2011) and Director of eResearch SA (2009-10), who passed away in 2011.

Contact us to get started with HPC

  • “It would be impossible to do the type of research that we’re doing without them – it is a major factor in achieving our research outcomes.”  
    Associate Professor Con Doolan, School of Mechanical Engineering, University of Adelaide
  • “The supercomputing facilities at eResearch SA permit analysis of a host of interesting problems in evolutionary biology. It is the only computer system in SA that can perform certain complex calculations required to infer large evolutionary trees and associated patterns of evolution.”  
    Associate Professor Michael Lee, South Australian Museum

Our partners

University of South Australia
University of Adelaide
Flinders University