Hardware Facilities

Currently, the Center operates and maintains an IBM RS/6000 SP4 supercomputer, a high-performance AMD Opteron Linux cluster (from TeamHPC), a SUN Fire V40z Linux cluster, a dual-processor SGI Octane2 graphics workstation, and five IBM RS/6000 standalone workstations. All of them are connected to Emory's Ethernet and networked together through NFS. Long, CPU- and I/O-intensive jobs are handled by IBM's batch queue manager, LoadLeveler. From the user's point of view, these services present an almost "one-machine" image: user accounts are uniform on the SP and Linux nodes, with the same user ID, password, and home directory structure. Users submit their jobs, and LoadLeveler dispatches each job to the first available node or workstation whose resources match the job's requirements.

The IBM machines run the AIX 5.3 operating system, which includes the following system software:

- Fortran compiler v.7.1, which supports 64-bit architectures and the f77, f90, and f95 standards;
- High Performance Fortran (HPF) v.1.1;
- C and C++ compilers v.5.0;
- the object-oriented programming language Java, v.1.1.8;
- IBM's Engineering and Scientific Subroutine Library (ESSL) v.3.2 and Parallel ESSL v.2.2;
- the Linear Algebra Package (LAPACK), which provides routines for solving systems of simultaneous linear equations, eigenvalue and singular value problems, and more;
- the Parallel Operating Environment (POE), including the MPI library, and LoadLeveler v.2.2.

The HPC machines run SuSE 10 Enterprise Linux with all the standard system libraries. In addition, the Intel, Open MPI, and PathScale 3.1 (trial subscription) compilers have been installed so far.
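To illustrate how batch work reaches the machines, a serial LoadLeveler job is described in a small command file of `# @` keyword lines and handed to the queue manager. The sketch below uses hypothetical values: the class name, job name, and program path are placeholders, not the Center's actual queue configuration.

```shell
#!/bin/sh
# Minimal LoadLeveler command file (sketch; class and paths are hypothetical).
# @ job_name  = example_run
# @ job_type  = serial
# @ class     = long
# @ output    = example_run.$(jobid).out
# @ error     = example_run.$(jobid).err
# @ queue

# Commands below the keyword lines run on the node LoadLeveler selects.
./my_program
```

The file is submitted with `llsubmit example.cmd`; `llq` lists queued jobs, and `llcancel` removes one.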
The SUN Fire machines run SuSE 9.0 and feature, among the usual Linux distribution software, the Portland Group (PGI) compilers version 6.1 (pgf77, pgf90, pgcc), which support 64-bit architectures; the LAPACK library provided by PGI; and an MPI library for parallel programming.

In brief:

I. The TeamHPC Linux cluster (called Wind) consists of a 2.8 GHz dual-socket, dual-core master server with 4 GB of RAM and 4 x 146 GB of disk for system and user data, plus 32 compute nodes, each a 3.0 GHz dual-socket, dual-core machine with 8 GB of RAM and a 146 GB scratch disk. The master node is open to users for low-intensity tasks, such as graphics visualization and small code testing. The compute nodes are used solely for long, CPU-intensive production jobs (such as molecular dynamics, Monte Carlo simulations, electronic structure calculations, and more), which are distributed by the PBS batch queuing system. On this cluster, PBS works together with IBM's LoadLeveler queuing system. The nodes are connected by a Gigabit Ethernet switch.

III. Our IBM RS/6000 POWER4+ (SP4) system consists of six 4-way nodes
put together into a single frame. Each node is equipped with four 1.2 GHz processors, two separate 12K RPM SCSI hard disks for a total of 64 GB, and 2 GB of RAM. The nodes are connected to the control workstation; nodes and control workstation operate at a network speed of 100 Mbps. Along with Wind and Fire, the SP4 is among the most heavily used systems in the Center. Typical I/O-bound applications include large basis set coupled cluster and multi-reference configuration interaction jobs.
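On the Wind cluster, a production job is typically wrapped in a short shell script whose `#PBS` directives request resources before PBS dispatches it to a compute node. The sketch below uses hypothetical values: the job name, node and walltime requests, and program path are placeholders, not the Center's actual limits.

```shell
#!/bin/sh
# Minimal PBS submission script (sketch; resource values and paths are hypothetical).
#PBS -N md_example            # job name
#PBS -l nodes=1:ppn=4         # one node, four processors per node
#PBS -l walltime=24:00:00     # wall-clock limit
#PBS -j oe                    # merge stdout and stderr into one file

# PBS starts the job in the home directory; change to the submission directory.
cd "$PBS_O_WORKDIR"
./md_simulation
```

The script is submitted with `qsub md_example.sh`; `qstat` shows queue status, and `qdel` cancels a job.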