Hemisphere

Hemisphere is a 64-node cluster that provided the bulk of the CSC's processing capacity from 2003 to 2007. Each node contains dual 2.4 GHz Intel Xeon processors and 2 GB of memory. Because Hemisphere and Occam are no longer under warranty, we are not adding new accounts at this time.

Connecting to Hemisphere

All CSC accounts have access to Hemisphere. Simply use an SSH client to connect to hemisphere.cs.colorado.edu; to move files to or from Hemisphere, use an SCP client with the same address.
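
For example, using the standard OpenSSH command-line tools (username and results.dat are placeholders; substitute your own CSC username and file names):

ssh username@hemisphere.cs.colorado.edu
scp results.dat username@hemisphere.cs.colorado.edu:~/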

For more information on shared resources, such as where to store your files, see the Getting Started guide.

Specifications

System Name: Hemisphere
Online Date: March 2003
Interconnect: Gigabit/100 Mbps Ethernet
Total Compute Nodes: 64
Processors per Node: Dual Intel Xeon 2.4 GHz
RAM per Node: 2.0 GB

Statistics

Additional reports for Hemisphere can be found on Ganglia.

Documentation

Applications
Common Libraries
Compilers
MPI Libraries
PBS Queues

Applications

The following common applications are installed. If you would like another application installed for system-wide use, please contact the administration team.

Programming Languages

Application Location
Java 1.4.2 /opt/j2sdk1.4.2_12/bin/java
Python 2.4 /opt/python-2.4

Debugging and Performance

Application Location
PAPI /opt/papi-3.2.1

Other Popular Applications

Application Location
MATLAB 6.1 /opt/matlab-6.1/bin/matlab

Common Libraries

The following common libraries are installed. If you would like another library installed, please contact the administration team.

Library Location
NCAR Command Language (NCL) /opt/ncl/lib

Compilers

The following compilers are available on Hemisphere (a sample invocation appears after the tables):

GNU Compilers

Compiler Location
gcc 3.3.5 /usr/bin/gcc

Intel Compilers 9.1 (default)

Compiler Location
icc 9.1 /opt/intel/cc/9.1.039/bin/icc
ifort 9.1 /opt/intel/fc/9.1.033/bin/ifc

Portland Group Compilers (default)

Compiler Location
pgCC 5.2 /opt/pgi/linux86/5.2/bin/pgCC
pgcc 5.2 /opt/pgi/linux86/5.2/bin/pgcc
pgf77 5.2 /opt/pgi/linux86/5.2/bin/pgf77
pgf90 5.2 /opt/pgi/linux86/5.2/bin/pgf90
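
As an illustration, a C source file could be compiled with the Intel compiler by its full path (myprog.c and myprog are placeholder names):

/opt/intel/cc/9.1.039/bin/icc -O2 -o myprog myprog.c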

MPI Libraries

Hemisphere supports MPI using MPICH over Gigabit Ethernet. (Hemisphere formerly had a Dolphin SCI torus with Scali MPI for parallel processing, but this was retired when the machine exceeded its 3-year maintenance contract.)

MPICH

To use MPICH, first select an MPICH build in /opt. The gcc build usually works best, but if you have complex code that links C, C++, F77, and F90 together, you may need a build created with the same compiler you are using. Then use the following compilation directives (an example follows the table):

Feature Compiler Directive
Compilation Headers -I/opt/mpich-version/include
Linking Libraries -L/opt/mpich-version/lib -lmpich
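
For example, a C MPI program could be compiled against a gcc build of MPICH as follows (replace mpich-version with the actual build directory under /opt, and the file names with your own):

gcc -I/opt/mpich-version/include -o myprog myprog.c -L/opt/mpich-version/lib -lmpich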

To run an MPICH program on Hemisphere, use the following line in your PBS batch script file:

/opt/mpich-version/bin/mpirun -machinefile $PBS_NODEFILE <program and arguments>

To avoid difficulties, be sure to specify the full path to mpirun. If you run a program compiled against one MPI library with the mpirun command shipped with another MPI implementation, your program may fail to start, or every process may run as the only member of a one-task communicator.
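
As a sketch, a complete PBS batch script for an MPICH job might look like the following (the node counts, walltime, and program name are placeholders; adjust them for your job):

#!/bin/sh
#PBS -l nodes=4:ppn=2
#PBS -l walltime=01:00:00
cd $PBS_O_WORKDIR
/opt/mpich-version/bin/mpirun -machinefile $PBS_NODEFILE -np 8 ./myprog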

PBS Queues

The Hemisphere nodes are controlled by the PBS batch scheduling system. The queues are configured to support a large user community with varied job types. In particular, we support users debugging code who require short turnaround on small jobs, users running large parallel jobs, and users running large numbers of single-processor jobs for parameter studies.

To meet the demands of this job mix, the following queues are available: