Occam is a 27-node IBM blade cluster. Each node contains dual IBM PowerPC 970 processors and 2.5 GB of memory.

Since Hemisphere and Occam are no longer under warranty, we are not adding new accounts at this time.

Connecting to Occam

All accounts at the CSC have access to Occam. Simply use an SSH client to connect to occam.cs.colorado.edu. To move files to or from Occam, use an SCP client to connect to fileserver.cs.colorado.edu.

For more information on shared resources, such as where to store your files, see the Getting Started guide.

Important Note: Occam is an IBM PowerPC 970 system that supports compiling in both 32-bit and 64-bit modes. Make sure to specify the correct compiler options to compile and link in the desired mode (typically -m64 for the GNU compilers and -q64 for the IBM compilers). In addition, make sure to specify the correct libraries for linking; you must link with libraries built for the same bit mode. In most cases, the IBM xlf/xlc compilers produce the highest-performance 64-bit code.


Additional reports for Occam can be found on Ganglia.


System Name: Occam
Online Date: October 2004
Interconnect: Gigabit Ethernet
Total Compute Nodes: 27
Processors per Node: Dual IBM PowerPC 970
RAM per Node: 2.5 GB


Common Libraries
MPI Libraries
PBS Queues


The following common applications are installed. If you would like another application installed for system-wide use, please contact the administration team.

Programming Languages

Language Location
java 1.3.1 /usr/lib/java/bin/java
perl 5.8.3 /usr/bin/perl
python 2.4 /opt/python/2.4/bin/python

Common Libraries

The following common libraries are installed. If you would like another library installed, please contact the administration team.

Package Compiler Location
NETCDF IBM xl /opt/netcdf-xl


The following compilers are available on Occam in the following locations:

GNU Compilers

Compiler Location
gcc 4.1.2 /usr/bin/gcc
gcc 3.4.6 /usr/bin/gcc-3.4

The gcc compiler produces 32-bit code by default. You must specify the correct gcc compiler options to use 64-bit mode. Common options include the following:

CC="gcc -m64"
LD="ld -m elf64ppc -L/usr/lib64 -L/usr/X11R6/lib64/"
AS="gcc -c -m64"

IBM Compilers

Compiler Location
xlc 7.0 /opt/ibmcmp/vacpp/7.0/bin/xlc
xlf 9.1 /opt/ibmcmp/xlf/9.1/bin/xlf

MPI Libraries

Occam supports MPI via the MPICH and MPICH2 libraries, both of which run over Ethernet.


To use MPICH, use the following compilation directives:

Feature Compiler Directive
Compilation Headers -I /opt/mpich-version/include
Linking Libraries -L /opt/mpich-version/lib -lmpich

Note that you must specify the version of MPICH built for the compiler and bit mode you are using, such as xlf, gcc32, or gcc64.
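
For illustration, the directives above might be collected into a Makefile fragment like the following. The /opt/mpich-gcc64 path and the hello_mpi.c source name are assumptions for the sketch, not actual Occam paths; substitute the MPICH version matching your compiler and bit mode:

```makefile
# Hypothetical Makefile fragment for an MPICH build on Occam.
# MPICH points at the install matching your compiler and bit mode (path assumed).
MPICH   = /opt/mpich-gcc64
CC      = gcc -m64
CFLAGS  = -I $(MPICH)/include
LDFLAGS = -L $(MPICH)/lib -lmpich

hello_mpi: hello_mpi.c
	$(CC) $(CFLAGS) -o hello_mpi hello_mpi.c $(LDFLAGS)
```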

To run an MPICH program on Occam, use the following line in your PBS batch script file:

/opt/mpich-version/bin/mpirun -machinefile $PBS_NODEFILE <program and arguments>

To avoid difficulties, make sure to specify the full path to 'mpirun'. If you attempt to run a program compiled with one MPI library using the mpirun command shipped with another MPI implementation, your program may fail to run, or every process may execute as the only process in a one-task communicator.
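
Putting this together, a complete batch script might look like the following sketch. The /opt/mpich-gcc64 path, the node count, and the ./hello_mpi program are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of a PBS batch script for an MPICH job on Occam.
# The mpich-gcc64 path, node count, and ./hello_mpi program are assumed.
#PBS -N hello_mpi
#PBS -l nodes=4

cd $PBS_O_WORKDIR
/opt/mpich-gcc64/bin/mpirun -machinefile $PBS_NODEFILE ./hello_mpi
```

A script like this would be submitted with qsub; note that the full path to mpirun matches the MPICH version the program was compiled against, as described above.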


Coming soon

PBS Queues

The Occam nodes are controlled using the PBS batch scheduling system. Because Occam is less utilized than Hemisphere, the queues have no resource or walltime limitations. The following queues are available: