

Planck Cluster User Guide

Accessing Planck

Within the USC network:
  • Log in using any SSH client to planck.psc.sc.edu with the login name and password you received when your compute account was created. Connections use the default SSH port, 22.
  • Example: ssh user_name@planck.psc.sc.edu
Outside the USC network:
  • To access Planck from home (or anywhere outside the USC network) you will need to install a VPN client. This requires Java and a web browser (Firefox, Internet Explorer, Safari, or Google Chrome) installed on your local computer.
  • To get the VPN client, go to the USC website. For students the URL is https://sslvpn.sc.edu/students; for faculty and staff it is https://sslvpn.sc.edu/facstaff
There are several steps for getting the VPN client installed:
  1. Enter your network username and password, then click Sign In.
  2. Wait while the initial setup begins.
  3. Click Start.
  4. Wait for the VPN client to launch for the first time.
  5. A new window will pop up. Once the status shows connected, you have successfully logged into the VPN client.
  6. If you want to terminate the VPN session, right-click on the Juniper icon in your system tray and select Sign Out.
How to set up password-less SSH access (useful for automating and scripting runs):
On your first login a script configures this for you automatically; if you ever need to reconfigure it, follow the steps below.
In your terminal type:
  • ssh-keygen -t rsa
You will then see prompts like the following:
  • Generating public/private rsa key pair.
  • Enter file in which to save the key (/home/user_id/.ssh/id_rsa): press ENTER
  • Enter passphrase (empty for no passphrase): press ENTER
  • Enter same passphrase again: press ENTER
  • Your identification has been saved in /home/user_id/.ssh/id_rsa
  • Your public key has been saved in /home/user_id/.ssh/id_rsa.pub
  • The key fingerprint is:
Go to the .ssh directory by typing
  • cd .ssh
Copy the public key into authorized_keys by typing
  • cp id_rsa.pub authorized_keys
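The same setup can also be done in a few commands; a minimal sketch, assuming OpenSSH and that you do not already have a key pair you want to keep (the chmod step is a common precaution, since SSH ignores keys with overly open permissions):
  • ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa          # generate a key pair with an empty passphrase
  • cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # append the public key (unlike cp, this keeps existing keys)
  • chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys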

Setting Up User Environment

Planck is one of the HPC clusters that uses environment modules to set up the user environment. The module system makes software and its related environment settings easily available. Feel free to contact cluster support to request additional modules for software you use, so that it becomes available to everyone.
To see which software is available, or which modules you currently have loaded, execute:
  • module avail (lists all available modules) or module list (lists the modules currently loaded)
To unload all the modules (and clean all the settings) execute:
  • module purge
To load (add) a specific module execute:
  • module load MODULE_NAME or module add MODULE_NAME
If you plan to use a specific module every time you log in, put “module load MODULE_NAME” into your ~/.bashrc file (see the sketch after the example below).
Example: the following line adds the Intel compiler and the Open MPI runtime to your environment
  • module load intel/12.0.4 openmpi/143-intel
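For example, a ~/.bashrc that loads these modules on every login could contain lines like the following sketch (the module names are only the example above; use whichever ones you actually need):
  • # added to ~/.bashrc
  • module load intel/12.0.4
  • module load openmpi/143-intel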

Compiling on Planck

Now that you have logged in to the cluster, set up your environment, and loaded the modules you need (e.g. Intel compilers, MPI), what comes next?
You will first need to learn how to compile your code.
This section provides an overview of compiling source code for serial and parallel (MPI, OpenMP) execution. Any of the compilers can be used either to compile only (with the -c option) or to both compile and link the code.
Compiling SERIAL code:
To compile your code you first need to make sure that a compiler module is loaded; this can be verified with the module list command. Below are several examples of compiling and linking SERIAL code with different compilers, followed by a short combined example.
Using Intel C compiler:
  • module load intel
  • icc -o code.exe code.c
Using Intel Fortran compiler:
  • module load intel
  • ifort -o code.exe code.f
Using GNU C compiler:
  • gcc -o code.exe code.c
Using GNU Fortran compiler:
  • gfortran -o code.exe code.f
Using Portland Group (PGI) C/Fortran compilers:
  • module load pgi
  • pgcc -o code.exe code.c
  • pgfortran -o code.exe code.f
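Putting it together, a typical serial build on a login node might look like the following sketch (code.c is a placeholder source file and -O2 is just a common optimization flag):
  • module load intel
  • module list                  # confirm the compiler module is loaded
  • icc -O2 -o code.exe code.c   # compile and link
  • ./code.exe                   # short test run; long runs should go through the queue (see Running on Planck)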
Compiling MPI code:
To compile a PARALLEL (MPI) version of your code you need an MPI module loaded. For example, to load the Open MPI runtime into your environment, load the module named openmpi/143-intel. Below are several examples of compiling and linking MPI source code.
  • module load intel/12.0.4 openmpi/143-intel
With Intel C compiler:
  • mpicc -o code.exe code.c
With Intel Fortran 90 compiler:
  • mpif90 -o code.exe code.f90
Compiling OpenMP code:
Since each of Planck's nodes has 12 cores, applications can run in a shared-memory model within a single node. The OpenMP compiler options enable SMP support on a node. Below are several examples, followed by a note on setting the thread count.
Using Intel Fortran compiler with OpenMP:
  • module load intel/12.0.4
  • ifort -o code.exe code.f -openmp
Using MPI Intel Fortran compiler with OpenMP:
  • module load intel/12.0.4
  • mpif77 -o code.exe code.f -openmp
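At run time, the number of OpenMP threads is typically controlled with the OMP_NUM_THREADS environment variable; a minimal sketch (12 matches the per-node core count, but any value up to that works):
  • export OMP_NUM_THREADS=12
  • ./code.exe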

 

Linking Libraries

Several libraries are available on Planck, including the Intel MKL libraries, which provide highly optimized mathematical packages and functions. It is useful to know how to link these libraries so that your code builds and runs properly. Below are several guidelines for linking libraries.
Use the -l option to link in a library. For example, using the MPI C compiler wrapper:
  • mpicc code.c -llibrary_name
Here -llibrary_name tells the linker to look for a file named liblibrary_name.so (or liblibrary_name.a) in its library search path. If the library lives in a directory the linker does not search by default, add that path explicitly with the -L option:
  • mpicc code.c -L/mydirectory/lib -llibrary_name
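As a concrete sketch (libexample and /mydirectory/lib are placeholder names), when linking a shared library from a non-standard location you typically also need that directory on LD_LIBRARY_PATH at run time:
  • mpicc code.c -L/mydirectory/lib -lexample -o code.exe       # links against /mydirectory/lib/libexample.so
  • export LD_LIBRARY_PATH=/mydirectory/lib:$LD_LIBRARY_PATH    # so the loader can find it when code.exe runs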
Loading MKL libraries:
Before linking against MKL, make sure the corresponding MKL module is loaded. The module system does this for you:
  • module load intel/12.0.4
This automatically sets an environment variable, MKL_ROOT, which points to the MKL libraries installed on Planck.
Several factors affect how your code should link against MKL: the compiler you use, the processor architecture, dynamic versus static linking, whether your code is sequential or multi-threaded, and whether you need extra linear algebra packages such as LAPACK, BLAS, or ScaLAPACK.
We suggest using the Intel MKL Link Line Advisor, which can be found at the following URL:
http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/
The advisor generates a link line that can then be copied into your Makefile.
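For example, a sequential, dynamically linked, 64-bit build with the Intel compiler typically uses a link line similar to the following sketch (verify it against the advisor output for your exact setup; MKL_ROOT is the variable set by the module as described above):
  • icc code.c -L${MKL_ROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -o code.exe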
 

Running on Planck

The SGE Queue System
Sun Grid Engine (SGE) is the resource management system used on the Planck cluster. SGE handles basic batch processing operations such as job submission, job monitoring, and job control, and provides features such as more efficient use of compute time and fair, optimal sharing of cluster resources.
Job submission:
SGE provides a qsub command to submit job scripts to the cluster.
  • qsub job_script
Job deletion:
  • qdel job_ID
where job_ID is the identifier of a submitted job (printed by qsub and shown in qstat output)
Checking status of submitted job:
  • qstat -u user_ID
where user_ID is your login name
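A typical submit-monitor-delete cycle therefore looks like the following (12345 is a hypothetical job ID; qsub prints the real one when the job is accepted):
  • qsub job_script      # submit the job and note the printed job ID
  • qstat -u user_ID     # check the job's status in the queue
  • qdel 12345           # remove the job if it is no longer needed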
Running a serial job:
Here is an example of a script for a serial job requesting 1 CPU, 2 GB of memory, and 2 hours of runtime.
  • #!/bin/bash
  • #$ -N jobname
  • #$ -l mf=2gb
  • #$ -l h_rt=2:00:00
  • #$ -j y
  • #$ -q verylong.q
  • #$ -M user_ID@sc.edu
  • ./code.exe
Here verylong.q is the name of the queue you submit to, and code.exe is the executable you run (make sure to provide the correct path to it).
Running a parallel job:
Below is an example of a script for a parallel job requesting 16 CPUs (on two nodes), 22 GB of memory, and 5 minutes of runtime.
  • #!/bin/bash
  • #$ -pe 8way 16
  • #$ -l mf=22gb
  • #$ -cwd
  • #$ -S /bin/bash
  • #$ -q normal.q
  • #$ -l h_rt=00:05:00
  • #$ -M user_ID@sc.edu
  • ## Set up your environment
  • source /share/apps/modules/sge-modules.sh
  • module add intel/12.0.4 openmpi/143-intel
  • ## Run your parallel code
  • mpirun -np $NSLOTS ./code.exe > output
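To submit the script, save it to a file (for example parallel_job.sh, a placeholder name) and pass it to qsub:
  • qsub parallel_job.sh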