Sherlock

more to come

Download ISSM

svn --username XXX --password XXX  checkout http://issm.ess.uci.edu/svn/issm/issm/trunk-jpl

ssh configuration

You can add the following lines to ~/.ssh/config on your local machine:

Host sherlock login.sherlock.stanford.edu
   ControlMaster auto
   ControlPersist yes
   ControlPath ~/.ssh/%l%r@%h:%p
   HostName login.sherlock.stanford.edu
   User USERNAME

and replace USERNAME with your Sherlock username.

Once this is done, you can ssh to Sherlock by simply typing:

ssh sherlock

Note: to run ISSM on Sherlock, make sure you have an active ssh connection to Sherlock from your local machine. That way, ISSM will not ask for your password every time a job is submitted or results are downloaded. Also make sure your hostname is not too long, otherwise this will not work; this may happen when you use a wifi Internet connection (you can set your hostname manually on your local machine with sudo hostname NAME).
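
If you use the ControlMaster settings above, you can check whether a master connection is currently open (the sherlock alias comes from the ssh configuration above):

ssh -O check sherlock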

Environment

On Sherlock, add the following lines to ~/.bashrc:

module load system subversion
module load impi/2019
module load imkl/2019
export ISSM_DIR=PATHTOTRUNK
source $ISSM_DIR/etc/environment.sh

Log out and log back in to apply this change.
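
To verify that the environment is in place after logging back in, you can run, for example:

module list
echo $ISSM_DIR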

Installing ISSM on Sherlock

You need to install the following packages in this exact sequence (a sketch of the commands follows the list):

  • autotools (install.sh)
  • petsc (install-3.9-sherlock.sh)
  • m1qn3 (install.sh)
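
Assuming the standard externalpackages layout that the configure script below also relies on, the sequence looks roughly like this:

cd $ISSM_DIR/externalpackages/autotools && ./install.sh
cd $ISSM_DIR/externalpackages/petsc && ./install-3.9-sherlock.sh
cd $ISSM_DIR/externalpackages/m1qn3 && ./install.sh
source $ISSM_DIR/etc/environment.sh   # pick up the newly installed packages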

Use the following configuration script (adapt to your needs):

./configure \
   --prefix=$ISSM_DIR \
   --with-wrappers=no \
   --with-kml=no \
   --with-bamg=no \
   --with-metis-dir=$ISSM_DIR/externalpackages/metis/install \
   --with-petsc-dir=$ISSM_DIR/externalpackages/petsc/install \
   --with-m1qn3-dir=$ISSM_DIR/externalpackages/m1qn3/install \
   --with-mpi-include="/share/software/user/restricted/impi/2019//compilers_and_libraries_2019.0.117/linux/mpi/intel64/include" \
   --with-mpi-libflags="-L/data/apps/mpi/openmpi-1.8.3/gcc/4.8.3/lib -lmpicxx -lmpifort -lmpi" \
   --with-mkl-libflags="-L/share/software/user/restricted/imkl/2019/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread" \
   --with-mumps-dir=$ISSM_DIR/externalpackages/petsc/install/ \
   --with-scalapack-dir=$ISSM_DIR/externalpackages/petsc/install/ \
   --with-numthreads=16 \
   --with-fortran-lib="-L/usr/lib/gcc/x86_64-redhat-linux/4.8.5/ -lgfortran" \
   --enable-debugging 
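
Once the configure script has run successfully, build and install ISSM with the usual autotools steps (a sketch; the number of make jobs is just an example):

cd $ISSM_DIR
autoreconf -ivf                  # only needed if configure has not been generated yet
# run the ./configure command above, then:
make -j 8 install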

sherlock_settings.m

You have to add a file named sherlock_settings.m in $ISSM_DIR/src/m with your personal settings:

cluster.login='wchu28';
cluster.port=0;
cluster.codepath='/home/users/wchu28/trunk-jpl/bin/';
cluster.executionpath='/home/users/wchu28/trunk-jpl/execution/';

Use your own username for the login, and set codepath and executionpath accordingly. These settings will be picked up automatically by Matlab when you call md.cluster=sherlock().
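
As a quick check, you can call the constructor from the Matlab prompt without a semicolon; it should display the settings you just entered:

cluster=sherlock()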

Running jobs on Sherlock

On Sherlock, you can use up to 48 cores per node (partition ilg2.3). The more nodes and the longer the requested time, the longer you will have to wait in the queue. Each job can request at most 125GB of RAM, so choose your settings wisely:

md.cluster=sherlock('numnodes',1,'cpuspernode',8);
md.cluster.time=10;

to have a maximum job time of 10 minutes and 8 cores on one node. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results.
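
Putting it together, a typical submission from the Matlab prompt might look like the following sketch (md is assumed to be an already parameterized model, and 'Stressbalance' is just an example solution type):

md.cluster=sherlock('numnodes',1,'cpuspernode',8);
md.cluster.time=10;               % wall time in minutes
md=solve(md,'Stressbalance');     % submits the job and retrieves the results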

To check the status of your job and the queue you are using, type the following in a bash session on Sherlock:

squeue -u username

You can cancel your job manually by typing:

scancel JOBID

where JOBID is the ID of your job (shown in the Matlab session). Matlab also indicates the directory of your job, where you can find the files JOBNAME.outlog and JOBNAME.errlog. The outlog file contains the output that would appear if you were running your job on your local machine, and the errlog file contains the error messages if the job encounters an error.
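
For example, to look at those logs directly on Sherlock (replace USERNAME with your username; LAUNCHSTRING and JOBNAME are reported by Matlab, and the path follows the executionpath set in sherlock_settings.m):

cd /home/users/USERNAME/trunk-jpl/execution/LAUNCHSTRING/
tail JOBNAME.outlog JOBNAME.errlog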

If you want to load results from the cluster manually (for example, after an error caused by an internet interruption), look in the information Matlab gave you for the path $ISSM_DIR/execution/LAUNCHSTRING/JOBNAME.lock, copy the LAUNCHSTRING, and type in Matlab:

md=loadresultsfromcluster(md,'LAUNCHSTRING','JOBNAME');

Note: if md.settings.waitonlock>0 and you need to load the results manually (e.g., after an internet interruption), you must set md.private.runtimename='LAUNCHSTRING'; before calling loadresultsfromcluster, as in the sketch below.
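
A minimal recovery sketch, using the placeholders above:

md.private.runtimename='LAUNCHSTRING';                   % only needed if md.settings.waitonlock>0
md=loadresultsfromcluster(md,'LAUNCHSTRING','JOBNAME');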

slurm

A comparison of PBS to slurm commands can be found here: http://slurm.schedmd.com/rosetta.pdf

Useful commands:

Graphical overview of Sherlock usage:

sview

Get number of idle nodes:

sinfo --states=idle

See jobs of <username>:

squeue -u <username>

Get more information on jobs of user:

sacct -u <username> --format=User,JobID,account,Timelimit,elapsed,ReqMem,MaxRss,ExitCode

Get information on partition (here ilg2.3):

scontrol show partition=ilg2.3

Get sorted list of users on partition:

squeue  | grep -i ilg2.3 | awk '{print $4}' | sort | uniq -c | sort -rn