Getting an account
more to come...
ssh configuration
You can add the following lines to ~/.ssh/config
on your local machine:
Host aurora aurora.jpl.nasa.gov
    HostName aurora.jpl.nasa.gov
    User YOURUSERNAME
    HostKeyAlias aurora.jpl.nasa.gov
    HostbasedAuthentication no
and replace YOURUSERNAME with your JPL username.
Once this is done, you can connect to aurora by simply running:
ssh aurora
(No passwordless ssh connection is possible, sorry...)
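With this configuration in place, other ssh-based tools pick up the short name as well; for example (the file name is just an example):

scp myfile.txt aurora:~/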
Environment
On aurora, add the following lines to ~/.bash_profile:
export ISSM_DIR=PATHTOTRUNK
source $ISSM_DIR/etc/environment.sh
source /usr/share/Modules/init/bash
module load intel/cluster-toolkit-2013.5.192
module load apps/matlab-2016b
export MATLAB_DIR="/opt/matlab/R2016b/"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/opt/intel/impi/4.1.3/intel64/lib/"
Log out and log back in to apply this change.
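To confirm that the environment loaded correctly, a quick sanity check along these lines may help (a sketch; the expected results assume the paths above):

echo $ISSM_DIR    # should print the path to your trunk
module list       # should include intel/cluster-toolkit-2013.5.192 and apps/matlab-2016b
which matlab      # should resolve under /opt/matlab/R2016b/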
Installing ISSM on aurora
aurora can be used either to run jobs launched from your local machine or with MATLAB running directly on the cluster. Check out ISSM and install the following external packages (a sketch of the corresponding install commands follows the list):
- autotools (the version provided is too old)
- PETSc (use the aurora script and follow the instructions; you will need to submit a job and compile PETSc manually; do not run make test, as it will not work on the cluster)
- m1qn3
- triangle (install-linux64.sh)
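The usual pattern is to run each package's installation script from its directory under $ISSM_DIR/externalpackages, re-sourcing the environment as you go. A rough sketch; apart from triangle's install-linux64.sh, the script names below are illustrative, so check each directory for the actual one:

cd $ISSM_DIR/externalpackages/autotools && ./install-linux.sh   # illustrative name
source $ISSM_DIR/etc/environment.sh
cd $ISSM_DIR/externalpackages/petsc && ./install-aurora.sh      # use whichever aurora script is provided
source $ISSM_DIR/etc/environment.sh
cd $ISSM_DIR/externalpackages/m1qn3 && ./install.sh             # illustrative name
cd $ISSM_DIR/externalpackages/triangle && ./install-linux64.sh
source $ISSM_DIR/etc/environment.sh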
Use the following configuration script (adapt to your needs):
./configure \
   --prefix=$ISSM_DIR \
   --with-matlab-dir=$MATLAB_DIR \
   --with-m1qn3-dir=$ISSM_DIR/externalpackages/m1qn3/install \
   --with-metis-dir=$ISSM_DIR/externalpackages/petsc/install \
   --with-petsc-dir=$ISSM_DIR/externalpackages/petsc/install \
   --with-scalapack-dir=$ISSM_DIR/externalpackages/petsc/install \
   --with-mpi-include="/opt/intel/impi/4.1.3/intel64/include/" \
   --with-mpi-libflags="-L/opt/intel/impi/4.1.3/intel64/lib/ -lmpi -lmpiif" \
   --with-petsc-arch=$ISSM_ARCH \
   --with-mkl-libflags="-L/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64/ -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lmkl_rt -lpthread -lm" \
   --with-mumps-dir=$ISSM_DIR/externalpackages/petsc/install \
   --with-fortran-lib="-L/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64/ -lifcore -lifport -lifcoremt" \
   --with-triangle-dir=$ISSM_DIR/externalpackages/triangle/install \
   --with-cxxoptflags="-O3" \
   --with-vendor=intel-aurora \
   --enable-development \
   --enable-debugging
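Once configure completes, build and install as usual (the parallel job count is just an example):

cd $ISSM_DIR
make -j 8 install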
aurora_settings.m
To launch jobs from your local computer, you have to add a file named aurora_settings.m in $ISSM_DIR/src/m of your local ISSM install, with your personal settings:
cluster.login='schlegel';
cluster.codepath='/home/schlegel/issm/trunk/bin';
cluster.executionpath='/home/schlegel/issm/trunk/execution';
Use your own username for the login, and enter your code path and execution path. These settings will be picked up automatically by MATLAB when you do md.cluster=aurora().
Note that ISSM creates temporary binary files in the executionpath; they can be removed once the job is complete. For this reason, you can set this path to somewhere on the /aurora_nobackup/issm or /halo_nobackup/issm filesystems, which provide unlimited temporary storage on these systems.
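As a quick check that the settings file is picked up, something along these lines should work in your local MATLAB session (a sketch; field names as shown above):

% aurora() reads aurora_settings.m automatically
md.cluster = aurora();
disp(md.cluster.login)           % should print your JPL username
disp(md.cluster.executionpath)   % should print your execution path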
Running jobs on aurora
On aurora, you can use up to 256 cpus. The more nodes and the longer the requested time, the longer you will have to wait in the queue, so choose your settings wisely: https://hpcs.jpl.nasa.gov/Users/HPC_resources.html
If you are running from your local machine:
md.cluster=aurora();
This will default to 1 node, 24 cpus.
To change the number of cpus use:
md.cluster.cpuspernode=3;
Before you run your job, make sure to open a port and enter the port number in md.cluster. Here is a handy alias:
alias auroratunnel='ssh -L 1070:localhost:22 aurora'
That will open port 1070, which you can then use in ISSM so that you don't need to enter your password. Make sure to use the same port number in md.cluster.
Or, if you are launching from MATLAB on aurora/halo, make sure to change the cluster name so that ISSM knows it does not need to scp files.
md.cluster.name = oshostname();
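Putting it all together, a typical remote launch might look like this (a sketch: the stressbalance solve and the cpuspernode/port values are just examples, and it assumes the auroratunnel alias above is running in another terminal):

% on your local machine, with the ssh tunnel open
md.cluster = aurora();
md.cluster.cpuspernode = 24;   % one full node
md.cluster.port = 1070;        % must match the tunnel's local port
md = solve(md, 'Stressbalance');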