== Getting an account ==

New users have to ask the PIs (Eric Larour or Mathieu Morlighem) to request a NASA account for them if they don't have one already:
 - go to [https://www.nas.nasa.gov/hecc/portal/accounts] and select option 3 ("I want to request a NASA identity for one of my new users")
 - provide the information requested

Then, a NAS account has to be created by the new user:
 - go to [https://www.nas.nasa.gov/hecc/portal/accounts] and select option 2 ("I want to request an account for myself")
 - provide the information requested, using either GID = s1690 for Eric's group or GID = s1507 for Mathieu's group
 - the PI will receive an email to approve the request
 - all users must complete the NASA-mandatory Basic IT Security Training (this year's training is called "FY2016 CYBERSECURITY AND SENSITIVE UNCLASSIFIED INFORMATION AWARENESS TRAINING")

== Password-less ssh ==

Follow the steps outlined here: [http://www.nas.nasa.gov/hecc/support/kb/Setting-Up-SSH-Passthrough_232.html]

== Environment ==

Pleiades uses `csh`, not `bash`. Add the following to your `~/.cshrc`:
{{{
#!sh
#ISSM
setenv ISSM_DIR /home1/mmorligh/issm/trunk/
source $ISSM_DIR/etc/environment.csh

#Packages
module load comp-intel/2016.2.181
module load mpi-sgi/mpt
module load svn/1.6.21
module load cmake/2.8.12.1
}}}
Replace `ISSM_DIR` with the path to your own trunk. ''Log out and log back in'' to apply this change.

== Installing ISSM on Pleiades ==

Pleiades will ''only'' be used to run the code; you will use your local machine for pre- and post-processing, and you will never use Pleiades' MATLAB.

You can check out ISSM and install the following packages:
 - autotools
 - PETSc (use the pleiades script and ''follow'' the instructions: you will need to submit a job and compile PETSc manually; do not run `make test`, it will not work on the cluster)
 - m1qn3

For documentation of Pleiades, see here: http://www.nas.nasa.gov/hecc/support/kb/

Use the following configuration script (adapt it to your needs):
{{{
#!sh
./configure \
 --prefix=$ISSM_DIR \
 --with-wrappers=no \
 --with-petsc-dir="$ISSM_DIR/externalpackages/petsc/install" \
 --with-m1qn3-dir="$ISSM_DIR/externalpackages/m1qn3/install" \
 --with-mpi-include=" " \
 --with-mpi-libflags=" -lmpi" \
 --with-mkl-libflags="-L/nasa/intel/Compiler/2016.2.181/compilers_and_libraries_2016.2.181/linux/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm" \
 --with-metis-dir="$ISSM_DIR/externalpackages/petsc/install" \
 --with-mumps-dir="$ISSM_DIR/externalpackages/petsc/install" \
 --with-scalapack-dir="$ISSM_DIR/externalpackages/petsc/install" \
 --with-cxxoptflags="-O3 -axAVX" \
 --with-vendor="intel-pleiades" \
 --enable-development
}}}

== Installing ISSM on Pleiades with Dakota ==

For Dakota to run, you will need additional packages. In your `~/.cshrc`, load the following module in addition to those listed above:
{{{
#Packages
module load math/intel_mkl_64_10.0.011
}}}
You will still need to build autotools, PETSc, and m1qn3 (see the sketch below).
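As a minimal sketch of how these external packages are typically built, run the install script from each package directory (the generic `install.sh` names here are assumptions; check each directory under `$ISSM_DIR/externalpackages` for the Pleiades-specific script):
{{{
#!sh
# build each external package from its directory under externalpackages;
# the PETSc, boost, and dakota scripts to use are listed below
cd $ISSM_DIR/externalpackages/autotools && ./install.sh
cd $ISSM_DIR/externalpackages/m1qn3     && ./install.sh
}}}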
For PETSc, use the Dakota install script:
 - install-3.7-pleiades_dakota6.2.sh

In addition, you will also need to build the following external packages:
 - boost, install-1.55-pleiades_dakota6.2.sh
 - dakota, install-6.2-pleiades.sh

Finally, add the following to your configuration script:
{{{
 --with-boost-dir=$ISSM_DIR/externalpackages/boost/install \
 --with-dakota-dir=$ISSM_DIR/externalpackages/dakota/install \
}}}

== pfe_settings.m ==

You have to add a file in `$ISSM_DIR/src/m` entitled `pfe_settings.m` with your personal settings:
{{{
#!m
cluster.login='mmorligh';
cluster.queue='devel';
cluster.codepath='/u/mmorligh/issm/trunk/bin';
cluster.executionpath='/u/mmorligh/issm/trunk/execution/';
cluster.grouplist='s1690';
}}}
Use your username for `login` and enter your own code path and execution path. These settings will be picked up automatically by MATLAB when you do `md.cluster=pfe()`.

For Dakota, make sure your module list includes math/intel_mkl_64_10.0.011.

== Running jobs on Pleiades ==

On Pleiades, the more nodes and the longer the requested time, the longer you will have to wait in the queue, so choose your settings wisely:
{{{
#!m
md.cluster=pfe('numnodes',8,'time',30,'processor','wes','queue','devel');
md.cluster.time=10;
}}}
The example above requests 8 Westmere nodes with a maximum job time of 10 minutes. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results.

Now if you want to check the status of your job and the queue you are using, use this command:
{{{
#!sh
qstat -u USERNAME
}}}

You can delete your job manually by typing:
{{{
#!sh
qdel JOBID
}}}
where JOBID is the ID of your job (indicated in the MATLAB session).

MATLAB also indicates the directory of your job, where you can find the files `JOBNAME.outlog` and `JOBNAME.errlog`. The outlog file contains the information that would appear if you were running your job on your local machine, and the errlog file contains the error messages in case the job encounters an error.

If you want to load results from the cluster manually (for example, if you got an error due to an internet interruption), look in the information MATLAB gave you for a path like `/home/srebuffi/trunk-jpl/execution//SOMETHING/JOBNAME.lock`, copy the SOMETHING part, and type in MATLAB:
{{{
#!m
md=loadresultsfromcluster(md,'SOMETHING');
}}}
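For reference, a complete run from your local MATLAB session might look like the following sketch (it assumes a model `md` is already set up; the solution type `'Stressbalance'` and the cluster settings are placeholders to adapt to your own case):
{{{
#!m
% point the model at Pleiades: 8 Westmere nodes, 30 minute wall time, devel queue
md.cluster=pfe('numnodes',8,'time',30,'processor','wes','queue','devel');

% launch the run: ISSM uploads the input files, submits the job, and waits for completion
md=solve(md,'Stressbalance');

% if the connection dropped before the results came back, fetch them manually
% (replace SOMETHING with the execution directory name printed by MATLAB at launch)
md=loadresultsfromcluster(md,'SOMETHING');
}}}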