Changes between Version 4 and Version 5 of gscc


Timestamp: 12/09/20 15:49:40 (4 years ago)
Author: downs

Legend: lines removed in v5 are prefixed with -, lines added in v5 are prefixed with +, and … marks elided lines; all other lines are unchanged.

…
 channels:
   - conda-forge
-  - dbanas
 dependencies:
   - python=3.7
…
   - gxx_linux-64
   - fortran-compiler
-  - petsc=3.12 
+  - petsc=3.12
   - numpy
   - scipy
…
   - lapack
   - netcdf4
-  - triangle
-  - chaco
   - mpi
+  - libtool
+  - automake
+  - pip
 }}}
 
-With Anaconda activated, run
+With Anaconda activated, run the following to create and activate the environment.
 
 {{{
 #!sh
 conda env create -f environment.yml
+conda activate issm
 }}}
 
 = Install ISSM =
…
 #!sh
 module unload rocks-openmpi
-export ISSM_DIR=/home/USERNAME/trunk-jpl/
-export CONDA_DIR=/home/USERNAME/anaconda3/envs/test/
+export ISSM_DIR=/home/jd231341e/trunk-jpl/
+export CONDA_DIR=/home/jd231341e/anaconda3/envs/test/
 source $ISSM_DIR/etc/environment.sh
+source ~/anaconda3/bin/activate
+conda activate test
 }}}
 
-Create a file called environment.yml with the following contents:
+This will automatically activate the Anaconda environment when you log in and disable the default MPI in favor of the MPI installed by Anaconda. Most of the dependencies can be installed through Conda, but m1qn3 needs to be installed manually:
 
+{{{
+source $ISSM_DIR/etc/environment.sh
+cd $ISSM_DIR/externalpackages/m1qn3
+./install.sh
+}}}
 
-
-Make sure that Anaconda is activated and do
-
-== Installing ISSM on Greenplanet ==
-
-Greenplanet will ''only'' be used to run the code; you will use your local machine for pre- and post-processing and will never use Greenplanet's Matlab. You can check out ISSM and install the following packages:
- - autotools (the one provided by default is too old)
- - PETSc 3.14
- - m1qn3
-
-Use the following configuration script (adapt to your needs):
+Now we can configure and install ISSM via:
 
 {{{
 #!sh
+cd $ISSM_DIR
+source $ISSM_DIR/etc/environment.sh
+autoreconf -ivf
 ./configure \
-  --prefix=$ISSM_DIR \
-  --with-wrappers=no \
-  --with-kriging=no \
-  --with-kml=no \
-  --with-bamg=no \
-  --without-Love \
-  --with-metis-dir="$ISSM_DIR/externalpackages/petsc/install" \
-  --with-petsc-dir="$ISSM_DIR/externalpackages/petsc/install" \
-  --with-m1qn3-dir="$ISSM_DIR/externalpackages/m1qn3/install" \
-  --with-mpi-include="/sopt/OpenMPI/3.1.2/intel-2018.3-slim/include" \
-  --with-mpi-libflags="-L/sopt/OpenMPI/3.1.2/intel-2018.3-slim/lib -lmpi -lm -lmpi_mpifh" \
-  --with-mkl-libflags="-L/sopt/MKL/2018.3/lib -mkl=cluster" \
-  --with-mumps-dir=$ISSM_DIR/externalpackages/petsc/install/ \
-  --with-scalapack-dir=$ISSM_DIR/externalpackages/petsc/install/ \
-  --with-cxxoptflags="-O3 -fPIC -std=c++11" \
-  --with-vendor=intel-gp
+    --prefix="$ISSM_DIR" \
+    --disable-static \
+    --enable-development \
+    --with-numthreads=8 \
+    --with-python-version=3.7 \
+    --with-python-dir="$CONDA_DIR" \
+    --with-python-numpy-dir="$CONDA_DIR/lib/python3.7/site-packages/numpy/core/include/numpy" \
+    --with-fortran-lib="-L$CONDA_DIR/lib/gcc/x86_64-conda-linux-gnu/7.5.0/ -lgfortran" \
+    --with-mpi-include="$CONDA_DIR/lib/include" \
+    --with-mpi-libflags="-L$CONDA_DIR/lib" \
+    --with-metis-dir="$CONDA_DIR/lib" \
+    --with-scalapack-dir="$CONDA_DIR/lib" \
+    --with-mumps-dir="$CONDA_DIR/lib" \
+    --with-petsc-dir="$CONDA_DIR" \
+    --with-m1qn3-dir="$ISSM_DIR/externalpackages/m1qn3/install"
+make --jobs=8
+make install
 }}}
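
For reference, a quick sanity check after `make install` (a hypothetical check, not part of the original page; it assumes the executables land in `$ISSM_DIR/bin`, the default for this `--prefix`):

{{{
#!sh
# the main ISSM solver binary should now exist (hypothetical path check)
ls $ISSM_DIR/bin/issm.exe
}}}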
-== greenplanet_settings.m ==
 
-You have to add a file in `$ISSM_DIR/src/m` entitled `greenplanet_settings.m` with your personal settings:
+This is not a completely minimal setup.
 
 {{{
…
 
 Use your username for the `login` and enter your `codepath` and `executionpath`. These settings will be picked up automatically by Matlab when you do `md.cluster=greenplanet()`.
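
The contents of the settings file are elided in the diff above; as a hedged sketch of the usual shape of such a `*_settings.m` file (only `login`, `codepath`, and `executionpath` come from the surrounding text; the username and paths are placeholders to adapt):

{{{
#!m
% hypothetical greenplanet_settings.m; replace USERNAME and the paths with your own
cluster.login='USERNAME';
cluster.codepath='/home/USERNAME/trunk-jpl/bin';
cluster.executionpath='/home/USERNAME/trunk-jpl/execution';
}}}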
-
-== Running jobs on Greenplanet ==
-
-On Greenplanet, you can use up to 30 cores per node (partition `ilg2.3`). The more nodes and the longer the requested time, the longer you will wait in the queue. Per job, you can only request up to 125GB of RAM, so choose your settings wisely:
-
-{{{
-#!m
-md.cluster=greenplanet('numnodes',1,'cpuspernode',8);
-md.cluster.time=10;
-}}}
-
-to have a maximum job time of 10 minutes and 8 cores on one node. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results.
-
-Now, to check the status of your job and of the queue you are using, type the following in a bash session on '''Greenplanet''':
-
-{{{
-#!sh
-squeue -u username
-}}}
-
-You can delete your job manually by typing:
-
-{{{
-#!sh
-scancel JOBID
-}}}
-
-where `JOBID` is the ID of your job (indicated in the Matlab session). Matlab also indicates the directory of your job, where you can find the files `JOBNAME.outlog` and `JOBNAME.errlog`. The outlog file contains the information that would appear if you were running your job on your local machine, and the errlog file contains the error messages in case the job encounters an error.
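
To read those logs from a shell (a sketch; it assumes the `$ISSM_DIR/execution/LAUNCHSTRING/` directory layout mentioned below, with LAUNCHSTRING and JOBNAME taken from the Matlab output):

{{{
#!sh
# LAUNCHSTRING and JOBNAME are placeholders printed by Matlab for your run
cd $ISSM_DIR/execution/LAUNCHSTRING/
cat JOBNAME.outlog
cat JOBNAME.errlog
}}}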
-
-If you need to load results from the cluster manually (for example, after an internet interruption), look in the information Matlab printed for `$ISSM_DIR/execution/LAUNCHSTRING/JOBNAME.lock`, copy the LAUNCHSTRING, and type in Matlab:
-
-{{{
-#!m
-md=loadresultsfromcluster(md,'LAUNCHSTRING','JOBNAME');
-}}}
-
-Note: if `md.settings.waitonlock`>0 and you need to load manually (e.g., after an internet interruption), it is necessary to set `md.private.runtimename=LAUNCHSTRING;` before calling `loadresultsfromcluster`.
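
Putting the two steps together (a sketch assembled from the lines above; substitute the actual LAUNCHSTRING and JOBNAME for your run):

{{{
#!m
% placeholders: use the LAUNCHSTRING and JOBNAME Matlab printed for your job
md.private.runtimename='LAUNCHSTRING';
md=loadresultsfromcluster(md,'LAUNCHSTRING','JOBNAME');
}}}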
-
-== slurm ==
-
-A comparison of PBS and Slurm commands can be found at http://slurm.schedmd.com/rosetta.pdf
-
-Useful commands:
-
-Graphical overview of Greenplanet usage:
-{{{
-sview
-}}}
-
-Get the number of idle nodes:
-{{{
-sinfo --states=idle
-}}}
-
-See the jobs of <username>:
-{{{
-squeue -u <username>
-}}}
-
-Get more information on a user's jobs:
-{{{
-sacct -u <username> --format=User,JobID,account,Timelimit,elapsed,ReqMem,MaxRss,ExitCode
-}}}
-
-Get information on a partition (here `ilg2.3`):
-{{{
-scontrol show partition=ilg2.3
-}}}
-
-Get a sorted list of users on a partition:
-{{{
-squeue | grep -i ilg2.3 | awk '{print $4}' | sort | uniq -c | sort -rn
-}}}