== Getting an account ==

New users have to ask the PIs (Eric Larour or Mathieu Morlighem) to request a NASA account for them if they don't have one already,
 - navigate to [https://www.nas.nasa.gov/hecc/portal/accounts]
 - select option 3 ("I want to request a NASA identity for one of my new users")
 - provide the information requested

Then, a NAS account has to be created by the new user,
 - go to [https://www.nas.nasa.gov/hecc/portal/accounts]
 - select option 2 ("I want to request an account for myself")
 - provide the information requested using either GID = s5692 for Eric's group or GID = s1507 for Mathieu's group
 - the PI will receive an email to approve the request

All users must complete NASA-mandatory Basic IT Security Training.

Current Points of Contact
 - Dartmouth Security officer: Sean McNamara, Sean.R.McNamara@dartmouth.edu [https://itc.dartmouth.edu/about/who-we-are/itc-leadership]
 - UCI IT Security officer: Josh Drummond, jdrummon@uci.edu
 - JPL IT Security officer: Tomas Soderstrom, Tomas.J.Soderstrom@jpl.nasa.gov

== Setting Up Password-less SSH for NASA NAS HECC ==

Follow the steps outlined here [http://www.nas.nasa.gov/hecc/support/kb/Setting-Up-SSH-Passthrough_232.html]
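
Once set up, a minimal `~/.ssh/config` passthrough entry might look like the following (a sketch only; the host aliases and username here are hypothetical, and the linked NAS instructions are authoritative),
{{{
# Hypothetical example: reach a Pleiades front end (pfe) in one hop
# through a NAS secure front end (sfe) using OpenSSH's ProxyJump
Host sfe
    HostName sfe1.nas.nasa.gov
    User your_nas_username
Host pfe
    HostName pfe.nas.nasa.gov
    User your_nas_username
    ProxyJump sfe
}}}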

== Checking Out a Copy of the ISSM Code Repository ==

Please see the [https://github.com/ISSMteam/ISSM ISSM code repository on GitHub] for detailed instructions on how to check out a copy of the repository.
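
In practice, the checkout usually amounts to a `git clone` into your `/nobackup/` space (a sketch; the username placeholder is hypothetical, and the GitHub instructions take precedence),
{{{
#!sh
# clone into /nobackup rather than your home directory (see below)
cd /nobackup/your_username
git clone https://github.com/ISSMteam/ISSM.git
}}}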

== Environment ==

Make sure to clone ISSM in your `/nobackup/` directory, where you can save a lot more files than in your home directory.

Add the following to `~/.bashrc`,
{{{
# Modules
module load mpi-hpe/mpt
module load comp-intel/2020.4.304
module load petsc/3.17.3_intel_mpt_py

# Environment
export MPICXX_CXX=icpx
export MPICC_CC=icx
export MPIF90_F90=ifort
export CC="mpicc"
export CXX="mpicxx"
export F77="mpif77"

export COMP_INTEL_ROOT="/nasa/intel/Compiler/2020.4.304/compilers_and_libraries_2020.4.304/linux"

export ISSM_DIR="<ISSM_DIR>"
}}}

replacing `<ISSM_DIR>` with the path to your local copy of the repository. Run `source ~/.bashrc` to apply these changes to your current environment.
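
To confirm that the environment is set up, you can check what a fresh shell picks up (a quick sanity check; `module list` is the standard Environment Modules command on HECC),
{{{
#!sh
module list     # should show mpi-hpe/mpt, comp-intel/2020.4.304, and petsc/3.17.3_intel_mpt_py
echo $ISSM_DIR  # should print the path to your copy of the repository
}}}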

NOTE: If your `.bashrc` is not loaded when you log in, you will have to add a new file, `~/.bash_login`, with the following content,
{{{
if [ -f ~/.bashrc ]; then . ~/.bashrc; fi
}}}

NOTE: You may need to update the version of the `comp-intel` module as well as the corresponding value of the variable `COMP_INTEL_ROOT` as recommended/available modules are updated on HECC. Please update this page or ask a project lead to do so when this occurs.
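
For example, to see which versions are currently available on the system (a sketch; the exact module names change as HECC updates its software stack),
{{{
#!sh
module avail comp-intel
module avail mpi-hpe
module avail petsc
}}}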

== Installing ISSM on Pleiades ==

For documentation on the Pleiades cluster, see the [https://www.nas.nasa.gov/hecc/support/kb Pleiades cluster documentation].

'''Do NOT install `mpich`'''. We have to use the MPI implementation (MPT) provided on HECC. Pleiades will ''only'' be used to run solutions; use your local machine for pre- and post-processing. You will never use Pleiades' copy of MATLAB.
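
With the modules above loaded, the MPT compiler wrappers should already be on your `PATH` (a quick check, not a required step),
{{{
#!sh
which mpicc mpicxx mpif77
}}}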

For an installation of ISSM with basic capabilities, first install the following external packages,
{{{
triangle        install-linux.sh
m1qn3           install-linux.sh
semic           install.sh
}}}
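
Each package is built from its directory under `$ISSM_DIR/externalpackages` by running the script listed next to it (a sketch of the usual sequence),
{{{
#!sh
cd $ISSM_DIR/externalpackages/triangle && ./install-linux.sh
cd $ISSM_DIR/externalpackages/m1qn3    && ./install-linux.sh
cd $ISSM_DIR/externalpackages/semic    && ./install.sh
}}}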

You will need to run the following before configuring ISSM,
{{{
cd $ISSM_DIR
autoreconf -ivf
}}}

Use the following configuration script for ISSM (adapt to your needs),
{{{
#!sh
export CFLAGS="-O3"
export CXXFLAGS="-O3 -std=c++11"

./configure \
--prefix="${ISSM_DIR}" \
--enable-development \
--enable-standalone-libraries \
--with-wrappers=no \
--with-graphics-lib="/usr/lib64/libX11.so" \
--with-fortran-lib="-L${COMP_INTEL_ROOT}/compiler/lib/intel64_lin -lifcore -lifport -lgfortran" \
--with-mkl-libflags="-L${COMP_INTEL_ROOT}/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm" \
--with-mpi-include="/nasa/hpe/mpt/2.30_rhel810/include" \
--with-mpi-libflags="-L/nasa/hpe/mpt/2.30_rhel810/lib -lmpi" \
--with-blas-lapack-lib="-L${COMP_INTEL_ROOT}/mkl/lib/intel64 -lmkl_blas95_lp64 -lmkl_lapack95_lp64" \
--with-metis-dir="${PETSC_DIR}" \
--with-parmetis-dir="${PETSC_DIR}" \
--with-scalapack-lib="-L${COMP_INTEL_ROOT}/mkl/lib/intel64/libmkl_scalapack_lp64.so" \
--with-mumps-dir="${PETSC_DIR}" \
--with-petsc-dir="${PETSC_DIR}" \
--with-triangle-dir="${ISSM_DIR}/externalpackages/triangle/install" \
--with-m1qn3-dir="${ISSM_DIR}/externalpackages/m1qn3/install" \
--with-semic-dir="${ISSM_DIR}/externalpackages/semic/install"
}}}
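
Once `configure` completes, compile and install in the usual autotools way (a sketch; adjust the parallel job count to taste),
{{{
#!sh
cd $ISSM_DIR
make -j 8 install
}}}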

NOTE: As with the `comp-intel` module, the version of `mpi-hpe/mpt` will be updated occasionally. Update the paths supplied to `--with-mpi-include` and `--with-mpi-libflags` accordingly.

NOTE: Refer to `jenkins/pleiades-basic` for external packages and configuration updates that may not yet be listed here.

== Installing ISSM on Pleiades with Dakota ==

For an installation of ISSM with Dakota, first install the following external packages,
{{{
gsl             install-pleiades.sh
boost           install-1.7-linux.sh
dakota          install-6.2-pleiades.sh
chaco           install-linux.sh
triangle        install-linux.sh
m1qn3           install-linux.sh
semic           install.sh
}}}
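
As above, each package is built from its directory under `$ISSM_DIR/externalpackages` with the script listed next to it, in the order given (a sketch),
{{{
#!sh
cd $ISSM_DIR/externalpackages/gsl      && ./install-pleiades.sh
cd $ISSM_DIR/externalpackages/boost    && ./install-1.7-linux.sh
cd $ISSM_DIR/externalpackages/dakota   && ./install-6.2-pleiades.sh
cd $ISSM_DIR/externalpackages/chaco    && ./install-linux.sh
cd $ISSM_DIR/externalpackages/triangle && ./install-linux.sh
cd $ISSM_DIR/externalpackages/m1qn3    && ./install-linux.sh
cd $ISSM_DIR/externalpackages/semic    && ./install.sh
}}}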

Use the following configuration script for ISSM (adapt to your needs),
{{{
#!sh
export CFLAGS="-O3"
export CXXFLAGS="-O3 -std=c++11"

./configure \
--prefix="${ISSM_DIR}" \
--enable-development \
--enable-standalone-libraries \
--with-wrappers=no \
--with-graphics-lib="/usr/lib64/libX11.so" \
--with-fortran-lib="-L${COMP_INTEL_ROOT}/compiler/lib/intel64_lin -lifcore -lifport -lgfortran" \
--with-mkl-libflags="-L${COMP_INTEL_ROOT}/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm" \
--with-mpi-include="${COMP_INTEL_ROOT}/mpi/intel64/include" \
--with-mpi-libflags="-lmpi" \
--with-blas-lapack-lib="-L${COMP_INTEL_ROOT}/mkl/lib/intel64 -lmkl_blas95_lp64 -lmkl_lapack95_lp64" \
--with-metis-dir="${PETSC_DIR}" \
--with-parmetis-dir="${PETSC_DIR}" \
--with-mumps-dir="${PETSC_DIR}" \
--with-scalapack-lib="-L${COMP_INTEL_ROOT}/mkl/lib/intel64/libmkl_scalapack_lp64.so" \
--with-petsc-dir="${PETSC_DIR}" \
--with-gsl-dir="${ISSM_DIR}/externalpackages/gsl/install" \
--with-boost-dir="${ISSM_DIR}/externalpackages/boost/install" \
--with-dakota-dir="${ISSM_DIR}/externalpackages/dakota/install" \
--with-chaco-dir="${ISSM_DIR}/externalpackages/chaco/install" \
--with-triangle-dir="${ISSM_DIR}/externalpackages/triangle/install" \
--with-m1qn3-dir="${ISSM_DIR}/externalpackages/m1qn3/install" \
--with-semic-dir="${ISSM_DIR}/externalpackages/semic/install"
}}}

NOTE: As with the `comp-intel` module, the version of `mpi-hpe/mpt` will be updated occasionally. Update the paths supplied to `--with-mpi-include` and `--with-mpi-libflags` accordingly.

NOTE: Refer to `jenkins/pleiades-dakota` for external packages and configuration updates that may not yet be listed here.

== Installing ISSM on Pleiades with Solid Earth Capabilities ==

For an installation of ISSM with Solid Earth capabilities, first install the following external packages,
{{{
zlib            install-1.sh
     
}}}

Use the following configuration script for ISSM (adapt to your needs),
{{{
#!sh
export CFLAGS="-O3"
export CXXFLAGS="-O3 -std=c++11"

./configure \
--prefix="${ISSM_DIR}" \
--enable-development \
     
}}}

NOTE: As with the `comp-intel` module, the version of `mpi-hpe/mpt` will be updated occasionally. Update the paths supplied to `--with-mpi-include` and `--with-mpi-libflags` accordingly.

NOTE: Refer to `jenkins/pleiades-solid_earth` for external packages and configuration updates that may not yet be listed here.

== Installing ISSM on Pleiades with CoDiPack ==

You will need to build these additional external packages,
{{{
medipack        install.sh
codipack        install.sh
}}}
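
These follow the same pattern as the other external packages (a sketch),
{{{
#!sh
cd $ISSM_DIR/externalpackages/medipack && ./install.sh
cd $ISSM_DIR/externalpackages/codipack && ./install.sh
}}}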

Your configuration script should look like this,
{{{
#!sh
     

./configure \
--prefix="${ISSM_DIR}" \
--with-wrappers=no \
--without-Love \
--without-Sealevelchange \
--without-kriging \
--with-mpi-include="${MPI_ROOT}/include" \
--with-mpi-libflags="-L${MPI_ROOT}/lib -lmpi" \
--with-parmetis-dir=${PETSC_DIR} \
--with-metis-dir=${PETSC_DIR} \
--with-mumps-dir=${PETSC_DIR} \
--with-mkl-libflags="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_scalapack_lp64 -lmkl_blacs_sgimpt_lp64 -lmkl_core -lpthread -lm" \
--with-m1qn3-dir="${ISSM_DIR}/externalpackages/m1qn3/install" \
--with-codipack-dir="${ISSM_DIR}/externalpackages/codipack/install" \
--with-medipack-dir="${ISSM_DIR}/externalpackages/medipack/install" \
--enable-tape-alloc \
--with-fortran-lib="-L${MKLROOT}/../compiler/lib/intel64_lin -lifcore -lifport -lgfortran" \
--enable-development
}}}
     
}}}

== Running jobs on Pleiades ==

On Pleiades, the more nodes and the longer the requested time, the more you will have to wait in the queue. So choose your settings wisely:

{{{
#!m
md.cluster=pfe('numnodes',1,'time',28,'processor','bro','queue','devel');
}}}
     
to have a maximum job time of 10 minutes and 1 Broadwell node. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results. For more information about the processors, see [https://www.nas.nasa.gov/hecc/support/kb/pleiades-configuration-details_77.html].
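
For example, to queue a short run from MATLAB once your model `md` is set up (a sketch; the solution type here is hypothetical and depends on your analysis),
{{{
#!m
% request 1 Broadwell node on the devel queue with a 10-minute limit
md.cluster=pfe('numnodes',1,'time',10,'processor','bro','queue','devel');
md=solve(md,'Stressbalance');
}}}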

Now if you want to check the status of your job and the queue you are using, use this command,
{{{
#!sh
qstat -u USERNAME
}}}

You can delete your job manually by typing,
{{{
#!sh
qdel JOBID
}}}