Changes between Version 31 and Version 32 of pleiadesbash


Timestamp: 01/12/22 13:58:20 (3 years ago)
Author: schlegel
Comment:

--

Legend:

 -       line removed in v32
 +       line added in v32
 (space) unmodified context
 [...]   lines omitted from the diff
  • pleiadesbash

 #Packages
-module load pkgsrc
+module load pkgsrc/2020Q4
 module load comp-intel/2016.2.181
-module load mpi-sgi/mpt
+module load mpi-hpe/mpt
 }}}

 '''Do NOT install mpich'''. We have to use the one provided by NAS. Pleiades will ''only'' be used to run the code; you will use your local machine for pre- and post-processing, and you will never use Pleiades' matlab. You can check out ISSM and install the following packages:
  - m1qn3
- - PETSc (use `install-3.13-pleiades.sh` or newer)
+ - PETSc (use `install-3.15-pleiades.sh` or newer)
 
 For documentation of pleiades, see here: http://www.nas.nasa.gov/hecc/support/kb/
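As a rough sketch (not part of the original page), each of these packages is built from its own directory under `$ISSM_DIR/externalpackages` by running the install script it ships with; the PETSc script name comes from the list above, the m1qn3 one is illustrative:

{{{
#!sh
# Sketch: build the required external packages from the ISSM source tree.
# Assumes $ISSM_DIR is set; use the install script actually present in each directory.
cd $ISSM_DIR/externalpackages/m1qn3
./install.sh
cd $ISSM_DIR/externalpackages/petsc
./install-3.15-pleiades.sh
}}}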
     
 == Installing ISSM on Pleiades with Dakota ==
 
-For Dakota to run, you you will still need to make PETSc, and m1qn3.
-
-In addition, will need to build the external packages:
+For Dakota to run, you will still need to make PETSc and m1qn3, but you will need to make sure you are using the Intel MPI and that the external packages are built with the MPI compilers.
+
+In your `~/.cshrc`, add the following lines:
+
+{{{
+#Set compilers
+setenv CC mpicc
+setenv CXX mpicxx
+setenv F77 mpif77
+}}}
+
+And change your loaded packages to the following (note the removal of pkgsrc):
+
+{{{
+#Packages
+module load comp-intel/2018.3.222
+module load mpi-intel/2018.3.222
+}}}
+
+Then ''log out and log back in'', and reinstall the following packages:
+ - PETSc (use the pleiades script install-3.14-pleiades.sh)
+ - m1qn3
+
+In addition, you will need to build the following external packages (see the sketch after this list):
+ - gsl, install-pleiades.sh
+ - boost, install-1.55-pleiades.sh
  - dakota, install-6.2-pleiades.sh
 
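The same pattern applies here; a sketch only, with the script names taken from the list above and the standard `$ISSM_DIR/externalpackages` layout assumed:

{{{
#!sh
# Sketch: rebuild the Dakota-related external packages
cd $ISSM_DIR/externalpackages/gsl    && ./install-pleiades.sh
cd $ISSM_DIR/externalpackages/boost  && ./install-1.55-pleiades.sh
cd $ISSM_DIR/externalpackages/dakota && ./install-6.2-pleiades.sh
}}}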
-Finally, you will need to make with mpi compilers by using the configuration script:
-
-{{{
-#!sh
-export F77=mpif77
-export FC=mpif90
-
+Finally, you will need to use the following configuration script:
+
+{{{
+#!sh
 ./configure \
  --prefix=$ISSM_DIR \
+ --enable-standalone-libraries \
  --with-wrappers=no \
- --with-petsc-dir="$ISSM_DIR/externalpackages/petsc/install" \
- --with-m1qn3-dir="$ISSM_DIR/externalpackages/m1qn3/install" \
- --with-boost-dir=/nasa/pkgsrc/sles12/2018Q3/ \
+ --with-m1qn3-dir=$ISSM_DIR/externalpackages/m1qn3/install \
+ --with-triangle-dir=$ISSM_DIR/externalpackages/triangle/install \
+ --with-metis-dir=$PETSC_ROOT \
+ --with-petsc-dir=$PETSC_ROOT \
+ --with-scalapack-lib="-L/nasa/intel/Compiler/2018.3.222/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64/libmkl_scalapack_lp64.so" \
+ --with-boost-dir=$ISSM_DIR/externalpackages/boost/install \
  --with-dakota-dir=$ISSM_DIR/externalpackages/dakota/install \
- --with-gsl-dir=/nasa/pkgsrc/sles12/2018Q3/ \
  --with-mpi-include=" " \
  --with-mpi-libflags=" -lmpi" \
- --with-mkl-libflags="-L/nasa/intel/Compiler/2016.2.181/compilers_and_libraries_2016.2.181/linux/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -limf -lsvml -lirc" \
- --with-metis-dir="$ISSM_DIR/externalpackages/petsc/install" \
- --with-mumps-dir="$ISSM_DIR/externalpackages/petsc/install" \
- --with-scalapack-dir="$ISSM_DIR/externalpackages/petsc/install" \
- --with-graphics-lib="/usr/lib64/libX11.so" \
- --with-fortran-lib="-L/nasa/intel/Compiler/2016.2.181/compilers_and_libraries_2016.2.181/linux/compiler/lib/intel64/ -lifcore -lifport" \
+ --with-mkl-libflags="-L/nasa/intel/Compiler/2018.3.222/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm" \
+ --with-mumps-dir=$PETSC_ROOT \
+ --with-fortran-lib="-L/nasa/intel/Compiler/2018.3.222/compilers_and_libraries_2018.3.222/linux/compiler/lib/intel64_lin/ -lifcore -lifport -lgfortran" \
+ --with-cxxoptflags="-O3 " \
  --with-vendor="intel-pleiades-mpi" \
  --enable-development
 }}}
+
+Remember, you will need to run the following command before configuring ISSM:
+
+{{{
+cd $ISSM_DIR
+autoreconf -ivf
+}}}
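Once `autoreconf` and `./configure` have completed, the compile and install step is the usual autotools one; a minimal sketch (the parallel job count is just an example, not from the original page):

{{{
#!sh
# Sketch: build and install ISSM into the --prefix location ($ISSM_DIR)
cd $ISSM_DIR
make -j 8
make install
}}}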
+
+Note, if you would like the capabilities of pkgsrc/2020Q4, including an updated svn, make sure that you build dakota without that package loaded. Once you have ISSM compiled and installed with dakota, feel free to add it back to your `~/.cshrc`:
+
+{{{
+#Packages
+module load pkgsrc/2020Q4
+}}}
+
 
 == Installing ISSM on Pleiades with CoDiPack ==
     
 
 You will get a lot of warnings while compiling (like ''warning #2196: routine is both "inline" and "noinline"''), just ignore them.
+
 == pfe_settings.m ==
 
     
 }}}
 
-use your username for the `login` and enter your code path and execution path. Be sure to create the final execution directory (mkdir) within the nobackup folder. These settings will be picked up automatically by matlab when you do `md.cluster=pfe()`. To determine your `grouplist`, on Pleiades run:
+use your username for the `login` and enter your code path and execution path. Be sure to create the final execution directory (mkdir) within the nobackup folder. These settings will be picked up automatically by matlab when you do `md.cluster=pfe()`.
+
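For illustration only (the path below is a placeholder, not from the original page), creating that execution directory in your nobackup area might look like:

{{{
#!sh
# Sketch: create the execution directory referenced by executionpath in
# pfe_settings.m; "username" and "execution" are placeholders.
mkdir -p /nobackup/username/execution
}}}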
+Without dakota, make sure your module list includes scicon/app-tools, comp-intel/2016.2.181, and mpi-hpe/mpt.
+With dakota, make sure your module list includes scicon/app-tools, comp-intel/2018.3.222, and mpi-intel/2018.3.222. You can specify your own list of modules by adding to `pfe_settings.m`, for example:
+
+{{{
+cluster.modules = {'comp-intel/2018.3.222' 'mpi-intel/2018.3.222' 'scicon/app-tools'};
+}}}
+
+To determine your `grouplist`, on Pleiades run:
 
 {{{
[...]
 }}}
 
-Make sure your module list includes pkgsrc, comp-intel/2016.2.181, and mpi-sgi/mpt .
 
 == Running jobs on Pleiades ==
     
  {{{
 #!m
-md.cluster=pfe('numnodes',8,'time',30,'processor','wes','queue','devel');
+md.cluster=pfe('numnodes',8,'time',30,'processor','bro','queue','devel');
 md.cluster.time=10;
 }}}
 
-to have a maximum job time of 10 minutes and 8 westmere nodes. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results.
+to have a maximum job time of 10 minutes and 8 broadwell nodes. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results.
 
 Now if you want to check the status of your job and the queue you are using, use this command: