Changes between Version 7 and Version 8 of pleiadescsh


Timestamp: 01/12/22 13:27:47 (3 years ago)
Author: schlegel
Comment: --

Legend: lines prefixed with '-' were removed in v8, lines prefixed with '+' were added in v8, and unprefixed lines are unchanged context; each hunk header of the form @@ v7 lines A-B / v8 lines C-D @@ gives the corresponding line ranges in the two versions.
  • pleiadescsh

@@ v7 lines 29-35 / v8 lines 29-35 @@
 
 #Packages
-module load pkgsrc
+module load pkgsrc/2020Q4
 module load comp-intel/2016.2.181
-module load mpi-sgi/mpt
+module load mpi-hpe/mpt
 }}}
 
     
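The hunk above only swaps which modules the `~/.cshrc` loads. A minimal sketch, in csh on a Pleiades front end, of how the v8 module set can be checked interactively before building anything (`module purge`, `module load`, and `module list` are standard environment-modules commands, not part of ISSM):

{{{
# start from a clean environment, then load the v8 module set
module purge
module load pkgsrc/2020Q4
module load comp-intel/2016.2.181
module load mpi-hpe/mpt

# confirm that exactly these modules are loaded
module list
}}}
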
@@ v7 lines 39-43 / v8 lines 39-43 @@
 
 Pleiades will ''only'' be used to run the code; you will use your local machine for pre- and post-processing, and you will never use Pleiades' matlab. You can check out ISSM and install the following packages (the sketch after this hunk shows how the install scripts are typically run):
-  - PETSc (use the pleiades script install-3.13-pleiades.sh or newer)
+  - PETSc (use the pleiades script install-3.13-pleiades.sh)
   - m1qn3
 
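A minimal sketch, in csh, of how these two packages are typically built from the ISSM source tree; the PETSc script name comes from the list above, while the `$ISSM_DIR/externalpackages` layout and the m1qn3 script name are assumptions that may differ in your checkout:

{{{
#!/bin/csh
# build PETSc with the Pleiades install script named above
cd $ISSM_DIR/externalpackages/petsc
./install-3.13-pleiades.sh

# then build m1qn3 (exact script name assumed; use the one shipped in this directory)
cd $ISSM_DIR/externalpackages/m1qn3
./install.sh
}}}
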
     
@@ v7 lines 59-63 / v8 lines 59-63 @@
  --with-metis-dir="$ISSM_DIR/externalpackages/petsc/install" \
  --with-mumps-dir="$ISSM_DIR/externalpackages/petsc/install" \
- --with-scalapack-dir="$ISSM_DIR/externalpackages/petsc/install" \
+ --with-scalapack-lib="-L/nasa/intel/Compiler/2016.2.181/compilers_and_libraries_2016.2.181/linux/mkl/lib/intel64/libmkl_scalapack_lp64.so" \
  --with-cxxoptflags="-O3 -axAVX" \
  --with-fortran-lib="-L/nasa/intel/Compiler/2016.2.181/compilers_and_libraries_2016.2.181/linux/compiler/lib/intel64/ -lifcore -lifport" \
     
@@ v7 lines 68-103 / v8 lines 68-128 @@
 == Installing ISSM on Pleiades with Dakota ==
 
-For Dakota to run, you you will still need to make cmake, PETSc, and m1qn3.
+For Dakota to run, you will still need to make PETSc and m1qn3, but you will need to make sure that you are using the Intel MPI and that the external packages are built with the MPI compilers.
 
-In addition, will need to build the external package:
+In your `~/.cshrc`, add the following lines:
+
+{{{
+setenv CC mpicc
+setenv CXX mpicxx
+setenv F77 mpif77
+}}}
+
+And change your loaded packages to the following (note the removal of pkgsrc):
+
+{{{
+#Packages
+module load comp-intel/2018.3.222
+module load mpi-intel/2018.3.222
+}}}
+
+Then log out and back in, and reinstall the following packages:
+ - PETSc (use the pleiades script install-3.14-pleiades.sh)
+ - m1qn3
+
+In addition, you will need to build the following external packages:
+ - boost, install-1.55-pleiades.sh
  - dakota, install-6.2-pleiades.sh
 
-Finally, you will need to make with mpi compilers by using the following configuration script:
+Finally, you will need to use the following configuration script:
 
 {{{
 #!/bin/csh
 
-export F77=mpif77
-export FC=mpif90
-
 ./configure \
  --prefix=$ISSM_DIR \
+ --enable-standalone-libraries \
  --with-wrappers=no \
- --with-petsc-dir="$ISSM_DIR/externalpackages/petsc/install" \
- --with-m1qn3-dir="$ISSM_DIR/externalpackages/m1qn3/install" \
- --with-boost-dir=/nasa/pkgsrc/sles12/2018Q3/ \
+ --with-m1qn3-dir=$ISSM_DIR/externalpackages/m1qn3/install \
+ --with-triangle-dir=$ISSM_DIR/externalpackages/triangle/install \
+ --with-metis-dir=$PETSC_ROOT \
+ --with-petsc-dir=$PETSC_ROOT \
+ --with-scalapack-lib="-L/nasa/intel/Compiler/2018.3.222/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64/libmkl_scalapack_lp64.so" \
+ --with-boost-dir=$ISSM_DIR/externalpackages/boost/install \
  --with-dakota-dir=$ISSM_DIR/externalpackages/dakota/install \
- --with-gsl-dir=/nasa/pkgsrc/sles12/2018Q3/ \
  --with-mpi-include=" " \
  --with-mpi-libflags=" -lmpi" \
- --with-mkl-libflags="-L/nasa/intel/Compiler/2016.2.181/compilers_and_libraries_2016.2.181/linux/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -limf -lsvml -lirc" \
- --with-metis-dir="$ISSM_DIR/externalpackages/petsc/install" \
- --with-mumps-dir="$ISSM_DIR/externalpackages/petsc/install" \
- --with-scalapack-dir="$ISSM_DIR/externalpackages/petsc/install" \
- --with-graphics-lib="/usr/lib64/libX11.so" \
- --with-fortran-lib="-L/nasa/intel/Compiler/2016.2.181/compilers_and_libraries_2016.2.181/linux/compiler/lib/intel64/ -lifcore -lifport" \
+ --with-mkl-libflags="-L/nasa/intel/Compiler/2018.3.222/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm" \
+ --with-mumps-dir=$PETSC_ROOT \
+ --with-fortran-lib="-L/nasa/intel/Compiler/2018.3.222/compilers_and_libraries_2018.3.222/linux/compiler/lib/intel64_lin/ -lifcore -lifport -lgfortran" \
+ --with-cxxoptflags="-O3 " \
  --with-vendor="intel-pleiades-mpi" \
  --enable-development
 }}}
+
+Note: if you would like the capabilities of pkgsrc/2020Q4, including an updated svn, make sure that you build dakota without that package loaded. Once you have ISSM compiled and installed with dakota, feel free to add it back into your `~/.cshrc`:
+
+{{{
+module load pkgsrc/2020Q4
+}}}
 
 == pfe_settings.m ==
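Putting the v8 Dakota steps from the hunk above together, here is a minimal sketch, in csh, of the rebuild sequence. The install script names come from the lists above; `$PETSC_ROOT` is not defined anywhere on this page, so it is assumed here to point at the PETSc install prefix (`$ISSM_DIR/externalpackages/petsc/install`), and the name `configure-pleiades.sh` for a saved copy of the configuration script is hypothetical:

{{{
#!/bin/csh
# rebuild the external packages with the Intel MPI wrappers set in ~/.cshrc
# (build dakota with pkgsrc/2020Q4 unloaded, as noted above)
cd $ISSM_DIR/externalpackages/petsc  && ./install-3.14-pleiades.sh
cd $ISSM_DIR/externalpackages/m1qn3  && ./install.sh   # m1qn3 script name assumed
cd $ISSM_DIR/externalpackages/boost  && ./install-1.55-pleiades.sh
cd $ISSM_DIR/externalpackages/dakota && ./install-6.2-pleiades.sh

# PETSC_ROOT is assumed to be the PETSc install prefix referenced by the configure script
setenv PETSC_ROOT $ISSM_DIR/externalpackages/petsc/install

# run the configuration script shown above (saved here under a hypothetical name), then build
cd $ISSM_DIR
./configure-pleiades.sh
make
make install
}}}
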
     
@@ v7 lines 116-120 / v8 lines 141-151 @@
 use your username for the `login` and enter your code path and execution path. These settings will be picked up automatically by matlab when you do `md.cluster=pfe()`.
 
-Make sure your module list includes pkgsrc, comp-intel/2016.2.181, and mpi-sgi/mpt .
+Without dakota, make sure your module list includes scicon/app-tools, comp-intel/2016.2.181, and mpi-hpe/mpt.
+With dakota, make sure your module list includes scicon/app-tools, comp-intel/2018.3.222, and mpi-intel/2018.3.222. You can specify your own list of modules by adding it to `pfe_settings.m`, for example:
+
+{{{
+cluster.modules = {'comp-intel/2018.3.222' 'mpi-intel/2018.3.222' 'scicon/app-tools'};
+}}}
+
 
 == Running jobs on Pleiades ==
     
@@ v7 lines 124-132 / v8 lines 155-163 @@
  {{{
 #!m
-md.cluster=pfe('numnodes',8,'time',30,'processor','wes','queue','devel');
+md.cluster=pfe('numnodes',8,'time',30,'processor','bro','queue','devel');
 md.cluster.time=10;
 }}}
 
-to have a maximum job time of 10 minutes and 8 westmere nodes. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results.
+to have a maximum job time of 10 minutes and 8 broadwell nodes. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results.
 
 Now if you want to check the status of your job and the queue you are using, use this command: