Changes between Version 7 and Version 8 of pleiadescsh
Timestamp: 01/12/22 13:27:47 (3 years ago)
Legend:
- Unmodified: lines with no `+` or `-` marker
- Added: lines prefixed with `+`
- Removed: lines prefixed with `-`
- Modified: a removed line followed immediately by its added replacement
pleiadescsh
v7 v8 29 29 30 30 #Packages 31 module load pkgsrc 31 module load pkgsrc/2020Q4 32 32 module load comp-intel/2016.2.181 33 module load mpi- sgi/mpt33 module load mpi-hpe/mpt 34 34 }}} 35 35 … … 39 39 40 40 Pleiades will ''only'' be used to run the code, you will use your local machine for pre and post processing, you will never use Pleiades' matlab. You can check out ISSM and install the following packages: 41 - PETSc (use the pleiades script install-3.13-pleiades.sh or newer)41 - PETSc (use the pleiades script install-3.13-pleiades.sh) 42 42 - m1qn3 43 43 … … 59 59 --with-metis-dir="$ISSM_DIR/externalpackages/petsc/install" \ 60 60 --with-mumps-dir="$ISSM_DIR/externalpackages/petsc/install" \ 61 --with-scalapack- dir="$ISSM_DIR/externalpackages/petsc/install" \61 --with-scalapack-lib="-L/nasa/intel/Compiler/2016.2.181/compilers_and_libraries_2016.2.181/linux/mkl/lib/intel64/libmkl_scalapack_lp64.so" \ 62 62 --with-cxxoptflags="-O3 -axAVX" \ 63 63 --with-fortran-lib="-L/nasa/intel/Compiler/2016.2.181/compilers_and_libraries_2016.2.181/linux/compiler/lib/intel64/ -lifcore -lifport" \ … … 68 68 == Installing ISSM on Pleiades with Dakota == 69 69 70 For Dakota to run, you you will still need to make cmake, PETSc, and m1qn3.70 For Dakota to run, you you will still need to make PETSc and m1qn3, but you will need to make sure you are using the intel mpi and that the externalpackages are built with the mpi compilers. 71 71 72 In addition, will need to build the external package: 72 In your `~/.cshrc`, add the following lines: 73 74 {{{ 75 setenv CC mpicc 76 setenv CXX mpicxx 77 setenv F77 mpif77 78 }}} 79 80 And change your loaded packages to (no the removal of the pkgsrc): 81 82 {{{ 83 #Packages 84 module load comp-intel/2018.3.222 85 module load mpi-intel/2018.3.222 86 }}} 87 88 Then log out and back in, and reinstall the following packages: 89 - PETSc (use the pleiades script install-3.14-pleiades.sh) 90 - m1qn3 91 92 In addition, will need to build the external package: 93 - boost, install-1.55-pleiades.sh 73 94 - dakota, install-6.2-pleiades.sh 74 95 75 Finally, you will need to make with mpi compilers by usingthe following configuration script:96 Finally, you will need to use the following configuration script: 76 97 77 98 {{{ 78 99 #!/bin/csh 79 100 80 export F77=mpif7781 export FC=mpif9082 83 101 ./configure \ 84 102 --prefix=$ISSM_DIR \ 103 --enable-standalone-libraries \ 85 104 --with-wrappers=no \ 86 --with-petsc-dir="$ISSM_DIR/externalpackages/petsc/install" \ 87 --with-m1qn3-dir="$ISSM_DIR/externalpackages/m1qn3/install" \ 88 --with-boost-dir=/nasa/pkgsrc/sles12/2018Q3/ \ 105 --with-m1qn3-dir=$ISSM_DIR/externalpackages/m1qn3/install \ 106 --with-triangle-dir=$ISSM_DIR/externalpackages/triangle/install \ 107 --with-metis-dir=$PETSC_ROOT \ 108 --with-petsc-dir=$PETSC_ROOT \ 109 --with-scalapack-lib="-L/nasa/intel/Compiler/2018.3.222/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64/libmkl_scalapack_lp64.so" \ 110 --with-boost-dir=$ISSM_DIR/externalpackages/boost/install \ 89 111 --with-dakota-dir=$ISSM_DIR/externalpackages/dakota/install \ 90 --with-gsl-dir=/nasa/pkgsrc/sles12/2018Q3/ \91 112 --with-mpi-include=" " \ 92 113 --with-mpi-libflags=" -lmpi" \ 93 --with-mkl-libflags="-L/nasa/intel/Compiler/2016.2.181/compilers_and_libraries_2016.2.181/linux/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -limf -lsvml -lirc" \ 94 --with-metis-dir="$ISSM_DIR/externalpackages/petsc/install" \ 95 --with-mumps-dir="$ISSM_DIR/externalpackages/petsc/install" \ 96 
  == Running jobs on Pleiades ==
…
  {{{
  #!m
- md.cluster=pfe('numnodes',8,'time',30,'processor','wes','queue','devel');
+ md.cluster=pfe('numnodes',8,'time',30,'processor','bro','queue','devel');
  md.cluster.time=10;
  }}}

- to have a maximum job time of 10 minutes and 8 westmere nodes. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results.
+ to have a maximum job time of 10 minutes and 8 broadwell nodes. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results.

  Now if you want to check the status of your job and the queue you are using, use this command:
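
The command itself falls outside the excerpt above. Purely as a hedged illustration (an assumption, not taken from this diff): on PBS-managed systems such as Pleiades, job and queue status is commonly checked with `qstat`, for example:

{{{
# 'your_username' is a placeholder, not from the page
qstat -u your_username
}}}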