--prefix=$ISSM_DIR \
--with-wrappers=no \
--with-kriging=no \
--with-kml=no \
--with-bamg=no \
--without-Love \
--with-metis-dir="$ISSM_DIR/externalpackages/petsc/install" \
--with-petsc-dir="$ISSM_DIR/externalpackages/petsc/install" \
--with-m1qn3-dir="$ISSM_DIR/externalpackages/m1qn3/install" \
--with-mpi-include="/sopt/OpenMPI/3.1.2/intel-2018.3-slim/include" \
--with-mpi-libflags="-L/sopt/OpenMPI/3.1.2/intel-2018.3-slim/lib -lmpi -lm -lmpi_mpifh" \
--with-mkl-libflags="-L/sopt/MKL/2018.3/lib -mkl=cluster" \
--with-mumps-dir=$ISSM_DIR/externalpackages/petsc/install/ \
--with-scalapack-dir=$ISSM_DIR/externalpackages/petsc/install/ \
--with-cxxoptflags="-O3 -fPIC -std=c++11" \
--with-vendor=intel-gp
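
# The options below configure ISSM against a conda environment ($CONDA_DIR) instead
# of the system Intel/OpenMPI stack and enable the Python 3.7 interface; they are
# followed by the make commands that compile and install ISSM.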
--prefix="$ISSM_DIR" \
--disable-static \
--enable-development \
--with-numthreads=8 \
--with-python-version=3.7 \
--with-python-dir="$CONDA_DIR" \
--with-python-numpy-dir="$CONDA_DIR/lib/python3.7/site-packages/numpy/core/include/numpy" \
--with-fortran-lib="-L$CONDA_DIR/lib/gcc/x86_64-conda-linux-gnu/7.5.0/ -lgfortran" \
--with-mpi-include="$CONDA_DIR/lib/include" \
--with-mpi-libflags="-L$CONDA_DIR/lib" \
--with-metis-dir="$CONDA_DIR/lib" \
--with-scalapack-dir="$CONDA_DIR/lib" \
--with-mumps-dir="$CONDA_DIR/lib" \
--with-petsc-dir="$CONDA_DIR" \
--with-m1qn3-dir="$ISSM_DIR/externalpackages/m1qn3/install"

make --jobs=8
make install

== Running jobs on Greenplanet ==

On Greenplanet, you can use up to 30 cores per node (partition `ilg2.3`). The more nodes and the longer the time you request, the longer your job will wait in the queue, and each job can request at most 125GB of RAM, so choose your settings wisely:
172 | | |
173 | | {{{ |
174 | | #!m |
175 | | md.cluster=greenplanet('numnodes',1,'cpuspernode',8); |
176 | | md.cluster.time=10; |
177 | | }}} |

This requests 8 cores on one node with a maximum job time of 10 minutes. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results.
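
For a larger run, you scale the same fields up. Below is a minimal sketch reusing only the options shown above (2 nodes with 16 cores each and a 2-hour limit); adapt the numbers to your own simulation:

{{{
#!m
% larger request (sketch): 2 nodes x 16 cores, 120-minute maximum job time
md.cluster=greenplanet('numnodes',2,'cpuspernode',16);
md.cluster.time=120;
}}}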

To check the status of your job and of the queue you are using, type the following in a bash session on '''Greenplanet''':

{{{
#!sh
squeue -u username
}}}

You can cancel your job manually by typing:

{{{
#!sh
scancel JOBID
}}}

where `JOBID` is the ID of your job (shown in the Matlab session). Matlab also prints the directory of your job, where you will find the files `JOBNAME.outlog` and `JOBNAME.errlog`. The outlog file contains the output that would appear if you were running the job on your local machine, and the errlog file contains the error messages if the job fails.

If you need to load results from the cluster manually (for example after an internet interruption), look in the information Matlab printed for the path `$ISSM_DIR/execution/LAUNCHSTRING/JOBNAME.lock`, copy the LAUNCHSTRING, and type in Matlab:

{{{
#!m
md=loadresultsfromcluster(md,'LAUNCHSTRING','JOBNAME');
}}}

Note: if `md.settings.waitonlock`>0 and you need to load the results manually (e.g., after an internet interruption), you must set `md.private.runtimename=LAUNCHSTRING;` before calling `loadresultsfromcluster`.
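
Putting the two steps together, a manual reload looks like the following sketch, where LAUNCHSTRING and JOBNAME stand for the actual values taken from your run's execution directory:

{{{
#!m
% manual recovery after an interruption (sketch); replace LAUNCHSTRING and JOBNAME
% with the values read off $ISSM_DIR/execution/LAUNCHSTRING/JOBNAME.lock
md.private.runtimename='LAUNCHSTRING';   % required when md.settings.waitonlock>0
md=loadresultsfromcluster(md,'LAUNCHSTRING','JOBNAME');
}}}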

== Slurm ==

A comparison of PBS and Slurm commands can be found here: http://slurm.schedmd.com/rosetta.pdf

Useful commands:

Graphical overview of Greenplanet usage:
{{{
#!sh
sview
}}}

Get the number of idle nodes:
{{{
#!sh
sinfo --states=idle
}}}

See the jobs of <username>:
{{{
#!sh
squeue -u <username>
}}}

Get more information on the jobs of <username>:
{{{
#!sh
sacct -u <username> --format=User,JobID,account,Timelimit,elapsed,ReqMem,MaxRss,ExitCode
}}}

Get information on a partition (here `ilg2.3`):
{{{
#!sh
scontrol show partition=ilg2.3
}}}

Get a sorted list of users on a partition:
{{{
#!sh
squeue | grep -i ilg2.3 | awk '{print $4}' | sort | uniq -c | sort -rn
}}}