= discovery =
Add the following lines to `~/.ssh/config` on your local machine:

{{{
#!sh
Host discovery discovery7.dartmouth.edu
   Hostname discovery7.dartmouth.edu
   User USERNAME
}}}

and replace `USERNAME` with your discovery username (which should be your Dartmouth NetID).

Once this is done, you can ssh discovery by simply doing:

{{{
#!sh
ssh discovery
}}}

…

{{{
#!sh
…
}}}

Two files were created: your private key `/Users/username/.ssh/id_rsa`, and the public key `/Users/username/.ssh/id_rsa.pub`. The private key is read-only and for you alone; it is used to decrypt all correspondence encrypted with the public key. The contents of the public key need to be copied to `~/.ssh/authorized_keys` on your discovery account:

{{{
#!sh
…
}}}

Now on '''discovery''', copy the content of id_rsa.pub:

{{{
#!sh
…
}}}

== Environment ==

On discovery, add the following lines to `~/.bashrc`:
{{{
#!sh
…
}}}

''Log out and log back in'' to apply this change.

== Installing ISSM on discovery ==

discovery will ''only'' be used to run the code; you will use your local machine for pre- and post-processing, and you will never use discovery's MATLAB.
You can check out ISSM and install the following packages:
 - PETSc 3.15 (use the discovery script)
 - m1qn3

…

{{{
#!sh
./configure \
   --prefix=$ISSM_DIR \
   --with-wrappers=no \
   --with-petsc-dir="$ISSM_DIR/externalpackages/petsc/install" \
   --with-m1qn3-dir="$ISSM_DIR/externalpackages/m1qn3/install" \
   --with-mpi-include="/optnfs/el7/mpich/3.3-intel19.3/include" \
   --with-mpi-libflags=" -lmpi -lifport" \
   --with-mkl-libflags="$MKL_LIB" \
   --with-metis-dir="$ISSM_DIR/externalpackages/petsc/install" \
   --with-mumps-dir="$ISSM_DIR/externalpackages/petsc/install" \
   --with-scalapack-dir="$ISSM_DIR/externalpackages/petsc/install" \
   --with-cxxoptflags="-g -O3 -std=c++11" \
   --enable-development
}}}

== discovery_settings.m ==

Discovery staff ask that no "serious work" be done in your home directory; you should create an execution directory such as `/pub/$USERNAME/execution`.

You have to add a file in `$ISSM_DIR/src/m` entitled `discovery_settings.m` with your personal settings on your local ISSM install:

{{{
#!m
…
}}}

Use your username for the `login` and enter your code path and execution path. These settings will be picked up automatically by MATLAB when you do `md.cluster=discovery()`.
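For reference, here is a minimal sketch of what `discovery_settings.m` contains, assuming the same `cluster.*` field names as the `*_settings.m` files used for other ISSM clusters (the paths below are placeholders; adapt them to your own account):

{{{
#!m
% hypothetical discovery_settings.m; field names assumed from other ISSM cluster settings files
cluster.login='USERNAME';                         % your Dartmouth NetID
cluster.codepath='/pub/USERNAME/trunk-jpl/bin';   % assumed location of your ISSM binaries on discovery
cluster.executionpath='/pub/USERNAME/execution';  % the execution directory created above
}}}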
== Running jobs on discovery ==

On discovery, you can use up to 64 cores per node. The more nodes and the longer the requested time, the more you will have to wait in the queue, so choose your settings wisely:

{{{
#!m
md.cluster=discovery('numnodes',1,'cpuspernode',8);
}}}

…

to have a job of 8 cores on one node. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results.
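If 10 minutes are not enough, you can request a longer walltime on the cluster object before solving. A minimal sketch, assuming `discovery` exposes the same `time` field as other ISSM cluster classes (check `discovery.m` for the exact field name and units, commonly minutes):

{{{
#!m
md.cluster=discovery('numnodes',1,'cpuspernode',8);
md.cluster.time=60;  % assumed walltime field; one hour if the units are minutes
}}}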
Now if you want to check the status of your job and the queue you are using, type in your bash session on discovery:

{{{
#!sh
…
}}}

where JOBID is the ID of your job (indicated in the MATLAB session). MATLAB also indicates the directory of your job, where you can find the files `JOBNAME.outlog` and `JOBNAME.errlog`. The outlog file contains the information that would appear if you were running your job on your local machine, and the errlog file contains the error information in case the job encounters an error.

If you want to load results from the cluster manually (for example, if you have an error due to an internet interruption), find in the information MATLAB gave you the path `/home/srebuffi/trunk-jpl/execution//SOMETHING/JOBNAME.lock`, copy the SOMETHING, and type in MATLAB:
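A minimal sketch of that command, assuming the option-based `loadresultsfromcluster` interface (older ISSM versions take the runtime name as a plain second argument instead):

{{{
#!m
% 'SOMETHING' is the runtime directory name copied from the execution path above
md=loadresultsfromcluster(md,'runtimename','SOMETHING');
}}}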