Changes between Version 4 and Version 5 of lonestar
Timestamp: 01/27/16 20:26:01
lonestar
The SSH configuration now points at the Lonestar 5 host name, `ls5.tacc.utexas.edu`, instead of `lonestar.tacc.utexas.edu`.

Version 4:
{{{
#!sh
Host lonestar lonestar.tacc.utexas.edu
HostName lonestar.tacc.utexas.edu
User YOURUSERNAME
HostKeyAlias lonestar.tacc.utexas.edu
HostbasedAuthentication no
}}}

Version 5:
{{{
#!sh
Host lonestar ls5.tacc.utexas.edu
HostName ls5.tacc.utexas.edu
User YOURUSERNAME
HostKeyAlias ls5.tacc.utexas.edu
HostbasedAuthentication no
}}}

The accompanying text is updated the same way: "and replace `YOURUSERNAME` by your lonestar username." becomes "and replace `YOURUSERNAME` by your lonestar5 username.", and "Once this is done, you can ssh lonestar by simply doing:" becomes "Once this is done, you can ssh lonestar5 by simply doing:".

Further down, in the environment setup, the module loads are updated: `module load cmake/2.8.7` and `module load mkl/10.3` are replaced by `module load cmake/3.4.1`.

Version 4:
{{{
export ISSM_DIR=PATHTOTRUNK
source $ISSM_DIR/etc/environment.sh
module load cmake/2.8.7
module load mkl/10.3
}}}

Version 5:
{{{
export ISSM_DIR=PATHTOTRUNK
source $ISSM_DIR/etc/environment.sh
module load cmake/3.4.1
}}}

In the configure options, the metis, MPI, and MKL paths are updated for the Lonestar 5 software stack: metis is now taken from the PETSc install, MPI moves from the MVAPICH2 tree to the Cray MPICH tree, and the MKL directory is spelled out instead of relying on `$TACC_MKL_LIB`.

Version 4 (excerpt of the configure options):
{{{
--with-kml=no \
--with-bamg=no \
--with-metis-dir=$ISSM_DIR/externalpackages/metis/install \
--with-petsc-dir=$ISSM_DIR/externalpackages/petsc/install \
--with-m1qn3-dir=$ISSM_DIR/externalpackages/m1qn3/install \
--with-mpi-include="/opt/apps/intel11_1/mvapich2/1.6/include/" \
--with-mpi-libflags="-L/opt/apps/intel11_1/mvapich2/1.6/lib/ -lmpich" \
--with-mkl-dir="$TACC_MKL_LIB" \
--with-mumps-dir=$ISSM_DIR/externalpackages/petsc/install/ \
--with-scalapack-dir=$ISSM_DIR/externalpackages/petsc/install/ \
}}}

Version 5 (excerpt of the configure options):
{{{
--with-kml=no \
--with-bamg=no \
--with-metis-dir="$ISSM_DIR/externalpackages/petsc/install" \
--with-petsc-dir=$ISSM_DIR/externalpackages/petsc/install \
--with-m1qn3-dir=$ISSM_DIR/externalpackages/m1qn3/install \
--with-mpi-include="/opt/cray/mpt/default/gni/mpich-intel/14.0/include/" \
--with-mpi-libflags="-L/opt/cray/mpt/default/gni/mpich-intel/14.0/lib/ -lmpich" \
--with-mkl-dir="/opt/apps/intel/16.0.1.150/compilers_and_libraries_2016.1.150/linux/mkl/lib/intel64" \
--with-mumps-dir=$ISSM_DIR/externalpackages/petsc/install/ \
--with-scalapack-dir=$ISSM_DIR/externalpackages/petsc/install/ \
}}}

In the cluster settings, the execution path moves from the home file system to the work file system.

Version 4:
{{{
cluster.login='seroussi';
cluster.codepath='/home1/03729/seroussi/trunk-jpl/bin/';
cluster.executionpath='/home1/03729/seroussi/trunk-jpl/execution/';
}}}

Version 5:
{{{
cluster.login='seroussi';
cluster.codepath='/home1/03729/seroussi/trunk-jpl/bin/';
cluster.executionpath='/work/03729/seroussi/trunk-jpl/execution/';
}}}

Finally, after the unchanged explanation that the example job requests 2 nodes with 12 cpus per node, for a total of 24 cores, version 5 adds submission instructions:

To submit a job on lonestar, do:

{{{
#!sh
sbatch job.queue
}}}

The section still closes with the unchanged sentence explaining how to check the status of your job and of the queue you are using from a bash session on lonestar; the command it refers to lies outside the changed lines.
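Since the changeset adds an `sbatch` submission step, the scheduler on Lonestar 5 is SLURM, so a minimal sketch of that status check, using only standard SLURM client commands, would look something like the following (the username placeholder, the partition name `normal`, and the job ID are illustrative, not taken from the page):

{{{
#!sh
# List your own jobs and their state (PD = pending, R = running).
squeue -u YOURUSERNAME

# Show the overall state of the partition the job was submitted to
# ("normal" is an illustrative partition name, not taken from the page).
sinfo -p normal

# Cancel a job if needed, using the job ID printed by sbatch.
scancel JOBID
}}}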
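`sbatch` prints the job ID when the job is accepted, so keeping that ID around makes `squeue -j JOBID` and `scancel JOBID` straightforward; the exact command the wiki page itself recommends sits in the unchanged portion of the page and may differ from this sketch.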