Changes between Version 7 and Version 8 of aci


Timestamp: 07/19/17 06:37:34
Author: eps5217
Comment: (none)

Legend: lines prefixed '-' were removed in v8, lines prefixed '+' were added in v8, unprefixed lines are unchanged context, and '…' marks elided unchanged lines.
  • aci

  And replace `ISSM_DIR` with your actual trunk. ''Log out and log back in'' to apply this change.

- == Installing ISSM on Pleiades ==
+ == Installing ISSM on ACI ==

  Pleiades will ''only'' be used to run the code; you will use your local machine for pre- and post-processing, and you will never use Pleiades' MATLAB. You can check out ISSM and install the following packages (a rough build sketch follows below):
…
   - PETSc (use the pleiades script and ''follow'' the instructions; you will need to submit a job and compile PETSc manually; do not run `make test`, as it will not work on the cluster)
   - m1qn3
-
- For documentation of pleiades, see here: http://www.nas.nasa.gov/hecc/support/kb/
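A rough sketch of that build step, assuming the usual `externalpackages` layout; the script names and versions here are illustrative, not the page's actual instructions:

{{{
#!sh
# Hypothetical sketch; exact install script names vary by ISSM revision.
cd $ISSM_DIR/externalpackages/petsc
./install-3.7-pleiades.sh      # prepares a job to submit; compile PETSc manually as instructed
cd $ISSM_DIR/externalpackages/m1qn3
./install.sh
}}}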

  Use the following configuration script (adapt to your needs):
…
  }}}
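The script body is unchanged between v7 and v8 and elided above. For orientation only, an ISSM configuration script typically has this shape; every flag and path below is an assumption, to be adapted:

{{{
#!sh
# Hypothetical shape of an ISSM configuration script; adapt paths as needed.
./configure \
 --prefix=$ISSM_DIR \
 --with-petsc-dir=$ISSM_DIR/externalpackages/petsc/install \
 --with-m1qn3-dir=$ISSM_DIR/externalpackages/m1qn3/install
}}}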

- == Installing ISSM on Pleiades with Dakota ==
+ == aci_settings.m ==

- For Dakota to run, you will still need to build autotools, cmake, PETSc, and m1qn3.
-
- In addition, you will need to build the external packages:
-  - boost, install-1.55-pleiades.sh
-  - dakota, install-6.2-pleiades.sh
-
- Finally, add the following to your configuration script:
-
- {{{
-  --with-boost-dir=$ISSM_DIR/externalpackages/boost/install \
-  --with-dakota-dir=$ISSM_DIR/externalpackages/dakota/install \
- }}}
-
- == pfe_settings.m ==
-
- You have to add a file in `$ISSM_DIR/src/m` entitled `pfe_settings.m` with your personal settings:
+ You have to add a file in `$ISSM_DIR/src/m` entitled `aci_settings.m` with your personal settings:

  {{{
…
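The settings themselves are unchanged and elided above. A minimal sketch of what such a settings file usually contains; the field names are assumptions patterned on ISSM's other cluster settings files:

{{{
#!m
% Hypothetical aci_settings.m; field names are assumptions.
cluster.login='your_username';
cluster.codepath='/home/your_username/trunk-jpl/bin';
cluster.executionpath='/home/your_username/trunk-jpl/execution';
}}}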
  Make sure your module list includes pkgsrc, comp-intel/2016.2.181, and mpi-sgi/mpt.
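For example, with the standard environment-modules command:

{{{
#!sh
module load pkgsrc comp-intel/2016.2.181 mpi-sgi/mpt
module list    # confirm all three are loaded
}}}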
- == Running jobs on Pleiades ==
+ == Running jobs on ACI ==

  On Pleiades, the more nodes and the longer the requested time, the longer you will wait in the queue, so choose your settings wisely:
…
  {{{
  #!m
- md.cluster=pfe('numnodes',8,'time',30,'processor','wes','queue','devel');
+ md.cluster=aci('numnodes',8,'time',30,'processor','wes','queue','devel');
  md.cluster.time=10;
  }}}
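With the cluster object set, the run is then launched through ISSM's usual solve call; a minimal usage sketch, where the solution name is illustrative:

{{{
#!m
% Hypothetical usage sketch: submit the run defined by md.cluster above.
md=solve(md,'Stressbalance');   % pick the solution matching your analysis
}}}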
…
  }}}

- where JOBID is the ID of your job (indicated in the Matlab session). Matlab also indicates your job's directory, where you can find the files `JOBNAME.outlog` and `JOBNAME.errlog`. The outlog file contains the information that would appear if you were running the job on your local machine, and the errlog file contains the error information in case the job encounters an error.
+ where JOBID is the ID of your job (indicated in the MATLAB session). MATLAB also indicates your job's directory, where you can find the files `JOBNAME.outlog` and `JOBNAME.errlog`. The outlog file contains the information that would appear if you were running the job on your local machine, and the errlog file contains the error information in case the job encounters an error.
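To inspect those logs directly on the cluster, a quick sketch; SOMETHING and JOBNAME are this page's own placeholders:

{{{
#!sh
cd ~/trunk-jpl/execution/SOMETHING    # the execution directory reported by MATLAB
cat JOBNAME.outlog JOBNAME.errlog
}}}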
- If you want to load results from the cluster manually (for example, after an error due to an internet interruption), find in the information Matlab gave you the path `/home/srebuffi/trunk-jpl/execution//SOMETHING/JOBNAME.lock`, copy the SOMETHING, and type in Matlab:
+ If you want to load results from the cluster manually (for example, after an error due to an internet interruption), find in the information MATLAB gave you the path `/home/srebuffi/trunk-jpl/execution//SOMETHING/JOBNAME.lock`, copy the SOMETHING, and type in MATLAB:

  {{{