Changes between Version 5 and Version 6 of sherlock


Timestamp: 09/24/19 15:28:49 (5 years ago)
Author: wchu28
Comment: --

Legend:
  Unmodified: line numbers shown in both the v5 and v6 columns
  Added: line number shown in the v6 column only
  Removed: line number shown in the v5 column only
  Modified: shown as a Removed line followed by its Added replacement
  • sherlock

     v5   v6
     48   48  == sherlock_settings.m ==
     49   49  
     50       HPC staff ask that no "serious work" should be done on your home directory, you should create an execution directory as `/pub/$USERNAME/execution`.
     51       
     52       You have to add a file in `$ISSM_DIR/src/m` entitled `hpc_settings.m` with your personal settings on your local ism install:
          50  You have to add a file in `$ISSM_DIR/src/m` entitled `sherlock_settings.m` with your personal settings:
     53   51  
     54   52  {{{
     55   53  #!m
     56       cluster.login='mmorligh';
     57       cluster.port=8000;
     58       cluster.queue='pub64';
     59       cluster.codepath='/data/users/mmorligh/trunk-jpl/bin/';
     60       cluster.executionpath='/data/users/mmorligh/trunk-jpl/execution/';
          54  cluster.login='wchu28';
          55  cluster.port=0;
          56  cluster.codepath='/home/users/wchu28/trunk-jpl/bin/';
          57  cluster.executionpath='/home/users/wchu28/trunk-jpl/execution/';
     61   58  }}}
     62   59  
     63       use your username for the `login` and enter your code path and execution path. These settings will be picked up automatically by matlab when you do `md.cluster=hpc()`
          60  Use your username for the `login` and enter your `codepath` and `executionpath`. These settings will be picked up automatically by Matlab when you do `md.cluster=sherlock()`.
     64   61  
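    A quick way to confirm the settings are found (a minimal sketch; it assumes `sherlock_settings.m` from above has been saved in `$ISSM_DIR/src/m` and that directory is on your Matlab path):
    {{{
    #!m
    %sherlock() reads sherlock_settings.m automatically when the file is on the path
    cluster=sherlock();
    disp(cluster.login)   %should print the username you entered above
    }}}
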
     65       == Running jobs on Sherlock  ==
          62  == Running jobs on Sherlock ==
     66   63  
     67       On hpc, you can use up to 64 cores per node. The more nodes and the longer the requested time, the more you will have to wait in the queue. So choose your settings wisely:
          64  On Greenplanet, you can use up to 30 cores per node (partition `ilg2.3`). The more nodes and the longer the requested time, the more you will have to wait in the queue. Per job you can only request up to 125GB of RAM. So choose your settings wisely:
     68   65  
     69   66  {{{
     70   67  #!m
     71       md.cluster=sherlock('numnodes',1,'cpuspernode',8);
          68  md.cluster=greenplanet('numnodes',1,'cpuspernode',8);
          69  md.cluster.time=10;
     72   70  }}}
     73   71  
     74       The list of available queues is `'pub64','free64','free48','free*,pub64'` and `'free*'`.
          72  to have a maximum job time of 10 minutes and 8 cores on one node. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results.
     75   73  
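    For example (a sketch only; the cluster class name, walltime, and solution type are assumptions for illustration, following the `sherlock`/`greenplanet` calls above), a short run could be submitted with:
    {{{
    #!m
    md.cluster=sherlock('numnodes',1,'cpuspernode',8);   %1 node, 8 cores
    md.cluster.time=30;                                  %request a 30 minute walltime
    md=solve(md,'Stressbalance');                        %submit the job; results are downloaded when it completes
    }}}
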
     76       to have a job of 8 cores on one node. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results.
     77       
     78       Now if you want to check the status of your job and the queue you are using, type in the bash with the hpc session:
          74  Now if you want to check the status of your job and the queue you are using, type the following in a bash session on '''Greenplanet''':
     79   75  
     80   76  {{{
     81   77  #!sh
     82       qstat -u USERNAME
          78  squeue -u username
     83   79  }}}
     84   80  
     
     87   83  {{{
     88   84  #!sh
     89       qdel JOBID
          85  scancel JOBID
     90   86  }}}
     91   87  
     92       where JOBID is the ID of your job (indicated in the Matlab session). Matlab indicates too the directory of your job where you can find the files `JOBNAME.outlog` and `JOBNAME.errlog`. The outlog file contains the informations that would appear if you were running your job on your local machine and the errlog file contains the error information in case the job encounters an error.
          88  where `JOBID` is the ID of your job (indicated in the Matlab session). Matlab also indicates the directory of your job, where you can find the files `JOBNAME.outlog` and `JOBNAME.errlog`. The outlog file contains the information that would appear if you were running your job on your local machine, and the errlog file contains the error information in case the job encounters an error.
     93   89  
     94       If you want to load results from the cluster manually (for example if you have an error due to an internet interruption), you find in the informations Matlab gave you `/home/srebuffi/trunk-jpl/execution//SOMETHING/JOBNAME.lock `, you copy the SOMETHING and you type in Matlab:
          90  If you want to load results from the cluster manually (for example after an error due to an internet interruption), find the path `$ISSM_DIR/execution/LAUNCHSTRING/JOBNAME.lock` in the information Matlab gave you, copy the LAUNCHSTRING, and type in Matlab:
     95   91  
     96   92  {{{
     97   93  #!m
     98       md=loadresultsfromcluster(md,'SOMETHING');
          94  md=loadresultsfromcluster(md,'LAUNCHSTRING','JOBNAME');
     99   95  }}}
    100   96  
          97  Note: if `md.settings.waitonlock`>0 and you need to load results manually (e.g., after an internet interruption), set `md.private.runtimename=LAUNCHSTRING;` before calling `loadresultsfromcluster`.
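    A sketch of that manual recovery (LAUNCHSTRING and JOBNAME stand for the values Matlab printed at submission time; both are placeholders):
    {{{
    #!m
    md.private.runtimename='LAUNCHSTRING';                   %only needed when md.settings.waitonlock>0
    md=loadresultsfromcluster(md,'LAUNCHSTRING','JOBNAME');  %then load the results as shown above
    }}}
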
    101   98  
          99  == slurm ==
         100  
         101  A comparison of PBS to slurm commands can be found here: http://slurm.schedmd.com/rosetta.pdf
         102  
         103  Useful commands:
         104  
         105  Graphical overview of greenplanet usage:
         106  {{{
         107  sview
         108  }}}
         109  
         110  Get number of idle nodes:
         111  {{{
         112  sinfo --states=idle
         113  }}}
         114  
         115  See jobs of <username>:
         116  {{{
         117  squeue -u <username>
         118  }}}
         119  
         120  Get more information on jobs of user:
         121  {{{
         122  sacct -u <username> --format=User,JobID,account,Timelimit,elapsed,ReqMem,MaxRss,ExitCode
         123  }}}
         124  
         125  Get information on partition (here ilg2.3):
         126  {{{
         127  scontrol show partition=ilg2.3
         128  }}}
         129  
         130  Get sorted list of users on partition:
         131  {{{
         132  squeue | grep -i ilg2.3 | awk '{print $4}' | sort | uniq -c | sort -rn
         133  }}}