Changes between Version 5 and Version 6 of sherlock
- Timestamp: 09/24/19 15:28:49
sherlock
== sherlock_settings.m ==

You have to add a file in `$ISSM_DIR/src/m` entitled `sherlock_settings.m` with your personal settings:

{{{
#!m
cluster.login='wchu28';
cluster.port=0;
cluster.codepath='/home/users/wchu28/trunk-jpl/bin/';
cluster.executionpath='/home/users/wchu28/trunk-jpl/execution/';
}}}

Use your username for the `login` and enter your `codepath` and `executionpath`. These settings will be picked up automatically by Matlab when you do `md.cluster=sherlock()`.

== Running jobs on Sherlock ==

The more nodes and the longer the requested time, the more you will have to wait in the queue, so choose your settings wisely:

{{{
#!m
md.cluster=sherlock('numnodes',1,'cpuspernode',8);
md.cluster.time=10;
}}}

to have a maximum job time of 10 minutes and 8 cores on one node. If the run lasts longer than 10 minutes, it will be killed and you will not be able to retrieve your results.

Now if you want to check the status of your job and the queue you are using, type in a shell on '''Sherlock''':

{{{
#!sh
squeue -u username
}}}

You can delete your job manually by typing:

{{{
#!sh
scancel JOBID
}}}

where `JOBID` is the ID of your job (indicated in the Matlab session). Matlab also indicates the directory of your job, where you can find the files `JOBNAME.outlog` and `JOBNAME.errlog`. The outlog file contains the information that would appear if you were running your job on your local machine, and the errlog file contains the error information in case the job encounters an error.
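If you want to look at these logs directly on the cluster, here is a minimal sketch, assuming the `executionpath` from the settings example above; `LAUNCHSTRING` and `JOBNAME` stand for the run directory and job name that Matlab prints:

{{{
#!sh
# go to the run directory under your executionpath (name printed by Matlab)
cd /home/users/wchu28/trunk-jpl/execution/LAUNCHSTRING
# inspect the end of the standard output and error logs
tail JOBNAME.outlog
tail JOBNAME.errlog
}}}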
If you want to load results from the cluster manually (for example if you have an error due to an internet interruption), look in the information Matlab gave you for `$ISSM_DIR/execution/LAUNCHSTRING/JOBNAME.lock`, copy the LAUNCHSTRING, and type in Matlab:

{{{
#!m
md=loadresultsfromcluster(md,'LAUNCHSTRING','JOBNAME');
}}}

Note: in the case where `md.settings.waitonlock`>0 and you need to load manually (e.g., after an internet interruption), it is necessary to set `md.private.runtimename=LAUNCHSTRING;` before calling `loadresultsfromcluster` (see the sketch at the end of this page).

== slurm ==

A comparison of PBS to slurm commands can be found here: http://slurm.schedmd.com/rosetta.pdf

Useful commands:

Graphical overview of cluster usage:
{{{
sview
}}}

Get number of idle nodes:
{{{
sinfo --states=idle
}}}

See jobs of <username>:
{{{
squeue -u <username>
}}}

Get more information on the jobs of a user:
{{{
sacct -u <username> --format=User,JobID,account,Timelimit,elapsed,ReqMem,MaxRss,ExitCode
}}}

Get information on a partition:
{{{
scontrol show partition=<partition>
}}}

Get a sorted list of users on a partition:
{{{
squeue | grep -i <partition> | awk '{print $4}' | sort | uniq -c | sort -rn
}}}
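As referenced in the note above, here is a minimal sketch of the manual recovery workflow; it only combines the two calls documented earlier, with `LAUNCHSTRING` and `JOBNAME` standing for the run directory name and job name printed by Matlab:

{{{
#!m
% recover results by hand (e.g. after an internet interruption)
% when md.settings.waitonlock>0
md.private.runtimename='LAUNCHSTRING';
md=loadresultsfromcluster(md,'LAUNCHSTRING','JOBNAME');
}}}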