Changes between Version 2 and Version 3 of lonestar
Timestamp: 09/17/15 19:22:22
The changes are as follows.

'''.ssh/config:''' the placeholder `YOUR HPCUSERNAME` was renamed to `YOURUSERNAME`. The entry now reads:

{{{
Host lonestar lonestar.tacc.utexas.edu
  HostName lonestar.tacc.utexas.edu
  User YOURUSERNAME
  HostKeyAlias lonestar.tacc.utexas.edu
  HostbasedAuthentication no
}}}

and replace `YOURUSERNAME` by your lonestar username.

'''Password-less ssh:''' typo fix ("authentification" became "authentication"). The section now opens: "Once you have the account, you can set up public key authentication in order to avoid having to input your password for each run. You need an SSH public/private key pair. If you do not have one, create it by typing the following command and following the prompts (no passphrase necessary):"

The empty `STOP HERE` section was removed.

'''lonestar_settings.m:''' the `cluster.port=8000;` and `cluster.queue='pub64';` lines were dropped, and the hpc paths were replaced with lonestar paths:

{{{
#!m
cluster.login='seroussi';
cluster.codepath='/home1/03729/seroussi/trunk-jpl/bin/';
cluster.executionpath='/home1/03729/seroussi/trunk-jpl/execution/';
}}}

Use your username for the `login` and enter your code path and execution path. These settings are picked up automatically by matlab when you do `md.cluster=lonestar()` (previously `md.cluster=hpc()`).

'''Running jobs on lonestar''' (previously "Running jobs on hpc"): on lonestar, each node has 12 cores, and the total number of processors must be a multiple of 12. The more nodes and the longer the requested time, the more you will have to wait in the queue, so choose your settings wisely:

{{{
#!m
md.cluster=lonestar('numnodes',2);
}}}

to get a job with 2 nodes and 12 CPUs per node, i.e. a total of 24 cores. The old hpc example (`md.cluster=hpc('numnodes',1,'cpuspernode',8);`), the list of hpc queues (`'pub64'`, `'free64'`, `'free48'`, `'free*,pub64'`, `'free*'`), and the warning that runs longer than 10 minutes are killed were all removed.

Now if you want to check the status of your job and the queue you are using, type the following in the bash session connected to lonestar:

{{{
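The `~/.ssh/config` entry for lonestar can be sanity-checked locally before your first login. A minimal sketch, assuming OpenSSH 6.8 or newer (for the `-G` option) and a throwaway config file; no connection to lonestar is made:

```shell
# Write the example Host entry to a temporary config file and ask ssh
# how it would resolve the `lonestar` alias. -G prints the effective
# options without connecting; -F points ssh at our scratch config.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
Host lonestar lonestar.tacc.utexas.edu
  HostName lonestar.tacc.utexas.edu
  User YOURUSERNAME
  HostKeyAlias lonestar.tacc.utexas.edu
  HostbasedAuthentication no
EOF
resolved="$(ssh -G -F "$cfg" lonestar)"
# Show the options we care about (ssh -G lowercases the keywords).
echo "$resolved" | grep -E '^(hostname|user|hostkeyalias) '
rm -f "$cfg"
```

If the `hostname` and `user` lines printed here match the TACC host and your username, `ssh lonestar` will resolve as intended.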
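The key-pair step from the password-less ssh section can be scripted. A minimal sketch, assuming OpenSSH's `ssh-keygen`; the key is written to a scratch directory here rather than the default `~/.ssh/id_rsa` so it cannot overwrite an existing key:

```shell
# Generate an RSA key pair non-interactively: -N "" sets an empty
# passphrase, -f picks the output file, -q suppresses the banner.
keydir="$(mktemp -d)"
ssh-keygen -q -t rsa -N "" -f "$keydir/id_rsa"
ls "$keydir"
# The public half is what ends up in ~/.ssh/authorized_keys on lonestar,
# e.g. (hypothetical invocation, requires your password once):
#   ssh-copy-id -i "$keydir/id_rsa.pub" YOURUSERNAME@lonestar.tacc.utexas.edu
```

Keep the private key (`id_rsa`) on your machine only; just the `.pub` file is copied to the cluster.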