Pleiades only has about 15 MATLAB licenses on the cluster and does not allow MATLAB to be run in batch jobs (submitted through qsub). The alternative is to compile your MATLAB functions for deployment using mcc (https://www.mathworks.com/help/compiler/mcc.html). To do so, you must first precompile your MATLAB code into an executable, and then submit that executable with a Pleiades job script.
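For reference, the core of the approach is a single mcc call of the following form (the script in the next section builds this command automatically); my_script.m and MCCexecutable are placeholder names:

#Build a standalone executable (-m) from my_script.m and name it MCCexecutable (-o)
mcc -m my_script.m -o MCCexecutable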
Precompile your MATLAB code
The following MATLAB script (createMCC.m) will turn a MATLAB script of your choice into an executable that can be submitted to Pleiades:
%file to be turned into an executable
filename = 'XXX.m'

%Get dependencies
files = matlab.codetools.requiredFilesAndProducts(filename);

%Create long string of dependencies
deps = [];
for i=1:numel(files)
	if contains(files{i},'normfit_issm.m')
		continue
	elseif contains(files{i},'dakota_moments.m')
		continue
	elseif contains(files{i},'dakota_out_parse.m')
		continue
	else
		deps = [deps ' ' files{i}];
	end
end

%Create command
command = ['mcc -m ' filename deps ' -o MCCexecutable']

%Create executable
system(command);
Run this script in MATLAB and the following two files will be generated: run_MCCexecutable.sh and MCCexecutable.
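Before copying these files to Pleiades, you can optionally sanity-check the executable on the machine where it was compiled. The wrapper script generated by mcc normally expects the MATLAB (or MATLAB Runtime) root directory as its first argument; the path below is only an example and must match your local installation:

#Run the compiled executable locally, pointing the wrapper at a local MATLAB installation
./run_MCCexecutable.sh /usr/local/MATLAB/R2017b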
Note the following potential problems:
- MCC needs to account for all *.m files that will be used while running your script, and it will not be able to interpret a new *.m file created on the fly (as we do with parameterize).
- It is helpful to remove dependencies that require special MATLAB licenses, since Pleiades only has a limited number of them. For instance, you can remove 'normfit_issm.m', 'dakota_moments.m', and 'dakota_out_parse.m' from the list of dependencies (if your code does not use them) so that MATLAB's Statistics Toolbox license is not required; to see which toolboxes your script pulls in, see the sketch after this list. Just make sure the paths to these files in the example code above match the paths on your machine.
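If you are unsure which toolboxes (and therefore which licenses) your script depends on, requiredFilesAndProducts also returns the list of required MathWorks products as a second output. A minimal sketch, assuming your script is still named XXX.m:

%List the MathWorks products (toolboxes) required by XXX.m
[files, products] = matlab.codetools.requiredFilesAndProducts('XXX.m');
for i = 1:numel(products)
	%Each entry has a Name field, e.g. 'Statistics and Machine Learning Toolbox'
	disp(products(i).Name);
end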
Submit a job
Make sure to change the paths and the group ID in the job script below to match those of your system:
#PBS -S /bin/bash
#PBS -l select=1:ncpus=28:model=bro
#PBS -l walltime=100
#PBS -q devel
#PBS -W group_list=s1690
#PBS -m e
#PBS -o /home1/mmorligh/issm/trunk-jpl/test/NightlyRun/run.outlog
#PBS -e /home1/mmorligh/issm/trunk-jpl/test/NightlyRun/run.errlog

. /usr/share/modules/init/bash

#load modules
module load comp-intel/2018.3.222
module load mpi-hpe/mpt.2.17r13
module load matlab/2017b
module load netcdf/4.4.1.1_mpt

#Export some variables
export PATH="$PATH:."
export MPI_LAUNCH_TIMEOUT=520
export MPI_GROUP_MAX=64

#ISSM stuff
export ISSM_DIR="/u/mmorligh/issm/trunk-jpl/"
source $ISSM_DIR/etc/environment.sh

#move and start simulation
cd /home1/mmorligh/issm/trunk-jpl/test/NightlyRun

#Run the precompiled code
./run_MCCexecutable.sh $ISSM_DIR/lib:$ISSM_DIR/externalpackages/gsl/install/lib:$ISSM_DIR/externalpackages/petsc/install/lib:/nasa/intel/Compiler/2016.2.181/mkl/lib/intel64/:/nasa/intel/Compiler/2016.2.181/compilers_and_libraries_2016.2.181/linux/compiler/lib/intel64/:/nasa/sgi/mpt/2.15r20/lib:/nasa/netcdf/4.4.1.1_mpt/lib:/nasa/matlab/2017b
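Once this job script is saved on Pleiades (the file name job.pbs below is just an example), submit it and monitor it with the standard PBS commands:

#Submit the job script
qsub job.pbs
#Check the status of your jobs
qstat -u $USER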
Potential Problems:
- If you have undefined symbols, you may need to add a path to the ./run_MCCexecutable.sh command.
- If you get an error that the path to your ISSM trunk is incorrect once your code is deployed, you may have to hardcode this path into src/m/os/issmdir.m (a minimal sketch is given after this list). This may also be caused by calls to addpath in your code, which should be removed before compiling.
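If you do end up hardcoding the ISSM trunk path, a minimal sketch of what src/m/os/issmdir.m can be reduced to is shown below; the path is only an example and must be replaced by the location of your own installation (this is a workaround sketch, not the function as distributed):

function ISSM_DIR = issmdir()
%ISSMDIR - return the ISSM trunk directory (hardcoded for deployed code)
%NOTE: example path, replace with your own trunk location
ISSM_DIR = '/home1/username/issm/trunk-jpl/';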