
Installation on SMCE head node (external packages, binaries only)

  • Added the following to ~/.bashrc,
    # Spack Modules
    source /shared/spack/share/spack/setup-env.sh
    module load intelmpi
    #module load python-3.8.12-gcc-9.4.0-uyaszn4
    module load subversion-1.14.0-gcc-11.1.0-evgneyf
    
    ###################
    ### Environment ###
    ###################
    export CC=/opt/intel/mpi/2021.4.0/bin/mpicc
    export CXX=/opt/intel/mpi/2021.4.0/bin/mpicxx
    export FC=/opt/intel/mpi/2021.4.0/bin/mpifc
    
    export MPI_HOME=/opt/intel/mpi/2021.4.0
    
    ######################
    ### Path Variables ###
    ######################
    
    # ISSM
    export ISSM_DIR="/efs/issm/binaries/repos/trunk-jpl-working"
    export ISSM_EXT_DIR="/efs/issm/binaries/ext"
    
  • Checked out a copy of the ISSM development repository with,
    mkdir -p /efs/issm/binaries/repos
    svn co https://issm.ess.uci.edu/svn/issm/issm/trunk-jpl $ISSM_DIR
    
  • Copied $ISSM_DIR/jenkins/ross-debian_linux-full to $ISSM_DIR/jenkins/eis-smce-binaries
  • Modified $ISSM_DIR/jenkins/eis-smce-binaries with,
    #--------------------#
    # ISSM Configuration #
    #--------------------#
    
    ISSM_CONFIG='\
    	--prefix="${ISSM_DIR}" \
    	--disable-static \
    	--with-wrappers=no \
    	--enable-development \
    	--enable-debugging \
    	--with-numthreads=48 \
    	--with-fortran-lib="-L/usr/lib/gcc/x86_64-linux-gnu/9 -lgfortran" \
    	--with-mpi-include="/opt/intel/mpi/2021.4.0/include" \
    	--with-mpi-libflags="-L/opt/intel/mpi/2021.4.0/lib -lmpi -lmpicxx -lmpifort" \
    	--with-blas-lapack-dir="${ISSM_EXT_DIR}/petsc/install" \
    	--with-metis-dir="${ISSM_EXT_DIR}/petsc/install" \
    	--with-parmetis-dir="${ISSM_EXT_DIR}/petsc/install" \
    	--with-scalapack-dir="${ISSM_EXT_DIR}/petsc/install" \
    	--with-mumps-dir="${ISSM_EXT_DIR}/petsc/install" \
    	--with-hdf5-dir="${ISSM_EXT_DIR}/petsc/install" \
    	--with-petsc-dir="${ISSM_EXT_DIR}/petsc/install" \
    	--with-gsl-dir="${ISSM_EXT_DIR}/gsl/install" \
    	--with-boost-dir="${ISSM_EXT_DIR}/boost/install" \
    	--with-dakota-dir="${ISSM_EXT_DIR}/dakota/install" \
    	--with-proj-dir="${ISSM_EXT_DIR}/proj/install" \
    	--with-triangle-dir="${ISSM_EXT_DIR}/triangle/install" \
    	--with-chaco-dir="${ISSM_EXT_DIR}/chaco/install" \
    	--with-m1qn3-dir="${ISSM_EXT_DIR}/m1qn3/install" \
    	--with-semic-dir="${ISSM_EXT_DIR}/semic/install"
    '
    
    #-------------------#
    # External Packages #
    #-------------------#
    
    EXTERNALPACKAGES="
    	autotools	install-linux.sh
    	cmake		install.sh
    	petsc		install-3.12-linux.sh
    	gsl			install.sh
    	boost		install-1.7-linux.sh
    	dakota		install-6.2-linux.sh
    	curl		install-7-linux.sh
    	netcdf		install-4.7-parallel.sh
    	sqlite		install.sh
    	proj		install-6.sh
    	gdal		install-3.sh
    	gshhg		install.sh
    	gmt			install-6-linux.sh
    	gmsh		install-4-linux.sh
    	triangle	install-linux.sh
    	chaco		install.sh
    	m1qn3		install.sh
    	semic		install.sh
    "
    
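The EXTERNALPACKAGES variable is a whitespace-separated list of package/install-script pairs that jenkins.sh walks in order. A hedged Python sketch of how such a list splits into pairs (this mimics shell word-splitting; it is not the actual jenkins.sh implementation):

```python
# Hedged sketch: parse an EXTERNALPACKAGES-style list into (package, script)
# pairs, in install order. Illustrative only; jenkins.sh does this in shell.
EXTERNALPACKAGES = """
    autotools   install-linux.sh
    cmake       install.sh
    petsc       install-3.12-linux.sh
"""

def parse_external_packages(spec):
    tokens = spec.split()  # whitespace-splitting, like the shell
    assert len(tokens) % 2 == 0, 'expected <package> <script> pairs'
    return list(zip(tokens[0::2], tokens[1::2]))

pairs = parse_external_packages(EXTERNALPACKAGES)
print(pairs[0])  # ('autotools', 'install-linux.sh')
```

Order matters here: packages that later scripts link against (e.g. petsc) must appear before their dependents.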
  • Committed $ISSM_DIR/jenkins/eis-smce-binaries to repo
  • Installed each of the external packages listed in $ISSM_DIR/jenkins/eis-smce-binaries, after changing each of the installation paths with,
    PREFIX=${ISSM_EXT_DIR}/<pkg>/install
    
    and making the following modifications,
    • Modified $ISSM_DIR/externalpackages/petsc/install-3.12-linux.sh by,
      • removing,
        --download-mpich=1
        
      • adding,
        --with-mpi-dir=/opt/intel/mpi/2021.4.0
        --known-mpi-shared-libraries=1
        
    • $ISSM_DIR/externalpackages/dakota/install-6.2-linux.sh requires Python 2 to build, so either (1) the Spack module for Python 3 cannot be loaded, or (2) the install script has to be made to find Python 2 explicitly (perhaps with alias python=/usr/bin/python)
    • Modified $ISSM_DIR/externalpackages/gdal/install-3.sh with,
      --with-curl="${CURL_ROOT}/bin/curl-config"
      
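The per-package PREFIX change above is mechanical across all of the install scripts. A hedged sketch of automating it (a hypothetical helper; the edits described in this document were made by hand):

```python
import re

# Hedged sketch: point an install script's PREFIX at ${ISSM_EXT_DIR}/<pkg>/install.
# Hypothetical helper, not part of ISSM; shown only to illustrate the edit.
def set_prefix(script_text, pkg, ext_dir='${ISSM_EXT_DIR}'):
    new_prefix = 'PREFIX={}/{}/install'.format(ext_dir, pkg)
    # Replace only the first PREFIX= assignment, as in the manual edit
    return re.sub(r'^PREFIX=.*$', new_prefix, script_text, count=1, flags=re.M)

script = 'PREFIX="${ISSM_DIR}/externalpackages/gsl/install"\nconfigure --prefix=$PREFIX\n'
print(set_prefix(script, 'gsl'))
```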

Installation on MATLAB node (MATLAB modules)

TO BE ADDED AFTER FURTHER INSTALLATION AND DEBUGGING

Installation on Daskhub (Python modules)


NOTE: Binaries are also compiled because we do not have a good way of disabling them. We could ignore them, remove them, or let users know that they are available for smaller test runs.


  • Added the following to ~/.bash_profile,
    # File system modifications that are not currently part of image
    #
    
    # Create a link to gl.h so that the CMake configuration can find it.
    # Note that we need to add it here because it does not persist.
    if [ ! -f /srv/conda/envs/notebook/include/GL/gl.h ]; then
            ln -s /srv/conda/envs/notebook/include/gstreamer-1.0/gst/gl/gl.h  /srv/conda/envs/notebook/include/GL/gl.h
    fi
    
    # Set permissions on private key
    chmod 400 ~/.ssh/eis-nasa-smce-head-node
    
    
    ## Environment
    #
    export OPENSSL_ROOT_DIR="/srv/conda/envs/notebook"
    
    
    ## Path Variables
    #
    
    # ISSM
    export ISSM_DIR="/efs/issm/python-modules/repos/trunk-jpl-working"
    export ISSM_EXT_DIR="/efs/issm/python-modules/ext"
    
    
    ## Aliases
    #
    alias issm-python-dev-env="source "\${ISSM_DIR}"/etc/environment.sh; export PYTHONPATH="\${ISSM_DIR}"/src/m/dev; export PYTHONSTARTUP="\${PYTHONPATH}"/devpath.py; export PYTHONUNBUFFERED=1; cd "\${ISSM_DIR}"/test/NightlyRun; LD_PRELOAD=/srv/conda/envs/notebook/lib/libpython3.9.so.1.0:"\${ISSM_EXT_DIR}"/petsc/install/lib/libmpifort.so.12 python" # Then run ./runme.py <options>
    
  • Checked out a copy of the ISSM development repository with,
    mkdir -p /efs/issm/python-modules/repos
    svn co https://issm.ess.uci.edu/svn/issm/issm/trunk-jpl $ISSM_DIR
    
  • Copied $ISSM_DIR/jenkins/ross-debian_linux-full to $ISSM_DIR/jenkins/eis-daskhub-python-modules
  • Modified $ISSM_DIR/jenkins/eis-daskhub-python-modules with,
    #--------------------#
    # ISSM Configuration #
    #--------------------#
    
    ISSM_CONFIG='\
    	--prefix="${ISSM_DIR}" \
    	--disable-static \
    	--enable-development \
    	--enable-debugging \
    	--with-numthreads=4 \
    	--with-python-dir="/srv/conda/envs/notebook" \
    	--with-python-version="3.9" \
    	--with-python-numpy-dir="/srv/conda/envs/notebook/lib/python3.9/site-packages/numpy/core/include/numpy" \
    	--with-fortran-lib="-L/usr/lib/x86_64-linux-gnu -lgfortran" \
    	--with-mpi-include="${ISSM_EXT_DIR}/petsc/install/include" \
    	--with-mpi-libflags="-L${ISSM_EXT_DIR}/petsc/install/lib -lmpi -lmpicxx -lmpifort" \
    	--with-blas-lapack-dir="${ISSM_EXT_DIR}/petsc/install" \
    	--with-metis-dir="${ISSM_EXT_DIR}/petsc/install" \
    	--with-parmetis-dir="${ISSM_EXT_DIR}/petsc/install" \
    	--with-scalapack-dir="${ISSM_EXT_DIR}/petsc/install" \
    	--with-mumps-dir="${ISSM_EXT_DIR}/petsc/install" \
    	--with-hdf5-dir="${ISSM_EXT_DIR}/petsc/install" \
    	--with-petsc-dir="${ISSM_EXT_DIR}/petsc/install" \
    	--with-gsl-dir="${ISSM_EXT_DIR}/gsl/install" \
    	--with-boost-dir="${ISSM_EXT_DIR}/boost/install" \
    	--with-dakota-dir="${ISSM_EXT_DIR}/dakota/install" \
    	--with-proj-dir="${ISSM_EXT_DIR}/proj/install" \
    	--with-triangle-dir="${ISSM_EXT_DIR}/triangle/install" \
    	--with-chaco-dir="${ISSM_EXT_DIR}/chaco/install" \
    	--with-m1qn3-dir="${ISSM_EXT_DIR}/m1qn3/install" \
    	--with-semic-dir="${ISSM_EXT_DIR}/semic/install" \
    '
    
    #-------------------#
    # External Packages #
    #-------------------#
    
    EXTERNALPACKAGES="
    	autotools	install-linux.sh
    	cmake		install.sh
    	petsc		install-3.16-linux.sh
    	gsl			install.sh
    	boost		install-1.7-linux.sh
    	dakota		install-6.2-linux.sh
    	curl		install-7-linux.sh
    	netcdf		install-4.7-parallel.sh
    	sqlite		install.sh
    	proj		install-6.sh
    	gdal		install-3-python.sh
    	gshhg		install.sh
    	gmt			install-6-linux.sh
    	gmsh		install-4-linux.sh
    	triangle	install-linux.sh
    	chaco		install.sh
    	m1qn3		install.sh
    	semic		install.sh
    "
    
  • Committed $ISSM_DIR/jenkins/eis-daskhub-python-modules to repo
  • Before installing any of the external packages, needed to deactivate the Conda environment with conda deactivate
  • Installed each of the external packages in $ISSM_DIR/jenkins/eis-daskhub-python-modules, after changing each of the installation paths with,
    PREFIX=${ISSM_EXT_DIR}/<pkg>/install
    
    and making the following modifications
    • Needed to remove,
      --FFLAGS="-fallow-argument-mismatch"
      
      from $ISSM_DIR/externalpackages/petsc/install-3.16-linux.sh
    • Needed to modify $ISSM_DIR/externalpackages/gdal/install-3-python.sh with,
      export CPATH="${CPATH}:/srv/conda/envs/notebook/include"
      export LDFLAGS="-L${NETCDF_ROOT}/lib -lnetcdf"
      export LIBRARY_PATH="${LIBRARY_PATH}:/srv/conda/envs/notebook/lib"
      export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/srv/conda/envs/notebook/lib"
      
      and,
      --with-sqlite3="${SQLITE_ROOT}" \
      --with-libjson-c="/srv/conda/envs/notebook"
      
    • Needed to modify $ISSM_DIR/externalpackages/gmt/install-6-linux.sh with,
      export CC=mpicc
      export CPATH="${CPATH}:/srv/conda/envs/notebook/include"
      export LIBRARY_PATH="${LIBRARY_PATH}:/srv/conda/envs/notebook/lib"
      export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/srv/conda/envs/notebook/lib"
    
  • With Conda environment still disabled, ran,
    alias envsubst="/srv/conda/envs/notebook/bin/envsubst"
    jenkins/jenkins.sh jenkins/eis-daskhub-python-modules
    
  • Ran,
    mkdir ~/.ssh
    ssh-keygen -f ~/.ssh/eis-nasa-smce
    
    then gave contents of public key to SMCE team to add to SMCE head node
  • Ran,
    chmod 400 ~/.ssh/eis-nasa-smce
    
  • Created $ISSM_DIR/src/m/classes/clusters/eis_nasa_smce.py with contents,
    import os
    import shutil
    import subprocess
    from math import floor
    
    try:
        from eis_nasa_smce_settings import eis_nasa_smce_settings
    except ImportError:
        print('You need eis_nasa_smce_settings.py to proceed, check presence and sys.path')
    from fielddisplay import fielddisplay
    from helpers import *
    from IssmConfig import IssmConfig
    from issmssh import issmssh
    from MatlabFuncs import *
    from pairoptions import pairoptions
    
    class eis_nasa_smce(object):
        """EIS_NASA_SMCE cluster class definition
    
        Usage:
            cluster = eis_nasa_smce()
            cluster = eis_nasa_smce('np', 3)
        cluster = eis_nasa_smce('np', 3, 'login', 'username')
        """
    
        def __init__(self, *args):  # {{{
            self.name = '52.10.233.96'
            self.login = 'jdquinn1'
            self.idfile = '~/.ssh/eis-nasa-smce'
            self.modules = ['intelmpi']
            self.numnodes = 4
            self.cpuspernode = 1
            self.port = 0
            self.time = 12 * 60 * 60
            self.processor = 'skylake'
            self.partition = 'sealevel-c5xl-spot'
            self.srcpath = '/efs/issm-new/binaries/repos/trunk-jpl-working'
            self.extpkgpath = '/efs/issm-new/binaries/ext'
            self.codepath = '/efs/issm-new/binaries/repos/trunk-jpl-working/bin'
            self.executionpath = '~/issm-exec'
            self.interactive = 0
            self.numstreams = 1
            self.hyperthreading = 0
            self.email = ''
    
            # Use provided options to change fields
            options = pairoptions(*args)
    
            # Initialize cluster using user settings if provided
            try:
                self = eis_nasa_smce_settings(self)
            except NameError:
                print('eis_nasa_smce_settings.py not found, using default settings')
    
            # OK get other fields
            self = options.AssignObjectFields(self)
        # }}}
    
        def __repr__(self):  # {{{
            # Display the object
            s = 'class eis_nasa_smce object\n'
            s += '    name: {}\n'.format(self.name)
            s += '    login: {}\n'.format(self.login)
            s += '    idfile: {}\n'.format(self.idfile)
            s += '    modules: {}\n'.format(strjoin(self.modules, ', '))
            s += '    numnodes: {}\n'.format(self.numnodes)
            s += '    cpuspernode: {}\n'.format(self.cpuspernode)
            s += '    np: {}\n'.format(self.nprocs())
            s += '    port: {}\n'.format(self.port)
            s += '    time: {}\n'.format(self.time)
            s += '    processor: {}\n'.format(self.processor)
            s += '    partition: {}\n'.format(self.partition)
            s += '    srcpath: {}\n'.format(self.srcpath)
            s += '    extpkgpath: {}\n'.format(self.extpkgpath)
            s += '    codepath: {}\n'.format(self.codepath)
            s += '    executionpath: {}\n'.format(self.executionpath)
            s += '    interactive: {}\n'.format(self.interactive)
            s += '    numstreams: {}\n'.format(self.numstreams)
            s += '    hyperthreading: {}\n'.format(self.hyperthreading)
            return s
        # }}}
    
        def nprocs(self):  # {{{
            return self.numnodes * self.cpuspernode
        # }}}
    
        def checkconsistency(self, md, solution, analyses):  # {{{
            # Now, check cluster.cpuspernode according to processor type
            if self.processor == 'skylake':
                if self.cpuspernode > 14 or self.cpuspernode < 1:
                md = md.checkmessage('cpuspernode should be between 1 and 14 for \'skylake\' processors in hyperthreading mode')
            else:
                md = md.checkmessage('unknown processor type, should be \'skylake\'')
    
            # Miscellaneous
            if not self.login:
                md = md.checkmessage('login empty')
            if self.port:
                md = md.checkmessage('port must be set to 0 as we do not have an SSH tunnel')
            if not self.codepath:
                md = md.checkmessage('codepath empty')
            if not self.executionpath:
                md = md.checkmessage('executionpath empty')
    
            return self
        # }}}
    
        def BuildQueueScript(self, dirname, modelname, solution, io_gather, isvalgrind, isgprof, isdakota, isoceancoupling):  # {{{
            if isgprof:
                print('gprof not supported by cluster, ignoring...')
    
            issmexec = 'issm.exe'
            mpiexec = 'mpiexec' # Set to alternative mpiexec if desired
    
            if isdakota:
                version = IssmConfig('_DAKOTA_VERSION_')[0:2]
                version = float(str(version[0]))
                if version >= 6:
                    issmexec = 'issm_dakota.exe'
            if isoceancoupling:
                issmexec = 'issm_ocean.exe'
    
            # Write queuing script
            fid = open(modelname + '.queue', 'w')
    
            fid.write('#!/bin/bash\n')
            fid.write('#SBATCH --partition={} \n'.format(self.partition))
            fid.write('#SBATCH -J {} \n'.format(modelname))
            fid.write('#SBATCH -o {}.outlog \n'.format(modelname))
            fid.write('#SBATCH -e {}.errlog \n'.format(modelname))
            fid.write('#SBATCH --nodes={} \n'.format(self.numnodes))
            fid.write('#SBATCH --ntasks-per-node={} \n'.format(self.cpuspernode))
            fid.write('#SBATCH --cpus-per-task={} \n'.format(self.numstreams))
            fid.write('#SBATCH -t {:02d}:{:02d}:00 \n'.format(int(floor(self.time / 3600)), int(floor((self.time % 3600) / 60))))
            if (self.email.find('@')>-1):
                fid.write('#SBATCH --mail-user={} \n'.format(self.email))
                fid.write('#SBATCH --mail-type=BEGIN,END,FAIL \n\n')
            fid.write('source /etc/profile\n')
            fid.write('source /shared/spack/share/spack/setup-env.sh\n')
            for i in range(len(self.modules)):
                fid.write('module load {} &> /dev/null\n'.format(self.modules[i]))
            fid.write('export MPI_GROUP_MAX=64\n\n')
            fid.write('export MPI_UNBUFFERED_STDIO=true\n\n')
            fid.write('export PATH="$PATH:/opt/slurm/bin"\n')
            fid.write('export PATH="$PATH:."\n\n')
            fid.write('export ISSM_DIR="{}"\n'.format(self.srcpath))
            fid.write('export ISSM_EXT_DIR="{}"\n'.format(self.extpkgpath))
            fid.write('source $ISSM_DIR/etc/environment.sh\n')
            fid.write('cd {}/{}/\n\n'.format(self.executionpath, dirname))
            fid.write('{} -n {} {}/{} {} {}/{} {}\n'.format(mpiexec, self.nprocs(), self.codepath, issmexec, solution, self.executionpath, dirname, modelname))
    
            if not io_gather: # concatenate the output files
                fid.write('cat {}.outbin.* > {}.outbin'.format(modelname, modelname))
            fid.close()
    
            # In interactive mode, create a run file, and errlog and outlog file
            if self.interactive:
                fid = open(modelname + '.run', 'w')
                if not isvalgrind:
                    fid.write('{} -np {} {}/{} {} {}/{} {}\n'.format(mpiexec, self.nprocs(), self.codepath, issmexec, solution, self.executionpath, dirname, modelname))
                else:
                    fid.write('{} -np {} valgrind --leak-check=full {}/{} {} {}/{} {}\n'.format(mpiexec, self.nprocs(), self.codepath, issmexec, solution, self.executionpath, dirname, modelname))
                if not io_gather: # concatenate the output files
                    fid.write('cat {}.outbin.* > {}.outbin'.format(modelname, modelname))
                fid.close()
                fid = open(modelname + '.errlog', 'w') # TODO: Change this to system call (touch <file>)?
                fid.close()
                fid = open(modelname + '.outlog', 'w') # TODO: Change this to system call (touch <file>)?
                fid.close()
        # }}}
    
        def UploadQueueJob(self, modelname, dirname, filelist):  # {{{
            # Compress the files into one zip
            compressstring = 'tar -zcf {}.tar.gz'.format(dirname)
            for file in filelist:
                compressstring += ' {}'.format(file)
            if self.interactive:
                compressstring += ' {}.run {}.errlog {}.outlog'.format(modelname, modelname, modelname)
            subprocess.call(compressstring, shell=True)
    
            print('uploading input file and queueing script')
            if self.interactive:
                directory = '{}/Interactive{}'.format(self.executionpath, self.interactive)
            else:
                directory = self.executionpath
    
            # NOTE: Replacement for issmscpout(self.name, directory, self.login, self.port, ['{}.tar.gz'.format(dirname)])
            copystring = 'cp {}.tar.gz /efs/issm/tmp'.format(dirname)
            subprocess.call(copystring, shell=True)
        # }}}
    
        def LaunchQueueJob(self, modelname, dirname, filelist, restart, batch):  # {{{
            if self.interactive:
                if not isempty(restart):
                    launchcommand = 'cd {}/Interactive{}'.format(self.executionpath, self.interactive)
                else:
                    launchcommand = 'cd {}/Interactive{} && tar -zxf {}.tar.gz'.format(self.executionpath, self.interactive, dirname)
            else:
                if not isempty(restart):
                    launchcommand = 'cd {} && cd {} && sbatch {}.queue'.format(self.executionpath, dirname, modelname)
                else:
                    launchcommand = 'cd {} && rm -rf {} && mkdir {} && cd {} && cp /efs/issm/tmp/{}.tar.gz . && tar -zxf {}.tar.gz && /opt/slurm/bin/sbatch {}.queue'.format(self.executionpath, dirname, dirname, dirname, dirname, dirname, modelname)
    
            print('launching solution sequence on remote cluster')
    
            # NOTE: Replacement for issmssh(self.name, self.login, self.port, launchcommand)
            subprocess.call('ssh -l {} -i {} {} "{}"'.format(self.login, self.idfile, self.name, launchcommand), shell=True)
        # }}}
    
        def Download(self, dirname, filelist):  # {{{
            # Copy files from cluster to current directory
        
            # NOTE: Replacement for issmscpin(self.name, self.login, self.port, directory, filelist)
            directory = '{}/{}/'.format(self.executionpath, dirname)
            fileliststr = '{' + ','.join([str(x) for x in filelist]) + '}'
            downloadcommand = 'scp -i {} {}@{}:{} {}/.'.format(self.idfile, self.login, self.name, os.path.join(directory, fileliststr), os.getcwd())
            subprocess.call(downloadcommand, shell=True) 
        # }}}
    
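BuildQueueScript converts self.time (seconds) into the HH:MM:00 walltime string that Slurm expects. The conversion can be checked in isolation with the same arithmetic (a minimal sketch):

```python
from math import floor

# Minimal check of the walltime arithmetic used in BuildQueueScript:
# self.time is in seconds; Slurm's -t option wants HH:MM:SS.
def walltime(seconds):
    return '{:02d}:{:02d}:00'.format(int(floor(seconds / 3600)),
                                     int(floor((seconds % 3600) / 60)))

print(walltime(12 * 60 * 60))  # '12:00:00', the class default
print(walltime(5400))          # '01:30:00'
```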
  • Committed $ISSM_DIR/src/m/classes/clusters/eis_nasa_smce.py to repo

NOTE:

  • It is important that the directory indicated by self.executionpath exists on SMCE head node before launch
  • Either in $ISSM_DIR/src/m/classes/clusters/eis_nasa_smce.py or runme
    • cluster.login should be changed to your username
    • cluster.executionpath should be changed to desired execution path on SMCE head node
    • cluster.partition should be changed to desired AWS ECS instance type (discuss with SMCE team before changing)
  • Created $ISSM_DIR/test/NightlyRun/test101eisnasasmce.py with contents,
    #Test Name: SquareShelfConstrainedStressSSA2d
    from model import *
    from socket import gethostname
    from triangle import triangle
    from setmask import setmask
    from parameterize import parameterize
    from setflowequation import setflowequation
    from solve import solve
    from massfluxatgate import massfluxatgate
    from generic import generic
    from eis_nasa_smce import eis_nasa_smce
    
    md = triangle(model(), '../Exp/Square.exp', 50000)
    md = setmask(md, 'all', '')
    md = parameterize(md, '../Par/SquareShelfConstrained.py')
    md = setflowequation(md, 'SSA', 'all')
    md.cluster = generic('name', gethostname(), 'np', 2)
    
    if True:
        cluster = eis_nasa_smce()
        cluster.partition = 'sealevel-c5xl-spot'
        md.cluster = cluster
    
    #outputs
    md.stressbalance.requested_outputs = ['default', 'DeviatoricStressxx', 'DeviatoricStressyy', 'DeviatoricStressxy', 'MassFlux1', 'MassFlux2', 'MassFlux3', 'MassFlux4', 'MassFlux5', 'MassFlux6']
    md.outputdefinition.definitions = [massfluxatgate('name', 'MassFlux1', 'profilename', '../Exp/MassFlux1.exp', 'definitionstring', 'Outputdefinition1'),
                                       massfluxatgate('name', 'MassFlux2', 'profilename', '../Exp/MassFlux2.exp', 'definitionstring', 'Outputdefinition2'),
                                       massfluxatgate('name', 'MassFlux3', 'profilename', '../Exp/MassFlux3.exp', 'definitionstring', 'Outputdefinition3'),
                                       massfluxatgate('name', 'MassFlux4', 'profilename', '../Exp/MassFlux4.exp', 'definitionstring', 'Outputdefinition4'),
                                       massfluxatgate('name', 'MassFlux5', 'profilename', '../Exp/MassFlux5.exp', 'definitionstring', 'Outputdefinition5'),
                                       massfluxatgate('name', 'MassFlux6', 'profilename', '../Exp/MassFlux6.exp', 'definitionstring', 'Outputdefinition6')]
    
    md = solve(md, 'Stressbalance')
    
    #Fields and tolerances to track changes
    field_names = ['Vx', 'Vy', 'Vel', 'Pressure',
                   'DeviatoricStressxx', 'DeviatoricStressyy', 'DeviatoricStressxy',
                   'MassFlux1', 'MassFlux2', 'MassFlux3', 'MassFlux4', 'MassFlux5', 'MassFlux6']
    field_tolerances = [3e-13, 1e-13, 1e-13, 1e-13,
                        2e-13, 1e-13, 2e-13,
                        1e-13, 1e-13, 1e-13,
                        1e-13, 1e-13, 1e-13]
    field_values = [md.results.StressbalanceSolution.Vx,
                    md.results.StressbalanceSolution.Vy,
                    md.results.StressbalanceSolution.Vel,
                    md.results.StressbalanceSolution.Pressure,
                    md.results.StressbalanceSolution.DeviatoricStressxx,
                    md.results.StressbalanceSolution.DeviatoricStressyy,
                    md.results.StressbalanceSolution.DeviatoricStressxy,
                    md.results.StressbalanceSolution.MassFlux1,
                    md.results.StressbalanceSolution.MassFlux2,
                    md.results.StressbalanceSolution.MassFlux3,
                    md.results.StressbalanceSolution.MassFlux4,
                    md.results.StressbalanceSolution.MassFlux5,
                    md.results.StressbalanceSolution.MassFlux6]
    
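When run through the NightlyRun harness, these fields are compared against archived reference values using the per-field tolerances. Roughly (a hedged sketch with illustrative numbers, not the actual runme.py logic):

```python
# Hedged sketch of a relative-error check like the one the NightlyRun harness
# applies; the archived and computed values below are made up for illustration.
def within_tolerance(computed, archived, tol):
    # Max relative error over the field, guarding against a zero archive value
    return all(abs(c - a) <= tol * max(abs(a), 1e-300)
               for c, a in zip(computed, archived))

archived = [1.0, 2.0, 3.0]
computed = [1.0 + 1e-14, 2.0, 3.0 - 1e-14]
print(within_tolerance(computed, archived, 1e-13))         # True: within 1e-13
print(within_tolerance([1.1, 2.0, 3.0], archived, 1e-13))  # False: Vx off by 10%
```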
  • Committed $ISSM_DIR/test/NightlyRun/test101eisnasasmce.py to repo

TODO

  • When okayed by Denis, remove,
    • /efs/issm
    • /issm/ext
    • /issm/repos
    • /shared/issm

    then move /efs/issm-new to /efs/issm

  • Correct link to libpython on Daskhub (should not have to LD_PRELOAD libpython3.9.so.1.0)
  • Correct link to libproj on Daskhub (see ldd $ISSM_DIR/lib/IssmConfig_python.so; there is a link to libproj.so.25)
  • After SMCE team has created issm group, modify $ISSM_DIR/src/m/classes/clusters/eis_nasa_smce.py::LaunchQueueJob so that tarball is moved rather than copied (look at /efs/issm/issm/trunk-jpl-denis/src/m/classes/clusters/generic.py for examples of cp as opposed to scp)

For SMCE team

  • Run,
    ln -s /srv/conda/envs/notebook/include/gstreamer-1.0/gst/gl/gl.h  /srv/conda/envs/notebook/include/GL/gl.h
    
    on Daskhub and bake resulting symbolic link onto base image
  • Create group issm on SMCE head node
  • Change group ownership of /efs/issm/tmp to issm group
  • Review $ISSM_DIR/src/m/classes/clusters/eis_nasa_smce.py on Daskhub, in particular
    • self.processor
    • self.interactive
    • self.numstreams
    • self.hyperthreading
  • Can we add support for self.email?
Last modified on 11/06/22 12:30:16