- PETSc (see instructions below)

Installing PETSc on a cluster is challenging because PETSc cannot call `mpiexec` without formally submitting a job. You first need to run the installation script `install-3.8-pleiades.sh`. You will then be stopped by PETSc with the following message:
{{{
#!sh
=================================================================================
Since your compute nodes require use of a batch system or mpiexec you must:

1) cd /home3/mmorligh/issm/trunk-jpl/externalpackages/petsc/src

2) Submit ./conftest-arch-linux2-c-opt to 1 processor of your batch system or system you are
   cross-compiling for; this will generate the file reconfigure-arch-linux2-c-opt.py

3) Run ./reconfigure-arch-linux2-c-opt.py (to complete the configure process).
=================================================================================
}}}

To make your life easier, a script is ready to go. Go to the `src` folder, where PETSc is being installed, open `script.queue`, and change `group_list` to `s1507` if you are in Mathieu's group. Then submit the job using `qsub`:
{{{
#!sh
qsub script.queue
}}}
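If you prefer to make that edit non-interactively, a `sed` one-liner can set the group. This is a sketch, not part of the official instructions, and it assumes `script.queue` carries a standard PBS directive of the form `#PBS -W group_list=...`:
{{{
#!sh
# Sketch: swap whatever group is currently set for s1507.
# Assumes script.queue contains a PBS directive like: #PBS -W group_list=sXXXX
sed -i 's/group_list=[A-Za-z0-9]*/group_list=s1507/' script.queue
}}}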

A job ID (similar to `3513616.pbspl1.nas.nasa.gov`) will appear on the screen. Once the job is done, you should see a new file in the current directory: `./reconfigure-arch-linux2-c-opt.py`. Once it appears, you can run:
{{{
#!sh
./reconfigure-arch-linux2-c-opt.py
}}}
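If you are scripting the whole installation, a small polling loop can wait for the batch job to produce the reconfigure script before running it. This is a sketch; the 30-second interval is arbitrary, and `qstat -u $USER` is the usual PBS way to watch the job in the meantime:
{{{
#!sh
# Sketch: block until the batch job has written the reconfigure script,
# then run it. Check progress manually with: qstat -u $USER
while [ ! -f ./reconfigure-arch-linux2-c-opt.py ]; do
    sleep 30
done
./reconfigure-arch-linux2-c-opt.py
}}}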
PETSc will then compile its packages, and at the end, you will need to copy and paste the `make` commands:
{{{
#!sh
make
}}}
and
{{{
#!sh
make install
}}}
and that's it!
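The two build commands can also be chained so that the install step only runs if the compile succeeds:
{{{
#!sh
# && stops the chain if make fails, so a broken build is never installed.
make && make install
}}}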