13.5. Parallel Programming

Since TAO uses the message-passing model for parallel programming and relies on MPI for all interprocessor communication, the user is free to call MPI routines as needed throughout an application code. By default, however, the user is shielded from many of the details of message passing within TAO, since these are hidden within parallel objects such as vectors, matrices, and solvers. In addition, TAO users can interface to external tools, such as the generalized vector scatters/gathers and distributed arrays within PETSc, to assist in the management of parallel data.
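For example, an application routine may combine direct MPI calls with code that otherwise operates on TAO-managed objects. The following sketch (the routine name ReportProblemSize is hypothetical, not part of TAO) sums the local problem sizes held on each processor:

   #include <stdio.h>
   #include "mpi.h"

   /* Sketch only: a hypothetical user routine mixing direct MPI
      calls with an application's TAO code. */
   void ReportProblemSize(MPI_Comm comm, int nlocal)
   {
      int rank, nglobal;

      MPI_Comm_rank(comm, &rank);
      MPI_Allreduce(&nlocal, &nglobal, 1, MPI_INT, MPI_SUM, comm);
      if (rank == 0)
         printf("global problem size: %d\n", nglobal);
   }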

The user must specify a communicator upon creation of any TAO object (such as a vector, matrix, or solver) to indicate the processors over which the object is to be distributed. For example, some commands for matrix, vector, and solver creation are:

   info = MatCreate(MPI_Comm comm,int m,int n,int M,int N,Mat *H);   /* m,n: local dimensions; M,N: global dimensions */
   info = VecCreate(MPI_Comm comm,int m,int M,Vec *x);               /* m: local length; M: global length */
   info = TaoCreate(MPI_Comm comm,TaoMethod method,TAO_SOLVER *tao); /* method: name of the solution algorithm */
The creation routines are collective over all processors in the communicator: every processor in the communicator must call the creation routine. In addition, if a sequence of collective routines is being used, the routines must be called in the same order on each processor, as in the sketch below.
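The following fragment illustrates this pattern using the creation signatures above. It is a sketch only: the routine name SetupSolver and the method name "tao_lmvm" are illustrative choices, and m and M denote the local and global problem dimensions.

   /* Sketch: every processor in comm executes the same collective
      creation calls in the same order.  SetupSolver and the method
      name "tao_lmvm" are illustrative, not required. */
   int SetupSolver(MPI_Comm comm, int m, int M, Vec *x, Mat *H, TAO_SOLVER *tao)
   {
      int info;

      info = VecCreate(comm, m, M, x); CHKERRQ(info);          /* solution vector */
      info = MatCreate(comm, m, m, M, M, H); CHKERRQ(info);    /* Hessian matrix */
      info = TaoCreate(comm, "tao_lmvm", tao); CHKERRQ(info);  /* solver */
      return 0;
   }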