In the context of software standards for parallel computing, two other names are bound to pop up -- Parallel Virtual Machine (PVM) and High Performance Fortran (HPF). Both of these have close ties to MPI. While this review is about MPI, it is intended as an orientation for new users, and to this end, it is appropriate to see how MPI fits in the larger context.
PVM is a software package that provides message passing functionality as well as the infrastructure for building a virtual parallel computer out of a network of workstations. It is often thought of as a competitor to MPI, but it is really a different beast. PVM is a research project of the University of Tennessee at Knoxville and Oak Ridge National Laboratory. While quite popular for writing message passing programs, PVM is a vehicle for research in parallel computing rather than a parallel computing standard. Its weaknesses with respect to MPI are also its strengths: it is not bound by an absolute requirement for backward compatibility; its design is not constrained to be efficient on every imaginable MIMD parallel architecture; and there is no rigorous specification of PVM behavior. In some sense the tradeoff is between efficiency and portability in MPI, and flexibility and adaptability in PVM.
Successful features of PVM are finding their way into MPI, though MPI is unlikely to provide any support for fault tolerance or a virtual distributed operating system in the near future. Moreover, since PVM is defined by a single full implementation rather than a specification, the possibilities for interoperability are higher in PVM than in MPI: every PVM installation runs the same code base, whereas independent implementations of the MPI specification need not interoperate with one another.
HPF is an industry standard for the data parallel model of parallel computation. HPF was standardized a year earlier than MPI, and its successful standardization process served as a model for the MPI Forum. Despite the conceptual appeal and simplicity of HPF, MPI is much more widely used, for several reasons. These include: