----------------------------------------------------------------------
                             KNOWN ISSUES
----------------------------------------------------------------------

### Lacking channel-specific features

* ch3 does not presently support communication across heterogeneous
  platforms (e.g., a big-endian machine communicating with a
  little-endian machine).

* ch3:nemesis:mx does not support dynamic processes at this time.

* ch3:ssm and ch3:shm do not support thread safety.

* ch3:shm does not support dynamic processes (e.g., MPI_Comm_spawn).

* Support for the "external32" data representation is incomplete.
  This affects the MPI_Pack_external and MPI_Unpack_external
  routines, as well as the external data representation capabilities
  of ROMIO.

* ch3:dllchan is rated "experimental". There are known problems when
  it is configured with --enable-g and --enable-g=log.

### Build Platforms

* ch3:nemesis does not work on Solaris. You can use ch3:sock on this
  platform instead.

* ch3:ssm uses special interprocess locks (often assembly) that may
  not work with some compilers or machine architectures. It is known
  to work on Linux with the GNU, Intel, and Pathscale compilers, and
  on Windows with the Visual Studio compilers, on Intel and AMD
  architectures.

* The sctp channel is fully supported on FreeBSD and Mac OS X. At the
  time of this release, bugs remained in the SCTP stack in the Linux
  kernel; they will hopefully be resolved soon. The channel is known
  not to work under Solaris or Windows. For Solaris, the SCTP API
  available in the kernel of standard Solaris 10 is a subset of the
  standard API used by the sctp channel; we are working with the Sun
  SCTP developers to support ch3:sctp under Solaris in future
  releases. For Windows, no kernel-based SCTP stack is currently
  known to exist.

### Other configure options

* --enable-sharedlibs=gcc does not work on Solaris because of
  differences between the GNU ld program and the Solaris ld program.
### Process Managers

* The MPD process manager can only handle relatively small amounts of
  data on stdin and may also have problems if there is data on stdin
  that is not consumed by the program.

* The Hydra process manager does not support dynamic processes at
  this time.

* The SMPD process manager does not work reliably with threaded MPI
  processes. MPI_Comm_spawn() does not currently work with smpd when
  256 or more arguments are passed.

### Performance issues

* In some cases, SMP-aware collectives do not perform as well as
  non-SMP-aware collectives (e.g., MPI_Reduce with message sizes
  larger than 64 KiB). The SMP-aware collectives can be disabled with
  the configure option "--disable-smpcoll".

* MPI_Irecv operations that are not explicitly completed before
  MPI_Finalize is called may fail to complete before MPI_Finalize
  returns, and thus never complete. Furthermore, any matching send
  operations may erroneously fail. By "explicitly completed", we mean
  that the request associated with the operation is completed by one
  of the MPI_Test or MPI_Wait routines.

* For passive target RMA, there is no asynchronous agent at the
  target that will cause progress to occur. Progress occurs only when
  the user calls an MPI function at the target (which could well be
  MPI_Win_free).

### C++ Binding

* The MPI datatypes corresponding to Fortran datatypes are not
  available (e.g., there is no MPI::DOUBLE_PRECISION).

* The C++ binding does not implement a separate profiling interface,
  as allowed by the MPI-2 Standard (Section 10.1.10, Profiling).

* MPI::ERRORS_RETURN may still throw exceptions in the event of an
  error rather than silently returning.