The benchmarking effort within the Computational Research & Development Programs at the Oak Ridge National Laboratory (ORNL) seeks to design and enable High Performance Computing (HPC) benchmarks and test suites. The work described in this report is part of that effort, focusing on the comparison and analysis of OpenSHMEM implementations using the Interleaved Or Random (IOR) software for benchmarking parallel file systems using POSIX, MPIIO, or HDF5 interfaces. We describe the effort to emulate the MPIIO parallel collective capabilities in the IOR benchmark using OpenSHMEM communication. One development effort was emulating the MPI derived datatypes used in the read/write operations and in setting the file view. Another effort was implementing an internal cache in OpenSHMEM distributed shared memory to facilitate global collective I/O operations. Experiments comparing collective I/O in the MPIIO implementations with the OpenSHMEM implementations were performed on the SGI Turing Cluster and the Cray XK7 Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The preliminary results suggest that on the Cray XK7 Titan, the MPIIO implementations obtained higher write performance and the OpenSHMEM version obtained slightly higher read performance. On the SGI Turing Cluster, the MPIIO implementations obtained slightly higher performance than the OpenSHMEM implementations on large files.
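To illustrate the kind of internal cache described above, the following is a minimal sketch, not the report's actual implementation: each PE deposits its data block into a symmetric cache buffer on an aggregator PE using standard OpenSHMEM calls, and the aggregator then issues a single contiguous file write, mimicking MPIIO collective write aggregation. The buffer layout, block size, aggregator choice (PE 0), and output filename are all assumptions for illustration.

```c
/* Hypothetical sketch of an OpenSHMEM distributed-memory cache for
 * collective writes; not the report's actual implementation. */
#include <shmem.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_BYTES 4096   /* per-PE contribution (assumed size) */

int main(void) {
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Symmetric cache: allocated on every PE, but only the
     * aggregator (PE 0) consumes its contents here. */
    char *cache = shmem_malloc((size_t)npes * BLOCK_BYTES);

    /* Local block this PE wants written to the shared file. */
    char block[BLOCK_BYTES];
    memset(block, 'A' + (me % 26), BLOCK_BYTES);

    /* Deposit this PE's block at its offset in PE 0's cache. */
    shmem_putmem(cache + (size_t)me * BLOCK_BYTES, block, BLOCK_BYTES, 0);
    shmem_barrier_all();   /* ensures all deposits have completed */

    if (me == 0) {
        /* One contiguous write replaces npes independent writes. */
        FILE *fp = fopen("ior_shmem_cache.out", "wb");
        fwrite(cache, 1, (size_t)npes * BLOCK_BYTES, fp);
        fclose(fp);
    }

    shmem_free(cache);
    shmem_finalize();
    return 0;
}
```

In this sketch the symmetric buffer plays the role MPIIO's collective buffering layer plays under `MPI_File_write_all`; the strided file-view logic of an MPI derived datatype would be emulated by computing each PE's byte offset explicitly, as done with `me * BLOCK_BYTES` here.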