ADIOS Overview

The Adaptable IO System (ADIOS) provides a simple, flexible way for scientists to describe the data in their code that may need to be written, read, or processed outside of the running simulation. By providing an XML file, external to the code, that describes the various elements, their types, and how you wish to process them for a given run, the routines in the host code (either Fortran or C) can transparently change how they process the data.

ADIOS has demonstrated impressive I/O performance results on leadership-class machines and clusters, sometimes showing an improvement of more than 1000 times over well-known parallel file formats. ADIOS is essentially an I/O componentization of different I/O transport methods, which allows application scientists to adopt the best I/O method for different computer infrastructures with very little modification of their scientific applications. ADIOS has a suite of simple, easy-to-use APIs. Instead of being passed as API arguments, all of the required metadata is stored in an external Extensible Markup Language (XML) configuration file, which is readable, editable, and portable across most machines.

Support

Usage

ADIOS is available as a module.
$ module load adios/1.9.0
There are several versions of ADIOS. The default version of the module includes the POSIX, MPI-IO, and MPI-CIO methods. Parallel HDF5 is also available as a separate transport method. To compile a Fortran code with the parallel HDF5 method:
module load hdf5/1.8.1
ftn -c test.F90 ${ADIOS_INC}
ftn -o test test.o ${ADIOS_LIB} ${HDF5_FLIB}
or, for a C source file:
module load hdf5/1.8.1
cc -c test.c ${ADIOS_INC}
cc -o test test.o ${ADIOS_LIB} ${HDF5_CLIB}
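The HDF5 module and link flags are only needed for the parallel HDF5 transport. For the default transports (POSIX, MPI-IO, MPI-CIO), a minimal build of the C example shown below might look like the following, assuming the Cray cc compiler wrapper and the ADIOS_INC/ADIOS_LIB variables set by the module:
cc -c arrays_write.c ${ADIOS_INC}
cc -o arrays_write arrays_write.o ${ADIOS_LIB}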
Examples can be found in the examples directory of ADIOS. A simple example code would be:

arrays_write.c:

#include <stdio.h>
#include <string.h>
#include "mpi.h"
#include "adios.h"

/*************************************************************/
/*          Example of writing arrays in ADIOS               */
/*                                                           */
/*        Similar example is manual/2_adios_write.c          */
/*************************************************************/
int main (int argc, char ** argv) 
{
    char        filename [256];
    int         rank, i, j;
    int         NX = 10, NY = 100; 
    double      t[NX][NY];
    int         p[NX];
    MPI_Comm    comm = MPI_COMM_WORLD;

    uint64_t    adios_groupsize, adios_totalsize;
    int64_t     adios_handle;

    MPI_Init (&argc, &argv);
    MPI_Comm_rank (comm, &rank);

    for (i = 0; i < NX; i++)
        for (j = 0; j< NY; j++)
            t[i][j] = rank * NX + i + j*(1.0/NY);

    for (i = 0; i < NX; i++)
        p[i] = rank * NX + i;

    strcpy (filename, "arrays.bp");
    adios_init ("arrays.xml", comm);
    adios_open (&adios_handle, "arrays", filename, "w", comm);
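    /* gwrite_arrays.ch (listed below) contains the adios_group_size()
       and adios_write() calls for the "arrays" group defined in arrays.xml */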
#include "gwrite_arrays.ch"
    adios_close (adios_handle);

    MPI_Barrier (comm);

    adios_finalize (rank);

    MPI_Finalize ();
    return 0;
}

gwrite_arrays.ch:

adios_groupsize = 4 \
                + 4 \
                + 8 * (NX) * (NY) \
                + 4 * (NX);
adios_group_size (adios_handle, adios_groupsize, &adios_totalsize);
adios_write (adios_handle, "NX", &NX);
adios_write (adios_handle, "NY", &NY);
adios_write (adios_handle, "var_double_2Darray", t);
adios_write (adios_handle, "var_int_1Darray", p);

arrays.xml:

<?xml version="1.0"?>
<adios-config host-language="C">

 <adios-group name="arrays">
 <var name="NX" type="integer"/>
 <var name="NY" type="integer"/>
 <var name="var_double_2Darray" gwrite="t" type="double" dimensions="NX,NY"/>
 <var name="var_int_1Darray" gwrite="p" type="integer" dimensions="NX"/>
 </adios-group>

 <method group="arrays" method="MPI"/>

 <buffer max-size-MB="20"/>

</adios-config>
The XML file is used to provide ADIOS with information on how to format the data. Compilation is the same as above. Make sure that the XML file is in the same directory from which the binary is launched with aprun!
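The gwrite_arrays.ch include used above is normally generated from the XML descriptor rather than written by hand; in ADIOS 1.x this is typically done with the gpp.py utility that ships with ADIOS, and the resulting output file can be listed with the bpls tool. The launch line below is only illustrative:
gpp.py arrays.xml        # generates gwrite_arrays.ch from the XML descriptor
aprun -n 4 ./arrays_write
bpls arrays.bp           # list the variables written to arrays.bp
Because the transport is selected by the method element in the XML file (for example method="MPI" or method="POSIX"), switching transports should only require editing the XML file, not modifying or recompiling the source.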
Builds

SUMMIT

  • adios@1.11.1%gcc@4.8.5+fortran+mpi ^python@2.7.12
  • adios@1.11.1%pgi@17.9+fortran+mpi ^python@2.7.12