Commit 8cf6a6ff authored by Francois Tessier

New version of S3D-IO. More recent and with the ability to disable NetCDF

parent 05f92d89
@@ -6,7 +6,7 @@ PPN=16
 NPROCS=$((NODES*PPN))
 TARGET="/projects/visualization/ftessier/debug"
-cd $HOME/TAPIOCA/examples
+cd $HOME/TAPIOCA/examples/HACC-IO
 export TAPIOCA_DEVNULL=false
 export TAPIOCA_COMMSPLIT=true
...
Copyright 2003-2013 Northwestern University
Portions of this software were developed by the Sandia National Laboratory.
Access and use of this software shall impose the following obligations
and understandings on the user. The user is granted the right, without
any fee or cost, to use, copy, modify, alter, enhance and distribute
this software, and any derivative works thereof, and its supporting
documentation for any purpose whatsoever, provided that this entire
notice appears in all copies of the software, derivative works and
supporting documentation. Further, Northwestern University requests
that the user credit Northwestern University in any publications that
result from the use of this software or in any product that includes
this software. The name Northwestern University, however, may not be
used in any advertising or publicity to endorse or promote any
products or commercial entity unless specific written permission is
obtained from Northwestern University. The user also understands that
Northwestern University is not obligated to provide the user with
any support, consulting, training or assistance of any kind with regard
to the use, operation and performance of this software nor to provide
the user with any updates, revisions, new versions or "bug fixes."
THIS SOFTWARE IS PROVIDED BY NORTHWESTERN UNIVERSITY "AS IS" AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL NORTHWESTERN UNIVERSITY BE
LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION,
ARISING OUT OF OR IN CONNECTION WITH THE ACCESS, USE OR PERFORMANCE
OF THIS SOFTWARE.
#
# Copyright (C) 2013, Northwestern University
# See COPYRIGHT notice in top-level directory.
#
# $Id: Makefile 3485 2015-12-27 00:06:31Z wkliao $
#
#
# Please change the following variables:
# MPIF90 -- MPI Fortran compiler
# FCFLAGS -- Compile flag
# PNETCDF_DIR -- PnetCDF library installation directory
#
MPIF90 = mpif90
FCFLAGS = -Wall -g
PNETCDF_DIR = $(HOME)
COMPILE_F90 = $(MPIF90) $(FCFLAGS) $(INC) -c
LINK = $(MPIF90) $(FCFLAGS)
INC = -I$(PNETCDF_DIR)/include
LIBS = -L$(PNETCDF_DIR)/lib -lpnetcdf
SRCS = runtime_m.f90 \
param_m.f90 \
topology_m.f90 \
variables_m.f90 \
io_profiling_m.f90 \
pnetcdf_m.f90 \
init_field.f90 \
io.f90 \
random_number.f90 \
solve_driver.f90 \
main.f90
OBJS = $(SRCS:.f90=.o)
MODS = $(SRCS:.f90=.mod)
TARGET = s3d_io.x
all: $(TARGET)
%.o:%.f90
$(COMPILE_F90) $<
$(TARGET): $(OBJS)
$(LINK) $(OBJS) -o $(TARGET) $(LIBS)
PACKAGE_NAME = s3d-io-pnetcdf-1.1
PACKING_LIST = $(SRCS) Makefile README COPYRIGHT RELEASE_NOTE
dist:
/bin/rm -rf $(PACKAGE_NAME) $(PACKAGE_NAME).tar.gz
mkdir $(PACKAGE_NAME)
cp $(PACKING_LIST) $(PACKAGE_NAME)
tar -cf $(PACKAGE_NAME).tar $(PACKAGE_NAME)
gzip $(PACKAGE_NAME).tar
/bin/rm -rf $(PACKAGE_NAME)
clean:
/bin/rm -f $(OBJS) $(MODS) $(TARGET)
distclean: clean
/bin/rm -rf $(PACKAGE_NAME).tar.gz $(PACKAGE_NAME)
#
This is an I/O simulator for the DNS code S3D; the physics modules are
missing but the memory arrangement and I/O routines are taken from the
production code.

If you use this simulator in your work and didn't receive it directly
from the S3D group at Sandia National Laboratories, please let us
know by emailing Jackie Chen at jhchen@sandia.gov. If you do something
interesting, we ask that you share the results with us. If you're
planning to publish work that uses these routines, please contact us
at the outset.

Both makefiles and CMake scripts are provided for building the
simulator; CMake is the build system used for production S3D runs and
has the convenient feature of allowing for out-of-source builds and an
installation command to arrange the executables and input files
appropriately to run the simulator, e.g.:

tar -xzvf s3dio.tgz
mkdir build
cd build
ccmake ../S3D-IO
make
make install

During the ccmake step the different I/O methods to be built can be
enabled/disabled, paths to the necessary libraries can be provided,
and the run tree directory can be specified.

The run tree will look like:
./s3d_run/run
./s3d_run/input
./s3d_run/post
./s3d_run/data

The executable and a sample job submission script are in the run directory.

The job size and the I/O method can be selected by modifying the
contents of ./s3d_run/input/s3d.in: in the "GRID DIMENSION PARAMETERS"
section, nx_g, ny_g, nz_g give the global grid size, and npx, npy, npz
are the dimensions of the decomposition topology. It is necessary
that:

mod(nx_g, npx) == 0

and

nx_g/npx > 10

usually, 30 < nx_g/npx < 45.

The total MPI job size should be npx*npy*npz.

The I/O method is selected on the last line of ./s3d_run/input/s3d.in;
the available methods are:

Fortran I/O - 1 file per MPI process per output time
MPI-IO      - 1 file per timestep
PnetCDF     - 1 netCDF file per timestep written using parallel netCDF
HDF5        - 1 HDF5 file per timestep written using parallel HDF5

The latter 3 options, the collective I/O routines, were provided by
Alok Choudhary (choudhar@ece.northwestern.edu) and Wei-keng Liao
(wkliao@ece.northwestern.edu), and questions about these routines
should be sent directly to them.

#
# Copyright (C) 2013, Northwestern University
# See COPYRIGHT notice in top-level directory.
#
# $Id: README 3457 2015-11-21 23:07:56Z wkliao $
#

This benchmark program is the I/O kernel of the S3D combustion simulation
code (http://exactcodesign.org/). There are several I/O methods implemented
in S3D; this software only contains the Parallel NetCDF method.

S3D is a continuum-scale, first-principles direct numerical simulation code
which solves the compressible governing equations of mass continuity, momenta,
energy, and mass fractions of chemical species, including chemical reactions.
Readers are referred to the published paper below.
    J. Chen, A. Choudhary, B. de Supinski, M. DeVries, E. Hawkes, S. Klasky,
    W. Liao, K. Ma, J. Mellor-Crummey, N. Podhorszki, R. Sankaran, S. Shende,
    and C. Yoo. Terascale Direct Numerical Simulations of Turbulent Combustion
    Using S3D. In Computational Science and Discovery, Volume 2, January 2009.

I/O pattern:
A checkpoint is performed at regular intervals, and its data consist of 8-byte
three-dimensional arrays. At each checkpoint, four global arrays, representing
mass, velocity, pressure, and temperature, respectively, are written to a newly
created file in the canonical order. Mass and velocity are four-dimensional
arrays while pressure and temperature are three-dimensional arrays. All four
arrays share the same size for the lowest three spatial dimensions X, Y, and Z,
which are partitioned among MPI processes in a block-block-block fashion. For
the mass and velocity arrays, the length of the fourth dimension is 11 and 3,
respectively. The fourth dimension, the most significant one, is not
partitioned. As the number of MPI processes increases, the aggregate I/O
amount increases proportionally as well.

For a more detailed description of the data partitioning and I/O patterns,
please refer to the following paper.
    W. Liao and A. Choudhary. Dynamically Adapting File Domain Partitioning
    Methods for Collective I/O Based on Underlying Parallel File System
    Locking Protocols. In the Proceedings of the International Conference for
    High Performance Computing, Networking, Storage and Analysis, Austin,
    Texas, November 2008.

To compile:
Edit Makefile and set/change the variables:
    MPIF90      - MPI Fortran 90 compiler
    FCFLAGS     - compile flags
    PNETCDF_DIR - the path of the PnetCDF library (1.4.0 or higher is required)
For example:
    MPIF90      = mpif90
    FCFLAGS     = -O2
    PNETCDF_DIR = ${HOME}/PnetCDF

To run:
Usage: s3d_io.x nx_g ny_g nz_g npx npy npz method restart dir_path
There are 9 command-line arguments:
    nx_g     - GLOBAL grid size along X dimension
    ny_g     - GLOBAL grid size along Y dimension
    nz_g     - GLOBAL grid size along Z dimension
    npx      - number of MPI processes along X dimension
    npy      - number of MPI processes along Y dimension
    npz      - number of MPI processes along Z dimension
    method   - 0: use PnetCDF blocking APIs, 1: nonblocking APIs
    restart  - restart from reading a previously written file (True/False)
    dir_path - the directory name to store the output files
To change the number of checkpoint dumps (default is set to 5), edit
file param_m.f90 and set a different value for i_time_end:
i_time_end = 5 ! number of checkpoints (also number of output files)
The contents of all variables written to files are set to random numbers.
This setting can be disabled by commenting out the line below in file
solve_driver.f90
call random_set
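If runs are scripted, the i_time_end edit can be automated with sed. The sketch below demonstrates the substitution on a scratch file that mimics the quoted line; point sed at the real param_m.f90 to take effect. The exact pattern is an assumption about how the assignment is written in the source.

```shell
# Sketch: change the checkpoint count from 5 to 10 without opening an
# editor. Demonstrated on a scratch copy mimicking the line quoted above.
printf '      i_time_end = 5   ! number of checkpoints\n' > /tmp/param_m.f90
sed 's/i_time_end *= *5/i_time_end = 10/' /tmp/param_m.f90
```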
Example run command:
For a test run with a small data size and a short run time, here is an
example command for running on 4 MPI processes.
mpiexec -n 4 ./s3d_io.x 10 10 10 2 2 1 1 F .
The command below runs on 4096 MPI processes with the global array
of size 800x800x800 and local array of size 50x50x50, output directory
/scratch1/scratchdirs/wkliao/FS_1M_96 using nonblocking APIs, and without
restart.
mpiexec -l -n 4096 ./s3d_io.x 800 800 800 16 16 16 1 F /scratch1/scratchdirs/wkliao/FS_1M_96
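The aggregate write volume of such a run follows directly from the I/O pattern described earlier: 16 doubles per grid point (11 species + 3 velocity components + pressure + temperature) per checkpoint, and 5 checkpoint dumps by default. A quick shell check:

```shell
# Expected aggregate write volume for the 800^3 run: per checkpoint each
# grid point contributes 16 eight-byte doubles (11 species + 3 velocity
# components + pressure + temperature); i_time_end = 5 checkpoints.
nx=800; ny=800; nz=800
vars=16; bytes_per=8; dumps=5
total=$(( nx * ny * nz * vars * bytes_per * dumps ))
echo "$(( total / 1048576 )) MiB"    # 312500 MiB, i.e. ~305.18 GiB
```

This matches the "total write amount" of 305.18 GiB in the sample output.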
Example output from stdout:
++++ I/O is done through PnetCDF ++++
I/O method : nonblocking APIs
Run with restart : False
No. MPI processes : 4096
Global array size : 800 x 800 x 800
output file path : /scratch1/scratchdirs/wkliao/FS_1M_96
file striping count : 96
file striping size : 1048576 bytes
-----------------------------------------------
Time for open : 0.11 sec
Time for read : 0.00 sec
Time for write : 18.04 sec
Time for close : 0.02 sec
no. read calls : 0 per process
no. write calls : 20 per process
total read amount : 0.00 GiB
total write amount : 305.18 GiB
read bandwidth : 0.00 MiB/s
write bandwidth : 17318.78 MiB/s
-----------------------------------------------
total I/O amount : 305.18 GiB
total I/O time : 18.17 sec
I/O bandwidth : 17201.53 MiB/s
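The bandwidth figures in this log can be cross-checked by dividing the write amount by the timings. The awk sketch below uses the rounded seconds printed above, so it lands within a few MiB/s of the logged values:

```shell
# Divide the logged write amount (305.18 GiB = 312500 MiB) by the logged
# timings; small differences vs the log come from its rounded seconds.
awk 'BEGIN {
    mib = 312500                                # total write amount in MiB
    printf "write: %.0f MiB/s\n", mib / 18.04   # log says 17318.78
    printf "total: %.0f MiB/s\n", mib / 18.17   # log says 17201.53
}'
```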
Questions/Comments:
email: wkliao@eecs.northwestern.edu
The source file for the HDF5 I/O method is in
source/modules/hdf5_m.f90
The two subroutines are:
hdf5_write() and hdf5_read()
To compile and link with a parallel HDF5 library before make,
on Jaguar:
% module load hdf5/1.6.5_par
on Ewok:
% module load hdf5/1.6.5_par_pgi625
To check the datasets and attributes saved in an HDF5 file, use command h5ls.
For example:
jaguar14 ::run(7:50pm) #400% h5ls -v ../data/pressure_wave_test.0.000E+00.field.h5
Opened "/lustre/scr144/wkliao/NWU-S3D-IO/data/pressure_wave_test.0.000E+00.field.h5" with sec2 driver.
Mach_number Dataset {SCALAR}
Location: 0:1:0:14168
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
acoustic_Reynolds_number Dataset {SCALAR}
Location: 0:1:0:13896
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
convective_Reynolds_number Dataset {SCALAR}
Location: 0:1:0:14440
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
element_name_1 Dataset {SCALAR}
Location: 0:1:0:4640
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 16 logical bytes, 16 allocated bytes, 100.00% utilization
Type: 16-byte space-padded ASCII string
element_name_2 Dataset {SCALAR}
Location: 0:1:0:4912
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 16 logical bytes, 16 allocated bytes, 100.00% utilization
Type: 16-byte space-padded ASCII string
element_name_3 Dataset {SCALAR}
Location: 0:1:0:5184
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 16 logical bytes, 16 allocated bytes, 100.00% utilization
Type: 16-byte space-padded ASCII string
element_name_4 Dataset {SCALAR}
Location: 0:1:0:5456
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 16 logical bytes, 16 allocated bytes, 100.00% utilization
Type: 16-byte space-padded ASCII string
freestream_temperature Dataset {SCALAR}
Location: 0:1:0:9648
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
molecular_weight:CO Dataset {SCALAR}
Location: 0:1:0:8288
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
molecular_weight:CO2 Dataset {SCALAR}
Location: 0:1:0:8560
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
molecular_weight:H Dataset {SCALAR}
Location: 0:1:0:7416
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
molecular_weight:H2 Dataset {SCALAR}
Location: 0:1:0:5728
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
molecular_weight:H2O Dataset {SCALAR}
Location: 0:1:0:7144
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
molecular_weight:HCO Dataset {SCALAR}
Location: 0:1:0:8832
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
molecular_weight:HO2 Dataset {SCALAR}
Location: 0:1:0:7688
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
molecular_weight:N2 Dataset {SCALAR}
Location: 0:1:0:9104
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
molecular_weight:O Dataset {SCALAR}
Location: 0:1:0:6600
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
molecular_weight:O2 Dataset {SCALAR}
Location: 0:1:0:6328
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
molecular_weight:OH Dataset {SCALAR}
Location: 0:1:0:6872
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
number_of_elements_in_reaction_mechansim Dataset {SCALAR}
Location: 0:1:0:976
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 4 logical bytes, 4 allocated bytes, 100.00% utilization
Type: native int
number_of_reaction_third-body_reactions Dataset {SCALAR}
Location: 0:1:0:4368
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 4 logical bytes, 4 allocated bytes, 100.00% utilization
Type: native int
number_of_species_in_reaction_mechansim Dataset {SCALAR}
Location: 0:1:0:1576
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 4 logical bytes, 4 allocated bytes, 100.00% utilization
Type: native int
number_of_steps_in_reaction_mechansims Dataset {SCALAR}
Location: 0:1:0:4096
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 4 logical bytes, 4 allocated bytes, 100.00% utilization
Type: native int
pout Dataset {SCALAR}
Location: 0:1:0:15528
Links: 1
Modified: 2008-08-11 19:50:17 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
pressure Dataset {100/100, 100/100, 100/100}
Location: 0:1:0:88016712
Links: 1
Modified: 2008-08-11 19:50:17 EDT
Storage: 8000000 logical bytes, 8000000 allocated bytes, 100.00% utilization
Type: native double
reference_conductivity Dataset {SCALAR}
Location: 0:1:0:11064
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
reference_density Dataset {SCALAR}
Location: 0:1:0:10792
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
reference_length Dataset {SCALAR}
Location: 0:1:0:13352
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
reference_pressure Dataset {SCALAR}
Location: 0:1:0:11936
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
reference_ratio_of_specifice_heats Dataset {SCALAR}
Location: 0:1:0:10248
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
reference_specific_heat Dataset {SCALAR}
Location: 0:1:0:12752
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
reference_speed_of_sound Dataset {SCALAR}
Location: 0:1:0:10520
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
reference_temperature Dataset {SCALAR}
Location: 0:1:0:11664
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
reference_time Dataset {SCALAR}
Location: 0:1:0:12480
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
reference_viscosity Dataset {SCALAR}
Location: 0:1:0:13624
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
standard_atmospheric_pressure Dataset {SCALAR}
Location: 0:1:0:12208
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
temp Dataset {100/100, 100/100, 100/100}
Location: 0:1:0:16072
Links: 1
Modified: 2008-08-11 19:50:17 EDT
Storage: 8000000 logical bytes, 8000000 allocated bytes, 100.00% utilization
Type: native double
time Dataset {SCALAR}
Location: 0:1:0:14712
Links: 1
Modified: 2008-08-11 19:50:17 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
time_save Dataset {SCALAR}
Location: 0:1:0:15256
Links: 1
Modified: 2008-08-11 19:50:17 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
tstep Dataset {SCALAR}
Location: 0:1:0:14984
Links: 1
Modified: 2008-08-11 19:50:17 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
u Dataset {3/3, 100/100, 100/100, 100/100}
Attribute: velocity\ component {3}
Type: 16-byte space-padded ASCII string
Data: "u" ' ' repeats 14 times, "v" ' ' repeats 14 times, "w" ' ' repeats 14 times
Location: 0:1:0:88016984
Links: 1
Modified: 2008-08-11 19:50:17 EDT
Storage: 24000000 logical bytes, 24000000 allocated bytes, 100.00% utilization
Type: native double
universal_gas_constant Dataset {SCALAR}
Location: 0:1:0:9376
Links: 1
Modified: 2008-08-11 19:50:16 EDT
Storage: 8 logical bytes, 8 allocated bytes, 100.00% utilization
Type: native double
yspecies Dataset {11/11, 100/100, 100/100, 100/100}
Attribute: specie\ names {11}
Type: 16-byte space-padded ASCII string
Data:
(0) "Y-H2" ' ' repeats 11 times, "Y-O2" ' ' repeats 11 times, "Y-O" ' ' repeats 12 times,
(3) "Y-OH" ' ' repeats 11 times, "Y-H2O" ' ' repeats 10 times, "Y-H" ' ' repeats 12 times,
(6) "Y-HO2" ' ' repeats 10 times, "Y-CO" ' ' repeats 11 times,
(8) "Y-CO2" ' ' repeats 10 times, "Y-HCO" ' ' repeats 10 times, "Y-N2" ' ' repeats 11 times
Location: 0:1:0:15800
Links: 1
Modified: 2008-08-11 19:50:17 EDT
Storage: 88000000 logical bytes, 88000000 allocated bytes, 100.00% utilization
Type: native double
The source file for the parallel netCDF I/O method is in
source/modules/pnetcdf_m.f90
The two subroutines are:
pnetcdf_write() and pnetcdf_read()
To compile and link with a parallel netCDF library, a PnetCDF installation
is available in ~wkliao/PnetCDF
To check the datasets and attributes saved in a netcdf file, use command
% ncdump -c netcdf_file_name.nc
A serial netcdf library is built in ~wkliao/NetCDF. The command ncdump is
available in ~wkliao/NetCDF/bin
An example of using ncdump is given below.
jaguar10 ::run(12:59am) #423% ~wkliao/NetCDF/bin/ncdump -c ../data/pressure_wave_test.1.000E-06.field.nc
netcdf pressure_wave_test.0.000E+00.field {
dimensions:
nx_g = 100 ;
ny_g = 100 ;
nz_g = 50 ;
number_of_species = 11 ;
number_of_velocity_components = 3 ;
variables:
double yspecies(number_of_species, nz_g, ny_g, nx_g) ;
yspecies:specie_name_01 = "Y-H2 " ;
yspecies:specie_name_02 = "Y-O2 " ;
yspecies:specie_name_03 = "Y-O " ;
yspecies:specie_name_04 = "Y-OH " ;
yspecies:specie_name_05 = "Y-H2O " ;
yspecies:specie_name_06 = "Y-H " ;
yspecies:specie_name_07 = "Y-HO2 " ;
yspecies:specie_name_08 = "Y-CO " ;
yspecies:specie_name_09 = "Y-CO2 " ;
yspecies:specie_name_10 = "Y-HCO " ;
yspecies:specie_name_11 = "Y-N2 " ;
double temp(nz_g, ny_g, nx_g) ;
double pressure(nz_g, ny_g, nx_g) ;
double u(number_of_velocity_components, nz_g, ny_g, nx_g) ;
u:velocity_component_1 = "u" ;
u:velocity_component_2 = "v" ;
u:velocity_component_3 = "w" ;
// global attributes:
:number_of_elements_in_reaction_mechansim = 4 ;
:number_of_species_in_reaction_mechansim = 11 ;
:number_of_steps_in_reaction_mechansims = 21 ;
:number_of_reaction_third-body_reactions = 7 ;
:element_name_1 = "C " ;
:element_name_2 = "H " ;
:element_name_3 = "O " ;
:element_name_4 = "N " ;
:molecular_weight:H2 = 0.00201594 ;
:molecular_weight:O2 = 0.0319988 ;
:molecular_weight:O = 0.0159994 ;
:molecular_weight:OH = 0.01700737 ;
:molecular_weight:H2O = 0.01801534 ;
:molecular_weight:H = 0.00100797 ;
:molecular_weight:HO2 = 0.03300677 ;
:molecular_weight:CO = 0.02801055 ;
:molecular_weight:CO2 = 0.04400995 ;
:molecular_weight:HCO = 0.02901852 ;
:molecular_weight:N2 = 0.0280134 ;
:universal_gas_constant = 8.314 ;
:freestream_temperature = 300. ;
:reference_ratio_of_specifice_heats = 1.4 ;
:reference_speed_of_sound = 347.2 ;
:reference_density = 1.1766 ;
:reference_conductivity = 0.02614 ;
:reference_temperature = 120. ;
:reference_pressure = 141836.588544 ;
:standard_atmospheric_pressure = 101325. ;