Remove necessity for RecordComponent::SCALAR (#1154)
* Add helper: openPMD::auxiliary::overloaded
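
This helper is presumably the well-known C++17 `overloaded` visitor idiom for `std::visit`; a minimal sketch of that idiom (the `describe` function is purely illustrative, not part of openPMD):

```cpp
#include <string>
#include <variant>

// Classic C++17 visitor helper: inherit the call operators of all lambdas.
template <class... Ts>
struct overloaded : Ts...
{
    using Ts::operator()...;
};
// Deduction guide so `overloaded{...}` infers the lambda types.
template <class... Ts>
overloaded(Ts...) -> overloaded<Ts...>;

// Illustrative use: dispatch on the active alternative of a variant.
std::string describe(std::variant<int, std::string> const &v)
{
    return std::visit(
        overloaded{
            [](int) { return std::string("int"); },
            [](std::string const &) { return std::string("string"); }},
        v);
}
```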

* Prepare Attributable for virtual inheritance

Use only zero-param constructors to avoid diamond initialization
pitfalls
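
The diamond pitfall alluded to here: with virtual inheritance, only the most-derived class initializes the virtual base, so base-constructor arguments passed by intermediate classes are silently ignored. An illustrative sketch (class names are hypothetical, not openPMD's):

```cpp
struct Base
{
    int id = 0;
    Base() = default;
    explicit Base(int i) : id(i) {}
};

// Both intermediates derive virtually, forming a diamond.
struct Left : virtual Base
{
    Left() : Base(1) {} // ignored when Left is not the most-derived class
};
struct Right : virtual Base
{
    Right() : Base(2) {} // likewise ignored
};

// The most-derived class alone initializes the virtual base; its implicit
// default constructor calls Base(), skipping Base(1) and Base(2) entirely.
struct Diamond : Left, Right
{
};

int virtualBaseId()
{
    Diamond d;
    return d.id; // Base's default constructor ran, so this is 0
}
```

Restricting the virtual base to a zero-parameter constructor, as this commit does, removes the possibility of such silently dropped initializations.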

* Fix default constructors/operators of Container and BaseRecordComponent

These derive virtually from Attributable; fixing them avoids propagating
those pitfalls to user code.

* Add m_datasetDefined

See in-code documentation.

* Prepare class structure without applying logic yet

BaseRecord now derives from its contained RecordComponent type.
If the record is scalar, the idea is that the BaseRecord itself is used
as a RecordComponent, without needing to retrieve the [SCALAR] entry.
No logic is implemented around this yet; this commit only prepares the
class structure.
Notice that this will write some unnecessary attributes since the
RecordComponent types initialize some default attributes upon
construction.
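
The intended structure can be sketched in miniature: a record derives from its component type, so a scalar record is its own component, while vector records keep a map of named components (all names here are hypothetical simplifications, not the library's actual class templates):

```cpp
#include <map>
#include <string>

// Minimal sketch: a component carries the dataset state.
struct Component
{
    bool datasetDefined = false;
    void resetDataset() { datasetDefined = true; }
};

// A record IS-A component. When used as a scalar, the record object
// itself carries the dataset; vector records use the map instead,
// so no magic "SCALAR" map key is needed in either case.
struct Record : Component
{
    std::map<std::string, Component> components;

    bool scalar() const { return components.empty(); }
};
```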

* No longer use map entry SCALAR in application logic

Not yet supported: backward compatibility that still allows legacy
access to scalar entries

* Remove KEEP_SYNCHRONOUS task

No longer needed, as one object in the openPMD hierarchy is no longer
represented by possibly multiple Writable objects.

* Adapt core tests

* No virtual methods in Container class

Either this way, or make all of them virtual

* Fully override Container methods in BaseRecord

Special care for legacy usage of SCALAR constant. Implement iteration
API such that it works for scalar components as well.

* Adapt Container API to C++17

insert() and reverse iterators

* Adapt myPath() functionality

* Factor out create_and_bind_container() function template

Will later be called by Record-type classes, too

* Factor out RecordComponent.__setitem__ and __getitem__

Similarly to the Container API, we will need to apply this to
Record-type classes.
Defining `__setitem__` and `__getitem__` for them is sufficient, as all
other members are inherited from RecordComponent.
`__setitem__` and `__getitem__` need special care, as they are inherited
from Container AND from RecordComponent, so some conflict resolution is
needed.

* Consistently use copy semantics in Python API

* Apply new class structure to Python API as well

* Adapt openpmd-pipe to new design

This somewhat demonstrates that this change is slightly API-breaking.
Since openpmd-pipe acts directly on the class structure via
`isinstance()`, fixes are necessary.

* Safeguard: No scalar and vector components side by side

"A scalar component can not be contained at the same time as one or more regular components."
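
A sketch of how such a safeguard might look (hypothetical, not the library's actual implementation):

```cpp
#include <set>
#include <stdexcept>
#include <string>

// Hypothetical record: tracks whether it is used as a scalar dataset
// and which named (vector) components it contains, and rejects mixing.
struct GuardedRecord
{
    bool scalarDatasetDefined = false;
    std::set<std::string> componentNames;

    void defineScalarDataset()
    {
        if (!componentNames.empty())
            throw std::runtime_error(
                "A scalar component can not be contained at the same "
                "time as one or more regular components.");
        scalarDatasetDefined = true;
    }

    void addComponent(std::string const &name)
    {
        if (scalarDatasetDefined)
            throw std::runtime_error(
                "A scalar component can not be contained at the same "
                "time as one or more regular components.");
        componentNames.insert(name);
    }
};
```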

* Remove [SCALAR] from all examples

* Avoid object slicing when using records as scalar components

* Documentation

* Adapt to refactored Python bindings after rebasing
franzpoeschel authored Dec 22, 2023
1 parent e965f69 commit 2e89f87
Showing 67 changed files with 1,572 additions and 703 deletions.
2 changes: 0 additions & 2 deletions CMakeLists.txt
@@ -462,7 +462,6 @@ set(CORE_SOURCE
src/auxiliary/JSON.cpp
src/backend/Attributable.cpp
src/backend/BaseRecordComponent.cpp
- src/backend/Container.cpp
src/backend/MeshRecordComponent.cpp
src/backend/PatchRecord.cpp
src/backend/PatchRecordComponent.cpp
@@ -605,7 +604,6 @@ if(openPMD_HAVE_PYTHON)
src/binding/python/openPMD.cpp
src/binding/python/Access.cpp
src/binding/python/Attributable.cpp
- src/binding/python/BaseRecord.cpp
src/binding/python/BaseRecordComponent.cpp
src/binding/python/ChunkInfo.cpp
src/binding/python/Dataset.cpp
3 changes: 3 additions & 0 deletions docs/source/usage/concepts.rst
@@ -20,6 +20,9 @@ A record is a data set with common properties, e.g. the electric field :math:`\
A density field could be another record - which is scalar as it only has one component.

In general, openPMD allows records with arbitrary number of components (tensors), as well as vector records and scalar records.
+ In the case of vector records, the single components are stored as datasets within the record.
+ In the case of scalar records, the record and component are equivalent.
+ In the API, the record can be directly used as a component, and in the standard a scalar record is represented by the scalar dataset with attributes.

Meshes and Particles
--------------------
2 changes: 1 addition & 1 deletion examples/10_streaming_write.py
@@ -69,7 +69,7 @@
temperature.axis_labels = ["x", "y"]
temperature.grid_spacing = [1., 1.]
# temperature has no x,y,z components, so skip the last layer:
- temperature_dataset = temperature[io.Mesh_Record_Component.SCALAR]
+ temperature_dataset = temperature
# let's say we are in a 3x3 mesh
temperature_dataset.reset_dataset(
io.Dataset(np.dtype("double"), [3, 3]))
3 changes: 1 addition & 2 deletions examples/12_span_write.cpp
@@ -90,8 +90,7 @@ void span_write(std::string const &filename)

using mesh_type = position_t;

- RecordComponent chargeDensity =
-     iteration.meshes["e_chargeDensity"][RecordComponent::SCALAR];
+ Mesh chargeDensity = iteration.meshes["e_chargeDensity"];

/*
* A similar memory optimization is possible by using a unique_ptr type
3 changes: 1 addition & 2 deletions examples/13_write_dynamic_configuration.cpp
@@ -124,8 +124,7 @@ chunks = "auto"
Dataset differentlyCompressedDataset{Datatype::INT, {10}};
differentlyCompressedDataset.options = differentCompressionSettings;

- auto someMesh = iteration.meshes["differentCompressionSettings"]
-                                 [RecordComponent::SCALAR];
+ auto someMesh = iteration.meshes["differentCompressionSettings"];
someMesh.resetDataset(differentlyCompressedDataset);
std::vector<int> dataVec(10, i);
someMesh.storeChunk(dataVec, {0}, {10});
2 changes: 1 addition & 1 deletion examples/13_write_dynamic_configuration.py
@@ -130,7 +130,7 @@ def main():
temperature.axis_labels = ["x", "y"]
temperature.grid_spacing = [1., 1.]
# temperature has no x,y,z components, so skip the last layer:
- temperature_dataset = temperature[io.Mesh_Record_Component.SCALAR]
+ temperature_dataset = temperature
# let's say we are in a 3x3 mesh
dataset = io.Dataset(np.dtype("double"), [3, 3])
dataset.options = json.dumps(config)
3 changes: 1 addition & 2 deletions examples/1_structure.cpp
@@ -54,10 +54,9 @@ int main()
* differently.
* https://github.com/openPMD/openPMD-standard/blob/latest/STANDARD.md#scalar-vector-and-tensor-records*/
Record mass = electrons["mass"];
- RecordComponent mass_scalar = mass[RecordComponent::SCALAR];

Dataset dataset = Dataset(Datatype::DOUBLE, Extent{1});
- mass_scalar.resetDataset(dataset);
+ mass.resetDataset(dataset);

/* Required Records and RecordComponents are created automatically.
* Initialization has to be done explicitly by the user. */
4 changes: 1 addition & 3 deletions examples/2_read_serial.cpp
@@ -58,9 +58,7 @@ int main()
}

openPMD::ParticleSpecies electrons = i.particles["electrons"];
- std::shared_ptr<double> charge =
-     electrons["charge"][openPMD::RecordComponent::SCALAR]
-         .loadChunk<double>();
+ std::shared_ptr<double> charge = electrons["charge"].loadChunk<double>();
series.flush();
cout << "And the first electron particle has a charge = "
<< charge.get()[0];
2 changes: 1 addition & 1 deletion examples/2_read_serial.py
@@ -34,7 +34,7 @@

# printing a scalar value
electrons = i.particles["electrons"]
- charge = electrons["charge"][io.Mesh_Record_Component.SCALAR]
+ charge = electrons["charge"]
series.flush()
print("And the first electron particle has a charge {}"
.format(charge[0]))
3 changes: 1 addition & 2 deletions examples/3_write_serial.cpp
@@ -49,8 +49,7 @@ int main(int argc, char *argv[])
// in streaming setups, e.g. an iteration cannot be opened again once
// it has been closed.
// `Series::iterations` can be directly accessed in random-access workflows.
- MeshRecordComponent rho =
-     series.writeIterations()[1].meshes["rho"][MeshRecordComponent::SCALAR];
+ Mesh rho = series.writeIterations()[1].meshes["rho"];
cout << "Created a scalar mesh Record with all required openPMD "
"attributes\n";

2 changes: 1 addition & 1 deletion examples/3_write_serial.py
@@ -34,7 +34,7 @@
# it has been closed.
# `Series.iterations` can be directly accessed in random-access workflows.
rho = series.write_iterations()[1]. \
-     meshes["rho"][io.Mesh_Record_Component.SCALAR]
+     meshes["rho"]

dataset = io.Dataset(data.dtype, data.shape)

3 changes: 1 addition & 2 deletions examples/5_write_parallel.cpp
@@ -64,8 +64,7 @@ int main(int argc, char *argv[])
// in streaming setups, e.g. an iteration cannot be opened again once
// it has been closed.
series.iterations[1].open();
- MeshRecordComponent mymesh =
-     series.iterations[1].meshes["mymesh"][MeshRecordComponent::SCALAR];
+ Mesh mymesh = series.iterations[1].meshes["mymesh"];

// example 1D domain decomposition in first index
Datatype datatype = determineDatatype<float>();
2 changes: 1 addition & 1 deletion examples/5_write_parallel.py
@@ -48,7 +48,7 @@
# `Series.iterations` can be directly accessed in random-access workflows.
series.iterations[1].open()
mymesh = series.iterations[1]. \
-     meshes["mymesh"][io.Mesh_Record_Component.SCALAR]
+     meshes["mymesh"]

# example 1D domain decomposition in first index
global_extent = [comm.size * 10, 300]
15 changes: 5 additions & 10 deletions examples/7_extended_write_serial.cpp
@@ -83,7 +83,7 @@ int main()
{{io::UnitDimension::M, 1}});
electrons["displacement"]["x"].setUnitSI(1e-6);
electrons.erase("displacement");
- electrons["weighting"][io::RecordComponent::SCALAR]
+ electrons["weighting"]
.resetDataset({io::Datatype::FLOAT, {1}})
.makeConstant(1.e-5);
}
@@ -150,11 +150,8 @@ int main()
electrons["positionOffset"]["x"].resetDataset(d);

auto dset = io::Dataset(io::determineDatatype<uint64_t>(), {2});
- electrons.particlePatches["numParticles"][io::RecordComponent::SCALAR]
-     .resetDataset(dset);
- electrons
-     .particlePatches["numParticlesOffset"][io::RecordComponent::SCALAR]
-     .resetDataset(dset);
+ electrons.particlePatches["numParticles"].resetDataset(dset);
+ electrons.particlePatches["numParticlesOffset"].resetDataset(dset);

dset = io::Dataset(io::Datatype::FLOAT, {2});
electrons.particlePatches["offset"].setUnitDimension(
@@ -204,12 +201,10 @@ int main()
electrons["positionOffset"]["x"].storeChunk(
partial_particleOff, o, e);

- electrons
-     .particlePatches["numParticles"][io::RecordComponent::SCALAR]
-     .store(i, numParticles);
+ electrons.particlePatches["numParticles"].store(i, numParticles);
electrons
.particlePatches["numParticlesOffset"]
-     [io::RecordComponent::SCALAR]

.store(i, numParticlesOffset);

electrons.particlePatches["offset"]["x"].store(
10 changes: 5 additions & 5 deletions examples/7_extended_write_serial.py
@@ -90,7 +90,7 @@
electrons["displacement"].unit_dimension = {Unit_Dimension.M: 1}
electrons["displacement"]["x"].unit_SI = 1.e-6
del electrons["displacement"]
- electrons["weighting"][SCALAR] \
+ electrons["weighting"] \
.reset_dataset(Dataset(np.dtype("float32"), extent=[1])) \
.make_constant(1.e-5)

@@ -137,8 +137,8 @@
electrons["positionOffset"]["x"].reset_dataset(d)

dset = Dataset(np.dtype("uint64"), extent=[2])
- electrons.particle_patches["numParticles"][SCALAR].reset_dataset(dset)
- electrons.particle_patches["numParticlesOffset"][SCALAR]. \
+ electrons.particle_patches["numParticles"].reset_dataset(dset)
+ electrons.particle_patches["numParticlesOffset"]. \
reset_dataset(dset)

dset = Dataset(partial_particlePos.dtype, extent=[2])
@@ -185,9 +185,9 @@
electrons["position"]["x"][o:u] = partial_particlePos
electrons["positionOffset"]["x"][o:u] = partial_particleOff

- electrons.particle_patches["numParticles"][SCALAR].store(
+ electrons.particle_patches["numParticles"].store(
i, np.array([numParticles], dtype=np.uint64))
- electrons.particle_patches["numParticlesOffset"][SCALAR].store(
+ electrons.particle_patches["numParticlesOffset"].store(
i, np.array([numParticlesOffset], dtype=np.uint64))

electrons.particle_patches["offset"]["x"].store(
10 changes: 4 additions & 6 deletions examples/8a_benchmark_write_parallel.cpp
@@ -820,8 +820,8 @@ void AbstractPattern::storeParticles(ParticleSpecies &currSpecies, int &step)
openPMD::Dataset(openPMD::determineDatatype<uint64_t>(), {np});
auto const realDataSet =
openPMD::Dataset(openPMD::determineDatatype<double>(), {np});
- currSpecies["id"][RecordComponent::SCALAR].resetDataset(intDataSet);
- currSpecies["charge"][RecordComponent::SCALAR].resetDataset(realDataSet);
+ currSpecies["id"].resetDataset(intDataSet);
+ currSpecies["charge"].resetDataset(realDataSet);

currSpecies["position"]["x"].resetDataset(realDataSet);

@@ -837,12 +837,10 @@ void AbstractPattern::storeParticles(ParticleSpecies &currSpecies, int &step)
if (count > 0)
{
auto ids = createData<uint64_t>(count, offset, 1);
- currSpecies["id"][RecordComponent::SCALAR].storeChunk(
-     ids, {offset}, {count});
+ currSpecies["id"].storeChunk(ids, {offset}, {count});

auto charges = createData<double>(count, 0.1 * step, 0.0001);
- currSpecies["charge"][RecordComponent::SCALAR].storeChunk(
-     charges, {offset}, {count});
+ currSpecies["charge"].storeChunk(charges, {offset}, {count});

auto mx = createData<double>(count, 1.0 * step, 0.0002);
currSpecies["position"]["x"].storeChunk(mx, {offset}, {count});
2 changes: 1 addition & 1 deletion examples/8b_benchmark_read_parallel.cpp
@@ -750,7 +750,7 @@ class TestInput
}

openPMD::ParticleSpecies p = iter.particles.begin()->second;
- RecordComponent idVal = p["id"][RecordComponent::SCALAR];
+ Record idVal = p["id"];

Extent pExtent = idVal.getExtent();

8 changes: 4 additions & 4 deletions examples/9_particle_write_serial.py
@@ -39,14 +39,14 @@

# let's set a weird user-defined record this time
electrons["displacement"].unit_dimension = {Unit_Dimension.M: 1}
- electrons["displacement"][SCALAR].unit_SI = 1.e-6
+ electrons["displacement"].unit_SI = 1.e-6
dset = Dataset(np.dtype("float64"), extent=[n_particles])
- electrons["displacement"][SCALAR].reset_dataset(dset)
- electrons["displacement"][SCALAR].make_constant(42.43)
+ electrons["displacement"].reset_dataset(dset)
+ electrons["displacement"].make_constant(42.43)
# don't like it anymore? remove it with:
# del electrons["displacement"]

- electrons["weighting"][SCALAR] \
+ electrons["weighting"] \
.reset_dataset(Dataset(np.dtype("float32"), extent=[n_particles])) \
.make_constant(1.e-5)

8 changes: 0 additions & 8 deletions include/openPMD/IO/AbstractIOHandlerImpl.hpp
@@ -385,14 +385,6 @@ class AbstractIOHandlerImpl
virtual void
listAttributes(Writable *, Parameter<Operation::LIST_ATTS> &) = 0;

- /** Treat the current Writable as equivalent to that in the parameter object
-  *
-  * Using the default implementation (which copies the abstractFilePath
-  * into the current writable) should be enough for all backends.
-  */
- void keepSynchronous(
-     Writable *, Parameter<Operation::KEEP_SYNCHRONOUS> const &param);

/** Notify the backend that the Writable has been / will be deallocated.
*
* The backend should remove all references to this Writable from internal
46 changes: 5 additions & 41 deletions include/openPMD/IO/IOTask.hpp
@@ -48,36 +48,19 @@ Writable *getWritable(Attributable *);
/** Type of IO operation between logical and persistent data.
*/
OPENPMDAPI_EXPORT_ENUM_CLASS(Operation){
- CREATE_FILE,
- CHECK_FILE,
- OPEN_FILE,
- CLOSE_FILE,
+ CREATE_FILE, CHECK_FILE, OPEN_FILE, CLOSE_FILE,
DELETE_FILE,

- CREATE_PATH,
- CLOSE_PATH,
- OPEN_PATH,
- DELETE_PATH,
+ CREATE_PATH, CLOSE_PATH, OPEN_PATH, DELETE_PATH,
LIST_PATHS,

- CREATE_DATASET,
- EXTEND_DATASET,
- OPEN_DATASET,
- DELETE_DATASET,
- WRITE_DATASET,
- READ_DATASET,
- LIST_DATASETS,
- GET_BUFFER_VIEW,
+ CREATE_DATASET, EXTEND_DATASET, OPEN_DATASET, DELETE_DATASET,
+ WRITE_DATASET, READ_DATASET, LIST_DATASETS, GET_BUFFER_VIEW,

- DELETE_ATT,
- WRITE_ATT,
- READ_ATT,
- LIST_ATTS,
+ DELETE_ATT, WRITE_ATT, READ_ATT, LIST_ATTS,

ADVANCE,
AVAILABLE_CHUNKS, //!< Query chunks that can be loaded in a dataset
- KEEP_SYNCHRONOUS, //!< Keep two items in the object model synchronous with
-                   //!< each other
DEREGISTER //!< Inform the backend that an object has been deleted.
}; // note: if you change the enum members here, please update
// docs/source/dev/design.rst
@@ -658,25 +641,6 @@ struct OPENPMDAPI_EXPORT Parameter<Operation::AVAILABLE_CHUNKS>
std::shared_ptr<ChunkTable> chunks = std::make_shared<ChunkTable>();
};

- template <>
- struct OPENPMDAPI_EXPORT Parameter<Operation::KEEP_SYNCHRONOUS>
-     : public AbstractParameter
- {
-     Parameter() = default;
-     Parameter(Parameter &&) = default;
-     Parameter(Parameter const &) = default;
-     Parameter &operator=(Parameter &&) = default;
-     Parameter &operator=(Parameter const &) = default;
-
-     std::unique_ptr<AbstractParameter> to_heap() && override
-     {
-         return std::make_unique<Parameter<Operation::KEEP_SYNCHRONOUS>>(
-             std::move(*this));
-     }
-
-     Writable *otherWritable;
- };

template <>
struct OPENPMDAPI_EXPORT Parameter<Operation::DEREGISTER>
: public AbstractParameter
14 changes: 10 additions & 4 deletions include/openPMD/Iteration.hpp
@@ -244,19 +244,25 @@ class Iteration : public Attributable
private:
Iteration();

- std::shared_ptr<internal::IterationData> m_iterationData{
-     new internal::IterationData};
+ using Data_t = internal::IterationData;
+ std::shared_ptr<Data_t> m_iterationData;

- inline internal::IterationData const &get() const
+ inline Data_t const &get() const
{
return *m_iterationData;
}

- inline internal::IterationData &get()
+ inline Data_t &get()
{
return *m_iterationData;
}

+ inline void setData(std::shared_ptr<Data_t> data)
+ {
+     m_iterationData = std::move(data);
+     Attributable::setData(m_iterationData);
+ }

void flushFileBased(
std::string const &, IterationIndex_t, internal::FlushParams const &);
void flushGroupBased(IterationIndex_t, internal::FlushParams const &);
9 changes: 3 additions & 6 deletions include/openPMD/ParticleSpecies.hpp
@@ -61,19 +61,16 @@ namespace traits
template <>
struct GenerationPolicy<ParticleSpecies>
{
+ constexpr static bool is_noop = false;
template <typename T>
void operator()(T &ret)
{
ret.particlePatches.linkHierarchy(ret.writable());

auto &np = ret.particlePatches["numParticles"];
- auto &npc = np[RecordComponent::SCALAR];
- npc.resetDataset(Dataset(determineDatatype<uint64_t>(), {1}));
- npc.parent() = np.parent();
+ np.resetDataset(Dataset(determineDatatype<uint64_t>(), {1}));
auto &npo = ret.particlePatches["numParticlesOffset"];
- auto &npoc = npo[RecordComponent::SCALAR];
- npoc.resetDataset(Dataset(determineDatatype<uint64_t>(), {1}));
- npoc.parent() = npo.parent();
+ npo.resetDataset(Dataset(determineDatatype<uint64_t>(), {1}));
}
};
} // namespace traits
Expand Down
