This repository has been archived by the owner on Dec 20, 2023. It is now read-only.

🧪 Gather QCEC benchmarks #6

Open
burgholzer opened this issue May 8, 2023 · 0 comments · May be fixed by #14
Labels
enhancement (New feature or request)

Comments

@burgholzer
Collaborator

burgholzer commented May 8, 2023

The task is to gather a representative set of benchmarks for QCEC and set up a corresponding benchmark suite.
This builds on #4


On the one hand, this means covering the different equivalence checking approaches within QCEC; at the very least, each of the individual checkers should be benchmarked on its own.

All of these can be built from an ec::EquivalenceCheckingManager by passing the right configuration options (see https://qcec.readthedocs.io/en/latest/library/configuration/Execution.html): just disable all but one checker, and make sure to set parallel=false. If you are up for it, you could try to resolve cda-tum/mqt-qcec#146 (not a must-have).
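To make this concrete, here is a minimal sketch of a single-checker run using the mqt.qcec Python bindings (the C++ ec::Configuration exposes the same options); the attribute and method names follow the 2023 Python API and should be double-checked against the linked docs:

```python
from qiskit import QuantumCircuit

from mqt import qcec

# two trivially equivalent circuits, just to exercise the API
circ1 = QuantumCircuit(2)
circ1.h(0)
circ1.cx(0, 1)
circ1.measure_all()
circ2 = circ1.copy()

# enable exactly one checker and force sequential execution
config = qcec.Configuration()
config.execution.parallel = False
config.execution.run_alternating_checker = True
config.execution.run_construction_checker = False
config.execution.run_simulation_checker = False
config.execution.run_zx_checker = False

ecm = qcec.EquivalenceCheckingManager(circ1, circ2, config)
ecm.run()
print(ecm.equivalence())
```

Repeating the same run with a different run_* flag enabled gives one benchmark configuration per checker.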


On the other hand, this means benchmark circuits.

- As a baseline, the circuits available at https://github.com/cda-tum/qcec/tree/main/test/circuits can be used (see all the tests in https://github.com/cda-tum/qcec/tree/main/test/legacy).
- In general, equivalence checking instances can easily be generated using mqt.bench and/or Qiskit and/or the QFR algorithms.
- Any (high-level, algorithmic) circuit from mqt.bench can be transpiled using Qiskit (or tket or whatever), and the resulting circuit can be checked against the original high-level implementation (see the sketch below).
- Always make sure to include measurements in the original circuit description.
- It would be nice to also include some of the benchmark circuits used for DDSIM here.
- The more you can do in C++ (without resorting to QASM or Python), the better.
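As a rough sketch of how such an instance could be generated end-to-end (shown with the Python packages mqt.bench, Qiskit, and mqt.qcec for brevity; function and parameter names follow their 2023 APIs and may need adjusting, and for the actual suite the C++ route is preferred as noted above):

```python
from qiskit import transpile

from mqt import qcec
from mqt.bench import get_benchmark

# algorithmic-level circuit from mqt.bench (includes measurements)
qc = get_benchmark(benchmark_name="ghz", level="alg", circuit_size=64)

# compile it to a restricted gate set; fix the seed for reproducibility
qc_compiled = transpile(
    qc,
    basis_gates=["rz", "sx", "x", "cx"],
    optimization_level=3,
    seed_transpiler=12345,
)

# check the compiled circuit against the original high-level description
result = qcec.verify(qc, qc_compiled)
print(result.equivalence)
```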


As for the size of the benchmarks: The current max in the DD package, without changing anything, is 128 qubits. Benchmarks that run in significantly less than a second are not interesting. The interesting runtimes are between 1s and 300s per benchmark. So careful selection is key. Otherwise running the evaluation once will take forever.

In all cases, make sure to fix a random seed wherever applicable so that results are reproducible (as much as possible).
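For reference, the places where randomness typically enters such a run (assuming mqt.qcec's Configuration exposes a simulation.seed field, as the configuration docs suggest, and using Qiskit's standard seed_transpiler argument):

```python
from mqt import qcec

SEED = 12345

# seed the random stimuli generated by QCEC's simulation checker
config = qcec.Configuration()
config.simulation.seed = SEED

# when generating instances with Qiskit, also pin the transpiler, e.g.
#   transpile(qc, ..., seed_transpiler=SEED)
# and pass the configuration along to the check, e.g.
#   qcec.verify(qc, qc_compiled, configuration=config)
```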

@burgholzer burgholzer added the enhancement label May 8, 2023
@tyi1025 tyi1025 self-assigned this May 8, 2023
@tyi1025 tyi1025 linked a pull request Jul 10, 2023 that will close this issue