add to README
cvick32 committed Sep 21, 2022
1 parent 05b0dc8 commit be3e87b
Showing 2 changed files with 29 additions and 8 deletions.
32 changes: 29 additions & 3 deletions README.md
@@ -40,6 +40,13 @@ logs the results of that run in `./results/{tool_name}/`.
All results will be in python files that can be viewed in the
top-level directory under `results/{tool_name}/`.

## Watching Results

If at any point a particular benchmark hangs, you can press C-c
(Ctrl-C) to kill that benchmark run and continue with the rest. The
alternative is to close the entire shell session, which will kill the
entire benchmark run.
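
One way a driver can provide this skip-on-interrupt behavior is to run
each benchmark in its own subprocess and catch `KeyboardInterrupt` in the
loop. The sketch below is illustrative only, not the repository's actual
implementation; the function name, benchmark list, and timeout are
hypothetical placeholders.

```python
import subprocess

# Illustrative sketch: run each benchmark in its own subprocess so that C-c
# (KeyboardInterrupt) aborts just the current benchmark, not the whole run.
def run_all(benchmarks, timeout_secs=3600):
    for bench in benchmarks:
        try:
            subprocess.run(
                ["python", "run_benchmarks.py", "CondHist", "single",
                 f"--only_run={bench}"],
                timeout=timeout_secs,
            )
        except (KeyboardInterrupt, subprocess.TimeoutExpired):
            # C-c (or the timeout) kills this benchmark; the loop moves on.
            print(f"skipped {bench}")
```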

## run_benchmarks.py Arguments

- Tool Name
@@ -69,6 +76,25 @@ sets.

To get back to the main branch run: `git switch main`.

## Output Files

To view the output of our tool, first run a benchmark: `python
run_benchmarks.py CondHist single --only_run=array_copy.smt2`.

Then, after that command terminates, view the `out.vmt` file in the
`src` directory. This file can be checked with `ic3ia` like so: `ic3ia
out.vmt`.
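
If you prefer to script this check, a minimal sketch follows. It assumes
`ic3ia` is on your `PATH` and that the benchmark run left `out.vmt` under
`src/`.

```python
import subprocess

# Illustrative sketch: run ic3ia on the generated VMT file and print
# the model checker's output.
result = subprocess.run(["ic3ia", "src/out.vmt"],
                        capture_output=True, text=True)
print(result.stdout)
```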

## "Interesting" Benchmarks

To find a benchmark that requires synthesizing a history condition,
look in the `interp` dictionary of
`paper-results/CondHist/aeval-multiple-results.py`.

Then, to run that benchmark, for instance `array_hybr_sum.smt2`, run:

`python run_benchmarks.py CondHist multiple --only_run=array_hybr_sum.smt2`
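
Since the results files are plain Python (as noted above), one way to
inspect `interp` programmatically is with `runpy`. The shape of the
dictionary's values is an assumption here; inspect the file directly if
it differs.

```python
import runpy

# Illustrative sketch: load a results file and list the benchmarks that
# appear in its `interp` dictionary.
results = runpy.run_path("paper-results/CondHist/aeval-multiple-results.py")
for benchmark, history_condition in results["interp"].items():
    print(benchmark, "->", history_condition)
```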

# Reproducing Results
To reproduce all the results from the paper in full, you will need to
run the following commands:
@@ -84,9 +110,9 @@ run the following commands:
- on branch `UnCondHist2`
- `python run_benchmarks.py UnCondHist2 all`

-Note that these commands will take multiple hours to complete. To run
-a subset of the benchmarks, use the `--subset` command line argument,
-demonstrated above.
+Note that each of these commands will take from 1 to 2 hours to
+complete. To run a subset of the benchmarks, use the `--subset`
+command line argument, demonstrated above.
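
To script the whole reproduction, something like the sketch below could
work. The `RUNS` list is a placeholder showing only the branch visible
above; fill in the remaining (branch, tool name) pairs from the full
list in the README.

```python
import subprocess

# Illustrative sketch: switch to each results branch and run its full
# benchmark set. Only UnCondHist2 is shown; the other pairs are elided.
RUNS = [("UnCondHist2", "UnCondHist2")]

for branch, tool in RUNS:
    subprocess.run(["git", "switch", branch], check=True)
    subprocess.run(["python", "run_benchmarks.py", tool, "all"], check=True)

# Return to the main branch when done.
subprocess.run(["git", "switch", "main"], check=True)
```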



5 changes: 0 additions & 5 deletions src/transform_and_run.py
@@ -45,9 +45,6 @@ def run_example(file):
    v = SmtToVmt(p.get_all_vars(), p.prop, filename)
    run_benchmark(filename, v, TIMEOUT_TIME)

-strange = ["array_even_odd_2.smt2", "array_index_compl.smt2", "array_split_17.smt2", "array_even_odd_1.smt2"]
-
-
def run_aeval_single_ours(tool_name, num, only_run):
    i = 0
    for filename in os.listdir(SINGLE):
@@ -58,8 +55,6 @@ def run_aeval_single_ours(tool_name, num, only_run):
            continue
        if only_run and filename != only_run:
            continue
-        if filename not in strange:
-            continue
        problem = None
        with open(os.path.join(SINGLE, filename)) as f:
            problem = SmtProblem(f.read())
