list index out of range in generate_perf_csv_for_all_dms #1

Open

mazlo opened this issue Sep 22, 2018 · 1 comment
mazlo commented Sep 22, 2018

Hey guys, thanks for this great benchmark and also the great catalogue of systems to evaluate.

I got an error with the example dataset and query1 when running with the cold_cache option. I'm running the benchmark with this command

docker run --privileged litmus:local python run_script.py --rdf 4store --benchmark_actions cold_cache --runs 1

and it is giving me the error below.

['graph_data/graph-example-1.xml']
['r_4store']
['query_cold']
['cold_query']
update-alternatives: error: no alternatives for mozilla-javaplugin.so
update-java-alternatives: plugin alternative does not exist: /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/IcedTeaPlugin.so
rdf_data/rdf-example-1.nt
RDF-3X turtle importer
(c) 2008 Thomas Neumann. Web site: http://www.mpi-inf.mpg.de/~neumann/rdf3x
Parsing rdf_data/rdf-example-1.nt...
Building the dictionary...
Resolving string ids...
Loading database into /tmp/query_check...
Loading triples...
Loading strings...
Computing statistics...
Done.
Benchmarking 4store
4store[2379]: backend-setup.c:186 erased files for KB hello
4store[2379]: backend-setup.c:318 created RDF metadata for KB hello
['/4store_queries/query1.sparql']
vm.drop_caches = 3
('*****', 'perf stat -o /var/log/4store/query_cold_logs_perf.log.query1.1 --append -e cycles,instructions,cache-references,cache-misses,bus-cycles -a /scripts/4store/4store_query_cold_cache_perf.sh hello /4store_queries/query1.sparql /var/log/4store/query_cold_logs.log', '*****')
FInished perf1
vm.drop_caches = 3
('*****', 'perf stat -o /var/log/4store/query_cold_logs_perf.log.query1.2 --append -e L1-dcache-loads,L1-dcache-load-misses,L1-dcache-stores,dTLB-loads,dTLB-load-misses,dTLB-prefetch-misses -a /scripts/4store/4store_query_cold_cache_perf.sh hello /4store_queries/query1.sparql /var/log/4store/query_cold_logs.log', '*****')
FInished perf2
vm.drop_caches = 3
('*****', 'perf stat -o /var/log/4store/query_cold_logs_perf.log.query1.3 --append -e LLC-loads,LLC-load-misses,LLC-stores,LLC-prefetches -a /scripts/4store/4store_query_cold_cache_perf.sh hello /4store_queries/query1.sparql /var/log/4store/query_cold_logs.log', '*****')
Finished Perf3
vm.drop_caches = 3
('*****', 'perf stat -o /var/log/4store/query_cold_logs_perf.log.query1.4 --append -e branches,branch-misses,context-switches,cpu-migrations,page-faults -a /scripts/4store/4store_query_cold_cache_perf.sh hello /4store_queries/query1.sparql /var/log/4store/query_cold_logs.log', '*****')






('*********', '/4store_queries/*.sparql', '*************')






('*********', 'query1', '*************')
query_hot_logs_perf.log.query1.1
query_hot_logs_perf.log.query1.2
query_hot_logs_perf.log.query1.3
query_hot_logs_perf.log.query1.4
query_cold_logs_perf.log.query1.1
query_cold_logs_perf.log.query1.3
query_cold_logs_perf.log.query1.2
query_cold_logs_perf.log.query1.4
{}
('**************', 'r_4store', '***********************')
Traceback (most recent call last):
  File "run_script.py", line 2215, in <module>
    generate_perf_csv_for_all_dms("r_", "temp_rdf.csv", process_files = process_files, list_of_dms = ["r_" + each for each in args['rdf'].split(",")])
  File "run_script.py", line 1953, in generate_perf_csv_for_all_dms
    f.write(",".join(all_cold[each]['2'][i][4:]))
IndexError: list index out of range
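
For what it's worth, the failing line seems to assume that all_cold[each]['2'] always holds one row per perf invocation, which apparently doesn't hold here. Purely as illustration, this is the kind of guard I mean (the layout of all_cold and the safe_row helper are my guesses from the traceback, not taken from run_script.py):

# Illustrative only -- all_cold's layout is assumed from the traceback,
# not read from run_script.py.
all_cold = {"r_4store": {"2": [["query1", "cold", "run1", "1", "12345", "cycles"]]}}

def safe_row(rows, i):
    """Return row i, or an empty list when fewer rows were parsed than expected."""
    return rows[i] if i < len(rows) else []

for i in range(4):  # the script seems to expect one row per perf log (.1 to .4)
    row = safe_row(all_cold["r_4store"]["2"], i)
    print(",".join(row[4:]))  # no IndexError even when a log is missing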

Have you encountered this error before? I would be very glad for any pointers.

With docker run --privileged litmus:local python run_script.py --rdf 4store --benchmark_actions warm_cache --runs 1 it works fine.

My host system is Ubuntu 16.04.


mazlo commented Sep 24, 2018

Ok, I've played around a bit.

First, I connected to the container via docker run -it --privileged litmus:local bash so that I could run the script from within the container.

When I run python run_script.py --rdf 4store --benchmark_actions cold_cache --runs 1 I get an error related to the one reported above, but slightly different.

root@de9513ed006c:/# python run_script.py --rdf 4store --benchmark_actions cold_cache --runs 1
['graph_data/graph-example-1.xml']
['r_4store']
['query_cold']
['cold_query']
update-alternatives: error: no alternatives for mozilla-javaplugin.so
update-java-alternatives: plugin alternative does not exist: /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/IcedTeaPlugin.so
rdf_data/rdf-example-1.nt
RDF-3X turtle importer
(c) 2008 Thomas Neumann. Web site: http://www.mpi-inf.mpg.de/~neumann/rdf3x
Parsing rdf_data/rdf-example-1.nt...
Building the dictionary...
Resolving string ids...
Loading database into /tmp/query_check...
Loading triples...
Loading strings...
Computing statistics...
Done.
Benchmarking 4store
4store[494]: backend-setup.c:186 erased files for KB hello
4store[494]: backend-setup.c:318 created RDF metadata for KB hello
['/4store_queries/query1.sparql']
vm.drop_caches = 3
('*****', 'perf stat -o /var/log/4store/query_cold_logs_perf.log.query1.1 --append -e cycles,instructions,cache-references,cache-misses,bus-cycles -a /scripts/4store/4store_query_cold_cache_perf.sh hello /4store_queries/query1.sparql /var/log/4store/query_cold_logs.log', '*****')
FInished perf1
vm.drop_caches = 3
('*****', 'perf stat -o /var/log/4store/query_cold_logs_perf.log.query1.2 --append -e L1-dcache-loads,L1-dcache-load-misses,L1-dcache-stores,dTLB-loads,dTLB-load-misses,dTLB-prefetch-misses -a /scripts/4store/4store_query_cold_cache_perf.sh hello /4store_queries/query1.sparql /var/log/4store/query_cold_logs.log', '*****')
FInished perf2
vm.drop_caches = 3
('*****', 'perf stat -o /var/log/4store/query_cold_logs_perf.log.query1.3 --append -e LLC-loads,LLC-load-misses,LLC-stores,LLC-prefetches -a /scripts/4store/4store_query_cold_cache_perf.sh hello /4store_queries/query1.sparql /var/log/4store/query_cold_logs.log', '*****')
Finished Perf3
vm.drop_caches = 3
('*****', 'perf stat -o /var/log/4store/query_cold_logs_perf.log.query1.4 --append -e branches,branch-misses,context-switches,cpu-migrations,page-faults -a /scripts/4store/4store_query_cold_cache_perf.sh hello /4store_queries/query1.sparql /var/log/4store/query_cold_logs.log', '*****')






('*********', '/4store_queries/*.sparql', '*************')






('*********', 'query1', '*************')
query_cold_logs_perf.log.query1.1
query_cold_logs_perf.log.query1.3
query_cold_logs_perf.log.query1.2
query_cold_logs_perf.log.query1.4
{}
Traceback (most recent call last):
  File "run_script.py", line 2215, in <module>
    generate_perf_csv_for_all_dms("r_", "temp_rdf.csv", process_files = process_files, list_of_dms = ["r_" + each for each in args['rdf'].split(",")])
  File "run_script.py", line 1908, in generate_perf_csv_for_all_dms
    f.write(",".join(dic_load_headers['1']))
KeyError: '1'

It seems that process_all_perfs_dms is not building the structure that generate_perf_csv_for_all_dms expects.
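
The empty {} printed just before the traceback suggests that no perf headers get collected at all for this run, so dic_load_headers never receives its '1' key. A minimal sketch of a defensive lookup (the dict name and its numeric string keys come from the traceback; everything else here is my assumption):

# Sketch only -- dic_load_headers and its '1'..'4' keys are taken from the
# traceback; the surrounding logic is assumed, not from run_script.py.
dic_load_headers = {}  # apparently what the cold-cache-only run produces

headers = dic_load_headers.get('1')
if headers is None:
    print("no headers collected for perf run 1 -- skipping instead of raising KeyError")
else:
    print(",".join(headers))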

BUT! It works when I first run with the warm_cache option and then with the cold_cache option. Because I am in the container, the log files don't get discarded between the two runs. So:

python run_script.py --rdf 4store --benchmark_actions warm_cache --runs 1
python run_script.py --rdf 4store --benchmark_actions cold_cache --runs 1
