Performance regression from v9.7 -> v9.8-v9.11 #4166
How many workers are you using?
The hint is incomplete. Can you improve it?
I have consistent results with these parameters: fix_variables_to_their_hinted_value:true, num_workers:10, use_feasibility_jump:false, use_rins_lns:false, use_feasibility_pump:false, cp_model_probing_level:0. It seems to solve consistently in 6-7s.
The minimal example above does not set them explicitly, so I assume it's determined by the number of cores on the system? In my case, that's 8.
Can you elaborate? Should one either give no hints at all or hints to all variables?
The configuration fix_variables_to_their_hinted_value:true does not seem like an option in my case, because the hints really are only hints: I set them based on an expectation that for the majority of variables of a certain kind the hinted property will be true, but there will be exceptions (in more detail: SLOTHY can interleave neighbouring loop iterations, and there are booleans indicating if an instruction is pulled forward into the previous iteration (e.g. an early load), or deferred to the next iteration (e.g. a late store); most instructions will stay in their original iteration and hence the tool hints at that, but without early/late instructions altogether, the tool would be much less powerful).
Can you comment further on the configuration options you have set to force consistency? It does look like search either succeeds quickly, or some search strategy leads the solver astray entirely (where the usual ~8s solution is missed, solutions are often not even found with a significantly larger budget).
The closer to completeness the hint is, the less effort is needed in search.
We do process complete feasible hints differently.
Laurent Perron | Operations Research | ***@***.*** | (33) 1 42 68 53 00
There are usually simple solutions which follow the (incomplete) hints, but they won't minimize the given objective (stall minimization, in SLOTHY's case) -- are those still useful hints in your experience, or should they be removed?
Can you try num_workers:24?
Or just num_workers:1,search_branching:FIXED_SEARCH
What is the takeaway here? When should I consider setting these parameters? I will start a SLOTHY CI run using the suggested settings.
Following suggestions in google/or-tools#4166
@lperron Unfortunately, unconditionally setting the suggested parameters does not work for us. How would you suggest to proceed here? Do you have a sense of what v9.7->v9.8 change might have triggered this performance change? Has some new search strategy been added in v9.8 that might lead the solver astray in the models produced by SLOTHY?
OK. I have no quick solution. If you could send me a collection of models, I can integrate those into our benchmark suite.
@lperron I will prepare a set of models representative of SLOTHY workloads and share them in the coming days.
@lperron @Mizux I have exported some of the models exercised in the SLOTHY CI here: https://github.com/slothy-optimizer/slothy/tree/ci_models/paper/scripts/models. Performance numbers as observed on my local Apple M1 are in https://github.com/slothy-optimizer/slothy/blob/ci_models/paper/scripts/models/results.txt. Some of them are solved/refuted very quickly, so one should probably hand-select a few that can be solved in seconds to minutes. Please let me know if this is useful to you, or what kind of models/format you would prefer otherwise.
Can you try with these parameters?
I ran all the models with 15 runs per model and two different settings: 16 workers, 20s; and 12 workers, 20s.
@lperron Thank you for investigating! What do your measurements tell you?
The second set of parameters is stable and solves the whole set of problems reliably.
@lperron Thank you very much for investigating. Are you going to make changes in CP-SAT to make the behaviour the default, or what are next steps?
No.
Try setting these parameters in your code, and tell me how it performs.
@lperron I'll run the CI on the proposed parameters and get back to you.
@lperron Thanks! Should I read this as "Wait for 9.11, it may solve your issues"?
it could solve your issues, ..., or not :-)
@lperron Ok, let's wait and see :-)
Hi,
Can you try again?
I just ran our benchmarks on main (16 threads, 15s). I have one example
that times out (ntt_dilithium_123_45678_a55_1712651065788) and one that
only finds a feasible solution (slothy_ci_fft_1712650422585).
Thanks
Hi @lperron. Unfortunately, we still observe the performance regression on 9.11, which prevents us from updating; this is increasingly an issue. We will get back to you shortly with the problematic models.
@lperron
I've looked into this again, and something else must be going on. The models generated with v9.7 solve fine with v9.11, but if I generate the model for the same example with v9.11, it's very slow on both v9.7 and v9.11.
I ran the same example as the one Hanno used: python3 example.py --examples ntt_dilithium_45678_a55. For v9.11 it gets stuck at ntt_dilithium_123_45678_a55.layer45678_start.slothy (160 stalls), so that's the model I extract. I've extracted it with v9.7, v9.11, and the current version on the stable branch (8edc858). Happy to try other versions if you send me the commits.
I've attached the 3 models:
model_v9.11.txt <https://github.com/user-attachments/files/17929498/model_v9.11.txt>
model_v9.7.txt <https://github.com/user-attachments/files/17929499/model_v9.7.txt>
model_8edc858e5cbe8902801d846899dc0de9be748b2c.txt <https://github.com/user-attachments/files/17929497/model_8edc858e5cbe8902801d846899dc0de9be748b2c.txt>
Note that the v9.7 one seems to be quite a bit larger than the v9.11 and 8edc858 ones for some reason. Something must have changed in how these models are generated. Do you have any idea what has changed that may cause this? The v9.8 release notes mention "Reduce memory footprint for large model"; maybe it's related to that?
I ran each of the models with {v9.7, v9.11, 8edc858} with a timeout of 60 seconds (I also did some experiments with 300 seconds, and that was not enough either).

```python
import ortools
from ortools.sat.python import cp_model
from google.protobuf import text_format

TIMEOUT = 60
ITERATIONS = 5

def load_and_solve(file, i):
    model = cp_model.CpModel()
    with open(file, "r") as f:
        text_format.Parse(f.read(), model.Proto())
    print(f"[{i}]: Solve using OR-Tools v{ortools.__version__} ... ", end='', flush=True)
    solver = cp_model.CpSolver()
    solver.parameters.max_time_in_seconds = TIMEOUT
    status = solver.Solve(model)
    status_str = solver.StatusName(status)
    print(f"{status_str}, wall time: {solver.WallTime():.4f} s")

def do_it(file):
    print(file)
    for i in range(ITERATIONS):
        load_and_solve(file, i)

do_it("model_v9.7.txt")
do_it("model_v9.11.txt")
do_it("model_8edc858e5cbe8902801d846899dc0de9be748b2c.txt")
```

v9.7:

```
model_v9.7.txt
[0]: Solve using OR-Tools v9.7.2996 ... OPTIMAL, wall time: 20.1114 s
[1]: Solve using OR-Tools v9.7.2996 ... OPTIMAL, wall time: 19.7626 s
[2]: Solve using OR-Tools v9.7.2996 ... OPTIMAL, wall time: 16.9000 s
[3]: Solve using OR-Tools v9.7.2996 ... OPTIMAL, wall time: 21.7891 s
[4]: Solve using OR-Tools v9.7.2996 ... OPTIMAL, wall time: 16.0108 s
model_v9.11.txt
[0]: Solve using OR-Tools v9.7.2996 ... UNKNOWN, wall time: 60.1689 s
[1]: Solve using OR-Tools v9.7.2996 ... UNKNOWN, wall time: 60.1938 s
[2]: Solve using OR-Tools v9.7.2996 ... UNKNOWN, wall time: 60.1931 s
[3]: Solve using OR-Tools v9.7.2996 ... UNKNOWN, wall time: 60.1643 s
[4]: Solve using OR-Tools v9.7.2996 ... UNKNOWN, wall time: 60.1502 s
model_8edc858e5cbe8902801d846899dc0de9be748b2c.txt
[0]: Solve using OR-Tools v9.7.2996 ... UNKNOWN, wall time: 60.1764 s
[1]: Solve using OR-Tools v9.7.2996 ... UNKNOWN, wall time: 60.3204 s
[2]: Solve using OR-Tools v9.7.2996 ... UNKNOWN, wall time: 60.2146 s
[3]: Solve using OR-Tools v9.7.2996 ... UNKNOWN, wall time: 60.1442 s
[4]: Solve using OR-Tools v9.7.2996 ... UNKNOWN, wall time: 60.0729 s
```

v9.11:

```
model_v9.7.txt
[0]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 14.6964 s
[1]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 15.1049 s
[2]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 13.9518 s
[3]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 16.2707 s
[4]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 14.6126 s
model_v9.11.txt
[0]: Solve using OR-Tools v9.11.4210 ... UNKNOWN, wall time: 63.0134 s
[1]: Solve using OR-Tools v9.11.4210 ... UNKNOWN, wall time: 61.6885 s
[2]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 40.2420 s
[3]: Solve using OR-Tools v9.11.4210 ... UNKNOWN, wall time: 62.5499 s
[4]: Solve using OR-Tools v9.11.4210 ... UNKNOWN, wall time: 62.2716 s
model_8edc858e5cbe8902801d846899dc0de9be748b2c.txt
[0]: Solve using OR-Tools v9.11.4210 ... UNKNOWN, wall time: 62.2912 s
[1]: Solve using OR-Tools v9.11.4210 ... UNKNOWN, wall time: 61.6309 s
[2]: Solve using OR-Tools v9.11.4210 ... UNKNOWN, wall time: 62.1115 s
[3]: Solve using OR-Tools v9.11.4210 ... UNKNOWN, wall time: 61.9964 s
[4]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 49.7086 s
```

8edc858:

```
model_v9.7.txt
[0]: Solve using OR-Tools v9.11.4212 ... OPTIMAL, wall time: 11.0053 s
[1]: Solve using OR-Tools v9.11.4212 ... OPTIMAL, wall time: 12.9060 s
[2]: Solve using OR-Tools v9.11.4212 ... OPTIMAL, wall time: 14.0371 s
[3]: Solve using OR-Tools v9.11.4212 ... OPTIMAL, wall time: 11.9474 s
[4]: Solve using OR-Tools v9.11.4212 ... OPTIMAL, wall time: 13.0161 s
model_v9.11.txt
[0]: Solve using OR-Tools v9.11.4212 ... UNKNOWN, wall time: 62.1908 s
[1]: Solve using OR-Tools v9.11.4212 ... UNKNOWN, wall time: 61.5383 s
[2]: Solve using OR-Tools v9.11.4212 ... UNKNOWN, wall time: 61.4522 s
[3]: Solve using OR-Tools v9.11.4212 ... UNKNOWN, wall time: 61.4555 s
[4]: Solve using OR-Tools v9.11.4212 ... OPTIMAL, wall time: 33.2710 s
model_8edc858e5cbe8902801d846899dc0de9be748b2c.txt
[0]: Solve using OR-Tools v9.11.4212 ... UNKNOWN, wall time: 61.4360 s
[1]: Solve using OR-Tools v9.11.4212 ... UNKNOWN, wall time: 61.5685 s
[2]: Solve using OR-Tools v9.11.4212 ... UNKNOWN, wall time: 61.5485 s
[3]: Solve using OR-Tools v9.11.4212 ... UNKNOWN, wall time: 61.4791 s
[4]: Solve using OR-Tools v9.11.4212 ... UNKNOWN, wall time: 61.3822 s
```

I've also noticed that in v9.7, if no objective is passed, it will stop once it finds the first solution, while the newer versions keep searching for more solutions until they hit the timeout. That's not very useful for us, but I can tweak the callback in SLOTHY to just stop once one solution is found.
Are you using a callback? There is a bug in 9.11 with callbacks in non-C++
languages.
They do not stop until the time limit.
stable is 9.11
Please try the v99bugfix branch.
Let's focus on model generation.
Can you take a small sample and generate the 9.7 and 9.11 models for investigation?
Which language are you using to generate the model?
We are using Python. I've attached two simpler models obtained by using python3 example.py --examples fixedpoint_radix4_fft_m55 --log-model:
fixedpoint_radix4_fft_m55_v9.11.txt <https://github.com/user-attachments/files/17931760/fixedpoint_radix4_fft_m55_v9.11.txt>
fixedpoint_radix4_fft_m55_v9.7.txt <https://github.com/user-attachments/files/17931761/fixedpoint_radix4_fft_m55_v9.7.txt>
If I run this with

```python
solver.parameters.search_branching = cp_model.FIXED_SEARCH
solver.parameters.num_workers = 1
```

it's clear that the v9.11 model performs much worse:

```
$ python3 run_model.py
fixedpoint_radix4_fft_m55_v9.7.txt
[0]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 1.8248 s
[1]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 1.7799 s
[2]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 1.7804 s
[3]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 1.7740 s
[4]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 1.7762 s
fixedpoint_radix4_fft_m55_v9.11.txt
[0]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 44.2290 s
[1]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 7.9812 s
[2]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 17.2137 s
[3]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 43.6993 s
[4]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 17.8757 s
```

If I run it without the parameters, it's hard to tell (maybe the example is too small).

```
$ python3 run_model.py
fixedpoint_radix4_fft_m55_v9.7.txt
[0]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 1.8357 s
[1]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 1.8599 s
[2]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 2.6001 s
[3]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 3.6007 s
[4]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 3.3738 s
fixedpoint_radix4_fft_m55_v9.11.txt
[0]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 3.6041 s
[1]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 3.3039 s
[2]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 3.8779 s
[3]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 3.8803 s
[4]: Solve using OR-Tools v9.11.4210 ... OPTIMAL, wall time: 3.8195 s
```
Something is fishy.
In 9.7, creating an interval adds the linear constraint. This is not happening in the v9.7 model.
In the v9.11 model, I see all linears close to the intervals, which should not happen.
Loading the model with linears in 9.7 should work; without them, it should not.
Loading the model with linears in 9.11 should just be slower; without them, it should be fine.
It seems versions are mixed up.
Comment: a lot of intervals have a fixed size; use the new_fixed_size_interval or new_optional_fixed_size_interval methods. If you need the end object, just store start + size. This will speed up everything.
Yes, deliberately mixed up versions. I only ran with v9.11. |
@lperron I don't yet follow. @mkannwischer's numbers from #4166 (comment) show that solving the v9.11-generated model in v9.11 performs much worse than solving the v9.7-generated model for the same underlying example in v9.11. That seems unexpected? |
In the v9_7 model, the hint is complete.
In the v9_11 model, the hint is not complete.
This is why the solve in 9.7 terminates just after presolve.
|
Initial satisfaction model 'model_v9.7': (model_fingerprint:
0xa690468c9f044eee)
#Variables: 20'461
- 18'727 Booleans in [0,1]
- 289 in [-1,938]
- 576 in [0,474]
- 866 in [0,938]
- 3 constants in {-1,464,928}
#kAllDiff: 1
#kBoolOr: 1'211 (#enforced: 1'211) (#literals: 1'211)
#kExactlyOne: 829 (#literals: 9'069)
#kIntProd: 9'081
#kInterval: 18'754 (#enforced: 18'170)
#kLinear1: 10'002 (#enforced: 9'365)
#kLinear2: 8'569 (#enforced: 1'915 #multi: 1'546) (#complex_domain: 12)
#kLinear3: 9'801 (#enforced: 9'513)
#kNoOverlap: 68 (#intervals: 18754, #optional: 18170, #variable_sizes: 9081)
Starting presolve at 0.01s
The solution hint is complete and is feasible.
1.01e-02s 0.00e+00d [DetectDominanceRelations]
2.07e-01s 0.00e+00d [PresolveToFixPoint] #num_loops=19
#num_dual_strengthening=1
7.48e-04s 0.00e+00d [ExtractEncodingFromLinear]
#potential_supersets=244
2.32e-03s 0.00e+00d [DetectDuplicateColumns]
5.63e-03s 0.00e+00d [DetectDuplicateConstraints] #duplicates=9'586
vs
Initial satisfaction model 'model_v9.11': (model_fingerprint:
0xf29d8f33d908b280)
#Variables: 20'461
- 18'727 Booleans in [0,1]
- 289 in [-1,938]
- 576 in [0,474]
- 866 in [0,938]
- 3 constants in {-1,464,928}
#kAllDiff: 1
#kBoolOr: 1'211 (#enforced: 1'211) (#literals: 1'211)
#kExactlyOne: 829 (#literals: 9'069)
#kIntProd: 9'081
#kInterval: 18'754 (#enforced: 18'170)
#kLinear1: 921 (#enforced: 284)
#kLinear2: 7'977 (#enforced: 1'907 #multi: 1'546) (#complex_domain: 12)
#kLinear3: 720 (#enforced: 432)
#kNoOverlap: 68 (#intervals: 18754, #optional: 18170, #variable_sizes: 9081)
Starting presolve at 0.01s
The solution hint is incomplete: 864 out of 20458 non fixed variables
hinted.
|
So the code that generates the hint is broken with later versions of
or-tools.
|
@lperron Thank you for your reply. I'm struggling to reconcile this -- our tool is unchanged, we merely changed OR-Tools underneath. All hints are registered via the Python binding cp_model.AddHint(). We will add logging on our end in the wrapper that calls cp_model.AddHint(), but is there any chance the information is lost at some point between registering the hint and dumping the model? Could it be related to the model size optimizations that happened in v9.8 (which is the first version we saw this performance issue with)? |
I don't think so. add_hint writes to the proto directly.
|
Okay, sorry - this was a stupid mistake. This also voids some previous experiments in this thread, as @hanno-becker was using the same code for exporting models as far as I can tell. fixedpoint_radix4_fft_m55_attempt2_v9.11.txt Solving both with
Solving both with
I think this rules out model generation, but once again confirms that |
I've also extracted the models of the ntt_dilithium_123_45678_a55 example that we looked at before again:
ntt_dilithium_123_45678_a55_v9.11.txt
ntt_dilithium_123_45678_a55_v9.7.txt
Running the v9.7 model with v9.7, this completes within 10 seconds reliably:
ntt_dilithium_123_45678_a55_v9.7.txt
[0]: Solve using OR-Tools v9.7.2996 ... OPTIMAL, wall time: 9.2531 s
[1]: Solve using OR-Tools v9.7.2996 ... OPTIMAL, wall time: 9.2211 s
[2]: Solve using OR-Tools v9.7.2996 ... OPTIMAL, wall time: 9.5007 s
[3]: Solve using OR-Tools v9.7.2996 ... OPTIMAL, wall time: 12.4243 s
[4]: Solve using OR-Tools v9.7.2996 ... OPTIMAL, wall time: 10.9639 s
Running with v9.11 does not find any solution within 600 seconds with either of the two models. I'll let this run for a bit longer - but it's definitely *much* slower. |
Can you try with the v99bugfix branch?
|
Update on v9.11: Even after 1200 seconds I do not get any solutions for ntt_dilithium_123_45678_a55_v9.11.txt.
I've just tried with the v99bugfix branch (commit 78d5ba5), but I'm not seeing any difference:
$ python3 run_model.py
ntt_dilithium_123_45678_a55_v9.11.txt
[0]: Solve using OR-Tools v9.12.4349 ... UNKNOWN, wall time: 59.9998 s
[1]: Solve using OR-Tools v9.12.4349 ... UNKNOWN, wall time: 60.0002 s
[2]: Solve using OR-Tools v9.12.4349 ... UNKNOWN, wall time: 60.0002 s
[3]: Solve using OR-Tools v9.12.4349 ... UNKNOWN, wall time: 60.0006 s
[4]: Solve using OR-Tools v9.12.4349 ... UNKNOWN, wall time: 60.0002 s
|
Initial satisfaction model 'ntt_dilithium_123_45678_a55_v9.11':
(model_fingerprint: 0x33930727b80ac91c)
#Variables: 20'461
- 18'727 Booleans in [0,1]
- 289 in [-1,938]
- 576 in [0,474]
- 866 in [0,938]
- 3 constants in {-1,464,928}
#kAllDiff: 1
#kBoolOr: 1'211 (#enforced: 1'211) (#literals: 1'211)
#kExactlyOne: 829 (#literals: 9'069)
#kIntProd: 9'081
#kInterval: 18'754 (#enforced: 18'170)
#kLinear1: 921 (#enforced: 284)
#kLinear2: 7'977 (#enforced: 1'907 #multi: 1'546) (#complex_domain: 12)
#kLinear3: 720 (#enforced: 432)
#kNoOverlap: 68 (#intervals: 18754, #optional: 18170, #variable_sizes: 9081)
Starting presolve at 0.01s
The solution hint is incomplete: 864 out of 20458 non fixed variables
hinted. <=== not fixed
|
@lperron Sorry, can you elaborate? It is expected that there are no complete hints in the model. |
Then you are just lucky with 9.7 and unlucky with 9.11.
|
Is that all we can say here? This regression has persisted since 9.8 and has blocked our application from migrating. We would greatly appreciate it if you could help us understand the options we can explore here, since you surely know best what changed between 9.7 and 9.8 that could cause this drastic slowdown. This may also be related to the regression reported in #4189. |
I ran all the benchmarks (96) you sent me with 15s and 12 workers.
I prove all of them except 2 (1 feasible, 1 unknown).
I am sorry if this is blocking you. I solve the ntt in ~30s on my machine
with 16 workers.
What more can I say?
|
@lperron Maybe the following helps advance the discussion: We ran the problematic model again, 3x with 9.7 and 3x with 9.11, setting num_workers=16 and this time also gathering logs. Wall times for 9.7 were 12s, 17s, 26s The logs indicate a large difference in the set of employed and successful solvers. Perhaps some solvers in 9.11 need to be disabled / deprioritized for the models we care about? If you had time to take a look, we'd be grateful. log-97-1.txt |
@lperron Thank you very much for your help so far with debugging this! Do you have any other suggestions of what we could try to track down the cause of the 10x slowdown for some of our models (e.g., the ones in Hanno's last comment) ? Happy to try the models with another commit of or-tools in case you think this could be resolved by now in some branch. |
What version of OR-Tools and what language are you using?
Version: v9.7, v9.8, v9.9, v9.10, v9.11
Language: Python
Which solver are you using (e.g. CP-SAT, Routing Solver, GLOP, BOP, Gurobi)
CP-SAT
What operating system (Linux, Windows, ...) and version?
Apple M1 Pro, MacOS Sonoma Version 14.3.1
What did you do?
Updated OR-Tools from v9.7 to v9.8 and v9.9 when used as the backend for the SLOTHY assembly superoptimizer.
What did you expect to see
CP-SAT performance that is similar or better in terms of runtime and consistency.
What did you see instead?
Significant inconsistency in the runtime of CP-SAT.
Steps to reproduce:
Run run_model.py. Here are the outputs on my local machine (see above).
Anything else we should know about your project / environment?
The model logs/ntt_dilithium_45678_a55_model.txt was obtained via python3 example.py --examples ntt_dilithium_45678_a55 --log-model, based off the SLOTHY main branch. If you need any more information, please let me know.