[WIP] Add the horizontal elastic timetable overload checking rule to the cumulative constraint. #4401
Conversation
Thanks a lot for the PR!
I had a quick look at the CL and added some comments. I'm a new member of the team, so don't be surprised if I misunderstood something during the review.
}
profile_events_.emplace_back(-1, kMaxIntegerValue, IntegerValue(0),
                             ProfileEventType::kSentinel);
IncrementalSort(profile_events_.begin(), profile_events_.end());
I think that if you are not updating profile_events_ incrementally, there is no point in sorting it with this method. I think a plain std::sort would be cleaner.
You're right; I changed the code to truly build the profile incrementally by updating the events rather than rebuilding it from scratch each time.
I don't think this brings much benefit yet, but it opens the door for further incremental optimization using reversibles, similarly to what is done in the time-table propagator.
const IntegerValue capacity = integer_trail_->UpperBound(capacity_);
if (window_end < ScheduleTasks(window_end, capacity)) {
  AddScheduleTaskReason(window_end);
  return helper_->ReportConflict();
If end_min is lower than the result of ScheduleTasks, can't you push a tighter end_min?
Good question. I guess we could use ScheduleTasks(window_end, capacity) - 1, but I would have to think a little more about it.
That being said, I don't think this propagator as it stands (compared to its extensions that update the start/end vars) is worth adding to OR-tools. I'd rather optimize the explanations of those versions.
case TaskType::kFull:
  switch (event.event_type) {
    case ProfileEventType::kStartMin:
      delta_max += event.height;
nit: I think changing demand_req and demand_max directly without the deltas might be simpler to understand.
Agreed, done 👍
DCHECK_LT(next_event, profile_events_.size());

IntegerValue next_time = profile_events_[next_event].time;
const IntegerValue length = next_time - time;
nit: please move this definition closer to where it is used.
Given that length is only used once, I've inlined it directly in the overload adjustment. IMO this does not hurt the code's readability as the computation is small and pretty clear.
// previous time points.
const IntegerValue overload_delta = demand_req - true_capa;

if (overload_delta > 0) {  // adding overload
nit: I think this logic would be easier to read if the variable naming made clear what the overload/removable/used quantities are for a time unit and what their equivalents are for the time window.
I wasn't super happy with these names either; what do you think about the new ones? I've also slightly simplified the logic as there's no need for the std::min call.
Hi @vitor1001, thank you for the review and the comments! I tried to address most of them, PTAL. I'm planning to keep working on this PR and update the propagator to actually filter the start/end variables. However, I would need a way to evaluate whether the code is actually driving improvements (in terms of #nodes first, then in terms of speed). Is there a C++ example I could use to compare the performance of different versions of the cumulative on a benchmark that matters to the team, by any chance?
Here is a set of CP-SAT protobufs for RCPSP models (single and multi mode).
This is a very clean implementation, simpler than the paper!
I have some questions, since you are now the expert :)
max_freeable_overload;
overload = 0;
}
}
I wonder if these formulas from the paper could be simplified much further.
I'm not sure whether the following approach would be equivalent. In each subwindow [time, next_time]:
- the energy that can be used by tasks is (next_time - time) * true_capa.
- the energy required by fixed tasks is (next_time - time) * demand_req.
Then the energy required by full tasks over all subwindows is just the sum of energies of full tasks.
If energy_required is strictly larger than energy_usable, then the set of full/fixed tasks associated with window_end is not feasible.
WDYT? The paper looks like it is scheduling each time point by hand; does this approach (sketched below) miss something?
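As a rough, self-contained illustration of that energy argument (the types SubWindow and FullTask and the function EnergyOverloaded are invented for this sketch and are not the PR's code, which works on ProfileEvent / IntegerValue):

```cpp
#include <cstdint>
#include <vector>

// Illustration only: energy-based feasibility check over the sub-windows
// before window_end, as described in the comment above.
struct SubWindow {
  int64_t length;        // next_time - time.
  int64_t capacity;      // true_capa available in this sub-window.
  int64_t fixed_demand;  // demand_req of fixed tasks in this sub-window.
};
struct FullTask {
  int64_t size;    // Duration.
  int64_t demand;  // Resource consumption.
};

// Returns true if the total energy required by fixed + full tasks exceeds the
// energy available before window_end, i.e. the window is infeasible.
bool EnergyOverloaded(const std::vector<SubWindow>& windows,
                      const std::vector<FullTask>& full_tasks) {
  int64_t usable = 0;
  int64_t required = 0;
  for (const SubWindow& w : windows) {
    usable += w.length * w.capacity;        // (next_time - time) * true_capa.
    required += w.length * w.fixed_demand;  // (next_time - time) * demand_req.
  }
  // Full tasks must end before window_end, so their whole energy counts
  // regardless of where they are placed inside the window.
  for (const FullTask& t : full_tasks) required += t.size * t.demand;
  return required > usable;
}
```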
// TODO: There's an opportunity to further generalize the reason if
// demand_max and overload are set to 0 before the end of the window.
// This can happen if the resource consumption has "humps", though it
// is unclear whether this pattern is likely in practice or not.
I think the strongest opportunity would be to reduce the window.
It looks like this always considers all tasks in (-inf, window_end).
If a window [start_min_i, end_max_j) becomes overloaded, there should be some gain in including only those tasks in the window.
With the approach I described above, this would require starting from the last event at window_end and sweeping events towards the start (a sketch follows the list):
- initialize demand_req with the correct value, set demand_max = demand_req.
- on start_min event, add energy of full tasks to energy_required
- extend energy_required/usable as above
- whenever energy_required > energy_usable, the checker fails immediately
- explain only with full/fixed tasks inside W = [event_time, window_end)
- (I'm not sure) Explanation of full tasks' start_min/end_max can be extended to W.
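Here is a self-contained sketch of that backward sweep, under the same caveat: event and field names are invented for the illustration and do not match the PR's ProfileEvent / IntegerValue code.

```cpp
#include <cstdint>
#include <vector>

// Illustration only: sweep from window_end towards the start and stop as soon
// as a suffix window [time, window_end) is overloaded.
struct SuffixEvent {
  int64_t time;              // Events are sorted by decreasing time.
  int64_t capacity;          // true_capa on [time, previous event time).
  int64_t fixed_demand;      // demand_req on that same sub-window.
  int64_t full_task_energy;  // Energy of full tasks whose start_min is `time`.
};

// Returns the start of the first overloaded suffix window [time, window_end),
// or -1 if no suffix is overloaded. An explanation would then only involve
// full/fixed tasks inside that suffix window.
int64_t FindOverloadedSuffix(const std::vector<SuffixEvent>& events,
                             int64_t window_end) {
  int64_t usable = 0;
  int64_t required = 0;
  int64_t previous_time = window_end;
  for (const SuffixEvent& e : events) {  // Decreasing time order.
    const int64_t length = previous_time - e.time;
    usable += length * e.capacity;
    required += length * e.fixed_demand;
    // Full tasks become relevant once the window reaches their start_min.
    required += e.full_task_energy;
    if (required > usable) return e.time;
    previous_time = e.time;
  }
  return -1;
}
```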
This PR adds an implementation of the Horizontal Elastic TimeTable Overload Checker presented in Kameugne et al. 2024 to the Cumulative Constraint.
The propagator is not activated by default and can be enabled by setting SatParameters.use_horizontal_overload_checking_in_cumulative to true.
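A minimal usage sketch, assuming the PR keeps this parameter name (the generated setter and the helper function below are illustrative; the parameter exists only in this PR, not in released OR-Tools):

```cpp
#include "ortools/sat/cp_model.h"
#include "ortools/sat/cp_model_solver.h"
#include "ortools/sat/sat_parameters.pb.h"

namespace operations_research::sat {

// Solves the given model with the PR's horizontal overload checking enabled.
CpSolverResponse SolveWithHorizontalOverloadChecking(
    const CpModelBuilder& model) {
  SatParameters params;
  // Parameter added by this PR; off by default.
  params.set_use_horizontal_overload_checking_in_cumulative(true);
  return SolveWithParameters(model.Build(), params);
}

}  // namespace operations_research::sat
```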
For Reviewers
The PR is not ready for merge. Rather, it is meant as a "snapshot" to start discussing the implementation and whether the propagator is a net-positive for OR-Tools.
Ultimately, I plan to keep working on this PR to implement the "IQHE Edge-finder" rule (from the same paper), which dominates the TimeTable Edge-finder filtering rule in terms of propagation. That being said, it is not clear whether this will be a net-positive in an LCG solver, because the explanations tend to be big and not easy to generalize.