textbook sos example that worked on v0.7.1 and fails on v0.9.0 #119
Comments
Hmm, I get different termination statuses on 0.7.1 depending on the chosen BLAS for the PSD constraints. The big difference for me is that the objective value on 0.9 becomes NaN (for both BLAS backends), whereas on 0.7.1 the objective value is an actual number (for both). Do you by any chance use the objective in the loop?

Also, just because I am curious, what is the `NonnegativeConeT(0)` doing here?
This might actually be related to #120. Using the infeasibility checks from the preprint (it seems they were not implemented in the code), this problem seems to be consistently terminated as
In oxfordcontrol/Clarabel.rs#119, they asked about why we declare a `NonnegativeConeT(0)`. I guess it's not wrong, but it's certainly inelegant. This PR avoids writing the zero-dimensional cones.
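A minimal sketch of the parser-side fix described above. This is not the actual Drake parser code; cone specifications are modeled as hypothetical `(name, dimension)` tuples purely for illustration. The point is that a `NonnegativeConeT(0)` contributes no rows to `A` or `b`, so it can be dropped without changing the problem.

```python
def drop_empty_cones(cones):
    """Return the cone list with zero-dimensional cones removed.

    A zero-dimensional cone contributes no rows to A or b, so removing
    it leaves the conic problem unchanged.
    """
    return [(name, dim) for (name, dim) in cones if dim > 0]


cones = [("ZeroConeT", 3), ("NonnegativeConeT", 0), ("PSDTriangleConeT", 6)]
print(drop_empty_cones(cones))
# [('ZeroConeT', 3), ('PSDTriangleConeT', 6)]
```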
Thanks for pointing that out. It really shouldn't be there. I've opened RobotLocomotion/drake#21646 to update our parser.
I'm not sure what to do with this issue. For the current development branch I get inconsistent results, namely:
It looks as if in all cases the solver progresses identically for about 10 iterations, and then things diverge. All of the versions should be the same in principle, but the implementation of some very low-level operations is different (i.e. better) in Julia and faer than in our native implementation, e.g. because of better use of SIMD instructions, pairwise summation for dot products, etc. That doesn't explain the Python behaviour though. The Julia/Python wrappers for the Rust implementation shouldn't actually do anything to the data other than pass it through to the solver, and I am using the same version of the Rust code for both wrappers. I will need to look into this a bit more.
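To illustrate the kind of low-level difference mentioned above (this is not Clarabel's actual code): sequential and pairwise summation of the same floating-point data can round differently, and a discrepancy of even one ulp in a dot product is enough to make two otherwise-identical interior-point iterations drift apart after enough iterations.

```python
import math


def sequential_sum(xs):
    """Naive left-to-right accumulation: rounding error grows like O(n)."""
    total = 0.0
    for x in xs:
        total += x
    return total


def pairwise_sum(xs):
    """Recursive halving: rounding error grows like O(log n)."""
    n = len(xs)
    if n <= 2:
        return sequential_sum(xs)
    mid = n // 2
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])


# 0.1 is not exactly representable in binary, so repeated addition drifts.
xs = [0.1] * (2 ** 16)
exact = math.fsum(xs)  # correctly rounded reference sum
print(sequential_sum(xs) - exact, pairwise_sum(xs) - exact)
```

Running this shows the pairwise order lands much closer to the correctly rounded sum than the sequential order, which is exactly the sort of "same algorithm, different last bits" behaviour that can separate the native, Julia, and faer code paths.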
The underlying problem in the example appears to be a single very badly scaled row in A, a constraint like

The problem also solves cleanly if you disable equilibration, I think because in that case the LHS of the constraint never gets big enough to register as a constraint violation. If equilibration is enabled (i.e. the default behaviour), then the solver tries to scale the row by

Sort of related, but we had a discussion at some point about whether such a constraint should be eliminated by a presolve step. I think in the end we concluded that it should not be eliminated. Maybe it's worth looking for cases like this in the presolve and at least normalizing them somehow to have unit LHS terms.
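A hedged sketch of the normalization idea floated at the end of that comment: rescale each constraint row of `(A, b)` so its largest coefficient has unit magnitude. Dividing a row `a_i^T x (=, <=) b_i` by a positive scalar leaves its solution set unchanged, and it removes the extreme row scaling before equilibration ever sees it. Plain nested lists are used here for illustration; this is not Clarabel's presolve.

```python
def normalize_rows(A, b, tol=0.0):
    """Return (A', b') with each row of A and entry of b divided by
    the row's largest absolute coefficient (rows at or below tol are
    left untouched)."""
    A_out, b_out = [], []
    for row, rhs in zip(A, b):
        scale = max((abs(v) for v in row), default=0.0)
        if scale > tol:
            row = [v / scale for v in row]
            rhs = rhs / scale
        A_out.append(row)
        b_out.append(rhs)
    return A_out, b_out


# A row with huge coefficients gets brought down to unit size;
# the already well-scaled row is effectively unchanged.
A = [[1e9, 2e9, 0.0], [1.0, 0.0, 1.0]]
b = [3e9, 1.0]
A2, b2 = normalize_rows(A, b)
print(A2[0], b2[0])  # [0.5, 1.0, 0.0] 1.5
```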
Two fixes relevant to this issue, though neither solves the problem of a very badly scaled equality constraint.
So in summary, you may still not get the right answer for this problem, but you will now consistently get the same wrong answer.
We recently upgraded Drake from v0.7.1 to v0.9.0, and I experienced a regression in one of my "simple" examples using sums-of-squares for Lyapunov analysis. Mosek, SCS, and Clarabel v0.7.1 solved it fine.
For context, you can find the original notebook here. Unfortunately, the failure happens inside some alternations... specifically in the `OptimizeLyapunov` method at the bottom of the notebook. If `OptimizeMultipliers` uses Mosek, and only `OptimizeLyapunov` uses Clarabel, then everything works.

Nevertheless, here is the Clarabel-only reproduction of the final solve, which now results in `Terminated with status = InsufficientProgress`.

I know at least one relatively simple thing I can do to improve the numerics for this problem, but thought I should report the regression before I do.