GLPK errors on JuMP-HiGHS MPS benchmarks #68

Open
siddharth-krishna opened this issue Dec 2, 2024 · 1 comment

@siddharth-krishna (Contributor)

@jacek-oet @danielelerede-oet I notice in the results on #64 (commit 59f1e54) that GLPK errors on all the newly added MPS benchmarks, so I ran one manually to see the problem:

glpsol --mps runner/benchmarks/genx-7_three_zones_w_colocated_VRE_storage-3-24h.mps 
GLPSOL--GLPK LP/MIP Solver 5.0
Parameter(s) specified in the command line:
 --mps runner/benchmarks/genx-7_three_zones_w_colocated_VRE_storage-3-24h.mps
Reading problem data from 'runner/benchmarks/genx-7_three_zones_w_colocated_VRE_storage-3-24h.mps'...
runner/benchmarks/genx-7_three_zones_w_colocated_VRE_storage-3-24h.mps:1: warning: missing model name in field 3
Objective: Obj
runner/benchmarks/genx-7_three_zones_w_colocated_VRE_storage-3-24h.mps:19573: in fixed MPS format positions 37-39 must be blank
MPS file processing error

I can't see anything obviously wrong in the MPS file on that line. Googling for the error message leads to this issue:
Pyomo/pyomo#2115

But I can't see any difference between the two MPS files in that issue's OP; it looks like there are only whitespace differences. Any ideas what the problem could be? (cc @FabianHofmann in case you've seen this error before)

Another problem is that currently we are recording such errors like so:

Benchmark,Size,Solver,Solver Version,Solver Release Year,Status,Termination Condition,Runtime (s),Memory Usage (MB),Objective Value,Max Integrality Violation,Duality Gap
genx-1_three_zones,3-1h,glpk,5.0,2020,warning,unknown,0.3038334846496582,170.26,nan,,

But this means the failed run is counted as a 0.30 s runtime for GLPK, which skews the SGM table on the Home dashboard and makes GLPK look like the fastest solver! What's the correct thing to do here: exclude warning rows before calculating the SGM? Or change run_solver.py to put the timeout value in the runtime column when the status is warning and include it in the SGM calculation (which is what we do for timeouts)?
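
For reference, here is a minimal sketch of the two options, assuming the results are loaded into pandas from a CSV with the columns shown above. The `results.csv` path, the `Status == "ok"` convention for successful runs, the 3600 s timeout, and the 10 s shift are all illustrative assumptions, not values from our runner.

```python
import numpy as np
import pandas as pd

TIMEOUT_S = 3600  # assumed benchmark timeout
SHIFT_S = 10.0    # assumed shift for the shifted geometric mean


def shifted_geometric_mean(runtimes, shift=SHIFT_S):
    """Shifted geometric mean: exp(mean(log(t + shift))) - shift."""
    t = np.asarray(runtimes, dtype=float)
    return float(np.exp(np.log(t + shift).mean()) - shift)


results = pd.read_csv("results.csv")  # hypothetical path to the CSV above

# Option 1: drop warning/error rows before computing the SGM.
ok_rows = results[results["Status"] == "ok"]

# Option 2: keep them, but penalize them with the timeout (as done for timeouts).
penalized = results.copy()
failed = penalized["Status"] != "ok"
penalized.loc[failed, "Runtime (s)"] = TIMEOUT_S

for label, frame in [("exclude non-ok", ok_rows), ("penalize non-ok", penalized)]:
    sgm = frame.groupby("Solver")["Runtime (s)"].apply(shifted_geometric_mean)
    print(label, sgm.to_dict())
```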

siddharth-krishna added a commit that referenced this issue Jan 8, 2025
The errors we were seeing with GLPK in #68 were because linopy by default
calls `glpsol` with the `--mps` flag, which selects the fixed MPS format,
while the JuMP-HiGHS benchmarks use the free MPS format (`--freemps`).
So I re-ran the GLPK runs after hacking linopy to use `--freemps`:
```diff
diff --git a/linopy/solvers.py b/linopy/solvers.py
index e89a6a9..2c996e3 100644
--- a/linopy/solvers.py
+++ b/linopy/solvers.py
@@ -563,7 +563,8 @@ class GLPK(Solver):
         Path(solution_fn).parent.mkdir(exist_ok=True)
 
         # TODO use --nopresol argument for non-optimal solution output
-        command = f"glpsol --{io_api} {problem_fn} --output {solution_fn} "
+        io_api_arg = "freemps" if io_api == "mps" else io_api
+        command = f"glpsol --{io_api_arg} {problem_fn} --output {solution_fn} "
         if log_fn is not None:
             command += f"--log {log_fn} "
         if warmstart_fn:
```
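
To sanity-check a single benchmark file without patching linopy, one can also call `glpsol` directly with the free-MPS reader. This is just a sketch; the paths are illustrative and it assumes `glpsol` is on the PATH.

```python
import subprocess

# Illustrative paths: any of the JuMP-HiGHS MPS benchmarks should do.
problem_fn = "runner/benchmarks/genx-7_three_zones_w_colocated_VRE_storage-3-24h.mps"
solution_fn = "solution.sol"

# --freemps selects GLPK's free (whitespace-delimited) MPS reader, whereas
# --mps expects the fixed, column-position-sensitive format that triggers the
# "positions 37-39 must be blank" error above.
subprocess.run(
    ["glpsol", "--freemps", problem_fn, "--output", solution_fn],
    check=True,
)
```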

This PR has the new results, and I'll open a linopy PR to fix the issue
in a more robust way in the new year. (FYI @FabianHofmann)

There are still a few benchmarks returning `unknown` statuses; this
needs to be looked into.
@siddharth-krishna (Contributor, Author)

Update: for now, we record the full timeout value as the runtime and the maximum memory value as the memory usage for non-ok rows, and we continue to include them in the SGM calculation.

The GLPK errors were due to an incorrect solver option being passed by linopy; this was fixed in a hacky way in #86. I've opened a linopy issue to track a more robust solution: PyPSA/linopy#402

Once that is fixed, we should modify our benchmark runner to use the fixed linopy. Then we can close this issue.

siddharth-krishna self-assigned this Jan 8, 2025