Various Bugfixes #146

Merged · 5 commits · Nov 15, 2023
algobattle/battle.py (22 changes: 12 additions & 10 deletions)

```diff
@@ -429,39 +429,41 @@ async def run_battle(self, fight: FightHandler, config: Config, min_size: int, u
         """
 
         def sizes(size: int, max_size: int) -> Iterable[int]:
             if size > max_size:
                 return
             counter = count(1)
             size = max(size, min_size)
             while size < max_size:
                 yield size
                 size += next(counter) ** config.exponent
             yield max_size
 
         note = "Starting battle..."
-        for _ in range(config.rounds):
-            max_size = config.maximum_size
+        for _i in range(config.rounds):
+            lower_bound = min_size
+            upper_bound = config.maximum_size
             self.results.append(0)
             gen_errors = 0
-            while self.results[-1] < max_size:
-                for size in sizes(self.results[-1] + 1, max_size):
-                    ui.update_battle_data(self.UiData(reached=self.results, cap=max_size, note=note))
+            while lower_bound <= upper_bound:
+                lower_bound = max(lower_bound, self.results[-1] + 1)
+                for size in sizes(lower_bound, upper_bound):
+                    ui.update_battle_data(self.UiData(reached=self.results, cap=upper_bound, note=note))
                     result = await fight.run(size)
                     if result.generator.error and config.max_generator_errors != "unlimited":
                         gen_errors += 1
                         if gen_errors >= config.max_generator_errors:
-                            self.results[-1] = max_size
+                            self.results[-1] = upper_bound
                             note = f"Generator failed {gen_errors} times in a row, solver wins round by default!"
                             break
                     else:
                         gen_errors = 0
                     if result.score < config.minimum_score:
-                        max_size = size - 1
+                        upper_bound = size - 1
                         note = "Solver didn't achieve the needed score, resetting the cap"
                         break
                     else:
                         note = "Solver was successful, increasing the cap"
                         self.results[-1] = size
                 else:
-                    note = "Cap reached, resetting instance size"
+                    note = "Cap reached, resetting instance size"
 
     def score(self) -> float:
         """Averages the highest instance size reached in each round."""
```
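The rewritten loop still drives its probing through the `sizes` step generator quoted in the diff. A standalone sketch (with `min_size` and `config.exponent` turned into plain parameters rather than config attributes) shows how the probed sizes accelerate toward the cap:

```python
from itertools import count
from typing import Iterable


def sizes(size: int, max_size: int, min_size: int = 1, exponent: int = 2) -> Iterable[int]:
    # Standalone version of the generator from run_battle: min_size and
    # exponent are plain parameters here instead of config attributes.
    if size > max_size:
        return
    counter = count(1)
    size = max(size, min_size)
    while size < max_size:
        yield size
        # Step widths grow as 1**exponent, 2**exponent, 3**exponent, ...
        size += next(counter) ** exponent
    yield max_size


print(list(sizes(1, 50)))  # [1, 2, 6, 15, 31, 50]
```

Because the steps widen quadratically by default, the generator reaches a large cap in few fights, while the new `lower_bound`/`upper_bound` bookkeeping ensures the round terminates once the bounds cross.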
algobattle/match.py (3 changes: 2 additions & 1 deletion)

```diff
@@ -15,6 +15,7 @@
     ConfigDict,
     Field,
     GetCoreSchemaHandler,
+    SerializeAsAny,
     ValidationInfo,
     field_validator,
     model_serializer,
@@ -67,7 +68,7 @@ class Match(BaseModel):
 
     active_teams: list[str] = field(default_factory=list)
     excluded_teams: dict[str, ExceptionInfo] = field(default_factory=dict)
-    battles: dict[MatchupStr, Battle] = Field(default_factory=dict)
+    battles: dict[MatchupStr, SerializeAsAny[Battle]] = Field(default_factory=dict)
```
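The `SerializeAsAny` change matters because pydantic v2 serializes a field according to its declared type, so fields that only exist on `Battle` subclasses would otherwise be silently dropped when a `Match` is dumped. A minimal sketch with stand-in models (`Iterated`, `MatchNarrow`, and `MatchAny` are illustrative names, not Algobattle's real classes):

```python
from pydantic import BaseModel, SerializeAsAny


class Battle(BaseModel):
    rounds: int = 1


class Iterated(Battle):
    # Extra field that only exists on the subclass.
    results: list[int] = []


class MatchNarrow(BaseModel):
    # Serialized strictly as the declared Battle type: subclass fields are lost.
    battles: dict[str, Battle] = {}


class MatchAny(BaseModel):
    # SerializeAsAny keeps the runtime (subclass) fields when dumping.
    battles: dict[str, SerializeAsAny[Battle]] = {}


battle = Iterated(results=[5, 7])
print(MatchNarrow(battles={"a vs b": battle}).model_dump())
# → {'battles': {'a vs b': {'rounds': 1}}}  (results dropped)
print(MatchAny(battles={"a vs b": battle}).model_dump())
# → {'battles': {'a vs b': {'rounds': 1, 'results': [5, 7]}}}
```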
algobattle/types.py (6 changes: 3 additions & 3 deletions)

```diff
@@ -457,12 +457,12 @@ class EdgeLen:
 
     @staticmethod
     def _func(v: Any, edges: list[tuple[int, int]]) -> Any:
-        """Validates that the collection has length `instance.size`."""
+        """Validates that the collection has the same length as `instance.edges`."""
         if len(v) != len(edges):
-            raise ValueError("Value does not have length `instance.size`")
+            raise ValueError("Value does not have the same length as `instance.edges`")
         return v
 
-    _validator = AttributeReferenceValidator(_func, InstanceRef.size)
+    _validator = AttributeReferenceValidator(_func, InstanceRef.edges)
 
     @classmethod
     def __get_pydantic_core_schema__(cls, source_type: Any, handler: GetCoreSchemaHandler) -> CoreSchema:
```
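The corrected check can be illustrated on its own; `edge_len_check` below is a hypothetical free-function version of `EdgeLen._func` without the pydantic plumbing, where in Algobattle the edge list would be looked up through `InstanceRef.edges`:

```python
from typing import Any


def edge_len_check(v: Any, edges: list[tuple[int, int]]) -> Any:
    # Mirrors the corrected EdgeLen._func: compare against the instance's
    # edge list, not instance.size.
    if len(v) != len(edges):
        raise ValueError("Value does not have the same length as `instance.edges`")
    return v


edges = [(0, 1), (1, 2), (2, 0)]
print(edge_len_check([10, 20, 30], edges))  # [10, 20, 30]
```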
docs/advanced/config.md (41 changes: 41 additions & 0 deletions)

```diff
@@ -260,3 +260,44 @@ one called `algobattle.toml`. Defaults to the current working directory.
 ### config
 
 Opens the CLI config file. Accepts no arguments.
+
+### package problem
+
+Packages the problem in the project folder into a `.algo` file.
+
+`project`
+: Path to the Algobattle project containing the problem. Can either point directly to a project config file, or to
+a folder containing one called `algobattle.toml`. Defaults to the current working directory.
+
+`--description`
+: Path to a file containing a human-readable description of the problem. Defaults to one called `description` (with any
+extension) in the project's directory.
+
+`--out` / `-o`
+: Location where the packaged `.algo` file will be placed. Defaults to a file named after the problem in the project's
+directory.
+
+### package programs
+
+Packages the programs of a particular team into `.prog` files. These files can be used to easily share programs, or
+upload them to the Algobattle website.
+
+!!! tip "Keep program sizes down"
+    Algobattle will package everything in the program directories into a zip file. This may include unnecessary build
+    artefacts, logs, program output, etc. It's best to remove any superfluous files (in particular, anything in your
+    `.gitignore`) from the directories before running this command.
+
+`project`
+: Path to the Algobattle project containing the programs. Can either point directly to a project config file, or to
+a folder containing one called `algobattle.toml`. Defaults to the current working directory.
+
+`--team`
+: Name of the team whose programs should be packaged. If there is only one team in this project, it will be selected
+by default.
+
+`--generator` and `--solver`
+: Whether to package this particular program. Defaults to `#!py True`.
+
+`--test` / `--no-test`
+: Whether to test the programs before packaging them to make sure that they are building and running correctly.
+Defaults to `#!py True`.
```
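A quick usage sketch of the two documented commands (the output path and team name here are made up for illustration):

```
# package the problem into dist/<problem name>.algo
algobattle package problem --out dist/

# package one team's programs, skipping the pre-packaging test run
algobattle package programs --team "Red Pandas" --no-test
```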
docs/tutorial/programs.md (21 changes: 16 additions & 5 deletions)

````diff
@@ -301,12 +301,23 @@ This time it should run without any errors. If that doesn't work for you, there'
 ## Packaging the Programs
 
 You may want to share your code with e.g. your lab instructors. The best way to do that is to package them into Algobattle
-program files. These are files using the `.prob` file extension that are formatted in such a way that Algobattle recognises
-them and can use them to run matches.
+program files. We do this by running
+
+```
+algobattle package programs
+```
+
+!!! note "Using the web framework"
+    If your lab is using the web framework, these files are what you need to upload to have your programs run in the
+    matches.
+
+!!! tip "Keep program sizes down"
+    Algobattle will package everything in the program directories. This may include unnecessary build artefacts, logs,
+    program output, etc. It's best to remove any superfluous files (in particular, anything in your `.gitignore`)
+    from the directories before running this command.
-
-!!! tip "A peek behind the curtain"
-    These files again are just `zip` files containing everything in your programs' folders in a specific format.
-    It's best to remove any unnessesary files from them before packaging to keep file sizes down.
-
-!!! note "Using the web framework"
-    If your lab is using the web framework, these files are what you need to upload to have your programs run in the matches.
+This will create two `.prog` files that contain all the data Algobattle needs to run our programs. We can then easily
+share our code using just these files, or upload them to the Algobattle website.
````