Merge pull request #88 from hadar-simulator/release/v0.4.0
Release/v0.4.0
FrancoisJ authored Aug 26, 2020
2 parents 56c7581 + 8053678 commit ed2ec55
Showing 35 changed files with 2,863 additions and 812 deletions.
182 changes: 104 additions & 78 deletions docs/source/architecture/analyzer.rst
@@ -1,13 +1,13 @@
Analyzer
========

To stay at a high level of abstraction and remain technology-agnostic, Hadar uses objects as glue for the optimizer. Objects are convenient, but they are too cumbersome to manipulate for data analysis. The analyzer contains tools to help analyze the study and its result.

Today, there is only :code:`ResultAnalyzer`, with two feature levels:

* **high level**: the user directly asks for aggregates such as the global cost or the global remaining capacity.

* **low level**: the user builds a query and gets the *raw* data as a pandas DataFrame.

Before describing these features, let's see how data are transformed.

@@ -16,26 +16,26 @@ Flatten Data

As said above, objects are a nice way to encapsulate data in an agnostic form. They can be serialized to JSON or another format to be consumed by other software, possibly written in another language. But keeping objects for data analysis is painful.

Python has a very efficient tool for data analysis: pandas. The challenge is therefore to transform objects into a pandas DataFrame. The solution is to flatten the data to fill a table.

Consumption
***********

Take consumption as an example. The data inside :code:`Study` are the cost and the asked quantity, while :code:`Result` holds the (same) cost and the given quantity. This tuple *(cost, asked, given)* exists for each node, each consumption attached to that node, each scenario and each timestep. Flattening these data means filling the following table:

+------+------+------+------+------+------+------+------------+
| cost | asked| given| node | name | scn | t | network |
+------+------+------+------+------+------+------+------------+
| 10 | 5 | 5 | fr | load | 0 | 0 | default |
+------+------+------+------+------+------+------+------------+
| 10 | 7 | 7 | fr | load | 0 | 1 | default |
+------+------+------+------+------+------+------+------------+
| 10 | 7 | 5 | fr | load | 1 | 0 | default |
+------+------+------+------+------+------+------+------------+
| 10 | 6 | 6 | fr | load | 1 | 1 | default |
+------+------+------+------+------+------+------+------------+
| ... | ... | ... | ... | ... | .. | ... | ... |
+------+------+------+------+------+------+------+------------+

It is the purpose of :code:`_build_consumption(study: Study, result: Result) -> pd.DataFrame` to build this table.
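As a rough illustration of this flattening (a standalone sketch with made-up data, not Hadar's actual :code:`_build_consumption` implementation), nested per-scenario/per-timestep values can be unrolled into one row per *(scn, t)* pair:

```python
import pandas as pd

# Made-up stand-in for the consumption data held by Study/Result:
# 'asked' and 'given' have shape (nb_scn, horizon).
consumptions = [
    {"node": "fr", "name": "load", "cost": 10,
     "asked": [[5, 7], [7, 6]],
     "given": [[5, 7], [5, 6]]},
]

rows = []
for c in consumptions:
    for scn, asked_row in enumerate(c["asked"]):
        for t, asked in enumerate(asked_row):
            # one flat row per (scenario, timestep) pair
            rows.append({"cost": c["cost"], "asked": asked,
                         "given": c["given"][scn][t],
                         "node": c["node"], "name": c["name"],
                         "scn": scn, "t": t, "network": "default"})

df = pd.DataFrame(rows)
print(df.to_string(index=False))
```

The resulting DataFrame reproduces the table above, one row per scenario/timestep combination.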

@@ -44,45 +44,107 @@ Production

Productions follow the same pattern. However, they don't have *asked* and *given* quantities but *available* and *used* ones. The table therefore looks like:

+------+------+------+------+------+------+------+------------+
| cost | avail| used | node | name | scn | t | network |
+------+------+------+------+------+------+------+------------+
| 10 | 100 | 21 | fr | coal | 0 | 0 | default |
+------+------+------+------+------+------+------+------------+
| 10 | 100 | 36 | fr | coal | 0 | 1 | default |
+------+------+------+------+------+------+------+------------+
| 10 | 100 | 12 | fr | coal | 1 | 0 | default |
+------+------+------+------+------+------+------+------------+
| 10 | 100 | 81 | fr | coal | 1 | 1 | default |
+------+------+------+------+------+------+------+------------+
| ... | ... | ... | ... | ... | .. | ... | ... |
+------+------+------+------+------+------+------+------------+

It's done by the :code:`_build_production(study: Study, result: Result) -> pd.DataFrame` method.


Storage
*******

Storages follow the same pattern. The table looks like:

+-------------+----------+-------------+---------+--------------+----------+------+---------------+-----+------+------+------+------+------------+
|max_capacity | capacity | max_flow_in | flow_in | max_flow_out | flow_out | cost | init_capacity | eff | node | name | scn | t | network |
+-------------+----------+-------------+---------+--------------+----------+------+---------------+-----+------+------+------+------+------------+
| 12000 | 678 | 400 | 214 | 400 | 0 | 10 | 0 | .99 | fr | cell | 0 | 0 | default |
+-------------+----------+-------------+---------+--------------+----------+------+---------------+-----+------+------+------+------+------------+
| 12000 | 892 | 400 | 53 | 400 | 0 | 10 | 0 | .99 | fr | cell | 0 | 1 | default |
+-------------+----------+-------------+---------+--------------+----------+------+---------------+-----+------+------+------+------+------------+
| 12000 | 945 | 400 | 0 | 400 | 87 | 10 | 0 | .99 | fr | cell | 1 | 0 | default |
+-------------+----------+-------------+---------+--------------+----------+------+---------------+-----+------+------+------+------+------------+
| 12000 | 853 | 400 | 0 | 400 | 0 | 10 | 0 | .99 | fr | cell | 1 | 1 | default |
+-------------+----------+-------------+---------+--------------+----------+------+---------------+-----+------+------+------+------+------------+
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | .. | ... | ... |
+-------------+----------+-------------+---------+--------------+----------+------+---------------+-----+------+------+------+------+------------+


It's done by the :code:`_build_storage(study: Study, result: Result) -> pd.DataFrame` method.


Link
****

Links follow the same pattern, but the hierarchical naming changes: instead of *node* and *name* there are *source* and *destination*. The table looks like:

+------+------+------+------+------+------+------+------------+
| cost | avail| used | src | dest | scn | t | network |
+------+------+------+------+------+------+------+------------+
| 10 | 100 | 21 | fr | uk | 0 | 0 | default |
+------+------+------+------+------+------+------+------------+
| 10 | 100 | 36 | fr | uk | 0 | 1 | default |
+------+------+------+------+------+------+------+------------+
| 10 | 100 | 12 | fr | uk | 1 | 0 | default |
+------+------+------+------+------+------+------+------------+
| 10 | 100 | 81 | fr | uk | 1 | 1 | default |
+------+------+------+------+------+------+------+------------+
| ... | ... | ... | ... | ... | .. | .. | ... |
+------+------+------+------+------+------+------+------------+

It's done by the :code:`_build_link(study: Study, result: Result) -> pd.DataFrame` method.


Converter
*********

Converters follow the same pattern, but are split into two tables. One for the source side:

+-----+-------+------+------+------+------+------+------------+
| max | ratio | flow | node | name | scn | t | network |
+-----+-------+------+------+------+------+------+------------+
| 100 | .4 | 52 | fr | conv | 0 | 0 | default |
+-----+-------+------+------+------+------+------+------------+
| 100 | .4 | 87 | fr | conv | 0 | 1 | default |
+-----+-------+------+------+------+------+------+------------+
| 100 | .4 | 23 | fr | conv | 1 | 0 | default |
+-----+-------+------+------+------+------+------+------------+
| 100 | .4 | 58 | fr | conv | 1 | 1 | default |
+-----+-------+------+------+------+------+------+------------+
| ... | ... | ... | ... | ... | .. | ... | ... |
+-----+-------+------+------+------+------+------+------------+

It's done by the :code:`_build_src_converter(study: Study, result: Result) -> pd.DataFrame` method.

And another for the destination side. The tables are nearly identical: the source has a special attribute called *ratio*, and the destination a special attribute called *cost*:

+-----+-------+------+------+------+------+------+------------+
| max | cost | flow | node | name | scn | t | network |
+-----+-------+------+------+------+------+------+------------+
| 100 | 20 | 52 | fr | conv | 0 | 0 | default |
+-----+-------+------+------+------+------+------+------------+
| 100 | 20 | 87 | fr | conv | 0 | 1 | default |
+-----+-------+------+------+------+------+------+------------+
| 100 | 20 | 23 | fr | conv | 1 | 0 | default |
+-----+-------+------+------+------+------+------+------------+
| 100 | 20 | 58 | fr | conv | 1 | 1 | default |
+-----+-------+------+------+------+------+------+------------+
| ... | ... | ... | ... | ... | .. | ... | ... |
+-----+-------+------+------+------+------+------+------------+

It's done by the :code:`_build_dest_converter(study: Study, result: Result) -> pd.DataFrame` method.

Low level analysis power with a *FluentAPISelector*
---------------------------------------------------

@@ -165,39 +227,3 @@ Unlike low level, the high level focuses on providing ready-to-use data.
* the :code:`get_cost(self, node: str) -> np.ndarray` method, which for the given node returns a matrix of shape (scenario, horizon) with the summarized cost.

* the :code:`get_balance(self, node: str) -> np.ndarray` method, which for the given node returns a matrix of shape (scenario, horizon) with the exchange balance (i.e. sum of exports minus sum of imports)
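The exchange-balance formula can be sketched with plain numpy (the link names and flow values below are invented for the example; this is not Hadar's internal code):

```python
import numpy as np

# Hypothetical link flows for node 'fr', shape (nb_scn, horizon):
# each array gives the power transferred per scenario and timestep.
exports_fr = {"fr->uk": np.array([[21, 36], [12, 81]])}
imports_fr = {"de->fr": np.array([[10, 5], [0, 30]])}

def balance(exports: dict, imports: dict) -> np.ndarray:
    """Exchange balance: sum of exports minus sum of imports."""
    return sum(exports.values()) - sum(imports.values())

print(balance(exports_fr, imports_fr))
# [[11 31]
#  [12 51]]
```

The result keeps the (scenario, horizon) shape, which is exactly what the high-level methods return.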

24 changes: 15 additions & 9 deletions docs/source/architecture/optimizer.rst
@@ -17,7 +17,7 @@ Today, two optimizers are present: :code:`LPOptimizer` and :code:`RemoteOptimizer`
RemoteOptimizer
---------------

Let's start with the simplest one. :code:`RemoteOptimizer` is a client to a Hadar server. As you may know, Hadar exists as a python library, but there is also a tiny project which packages Hadar inside a web server. You can find more details on this server in this `repository <https://github.com/hadar-simulator/community-server>`_.

The client implements the :code:`Optimizer` interface. That way, to deploy computation on a data-center, only one line of code changes. ::

@@ -41,7 +41,7 @@ Let's analyze that in detail.
InputMapper
************

If you look at the code, you will see three domains: one at :code:`hadar.optimizer.input`, one at :code:`hadar.optimizer.output` and another at :code:`hadar.optimizer.lp.domain`. If you look carefully, they seem the same: :code:`Consumption` and :code:`OutputConsumption` on one hand, :code:`LPConsumption` on the other. The only change is a new attribute in :code:`LP*` called :code:`variable`. Variables are the parameters of the problem: they are what or-tools has to find, i.e. the power used for productions, the capacity used for links and the loss of load for consumptions.

Therefore, the role of :code:`InputMapper` is just to create new objects with or-tools variables initialized, as we can see in this code snippet. ::

@@ -58,7 +58,7 @@
OutputMapper
************

At the end, :code:`OutputMapper` does the reverse. :code:`LP*` objects hold computed :code:`Variables`, and we need to extract the results found by or-tools into a :code:`Result` object.

The mapping of :code:`LPProduction` and :code:`LPLink` is straightforward. Let's look at the :code:`LPConsumption` code ::

@@ -79,6 +79,10 @@ Hadar has to build the optimization problem. These algorithms are encapsulated inside builders.

:code:`ObjectiveBuilder` receives nodes through its :code:`add_node` method. Then, for all productions, consumptions and links, it adds :math:`variable \times cost` to the objective equation.

:code:`StorageBuilder` builds constraints for each storage element. These constraints enforce strict volume integrity (i.e. the volume is the previous volume plus input minus output).

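A minimal pure-Python sketch of that volume-integrity rule (the real constraints are built with or-tools inside :code:`StorageBuilder`; applying the efficiency factor *eff* to the input flow is an assumption of this sketch):

```python
# Sketch of the rule enforced at each timestep:
#   capacity[t] = capacity[t-1] + eff * flow_in[t] - flow_out[t]
# (where exactly eff applies is an assumption of this sketch)
init_capacity, eff = 0, 0.99
flow_in = [214, 53]
flow_out = [0, 0]

capacity = []
previous = init_capacity
for fin, fout in zip(flow_in, flow_out):
    previous = previous + eff * fin - fout
    capacity.append(previous)

print(capacity)  # capacity trajectory over the horizon
```

In the optimizer these quantities are or-tools variables rather than plain numbers, but the linear relation between consecutive volumes is the same.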
:code:`ConverterBuilder` builds ratio constraints between each converter input and its output.

:code:`AdequacyBuilder` is a bit trickier. For each node, it creates a new adequacy constraint equation (c.f. :ref:`Linear Model <linear-model>`). Coefficients here are 1 or -1 depending on whether power flows *in* or *out*. Have you seen these lines? ::

self.constraints[(t, link.src)].SetCoefficient(link.variable, -1) # Export from src
@@ -136,13 +140,15 @@ It should work, but in fact not... I don't know why, when multiprocessing want t
Study
-----

:code:`Study` is an *API object*, meaning it encapsulates all the data needed to compute adequacy. It's the glue between the workflow (or any other preprocessing) and the optimizer. A study has a hierarchical structure of four levels:

#. study level, with the set of networks and the converters (:code:`Converter`)

#. network level (:code:`InputNetwork`), with the set of nodes.

#. node level (:code:`InputNode`), with the sets of consumption, production, storage and link elements.

#. element level (:code:`Consumption`, :code:`Production`, :code:`Storage`, :code:`Link`). Depending on the element type, some attributes are numpy 2D matrices of shape (nb_scn, horizon).

The most important attribute is probably :code:`quantity`, which represents the quantity of power used in the network. For a link, it's a transfer capacity. For a production, it's a generation capacity. For a consumption, it's a forced load to sustain.

@@ -175,9 +181,9 @@ In the case of optimizer, *Fluent API Selector* is represented by :code:`Network

* You can only go downstream step by step (i.e. :code:`network()`, then :code:`node()`, then :code:`consumption()`)

* But you can go back upstream as you want (i.e. directly from :code:`consumption()` to :code:`network()` or :code:`converter()`)

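The navigation rules above can be illustrated with a toy builder (a deliberately simplified sketch, not Hadar's real :code:`Study` API; all class and method names here are invented):

```python
# Each selector exposes only the legal downstream steps, while
# upstream methods let the chain jump back to a higher level.
class NodeSelector:
    def __init__(self, study, name):
        self.study, self.name = study, name
        study.nodes[name] = []

    def consumption(self, name, cost, quantity):
        self.study.nodes[self.name].append(("consumption", name, cost, quantity))
        return self  # stay at node level, more elements can follow

    def node(self, name):  # upstream jump: switch to a sibling node
        return NodeSelector(self.study, name)

    def build(self):
        return self.study

class StudyBuilder:
    def __init__(self, horizon):
        self.horizon, self.nodes = horizon, {}

    def network(self):
        return self  # single 'default' network in this sketch

    def node(self, name):
        return NodeSelector(self, name)

study = (StudyBuilder(horizon=2)
         .network()
         .node("fr").consumption("load", cost=10, quantity=[5, 7])
         .node("uk").consumption("load", cost=10, quantity=[3, 4])
         .build())
print(sorted(study.nodes))  # ['fr', 'uk']
```

Returning a dedicated selector object at each level is what makes illegal downstream jumps impossible while keeping upstream navigation free.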
To help the user, the quantity and cost fields are flexible:

* lists are converted to numpy arrays
