
pypownet's Introduction

pypownet

pypownet stands for Python Power Network: a simulator for power (electrical) networks.

The simulator can emulate a power grid (of any size or characteristics) subject to a set of temporal injections (productions and consumptions) over discretized timesteps. Loadflow computations rely on Matpower and can be run under the AC or DC model. The simulator can also simulate cascading failures, in which successively overflowed lines are switched off and a loadflow is computed on the resulting grid.

Video capture of the renderer of the simulator in action

Illustration of a running power grid with our renderer on the default IEEE14 grid environment. NB: the renderer drastically slows pypownet down: without it, computing 1000 timesteps takes only ~40s with this environment.

The simulator comes with a Reinforcement Learning-focused environment, which implements states (observations), actions (reduced to node-splitting and line status switches), and a reward signal. Finally, a renderer is available, so that the observations of the network can be plotted in real time (synchronized with the game time).
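
As a quick taste of the interface, here is a minimal interaction-loop sketch. This is only a sketch: the RunEnv constructor argument shown is an assumption (see the documentation for the exact signature), while act and step follow the conventions of pypownet's runner.

import pypownet.environment
import pypownet.agent

# Hypothetical constructor argument; check the documentation for the exact signature
environment = pypownet.environment.RunEnv(parameters_folder='parameters/default14')
agent = pypownet.agent.DoNothing(environment)

observation = environment.reset()  # assumed to return the initial observation
for _ in range(1000):
    action = agent.act(observation)
    # step returns the next observation, the reward (a list with do_sum=False),
    # a game-over flag and an info object
    observation, reward_aslist, done, info = environment.step(action, do_sum=False)
    if done:
        observation = environment.reset()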

Official documentation: https://pypownet.readthedocs.io/

Installation

Using Docker

Retrieve the Docker image:

sudo docker pull marvinler/pypownet:2.2.8-light

Without using Docker

Requirements:

  • Python >= 3.6

For Octave backend (default is Python backend):

  • Octave >= 4.0.6
  • Matpower >= 6.0

Instructions

These instructions allow you to run the simulator with the Python backend; for the Octave backend, please refer to the documentation for installation instructions.

Step 1: Install Python3.6
sudo apt-get update
sudo apt-get install python3.6

If you have any trouble with this step, please refer to the official webpage of Python.

(Optional, recommended) Step 1bis: Create a virtual environment
virtualenv -p python3.6 --system-site-packages venv
source venv/bin/activate
Step 2: Clone pypownet
git clone https://github.com/MarvinLer/pypownet

This should create a folder pypownet with the current sources.

Step 3: Run the installation script of pypownet

Finally, run the following Python command to install the current simulator (including the Python library dependencies):

cd pypownet/
python3.6 setup.py install

After this, the simulator is available under the name pypownet (e.g. import pypownet).
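
You can check that the installation succeeded with:

python3.6 -c "import pypownet"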

Basic usage

Without using Docker

Experiments can be conducted using the CLI.

Using CLI arguments

CLI can be used to run simulations:

python -m pypownet.main -v

You can use python -m pypownet.main --help for further information about the runner's arguments. Example running 1000 iterations (here, ~40 days) of the do-nothing (default) agent on a grid with 14 substations:

python -m pypownet.main --parameters parameters/default14 --niter 1000 --verbose --render

With the default14/ parameters (emulating a grid with 14 substations, 5 productions, 11 consumptions and 20 lines), it takes ~100 seconds to run 1000 timesteps (on an old i5).

Using Docker

You can use the command line of the image with shared display (for running the renderer):

sudo docker run -it --privileged --net=host --env="DISPLAY" --volume="$HOME/.Xauthority:/root/.Xauthority:rw" marvinler/pypownet:2.2.0 sh

This will open a terminal inside the image. Usage is then identical to the non-Docker case: perform the above steps within this terminal.

Main features

pypownet is a power grid simulator that emulates a power grid subject to pre-computed injections, planned maintenance, and random external hazards. Here is a list of pypownet's main features:

  • emulates a grid of any size and electrical properties in a game discretized into timesteps of any (fixed) size
  • computes and applies a cascading failure process: at each timestep, overflowed lines meeting certain conditions are switched off, a loadflow is then computed to retrieve the new grid steady state, and the process is reiterated
  • has an RL-focused interface, where players or controllers can play actions (node-splitting or line status switches) on the current grid, based on a partial (high-dimensional) observation of the grid, with a customizable reward signal (and game-over options)
  • has a renderer that lets the user watch the grid evolving in real time, along with the actions of the controller currently playing and further grid state details (works only for pypownet's official grid cases)
  • has a runner that lets you use pypownet fully by simply coding an agent (with a method act(observation)); see the sketch after this list
  • provides some baseline models (including tree searches) illustrating how to use the provided environment
  • can be launched from the CLI, with the possibility of managing certain parameters (such as toggling the renderer or choosing the agent to run)
  • works in both DC and AC modes
  • has a set of customizable parameters (including AC or DC mode, and the hard-overflow coefficient), associated with sets of injections, planned maintenance and random hazards for the various chronics
  • handles node-splitting (at the moment, at most 2 nodes per substation) and line switch-offs for topology management
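
For illustration, here is a minimal agent sketch; Agent and DoNothing live in pypownet/agent.py, and the delegation to DoNothing is just a placeholder for your own decision logic:

from pypownet.agent import Agent, DoNothing

class CustomAgent(Agent):
    """Minimal agent skeleton: the runner only requires an act(observation) method."""
    def __init__(self, environment):
        self.environment = environment
        self.fallback = DoNothing(environment)  # built-in baseline agent

    def act(self, observation):
        # Inspect the observation and build an action here; as a placeholder,
        # delegate to the do-nothing baseline
        return self.fallback.act(observation)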

Generate the documentation

The stable official documentation is available at https://pypownet.readthedocs.io/. Alternatively, a copy of the master documentation can be built locally: you will need Sphinx, a documentation-building tool, and a nice-looking custom Sphinx theme similar to the one of readthedocs.io:

pip install sphinx sphinx_rtd_theme

This installs both the Sphinx package and the custom template. Then:

cd doc
sphinx-build -b html ./source ./build

The HTML will be available in the folder doc/build.

Tests

pypownet is provided with a series of tests developed by @ZergD and RTE. These tests verify the behavior of the game as a whole, including some expected grid values based on perfectly controlled injections/topology. Tests can be run with pytest from the repository root.

See tests/README.md for more information about the testing module.

License information

Copyright 2017-2019 RTE and INRIA (France)

RTE: http://www.rte-france.com
INRIA: https://www.inria.fr/

This Source Code is subject to the terms of the GNU Lesser General Public License v3.0. If a copy of the LGPL-v3 was not distributed with this file, You can obtain one at https://www.gnu.org/licenses/lgpl-3.0.fr.html.

Citation

If you use this repo or find it useful, please consider citing:

@article{lerousseau2021design,
  title={Design and implementation of an environment for Learning to Run a Power Network (L2RPN)},
  author={Lerousseau, Marvin},
  journal={arXiv preprint arXiv:2104.04080},
  year={2021}
}

pypownet's People

Contributors

bdonnot, dependabot[bot], marota, marvinler, zergd


pypownet's Issues

Reward

The reward is not fully finished: it does not yet take load cuts into account.

Frame by frame approach

Disable the intermediate time step "t+0.5", and follow this process:

1/ Apply the action + apply the new injections
2/ Perform the load flow + cascading failure

Safe mode:
3/ If the cascading failure results in a stable situation and no game-over condition is satisfied, then do nothing and return the corresponding reward, etc.
Else, cancel the action and return a low reward.

Hardcore mode:
3/ If the cascading failure results in a stable situation and no game-over condition is satisfied, then do nothing and return the corresponding reward, etc.
Else, return game over.
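
A rough sketch of this control flow in Python, where every helper name is illustrative (none of them are pypownet internals):

LOW_REWARD = -1.0  # placeholder value


class GameOver(Exception):
    pass


def frame_step(game, action, mode):
    backup = game.save_state()  # hypothetical state snapshot
    game.apply_action(action)  # 1/ apply action + new injections
    game.apply_new_injections()
    stable = game.compute_loadflow_and_cascading_failure()  # 2/
    if stable and not game.is_game_over():  # 3/ stable situation
        return game.get_reward()
    if mode == 'safe':
        game.restore_state(backup)  # cancel the action
        return LOW_REWARD
    raise GameOver()  # hardcore mode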

14 nodes

Enable full support of case 14 (environment and renderer).

Test clean install of pypownet - Bugs

Describe the bug
Installation of pypownet with Docker on Mac. Install completes. Bugs:

  • when running python -m pypownet.main => ModuleNotFoundError: No module named 'oct2py'
  • when running sudo docker run -it --net=host --env="DISPLAY" --volume="$HOME/.Xauthority:/root/.Xauthority:rw" marvinler/pypownet sh
    => Unable to find image 'marvinler/pypownet:latest' locally
    docker: Error response from daemon: manifest for marvinler/pypownet:latest not found.


Random line break support

Enable support of random line opening events. This aims at simulating a random hazard on a line.
The input that decides that such an event happens should be stored in a file similar to the injections.
The damaged lines should then remain unavailable for a given time.

Confusing get_switches_configuration_of_substation(*args) behaviour

The method called get_switches_configuration_of_substation(*args), which takes as arguments the action and the substation_id, is sometimes confusing. Personally, I think this method should return the current switch status of the grid independently of the action being taken, because you are only interested in knowing how elements are connected in the grid. By default everything belongs to busbar 0, but after making some changes you want to know how the grid looks now, and if you pass an incorrect action array the information you get is misleading.

Store damaged lines

If a cascading failure results in a line being opened, then store the information that this line is damaged, and make it unavailable for a given time lapse.

Action understandability

We need to be able to identify which elements of the action vector correspond to each substation. The current implementation already offers an easier-to-understand action ordering, but the problem is that you don't know where the boundary between nodes lies.
If I were to try every possible topological combination for a specific node, it would be quite hard to identify which parts of the action vector to modify.

Fix ampere values

Ampere values in the observation are not computed correctly: there is a mismatch with theoretically computed values, off by a factor of 10.

Action.__getitem__() returns nothing

Correction:
I added the missing return statements, plus a small modification of __setitem__ for more clarity.

import numpy as np


class Action(object):
    def __init__(self, prods_switches_subaction, loads_switches_subaction,
                 lines_or_switches_subaction, lines_ex_switches_subaction, lines_status_subaction):
        if prods_switches_subaction is None:
            raise ValueError('Expected prods_switches_subaction to be array, got None')
        if loads_switches_subaction is None:
            raise ValueError('Expected loads_switches_subaction to be array, got None')
        if lines_or_switches_subaction is None:
            raise ValueError('Expected lines_or_switches_subaction to be array, got None')
        if lines_ex_switches_subaction is None:
            raise ValueError('Expected lines_ex_switches_subaction to be array, got None')
        if lines_status_subaction is None:
            raise ValueError('Expected lines_status_subaction to be array, got None')

        self.prods_switches_subaction = np.asarray(prods_switches_subaction).astype(int)
        self.loads_switches_subaction = np.asarray(loads_switches_subaction).astype(int)
        self.lines_or_switches_subaction = np.asarray(lines_or_switches_subaction).astype(int)
        self.lines_ex_switches_subaction = np.asarray(lines_ex_switches_subaction).astype(int)
        self.lines_status_subaction = np.asarray(lines_status_subaction).astype(int)

        self._prods_switches_length = len(self.prods_switches_subaction)
        self._loads_switches_length = len(self.loads_switches_subaction)
        self._lines_or_switches_length = len(self.lines_or_switches_subaction)
        self._lines_ex_switches_length = len(self.lines_ex_switches_subaction)
        self._lines_status_length = len(self.lines_status_subaction)

    def get_prods_switches_subaction(self):
        return self.prods_switches_subaction

    def get_loads_switches_subaction(self):
        return self.loads_switches_subaction

    def get_lines_or_switches_subaction(self):
        return self.lines_or_switches_subaction

    def get_lines_ex_switches_subaction(self):
        return self.lines_ex_switches_subaction

    def get_node_splitting_subaction(self):
        return np.concatenate((self.get_prods_switches_subaction(), self.get_loads_switches_subaction(),
                               self.get_lines_or_switches_subaction(), self.get_lines_ex_switches_subaction(),))

    def get_lines_status_subaction(self):
        return self.lines_status_subaction

    def as_array(self):
        return np.concatenate((self.get_node_splitting_subaction(), self.get_lines_status_subaction(),))

    def __str__(self):
        return ', '.join(list(map(str, self.as_array())))

    def __len__(self, do_sum=True):
        length_aslist = (self._prods_switches_length, self._loads_switches_length, self._lines_or_switches_length,
                         self._lines_ex_switches_length, self._lines_status_length)
        return sum(length_aslist) if do_sum else length_aslist

    def __setitem__(self, item, value):
        item %= len(self)
        if item < self._prods_switches_length:
            self.prods_switches_subaction.__setitem__(item, value)
            return
        item -= self._prods_switches_length

        if item < self._loads_switches_length:
            self.loads_switches_subaction.__setitem__(item, value)
            return
        item -= self._loads_switches_length

        if item < self._lines_or_switches_length:
            self.lines_or_switches_subaction.__setitem__(item, value)
            return
        item -= self._lines_or_switches_length

        if item < self._lines_ex_switches_length:
            self.lines_ex_switches_subaction.__setitem__(item, value)
            return
        item -= self._lines_ex_switches_length

        self.lines_status_subaction.__setitem__(item, value)

    def __getitem__(self, item):
        item %= len(self)

        if item < self._prods_switches_length:
            return self.prods_switches_subaction.__getitem__(item)
        item -= self._prods_switches_length

        if item < self._loads_switches_length:
            return self.loads_switches_subaction.__getitem__(item)
        item -= self._loads_switches_length

        if item < self._lines_or_switches_length:
            return self.lines_or_switches_subaction.__getitem__(item)
        item -= self._lines_or_switches_length

        if item < self._lines_ex_switches_length:
            return self.lines_ex_switches_subaction.__getitem__(item)
        item -= self._lines_ex_switches_length

        return self.lines_status_subaction.__getitem__(item)

random error with random agent

I created this agent, and I have the same error as previously (I created another bug report because it is a totally different input), but it happens randomly in time (at a low rate, with p near 1).

import numpy as np
# Imports added for completeness; GreedySearch and RandomNodeSplitting are baseline agents from pypownet
from pypownet.agent import Agent, GreedySearch, RandomNodeSplitting


class TrainerAgent(Agent):
    """The template to be used to create an agent: any controller of the power grid is expected
    to be a daughter of this class.
    """

    def __init__(self, environment):
        """Initialize a new agent."""
        self.agent = GreedySearch(environment)
        self.randoma = RandomNodeSplitting(environment)
        self.actions = list()
        self.environment = environment

    def act(self, observation):
        if np.random.rand() < 0.3:
            action = self.agent.act(observation)
            self.actions.append((observation.as_array(), action.as_array()))
        else:
            action = self.randoma.act(observation)
        return action

The error:

Traceback (most recent call last):
  File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/to/pypownet/main.py", line 63, in <module>
    main()
  File "/home/to/pypownet/main.py", line 58, in main
    final_reward = runner.loop(iterations=args.niter)
  File "/home/to/pypownet/runner.py", line 111, in loop
    (obs, act, rew) = self.step()
  File "/home/to/pypownet/runner.py", line 80, in step
    action = self.agent.act(self.last_observation)
  File "/home/to/pypownet/agent.py", line 592, in act
    action = self.agent.act(observation)
  File "/home/to/pypownet/agent.py", line 232, in act
    reward_aslist = self.environment.simulate(action, do_sum=False)
  File "/home/to/pypownet/environment.py", line 611, in simulate
    observation, reward_flag, done = self.game.simulate(to_simulate_action)
  File "/home/to/pypownet/game.py", line 606, in simulate
    simulated_obs, flag, done = self.step(action, _is_simulation=True)
  File "/home/to/pypownet/game.py", line 556, in step
    self._compute_loadflow_cascading()
  File "/home/to/pypownet/game.py", line 370, in _compute_loadflow_cascading
    self.grid.compute_loadflow(fname_end='_cascading%d' % depth)
  File "/home/to/pypownet/grid.py", line 244, in compute_loadflow
    output, loadflow_success = self.__vanilla_loadflow_backend_callback(fname_end=fname_end)
  File "/home/to/pypownet/grid.py", line 218, in __vanilla_loadflow_backend_callback
    output, loadflow_success = function(self.mpc, self.loadflow_options, pprint, fname)
  File "/usr/lib/python3.7/site-packages/PYPOWER-5.1.4-py3.7.egg/pypower/rundcpf.py", line 26, in rundcpf
    return runpf(casedata, ppopt, fname, solvedcase)
  File "/usr/lib/python3.7/site-packages/PYPOWER-5.1.4-py3.7.egg/pypower/runpf.py", line 99, in runpf
    ref, pv, pq = bustypes(bus, gen)
  File "/usr/lib/python3.7/site-packages/PYPOWER-5.1.4-py3.7.egg/pypower/bustypes.py", line 47, in bustypes
    ref[0] = pv[0]  # use the first PV bus

Change the renderer legend

Describe the bug
Illustration of the renderer with the default14/ environment; note that the renderer drastically slows the performance of pypownet: it takes ~40s to compute 1000 timesteps without renderer mode.

Change to:
Illustration of a running power grid with our renderer on the default IEEE14 grid environment.
NB: the renderer drastically slows the performance of pypownet: it takes ~40s to compute 1000 timesteps without renderer mode.

Include damaged lines in the observation

We should include a new part in our observation: the damaged lines.
The underlying idea is that an agent should be able to retrieve the reward from what it observes.

Change in simulate functionality

For the presentation of August 30, we need a simulate function able to try an action, get the full reward, and then go back to the state before the action.
What is currently done is computing a partial reward with the new topology but with the same injections.

simulate interference

The simulate method still modifies the environment.

Test:
To see this, you can run a simple do-nothing agent vs. a do-nothing agent that simulates at least 2 actions before each step. If the rewards are not identical timestep after timestep, the simulation is interfering. A sketch of such a test agent follows.
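
A sketch of the second agent; DoNothing and the simulate signature come from pypownet, while the wrapper class itself is illustrative:

from pypownet.agent import Agent, DoNothing

class DoNothingAfterSimulate(Agent):
    """Do-nothing agent that calls simulate twice before acting: if simulate
    has no side effects, its rewards should match a plain DoNothing run."""
    def __init__(self, environment):
        self.environment = environment
        self.do_nothing = DoNothing(environment)

    def act(self, observation):
        action = self.do_nothing.act(observation)
        # Simulate at least 2 actions before acting (here, the same one twice)
        self.environment.simulate(action, do_sum=False)
        self.environment.simulate(action, do_sum=False)
        return action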

Bugs:

  1. Using load_entries_from_timestep_id to reset in reload_minus_1_timestep:
    => it resets the production values to the chronic values after the simulation, which are not the production values from before the simulation. Indeed, before a simulation the state of the grid is the result of a load flow after applying the chronics and a previous action; the load flow adjusts the productions from the chronics, especially for the slack bus. So reapplying the chronics without a load flow does not give the same state.
  2. self.grid.mpc = copy.deepcopy(before_mpc) + apply_topo in reload_minus_1_timestep:
    => if you reset the mpc, it should then not be modified by apply_topo. However, you should also reset are_loads, which is modified in apply_topo but was not reset.

Before in reload_minus_1_timestep:
self.grid.mpc = before_mpc # Change grid mpc before apply topo
self.grid.apply_topology(before_topology) # Change topo before loading entries (reflects what happened)
self.load_entries_from_timestep_id(before_timestep_id, silence=True)

Should be changed for:
self.grid.mpc = before_mpc # Change grid mpc before apply topo
self.grid.are_loads=before_are_loads
self.current_timestep_id=before_timestep_id

Don't forget to copy are_loads when starting simulate:
before_are_loads=copy.deepcopy(self.grid.are_loads)

There is an error in the Pypownet.doc

Hello,

Thanks for your generous contribution to this project. I wonder if you are still updating it?
Also, there is a mistake in the pypownet documentation, in the subsection "Using environment" -> "Reading observations": the descriptions of the "line" observation fields all seem to be wrong.

gym compatibility

There are a few modifications that could be made so as to improve the compatibility with the openai.gym framework. More specifically, I tried to directly implement a dqn agent developed to work on gym problems, and I observed the following errors:

1/ 'ObservationSpace' object has no attribute 'shape'
This is not the same as the attribute env.observation_space.grid_number_of_elements! The shape attribute is simply not defined.

2/ AttributeError: 'ActionSpace' object has no attribute 'n'

3/ AttributeError: 'ActionSpace' object has no attribute 'sample'

4/ new_state, reward, is_done, _ = self.env.step(action)
One needs to convert new_state to an array using new_state.as_array(), while in the gym framework it is directly an array

I'll add some comments if I find other compatibility issues.
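
A partial adapter sketch addressing points 1 and 4 (observation_space.grid_number_of_elements, as_array() and the step signature appear in the errors above; the wrapper itself is illustrative, and points 2 and 3 would need similar equivalents on the action space):

class GymLikeWrapper:
    """Thin adapter exposing a more gym-like surface over a pypownet environment."""
    def __init__(self, env):
        self.env = env
        # Point 1: gym expects observation_space.shape; grid_number_of_elements
        # is the closest existing size attribute
        self.observation_shape = (env.observation_space.grid_number_of_elements,)

    def step(self, action):
        # Point 4: gym agents expect arrays, pypownet returns an Observation object
        new_state, reward, is_done, info = self.env.step(action)
        return new_state.as_array(), reward, is_done, info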

Time lag for line breaking

This is a "long-term" feature request

Isabelle suggests that if there is an overflow, we wait a few timesteps before opening the line. This time lag is still to be defined. The impact on the cascading failure would be that it is no longer instantaneous.

environment.step() resets the action

From runner.py, lines 81-84:

action = self.agent.act(observation)
observation, reward_aslist, done, info = self.environment.step(action, do_sum=False)

When action is an Action object (and not an array), after environment.step(...) the action is reset to the do-nothing action.

Note that the action is still applied, but this is a bit inconsistent: when the action is an np.array, this doesn't happen.

Flows and thermal limits

We should have access to both the actual flow in the lines and the thermal limits, not just a normalized version of the flows.

Max number action bug

In game.py, in _verify_illegal_action:

if n_switched_substations > self.max_number_actionned_lines

should be:

if n_switched_substations > self.max_number_actionned_substations

apply_topology in game.py - load setting problem

From these lines:

# Replace their values with the one of its associated node
bus[id_bus, 2] = bus[(lo + self.n_nodes // 2) % self.n_nodes, 2]
bus[id_bus, 3] = bus[(lo + self.n_nodes // 2) % self.n_nodes, 3]

"lo" should be replaced by id_bus:

bus[id_bus, 2] = bus[(id_bus + self.n_nodes // 2) % self.n_nodes, 2]

and maybe, more simply:

bus[id_bus, 2] = bus[id_bus + self.n_nodes // 2, 2]

Codalab pytorch

When submitting a controller on the Codalab page of the challenge, I encountered the following error: ModuleNotFoundError: No module named 'torch'.
torch should be available.

CSV files missing for 118 case

Hi Marvin,
Thanks for making this repository public.
The files 'hazards.csv' and 'maintenance.csv' are missing in the input folder for the 118-bus case. Could you please upload them?
Also, is there any documentation on how to create the .csv files for larger cases?
Thanks
Amar

Confusing action name

In agent.py, in the GreedySearch agent, we observed the following line of code:

action_space.set_switches_configuration_of_substation(action=action,
                                                      substation_id=substation_id,
                                                      new_configuration=new_configuration)

It mentions a 'new_configuration' as the action. However, this is confusing with regard to the action-space encoding designed and used so far: the term 'new_configuration' can make people think it is the topology state we want the grid to be in, while it is actually just a vector saying which connections to modify.

Score & reward are intermixed - give access to observation in simulate

If participants don't use machine learning but optimization based on simulate and its rewards, then when they submit on the codalab competition, the simulation reward will be the one of our score in reward.py.
To let them compute their own reward when submitting, we can give them the observation as an output of simulate in environment.py, as in game.py. They will then be able to recompute a local reward.
Agents using the simulate API will need to be updated after this change.

function reset

RunEnv has a reset method. This function resets the game:
if the game-over mode is "hard": load the next chronic
else: load the next timestep

BUT it is used in runner.loop() with the intent of restarting the current chronic (to run episodes), whereas it actually loads the next chronic.

So the function is used once to handle game overs, and once to run episodes.

Luca

Online injection timeseries generation

We should allow two modes for importing the injections:
1/ A mode where the injections are already stored in a file. That will be useful to compute the final score of a policy.
2/ A mode where the injections are generated online. We should also decide whether we want to compute them timestep by timestep, or the whole period at the beginning.

Chronic end bug with greedy search

Running with:

python -m pypownet.main --parameters parameters/default14 -lv hard -a TreeSearchLineServiceStatus -n 10000

This happens with an agent that uses prediction: at the last timestep of a chronic, the prediction fails. To counter this problem, switch to the next chronic one timestep earlier.

Proposition: in game.py line 345, replace:

if self.current_timestep_id == timesteps_ids[-1]:

by

if self.current_timestep_id == timesteps_ids[-3]:
# changing to -2 doesn't work

The default agent is not defined

After following the installation procedure (without docker) as described in the README.md, I run the command python -m pypownet.main suggested in the Basic usage section.

I get the following error:

File "/home/barbesantvin/challenge/pypownet/pypownet/main.py", line 52, in main
    agent = agent_class(env)
TypeError: Can't instantiate abstract class Agent with abstract methods act

This is because the default agent defined in main.py is Agent instead of DoNothing:

parser.add_argument('-a', '--agent', metavar='AGENT_CLASS', default='Agent', type=str, help='class to use for the agent (must be within the \'pypownet/agent.py\' file); default class Agent')

As a sanity check, when I run python -m pypownet.main --agent DoNothing, everything works fine.

Install: no module named gym

Describe the bug
Dependency on 'gym', but it is not listed in the README; this causes a problem when running pypownet.main for the first time.

=> update the README for the git install (and maybe docker)

Display chronic name in render

It would be awesome if you could include the scenario name somewhere in the render window, because currently you do not know which scenario you are analyzing (even with date and time).

When dealing with large numbers of scenarios, having this kind of information in the render window would give you more control while visualizing.

30 nodes

Enable full support of case 30 (environment and renderer).

simulate

This function should be deleted from env.py (the frame-by-frame approach makes it useless).
However, we still need it for development purposes for at least a few weeks.

Pausing / unpausing the renderer

  • You are submitting a: feature request

  • What is the current behavior?
    You can't pause the renderer.

  • What is the expected behavior?
    A key press while the renderer is active would pause it at the current frame; another key press would resume the run where it was paused.

  • Environment:

    • Version: 2.0.3
    • OS: Linux (Ubuntu 16.04)
    • Installation source: setup.py

Benchmark network computation performance

  • You are submitting a: feature request: create a benchmark of network computation performance for different solvers on a given scenario.

  • What is the current behavior?
    The DC solver is faster than the AC (fast-decoupled) solver, but by how much? No idea yet for the AC solver with Newton-Raphson; to be tested.

  • What is the expected behavior?
    Go as fast as possible!

  • Environment:

    • Version: 2.0.3

Render mode failure

The command python -m pypownet.main --agent DoNothing -r gives the following error:

File "/home/barbesantvin/challenge/pypownet/pypownet/renderer.py", line 431, in draw_surface_diagnosis
    self.data['number_unavailable_nodes'], self.data['max_number_isolated_loads'],
KeyError: 'number_unavailable_nodes'

Rendering and computation are blocked at the first iteration.
