
swmp_ml's Introduction

Physics-Informed Heterogeneous Graph Neural Networks for DC Blocker Placement

This repository contains the code for the paper Physics-Informed Heterogeneous Graph Neural Networks for DC Blocker Placement.

  • Paper Abstract

    The threat of geomagnetic disturbances (GMDs) to the reliable operation of the bulk energy system has spurred the development of effective strategies for mitigating their impacts. One such approach involves placing transformer neutral blocking devices, which interrupt the path of geomagnetically induced currents (GICs) to limit their impact. The high cost of these devices and the sparsity of transformers that experience high GICs during GMD events, however, calls for a sparse placement strategy that involves high computational cost. To address this challenge, we developed a physics-informed heterogeneous graph neural network (PIHGNN) for solving the graph-based dc-blocker placement problem. Our approach combines a heterogeneous graph neural network (HGNN) with a physics-informed neural network (PINN) to capture the diverse types of nodes and edges in ac/dc networks and incorporates the physical laws of the power grid. We train the PIHGNN model using a surrogate power flow model and validate it using case studies. Results demonstrate that PIHGNN can effectively and efficiently support the deployment of GIC dc-current blockers, ensuring the continued supply of electricity to meet societal demands. Our approach has the potential to contribute to the development of more reliable and resilient power grids capable of withstanding the growing threat that GMDs pose.

Installation

Install the required packages by running `sh setup.sh` if a GPU is available. Otherwise, run `sh setup.sh cpu` to install the CPU-only packages.

In addition, the package requires Julia with the JuMP modeling package and the Ipopt solver to solve the optimization problem. Their installation is not included in the setup.sh script: please install Julia manually, and follow the steps in data/README.md to install the packages required by the heuristic solvers.

Usage

The package reads customized MATPOWER data files (*.m) containing both AC and DC networks and converts them into heterogeneous graphs in PyG format. It also provides the training and testing of the PIHGNN model.
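
For reference, a minimal sketch (not the package's actual converter) of what such a heterogeneous graph looks like in PyG, using the node and edge types that appear throughout these notes (bus, gen, gmd_bus; branch, gmd_branch); all sizes and feature dimensions below are placeholders:

import torch
from torch_geometric.data import HeteroData

data = HeteroData()
# node features per type (dimensions are placeholders)
data['bus'].x = torch.randn(21, 4)      # e.g., Pd, Qd, Vm, Va per bus
data['gen'].x = torch.randn(7, 3)       # generator features
data['gmd_bus'].x = torch.randn(21, 2)  # dc-network bus features
# edges: ac branches between buses, dc branches between gmd buses
data['bus', 'branch', 'bus'].edge_index = torch.tensor([[0, 1], [1, 2]])
data['gmd_bus', 'gmd_branch', 'gmd_bus'].edge_index = torch.tensor([[0], [1]])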

To run the model with the Heterogeneous Graph Neural Network (HGNN) only, e.g., on the epri21 network, use the following command:

python demo_hgnn.py --names epri21 --verbose

To run the model with Physics-Informed Heterogeneous Graph Neural Network (PIHGNN), run the following command:

python demo_pihgnn.py 

The usage of both scripts is as follows:

python demo_hgnn.py --help 
usage: demo_hgnn.py [-h] [--names NAMES [NAMES ...]] [--force] [--lr LR] [--weight_decay WEIGHT_DECAY] [--hidden_size HIDDEN_SIZE] [--num_heads NUM_HEADS] [--num_conv_layers NUM_CONV_LAYERS] [--num_mlp_layers NUM_MLP_LAYERS]
                    [--act {relu,rrelu,hardtanh,relu6,sigmoid,hardsigmoid,tanh,silu,mish,hardswish,elu,celu,selu,glu,gelu,hardshrink,leakyrelu,logsigmoid,softplus,tanhshrink}] [--conv_type {hgt,han}] [--dropout DROPOUT] [--epochs EPOCHS]
                    [--batch_size BATCH_SIZE] [--seed SEED] [--no_norm] [--test_split TEST_SPLIT] [--gpu GPU] [--weight] [--log] [--verbose]

optional arguments:
  -h, --help            show this help message and exit
  --names NAMES [NAMES ...]
                        list of names of networks, separated by space
  --force               Force to reprocess data
  --lr LR               learning rate
  --weight_decay WEIGHT_DECAY
                        weight decay rate for Adam
  --hidden_size HIDDEN_SIZE
                        hidden dimension in HGT
  --num_heads NUM_HEADS
                        number of heads in HGT
  --num_conv_layers NUM_CONV_LAYERS
                        number of layers in HGT
  --num_mlp_layers NUM_MLP_LAYERS
                        number of layers in MLP
  --act {relu,rrelu,hardtanh,relu6,sigmoid,hardsigmoid,tanh,silu,mish,hardswish,elu,celu,selu,glu,gelu,hardshrink,leakyrelu,logsigmoid,softplus,tanhshrink}
                        specify the activation function used
  --conv_type {hgt,han}
                        select the type of convolutional layer (hgt or han)
  --dropout DROPOUT     dropout rate
  --epochs EPOCHS       number of epochs in training
  --batch_size BATCH_SIZE
                        batch size in training
  --seed SEED           Random seed. Set `-1` to ignore random seed
  --no_norm             No normalization of the data
  --test_split TEST_SPLIT
                        the proportion of datasets to use for testing
  --gpu GPU             which GPU to use. Set -1 to use CPU.
  --weight              use weighted loss.
  --log                 logging the training process
  --verbose             print the training process
  • For transfer learning, train the model on one network (this saves a model file _model.pt), then run demo_transfer_learning.py with another network:

    python demo_hgnn.py --names epri21 
    # creates _model.pt with the trained model's parameters
    python demo_transfer_learning.py --names uiuc150
    # predicts on uiuc150 using the model trained on epri21
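
Under the hood this is the standard PyTorch save/load cycle; a minimal sketch, assuming the model architecture is rebuilt identically in the second script (a plain Linear stands in for the HGNN here):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)                      # stand-in for the trained HGNN
torch.save(model.state_dict(), '_model.pt')  # end of the first run
# in demo_transfer_learning.py: rebuild the same architecture, then load
model2 = nn.Linear(4, 2)
model2.load_state_dict(torch.load('_model.pt'))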

Copyright

Copyright (c) 2024 Argonne National Laboratory, 
                   Oak Ridge National Laboratory, 
                   Lawrence Berkeley National Laboratory, 
                   Los Alamos National Laboratory, 
                   and University of California, Berkeley.

License

This project is under the MIT License. See the LICENSE file for the full license text.

swmp_ml's People

Contributors

bluejuniper, cshjin, ccoffrin, dancinggoat, turtlecamera, rb004f, adammate, akbarnes, pseudocubic


swmp_ml's Issues

Perturbations on Load lead to same results

  • With and without perturbations on load_bus, the results for status, qd, and pd are the same
  • The optimized status, qd, pd are all 0s
  • Verified on epri21 and ots_test

Something is wrong with run_soc_gmd_mld_decoupled

Model Pipeline

  • what is the task of the GNN model: regression, prediction, simulation, or function approximation?
  • what are the labels/values (y) of the graphs (nodes)?
  • How to incorporate the GNN model with the GMD.jl optimization problem?

revise the inputs and output of MLD

Input: some of the buses are perturbed in pd and qd

  • replace the perturbed pd/qd
# TODO: replace the net_data['load'] with buses in mods file
for k in net_data['load']:
    # update pd/qd with augmented config
    mpc['bus'].loc[mpc['bus']['bus_i'] == int(k), "Pd"] = net_data['load'][k]['pd'] * 100
    mpc['bus'].loc[mpc['bus']['bus_i'] == int(k), "Qd"] = net_data['load'][k]['qd'] * 100

output:

  • reg - only consider the buses (loads) with active status
    • replace y = [res_data['solution']['load'][k]['qd'] for k in sorted(res_data['solution']['load'].keys())] with the keys in mods only (see the sketch below)
    • update the output of the forward function with the aligned keys only
  • clf - all buses
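
A minimal sketch of the proposed filtering, with toy stand-ins for the parsed Julia result (res_data) and the perturbation config (mods):

# toy stand-ins for the parsed Julia result and the mods file
res_data = {'solution': {'load': {'1': {'qd': 0.1}, '2': {'qd': 0.2}}}}
mods = {'load': {'2': {'pd': 1.0, 'qd': 0.5}}}

perturbed = set(mods['load'])
keys = [k for k in sorted(res_data['solution']['load']) if k in perturbed]
y = [res_data['solution']['load'][k]['qd'] for k in keys]  # -> [0.2]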

node feature

Provide a detailed process to extract the node features; a sketch follows the list of keys below.

Keys associated with nodes

  • bus
  • gen
  • gmd_bus
  • bus_gmd
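
A sketch of what per-type extraction could look like, assuming the parsed MPC tables are pandas DataFrames as in the snippets above; the column names are illustrative, not the finalized feature set:

import torch

def extract_node_features(mpc):
    """Build one feature tensor per node type from the parsed MPC tables."""
    x_dict = {}
    x_dict['bus'] = torch.tensor(mpc['bus'][['Pd', 'Qd', 'Vm', 'Va']].values,
                                 dtype=torch.float)
    x_dict['gen'] = torch.tensor(mpc['gen'][['Pg', 'Qg', 'Pmax']].values,
                                 dtype=torch.float)
    x_dict['gmd_bus'] = torch.tensor(mpc['gmd_bus'][['g_gnd']].values,
                                     dtype=torch.float)
    return x_dict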

Data augmentation on existing GMD data

Data augmentation (a perturbation sketch follows the steps below):

  • random node feature perturbation
  • random edge feature perturbation
  • keep the topology the same

Steps:

  1. collect more data from augmentation
  2. run the GMD optimizer to collect the results
  3. send into GNN as a classification problem
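
A minimal sketch of step 1, assuming the graph is a PyG HeteroData object: Gaussian noise on node/edge features of a copy, with the topology untouched:

import copy
import torch

def augment(data, sigma=0.01):
    """Return a copy of `data` with perturbed features and unchanged topology."""
    new = copy.deepcopy(data)
    for node_type in new.node_types:
        new[node_type].x = new[node_type].x + sigma * torch.randn_like(new[node_type].x)
    for edge_type in new.edge_types:
        if 'edge_attr' in new[edge_type]:
            new[edge_type].edge_attr = new[edge_type].edge_attr + \
                sigma * torch.randn_like(new[edge_type].edge_attr)
    return new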

Data normalization

Data normalization is needed for the regression model; without it, the MSE loss of a dummy GNN plateaus at a large value, as the (abridged) output below shows. A standardization sketch follows the log.

Output from dummy GNN model:

Epoch 1, MSE loss 291651.8750
Epoch 2, MSE loss 281571.1250
Epoch 3, MSE loss 271607.1250
...
Epoch 100, MSE loss 138902.0625
...
Epoch 198, MSE loss 138756.2500
Epoch 199, MSE loss 138756.2500
Epoch 200, MSE loss 138756.2500
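
The loss plateauing near 1.4e5 is consistent with unnormalized targets; a minimal standardization sketch (the mean/std are kept so predictions can be mapped back to physical units):

import torch

def standardize(t, eps=1e-8):
    """Zero-mean, unit-variance scaling per feature column."""
    mean, std = t.mean(dim=0), t.std(dim=0)
    return (t - mean) / (std + eps), mean, std

# train on the scaled targets; recover predictions with y_hat * std + mean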

dc network data

  • dc network as homogeneous graph (optional)
  • dc network as heterogeneous graph

Optimize the exp 'pipeline'

  1. generate perturbations to a base case
  • perturbation in Python, OR in Julia, serving as part of $X$, along with other standard features
  • run the Julia solver to solve the particular problem, serving as the ground truth $y$
  • store results in the data folder
  2. process data into HeteroData
  • processed data stored in the tmp folder
  3. train the model
  • temporarily store the best model in logs
  4. evaluate the model
  • evaluate the best local model on the test set

[ERROR] Core dumped when the Julia script (re-evaluation part) runs in HPS

Error message as follows:

$ python hps.py 
/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch_geometric/data/dataset.py:190: UserWarning: The `pre_transform` argument differs from the one used in the pre-processed version of this dataset. If you want to make use of another pre-processing technique, make sure to delete '/home/jinh/tmp/data/GMD/processed' first
  warnings.warn(
Processing...
Done!
Number of workers:  8
All results:

signal (11): Segmentation fault
in expression starting at none:0
Allocations: 43526567 (Pool: 43509011; Big: 17556); GC: 48
Segmentation fault (core dumped)

ac network data

Convert MATPOWER data into pytorch_geometric data

  • ac network as homogeneous graph
  • ac network as heterogeneous graph

process file more efficiently

Detailed improvements in dataset.py

  • read the MPC outside the for loop
  • use a load_bus_mask instead of a dictionary (see the sketch below)
    • test with the mini-batch data loader
  • more general settings for the clf and reg models with different GMD problem settings
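
A sketch of the load_bus_mask idea: one vectorized membership test replaces per-row dictionary lookups (the ids below are illustrative):

import numpy as np

bus_ids = np.array([1, 2, 3, 4, 5])
load_bus_ids = np.array([2, 5])
load_bus_mask = np.isin(bus_ids, load_bus_ids)  # [False, True, False, False, True]
load_buses = bus_ids[load_bus_mask]             # rows of load buses only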

Baselines for comparison

For our current MLD problem (regression model), compare the results from HGT with:

  • logistic regression (LR) using bus information only
  • a neural network (MLP) using bus information only
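
A minimal sketch of the MLP baseline; the input width (number of bus features) and hidden size are placeholders:

import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(4, 64),  # 4 = number of bus features (placeholder)
    nn.ReLU(),
    nn.Linear(64, 1),  # one regression target per bus, e.g., qd
)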

HPS error with hidden_size

Error message:

Traceback (most recent call last):
  File "/home/pieterg/LBL/OE/code/swmp_ml/hps.py", line 247, in <module>
    results = search.search(max_evals=20)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pieterg/LBL/OE/code/env/lib/python3.11/site-packages/deephyper/search/_search.py", line 131, in search
    self._search(max_evals, timeout)
  File "/home/pieterg/LBL/OE/code/env/lib/python3.11/site-packages/deephyper/search/hps/_cbo.py", line 419, in _search
    new_X = self._opt.ask(
            ^^^^^^^^^^^^^^
  File "/home/pieterg/LBL/OE/code/env/lib/python3.11/site-packages/deephyper/skopt/optimizer/optimizer.py", line 521, in ask
    x = self._ask()
        ^^^^^^^^^^^
  File "/home/pieterg/LBL/OE/code/env/lib/python3.11/site-packages/deephyper/skopt/optimizer/optimizer.py", line 834, in _ask
    [self.space.distance(next_x, xi) for xi in self.Xi]
  File "/home/pieterg/LBL/OE/code/env/lib/python3.11/site-packages/deephyper/skopt/optimizer/optimizer.py", line 834, in <listcomp>
    [self.space.distance(next_x, xi) for xi in self.Xi]
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pieterg/LBL/OE/code/env/lib/python3.11/site-packages/deephyper/skopt/space/space.py", line 1472, in distance
    distance += dim.distance(a, b)
                ^^^^^^^^^^^^^^^^^^
  File "/home/pieterg/LBL/OE/code/env/lib/python3.11/site-packages/deephyper/skopt/space/space.py", line 903, in distance
    raise RuntimeError(
RuntimeError: Can only compute distance for values within the space, not 1 and 64.

Temporary solution:
Include 1 in the search spaces for hidden_size and batch_size.

Handle missing indices in some MATPOWER config files

  • reported error with epri21.m data
Traceback (most recent call last):
  File "test_dataset.py", line 46, in <module>
    output = model(data)
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "test_dataset.py", line 24, in forward
    x = self.conv1(x, edge_index)
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch_geometric/nn/conv/sage_conv.py", line 131, in forward
    out = self.propagate(edge_index, x=x, size=size)
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch_geometric/nn/conv/message_passing.py", line 366, in propagate
    coll_dict = self.__collect__(self.__user_args__, edge_index,
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch_geometric/nn/conv/message_passing.py", line 260, in __collect__
    data = self.__lift__(data, edge_index, dim)
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch_geometric/nn/conv/message_passing.py", line 230, in __lift__
    return src.index_select(self.node_dim, index)
IndexError: index out of range in self

This is because bus indices missing from the bus table are not handled in branch and gen; a reindexing sketch follows.
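
A sketch of the fix: map the (possibly non-contiguous) bus ids from the .m file to contiguous 0-based indices before building edge_index:

bus_ids = [1, 2, 4, 7]                  # ids as they might appear in the file
id2idx = {b: i for i, b in enumerate(bus_ids)}

branches = [(1, 4), (4, 7)]             # (f_bus, t_bus) pairs
edge_index = [[id2idx[f] for f, _ in branches],
              [id2idx[t] for _, t in branches]]  # -> [[0, 2], [2, 3]]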

Data from julia optimizer to pyg labels

Workflow from Julia

Install package

  1. start the Julia REPL
$ julia
julia> 
  2. start the package manager by pressing "]" in the Julia REPL
(@v1.6) pkg> 
  3. install the package from the gic-blocker branch
(@v1.6) pkg> add https://github.com/lanl-ansi/PowerModelsGMD.jl#gic-blocker
  4. test the installation
(@v1.6) pkg> test PowerModelsGMD
  5. run the blocker-placement problem
using PowerModels, PowerModelsGMD, Ipopt, JuMP

network_file = joinpath(dirname(pathof(PowerModelsGMD)), "../test/data/epri21.m")
case = PowerModels.parse_file(network_file)
solver = JuMP.optimizer_with_attributes(Ipopt.Optimizer)

result = PowerModelsGMD.run_gmd_blocker_placement(case, solver)
  6. results
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit https://github.com/coin-or/Ipopt
******************************************************************************

This is Ipopt version 3.13.4, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:      191
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        0

Total number of variables............................:      128
                     variables with only lower bounds:        0
                variables with lower and upper bounds:       27
                     variables with only upper bounds:        0
Total number of equality constraints.................:       55
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  2.6730000e+01 2.07e+02 1.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  2.3816703e+01 1.20e-12 4.47e-01  -1.0 2.10e+02  -4.0 5.53e-01 1.00e+00h  1
   2  2.3816676e-01 8.14e-13 9.46e-03  -1.7 1.59e+00  -4.5 1.00e+00 5.51e-01f  1
   3  8.1238859e-02 1.52e-12 6.46e-08  -2.5 5.81e-03  -5.0 1.00e+00 1.00e+00f  1
   4  4.2533690e-03 5.47e-13 1.06e-08  -3.8 2.85e-03  -5.4 1.00e+00 1.00e+00f  1
   5  5.0137241e-05 1.52e-12 1.92e-10  -5.7 1.56e-04  -5.9 1.00e+00 1.00e+00f  1
   6 -2.0224898e-07 5.51e-13 7.67e-13  -8.6 1.86e-06  -6.4 1.00e+00 1.00e+00f  1

Number of Iterations....: 6

                                   (scaled)                 (unscaled)
Objective...............:  -2.0224897619453249e-07   -2.0224897619453249e-07
Dual infeasibility......:   7.6716561392298102e-13    7.6716561392298102e-13
Constraint violation....:   5.5067062021407764e-13    5.5067062021407764e-13
Complementarity.........:   2.5092971842578934e-09    2.5092971842578934e-09
Overall NLP error.......:   2.5092971842578934e-09    2.5092971842578934e-09


Number of objective function evaluations             = 7
Number of objective gradient evaluations             = 7
Number of equality constraint evaluations            = 7
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 1
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 1
Total CPU secs in IPOPT (w/o function evaluations)   =      0.249
Total CPU secs in NLP function evaluations           =      0.129

EXIT: Optimal Solution Found.
Dict{String, Any} with 8 entries:
  "solve_time"         => 0.631855
  "optimizer"          => "Ipopt"
  "termination_status" => LOCALLY_SOLVED
  "dual_status"        => FEASIBLE_POINT
  "primal_status"      => FEASIBLE_POINT
  "objective"          => -2.02249e-7
  "solution"           => Dict{String, Any}("baseMVA"=>100, "gmd_branch"=>Dict{String,…
  "objective_lb"       => -Inf

gather the results

  • specify a random blocker placement on the case
  • gather the results (NOT the minimum cost) given the placement
  • (TODO) to confirm: there is no optimization in Julia, just calculating the cost

optimize the `read_mpc`

  • handle spaces and tabs automatically
  • remove the single quotes (') from strings
  • add processing for 'bus_sourceid', 'gen_sourceid', 'branch_sourceid' (a tokenizer sketch follows)
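
A sketch of the tokenizing changes (the sourceid handling is left out): splitting on any whitespace run covers both spaces and tabs, and quotes are stripped from string fields:

import re

def tokenize(line):
    """Split a MATPOWER data row into clean tokens."""
    line = line.split('%')[0].strip()  # drop MATPOWER comments
    tokens = re.split(r'\s+', line)    # spaces and tabs handled uniformly
    return [t.strip("'") for t in tokens if t]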

multiple gen on a single bus

In some scenarios, there can be multiple generators on a single regular bus.

Previously, the gen features were concatenated onto the bus features.

  • create virtual (multi-)edges between bus and gen (see the sketch below)

  • verify the result on some synthetic data
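
A sketch of the virtual edges, assuming a 'gen_bus'-style column gives each generator's bus: one edge per generator row, so two generators on the same bus naturally yield a multi-edge (the relation name 'conn' is a placeholder):

import torch
from torch_geometric.data import HeteroData

gen_bus = [3, 3, 7]  # two generators sit on bus 3
data = HeteroData()
data['gen', 'conn', 'bus'].edge_index = torch.tensor(
    [list(range(len(gen_bus))), gen_bus])  # gen i -> its bus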

edge feature

Keys associated with edges

  • branch
  • gmd_branch
  • (optional) branch_gmd
  • (optional) branch_thermal

  • [ ] finalize the edge features for branch and gmd_branch

NoneType of `gen` for opf model in HGT model

Popup error:

Traceback (most recent call last):
  File "demo_train_opf.py", line 117, in <module>
    out = model(data.x_dict, data.edge_index_dict)
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "demo_train_opf.py", line 45, in forward
    return self.lin(x_dict['gen'])
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch_geometric/nn/dense/linear.py", line 118, in forward
    return F.linear(x, self.weight, self.bias)
TypeError: linear(): argument 'input' (position 1) must be Tensor, not NoneType

[BUG] inactivated `activation` function

In model.py, the activation is passed as a hyperparameter to the HGT model but never used in the forward function; instead, relu is always applied as the activation.

# in __init__
self.activation = activation

...
# in forward
for node_type, x in x_dict.items():
    x_dict[node_type] = self.lin_dict[node_type](x).relu_()

@TurtleCamera Can you fix it and run with the HPS again?
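
One possible fix (a sketch in the same fragment style as above; getattr over torch.nn.functional assumes the CLI option names match the functional names, which would need a small mapping for options like leakyrelu):

import torch.nn.functional as F

# in __init__
self.act_fn = getattr(F, activation)  # e.g., activation='gelu'

# in forward
for node_type, x in x_dict.items():
    x_dict[node_type] = self.act_fn(self.lin_dict[node_type](x))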

Issue in MultiGMD for mini-batch settings

Traceback (most recent call last):
  File "demo_train.py", line 70, in <module>
    dataset = MultiGMD("./test/data",
  File "/home/jinh/SWMP/swmp_ml/py_script/dataset.py", line 201, in __init__
    super().__init__(root, transform, pre_transform, pre_filter)
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch_geometric/data/in_memory_dataset.py", line 50, in __init__
    super().__init__(root, transform, pre_transform, pre_filter)
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch_geometric/data/dataset.py", line 87, in __init__
    self._process()
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch_geometric/data/dataset.py", line 170, in _process
    self.process()
  File "/home/jinh/SWMP/swmp_ml/py_script/dataset.py", line 224, in process
    data, slices = self.collate(data_list)
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch_geometric/data/in_memory_dataset.py", line 105, in collate
    data, slices, _ = collate(
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch_geometric/data/collate.py", line 84, in collate
    value, slices, incs = _collate(attr, values, data_list, stores,
  File "/home/jinh/miniconda3/envs/swmp/lib/python3.8/site-packages/torch_geometric/data/collate.py", line 155, in _collate
    value = torch.cat(values, dim=cat_dim or 0, out=out)
RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 10 but got size 12 for tensor number 1 in the list.
  • dimensions don't match across different HeteroData objects

create separate env (folder) for each hps

The locally saved model states will be corrupted by other scripts if two or more are running at the same time.

  • create a folder based on task and timestamp for each run (see the sketch below)
  • save local models there instead of in the root folder
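
A minimal sketch of the per-run layout (the task name is illustrative):

import os
import time

task_name = 'hps_epri21'  # illustrative
run_dir = os.path.join('logs', f"{task_name}_{time.strftime('%Y%m%d-%H%M%S')}")
os.makedirs(run_dir, exist_ok=True)
best_model_path = os.path.join(run_dir, '_model.pt')  # per-run, no clashes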

Run the optimal power flow on AC network with augmented data

Change in code

  • create a new branch ac_opf
  • make changes in h_data['y'] with the optimal pg from gen
    • without the optimal results
      • inputs from config file
    • with the optimal results
      • update/read node/edge features from result
      • branch has qf, qt, pt, pf
      • gen has qg
      • bus has va, vm

Change in training

  • split the generated data into train/val/test as 80/10/10
  • train the model in mini-batches
    • build data loaders with DataLoader (see the sketch below)
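
A sketch of the split and mini-batch loading, assuming `dataset` is the MultiGMD dataset described in the other issues:

import torch
from torch_geometric.loader import DataLoader

n = len(dataset)
n_train, n_val = int(0.8 * n), int(0.1 * n)
perm = torch.randperm(n)
train_set = dataset[perm[:n_train]]
val_set = dataset[perm[n_train:n_train + n_val]]
test_set = dataset[perm[n_train + n_val:]]

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)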

Update the model to a regression model that predicts pg/qg/va/vm for the generators/buses.
Keep the DC network as it is.

  • verify the performance of HeteroGNN

[HPS] choose between multiple best

The results from HPS may contain multiple rows with the same best hyperparameter configuration.

  • choose the one with the best test metric
  • may be solved with MOO HPS
