
Argos.jl


Argos.jl extends the power-system modeler ExaPF.jl and the interior-point solver MadNLP.jl to solve optimal power flow (OPF) problems entirely in Julia.

The package is structured as follows:

  • in src/Evaluators/, various optimization evaluators implement the different callbacks (objective, gradient, Hessian) required in the optimization algorithms.
  • in src/Algorithms/, an Augmented Lagrangian algorithm is implemented, primarily targeting the solution of large-scale OPF problems on GPU architectures.
  • in src/Wrappers/, wrappers for MathOptInterface and NLPModels.jl are implemented.

Installation

One can install Argos with the default package manager:

pkg> add Argos

To check that everything is working as expected, please run

pkg> test Argos

By default, this command tests all the Evaluators implemented in Argos on the CPU and, if available, on a CUDA GPU.

Quickstart

The function run_opf is the entry point to Argos. It takes as input a path to a MATPOWER file and solves the associated OPF with MadNLP:

# Solve in the full-space
ips = Argos.run_opf("data/case9.m", Argos.FullSpace())

The second argument specifies the formulation used inside MadNLP to solve the OPF problem. FullSpace() implements the classical full-space formulation (as implemented in MATPOWER or PowerModels.jl). Alternatively, one may want to solve the OPF using the reduced-space formulation of Dommel and Tinney:

# Solve in the reduced-space
ips = Argos.run_opf("data/case9.m", Argos.DommelTinney())

How to use Argos' evaluators?

Argos implements two evaluators to solve the OPF problem: FullSpaceEvaluator implements the classical OPF formulation in the full space, whereas ReducedSpaceEvaluator implements the reduced-space formulation of Dommel and Tinney.

Using an evaluator

Instantiating a new evaluator from a MATPOWER file simply amounts to

# Reduced-space evaluator
nlp = Argos.ReducedSpaceEvaluator("case57.m")
# Full-space evaluator
flp = Argos.FullSpaceEvaluator("case57.m")

An initial optimization variable can be computed as

u = Argos.initial(nlp)

The variable u is the control that will be used throughout the optimization. Once a new point u is obtained, one can refresh all the structures inside nlp with:

Argos.update!(nlp, u)

Once the structures are refreshed, the other callbacks can be evaluated as well:

Argos.objective(nlp, u) # objective
Argos.gradient(nlp, u)  # reduced gradient
Argos.jacobian(nlp, u)  # reduced Jacobian
Argos.hessian(nlp, u)   # reduced Hessian
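
As a minimal illustration of how these callbacks fit together, the sketch below runs a few plain gradient-descent steps using only the functions documented above. The step size and iteration count are arbitrary, and the loop ignores variable bounds and constraints; it only demonstrates the call order (update! before the other callbacks), not a practical OPF solver.

```julia
using Argos

nlp = Argos.ReducedSpaceEvaluator("case57.m")
u = Argos.initial(nlp)
for iter in 1:10
    Argos.update!(nlp, u)                 # refresh internal structures at u
    obj = Argos.objective(nlp, u)         # objective at the current point
    g = Argos.gradient(nlp, u)            # reduced gradient
    u = u .- 1e-5 .* g                    # illustrative descent step
    @info "iter=$iter objective=$obj"
end
```

In practice, the evaluators are meant to be passed to a proper optimization solver (e.g. through the MOI or NLPModels wrappers described below) rather than driven by hand.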

MOI wrapper

Argos implements a wrapper to MathOptInterface to solve the optimal power flow problem with any nonlinear optimization solver compatible with MathOptInterface:

nlp = Argos.ReducedSpaceEvaluator("case57.m")
optimizer = Ipopt.Optimizer() # MOI optimizer
# Set the solver tolerance above that of the inner Newton-Raphson subsolver
MOI.set(optimizer, MOI.RawOptimizerAttribute("tol"), 1e-5)
# Solve reduced space problem
solution = Argos.optimize!(optimizer, nlp)

NLPModels wrapper

Alternatively, one can use NLPModels.jl to wrap any evaluators implemented in Argos. This amounts simply to:

nlp = Argos.FullSpaceEvaluator("case57.m")
# Wrap in NLPModels
model = Argos.OPFModel(nlp)

x0 = NLPModels.get_x0(model)
obj = NLPModels.obj(model, x0)

Once the evaluator is wrapped inside NLPModels.jl, we can leverage any solver implemented in JuliaSmoothOptimizers to solve the OPF problem.
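
For instance, assuming MadNLP.jl is installed (it accepts any NLPModels model), the wrapped problem can be passed to it directly; the option shown is illustrative:

```julia
using Argos, MadNLP, NLPModels

nlp = Argos.FullSpaceEvaluator("case57.m")
model = Argos.OPFModel(nlp)          # NLPModels wrapper
# madnlp accepts any AbstractNLPModel
results = madnlp(model; print_level=MadNLP.INFO)
```

Any other JuliaSmoothOptimizers solver operating on an AbstractNLPModel could be substituted in the same way.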

How to offload the solution of the OPF to the GPU?

ExaPF.jl uses KernelAbstractions.jl to implement all its core operations, so offloading the computation to GPU accelerators is straightforward. Argos.jl inherits this behavior, and all evaluators can be instantiated on GPU accelerators simply as

using CUDAKernels # Load CUDA backend for KernelAbstractions
using ArgosCUDA
nlp = Argos.ReducedSpaceEvaluator("case57.m"; device=CUDADevice())

When doing so, all kernels are instantiated on the GPU to avoid memory transfers between the host and the device. The sparse linear-algebra operations are handled by cuSPARSE, and the sparse factorizations are performed by cusolverRF via the Julia wrapper CUSOLVERRF.jl. This package is loaded through the ArgosCUDA.jl package included in lib/. When offloading the computation to the GPU, the reduced Hessian can be evaluated in parallel.

Batch evaluation of the reduced Hessian

Instead of computing the reduced Hessian one Hessian-vector product at a time, the Hessian-vector products can be evaluated in batches. To activate batch evaluation of the reduced Hessian, specify the number of Hessian-vector products to perform per batch:

nlp = Argos.ReducedSpaceEvaluator("case57.m"; device=CUDADevice(), nbatch_hessian=8)

Note that on large instances, the batch computation can be demanding in terms of GPU memory.


argos.jl's Issues

Add support for CUDA runtime 12.3

Argos is broken with CUDA 12.3. When running the test suite, we face the following error:

[CUDA] Solve OPF with Argos.BieglerReduction(): Error During Test at /home/fpacaud/dev/Argos.jl/test/Algorithms/MadNLP_wrapper.jl:121
  Got exception outside of a @test                                                                                     
  CUDA error: misaligned address (code 716, ERROR_MISALIGNED_ADDRESS)  

The issue arises when calling cusolverRF in the solution of the KKT system:
https://github.com/exanauts/Argos.jl/blob/master/src/KKT/reduced_newton.jl#L438-L439
