kibaekkim / DualDecomposition.jl
An algorithmic framework for parallel dual decomposition methods in Julia
License: MIT License
It is not trivial to extend this package to the distributionally robust variant of stochastic programs (e.g., the formulation of Corollary 1 in http://www.optimization-online.org/DB_FILE/2020/04/7723.pdf). The main obstacle is the Lagrangian multipliers defined for non-coupling variables (i.e., y in the paper). To address this, the package design may need to be improved.
To this end, we need to resolve this issue (kibaekkim/BundleMethod.jl#23) first.
Regarding the technical manuscript, I have created two branches:
- apps/all: a branch to combine all applications later
- apps/uc: stochastic unit commitment application
Can you guys also create a branch like that (e.g., apps/time, apps/network, apps/multi) to work on your own application?
The package does not consider termination statuses for block models other than MOI.OPTIMAL and MOI.LOCALLY_SOLVED. For block models that are non-convex, it is necessary to consider other possibilities.
Add a wrapper function for the code in these lines, so that user-defined extensions are allowed.
The current implementation uses BundleMethod.jl, but this can be generalized to use other methods (e.g., the projected subgradient method).
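For reference, a projected subgradient master could be sketched as follows. This is an illustration only, not the package's API; the function name, the step-size rule, and the `subgrad` callback are assumptions.

```julia
# Illustrative sketch of a projected-subgradient dual update (not the package's API).
# `subgrad(λ)` is assumed to return a subgradient of the Lagrangian dual at λ.
function projected_subgradient!(λ::Vector{Float64}, subgrad; maxiter = 100)
    for k in 1:maxiter
        g = subgrad(λ)       # subgradient of the (concave) dual function
        α = 1.0 / k          # diminishing step size
        λ .+= α .* g         # ascent step
        # No projection is needed while the multipliers of the nonanticipativity
        # constraints are unconstrained; otherwise, project λ back onto the
        # feasible multiplier set here.
    end
    return λ
end
```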
There seems to be no way to change the parameters of the bundle method.
I have Julia version 1.2.0 and JuMP version 0.19.2. I understand that JuDD.jl is not compatible with this version of JuMP. I tried the following (but it still does not work). I went to the Pkg REPL:
Pkg> activate tutorial
Then the following happened:
(tutorial) pkg> add https://github.com/kibaekkim/BundleMethod.jl.git
Updating git-repo https://github.com/kibaekkim/BundleMethod.jl.git
[ Info: Assigning UUID 03bf20df-d8a0-5746-8978-d623d54f279b to BundleMethod
Updating git-repo https://github.com/kibaekkim/BundleMethod.jl.git
Resolving package versions...
Updating C:\Users\...\tutorial\Project.toml
[03bf20df] + BundleMethod v0.0.0 #master (https://github.com/kibaekkim/BundleMethod.jl.git)
Updating C:\Users\...\tutorial\Manifest.toml
[03bf20df] + BundleMethod v0.0.0 #master (https://github.com/kibaekkim/BundleMethod.jl.git)
(tutorial) pkg> add "https://github.com/kibaekkim/JuDD.jl", rev="mpi"
Cloning git-repo https://github.com/kibaekkim/JuDD.jl
Updating git-repo https://github.com/kibaekkim/JuDD.jl
Updating git-repo https://github.com/kibaekkim/JuDD.jl
Cloning git-repo ,
ERROR: failed to clone from ,, error: GitError(Code:ERROR, Class:Net, unsupported URL protocol)
(tutorial) pkg> add https://github.com/kibaekkim/JuDD.jl
Updating git-repo https://github.com/kibaekkim/JuDD.jl
Updating git-repo https://github.com/kibaekkim/JuDD.jl
Resolving package versions...
ERROR: Unsatisfiable requirements detected for package StructJuMPSolverInterface [7a00d66c]:
StructJuMPSolverInterface [7a00d66c] log:
├─StructJuMPSolverInterface [7a00d66c] has no known versions!
└─restricted to versions * by JuDD [23e7f524] — no versions left
└─JuDD [23e7f524] log:
├─possible versions are: 0.0.0 or uninstalled
└─JuDD [23e7f524] is fixed to version 0.0.0
(tutorial) pkg> add https://github.com/StructJuMP/StructJuMPSolverInterface.jl.git
Updating git-repo https://github.com/StructJuMP/StructJuMPSolverInterface.jl.git
[ Info: Assigning UUID 7a00d66c-3a07-54c1-b1e3-917bd47595ee to StructJuMPSolverInterface
Updating git-repo https://github.com/StructJuMP/StructJuMPSolverInterface.jl.git
Resolving package versions...
Installed MPI ────────────── v0.10.0
Installed MathOptInterface ─ v0.9.1
Updating C:\Users\...\tutorial\Project.toml
[7a00d66c] + StructJuMPSolverInterface v0.0.0 #master (https://github.com/StructJuMP/StructJuMPSolverInterface.jl.git)
Updating C:\Users\...\tutorial\Manifest.toml
[6e4b80f9] + BenchmarkTools v0.4.2
[b6b21f68] + Ipopt v0.6.0
[682c06a0] + JSON v0.21.0
[da04e1cc] + MPI v0.10.0
[b8f27783] + MathOptInterface v0.9.1
[69de0a69] + Parsers v0.3.6
[ae029012] + Requires v0.5.2
[7a00d66c] + StructJuMPSolverInterface v0.0.0 #master (https://github.com/StructJuMP/StructJuMPSolverInterface.jl.git)
Building MPI → C:\Users\...\.julia\packages\MPI\RzbIV\deps\build.log
┌ Error: Error building MPI
:
│ ERROR: LoadError: No MPI library found.
│ Ensure an MPI implementation is loaded, or set the JULIA_MPI_PATH
variable.
│ Stacktrace:
│ [1] error(::String) at .\error.jl:33
│ [2] top-level scope at C:\Users....julia\packages\MPI\RzbIV\deps\build.jl:42
│ [3] include at .\boot.jl:328 [inlined]
│ [4] include_relative(::Module, ::String) at .\loading.jl:1094
│ [5] include(::Module, ::String) at .\Base.jl:31
│ [6] include(::String) at .\client.jl:431
│ [7] top-level scope at none:5
│ in expression starting at C:\Users....julia\packages\MPI\RzbIV\deps\build.jl:41
└ @ Pkg.Operations C:\cygwin\home\Administrator\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.2\Pkg\src\backwards_compatible_isolation.jl:647
(tutorial) pkg> add https://github.com/JuliaParallel/MPI.jl.git
Cloning git-repo https://github.com/JuliaParallel/MPI.jl.git
Updating git-repo https://github.com/JuliaParallel/MPI.jl.git
Updating git-repo https://github.com/JuliaParallel/MPI.jl.git
Resolving package versions...
Updating C:\Users\...\tutorial\Project.toml
[da04e1cc] + MPI v0.10.0 #master (https://github.com/JuliaParallel/MPI.jl.git)
Updating C:\Users\...\tutorial\Manifest.toml
[da04e1cc] ~ MPI v0.10.0 ⇒ v0.10.0 #master (https://github.com/JuliaParallel/MPI.jl.git)
Building MPI → C:\Users\...\.julia\packages\MPI\Dw1uR\deps\build.log
┌ Error: Error building MPI
:
│ ERROR: LoadError: No MPI library found.
│ Ensure an MPI implementation is loaded, or set the JULIA_MPI_PATH
variable.
│ Stacktrace:
│ [1] error(::String) at .\error.jl:33
│ [2] top-level scope at C:\Users....julia\packages\MPI\Dw1uR\deps\build.jl:42
│ [3] include at .\boot.jl:328 [inlined]
│ [4] include_relative(::Module, ::String) at .\loading.jl:1094
│ [5] include(::Module, ::String) at .\Base.jl:31
│ [6] include(::String) at .\client.jl:431
│ [7] top-level scope at none:5
│ in expression starting at C:\Users....julia\packages\MPI\Dw1uR\deps\build.jl:41
└ @ Pkg.Operations C:\cygwin\home\Administrator\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.2\Pkg\src\backwards_compatible_isolation.jl:647
Finally, MPI.jl and StructJuMPSolverInterface.jl are not loading.
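For what it's worth, the build error above states that an MPI implementation must be installed and, if needed, pointed to via the JULIA_MPI_PATH variable. A minimal sketch (the path below is a placeholder for the actual install location, e.g., Microsoft MPI on Windows):

```julia
# Point MPI.jl at an installed MPI library before rebuilding (path is a placeholder).
ENV["JULIA_MPI_PATH"] = raw"C:\Program Files\Microsoft MPI"
using Pkg
Pkg.build("MPI")
```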
I was using dual decomposition to solve a problem and encountered the following error:
Attempting to use an MPI routine after finalizing MPICH
It did not tell which line triggered the error. The error can be reproduced by running the file test_DD.jl. The run!() function of dual decomposition was used in the file util.jl:
https://bitbucket.org/kibaekkim/energystorage/commits/773d73b383de54779c66e9c6372bec4f7d67c387
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml
to include issue comment triggers.
Please see this post on Discourse for instructions and more details.
Returns an assertion error with mismatching dimensions between bundle_init and num_all_coupling_variables at https://github.com/hideakiv/DRMSMIP.jl/blob/master/examples/investment_mpi.jl
Hi,
I was just wondering what is the easiest way to query the solution of the sub-models (i.e., the variable values at the optimal point), given that the DD instance has been solved.
When I gave it a try, I got the following status: OPTIMIZE_NOT_CALLED::TerminationStatusCode = 0
Thanks in advance!
Hi authors,
I tried to print out the results of my model after optimization completed. I used the standard form of JuMP; for example, I have variable z in the 1st stage and variable x in the 2nd stage (z[1:J] binary, x[1:I] >= 0). The standard syntax in JuMP should be:
x = model[2, :x]
for i in 1:I
    println(value(x[i]))
end
z = model[1, :z]
for j in 1:J
    println(value(z[j]))
end
However, it ran into the following error:
┌ Warning: The model has been modified since the last call to `optimize!` (or `optimize!` has not been called yet). If you are iteratively querying solution information and modifying a model, query all the results first, then modify the model.
└ @ JuMP C:\Users\ango1\.julia\packages\JuMP\027Gt\src\optimizer_interface.jl:695
ERROR: OptimizeNotCalled()
Stacktrace:
[1] get(model::Model, attr::MathOptInterface.VariablePrimal, v::VariableRef)
@ JuMP C:\Users\ango1\.julia\packages\JuMP\027Gt\src\optimizer_interface.jl:701
[2] value(v::VariableRef; result::Int64)
@ JuMP C:\Users\ango1\.julia\packages\JuMP\027Gt\src\variables.jl:1703
[3] value(v::VariableRef)
@ JuMP C:\Users\ango1\.julia\packages\JuMP\027Gt\src\variables.jl:1702
[4] top-level scope
@ c:\Users\ango1\MyDual\Testing09.jl:160
It would be great if you could show the way to print out the value of decision variables. Thank you!
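Following the JuMP warning above, one common approach is to query and cache all solution values immediately after `optimize!`, before anything modifies the model. A sketch, assuming the sub-model is a plain JuMP `Model` with variables registered as `:x` and `:z`:

```julia
using JuMP

optimize!(model)
if termination_status(model) in (MOI.OPTIMAL, MOI.LOCALLY_SOLVED)
    # Cache all solution values before any further model modification.
    xval = value.(model[:x])
    zval = value.(model[:z])
    println(xval)
    println(zval)
end
```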
The current implementation requires an Allgatherv on Vector{CouplingVariableRef}: https://github.com/kibaekkim/DualDecomposition.jl/blob/mpi/src/LagrangeDual.jl#L50. Although this needs to be done only once at the beginning, it seems very slow. Any idea to improve the performance?
The following message sometimes shows up when running code that does not initialize MPI, and the program is killed:
"Attempting to use an MPI routine after finalizing MPICH"
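One defensive pattern is to check MPI's state before making calls; MPI.jl exposes `MPI.Initialized()` and `MPI.Finalized()` for this. A minimal sketch:

```julia
using MPI

# Initialize MPI only if it has not been initialized yet.
MPI.Initialized() || MPI.Init()

# ... dual decomposition work ...

# Guard any late MPI call: a routine issued after finalization would error.
if MPI.Initialized() && !MPI.Finalized()
    MPI.Barrier(MPI.COMM_WORLD)
end
```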
Would it be better to turn off all the heuristics?
I just found a Julia package for generating scenario trees: https://github.com/kirui93/ScenTrees.jl, and need to see if we can use this.
@michel2323 @youngdae Can we merge branches master and julia0.4.7, and merge branches mpi and mpi-julia0.4.7? How about creating a directory legacy to put stuff from julia0.4.7, checking VERSION, and redirecting to the directory (whenever needed)? Also, update the example.
We need a better design to allow user-defined variables and constraints to be added to the Lagrangian master problem without modifying the code.
This is conditioned on the coupling variables being free, e.g., during the temporal decomposition.
@hideakiv Can you add your multistage stochastic optimization example (not the DRO one) to this repo (in the example directory)? You can create a PR.
With the mpi branch, we cannot run with more than one process. If we run with one process, the dual decomposition does not converge.
I think the current check of the solution status is too restrictive, as it allows only MOI.OPTIMAL or MOI.LOCALLY_SOLVED. See https://github.com/kibaekkim/DualDecomposition.jl/blob/master/src/LagrangeDual.jl#L106. We may want to allow more statuses.
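A sketch of a more permissive check (the additional codes are standard MOI termination statuses; which ones are safe to accept depends on how the resulting bounds are used):

```julia
using JuMP

# Statuses beyond MOI.OPTIMAL / MOI.LOCALLY_SOLVED that might be acceptable.
const ACCEPTABLE_STATUSES = (
    MOI.OPTIMAL,
    MOI.LOCALLY_SOLVED,
    MOI.ALMOST_OPTIMAL,
    MOI.ALMOST_LOCALLY_SOLVED,
)

is_acceptable(m::Model) = termination_status(m) in ACCEPTABLE_STATUSES
```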
Need to address these deprecation warnings:
┌ Warning: `Allgatherv(sendbuf, counts::Vector{Cint}, comm::Comm)` is deprecated, use `Allgatherv!(sendbuf, VBuffer(similar(sendbuf, sum(counts)), counts), comm)` instead.
│ caller = allcollect(::Array{DualDecomposition.CouplingVariableKey,1}) at parallel.jl:75
└ @ DualDecomposition.parallel ~/REPOS/DualDecomposition.jl/src/parallel.jl:75
┌ Warning: `Allgatherv(sendbuf, counts::Vector{Cint}, comm::Comm)` is deprecated, use `Allgatherv!(sendbuf, VBuffer(similar(sendbuf, sum(counts)), counts), comm)` instead.
│ caller = allcollect(::Array{DualDecomposition.CouplingVariableKey,1}) at parallel.jl:76
└ @ DualDecomposition.parallel ~/REPOS/DualDecomposition.jl/src/parallel.jl:76
┌ Warning: `Allgatherv(sendbuf, counts::Vector{Cint}, comm::Comm)` is deprecated, use `Allgatherv!(sendbuf, VBuffer(similar(sendbuf, sum(counts)), counts), comm)` instead.
│ caller = combine_dict(::Dict{Int64,Float64}) at parallel.jl:104
└ @ DualDecomposition.parallel ~/REPOS/DualDecomposition.jl/src/parallel.jl:104
┌ Warning: `Gatherv(sendbuf::AbstractArray, counts::Vector{Cint}, root::Integer, comm::Comm)` is deprecated, use `Gatherv!(view(sendbuf, 1:counts[MPI.Comm_rank(comm) + 1]), if Comm_rank(comm) == root
│ VBuffer(similar(sendbuf, sum(counts)), counts)
│ else
│ nothing
│ end, root, comm)` instead.
│ caller = combine_dict(::Dict{Int64,Float64}) at parallel.jl:105
└ @ DualDecomposition.parallel ~/REPOS/DualDecomposition.jl/src/parallel.jl:105
┌ Warning: `Gatherv(sendbuf::AbstractArray, counts::Vector{Cint}, root::Integer, comm::Comm)` is deprecated, use `Gatherv!(view(sendbuf, 1:counts[MPI.Comm_rank(comm) + 1]), if Comm_rank(comm) == root
│ VBuffer(similar(sendbuf, sum(counts)), counts)
│ else
│ nothing
│ end, root, comm)` instead.
│ caller = combine_dict(::Dict{Int64,Float64}) at parallel.jl:106
└ @ DualDecomposition.parallel ~/REPOS/DualDecomposition.jl/src/parallel.jl:106
┌ Warning: `Allgatherv(sendbuf, counts::Vector{Cint}, comm::Comm)` is deprecated, use `Allgatherv!(sendbuf, VBuffer(similar(sendbuf, sum(counts)), counts), comm)` instead.
│ caller = combine_dict(::Dict{Int64,SparseArrays.SparseVector{Float64,Ti} where Ti<:Integer}) at parallel.jl:124
└ @ DualDecomposition.parallel ~/REPOS/DualDecomposition.jl/src/parallel.jl:124
┌ Warning: `Gatherv(sendbuf::AbstractArray, counts::Vector{Cint}, root::Integer, comm::Comm)` is deprecated, use `Gatherv!(view(sendbuf, 1:counts[MPI.Comm_rank(comm) + 1]), if Comm_rank(comm) == root
│ VBuffer(similar(sendbuf, sum(counts)), counts)
│ else
│ nothing
│ end, root, comm)` instead.
│ caller = combine_dict(::Dict{Int64,SparseArrays.SparseVector{Float64,Ti} where Ti<:Integer}) at parallel.jl:125
└ @ DualDecomposition.parallel ~/REPOS/DualDecomposition.jl/src/parallel.jl:125
┌ Warning: `Allgatherv(sendbuf, counts::Vector{Cint}, comm::Comm)` is deprecated, use `Allgatherv!(sendbuf, VBuffer(similar(sendbuf, sum(counts)), counts), comm)` instead.
│ caller = collect(::Array{SparseArrays.SparseVector{Float64,Ti} where Ti<:Integer,1}) at parallel.jl:86
└ @ DualDecomposition.parallel ~/REPOS/DualDecomposition.jl/src/parallel.jl:86
┌ Warning: `Gatherv(sendbuf::AbstractArray, counts::Vector{Cint}, root::Integer, comm::Comm)` is deprecated, use `Gatherv!(view(sendbuf, 1:counts[MPI.Comm_rank(comm) + 1]), if Comm_rank(comm) == root
│ VBuffer(similar(sendbuf, sum(counts)), counts)
│ else
│ nothing
│ end, root, comm)` instead.
│ caller = collect(::Array{SparseArrays.SparseVector{Float64,Ti} where Ti<:Integer,1}) at parallel.jl:87
└ @ DualDecomposition.parallel ~/REPOS/DualDecomposition.jl/src/parallel.jl:87
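The replacement API is spelled out in the messages themselves; a minimal sketch of the new `Allgatherv!` form (buffer contents are placeholders):

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
sendbuf = rand(3)                                  # placeholder local data
counts  = Cint[3 for _ in 1:MPI.Comm_size(comm)]   # per-rank element counts
# Old: Allgatherv(sendbuf, counts, comm)
# New: receive into a VBuffer sized by the per-rank counts.
recv = MPI.Allgatherv!(sendbuf, VBuffer(similar(sendbuf, sum(counts)), counts), comm)
```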
While updating the package, I have moved the ADMM part to a separate branch. @youngdae please feel free to update the method and make a PR.
Need to add a function to set an objective bound on the Lagrangian master for termination.
Hi author,
My model is quite similar to the example "sslp.jl" in this repository (which has only the "x" variable in the 1st stage), but my model has two variables in the 1st stage:
@variable(model, x[1:J], Bin)
@variable(model, y >= 0) # only one variable
ERROR: MethodError: no method matching DualDecomposition.CouplingVariableRef(::Int64, ::VariableRef)
Closest candidates are:
DualDecomposition.CouplingVariableRef(::Int64, ::Any, ::VariableRef)
@ DualDecomposition C:\Users\ango1.julia\packages\DualDecomposition\rqrRm\src\BlockModel.jl:25
DualDecomposition.CouplingVariableRef(::DualDecomposition.CouplingVariableKey, ::VariableRef)
@ DualDecomposition C:\Users\ango1.julia\packages\DualDecomposition\rqrRm\src\BlockModel.jl:20
Stacktrace:
[1] top-level scope
@ c:\Users\ango1\MyDSPOpt\Testing05.jl:129
I understand there should be at least two variables to push, but if I don't use "push", how does the model determine that variable "y" belongs to the first stage? Hope to hear from you. Thanks.
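Based on the MethodError above, CouplingVariableRef takes three arguments: a block id, a key, and the variable reference. A hypothetical sketch for registering both first-stage variables (the key strings and the `block_id` name are illustrative, not confirmed API usage):

```julia
# Hypothetical sketch based on the three-argument constructor in the error message.
coupling_variables = DualDecomposition.CouplingVariableRef[]
for j in 1:J
    push!(coupling_variables, DualDecomposition.CouplingVariableRef(block_id, "x[$j]", x[j]))
end
# A distinct key lets the scalar first-stage variable y be coupled as well.
push!(coupling_variables, DualDecomposition.CouplingVariableRef(block_id, "y", y))
```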
@youngdae I cannot do using ADMM. The module ADMM is not accessible, I think.
Hi,
Currently, I am trying to solve a consensus-finding problem among a set of bi-level optimization problems (implemented via https://github.com/joaquimg/BilevelJuMP.jl). For this, the DualDecomposition package would be particularly interesting, as not many distributed algorithms can deal with the non-convex NLP subproblems.
However, if the subproblems are formulated with BilevelJuMP, the package currently raises an error:
ERROR: MethodError: Cannot `convert` an object of type BilevelModel to an object of type Model
Do you think the package could be seamlessly extended to such model instances?
Thanks in advance for the response!