jonniedie / ComponentArrays.jl
Arrays with arbitrarily nested named components.
License: MIT License
Would be nice if logical indexing was supported.
# this works
y = [1,2,1,4]
y[y .> 1.0]
# this doesn't
x = ComponentArray(a=1, b=[2, 1, 4])
x[x .> 1.0]
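A possible workaround in the meantime (just a sketch; it drops the component labels) is to do the logical indexing on the flat data returned by getdata:
flat = getdata(x)       # the plain Vector backing x
flat[flat .> 1.0]       # [2, 4], a plain Vector without component names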
I was wondering whether it would be a good idea to special-case the type Diagonal
to only store the internal vector representing the diagonal of the matrix. I'm not familiar enough with the internals of ComponentArrays to know whether this would be a straightforward addition. Of course, this opens up the question of whether all structured matrices should have a special representation, and whether there would be a way to handle them all uniformly.
It looks like Zygote is using ChainRules now, so this is probably the direction to go.
Hi,
I hit this issue when converting some of my code to ComponentArrays, reduced here to an MWE.
julia> A = @SMatrix [1 2; 3 4]
2×2 SArray{Tuple{2,2},Int64,2,4} with indices SOneTo(2)×SOneTo(2):
1 2
3 4
julia> b = ComponentArray( a = 1, b = 2)
ComponentVector{Int64}(a = 1, b = 2)
julia> A*b
ERROR: MethodError: *(::SArray{Tuple{2,2},Int64,2,4}, ::ComponentArray{Int64,1,Array{Int64,1},Tuple{Axis{(a = 1, b = 2)}}}) is ambiguous. Candidates:
*(A::StaticArray{Tuple{N,M},T,2} where T where M where N, B::AbstractArray{T,1} where T) in StaticArrays at /Users/dcabecinhas/.julia/packages/StaticArrays/mlIi1/src/matrix_multiply.jl:7
*(x::AbstractArray{T,2} where T, y::ComponentArray) in ComponentArrays at /Users/dcabecinhas/.julia/packages/ComponentArrays/b9u6s/src/math.jl:16
Possible fix, define
*(::StaticArray{Tuple{N,M},T,2} where T where M where N, ::ComponentArray{T,1,A,Axes} where Axes<:Tuple{Vararg{AbstractAxis,N} where N} where A<:AbstractArray{T,1} where T)
Stacktrace:
[1] top-level scope at REPL[29]:1
Everything we need to know about the named indices of the AbstractAxis types is available at construction. There is no reason we can't just check bounds on construction and run all of the getindex/getproperty calls inbounds.
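A minimal sketch of the idea (hypothetical helper names, not the package's internals): validate every component range against the data length once when the index map is built, then serve all later lookups with @inbounds views.
function validate_idxmap(idxmap::NamedTuple, len::Int)
    # check every component's index range once, up front
    for r in values(idxmap)
        last(r) <= len || throw(BoundsError(1:len, r))
    end
    return idxmap
end
# later lookups can then safely skip the bounds check
unsafe_component(data::AbstractVector, r) = @inbounds view(data, r)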
This should work:
ca = CArray(a=5, b=[(a=20., b=0), (a=33., b=0), (a=44., b=3)], c=(a=2, b=[1., 2.]))
ca.b[1] == CArray(a=20, b=0)
Stuffing everything into Tuples and NamedTuples inside the Axis index map is starting to get difficult to maintain, especially if we want to do #11 (which we do).
Not only would this simplify quite a bit of the code, it should enable much faster performance across the board. Before introducing components with arrays of ComponentArrays, views didn't slow things down too much, but now it's a more serious issue. This is especially true in the multidimensional case.
As an added benefit, ComponentArrays could be true DenseArrays and benefit from faster linear algebra.
ComponentArrays are up to 10x slower than LabelledArrays for this problem! ComponentArrays broadcast to ComponentArrays and LabelledArrays broadcast to plain Arrays, so there is that disadvantage for these types of problems, but the disparity shouldn't be nearly this high. All of the microbenchmarks show that ComponentArrays should be almost the same as normal Arrays. From profiling, it appears that there are plain flat indices making it through to the specialized @generated _getindex, which shouldn't be happening. I suspect that's the main problem.
using ComponentArrays
using DifferentialEquations
using LabelledArrays
function f1(du,u,p,t)
du.x .= -1 .* u.x .* u.y .* p[1]
du.y .= -1 .* u.y .* p[2]
end
const n = 1000
p = [0.1,0.1]
lu_0 = @LArray fill(1000.0,2*n) (x = (1:n),y = (n+1:2*n))
cu_0 = ComponentArray(x=fill(1000.0, n), y=fill(1000.0, n))
lprob1 = ODEProblem(f1,lu_0,(0,100.0),p)
cprob1 = ODEProblem(f1,cu_0,(0,100.0),p)
lsol = solve(lprob1, Rodas5());
lsol = solve(lprob1, Rodas5(autodiff=false));
csol = solve(cprob1, Rodas5());
csol = solve(cprob1, Rodas5(autodiff=false));
Or at least better documentation.
I'm trying to index into a ComponentArray with a symbol but I get type inference issues. It looks to me like f() and h() should be equivalent, but somehow f() is not being inferred. Is this related to #9? Is there a way around this issue? (A Val-based workaround is sketched after the transcript below.)
julia> ca = ComponentArray(a=1,b=2)
ComponentVector{Int64}(a = 1, b = 2)
julia> s = :a
:a
julia> function f(ca,s)
getproperty(ca,s)
end
f (generic function with 1 method)
julia> function g(ca)
ca.a
end
g (generic function with 1 method)
julia> function h(ca)
getproperty(ca,:a)
end
h (generic function with 1 method)
julia> @code_warntype f(ca,s)
Variables
#self#::Core.Compiler.Const(f, false)
ca::ComponentVector{Int64}
s::Symbol
Body::Any
1 ─ %1 = Main.getproperty(ca, s)::Any
└── return %1
julia> @code_warntype g(ca)
Variables
#self#::Core.Compiler.Const(g, false)
ca::ComponentVector{Int64}
Body::Int64
1 ─ %1 = Base.getproperty(ca, :a)::Int64
└── return %1
julia> @code_warntype h(ca)
Variables
#self#::Core.Compiler.Const(h, false)
ca::ComponentVector{Int64}
Body::Int64
1 ─ %1 = Main.getproperty(ca, :a)::Int64
└── return %1
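One workaround that seems to help (sketch; the exact indexing call is an assumption, along the lines of what fastindices produces): lift the symbol into a Val so the component name is a compile-time constant.
f_val(ca, ::Val{s}) where {s} = ca[Val(s)]
f_val(ca, Val(:a))   # should now infer as Int64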
If f is just assignment a .= b then it doesn't allocate, but if composed with + or * it does.
using ComponentArrays
using BenchmarkTools
a = rand(3)
b = rand(3)
c = ComponentArray(a = rand(3))
d = ComponentArray(a = rand(3))
function f(a,b)
a .+= b
end
@btime f($a,$b);
# 15.156 ns (0 allocations: 0 bytes)
@btime f($c,$d);
# 42.662 ns (1 allocation: 112 bytes)
@btime f($a,$c);
# 15.188 ns (0 allocations: 0 bytes)
@btime f($c,$a);
# 42.043 ns (1 allocation: 112 bytes)
ComponentArrays.jl/src/broadcasting.jl, line 74 in 85f99a3
If f changes the type of the numerical values, they will not fit in new_data.
I'm trying to save some memory while running large ensemble simulations. Is there an easy way around this issue? It actually works for a single save_idxs, if passed as a single Int, but not for ranges or arrays. (A possible workaround is sketched after the stack trace below.)
To reproduce
using ComponentArrays
using OrdinaryDiffEq
u0 = ComponentArray(a=1.0,b=2.0,c=3.0)
f(u,p,t) = 1.01*u
tspan = (0.0,1.0)
prob = ODEProblem(f,u0,tspan)
sol = solve(prob, Tsit5(), save_idxs=[1,2])
Output
ERROR: MethodError: Cannot `convert` an object of type Array{Float64,1} to an object of type SubArray{Float64,1,Array{Float64,1},Tuple{Array{Int64,1}},false}
Closest candidates are:
convert(::Type{T}, ::T) where T<:AbstractArray at abstractarray.jl:14
convert(::Type{T}, ::LinearAlgebra.Factorization) where T<:AbstractArray at /Applications/Julia-1.5.app/Contents/Resources/julia/share/julia/stdlib/v1.5/LinearAlgebra/src/factorization.jl:55
convert(::Type{T}, ::T) where T at essentials.jl:171
...
Stacktrace:
[1] setindex!(::Array{SubArray{Float64,1,Array{Float64,1},Tuple{Array{Int64,1}},false},1}, ::Array{Float64,1}, ::Int64) at ./array.jl:847
[2] _unsafe_copyto!(::Array{SubArray{Float64,1,Array{Float64,1},Tuple{Array{Int64,1}},false},1}, ::Int64, ::Array{Array{Float64,1},1}, ::Int64, ::Int64) at ./array.jl:257
[3] unsafe_copyto! at ./array.jl:311 [inlined]
[4] _copyto_impl! at ./array.jl:335 [inlined]
[5] copyto! at ./array.jl:321 [inlined]
[6] copyto! at ./array.jl:347 [inlined]
[7] copyto_axcheck! at ./abstractarray.jl:946 [inlined]
[8] Array at ./array.jl:562 [inlined]
[9] convert at ./array.jl:554 [inlined]
[10] push!(::Array{Array{SubArray{Float64,1,Array{Float64,1},Tuple{Array{Int64,1}},false},1},1}, ::Array{Array{Float64,1},1}) at ./array.jl:934
[11] copyat_or_push!(::Array{Array{SubArray{Float64,1,Array{Float64,1},Tuple{Array{Int64,1}},false},1},1}, ::Int64, ::Array{SubArray{Float64,1,Array{Float64,1},Tuple{Array{Int64,1}},false},1}, ::Type{Val{true}}) at /Users/dcabecinhas/.julia/packages/RecursiveArrayTools/Wte2p/src/utils.jl:72
[12] copyat_or_push! at /Users/dcabecinhas/.julia/packages/RecursiveArrayTools/Wte2p/src/utils.jl:62 [inlined]
[13] __init(::ODEProblem{ComponentVector{Float64},Tuple{Float64,Float64},false,DiffEqBase.NullParameters,ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem}, ::Tsit5, ::Tuple{}, ::Tuple{}, ::Tuple{}, ::Type{Val{true}}; saveat::Tuple{}, tstops::Tuple{}, d_discontinuities::Tuple{}, save_idxs::Array{Int64,1}, save_everystep::Bool, save_on::Bool, save_start::Bool, save_end::Bool, callback::Nothing, dense::Bool, calck::Bool, dt::Float64, dtmin::Nothing, dtmax::Float64, force_dtmin::Bool, adaptive::Bool, gamma::Rational{Int64}, abstol::Nothing, reltol::Nothing, qmin::Rational{Int64}, qmax::Int64, qsteady_min::Int64, qsteady_max::Int64, qoldinit::Rational{Int64}, fullnormalize::Bool, failfactor::Int64, beta1::Nothing, beta2::Nothing, maxiters::Int64, internalnorm::typeof(DiffEqBase.ODE_DEFAULT_NORM), internalopnorm::typeof(LinearAlgebra.opnorm), isoutofdomain::typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN), unstable_check::typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK), verbose::Bool, timeseries_errors::Bool, dense_errors::Bool, advance_to_tstop::Bool, stop_at_next_tstop::Bool, initialize_save::Bool, progress::Bool, progress_steps::Int64, progress_name::String, progress_message::typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE), userdata::Nothing, allow_extrapolation::Bool, initialize_integrator::Bool, alias_u0::Bool, alias_du0::Bool, initializealg::OrdinaryDiffEq.DefaultInit, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /Users/dcabecinhas/.julia/packages/OrdinaryDiffEq/J4PxP/src/solve.jl:413
[14] #__solve#391 at /Users/dcabecinhas/.julia/packages/OrdinaryDiffEq/J4PxP/src/solve.jl:4 [inlined]
[15] #solve_call#456 at /Users/dcabecinhas/.julia/packages/DiffEqBase/kRzKx/src/solve.jl:65 [inlined]
[16] solve_up(::ODEProblem{ComponentVector{Float64},Tuple{Float64,Float64},false,DiffEqBase.NullParameters,ODEFunction{false,typeof(f),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem}, ::Nothing, ::ComponentVector{Float64}, ::DiffEqBase.NullParameters, ::Tsit5; kwargs::Base.Iterators.Pairs{Symbol,Array{Int64,1},Tuple{Symbol},NamedTuple{(:save_idxs,),Tuple{Array{Int64,1}}}}) at /Users/dcabecinhas/.julia/packages/DiffEqBase/kRzKx/src/solve.jl:86
[17] #solve#457 at /Users/dcabecinhas/.julia/packages/DiffEqBase/kRzKx/src/solve.jl:74 [inlined]
[18] top-level scope at REPL[7]:1
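A possible workaround until this is fixed (sketch): skip save_idxs, save the full state, and slice out the components afterwards.
sol = solve(prob, Tsit5())
ab_trajectory = [u[1:2] for u in sol.u]   # keep only the first two components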
I think this is due to the fact that vcat/hcat on a VectorOfArray of ComponentArrays splats out the vector. It would be better to call everything on the inner array types and just add axes after the fact.
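A rough sketch of that suggestion (the two-axis constructor call with FlatAxis here is an assumption): concatenate the plain data of each ComponentVector, then re-attach the shared axis to the first dimension of the result.
using ComponentArrays
# hypothetical helper: hcat on the flat data, then put the axis back on dim 1
function hcat_components(xs::ComponentVector...)
    data = reduce(hcat, map(getdata, xs))
    return ComponentArray(data, getaxes(xs[1])[1], FlatAxis())
end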
We need static ComponentArrays. Here is what is holding that back:
- Need to be able to hold matrices that aren't <:StridedArray. May need to make a separate StaticComponentArray or something in the short term to work around this. See #35.
- Need a faster constructor because ComponentArrays will need to be made on the fly in inner loops. Maybe #29 would help. Maybe Setfield.jl is a good way to start experimenting with this so we don't have to overhaul the constructor right away.
It'd be great to add support for ReverseDiff. ForwardDiff is great, and even FiniteDiff works fine. But with ReverseDiff I get an error:
using ComponentArrays
using ForwardDiff
using ReverseDiff
using FiniteDiff
using Parameters: @unpack
ca = ComponentArray(x = [1, 2], θ = [1.0, 100.0], deg = 2)
function F(ca)
@unpack x, θ, deg = ca
(θ[1] - x[1])^deg + θ[2] * (x[2] - x[1]^deg)^deg
end
gradsF = ForwardDiff.gradient(ca -> F(ca), ca).x
gradsFD = FiniteDiff.finite_difference_gradient(ca -> F(ca), ca).x
gradsR = ReverseDiff.gradient(ca -> F(ca), ca).x
# ERROR: type TrackedArray has no field x
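One thing that might be worth trying (untested sketch; whether ComponentArrays' indexing cooperates with TrackedArray is exactly what is in question here): differentiate with respect to the flat data and rebuild the ComponentArray inside the closure, since the (data, axes) constructor doesn't mutate.
ax = getaxes(ca)
flat = getdata(ca)
gradsR2 = ReverseDiff.gradient(v -> F(ComponentArray(v, ax...)), flat)   # hypothetical workaround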
Thanks for the awesome package! It's been very helpful in putting together some complicated models. One of my simulations has a matrix of ODEs and I tried using ComponentArrays to handle it, but it seems to flatten it into a vector. MWE:
julia> c = ComponentArray(a = 1, b = rand(2,2))
ComponentArray{Float64}(a = 1.0, b = [0.24410936846253284, 0.7216454349529038, 0.7719341762466598, 0.5748953546713345])
julia> c[:b]
4-element view(::Array{Float64,1}, 2:5) with eltype Float64:
0.24410936846253284
0.7216454349529038
0.7719341762466598
0.5748953546713345
Instead I was expecting something like this:
julia> c[:b]
4-element view(::Array{Float64,2}, 2:5, 2:5) with eltype Float64:
0.244109 0.771934
0.721645 0.574895
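Until the shape is preserved, a workaround (sketch) is to reshape the flat view back yourself:
b_matrix = reshape(c[:b], 2, 2)   # 2×2 view over the same underlying data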
Currently this fails because the CArray constructor falls back to treating a CArray field input as a plain AbstractArray:
ca = CArray(a=1, b=[1,5,2])
ca2 = CArray(a=3, b=5, c=ca)
@test ca2.c isa CArray
Need to add a method for CArrays as field value inputs.
When there are physical conversions between components, we often have things like
x.comp1.out = fun(x.comp2.in)
However, when solving differential equations, we must have the same number of states as the number of equations, so we cannot actually introduce the new x.comp2.in variable without adding the trivial algebraic equation
du[i] = x.comp1.out - fun(x.comp2.in)
I wonder if we can add some kind of mechanism such that x.comp2.in directly returns fun(x.comp1.out).
Possible syntax:
x = ComponentArray(
comp1 = ComponentArray(out=1),
comp2 = ComponentArray(in=AliasVar(UpperLevel().comp1.out, fun)),
)
where UpperLevel signifies that the comp2 array must keep a reference to the upper-level ComponentArray x.
Subtypes of AbstractAxis should be true axes, in that they will be returned by Base.axes instead of needing a separate getaxes function. This will make the broadcasting machinery much simpler because we can use the default broadcasting methods.
The standard constructor goes through the NamedTuple and pushes values to the data array one at a time. There are two problems with this:
- It's really slow. Now that there is a constructor for creating new ComponentArrays with additional fields from old ones, it is reasonable to expect to use this in a hot loop. With a do block, it should be a lot faster.
- Zygote doesn't allow array mutation, so the standard ComponentArray constructor is not differentiable (the ComponentArray(data, axes) one is, though).
Both of these are related to issue #22
For x'*x, the result should be a scalar, but it currently computes a vector with length 1. We can either make Axis a proper axes type so it can be returned by the axes function (this should make adjoints/transposes "just work"), or handle CArrays of Adjoints.
Should have probably done this from the start, as it is easier to infer what a ComponentArray does from its name than a CArray. This also follows the Julia naming convention for packages that mainly export one type.
Will probably keep CArray around for a while with a deprecation warning in case there are some people using this package already.
It looks like if you use ComponentArrays with Sundials, it "succeeds" but the solution is wrong, e.g. here it doesn't move from the initial value even though the answer should be e.
julia> using DiffEqBase, Sundials, ComponentArrays
julia> prob = ODEProblem((u,p,t)->u, ComponentArray(x=ones(10)), (0.,1.));
julia> sol = solve(prob, CVODE_BDF(linear_solver=:BCG), reltol=1e-3);
julia> sol(1)
ComponentVector{Float64}(x = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
This is Julia 1.5.2, ComponentArrays v0.8.7, and Sundials v4.2.6.
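A possible workaround until the cause is tracked down (just a sketch): hand Sundials the flat data rather than the ComponentArray wrapper.
julia> prob_flat = ODEProblem((u,p,t)->u, getdata(ComponentArray(x=ones(10))), (0.,1.));
julia> sol_flat = solve(prob_flat, CVODE_BDF(linear_solver=:BCG), reltol=1e-3);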
ComponentMatrix and ComponentVector are exported functions, but are not mentioned in the API documentation here.
- vcat of two ComponentVectors where they share any of the same component names should become a plain Array
- cat defined
- cats, if possible
A few of the linear algebra fallbacks from fixing #33 are slower than I thought they would be. With v0.6.6, ComponentArrays now subtype from DenseArray to get the faster BLAS routines without trying to go back to the ambiguity jungle of method forwarding. The problem is we have to give up having inner non-DenseArrays like StaticArrays and similar inner array types for now. This shouldn't be breaking because ComponentArray{SVector{2}}(a = 5, b = 2) was an undocumented feature anyway, but it's definitely something that needs to be fixed. I want GPU and static ComponentArrays and I don't want to make a separate type for them.
It would be nice to be able to do:
julia> using ComponentArrays, StaticArrays
julia> x = ComponentArray{SVector{7, Float32}}(a=4, b=[1, 9, 3], c=(a=1.5, b=[7, 3.2]))
ComponentArray{SVector{7, Float32}}(a = 4.0, b = [1.0, 9.0, 3.0], c = (a = 1.5, b = [7.0, 3.2]))
Also, it would be nice for it to print a shortened version of the array type, like the output above, when the type is something other than a plain Array. Basically, it should print out its own constructor. Since this might be hard to do for any arbitrary array type, consider whether it would still be worth printing out the type if it looked like:
ComponentArray{SArray{Tuple{7},Float32,1,7}}(a = 4.0, b = [1.0, 9.0, 3.0], c = (a = 1.5, b = [7.0, 3.2]))
It seems that the ComponentArray constructor is not differentiable.
Context: I have a two-step loss function, where I do some upfront work to estimate some parameters from the data, then predict using those parameters and others, so I'm trying to build a single parameter array combining the sets of parameters. (A possible workaround is sketched after the stack trace below.)
Reproducible code sample, adapted from docs
using ComponentArrays
using OrdinaryDiffEq
using Plots
using UnPack
using DiffEqFlux: sciml_train
using Flux: glorot_uniform, ADAM
using Optim: LBFGS
u0 = Float32[2.; 0.]
datasize = 30
tspan = (0.0f0, 1.5f0)
dense_layer(in, out) = ComponentArray(W=glorot_uniform(out, in), b=zeros(out))
function trueODEfunc(du, u, p, t)
true_A = [-0.1 2.0; -2.0 -0.1]
du .= ((u.^3)'true_A)'
end
t = range(tspan[1], tspan[2], length = datasize)
prob = ODEProblem(trueODEfunc, u0, tspan)
ode_data = Array(solve(prob, Tsit5(), saveat = t))
function dudt(u, p, t)
@unpack L1, L2 = p
return L2.W * tanh.(L1.W * u.^3 .+ L1.b) .+ L2.b
end
prob = ODEProblem(dudt, u0, tspan)
layers = (L1=dense_layer(2, 50), L2=dense_layer(50, 2))
θ = ComponentArray(u=u0, p=layers)
predict_n_ode(θ) = Array(solve(prob, Tsit5(), u0=θ.u, p=θ.p, saveat=t))
function loss_n_ode(θ)
other_params = rand(3) # simulates additional work done
θ2 = ComponentArray(u = θ.u, p = θ.p, other = other_params) # constructor
pred = predict_n_ode(θ2) # changed
loss = sum(abs2, ode_data .- pred)
return loss, pred
end
loss_n_ode(θ)
cb = function (θ, loss, pred; doplot=false)
display(loss)
# plot current prediction against data
pl = scatter(t, ode_data[1,:], label = "data")
scatter!(pl, t, pred[1,:], label = "prediction")
display(plot(pl))
return false
end
cb(θ, loss_n_ode(θ)...)
data = Iterators.repeated((), 1000)
res1 = sciml_train(loss_n_ode, θ, ADAM(0.05); cb=cb, maxiters=100)
cb(res1.minimizer, loss_n_ode(res1.minimizer)...; doplot=true)
res2 = sciml_train(loss_n_ode, res1.minimizer, LBFGS(); cb=cb)
cb(res2.minimizer, loss_n_ode(res2.minimizer)...; doplot=true)
Error message:
ERROR: type NamedTuple has no field axes
Stacktrace:
[1] getproperty at .\Base.jl:33 [inlined]
[2] getindex at C:\Users\username\.julia\packages\ComponentArrays\fNphq\src\set_get.jl:49 [inlined]
[3] _broadcast_getindex_evalf at .\broadcast.jl:631 [inlined]
[4] _broadcast_getindex at .\broadcast.jl:604 [inlined]
[5] (::Base.Broadcast.var"#19#20"{Base.Broadcast.Broadcasted{Base.Broadcast.Style{Tuple},Nothing,typeof(getindex),Tuple{Tuple{Axis{(L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))}},Base.Broadcast.Broadcasted{Base.Broadcast.Style{Tuple},Nothing,typeof(ComponentArrays.getval),Tuple{Tuple{DataType}}}}}})(::Int64) at .\broadcast.jl:1024
[6] ntuple at .\ntuple.jl:41 [inlined]
[7] copy at .\broadcast.jl:1024 [inlined]
[8] materialize(::Base.Broadcast.Broadcasted{Base.Broadcast.Style{Tuple},Nothing,typeof(getindex),Tuple{Tuple{Axis{(L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))}},Base.Broadcast.Broadcasted{Base.Broadcast.Style{Tuple},Nothing,typeof(ComponentArrays.getval),Tuple{Tuple{DataType}}}}}) at .\broadcast.jl:820
[9] #s16#21(::Any, ::Any, ::Any) at C:\Users\username\.julia\packages\ComponentArrays\fNphq\src\set_get.jl:74
[10] (::Core.GeneratedFunctionStub)(::Any, ::Vararg{Any,N} where N) at .\boot.jl:526
[11] getproperty at C:\Users\username\.julia\packages\ComponentArrays\fNphq\src\set_get.jl:68 [inlined]
[12] adjoint at C:\Users\username\.julia\packages\ComponentArrays\fNphq\src\if_required\zygote.jl:10 [inlined]
[13] _pullback(::Zygote.Context, ::typeof(ZygoteRules.literal_getproperty), ::ComponentArray{Float64,1,SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},Tuple{Axis{(L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))}}}, ::Val{:axes}) at C:\Users\username\.julia\packages\ZygoteRules\6nssF\src\adjoint.jl:47
[14] _pullback(::Zygote.Context, ::typeof(getfield), ::ComponentArray{Float64,1,SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},Tuple{Axis{(L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))}}}, ::Symbol) at C:\Users\username\.julia\packages\Zygote\uGBKO\src\lib\lib.jl:221
[15] getaxes at C:\Users\username\.julia\packages\ComponentArrays\fNphq\src\set_get.jl:31 [inlined]
[16] make_idx at C:\Users\username\.julia\packages\ComponentArrays\fNphq\src\componentarray.jl:131 [inlined]
[17] _pullback(::Zygote.Context, ::typeof(ComponentArrays.make_idx), ::Array{Any,1}, ::ComponentArray{Float64,1,SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},Tuple{Axis{(L1 = View(1:150, (W
= View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))}}}, ::UnitRange{Int64}) at C:\Users\username\.julia\packages\Zygote\uGBKO\src\compiler\interface2.jl:0
[18] make_idx at C:\Users\username\.julia\packages\ComponentArrays\fNphq\src\componentarray.jl:121 [inlined]
[19] _pullback(::Zygote.Context, ::typeof(ComponentArrays.make_idx), ::Array{Any,1}, ::NamedTuple{(:u, :p, :other),Tuple{SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},ComponentArray{Float64,1,SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},Tuple{Axis{(L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))}}},Array{Float64,1}}}, ::Int64) at C:\Users\username\.julia\packages\Zygote\uGBKO\src\compiler\interface2.jl:0
[20] make_carray_args at C:\Users\username\.julia\packages\ComponentArrays\fNphq\src\componentarray.jl:111 [inlined]
[21] _pullback(::Zygote.Context, ::typeof(ComponentArrays.make_carray_args), ::Type{Array{Float64,1}}, ::NamedTuple{(:u, :p, :other),Tuple{SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},ComponentArray{Float64,1,SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},Tuple{Axis{(L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))}}},Array{Float64,1}}}) at C:\Users\username\.julia\packages\Zygote\uGBKO\src\compiler\interface2.jl:0
[22] make_carray_args at C:\Users\username\.julia\packages\ComponentArrays\fNphq\src\componentarray.jl:109 [inlined]
[23] _pullback(::Zygote.Context, ::typeof(ComponentArrays.make_carray_args), ::Type{Float64}, ::NamedTuple{(:u, :p, :other),Tuple{SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},ComponentArray{Float64,1,SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},Tuple{Axis{(L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))}}},Array{Float64,1}}}) at C:\Users\username\.julia\packages\Zygote\uGBKO\src\compiler\interface2.jl:0
[24] make_carray_args at C:\Users\username\.julia\packages\ComponentArrays\fNphq\src\componentarray.jl:108 [inlined]
[25] _pullback(::Zygote.Context, ::typeof(ComponentArrays.make_carray_args), ::NamedTuple{(:u, :p, :other),Tuple{SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},ComponentArray{Float64,1,SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},Tuple{Axis{(L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))}}},Array{Float64,1}}}) at C:\Users\username\.julia\packages\Zygote\uGBKO\src\compiler\interface2.jl:0
[26] ComponentArray at C:\Users\username\.julia\packages\ComponentArrays\fNphq\src\componentarray.jl:64 [inlined]
[27] _pullback(::Zygote.Context, ::Type{ComponentArray}, ::NamedTuple{(:u, :p, :other),Tuple{SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},ComponentArray{Float64,1,SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},Tuple{Axis{(L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b
= 101:102)))}}},Array{Float64,1}}}) at C:\Users\username\.julia\packages\Zygote\uGBKO\src\compiler\interface2.jl:0
[28] #ComponentArray#12 at C:\Users\username\.julia\packages\ComponentArrays\fNphq\src\componentarray.jl:66 [inlined]
[29] _pullback(::Zygote.Context, ::ComponentArrays.var"##ComponentArray#12", ::Base.Iterators.Pairs{Symbol,AbstractArray{Float64,1},Tuple{Symbol,Symbol,Symbol},NamedTuple{(:u, :p, :other),Tuple{SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},ComponentArray{Float64,1,SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true},Tuple{Axis{(L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))}}},Array{Float64,1}}}}, ::Type{ComponentArray}) at C:\Users\username\.julia\packages\Zygote\uGBKO\src\compiler\interface2.jl:0 (repeats 2 times)
[30] loss_n_ode at .\untitled-79d0146585cdbfb1aa13a5142027add7:39 [inlined]
[31] _pullback(::Zygote.Context, ::typeof(loss_n_ode), ::ComponentArray{Float64,1,Array{Float64,1},Tuple{Axis{(u = 1:2, p = View(3:254, (L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))))}}}) at C:\Users\username\.julia\packages\Zygote\uGBKO\src\compiler\interface2.jl:0
[32] adjoint at C:\Users\username\.julia\packages\Zygote\uGBKO\src\lib\lib.jl:179 [inlined]
[33] _pullback at C:\Users\username\.julia\packages\ZygoteRules\6nssF\src\adjoint.jl:47 [inlined]
[34] #24 at C:\Users\username\.julia\packages\DiffEqFlux\7Lfxh\src\train.jl:99 [inlined]
[35] _pullback(::Zygote.Context, ::DiffEqFlux.var"#24#29"{Tuple{},typeof(loss_n_ode),ComponentArray{Float64,1,Array{Float64,1},Tuple{Axis{(u = 1:2, p = View(3:254, (L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))))}}}}) at C:\Users\username\.julia\packages\Zygote\uGBKO\src\compiler\interface2.jl:0
[36] pullback(::Function, ::Zygote.Params) at C:\Users\username\.julia\packages\Zygote\uGBKO\src\compiler\interface.jl:172
[37] gradient(::Function, ::Zygote.Params) at C:\Users\username\.julia\packages\Zygote\uGBKO\src\compiler\interface.jl:53
[38] macro expansion at C:\Users\username\.julia\packages\DiffEqFlux\7Lfxh\src\train.jl:98 [inlined]
[39] macro expansion at C:\Users\username\.julia\packages\ProgressLogging\g8xnW\src\ProgressLogging.jl:328 [inlined]
[40] (::DiffEqFlux.var"#23#28"{var"#42#44",Int64,Bool,Bool,typeof(loss_n_ode),ComponentArray{Float64,1,Array{Float64,1},Tuple{Axis{(u = 1:2, p = View(3:254, (L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))))}}},Zygote.Params})() at C:\Users\username\.julia\packages\DiffEqFlux\7Lfxh\src\train.jl:43
[41] maybe_with_logger(::DiffEqFlux.var"#23#28"{var"#42#44",Int64,Bool,Bool,typeof(loss_n_ode),ComponentArray{Float64,1,Array{Float64,1},Tuple{Axis{(u = 1:2, p = View(3:254, (L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))))}}},Zygote.Params}, ::Nothing) at C:\Users\username\.julia\packages\DiffEqBase\Co6yv\src\utils.jl:259
[42] sciml_train(::Function, ::ComponentArray{Float64,1,Array{Float64,1},Tuple{Axis{(u = 1:2, p = View(3:254, (L1 = View(1:150, (W = View(1:100, ShapedAxis((50, 2), NamedTuple())), b = 101:150)), L2 = View(151:252, (W = View(1:100, ShapedAxis((2, 50), NamedTuple())), b = 101:102)))))}}}, ::ADAM, ::Base.Iterators.Cycle{Tuple{DiffEqFlux.NullData}}; cb::Function, maxiters::Int64, progress::Bool, save_best::Bool) at C:\Users\username\.julia\packages\DiffEqFlux\7Lfxh\src\train.jl:42
[43] top-level scope at none:0
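A possible workaround (sketch), leaning on the note elsewhere in these issues that the ComponentArray(data, axes) constructor is differentiable: build the combined layout once outside the loss, then assemble θ2 from concatenated flat data instead of keyword components.
# sketch of a rewritten loss; θ2_template and ax2 are hypothetical names
θ2_template = ComponentArray(u=θ.u, p=θ.p, other=zeros(3))       # built once, outside the loss
ax2 = getaxes(θ2_template)
function loss_n_ode(θ)
    other_params = rand(3)                                        # simulates additional work done
    θ2 = ComponentArray(vcat(getdata(θ), other_params), ax2...)  # differentiable (data, axes) path
    pred = predict_n_ode(θ2)
    loss = sum(abs2, ode_data .- pred)
    return loss, pred
end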
julia> ax = Axis(pos=1:2, vel=3:4);
julia> ax_with_y = Axis(ax, y=1);
or...
julia> ax_with_y = Axis(ax, y=ax.pos[1]);
Or something.
using ComponentArrays
using DifferentialEquations
using Parameters: @unpack
tspan = (0.0, 20.0);
function lorenz!(D, u, p, t; f=0.0)
@unpack σ, ρ, β = p
@unpack x, y, z = u
D.x = σ*(y - x)
D.y = x*(ρ - z) - y - f
D.z = x*y - β*z
return nothing
end
lorenz_p = (σ=10.0, ρ=28.0, β=8/3)
lorenz_ic = CArray(x=1.0, y=0.0, z=0.0)
lorenz_prob = ODEProblem(lorenz!, lorenz_ic, tspan, lorenz_p)
lorenz_sol = solve(lorenz_prob, Rodas5())
The following construction fails:
ComponentArray(a=1, b=[2, 1, 4, missing], c=[1, 2, 3])
because in utils.jl there is no method for recursive_length(a::Missing).
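A sketch of what the fix might look like (method name taken from the error above; the real definition in utils.jl may need more than this):
ComponentArrays.recursive_length(::Missing) = 1   # treat missing like any other scalar component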
I don't know how much work it would be, but it would be really cool to have a modelingtoolkitize overload for ODEProblems of ComponentArrays that takes the property names of the ComponentArray and assigns them to ODESystem and variable names. Just by quick inspection, this doesn't seem too difficult.
Adapted from documentation:
using ComponentArrays
c = (a=2, b=[1, 2]);
x = ComponentArray(a=1, b=[2, 1, 4], c=c)
x2 = x .* x'
good = sum(abs2, x)
bad = sum(abs2, x2) # fails here
workaround = sum(abs2, getdata(x2)) # succeeds
The fact that the 1-dimensional version works without getdata, but the 2-dimensional form needs it, is what tripped me up. Would be nice if the error could be handled automatically, but if not, the workaround is fine.
Error message:
ERROR: MethodError: no method matching abs2(::SubArray{Float64,0,Array{Float64,2},Tuple{Int64,Int64},true})
This example works with LabelledArrays but fails with ComponentArrays. Doing it in-place works fine. Also, ForwardDiff works with Optim.jl and ComponentArrays (see example in the examples folder).
using ComponentArrays
using DifferentialEquations
using LabelledArrays
function rober(vars, p, t)
y₁, y₂, y₃ = vars
k₁, k₂, k₃ = p
D = similar(vars)
D.y₁ = -k₁*y₁+k₃*y₂*y₃
D.y₂ = k₁*y₁-k₂*y₂^2-k₃*y₂*y₃
D.y₃ = k₂*y₂^2
return D
end
ic = LVector(y₁=1.0, y₂=0.0, y₃=0.0)
prob = ODEProblem(rober, ic, (0.0,1e11), (0.04,3e7,1e4))
solve(prob, Rosenbrock23())
ic = ComponentArray(y₁=1.0, y₂=0.0, y₃=0.0)
prob = ODEProblem(rober, ic, (0.0,1e11), (0.04,3e7,1e4))
solve(prob, Rosenbrock23())
Arrays of components in a ComponentArray unpack pretty slowly because they allocate. Something like this:
boid = ComponentArray(pos=zeros(2), vel=zeros(2))
boid_box = ComponentArray(pos=zeros(2), vel=zeros(2), θ=0, ω=0)
u0 = ComponentArray(boid_box=boid_box, boids=repeat([boid], 100))
Right now, doing u0.boids would allocate and be too expensive to use.
Maybe the right answer is StaticArrays. They wouldn't be able to be used for very large arrays of components, but it would at least make them useful for some problems.
julia> C = [0 0.5 0.5]; f = v -> C*v.x;
julia> ax = Axis(x=1:3, y=f);
julia> ca = ComponentArray([0.1, 4, 2], ax);
julia> ca.y
3.0
Not sure if this is possible with the way AbstractAxis types currently work. I think this should be possible, though. ShapedAxis could just be a regular ViewAxis with a reshape-calling FuncAxis wrapped around it.
julia> using ComponentArrays
julia> x1 = ComponentArray(a=[1.1,2.1], b=[0.1])
ComponentVector{Float64}(a = [1.1, 2.1], b = [0.1])
julia> x2 = ComponentArray(a=[1.1,2.1], b=0.1)
ComponentVector{Float64}(a = [1.1, 2.1], b = 0.1)
julia> typeof(x1)
ComponentArray{Float64,1,Array{Float64,1},Tuple{Axis{(a = 1:2, b = 3:3)}}}
julia> typeof(x2)
ComponentArray{Float64,1,Array{Float64,1},Tuple{Axis{(a = 1:2, b = 3)}}}
julia> promote_rule(Axis{(a = 1:2, b = 3:3)},Axis{(a = 1:2, b = 3)})
ERROR: StackOverflowError:
Stacktrace:
[1] promote_rule(::Type{Axis{(a = 1:2, b = 3:3)}}, ::Type{Axis{(a = 1:2, b = 3)}}) at /Users/dcabecinhas/.julia/packages/ComponentArrays/Z6vz2/src/similar_convert_copy.jl:60
[2] promote_type(::Type{Axis{(a = 1:2, b = 3:3)}}, ::Type{Axis{(a = 1:2, b = 3)}}) at ./promotion.jl:223
... (the last 2 lines are repeated 39990 more times)
[79983] promote_rule(::Type{Axis{(a = 1:2, b = 3:3)}}, ::Type{Axis{(a = 1:2, b = 3)}}) at /Users/dcabecinhas/.julia/packages/ComponentArrays/Z6vz2/src/similar_convert_copy.jl:60
Constant propagation works for Axis indexing but not for ComponentArray indexing. Not sure what to do here. I'm wrapping the symbolic indexes immediately in a Val and passing them into a function that calls a @generated function. fastindices is a workaround that I'd rather not have to have.
This should be possible:
julia> x = ComponentArray(a=[], b=3.5, c=[15, 2]);
julia> x.a
0-element Array{Float64,1}
Should be doable by just writing a method for indexing into a ComponentArray with ComponentIndex{Nothing,AbstractAxis} and a method of make_carray_args that creates a ViewAxis with nothing as the view index.
Currently the broadcasted copy will create a ComponentArray with the correct element type but an incorrect array type.
Right now this happens:
julia> x = ComponentArray(a=1, b=2f0)
ComponentVector{Float64}(a = 1.0, b = 2.0)
when this should happen:
julia> x = ComponentArray(a=1, b=2f0)
ComponentVector{Float32}(a = 1.0, b = 2.0)
We also want this (which currently errors):
julia> x = ComponentArray(a=1, b=missing)
ComponentVector{Union{Missing, Int64}}(a = 1, b = missing)
julia> x = ComponentArray(a=1, b=2+im)
ComponentVector{Complex{Int64}}(a = 1 + 0im, b = 2 + 1im)
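For reference, the expected element types above just follow Julia's usual promotion rules:
julia> promote_type(Int, Float32)
Float32
julia> promote_type(Int, Missing)
Union{Missing, Int64}
julia> promote_type(Int, Complex{Int})
Complex{Int64}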
It seems that the slow (as per the v0.5.0 release notes) kwargs constructor is being used for all ComponentArray creation and not just for the ComponentArray(...; x=y) syntax.
I'm not sure if this was the result of a recent change or if it has always been like this. (A comparison with the positional constructor is sketched after the @code_warntype output below.)
julia> @btime ComponentArray(a=5, b=[1, 2])
23.316 μs (45 allocations: 2.14 KiB)
ComponentVector{Int64}(a = 5, b = [1, 2])
julia> @code_warntype ComponentArray(a=5, b=[1, 2])
Variables
#unused#::Core.Compiler.Const(Core.var"#Type##kw"(), false)
@_2::NamedTuple{(:a, :b),Tuple{Int64,Array{Int64,1}}}
@_3::Type{ComponentArray}
kwargs...::Base.Iterators.Pairs{Symbol,Any,Tuple{Symbol,Symbol},NamedTuple{(:a, :b),Tuple{Int64,Array{Int64,1}}}}
Body::Any
1 ─ (kwargs... = Base.pairs(@_2))
│ %2 = ComponentArrays.:(var"#ComponentArray#13")(kwargs..., @_3)::Any
└── return %2
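For comparison, a sketch of the positional (data, axes) constructor, which avoids the kwargs path entirely (both forms appear elsewhere in these issues):
# sketch: build the axis once, then construct from flat data
ax = Axis(a=1, b=2:3)
ca_fast = ComponentArray([5, 1, 2], ax)   # same layout as ComponentArray(a=5, b=[1, 2])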
From #22 .
The idea is:
julia> x = ComponentArray(a=1.5, b=[100, 30, 4])
ComponentVector{Float64}(a = 1.5, b = [100.0, 30.0, 4.0])
julia> new_x = ComponentArray(x; c=20)
ComponentVector{Float64}(a = 1.5, b = [100.0, 30.0, 4.0], c = 20.0)
I am not sure if this is a ComponentArrays issue.
I tried extending the DiffEqFlux tutorial to an SDE, and the performance took a huge knock, each iteration of training takes around 2 minutes on my machine.
using ComponentArrays
using OrdinaryDiffEq
using Plots
using UnPack
using DiffEqFlux: sciml_train
using Flux: glorot_uniform, ADAM
using Optim: LBFGS
using DifferentialEquations
using DiffEqSensitivity
u0 = Float32[2.; 0.]
datasize = 30
tspan = (0.0f0, 1.5f0)
dense_layer(in, out) = ComponentArray(W=glorot_uniform(out, in), b=zeros(out))
function trueODEfunc(du, u, p, t)
true_A = [-0.1 2.0; -2.0 -0.1]
du .= ((u.^3)'true_A)'
end
t = range(tspan[1], tspan[2], length = datasize)
trueprob = ODEProblem(trueODEfunc, u0, tspan)
ode_data = Array(solve(trueprob, Tsit5(), saveat = t)) .+ (randn(2, datasize) .* 0.3)
function dudt(u, p, t)
# display(typeof(p))
@unpack L1, L2 = p.p
return L2.W * tanh.(L1.W * u.^3 .+ L1.b) .+ L2.b
end
function noisy(u,p,t)
@unpack noise_params = p
return noise_params .* u .* 0.1
end
prob = SDEProblem(dudt, noisy, u0, tspan)
layers = (L1=dense_layer(2, 50), L2=dense_layer(50, 2))
θ = ComponentArray(u=u0, p=( p = layers, noise_params = rand(2)))
predict_n_ode(θ) = Array(solve(prob, Tsit5(), u0=θ.u, p=θ.p, saveat=t, sensealg=BacksolveAdjoint()))
function loss_n_ode(θ)
pred = predict_n_ode(θ)
loss = sum(abs2, ode_data .- pred)
return loss, pred
end
loss_n_ode(θ)
cb = function (θ, loss, pred; doplot=false)
display(loss)
# plot current prediction against data
pl = scatter(t, ode_data[1,:], label = "data")
scatter!(pl, t, pred[1,:], label = "prediction")
display(plot(pl))
return false
end
cb(θ, loss_n_ode(θ)...)
data = Iterators.repeated((), 1000)
res1 = sciml_train(loss_n_ode, θ, ADAM(0.05); cb=cb, maxiters=100)
cb(res1.minimizer, loss_n_ode(res1.minimizer)...; doplot=true)
res2 = sciml_train(loss_n_ode, res1.minimizer, LBFGS(); cb=cb)
cb(res2.minimizer, loss_n_ode(res2.minimizer)...; doplot=true)
It seems that Julia 1.5.0 added a new broadcasted similar method that makes ours ambiguous.
Should be an easy enough fix, but:
julia> ca = CArray(a=100, b=[4, 1.3], c=(a=(a=1, b=[1.0, 4.4]), b=[0.4, 2, 1, 45]));
julia> transpose(ca) * ca
1-element Array{Float64,1}:
12069.210000000001
julia> adjoint(ca) * ca
12069.210000000001
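Until then, a workaround (sketch) is to use dot, which already returns a scalar:
using LinearAlgebra
dot(ca, ca)   # same value as above, but as a scalar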