
ForwardDiff.jl's Introduction


ForwardDiff.jl

ForwardDiff implements methods to take derivatives, gradients, Jacobians, Hessians, and higher-order derivatives of native Julia functions (or any callable object, really) using forward mode automatic differentiation (AD).

While performance can vary depending on the functions you evaluate, the algorithms implemented by ForwardDiff generally outperform non-AD algorithms (such as finite-differencing) in both speed and accuracy.

Here's a simple example showing the package in action:

julia> using ForwardDiff

julia> f(x::Vector) = sin(x[1]) + prod(x[2:end]);  # returns a scalar

julia> x = vcat(pi/4, 2:4)
4-element Vector{Float64}:
 0.7853981633974483
 2.0
 3.0
 4.0

julia> ForwardDiff.gradient(f, x)
4-element Vector{Float64}:
  0.7071067811865476
 12.0
  8.0
  6.0

julia> ForwardDiff.hessian(f, x)
4×4 Matrix{Float64}:
 -0.707107  0.0  0.0  0.0
  0.0       0.0  4.0  3.0
  0.0       4.0  0.0  2.0
  0.0       3.0  2.0  0.0

Functions like f which map a vector to a scalar are the best case for reverse-mode automatic differentiation, but ForwardDiff may still be a good choice if x is not too large, as it is much simpler. The best case for forward-mode differentiation is a function which maps a scalar to a vector, like this g:

julia> g(y::Real) = [sin(y), cos(y), tan(y)];  # returns a vector

julia> ForwardDiff.derivative(g, pi/4)
3-element Vector{Float64}:
  0.7071067811865476
 -0.7071067811865475
  1.9999999999999998

julia> ForwardDiff.jacobian(x) do x  # anonymous function, returns a length-2 vector
         [sin(x[1]), prod(x[2:end])]
       end
2×4 Matrix{Float64}:
 0.707107   0.0  0.0  0.0
 0.0       12.0  8.0  6.0

See ForwardDiff's documentation for full details on how to use this package. ForwardDiff relies on DiffRules for the derivatives of many simple functions such as sin.
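For instance, the rule DiffRules supplies for sin can be inspected directly (a sketch using DiffRules' diffrule lookup):

julia> using DiffRules

julia> DiffRules.diffrule(:Base, :sin, :x)  # the symbolic derivative rule for sin
:(cos(x))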

See the JuliaDiff web page for other automatic differentiation packages.

Publications

If you find ForwardDiff useful in your work, we kindly request that you cite the following paper:

@article{RevelsLubinPapamarkou2016,
    title = {Forward-Mode Automatic Differentiation in {J}ulia},
   author = {{Revels}, J. and {Lubin}, M. and {Papamarkou}, T.},
  journal = {arXiv:1607.07892 [cs.MS]},
     year = {2016},
      url = {https://arxiv.org/abs/1607.07892}
}

ForwardDiff.jl's People

Contributors

andreasnoack, chriselrod, chrisrackauckas, dependabot[bot], devmotion, dlfivefifty, fredo-dedup, fredrikekre, github-actions[bot], gragusa, hyrodium, jrevels, kmsquire, kristofferc, maleadt, mcabbott, mlubin, mohamed82008, papamarkou, powerdistribution, ranocha, rgiordan, sethaxen, simonbyrne, thomvet, timholy, tkelman, tkf, tpapp, yuyichao


ForwardDiff.jl's Issues

type stability of `derivative` function

First of all, thank you for this package!

When I define a simple function, it seems its derivative function is not type-stable:

julia> f(x) = 2x
f (generic function with 1 method)

julia> g = derivative(f)
d (generic function with 1 method)

julia> @code_warntype g(1)
Variables:
  x::Int64
  ##f#7365::F
  ##x#7366::Int64
  ###s31#7367::Type{Void}
  ##result#7368::Any
  ####grad#7364#7369::Tuple{Int64}

Body:
  begin  # /Users/ken/.julia/v0.4/ForwardDiff/src/api/derivative.jl, line 43:
      ##result#7368 = call(ForwardDiff.ForwardDiffResult,(f::F)($(Expr(:new, 
ForwardDiff.GradientNumber{1,Int64,Tuple{Int64}}, :(x::Int64), :($(Expr(:new, 
ForwardDiff.Partials{Int64,Tuple{Int64}}, :((top(tuple))(1)::Tuple{Int64})))))))::Any)::Any
      return (ForwardDiff.derivative)(##result#7368)::Any
  end::Any

I would imagine that calling this derivative function in a tight loop will hinder performance, right? But do correct me if I'm wrong.
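For comparison, a sketch against the current API, where the function and the evaluation point are passed together and the call is inferable (h is a hypothetical wrapper, not part of the API):

julia> using ForwardDiff

julia> f(x) = 2x;

julia> h(x) = ForwardDiff.derivative(f, x);  # hypothetical wrapper closing over f

julia> h(1)  # @code_warntype h(1) infers a concrete return type here
2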

a few simple examples would be great

Assuming the source code would be relatively brief, it would be helpful to see a couple of simple examples in full: e.g. how to use it with sin(x::Real) for x in -Pi/2..Pi/2, and perhaps f(z::Complex) for z in whatever range looks good.
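Something along these lines, perhaps (a sketch against the current API; note that forward-mode duals only cover real inputs, so the f(z::Complex) half of the request is out of scope):

julia> using ForwardDiff

julia> ForwardDiff.derivative(sin, pi/4)  # d/dx sin(x) at x = pi/4, i.e. cos(pi/4)
0.7071067811865476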

Automatic chain differential for system of equations?

Hi,

I'm working on a problem which has a system of equations. I would like to get the partial derivatives of the whole system.

julia> using ForwardDiff

julia> f1(s) = (s/3)^2
f1 (generic function with 1 method)

julia> f2(s, l) = s^2 + exp(3*l/sqrt(4))
f2 (generic function with 1 method)

julia> function f(x)
           return f1(x[1]) + f2(x[1], x[2])
       end
f (generic function with 1 method)

julia> g = forwarddiff_gradient(f, Float64)
g (generic function with 1 method)

julia> g([0.1, 56.2])
ERROR: BoundsError: attempt to access 1-element Array{DualNumbers.Dual{Float64},1}:
 0.1+0.0du
  at index [2]
 in dual_fad at C:\Users\ovax03\.julia\v0.4\ForwardDiff\src\dual_fad\univariate_range.jl:6
 in g at C:\Users\ovax03\.julia\v0.4\ForwardDiff\src\dual_fad\univariate_range.jl:22

Is there an error in my code, or should I write a macro that differentiates all the terms?
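For reference, the current API differentiates the multivariate f directly, with no per-variable plumbing (a sketch):

julia> using ForwardDiff

julia> f1(s) = (s/3)^2;

julia> f2(s, l) = s^2 + exp(3*l/sqrt(4));

julia> f(x) = f1(x[1]) + f2(x[1], x[2]);

julia> ForwardDiff.gradient(f, [0.1, 56.2]);  # both partials in one call, no BoundsError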

source code transformation

I'd like to propose using source code transformation instead of operator overloading. Operator overloading is slow in general and not really optimized in Julia; the compiler doesn't seem to optimize away the temporary objects, and the GC isn't great at handling them.

On the other hand, Julia makes implementing source code transformation much easier than in other languages. The AST of a function is (mostly) user accessible:

julia> x(t) = 2t

julia> methods(x).defs.func.code
AST(:($(Expr(:lambda, {:t}, {{}, {{:t, :Any, 0}}, {}}, quote  # none, line 1:
        return *(2,t)
    end))))

The transformed functions can be compiled with the JIT, so that AD incurs only a one-time cost. With this approach, AD in Julia will be very competitive with the state of the art.

Of course, development time is limited, and I'm not sure how much I'll be able to work on this in the short term. It is certainly reasonable to go forward with an operator overloading approach just to get something working and to have a basis for comparison.

Finalizing basics of forward mode with operator overloading and of package structuring

I opened an issue so that we conclude the ongoing discussion on possible partial rewriting of the forward AD code.

  1. Tom, I understand your last message re the gradient. My problem with the output is that the gradient doesn't need to be an n×n square matrix. All we need is its diagonal! This is more consistent with the mathematical definition of the gradient, plus it takes less space to store (n numbers instead of n^2, since the matrix is diagonal).
  2. Are we dealing with f:R^n->R smooth functions only or with the more general case f:R^n->R^m?
  3. Kevin, as long as we won't need a separate type for reverse mode, Dual is not so bad a choice. autodiff() is my favourite name for our AD functions; nevertheless, we need to address the issue of having several AD modes, such as forward, reverse, or hybrid. This could be done by using separate function names, which we would need to devise, or by having a mode argument in autodiff(), i.e. autodiff(x, ..., mode="forward"), so as to choose the mode of differentiation. To generalize further, we may need another argument, say method, to choose between operator overloading (oo) and source code transformation (sct), for example autodiff(x, ..., mode="forward", method="oo") (see the sketch after this list).
  4. If the n parameter of the ADForward type does not exhibit any improvement in performance, I suggest we remove it.
  5. Kevin did a very tidy job on modularization so far. We could do a tiny bit more of structuring and file naming once we address the previous questions.
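A sketch of what the signature from point 3 might look like (names and keywords are placeholders for discussion, not an implemented API):

# hypothetical, for discussion only:
autodiff(f, x, mode="forward", method="oo")   # mode in ("forward", "reverse", ...), method in ("oo", "sct")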

Differentiating the J2-plasticity yield function

Hi,

I was putting together this example and I noticed something weird that I couldn't figure out. I was working with the J2 plasticity model (see the link below, under "Reduced von Mises equation for different stress conditions"). When I took the gradient of the function, it only gave a NaN vector.

https://en.wikipedia.org/wiki/Von_Mises_yield_criterion

Code:

>> using ForwardDiff
>> function J2(σ)
    e1 = (σ[1] - σ[2])^2
    e2 = (σ[2] - σ[3])^2
    e3 = (σ[3] - σ[1])^2
    e4 = σ[4]^2 
    e5 = σ[5]^2
    e6 = σ[6]^2
    return sqrt(( e1 + e2 + e3 + 6 * (e4 + e5 + e6)) / 2.)
end

>> x= [100.0, 0.0, 0.0, 0.0, 0.0, 0.0]
>> dJ2 = ForwardDiff.gradient(J2)
>> dJ2(x)

[NaN,NaN,NaN,NaN,NaN,NaN]

I also tested a typical forward finite difference:

>> h = 1e-12
>> numerical_vals = zeros(Float64, 6)
>> for i=1:6
    vals = copy(x)
    vals[i] += h
    numerical_vals[i] = (J2(vals)- J2(x)) / h
end
>> numerical_vals

6-element Array{Float64,1}:
  0.99476
 -0.49738
 -0.49738
  0.0    
  0.0    
  0.0 

Background: I just updated to the Julia 0.5 dev version, so might that have something to do with this?
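For reference, a sketch of the same gradient against the current API; the result matches the analytic gradient and the finite differences above:

julia> ForwardDiff.gradient(J2, x)  # ≈ [1.0, -0.5, -0.5, 0.0, 0.0, 0.0], the analytic gradient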

Question: Algebra

This is cool stuff! I have a few questions...

If I understand, GradientNumbers share a single epsilon (e) so the algebra of GradientNumbers is completely determined by that of the reals, linearity, plus e^2 = 0, i.e.

(a + b·e)·(c + d·e) = a·c + (a·d + b·c)·e

There is no concept of distinct epsilons e1 and e2 for GradientNumbers. Is that correct?
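That reading can be checked directly against the package's current Dual type (a sketch; Dual(value, partial) builds an untagged dual):

julia> using ForwardDiff

julia> d = ForwardDiff.Dual(1.0, 2.0) * ForwardDiff.Dual(3.0, 4.0);  # (1 + 2e)(3 + 4e)

julia> ForwardDiff.value(d), ForwardDiff.partials(d, 1)  # (ac, ad + bc)
(3.0, 10.0)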

HessianNumbers, on the other hand have 2 distinct epsilons e1 and e2. The documentation (in the source code) states that

e1 != 0, e2 != 0

e1^2 = e2^2 = (e1*e2)^2 = 0

But what is the product e1*e2? Do you assume e1 and e2 commute?

Consider the sum e1+e2. Is this an epsilon? If e1+e2 is an epsilon and e1 and e2 commute, then e1*e2 = 0, which would not be very interesting. I guess the epsilons are not closed under addition (too bad!).

Similarly, for TensorNumbers, do you assume the 3 epsilons commute? It seems so.

It would be interesting to consider non-commuting epsilons closed under addition such that

(e1 + e2)^2 = e1*e2 + e2*e1 = 0  =>  e1*e2 = -e2*e1.

This would allow you to use AD ideas for (Kaehler) differential forms and more general abstract differential algebras.

More generally, I think it is interesting to consider differentials of epsilons, e.g.

df = ∂_i f de^i (Einstein summation)

I did some work along these lines in grad school (13+ years ago... ouch. Getting old :))

It gets fun if you let 0-forms (functions) and 1-forms (differentials) not commute, i.e.

f(x) de = de f(x+e)

This has a nice geometric interpretation. We can consider e to be an infinitesimal curve and multiplying the function on the left of de evaluates at the beginning of the curve and multiplying on the right evaluates at the end of the curve.

This is nice because

[de,f] = f'(x) de

i.e. the derivative may be expressed as a commutator which automatically satisfies the product rule.

[de,fg] = [de,f] g + f [de,g]

(but the order matters above :))

Erratic errors in typed Hessians

ForwardDiff.forwarddiff_hessian intermittently returns Hessians that look like unset memory; they are completely wrong. Since this happens intermittently and only in certain settings, I haven't been able to narrow down exactly what the problem is, but I have a couple of relatively minimal examples that reproduce it reliably on my computer.

I happened to discover it when using atan. I haven't been able to reproduce it with simple polynomials and haven't systematically tried other functions yet. Here's the simplest example:

using ForwardDiff

function my_fun2{T <: Number}(y::Array{T})
    @assert length(y) == 1
    atan(y[1])
end

my_hess2 = ForwardDiff.forwarddiff_hessian(my_fun2, Float64, fadtype=:typed, n=1)

y = [4.]
original = my_hess2(y);
bad_versions = 0
max_iters = 1000
for iter = 1:max_iters
    this_hess = my_hess2(y)
    if maximum(abs(this_hess - original)) > 1
        println("$iter: ", this_hess)
        bad_versions += 1;
    end
end
bad_versions / max_iters   # Often around 0.04 or so, though sometimes 0.0

I don't know if it's a red herring, but I happened to discover this while indexing into the argument using a closure. The following, more complicated example fails much more often:

# This produces the problem intermittently:
indices = Int64[3, 2, 1]
coeffs = [10., 100., 1000.]
function my_fun{T <: Number}(y::Array{T})
    @assert length(y) == 3
    T[ coeffs[1] * y[indices[3]], coeffs[2] * y[indices[2]] ^ 2, coeffs[2] * atan(y[indices[3]]) ]
end

function get_hess_func_vec(my_fun_arg::Function, K::Int64)
    [ ForwardDiff.forwarddiff_hessian(y -> my_fun(y)[k], Float64, fadtype=:typed, n=K) for k=1:K ]
end

my_hess_funcs = get_hess_func_vec(my_fun, 3);
y = [1., 2., 3.]

# Repeated evaluation of this function gives erratic results:
original = my_hess_funcs[3](y);
bad_versions = 0
max_iters = 1000
for iter = 1:max_iters
    this_hess = my_hess_funcs[3](y)
    if maximum(abs(this_hess - original)) > 1
        println("$iter: ", this_hess)
        bad_versions += 1;
    end
end
bad_versions / max_iters # As low as 0.02 or so, often around 0.5 (of course, the original could be wrong, too)

Please let me know if you have any ideas or want more details.
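For reference, the same computation against the current API, which in my understanding is deterministic (the expected value follows from differentiating atan by hand):

using ForwardDiff

my_fun(y) = atan(y[1])
ForwardDiff.hessian(my_fun, [4.0])  # 1×1 matrix with entry -2*4/(1 + 4^2)^2 = -8/289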

tests for AD using dual numbers

@mlubin I open this issue as a placeholder - I will start organizing the tests in ForwardDiff this weekend. Would you like to take care of adding some tests for forward mode AD based on dual numbers to share the workload? :)

A definition of isless for GraDual?

Shouldn't GraDual{T} be comparable to T so that isless(graDual, x) == isless(graDual.v, x)?

Otherwise it's not possible to differentiate x > 0 ? x : -x
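A minimal sketch of the requested methods, assuming the value field is v as in the snippet above (hypothetical; GraDual's real layout may differ):

Base.isless(g::GraDual, x::Real) = isless(g.v, x)
Base.isless(x::Real, g::GraDual) = isless(x, g.v)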

Allowing additional arguments to function being differentiated

I have been using ForwardDiff.jl for a while. My use case is a little bit different, as I need to differentiate f(x, args...) only with respect to x. Of course, in typical usage this can be accomplished with a closure, but in some instances it is not possible.

I locally extended the API to allow args... to be passed down (see, e.g., here).

I have been following the work on #27 (which is great by the way) and I was wondering whether either such extension would be possible or whether the new API will allow additional arguments.
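For the cases where the closure does work, it is a one-liner (a sketch with a made-up f and extra arguments):

using ForwardDiff

f(x, a, b) = a * sum(abs2, x) + b                  # hypothetical function with extra arguments
a, b = 2.0, 1.0
ForwardDiff.gradient(x -> f(x, a, b), [1.0, 2.0])  # differentiates w.r.t. x only: [4.0, 8.0]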

InexactError()

Hi Scidom,
I have the following model:

point = 2 # can be used within function
y = [0:(point - 1)]
function GPCM(para)
  a = para[1]
  d = para[2]
  tau = para[3:4]
  t = para[5]
  nu = zeros(point)
  for k = 1:point
    nu[k] = exp(a.*(y[k].*(t-d) - sum(tau[1:k]) )) [1]
  end
  de =  sum(nu)
  p = nu ./ de
end

para = [1.,0.,0.,0.,1.]
Jac = forwarddiff_jacobian(GPCM, Float64, fadtype=:typed)
Jac(para)

But, it returns an error message:
InexactError()
in setindex! at array.jl:307

It indicates that nu[k] is not right. The Hessian cannot be obtained either. Any suggestions?
Thanks.
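A likely cause, for what it's worth: zeros(point) allocates a Float64 array, and the dual numbers that ForwardDiff feeds through GPCM cannot be stored into it exactly, hence the InexactError at setindex!. A hedged fix is to let the element type follow the input:

nu = zeros(eltype(para), point)  # dual numbers now fit when para is dual-valued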

Offer a way for users to access the lower-order values that end up being calculated by FAD methods

Previously discussed here.

For example, if somebody calls hessian(f,x) they should be able to tell the function to return ∇f(x) and/or f(x) as well, since those values are already calculated by way of the Hessian calculation.

Here are some ideas for what this might look like:

  1. Have a keyword argument that specifies that the function should return the calculated ForwardDiffNumber, then provide methods in the API to extract the data you want from it. Something like this:
julia> data = hessian(f, x, alldata=true) # returns ForwardDiffNumber
julia> gradient(data) # extract gradient from data
julia> hessian(data) # extract hessian from data

I don't really like alldata as a name for the keyword argument, but you get my drift.

  2. Pass symbols to the method saying what you want up front (once again, via a keyword argument):
# returns in the same order as it's given, with Hessian first 
julia> hess, val, grad = hessian(f, x, also=(:value, :gradient)) 

Thoughts?
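For the record, the design that eventually shipped lives in the DiffResults package and looks roughly like option 1 (a sketch):

using ForwardDiff, DiffResults

f(x) = sum(abs2, x)                       # hypothetical example function
x = [1.0, 2.0]
result = DiffResults.HessianResult(x)
result = ForwardDiff.hessian!(result, f, x)
DiffResults.value(result)                 # f(x), computed along the way
DiffResults.gradient(result)              # ∇f(x), likewise
DiffResults.hessian(result)               # the Hessian itself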

jacobians with output_length take a long time to create

Some background of my use case:
I need to solve a smallish nonlinear problem at a lot of points, where each point has a different set of parameters. For a specific point I make a closure from the point's parameters and use ForwardDiff to compute the Jacobian of that closure. I then pass the closure and the Jacobian to an external solver and everything works great.

I now wanted to try a solver that only allows in-place updates of the function values. This means I should use the output_length option to jacobian. When I do that, performance takes a large hit, from what I believe is the compilation time spent generating the function here at each new point.

Since I have a constant output_length, it should in theory be possible to cache something similar to the newf function, taking the function whose Jacobian I want as an extra argument instead of creating it from scratch each time.

Would this be possible/sensible?
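For reference, the current API addresses this with a preallocated config object, created once and reused at every point (a sketch):

using ForwardDiff

f!(y, x) = (y .= x .^ 2; y)                   # hypothetical in-place function
x, y = rand(4), zeros(4)
J = zeros(4, 4)
cfg = ForwardDiff.JacobianConfig(f!, y, x)    # pay the setup cost once
ForwardDiff.jacobian!(J, f!, y, x, cfg)       # reuse in the hot loop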

[PkgEval] ForwardDiff may have a testing issue on Julia 0.4 (2014-10-27)

PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.3) and the nightly build of the unstable version (0.4). The results of this script are used to generate a package listing enhanced with testing results.

On Julia 0.4

  • On 2014-10-26 the testing status was Tests pass.
  • On 2014-10-27 the testing status changed to Tests fail, but package loads.

Tests pass. means that PackageEvaluator found the tests for your package, executed them, and they all passed.

Tests fail, but package loads. means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.

This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.

Test log:

>>> 'Pkg.add("ForwardDiff")' log
INFO: Installing Calculus v0.1.5
INFO: Installing DualNumbers v0.1.0
INFO: Installing ForwardDiff v0.0.2
INFO: Package database updated

>>> 'using ForwardDiff' log

WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:108.
Use "[]" instead.

WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:121.
Use "[]" instead.

WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:41.
Use "Any[a,b, ...]" instead.

WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:58.
Use "Any[a,b, ...]" instead.

WARNING: deprecated syntax "{a=>b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/deparse.jl:1.
Use "Dict{Any,Any}(a=>b, ...)" instead.
Warning: New definition 
    * at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:108
is ambiguous with: 
    * at bool.jl:50.
To fix, define 
    *
before the new definition.
Warning: New definition 
    * at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/FADHessian.jl:108
is ambiguous with: 
    * at bool.jl:50.
To fix, define 
    *
before the new definition.
Warning: New definition 
    * at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/FADTensor.jl:133
is ambiguous with: 
    * at bool.jl:50.
To fix, define 
    *
before the new definition.
Julia Version 0.4.0-dev+1318
Commit 7a7110b (2014-10-27 03:52 UTC)
Platform Info:
  System: Linux (x86_64-unknown-linux-gnu)
  CPU: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3
(T<:Real,GraDual{T<:Real,n})(Bool,T<:Number)(Bool,_<:GraDual{Bool,n})(T<:Real,FADHessian{T<:Real,n})(Bool,T<:Number)(Bool,_<:FADHessian{Bool,n})(T<:Real,FADTensor{T<:Real,n})(Bool,T<:Number)(Bool,_<:FADTensor{Bool,n})
>>> test log

WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:108.
Use "[]" instead.

WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:121.
Use "[]" instead.

WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:41.
Use "Any[a,b, ...]" instead.

WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:58.
Use "Any[a,b, ...]" instead.

WARNING: deprecated syntax "{a=>b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/deparse.jl:1.
Use "Dict{Any,Any}(a=>b, ...)" instead.
Warning: New definition 
    * at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:108
is ambiguous with: 
    * at bool.jl:50.
To fix, define 
... truncated ...
 in include at ./boot.jl:242
 in include_from_node1 at loading.jl:128
 in process_options at ./client.jl:293
 in _start at ./client.jl:362
 in _start_3B_3773 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
WARNING: (oftype{T})(::Type{T},c) is deprecated, use convert(T,c) instead.
 in log10 at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:150
 in log10 at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/FADHessian.jl:230
 in f at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/FADHessian.jl:179
 in include at ./boot.jl:242
 in include_from_node1 at ./loading.jl:128
 in anonymous at no file:8
 in include at ./boot.jl:242
 in include_from_node1 at loading.jl:128
 in process_options at ./client.jl:293
 in _start at ./client.jl:362
 in _start_3B_3773 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
ERROR: mismatch of non-finite elements: 
  hessian(output) = [NaN]
  hessianf(args...) = 0.6213341259967982
 in test_approx_eq at test.jl:125
 in test_approx_eq at test.jl:143
 in include at ./boot.jl:242
 in include_from_node1 at ./loading.jl:128
 in anonymous at no file:8
 in include at ./boot.jl:242
 in include_from_node1 at loading.jl:128
 in process_options at ./client.jl:293
 in _start at ./client.jl:362
 in _start_3B_3773 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
while loading /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/FADHessian.jl, in expression starting on line 276
while loading /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/runtests.jl, in expression starting on line 5

INFO: Testing ForwardDiff
=============================[ ERROR: ForwardDiff ]=============================

failed process: Process(`/home/idunning/julia04/usr/bin/julia /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/runtests.jl`, ProcessExited(1)) [1]

================================================================================
INFO: No packages to install, update or remove
ERROR: ForwardDiff had test errors
 in error at error.jl:21
 in test at pkg/entry.jl:719
 in anonymous at pkg/dir.jl:28
 in cd at ./file.jl:20
 in cd at pkg/dir.jl:28
 in test at pkg.jl:68
 in process_options at ./client.jl:221
 in _start at ./client.jl:362
 in _start_3B_3773 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so

>>> end of log

[PkgEval] ForwardDiff may have a testing issue on Julia 0.4 (2014-10-17)

PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.3) and the nightly build of the unstable version (0.4). The results of this script are used to generate a package listing enhanced with testing results.

On Julia 0.4

  • On 2014-10-16 the testing status was Tests pass.
  • On 2014-10-17 the testing status changed to Tests fail, but package loads.

Tests pass. means that PackageEvaluator found the tests for your package, executed them, and they all passed.

Tests fail, but package loads. means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.

This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.

Test log:

>>> 'Pkg.add("ForwardDiff")' log
INFO: Installing Calculus v0.1.5
INFO: Installing DualNumbers v0.1.0
INFO: Installing ForwardDiff v0.0.2
INFO: Package database updated

>>> 'using ForwardDiff' log

WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:108.
Use "[]" instead.

WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:121.
Use "[]" instead.

WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:41.
Use "Any[a,b, ...]" instead.

WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:58.
Use "Any[a,b, ...]" instead.

WARNING: deprecated syntax "{a=>b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/deparse.jl:1.
Use "Dict{Any,Any}(a=>b, ...)" instead.
Julia Version 0.4.0-dev+1119
Commit de2ba52 (2014-10-17 04:00 UTC)
Platform Info:
  System: Linux (x86_64-unknown-linux-gnu)
  CPU: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

>>> test log

WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:108.
Use "[]" instead.

WARNING: deprecated syntax "{}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/symbolic.jl:121.
Use "[]" instead.

WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:41.
Use "Any[a,b, ...]" instead.

WARNING: deprecated syntax "{a,b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/differentiate.jl:58.
Use "Any[a,b, ...]" instead.

WARNING: deprecated syntax "{a=>b, ...}" at /home/idunning/pkgtest/.julia/v0.4/Calculus/src/deparse.jl:1.
Use "Dict{Any,Any}(a=>b, ...)" instead.
WARNING: (oftype{T})(::Type{T},c) is deprecated, use convert(T,c) instead.
 in log2 at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:149
 in f at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/GraDual.jl:90
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
... truncated ...
WARNING: (oftype{T})(::Type{T},c) is deprecated, use convert(T,c) instead.
 in log10 at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:150
 in log10 at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/src/typed_fad/FADHessian.jl:230
 in f at /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/FADHessian.jl:179
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in anonymous at no file:8
 in include at ./boot.jl:245
 in include_from_node1 at loading.jl:128
 in process_options at ./client.jl:293
 in _start at ./client.jl:362
 in _start_3B_3778 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
ERROR: mismatch of non-finite elements: 
  hessian(output) = [NaN]
  hessianf(args...) = -124.36073976692232
 in test_approx_eq at test.jl:125
 in test_approx_eq at test.jl:143
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in anonymous at no file:8
 in include at ./boot.jl:245
 in include_from_node1 at loading.jl:128
 in process_options at ./client.jl:293
 in _start at ./client.jl:362
 in _start_3B_3778 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
while loading /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/FADHessian.jl, in expression starting on line 265
while loading /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/runtests.jl, in expression starting on line 5
Running tests:
 * dual_fad.jl
 * GraDual.jl
 * FADHessian.jl

INFO: Testing ForwardDiff
=============================[ ERROR: ForwardDiff ]=============================

failed process: Process(`/home/idunning/julia04/usr/bin/julia /home/idunning/pkgtest/.julia/v0.4/ForwardDiff/test/runtests.jl`, ProcessExited(1)) [1]

================================================================================
INFO: No packages to install, update or remove
ERROR: ForwardDiff had test errors
 in error at error.jl:21
 in test at pkg/entry.jl:719
 in anonymous at pkg/dir.jl:28
 in cd at ./file.jl:20
 in cd at pkg/dir.jl:28
 in test at pkg.jl:68
 in process_options at ./client.jl:221
 in _start at ./client.jl:362
 in _start_3B_3778 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so


>>> end of log

Bug in `typed_fad_gradient`

Consider the following function

using ForwardDiff;
srand(1)
y = randn(100);
x = randn(100,2);
g(theta)  = x.*(y-x*theta);
gn(theta) = vec(sum(g(theta), 1));
gn([.1,.1])

I want to use typed_fad_gradient to calculate the derivative of gn with respect to theta. This code

    h1 = ForwardDiff.typed_fad_gradient(gn, Float64);
    h1([.1,.1])

provides the correct gradient

 2x2 Array{Float64,2}:
  -103.869     11.1779
    11.1779  -119.393

as can be seen by comparing it with the analytic gradient

    -x'x

Suppose now that the function to be differentiated is defined using a linear algebra operation, i.e.,

 p = ones(100);
 wgn(theta) = g(theta)'p;
 h2 = ForwardDiff.typed_fad_gradient(wgn, Float64);
 h2([.1,.1])

Although wgn([.1,.1]) .== gn([.1,.1]), the gradient returned by typed_fad_gradient is

 2x2 Array{Float64,2}:
  103.869   -11.1779
  -11.1779  119.393  

which is different from h1([.1,.1]) and from -x'x by a factor of -1.

Dual numbers faster than reals??

This is not strictly related to this package, but I thought maybe one of you could explain this; feel free to just close it otherwise. I was playing around a bit with the short implementation of dual numbers I found on @mlubin's GitHub page. I wanted to benchmark it to see performance differences, and I get results where calling the function with dual numbers is faster than with normal reals (??).

First, here is the short implementation

importall Base
immutable Dual{T} <: Number
    re::T
    ɛ::T
end
real(z::Dual) = z.re
dual(z::Dual) = z.ɛ;


(+)(x::Dual,y::Dual) = Dual(real(x)+real(y), dual(x)+dual(y))
(-)(x::Dual,y::Dual) = Dual(real(x)-real(y), dual(x)-dual(y))
(*)(x::Dual,y::Dual) = Dual(real(x)*real(y), real(x)*dual(y)+real(y)*dual(x))
(/)(x::Dual,y::Dual) = Dual(real(x)/real(y), (dual(x)*real(y)-real(x)*dual(y))/(real(y)*real(y)))
exp(x::Dual) = Dual(exp(real(x)), dual(x)*exp(real(x)))
sin(x::Dual) = Dual(sin(real(x)), dual(x)*cos(real(x)))
cos(x::Dual) = Dual(cos(real(x)), -dual(x)*sin(real(x)));

promote_rule{S<:Real,T<:Real}(::Type{Dual{S}},::Type{T}) = Dual{promote_type(T,S)};
convert{T<:Real}(::Type{Dual{T}}, x::Real) = Dual(convert(T,x), zero(T));

const ɛ = Dual(0.0, 1.0);

This is the trial function I used

f(x) = exp(x) / (cos(x)^3 + sin(x)^3)

And the benchmarking:

Pkg.clone("https://github.com/johnmyleswhite/Benchmarks.jl")
using Benchmarks

@benchmark f(π/4 + ɛ)

@benchmark f(π/4 + im)

@benchmark f(π/4)

which gives

julia> @benchmark f(π/4 + ɛ)
================ Benchmark Results ========================
     Time per evaluation: 104.91 ns [104.65 ns, 105.17 ns]
   Number of evaluations: 3726101
 Time spent benchmarking: 0.52 s


julia> @benchmark f(π/4 + im)
================ Benchmark Results ========================
     Time per evaluation: 315.03 ns [314.58 ns, 315.48 ns]
   Number of evaluations: 1437201
 Time spent benchmarking: 0.54 s


julia> @benchmark f(π/4)
================ Benchmark Results ========================
     Time per evaluation: 198.68 ns [196.78 ns, 200.58 ns]
   Number of evaluations: 2314101
 Time spent benchmarking: 0.50 s

Am I making some obvious mistake here? It seems that the dual numbers are significantly faster than the real numbers, which can't be possible...

List current status in readme

To avoid confusion, we should:

  • mention that the readme describes the API in the master branch which isn't released yet (and requires 0.4), and
  • provide a link to the old documentation for the time being

Function typing restrictions

So, first, this is a wicked package and I'm a complete convert to this AD thing!

I was wondering what the rationale is behind the type restrictions on functions. I can make ForwardDiff work excellently most of the time but I'm hitting a snag in one place. I'm doing triangulation and interpolation on unstructured data points, and using the package GeometricalPredicates to test if things are in triangles. The snag is that GeometricalPredicates has a custom Point(x::Real, y::Real) type, and ForwardDiff needs that to be Point(x::Number, y::Number) to pass its custom number type through. Given that we're currently limited to differentiating real functions anyway, is there any particular reason for that restriction? Is it to make adding complex differentiation easier later on?
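For what it's worth, in current versions the perturbation-carrying number type is a subtype of Real, so Real-typed signatures like Point(x::Real, y::Real) admit it (a quick check):

julia> using ForwardDiff

julia> ForwardDiff.Dual <: Real
true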

deprecation warnings

On 0.4 release, I get this:

julia> using ForwardDiff
WARNING: Base.Uint64 is deprecated, use UInt64 instead.
  likely near /home/mlubin/.julia/v0.4/ForwardDiff/src/Partials.jl:49
WARNING: Base.Uint64 is deprecated, use UInt64 instead.
  likely near /home/mlubin/.julia/v0.4/ForwardDiff/src/GradientNumber.jl:58
WARNING: Base.MathConst is deprecated, use Base.Irrational instead.
  likely near /home/mlubin/.julia/v0.4/ForwardDiff/src/GradientNumber.jl:137
WARNING: Base.MathConst is deprecated, use Base.Irrational instead.
  likely near /home/mlubin/.julia/v0.4/ForwardDiff/src/GradientNumber.jl:137
WARNING: Base.Uint64 is deprecated, use UInt64 instead.
  likely near /home/mlubin/.julia/v0.4/ForwardDiff/src/HessianNumber.jl:47
WARNING: Base.Uint64 is deprecated, use UInt64 instead.
  likely near /home/mlubin/.julia/v0.4/ForwardDiff/src/TensorNumber.jl:47

Sign error on derivatives with certain types of functions

Am I doing something forbidden here?

# Simple working example
f(x) = x[1]^2
g = forwarddiff_gradient(f, Float64, fadtype=:typed)
g([3.0])
#6.0, OK!

# But rewrite it in this way:
f(x) = [x[1]^2]' * [1.0]
g = forwarddiff_gradient(f, Float64, fadtype=:typed)
g([3.0])
 # -6.0, Wups!

Edit: Oh, this is almost exactly: #24 but it should have been fixed in #25?

Edit2: Looked into it a bit more, the problem is not with transpose but with conjugate:

A = ForwardDiff.GraDual{Float64,1}[GraDual(3.0, [1.0])]

julia> A'*[1.0]
1-element Array{ForwardDiff.GraDual{Float64,1},1}:
 GraDual(3.0,
[-1.0])

julia> A.'*[1.0]
1-element Array{ForwardDiff.GraDual{Float64,1},1}:
 GraDual(3.0,
[1.0])

Basically, why does conj on an Array{ForwardDiff.GraDual{Float64,1},1} invert the gradient part?
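A sketch of the behaviour one would expect instead: these duals model real quantities, so conjugation should be the identity (hypothetical fix, written against the GraDual type above):

Base.conj(g::GraDual) = g  # a real-valued dual is its own conjugate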

autodiff API for forward and reverse mode

@scidom, @mlubin, @kmsquire, @powerdistribution: Here I go, with a (probably biased) proposal of a common interface for the forward/reverse-mode symbolic derivation function:

  • It should take an expression as input rather than a function; that seems more flexible, and a wrapper function using Base.uncompress_ast can still be built around it for inputs given as functions. Plus, it doesn't require a function creation on the part of the caller (useful if the function is called recursively for higher-order derivation).
  • It should output an expression, for similar reasons, though maybe a composite type containing the expression and additional information would be useful.
  • It should have a method=:forward or method=:reverse parameter to indicate which algorithm to use (and not limited to these two when new methods are implemented).
  • A list of pairs [Symbol, Value] to specify which variables should be differentiated against, with an initial value, so as to know whether they are scalars, vectors, etc., and to infer the types of all other variables.
  • The name of the variable containing the result in the expression. We could instead have the algorithms take the last evaluated value in the expression, but that may turn out to be too strong a requirement when using the function recursively. Could be optional.

Name of the function: differentiate()? diff()? derive()?

Matrix-valued DualNumber: coding problems

Hi All

I would love to contribute to the autodiff tools, but I have hit a hiccup that is making me so despondent that I am seriously considering giving up on Julia for a while until it has matured.

I have spent a large effort on developing a dual-number type (similar to my existing MATLAB implementation) that can take matrix-valued fields. This allows one to get forward-mode AD to work smoothly through all kinds of matrix operations, which will be executed efficiently in BLAS/LAPACK. This has worked really well for me in MATLAB.

I have gone to the trouble of translating most of the functionality I have in MATLAB, but now I find I have a piece of code that I simply cannot debug.

If anybody is interested, my code is here:
https://github.com/bsxfan/ad4julia/blob/master/DualNumbers.jl

There are certainly many bugs and missing features remaining in this code, but at the moment I am stuck at the cat() function, which overloads vertical and horizontal concatenation of dual-number matrices.

This function behaves really weirdly. It is supposed to return a DualNum object. It manages to construct that object and can display it via show() inside my cat() function, but when I return that value, it has disappeared!

julia> require("DualNumbers.jl")
julia> using(DualNumbers)
julia> a = dualnum(1)
standard part: 1.0
differential part: 0.0

julia> b = dualnum(2)
standard part: 2.0
differential part: 0.0

julia> c = [a b]
here in cat
ST:
1x2 Float64 Array:
1.0  2.0
DI:
1x2 Float64 Array:
0.0  0.0
D:
standard part: 1x2 Float64 Array:
1.0  2.0
differential part: 1x2 Float64 Array:
0.0  0.0    

Note that the stuff that gets displayed is my debug info from cat(). Nothing is returned:

julia> typeof(c)
Nothing

I don't know how to fix this.
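One guess at the cause, independent of the linked code: a Julia function returns its last expression, so if cat() ends with the debug show(...) call, the constructed DualNum gets printed and nothing is returned. A self-contained illustration:

function demo()
    value = 42
    show(value)    # prints 42, but show itself returns nothing
end

typeof(demo())     # Nothing: demo returned show's result, not `value`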

[PkgEval] ForwardDiff may have a testing issue on Julia 0.3 (2014-06-17)

PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3). The results of this script are used to generate a package listing enhanced with testing results.

On Julia 0.3

  • On 2014-06-16 the testing status was Tests pass.
  • On 2014-06-17 the testing status changed to Tests fail, but package loads.

Tests pass. means that PackageEvaluator found the tests for your package, executed them, and they all passed.

Tests fail, but package loads. means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.

This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.

Test log:

INFO: Cloning cache of ForwardDiff from git://github.com/JuliaDiff/ForwardDiff.jl.git
INFO: Installing Calculus v0.1.4
INFO: Installing DualNumbers v0.1.0
INFO: Installing ForwardDiff v0.0.2
INFO: Package database updated
ERROR: mismatch of non-finite elements: 
  hessian(output) = [NaN]
  hessianf(args...) = -352.65949394886985
 in test_approx_eq at test.jl:101
 in test_approx_eq at test.jl:119
 in include at ./boot.jl:244
 in include_from_node1 at ./loading.jl:128
 in anonymous at no file:8
 in include at ./boot.jl:244
 in include_from_node1 at loading.jl:128
 in process_options at ./client.jl:285
 in _start at ./client.jl:354
while loading /home/idunning/pkgtest/.julia/v0.3/ForwardDiff/test/FADTensor.jl, in expression starting on line 475
while loading /home/idunning/pkgtest/.julia/v0.3/ForwardDiff/run_tests.jl, in expression starting on line 5
INFO: Package database updated

Inference problem or ambiguous definition of `<`

As mentioned here, using isless(g::GradientNumber, x::Real) works, but <(g::GradientNumber, x::Real) doesn't.

I found that <(g::GradientNumber, x::Real) calls <(x::Real, y::Real) from Julia's promotion.jl instead of calling isless(g::ForwardDiffNumber, x::Real). This promotes both arguments to GradientNumber, and that reports there is no corresponding method (although I thought it was going to call the argument_error method from here).

I'm not sure if this is a method ambiguity or an inference bug.

Derivative of `atan2` function

ForwardDiff does not appear to support the atan2 function:

using ForwardDiff

function angle(xin)
    x, y = xin
    atan2(y, x)
end

angleJ = jacobian(angle)

angleJ([1.0, 2.0]) # exception!

I don't know enough about how to program against the dual-number implementation used by ForwardDiff. However, I believe the derivative at all points is the same as the derivative of arctan(y/x), i.e. 1 / (1 + (y/x)^2). CORRECTION: We should be using the partial derivatives of this function w.r.t. x and y (see below).
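Spelling out those partials for reference (and, as far as I know, current ForwardDiff versions support the two-argument atan(y, x) out of the box):

datan2_dy(y, x) =  x / (x^2 + y^2)   # ∂/∂y atan2(y, x)
datan2_dx(y, x) = -y / (x^2 + y^2)   # ∂/∂x atan2(y, x)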

Differentiation of functions of the form `f!(x, y)` where `x` is the input and `y` is the output

If memory serves, a previous version of ForwardDiff supported functions that looked like this:

function sphere2cart!(xin::Vector, xout::Vector)
    rho, theta, phi = xin
    rho_sin_phi = rho * sin(phi)
    xout[1] = rho_sin_phi * cos(theta)
    xout[2] = rho_sin_phi * sin(theta)
    xout[3] = rho * cos(phi)
end

However now (according to the docs, anyway), it seems I am only allowed to write:

function sphere2cart(xin::Vector)
    rho, theta, phi = xin
    rho_sin_phi = rho * sin(phi)
    x = rho_sin_phi * cos(theta)
    y = rho_sin_phi * sin(theta)
    z = rho * cos(phi)
    return [x, y, z]
end

Sadly, this second form is considerably slower, due to the allocation of the result vector. Is there any way to take the Jacobian of a vector-valued function written in the result-placement style?
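For reference, current ForwardDiff supports exactly this style; note that the output vector comes first, both in the function signature and in the call (a sketch):

using ForwardDiff

function sphere2cart!(xout::Vector, xin::Vector)   # output-first convention
    rho, theta, phi = xin
    rho_sin_phi = rho * sin(phi)
    xout[1] = rho_sin_phi * cos(theta)
    xout[2] = rho_sin_phi * sin(theta)
    xout[3] = rho * cos(phi)
    return xout
end

xout = zeros(3)
ForwardDiff.jacobian(sphere2cart!, xout, [1.0, pi/4, pi/4])  # 3×3 Jacobian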

Hessian of a vector function?

Hi,

Is it possible to calculate Hessian of a vector valued function ?

E.g, lets say I have a function:
f(x::Vector) = x./(sqrt(x[1]^2+x[2]^2+x[3]^3))

I can easily calculate its Jacobian by:
ForwardDiff.jacobian(f,rand(3))

but I get a convert error when I try to calculate the Hessian. I assume this is because hessian expects a scalar function. Is there a trick to get around this?

The Hessian for the above example would be a 3-d tensor, with each matrix slice corresponding to the Hessian of one output component of the function.
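One workaround, sketched: take the Hessian of each scalar output component separately, which yields exactly those tensor slices (I squared all three terms in f here, assuming the ^3 above was a typo):

using ForwardDiff

f(x::Vector) = x ./ sqrt(x[1]^2 + x[2]^2 + x[3]^2)
x0 = rand(3)
slices = [ForwardDiff.hessian(x -> f(x)[i], x0) for i in 1:3]  # slices[i] is the Hessian of f_i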

Possible second argument to input function

As I am trying to connect ForwardDiff with Lora, I came to realise that this can work in only one of two possible cases, namely when the log-target function has a signature such as logtarget(x::Vector). There is a second case, however, which is perhaps more common and interesting in real applications; in this latter case, the log-target takes a second argument, which may be a cell array (or dictionary, but let's say cell array for now) of auxiliary variables.

The main question is whether we can allow ForwardDiff to work with functions with a signature like logtarget(x::Vector, y::Vector). In principle, ForwardDiff will carry on working the same way by performing autodiff with respect to x only, i.e. by ignoring any input argument after the first one. y will only appear in the body of logtarget to pass in additional values, but as far as autodiff goes, y will be a constant.

All it takes to add this extra feature is a slight change of API allowing extra arguments that won't change the course of differentiation. Do you think we could possibly deal with this?
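Until then, the closure route covers this whenever logtarget is available to wrap (a sketch with a made-up log-target):

using ForwardDiff

logtarget(x, y) = -sum(abs2, x .- y) / 2               # hypothetical log-target with auxiliary data y
y = [1.0, 2.0]
ForwardDiff.gradient(x -> logtarget(x, y), zeros(2))   # y stays constant during autodiff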

Package separation

@fredo-dedup, I opened this as a separate issue, so as to comply with what has been asked. Once you create your ReverseDiffSource package and place your autodiff code there, I will then clean up the present ForwardDiff to register it.

A vector of second partial derivatives?

Is there a quick way to compute a vector of second partial derivatives? Essentially, what I want is the diagonal of the Hessian, but since I only want the diagonal, computing the entire matrix is way too costly, and it feels like there should be a better way.
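One approach, sketched: compute each diagonal entry with two nested scalar derivatives along that coordinate, giving n univariate passes instead of an O(n^2) Hessian:

using ForwardDiff

function hessian_diag(f, x)
    map(eachindex(x)) do i
        g(t) = f([x[1:i-1]; t; x[i+1:end]])    # restrict f to the i-th coordinate
        ForwardDiff.derivative(s -> ForwardDiff.derivative(g, s), x[i])
    end
end

hessian_diag(x -> sum(x .^ 3), [1.0, 2.0])     # [6.0, 12.0], i.e. 6 .* x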

Installing ForwardDiff

I am a new user to ForwardDiff and not super Julia savvy. I tried installing it on a brand new Amazon cloud instance running RHEL 7. This Linux install is clean and up to date; I have installed nothing other than Julia 0.4-dev and dependencies on it. I got Julia from

https://copr.fedoraproject.org/coprs/nalimilan/julia/

Pkg.add("ForwardDiff") works fine.

When I type using ForwardDiff I get the following error:

julia> using ForwardDiff
ERROR: LoadError: syntax: invalid "import" statement
while loading /home/ec2-user/.julia/v0.4/ForwardDiff/src/ForwardDiff.jl, in expression starting on line 25

Type inference problem with binary operators on Julia master

Just a heads up that something is broken in type inference when using Julia master.

Using this simple function:

function foo(x)
    println(eltype(x))
    b = 5.0
    c = x / b
    println(eltype(c))

    d = x * (1/b)
    println(eltype(d))
    return 3 * c
end

gives on 0.4.1 the normal:

julia> ForwardDiff.jacobian(foo, rand(2))
ForwardDiff.GradientNumber{2,Float64,Tuple{Float64,Float64}}
ForwardDiff.GradientNumber{2,Float64,Tuple{Float64,Float64}}
ForwardDiff.GradientNumber{2,Float64,Tuple{Float64,Float64}}
2x2 Array{Float64,2}:
 0.6  0.0
 0.0  0.6

However, on 0.5 this gives:

julia> ForwardDiff.jacobian(foo, rand(2))
ForwardDiff.GradientNumber{2,Float64,Tuple{Float64,Float64}}
########
ForwardDiff.GradientNumber{N,T,C} # <------ NOTE
########
ForwardDiff.GradientNumber{2,Float64,Tuple{Float64,Float64}}
ERROR: no promotion exists for Int64 and ForwardDiff.GradientNumber{N,T,C}
 [inlined code] from promotion.jl:160
 in .* at arraymath.jl:118
 in foo at none:9
 in _calc_jacobian at /home/kristoffer/.julia/v0.5/ForwardDiff/src/api/jacobian.jl:101
 in jacobian at /home/kristoffer/.julia/v0.5/ForwardDiff/src/api/jacobian.jl:84
 in eval at ./boot.jl:263

It seems that the division x/b causes Julia to lose type inference for the array.

[PkgEval] ForwardDiff may have a testing issue on Julia 0.3 (2014-05-26)

PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3). The results of this script are used to generate a package listing enhanced with testing results.

On Julia 0.3

  • On 2014-05-25 the testing status was Tests pass.
  • On 2014-05-26 the testing status changed to Tests fail, but package loads.

Tests pass. means that PackageEvaluator found the tests for your package, executed them, and they all passed.

Tests fail, but package loads. means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.

This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.

Calculation of the gradient of a function that contains derivatives of another function freezes

The minimal example that I could find:

julia> f(x) = x/2
f (generic function with 1 method)

julia> dfdx = derivative(f)
d (generic function with 1 method)

julia> g(x) = dfdx(x[1])*f(x[2])
g (generic function with 1 method)

julia> ForwardDiff.gradient(g, [1., 2.])
# Does not return, freezes

Note that this bug doesn't appear if f is the identity function or something like 2x; also, when g takes only one argument everything is OK, but the bug appears in all other, more complex cases. I'm using Julia v0.4.0-rc2 and ForwardDiff v0.1.0.

[PkgEval] ForwardDiff may have a testing issue on Julia 0.3 (2014-05-21)

PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3). The results of this script are used to generate a package listing enhanced with testing results.

On Julia 0.3

  • On 2014-05-20 the testing status was Tests pass.
  • On 2014-05-21 the testing status changed to Tests fail, but package loads.

Tests pass. means that PackageEvaluator found the tests for your package, executed them, and they all passed.

Tests fail, but package loads. means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.

This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.

There are lots of warnings when used with Julia 0.4

julia> Pkg.test("ForwardDiff")
INFO: Testing ForwardDiff
Running tests:
 * dual_fad.jl
Warning: New definition 
    *(T<:Real,ForwardDiff.GraDual{T<:Real,n}) at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:108
is ambiguous with: 
    *(Bool,T<:Number) at bool.jl:47.
To fix, define 
    *(Bool,_<:ForwardDiff.GraDual{Bool,n})
before the new definition.
Warning: New definition 
    *(T<:Real,ForwardDiff.FADHessian{T<:Real,n}) at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADHessian.jl:108
is ambiguous with: 
    *(Bool,T<:Number) at bool.jl:47.
To fix, define 
    *(Bool,_<:ForwardDiff.FADHessian{Bool,n})
before the new definition.
Warning: New definition 
    *(T<:Real,ForwardDiff.FADTensor{T<:Real,n}) at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADTensor.jl:133
is ambiguous with: 
    *(Bool,T<:Number) at bool.jl:47.
To fix, define 
    *(Bool,_<:ForwardDiff.FADTensor{Bool,n})
before the new definition.
 * GraDual.jl
WARNING: oftype{T}(::Type{T},c) is deprecated, use convert(T,c) instead.
 in depwarn at ./deprecated.jl:40
 in oftype at deprecated.jl:29
 in log2 at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:149
 in f at /Users/dhlin/.julia/v0.4/ForwardDiff/test/GraDual.jl:90
 in include at ./boot.jl:250
 in include_from_node1 at ./loading.jl:129
 in anonymous at no file:8
 in include at ./boot.jl:250
 in include_from_node1 at loading.jl:129
 in process_options at ./client.jl:305
 in _start at ./client.jl:389
WARNING: oftype{T}(::Type{T},c) is deprecated, use convert(T,c) instead.
 in depwarn at ./deprecated.jl:40
 in oftype at deprecated.jl:29
 in log10 at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/GraDual.jl:150
 in f at /Users/dhlin/.julia/v0.4/ForwardDiff/test/GraDual.jl:90
 in include at ./boot.jl:250
 in include_from_node1 at ./loading.jl:129
 in anonymous at no file:8
 in include at ./boot.jl:250
 in include_from_node1 at loading.jl:129
 in process_options at ./client.jl:305
 in _start at ./client.jl:389
 * FADHessian.jl
WARNING: oftype{T}(::Type{T},c) is deprecated, use convert(T,c) instead.
 in depwarn at ./deprecated.jl:40
 in oftype at deprecated.jl:29
 in log2 at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADHessian.jl:212
 in f at /Users/dhlin/.julia/v0.4/ForwardDiff/test/FADHessian.jl:179
 in include at ./boot.jl:250
 in include_from_node1 at ./loading.jl:129
 in anonymous at no file:8
 in include at ./boot.jl:250
 in include_from_node1 at loading.jl:129
 in process_options at ./client.jl:305
 in _start at ./client.jl:389
WARNING: oftype{T}(::Type{T},c) is deprecated, use convert(T,c) instead.
 in depwarn at ./deprecated.jl:40
 in oftype at deprecated.jl:29
 in log10 at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADHessian.jl:225
 in f at /Users/dhlin/.julia/v0.4/ForwardDiff/test/FADHessian.jl:179
 in include at ./boot.jl:250
 in include_from_node1 at ./loading.jl:129
 in anonymous at no file:8
 in include at ./boot.jl:250
 in include_from_node1 at loading.jl:129
 in process_options at ./client.jl:305
 in _start at ./client.jl:389
 * FADTensor.jl
WARNING: int64(x::FloatingPoint) is deprecated, use round(Int64,x) instead.
 in depwarn at ./deprecated.jl:40
 in int64 at deprecated.jl:29
 in t2h at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADTensor.jl:104
 in ^ at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADTensor.jl:221
 in f at /Users/dhlin/.julia/v0.4/ForwardDiff/test/FADTensor.jl:8
 in include at ./boot.jl:250
 in include_from_node1 at ./loading.jl:129
 in anonymous at no file:8
 in include at ./boot.jl:250
 in include_from_node1 at loading.jl:129
 in process_options at ./client.jl:305
 in _start at ./client.jl:389
WARNING: oftype{T}(::Type{T},c) is deprecated, use convert(T,c) instead.
 in depwarn at ./deprecated.jl:40
 in oftype at deprecated.jl:29
 in log2 at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADTensor.jl:327
 in f at /Users/dhlin/.julia/v0.4/ForwardDiff/test/FADTensor.jl:347
 in include at ./boot.jl:250
 in include_from_node1 at ./loading.jl:129
 in anonymous at no file:8
 in include at ./boot.jl:250
 in include_from_node1 at loading.jl:129
 in process_options at ./client.jl:305
 in _start at ./client.jl:389
WARNING: oftype{T}(::Type{T},c) is deprecated, use convert(T,c) instead.
 in depwarn at ./deprecated.jl:40
 in oftype at deprecated.jl:29
 in log10 at /Users/dhlin/.julia/v0.4/ForwardDiff/src/typed_fad/FADTensor.jl:349
 in f at /Users/dhlin/.julia/v0.4/ForwardDiff/test/FADTensor.jl:347
 in include at ./boot.jl:250
 in include_from_node1 at ./loading.jl:129
 in anonymous at no file:8
 in include at ./boot.jl:250
 in include_from_node1 at loading.jl:129
 in process_options at ./client.jl:305
 in _start at ./client.jl:389
INFO: ForwardDiff tests passed
INFO: No packages to install, update or remove

Applying ForwardDiff to functions that return literal constants results in a conversion error

This confused me for a while (well, in a more complex setting):

julia> derivative(x->0.0, 5.)
ERROR: MethodError: `convert` has no method matching convert(::Type{ForwardDiff.ForwardDiffResult{Float64}}, ::Float64)
This may have arisen from a call to the constructor ForwardDiff.ForwardDiffResult{Float64}(...),
since type constructors fall back to convert methods.
Closest candidates are:
  call{T}(::Type{T}, ::Any)
  convert{T}(::Type{T}, ::T)
  ForwardDiff.ForwardDiffResult{T}(::ForwardDiff.ForwardDiffNumber{N,T<:Number,C})
  ...
 in call at /home/mauro/.julia/v0.4/ForwardDiff/src/api/ForwardDiffResult.jl:11
 in derivative at /home/mauro/.julia/v0.4/ForwardDiff/src/api/derivative.jl:24

Could the error give a hint that `zero` needs to be used?

Also, the documentation says that the method needs to be type-stable, which x->0.0 is. Is the actual requirement that the input and output types be equal?
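
For reference, a minimal sketch of the usual workaround (shown here with the current ForwardDiff.derivative API rather than the version above): return `zero(x)` instead of a literal constant, so the output's number type follows the input's and stays a dual number during differentiation:

julia> using ForwardDiff

julia> ForwardDiff.derivative(x -> zero(x), 5.0)  # constant tracks the input type
0.0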

`Irrational` values don't work as input to API functions

This seems like a small detail to fix up:

derivative(sin)(pi)
ERROR: MethodError: `convert` has no method matching convert(::Type{Irrational{:π}}, ::Int64)
This may have arisen from a call to the constructor Irrational{:π}(...),
since type constructors fall back to convert methods.
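
A minimal workaround sketch (assuming the current ForwardDiff API; the point is only to convert the Irrational to a concrete floating-point type before calling the API function):

julia> using ForwardDiff

julia> ForwardDiff.derivative(sin, float(pi))  # float(pi) isa Float64
-1.0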

[PkgEval] ForwardDiff may have a testing issue on Julia 0.3 (2015-06-24)

PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.3) and the nightly build of the unstable version (0.4). The results of this script are used to generate a package listing enhanced with testing results.

On Julia 0.3

  • On 2015-06-23 the testing status was Tests pass.
  • On 2015-06-24 the testing status changed to Tests fail.

This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt out of these status-change messages, reply to this message saying so and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl, please file an issue at its repository. For example, your package may be untestable on the test machine due to a dependency; in that case an exception can be added.

Test log:

>>> 'Pkg.add("ForwardDiff")' log
INFO: Cloning cache of ForwardDiff from git://github.com/JuliaDiff/ForwardDiff.jl.git
INFO: Installing Calculus v0.1.8
INFO: Installing DualNumbers v0.1.3
INFO: Installing ForwardDiff v0.0.2
INFO: Installing NaNMath v0.0.2
INFO: Package database updated

>>> 'Pkg.test("ForwardDiff")' log
INFO: Testing ForwardDiff
Running tests:
 * dual_fad.jl
 * GraDual.jl
 * FADHessian.jl
ERROR: mismatch of non-finite elements: 
  hessian(output) = [NaN]
  hessianf(args...) = -352.65949394886985
 in test_approx_eq at test.jl:101
 in test_approx_eq at test.jl:119
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in anonymous at no file:8
 in include at ./boot.jl:245
 in include_from_node1 at loading.jl:128
 in process_options at ./client.jl:285
 in _start at ./client.jl:354
while loading /home/vagrant/.julia/v0.3/ForwardDiff/test/FADHessian.jl, in expression starting on line 254
while loading /home/vagrant/.julia/v0.3/ForwardDiff/test/runtests.jl, in expression starting on line 5

=============================[ ERROR: ForwardDiff ]=============================

failed process: Process(`/home/vagrant/julia/bin/julia /home/vagrant/.julia/v0.3/ForwardDiff/test/runtests.jl`, ProcessExited(1)) [1]

================================================================================
INFO: No packages to install, update or remove
ERROR: ForwardDiff had test errors
 in error at error.jl:21
 in test at pkg/entry.jl:718
 in anonymous at pkg/dir.jl:28
 in cd at ./file.jl:20
 in cd at pkg/dir.jl:28
 in test at pkg.jl:67
 in process_options at ./client.jl:213
 in _start at ./client.jl:354


>>> End of log

Nested derivative functions

Hi, I got my code working with the help of the last issue, #30. However, I found another example of how I would like my code to work.

Like the last example:

using ForwardDiff
f1(s) = (s/3)^2
f2(x) = x[1]^2 + exp(3*x[2]/sqrt(4))
function f(x)
    g = forwarddiff_gradient(f2, Float64, fadtype=:typed, n=2)
    return f1(x[1]) + g(x)
end
g = forwarddiff_gradient(f, Float64, fadtype=:typed, n=2)
g([-1., 2.0])

I ended up with this error message:

LoadError: MethodError: `g` has no method matching g(::Array{ForwardDiff.GraDual{Float64,2},1})
while loading In[11], in expression starting on line 9

 in f at In[11]:6
 in g at C:\Users\ovax03\.julia\v0.4\ForwardDiff\src\typed_fad\GraDual.jl:188

I already found a workaround, but I'm not sure whether it should work like this:

using ForwardDiff
f1(s) = (s/3)^2
f2(x) = x[1]^2 + exp(3*x[2]/sqrt(4))
function f(x)
    xd = map(t -> t.v, x)  # strip the dual part, keeping only the plain values
    g = forwarddiff_gradient(f2, Float64, fadtype=:typed, n=2)
    return f1(x[1]) + g(xd)
end
g = forwarddiff_gradient(f, Float64, fadtype=:typed, n=2)
g([-1., 2.0])

which gave me a Hessian as expected:

2x2 Array{Float64,2}:
 -0.222222  0.0
 -0.222222  0.0

Note: I'm working with finite element material models, which happen to involve this kind of function.
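
For what it's worth, a minimal sketch of the same nesting with the current ForwardDiff API (ForwardDiff.gradient/ForwardDiff.jacobian rather than the older forwarddiff_gradient used above), where the dual number types nest automatically so no manual stripping is needed:

using ForwardDiff

f2(x) = x[1]^2 + exp(3*x[2]/sqrt(4))

g2(x) = ForwardDiff.gradient(f2, x)    # inner gradient; also accepts dual inputs
ForwardDiff.jacobian(g2, [-1.0, 2.0])  # Hessian of f2 via nested forward AD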

[PackageEvaluator.jl] Your package ForwardDiff may have a testing issue.

This issue is being filed by a script, but if you reply, I will see it.

PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3).

The results of this script are used to generate a package listing enhanced with testing results.

The status of this package, ForwardDiff, on...

  • Julia 0.2 is 'Tests fail, but package loads.' PackageEvaluator.jl
  • Julia 0.3 is 'Tests pass.' PackageEvaluator.jl

'No tests, but package loads.' can be due to there being no tests (you should write some if you can!) but can also be due to PackageEvaluator not being able to find your tests. Consider adding a test/runtests.jl file.

'Package doesn't load.' is the worst-case scenario. Sometimes this arises because your package doesn't have BinDeps support, or needs something that can't be installed with BinDeps. If this is the case for your package, please file an issue and an exception can be made so your package will not be tested.

This automatically filed issue is a one-off message. Starting soon, issues will only be filed when the testing status of your package changes in a negative direction (gets worse). If you'd like to opt out of these status-change messages, reply to this message.

gradient only accurate to 10 digits?

Here is a function I'd like to calculate the gradient of:

function getW33(kv::Vector)
    k = kv[1]
    sqrt2l = sqrt(2.0)
    t2 = k*k
    t3 = acos(t2 - 1.0)
    t4 = 2.0 - t2
    t6 = 2*π - t3
    t5 = 1.0/t4
    W = (t6*sqrt(t5) - k)*t5
    return W
end

Its actual derivative at kv = [-0.5] is -2.8185256628482382, but using the gradient function I get an answer that is only accurate to 10 digits:

g = gradient(getW33)
g([-0.5])[1]
> -2.8185256629946482

Using the complex-step derivative I get the full 16 digits of accuracy.

kv = [complex(-0.5,1E-30)]
imag(getW33(kv)) / 1E-30
> -2.8185256628482387

I was under the impression that ForwardDiff was accurate to machine precision; is that wrong?
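
One way to check where the gap comes from (a sketch assuming the current scalar API, ForwardDiff.derivative; the complex-step value above serves as the reference):

using ForwardDiff

w(k) = getW33([k])

d_ad = ForwardDiff.derivative(w, -0.5)        # forward-mode derivative
d_cs = imag(w(complex(-0.5, 1e-30))) / 1e-30  # complex-step reference
abs(d_ad - d_cs) / abs(d_cs)                  # relative error

With current versions of ForwardDiff the two should agree to roughly machine precision, which makes it easy to check whether the gap reported here persists.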

I am on Julia 0.4, MacOSX, LLVM 3.3

thanks,
Nitin
