
ForneyLab.jl's Introduction

ForneyLab.jl


ForneyLab.jl is a Julia package for automatic generation of (Bayesian) inference algorithms. Given a probabilistic model, ForneyLab generates efficient Julia code for message passing-based inference. It uses the model structure to generate an algorithm that consists of a sequence of local computations on a Forney-style factor graph (FFG) representation of the model. For an excellent introduction to message passing and FFGs, see The Factor Graph Approach to Model-Based Signal Processing by Loeliger et al. (2007). Moreover, for a comprehensive overview of the underlying principles behind this tool, see A Factor Graph Approach to Automated Design of Bayesian Signal Processing Algorithms by Cox et al. (2018).

We designed ForneyLab with a focus on flexible and modular modeling of time-series data. ForneyLab enables a user to:

  • Conveniently specify a probabilistic model;
  • Automatically generate an efficient inference algorithm;
  • Compile the inference algorithm to executable Julia code.

The current version supports belief propagation (sum-product message passing), variational message passing and expectation propagation.

The ForneyLab project page provides more background on ForneyLab as well as pointers to related literature and talks. For a practical introduction, have a look at the demos.

Documentation

Full documentation is available at the BIASlab website.

It is also possible to build the documentation locally. Just execute

$ julia make.jl

in the docs/ directory to build a local version of the documentation.

Installation

Install ForneyLab through the Julia package manager:

] add ForneyLab

If you want to be able to use the graph visualization functions, you will also need to have GraphViz installed. On Linux, just use apt-get install graphviz or yum install graphviz. On Windows, run the installer and afterwards manually add the path of the GraphViz installation to the PATH system variable. On macOS, use for example brew install graphviz. The dot command should then work from the command line.

Some demos use the PyPlot plotting module. Install it using ] add PyPlot.

Optionally, use ] test ForneyLab to validate the installation by running the test suite.

Getting started

There are demos available to get you started. Additionally, the ForneyLab project page contains a talk and other resources that might be helpful.
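
As a quick taste of the workflow, here is a minimal sketch that infers the mean of a Gaussian from a single observation. It assumes a recent ForneyLab version that provides messagePassingAlgorithm and algorithmSourceCode (both appear in the issues below); the constants are illustrative.

using ForneyLab

g = FactorGraph()

@RV m ~ GaussianMeanVariance(0.0, 100.0) # Vague prior on the unknown mean
@RV y ~ GaussianMeanVariance(m, 1.0)     # Observation model
placeholder(y, :y)                       # Mark y as observed data

algo = messagePassingAlgorithm(m)        # Schedule sum-product messages toward m
source_code = algorithmSourceCode(algo)  # Generate Julia source for that schedule
eval(Meta.parse(source_code))            # Compile the generated step!() function

marginals = step!(Dict(:y => 2.5))       # Run inference; marginals[:m] holds the posterior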

License

MIT License, Copyright (c) 2022 BIASlab (http://biaslab.org). See the LICENSE.md file for details.

ForneyLab.jl's People

Contributors

albertpod, anoukvdiepen, bartvanerp, bertdv, bvdmitri, github-actions[bot], ismailsenoz, ivan-bocharov, keithwm, kristofferc, magnuskoudahl, marcocox, mroavi, mschauer, murphyk, semihakbayrak, thijsvdlaar, vtjnash, wielrenner, wmkouw


ForneyLab.jl's Issues

Improve stability when inverting positive definite matrices

The cholinv function is used extensively to invert positive definite (mostly covariance/precision) matrices. This function assumes that the argument matrix should be positive definite, despite possible numerical instabilities. If an initial Cholesky decomposition fails, a slight regularization is added to the diagonal, and another attempt is made. However, the regularization is arbitrary, and might not be enough to render the matrix positive definite.

A simple fix might be (after a failed initial Cholesky decomposition) to obtain an eigenvalue decomposition of the matrix, set the negative eigenvalues to tiny, and reconstruct the inverse matrix:

using LinearAlgebra # for eigen() and Diagonal(); `tiny` is ForneyLab's small positive constant

W = [2.0 0.0 0.0; 0.0 0.2 -0.5; 0.0 -0.5 0.2] # A singular precision matrix
(l, Q) = eigen(W)               # Deconstruct: W == Q*Diagonal(l)*Q'
k = clamp.(l, tiny, Inf)        # Correct for negative eigenvalues
V = Q*Diagonal(1.0 ./ k)*Q'     # Inverted reconstruction (positive definite)

This approach is rather blunt (and more expensive), because it will render any matrix positive definite without raising an error. To mitigate this, we might raise a warning if one of the eigenvalues is significantly negative.
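
A hedged sketch of what such a guarded inversion could look like; the function name robust_cholinv, the TINY floor, and the warning threshold are illustrative and not part of ForneyLab's API (ForneyLab's own tiny constant may differ).

using LinearAlgebra

const TINY = 1e-12  # illustrative floor for clamped eigenvalues

function robust_cholinv(W::AbstractMatrix; warn_threshold=-1e-6)
    try
        return inv(cholesky(Symmetric(W)))  # fast path: W is numerically positive definite
    catch
        (l, Q) = eigen(Symmetric(W))        # fallback: clamp the spectrum
        any(l .< warn_threshold) && @warn "Matrix has a significantly negative eigenvalue" minimum(l)
        k = clamp.(l, TINY, Inf)
        return Q*Diagonal(1.0 ./ k)*Q'      # positive definite inverse reconstruction
    end
end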

Composite nodes: message passing on subgraphs without custom rules

Problem:
When a composite node does not have a custom message passing rule, the desired behavior is that the composite node is simply expanded, including its internal schedule. However, the number of messages initialized in the function messagePassingAlgorithm() in src/engine/julia/message_passing.jl is based on the condensed version of the message passing schedule.
Therefore the following error occurs when executing the message passing.

BoundsError: attempt to access 7-element Array{Message,1} at index [8]
Stacktrace:
 [1] setindex! at .\array.jl:782 [inlined]
 [2] step!(::Dict{Symbol,Real}, ::Dict{Any,Any}, ::Array{Message,1}) at .\none:12
 [3] step! at .\none:5 [inlined] (repeats 2 times)
 [4] top-level scope at .\In[15]:26

Proposed solution
Moving n_messages = length(schedule) after the second condense() call in the function above solves the problem and does not affect the composite_node.ipynb demo. I am, however, not sure whether it conflicts with any other part of the code.

Help with hierarchical HMMs

Hi, I have some trouble implementing a hierarchical HMM and hope that someone can help me resolve the issue. I was following the tutorial on HMMs and made the following model

g = FactorGraph()

@RV A1 ~ Dirichlet(ones(15, 3)) # Vague prior on transition model
@RV A2 ~ Dirichlet(ones(15, 5)) # Vague prior on transition model
@RV B ~ Dirichlet([10.0 1.0 1.0; 1.0 10.0 1.0; 1.0 1.0 10.0]) # Stronger prior on observation model
@RV s_0 ~ Categorical(1/3*ones(3))
@RV d_0 ~ Categorical(ones(5)/3)
@RV z_0 = (s_0 - 1) * 5  + d_0

s = Vector{Variable}(undef, n_samples) # one-hot coding
d = Vector{Variable}(undef, n_samples) # one-hot coding
z = Vector{Variable}(undef, n_samples) # one-hot coding
x = Vector{Variable}(undef, n_samples) # one-hot coding
z_t_min = z_0
for t = 1:n_samples
    @RV s[t] ~ Transition(z_t_min, A1)
    @RV d[t] ~ Transition(z_t_min, A2)
    @RV x[t] ~ Transition(s[t], B)
    @RV z[t] = (s[t] - 1) * 5 + d[t]
    
    z_t_min = z[t]
    
    placeholder(x[t], :x, index=t, dims=(3,))
end

However if I try to define a factorisation
# Define the recognition factorization
q = RecognitionFactorization(A1, A2, B, [s_0; s], [d_0; d], ids=[:A1, :A2, :B, :S, :D])

I get the following error

type Nothing has no field node

i also tried
# Define the recognition factorization
q = RecognitionFactorization(A1, A2, B, [s_0; s], [d_0; d], [z_0; z], ids=[:A1, :A2, :B, :S, :D, :Z])

but nothing changed ...
Any ideas what I am doing wrong?

Conflicts with existing identifiers/uses needing to be qualified

A couple of times after entering using ForneyLab into Julia, I got various warnings about conflicts with existing identifiers and uses needing to be qualified. The warnings are shown below:

julia> using ForneyLab
[ Info: Precompiling ForneyLab [9fc3f58a-c2cc-5bff-9419-6a294fefdca9]
WARNING: using ForneyLab.sample in module Main conflicts with an existing identifier.
WARNING: using ForneyLab.Multivariate in module Main conflicts with an existing identifier.
WARNING: using ForneyLab.Univariate in module Main conflicts with an existing identifier.
WARNING: using ForneyLab.Exponential in module Main conflicts with an existing identifier.
WARNING: using ForneyLab.Wishart in module Main conflicts with an existing identifier.
WARNING: using ForneyLab.Beta in module Main conflicts with an existing identifier.
WARNING: using ForneyLab.Gamma in module Main conflicts with an existing identifier.
WARNING: using ForneyLab.Dirichlet in module Main conflicts with an existing identifier.
WARNING: using ForneyLab.Bernoulli in module Main conflicts with an existing identifier.
WARNING: using ForneyLab.LogNormal in module Main conflicts with an existing identifier.
WARNING: using ForneyLab.Categorical in module Main conflicts with an existing identifier.
julia> using ForneyLab

julia> WARNING: both ForneyLab and Distributions export "Bernoulli"; uses of it in module Main must be qualified
WARNING: both ForneyLab and Distributions export "Beta"; uses of it in module Main must be qualified
WARNING: both ForneyLab and Distributions export "Categorical"; uses of it in module Main must be qualified
WARNING: both ForneyLab and Distributions export "Dirichlet"; uses of it in module Main must be qualified
WARNING: both ForneyLab and Distributions export "Exponential"; uses of it in module Main must be qualified
WARNING: both ForneyLab and Distributions export "Gamma"; uses of it in module Main must be qualified
WARNING: both ForneyLab and Distributions export "LogNormal"; uses of it in module Main must be qualified
WARNING: both ForneyLab and Distributions export "Multivariate"; uses of it in module Main must be qualified
WARNING: both ForneyLab and Distributions export "Univariate"; uses of it in module Main must be qualified
WARNING: both ForneyLab and Distributions export "Wishart"; uses of it in module Main must be qualified
WARNING: both ForneyLab and Distributions export "sample"; uses of it in module Main must be qualified

Is this behaviour intentional?
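
These warnings come from Julia itself whenever two loaded packages export the same name; qualifying the clashing names resolves them. A minimal sketch with both packages loaded (constants are illustrative):

using ForneyLab
using Distributions

g = FactorGraph()
@RV x ~ ForneyLab.Gamma(2.0, 0.5)   # ForneyLab's Gamma factor node, qualified
d = Distributions.Gamma(2.0, 0.5)   # Distributions' Gamma distribution, qualified
rand(d)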

Add support for indexing with tuples to `placeholder`

Currently the placeholder function only accepts integer indices. It would be convenient to be able to index multidimensional data column-wise or row-wise (see e.g. #17).

An example that will index data column-wise:

for t = 1:T
    ...
    placeholder(y[t], :y, dims=(obs_dim,), index=(:, t))
    ...
end 

Computing marginals from update rules directly

In some situations, algorithms can be efficiently implemented by recursive updates that directly set marginals instead of sending messages. One example is nonlinear estimation with the unscented Kalman filter, where marginals are efficiently computed by recursive estimation: "On approximate nonlinear Gaussian message passing for factor graphs" (Petersen, 2018). (Note that all computations remain local to the node).

ForneyLab requires that updates always explicitly compute messages. Currently, this is implemented by dividing the marginal by the incoming message, thus yielding an outbound message. (The multiplication of the incoming and outbound message yields the marginal.) In some situations, however, this division can be costly and/or lead to improper outbound messages.

It would be beneficial for efficiency and stability to incorporate some mechanism that can cope with computing marginals directly from a node update rule. Alternative ideas are also welcome.

Links to ForneyLab docs

The link to the ForneyLab documentation can only be found via the blue docs button on the GitHub page. The rest of the GitHub page does not include this link, and the BIASlab website lacks it as well.

Factor availability

I started converting some examples from Infer.NET.
I would like to know if the following factors are available or possible to get:

  • And, Or, Not
  • Difference (Subtraction), Ratio (Division)
  • Pow, Sqrt, Log, Inverse
  • Min, Max
  • IsGreaterThan, IsBetween, IsPositive

import/using keyword in ForneyLab.jl

Lines 8-14 in the current ForneyLab.jl package header use only the import keyword:
import Base: show, convert, ==,
...
import Statistics: mean, var, cov

This makes sense when the imported function is indeed extended, but sometimes it isn't, which can be a little confusing.
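
A minimal, generic Julia illustration of the distinction (not ForneyLab code, and MyNode is a hypothetical type): import is required to extend a function by its bare name, while using suffices for merely calling it.

import Base: show           # import: allows adding methods to Base.show by its bare name

struct MyNode
    id::Symbol
end

show(io::IO, n::MyNode) = print(io, "MyNode(", n.id, ")")  # extends Base.show

using Statistics: mean      # using: enough to call mean, but not to extend it by its bare name
mean([1.0, 2.0, 3.0])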

Handling loops in variational message passing algorithm.

I am trying to build the inference algorithm for the coupled random walk model:

[Screenshot of the coupled random walk factor graph omitted]

We assume the following factorization: q = PosteriorFactorization(x_0, x, γ_x, w_0, w, γ_w)
However, ForneyLab throws an error when calling variationalAlgorithm(q, free_energy=false) due to the loop (green edges):
ArgumentError: The input graph contains a loop around Interface 2 (m) of GaussianMeanPrecision gaussianmeanprecision_3

In my opinion, if a loop occurs, ForneyLab should still be able to build the algorithm, but the user would have to provide a schedule.

Here is the code snippet:

using ForneyLab

n_samples = 2

fg = FactorGraph()

# Hidden process model
@RV x_0 ~ GaussianMeanVariance(placeholder(:m_x_0),
                               placeholder(:v_x_0))

@RV γ_x ~ ForneyLab.Gamma(placeholder(:a_x), placeholder(:b_x))

# Noise model
@RV w_0 ~ GaussianMeanVariance(placeholder(:m_w_0),
                               placeholder(:v_w_0))

@RV γ_w ~ ForneyLab.Gamma(placeholder(:a_w), placeholder(:b_w))

# Transition and observation models
x = Vector{Variable}(undef, n_samples)
w = Vector{Variable}(undef, n_samples)
y = Vector{Variable}(undef, n_samples)

x_i_min = x_0
w_i_min = w_0
for i in 1:n_samples
    global x_i_min, w_i_min
    @RV x[i] ~ GaussianMeanPrecision(x_i_min, γ_x)
    @RV w[i] ~ GaussianMeanPrecision(w_i_min, γ_w)
    @RV y[i] = x[i] + w[i]

    # Data placeholder
    placeholder(y[i], :y, index=i)

    # Reset state for next step
    x_i_min = x[i]
    w_i_min = w[i]
end

q = PosteriorFactorization(x_0, x, γ_x, w_0, w, γ_w, ids=[:X0 :X :ΓX :W0 :W :ΓW])
algo = variationalAlgorithm(q, free_energy=false)

How to implement boolean operations/factors for binary random variables, or more generally conditional probability tables?

I'm trying to work through Winn et al.'s MBML book using ForneyLab and am stuck trying to implement the model in Chapter 2. I've been through some of the demos and the documentation, and am sorry if I've missed something obvious.

This seems to boil down to implementing boolean factors and factors representing more general conditional probability tables for binary variables. Is it possible to do something like:

g = FactorGraph()

## prior statistics
p_x = placeholder(:p_x)
p_y = placeholder(:p_y)

## priors
@RV x ~ Bernoulli(p_x)
@RV y ~ Bernoulli(p_y)

## MOCK And factor
@RV z ~ And(x, y) 

placeholder(z)
;

and more generally for factors defined using conditional probability tables?

The model has both an AND factor and an "AddNoise" factor defined as a table, as seen below and in the book.

[Image of the AND and AddNoise factor tables omitted]

Many thanks!

Automated documentation generator

At the moment it is tedious to maintain the documentation, since it needs to be handwritten from scratch. Function definitions and explanations are repeated and as a result the documentation usually lags behind the actual code.

It would be convenient to have a script that automatically regenerates/supplements the .rst files in the doc folder from the docstrings in the source code. For example, an include statement in the .rst files might indicate the Julia source file for which the docstrings and the calling signatures should be included. Probably we only want to include the exported functions in the user documentation.

Simply running the script then incorporates the docstrings for the appropriate functions in the corresponding documentation file (in restructured text).
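
For reference, a docstring-driven setup could be a minimal Documenter.jl script; the sketch below is illustrative and may differ from the docs/make.jl mentioned in the README above.

using Documenter, ForneyLab

makedocs(
    sitename = "ForneyLab.jl",
    modules  = [ForneyLab],             # docstrings are pulled in via @docs/@autodocs blocks
    pages    = ["Home" => "index.md"],  # illustrative page layout
)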

UndefVarError: STDOUT not defined

I ran into this error when running the 1_state_estimation_forward_only demo using the Atom Juno IDE. I'm running Ubuntu 18.04.

The error occurs while trying to run

open(`dot -Tx11`, "w", STDOUT) do io

inside the function viewDotExternalInteractive(dot_graph::AbstractString).
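
For reference, STDOUT was renamed to the lowercase stdout in Julia 1.0, so a patched call would presumably look like the sketch below.

open(`dot -Tx11`, "w", stdout) do io
    write(io, dot_graph)  # stream the DOT source to the external viewer
end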

ProbabilityDistribution with Integers

The ProbabilityDistribution() function only takes floats as arguments and not integers. This is confusing, since the @RV macro accepts both. We should consider adapting the function to accept integers as well, to smooth the learning curve for new users.

Minimum example that fails

using ForneyLab
g = FactorGraph()
b = ProbabilityDistribution(Univariate, GaussianMeanPrecision, m=1, w=1)

Changing the last line to

b = ProbabilityDistribution(Univariate, GaussianMeanPrecision, m=1.0, w=1.0)

works just fine.

No max-sum algorithm?

Please correct me if I'm wrong but I did not see any code for the max-sum algorithm. Is that not supported yet?

Outgoing message of multiplication node

When trying to compute the outgoing message of the "gain" (multiplication) node, the following MWE fails

fg = FactorGraph()
@RV x ~ GaussianMeanVariance(1.0, 1.0)
@RV y = 4.0 * x

q = PosteriorFactorization(fg)
algo3 = messagePassingAlgorithm(y, id=:_y_fwd)

With the error and stacktrace

KeyError: key nothing not found

Stacktrace:
 [1] getindex at ./dict.jl:467 [inlined]
 [2] isDeterministic(::Interface, ::Dict{Interface,Bool}) at /home/mkoudahl/.config/julia/packages/ForneyLab/e83A8/src/factor_graph.jl:241
 [3] deterministicEdgeSet(::Edge) at /home/mkoudahl/.config/julia/packages/ForneyLab/e83A8/src/factor_graph.jl:203
 [4] deterministicEdges(::FactorGraph) at /home/mkoudahl/.config/julia/packages/ForneyLab/e83A8/src/factor_graph.jl:174
 [5] PosteriorFactorization(::FactorGraph) at /home/mkoudahl/.config/julia/packages/ForneyLab/e83A8/src/algorithms/posterior_factorization.jl:61
 [6] top-level scope at In[7]:6
 [7] include_string(::Function, ::Module, ::String, ::String) at ./loading.jl:1091
 [8] execute_code(::String, ::String) at /home/mkoudahl/.config/julia/packages/IJulia/rWZ9e/src/execute_request.jl:27
 [9] execute_request(::ZMQ.Socket, ::IJulia.Msg) at /home/mkoudahl/.config/julia/packages/IJulia/rWZ9e/src/execute_request.jl:86
 [10] #invokelatest#1 at ./essentials.jl:710 [inlined]
 [11] invokelatest at ./essentials.jl:709 [inlined]
 [12] eventloop(::ZMQ.Socket) at /home/mkoudahl/.config/julia/packages/IJulia/rWZ9e/src/eventloop.jl:8
 [13] (::IJulia.var"#15#18")() at ./task.jl:356

ForneyLab is v0.11.1. The use case is for demonstration purposes in the BMLIP course run by BIASlab.

Understanding the need for random variable and placeholder id's

I'm having difficulties trying to understand the need for the optional :id's that can be assigned to random variables and placeholders. I'm also trying to understand how to use them.

For example, in demo 1, the following script defines two random variables to which the id's :m_x_t_min and :v_x_t_min are implicitly assigned.

# declare priors as random variables
@RV m_x_t_min # m_x_t_min = Variable(id=:m_x_t_min)
@RV v_x_t_min # v_x_t_min = Variable(id=:v_x_t_min)

Afterward, two placeholders are defined to which the exact same id's are assigned, but this time explicitly.

# Placeholders for prior
placeholder(m_x_t_min, :m_x_t_min) # placeholder(:m_x_t_min) does not work
placeholder(v_x_t_min, :v_x_t_min)

When feeding data to the algorithm, the id's refer to the placeholders and not to the random variables (both have the same name), which is ambiguous.

# Prepare data and prior statistics
data = Dict(:y_t       => y[t],
            :m_x_t_min => m_x_t_min,
            :v_x_t_min => v_x_t_min)

I find this somewhat obscure. Is there a way in which we could make these id's transparent to the user?

Matrix multiplication with singleton dimension

Hi all,

I wanted to implement the following relationship into FL:
y = d*x
where x is a univariate RV, d a deterministic vector and y a multivariate RV.

In order to prevent issues with univariate-multivariate conversion, the variable x is implemented as a multivariate RV with just a single entry and d as a matrix, as follows:

add_dim(x::Array) = reshape(x, (size(x)...,1))
d = add_dim(collect(1:10))                           #  (d) 10x1
@RV x ~ GaussianMeanVariance([1.0], mat(2.0))        #  (x) 1x1
@RV y = d*x                                          #  (y) 10x1 = (d) 10x1 * (x) 1x1
GaussianMeanVariance(y, 1.5*ones(10), 2.5*Ic(10))    #  (y) 10x1

Problems arise in the corresponding messages. Here is one of these messages:

messages[3] = ruleSPMultiplicationIn1GNP(messages[2], nothing, Message(MatrixVariate, PointMass, m=[1; 2; 3; 4; 5; 6; 7; 8; 9; 10]))

The semicolons are meant to denote that we are dealing with a matrix (d) of dimension 10x1. However, in Julia [1; 2; 3; 4; 5; 6; 7; 8; 9; 10] gets parsed as a one-dimensional array, leading to errors because the m argument of the message is expected to be MatrixVariate. This only happens when the second dimension of the matrix has size 1.

Is there a way of implementing this relationship without throwing this error? I am now just editing the algorithm source code after it has been generated from

messages[3] = ruleSPMultiplicationIn1GNP(messages[2], nothing, Message(MatrixVariate, PointMass, m=[1; 2; 3; 4; 5; 6; 7; 8; 9; 10]))

to

messages[3] = ruleSPMultiplicationIn1GNP(messages[2], nothing, Message(MatrixVariate, PointMass, m=add_dim([1; 2; 3; 4; 5; 6; 7; 8; 9; 10])))

Mapping continuous variables into discrete variables

Hi,
Sorry that I ask my question as an issue. I am trying to use the ForneyLab tool for a hierarchical model in which the top level is continuous and the bottom level is discrete, and I would like to use a softmax function to map continuous Gaussian variables into discrete Categorical variables. However, I do not see any implemented softmax function in the set of distributions. How can I use this package to implement my model? Thanks!

Cycling Time Gaussian Mixture case

Hi, I have some questions related to this model (attached at the end):

  1. I cannot make the following model generate a solution close to the expected one. Can you suggest remedies?
  2. I am not sure if n_its is set correctly.
  3. Can I show results as specific distributions, such as Gamma or Dirichlet?
  4. Can you recommend any changes in the code for correctness or efficiency?

Code:
#---------------------------------------------------------------
# CyclingTime 3

using ForneyLab

TravelTime_data = [13.0, 17.0, 16.0, 12.0, 13.0, 12.0, 14.0, 18.0, 16.0, 16.0, 27.0, 32.0, 27.0, 32.0, 27.0, 32.0, 27.0, 32.0, 27.0, 32.0]
n=20

g3 = FactorGraph()

# Specify generative model
@RV _pi ~ Beta(1.0, 1.0)
@RV AverageTime_1 ~ GaussianMeanVariance(15.0, 100.0)
@RV TrafficNoise_1 ~ Gamma(2.0, 0.5)
@RV AverageTime_2 ~ GaussianMeanVariance(30.0, 100.0)
@RV TrafficNoise_2 ~ Gamma(2.0, 0.5)

z = Vector{Variable}(undef, n)
TravelTime = Vector{Variable}(undef, n)
for i = 1:n
    @RV z[i] ~ Bernoulli(_pi)
    @RV TravelTime[i] ~ GaussianMixture(z[i], AverageTime_1, TrafficNoise_1, AverageTime_2, TrafficNoise_2)
    placeholder(TravelTime[i], :TravelTime, index=i)
end

q = RecognitionFactorization(_pi, AverageTime_1, TrafficNoise_1, AverageTime_2, TrafficNoise_2, z,
                                ids=[:PI, :AverageTime_1, :TrafficNoise_1, :AverageTime_2, :TrafficNoise_2, :Z])

# Generate the algorithm
algo = variationalAlgorithm(q);
algo_F = freeEnergyAlgorithm(q);

eval(Meta.parse(algo))
eval(Meta.parse(algo_F));

data = Dict(:TravelTime => TravelTime_data)

# Prepare recognition distributions
marginals = Dict(:_pi => vague(Beta),
                 :AverageTime_1 => ProbabilityDistribution(Univariate, GaussianMeanVariance, m=-1.0, v=1e4),
                 :TrafficNoise_1 => vague(Gamma),
                 :AverageTime_2 => ProbabilityDistribution(Univariate, GaussianMeanVariance, m=1.0, v=1e4),
                 :TrafficNoise_2 => vague(Gamma))
for i = 1:n
    marginals[:z_*i] = vague(Bernoulli)
end

# Execute algorithm
n_its = n*2
F = Float64[]
for i = 1:n_its
    stepZ!(data, marginals)
    stepPI!(data, marginals)
    stepAverageTime_1!(data, marginals)
    stepTrafficNoise_1!(data, marginals)
    stepAverageTime_2!(data, marginals)
    stepTrafficNoise_2!(data, marginals)

    # Store variational free energy for visualization
    push!(F, freeEnergy(data, marginals))
end

mean(marginals[:AverageTime_1]), var(marginals[:AverageTime_1])
#(16.156251733553507, 0.08699525635593493)
mean(marginals[:AverageTime_2]), var(marginals[:AverageTime_2])
#(17.907081197841816, 3.5320629822501064)

mean(marginals[:TrafficNoise_1]), var(marginals[:TrafficNoise_1])
#(4.169853434502669, 5.143993801619434)
mean(marginals[:TrafficNoise_2]), var(marginals[:TrafficNoise_2])
#(0.029331249662152697, 0.00012996177209009975)
F #40-element Array{Float64,1}:

"""
Expected Results:
Average travel time distribution 1 = Gaussian(14.7, 0.3533)
Average travel time distribution 2 = Gaussian(29.51, 1.618)
Traffic noise distribution 1 = Gamma(7, 0.0403)[mean=0.2821]
Traffic noise distribution 2 = Gamma(3, 0.1013)[mean=0.304]
Mixing coefficient distribution = Dirichlet(11 3)
"""

After the latest updates, ForneyLab cannot build the message passing algorithm.

After the commit 61aba6fef0b06b12cbba177c60089a17d8ebd295 (Renamed update rules for Equality node.), ForneyLab fails to build the algorithm for the following state-space model:

z_t ~ N(z_{t-1}, \gamma^{-1})
x_t ~ N(z_{t}, \tau^{-1})
y_t ~ N(z_{t}, \omega*\tau^{-1})

The corresponding code:

using ForneyLab

# Building the model
n_samples = 1
fg = FactorGraph()

# State prior
@RV z_0 ~ GaussianMeanVariance(placeholder(:m_z_0), placeholder(:v_z_0))

@RV γ ~ Gamma(placeholder(:a_γ), placeholder(:b_γ))

@RV τ ~ Gamma(placeholder(:a_τ), placeholder(:b_τ))

# Transition
z = Vector{Variable}(undef, n_samples)
# Input
x = Vector{Variable}(undef, n_samples)
# Output
y = Vector{Variable}(undef, n_samples)
# Intervention
ω = Vector{Variable}(undef, n_samples)

z_i_min = z_0
for i in 1:n_samples

    @RV z[i] ~ GaussianMeanPrecision(z_i_min, γ)

    @RV x[i] ~ GaussianMeanPrecision(z[i], τ)

    @RV ω[i]

    @RV y[i] ~ GaussianMeanPrecision(z[i], ω[i]*τ)


    # Data placeholder
    placeholder(y[i], :y, index=i)
    placeholder(ω[i], :ω, index=i)
    placeholder(x[i], :x, index=i)

    # Reset state for next step
    z_i_min = z[i]
end

q = PosteriorFactorization(z_0, z, γ, τ, ids=[:Z0 :Z  :T])
algo = messagePassingAlgorithm(free_energy=true)

The error at the aforementioned commit:

LoadError: No applicable SumProductRule{Multiplication} update for Multiplication node with inbound types: Message{Union{Gamma, Wishart},var_type} where var_type<:ForneyLab.VariateType, Nothing, Message{PointMass,var_type} where var_type<:ForneyLab.VariateType

Improve error message for algo construction on model with dangling edges

Defining an algorithm that requires a backward message on a dangling edge will throw a cryptic error. For example:

using ForneyLab
g = FactorGraph()
@RV x ~ GaussianMeanVariance(0.0, 1.0)
@RV y ~ GaussianMeanVariance(0.0, 1.0)
@RV z = x + y
sumProductAlgorithm(x)

will throw

KeyError: key nothing not found

Improving this error should enable a user to more quickly identify the source of the problem. This issue relates to #64.

Think of a killer app

In order to truly show the added value of ForneyLab we should come up with a "killer application", based on a real problem, that wows people. Let's collect ideas here.

Remove antipatterns

This blogpost from Lyndon White mentions several antipatterns for Julia code: https://white.ucc.asn.au/2020/04/19/Julia-Antipatterns.html (thanks @bauglir for pointing this out). Some of the antipatterns mentioned here are also present in the FL code.

  1. The most prominent one is the over-constraining of argument types. Some very specific constraints are needed for the update rules, but in other places the constraints need not be as strict as they are now (see the sketch after this list);

  2. There are also still some Dicts that should be NamedTuples;

  3. We could make more efficient use of Julia's type inference system to get rid of the matches functions that are used for update rule-matching.

  4. Macros, e.g. @symmetrical, can be improved for clarity.
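
A generic illustration of the first point (not ForneyLab code): an over-constrained signature versus a loosened one with identical behavior.

# Too strict: only accepts Vector{Float64}, rejecting ranges, views, Float32 data, ...
normalize_strict(v::Vector{Float64}) = v ./ sum(v)

# Loosened: any real-valued vector works, with identical results for the original inputs
normalize_loose(v::AbstractVector{<:Real}) = v ./ sum(v)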

Unexpected error encountered while building FE algorithm

Together with @ismailsenoz we encountered a problem with building the free energy algorithm for the following model (summation of two Gaussian RVs):

using ForneyLab

# Model
g = FactorGraph()

@RV x ~ GaussianMeanVariance(placeholder(:mx), placeholder(:vx))
@RV y ~ GaussianMeanVariance(placeholder(:my), placeholder(:vy))
@RV z = dot(x, 1.0) + dot(y, 1.0)

# Data placeholder
placeholder(z, :z)

# Factorization
q = PosteriorFactorization(x, y)
algo = variationalAlgorithm(q, free_energy=true)

By calling the variationalAlgorithm(q, free_energy=true) we get the following error:

ERROR: KeyError: key Variable(:clamp_1, Edges:
Edge belonging to variable clamp_1: ( clamp_1.i[out] )----( dotproduct_1.i[in1] ).
) not found
Stacktrace:
 [1] getindex(::Dict{Union{ForneyLab.Cluster, Variable},MarginalEntry}, ::Variable) at ./dict.jl:477
 [2] assembleFreeEnergy!(::InferenceAlgorithm) at ~/.julia/dev/ForneyLab/src/engines/free_energy_assemblers.jl:28
 [3] variationalAlgorithm(::PosteriorFactorization; id::Symbol, free_energy::Bool) at ~/.julia/dev/ForneyLab/src/algorithms/variational_bayes/naive_variational_bayes.jl:30
 [4] top-level scope at none:0

It seems like ForneyLab tries to assign the entropy to a clamped variable, which shouldn't be the case.

NOTE: the dot product node in the example is meaningless; it was included only to give a minimal example that reproduces the error.

Nonlinear node fails when input distributions are specified using integers

The nonlinear node fails in the local "approximate" function if input distributions are specified using integers.

function approximate(x_hat::Union{Float64, Vector{Float64}}, g::Function, J_g::Function)
    A = J_g(x_hat)
    b = g(x_hat) .- A*x_hat

    return (A, b)
end

This leads to unintuitive behaviour. For example, the following graph is invalid:

using ForneyLab

fa(x) = 1 * tanh(x)
faprime(x) = 1-1*tanh(x)^2  

@RV x ~ GaussianMeanVariance(0,1)

@RV u ~ Nonlinear(x,fa,faprime)

@RV y ~ GaussianMeanVariance(u,1)
placeholder(y,:y)

while this one is valid

using ForneyLab

fa(x) = 1 * tanh(x)
faprime(x) = 1-1*tanh(x)^2  

@RV x ~ GaussianMeanVariance(0.0,1.0)

@RV u ~ Nonlinear(x,fa,faprime)

@RV y ~ GaussianMeanVariance(u,1)
placeholder(y,:y)

The error message points to the "approximate" function as the culprit, which takes some digging to find. I propose we remove the type check on x_hat and let the user-specified functions throw errors. Will submit a pull request.

We should decide on a consistent convention for using integers in FL. These types of errors can be very discouraging to newcomers.
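
A sketch of what the loosened helper could look like under that proposal; the exact signature is illustrative and the actual pull request may differ.

# Let the element types flow through; g and J_g decide what they can handle.
function approximate(x_hat::Union{Real, AbstractVector{<:Real}}, g::Function, J_g::Function)
    A = J_g(x_hat)
    b = g(x_hat) .- A*x_hat

    return (A, b)
end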

Extend scheduling targets with interfaces

The scheduler currently targets Variables (for computing marginals) and Clusters (for joint marginals). However, for some updates (e.g. relating to the Nonlinear node), the forward message depends on the backward message on the same edge. Currently, there is no principled manner to explicitly set targets (through setTargets) on specific messages.

This leads to trouble in the following example:

       (1)(2) (3)(4)
       -> <-  -> <-
----[g]----[=]----
            | z

where computation of (1) requires (2). The scheduler, however, requiring the marginal for z, might satisfy this requirement by scheduling messages (3) and (4). In this case, message (2) is not scheduled, and as a result (1) cannot be computed. Therefore, upon inclusion of a Nonlinear node (or nodes requiring similar mechanics) in the model, an explicit target should be set for scheduling (2).

Incorrect initialization of precision parameter in EM-like algorithm

Consider the following minimal example:

using ForneyLab

g = FactorGraph()

@RV m ~ placeholder(:m)
@RV V ~ placeholder(:V)
@RV x ~ GaussianMeanPrecision(m,V)

q = RecognitionFactorization(x, m, V)
algo = variationalAlgorithm(q)

In this case, algo will contain the following generated code:

function initrecognitionfactor_3()
        #= none:50 =#
        messages = Array{Message}(undef, 1)
        #= none:51 =#
        messages[1] = Message(vague(Union{Gamma, Wishart}))
        #= none:53 =#
        return messages
end

which throws an error because of the vague(Union{Gamma, Wishart}) call.

RecognitionFactorization with 4 arguments

Calling RecognitionFactorization() with exactly 4 arguments and no ids fails. Instead of creating the factorization, it calls the constructor directly and attempts to convert the first argument to a new currentGraph. Minimal example below.

using ForneyLab

g = FactorGraph()

@RV a ~ GaussianMeanVariance(0,1)
@RV b ~ GaussianMeanVariance(a,1)
@RV c ~ GaussianMeanVariance(b,1)
@RV d ~ GaussianMeanVariance(c,1)

q = RecognitionFactorization(a,b,c,d)  

Output of above:

MethodError: Cannot `convert` an object of type Variable to an object of type FactorGraph
Closest candidates are:
  convert(::Type{T}, !Matched::T) where T at essentials.jl:154
  FactorGraph(::Any, !Matched::Any, !Matched::Any, !Matched::Any, !Matched::Any) at /home/mkoudahl/.julia/dev/ForneyLab/src/factor_graph.jl:15
in top-level scope at base/none
in RecognitionFactorization at dev/ForneyLab/src/algorithms/variational_bayes/recognition_factorization.jl:171

I tracked the issue to the following lines in ForneyLab/src/algorithms/variational_bayes/recognition_factorization.jl

RecognitionFactorization() = setCurrentRecognitionFactorization(
    RecognitionFactorization(
        currentGraph(),
        Dict{Symbol, RecognitionFactor}(),
        Dict{Edge, RecognitionFactor}(),
        Dict{Tuple{FactorNode, Edge}, Symbol}()))

"""
Construct a RecognitionFactorization consisting of one
RecognitionFactor for each argument
"""
function RecognitionFactorization(args...; ids=Symbol[])
    rf = RecognitionFactorization()
    isempty(ids) || (length(ids) == length(args)) || error("Length of ids must match length of recognition factor arguments")
    for (i, arg) in enumerate(args)
        if isempty(ids)
            RecognitionFactor(arg, id=generateId(RecognitionFactor))
        else
            RecognitionFactor(arg, id=ids[i])
        end
    end
    return rf
end

My understanding is that RecognitionFactorization() usually calls the function that accepts any number of args. However, when given exactly 4 arguments, the constructor takes priority and attempts to assign the first input to currentGraph(), producing the cryptic error above.

A current workaround is to pass ids as an argument so the correct function takes priority again (thanks @ThijsvdLaar); however, we should look at finding a more permanent solution.
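
Concretely, the workaround for the example above would be the call below (the ids chosen here are arbitrary):

q = RecognitionFactorization(a, b, c, d, ids=[:A, :B, :C, :D])  # varargs method takes priority again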

Broadcast mimic for Univariate nodes

In order to properly stack i.i.d. variables in a vector, I suggest introducing a sort of broadcasting for (some of?) the ::Univariate nodes.

Let's say I have a distribution, for example Gamma, which is available only as ::Univariate. I can do this:

D = 2
x = Vector{Variable}(undef, D)
for d in 1:D
  @RV x[d] ~ Gamma(1, 1)
end

But further on in the code x is a ::Vector{Variable}, so you cannot use it in the model with any kind of vector arithmetic. I'd rather want it as follows:

@RV x ~ Gamma(ones(2), ones(2))

so that further x is a proper vector of two i.i.d. Gamma components.

The broadcast mimic option for Gamma node in this sense follows the spirit of other Julia packages (for instance, I can type pdf.(Gamma(1,1), x) where x can be either Array or scalar).

Adding (Gaussian) mixture random variables

Example signal model

Consider the probabilistic model:

X ~ GMM(z_x, mu_x1, lambda_x1, ..., mu_xN, lambda_xN) 
Y ~ GMM(z_y, mu_y1, lambda_y1, ..., mu_yM, lambda_yM) 
Z = X + Y

where X and Y are distributed as (Gaussian) mixtures with N and M number of clusters, respectively. Suppose in this case that Z is observed and that all the means and precisions in the mixture model are also known. The goal is to infer X and Y including the posterior class probabilities. From a theoretical point of view the random variable Z can be represented as a Gaussian mixture model with NM clusters, where each cluster corresponds to the sum of one of the clusters in X and one of the clusters in Y. This relationship has been derived for example in https://stats.stackexchange.com/questions/174791/sum-of-gaussian-mixture-and-gaussian-scale-mixture.

ForneyLab implementation

The implementation of the probabilistic model in ForneyLab is rather straightforward using the available Gaussian mixture node.
Using variational message passing the messages can be derived for inferring the latent variables X and Y. The Gaussian messages currently flowing out of the Gaussian mixture nodes correspond to the weighted sum of the individual mixture components.

Problem

The variational messages flowing out of the Gaussian mixture node can introduce a significant bias in determining the posterior class probabilities and consequently in the latent states X and Y. This bias seems to be determined by the priors of the Gaussian mixture models (class probabilities, means, precisions/variances). Especially in higher dimensional spaces with non-uniform prior class probabilities, this bias significantly deteriorates the performance of the latent state tracking.

Affected signal models

The example above represents a simple case where the problem occurs. However, this problem occurs for any model where latent random variables are added, each of which is related to some sort of mixture model. The addition of 'switching models' is therefore also affected by the problem. This extends past the class of Gaussian messages. Intuitively, all models affected are of the form:

p(z | x_1, ..., x_D) = delta(z - sum(x_i))
p(x_i | {some set of parameters}) = ...
{one of these parameters} ~ mixture model(...)

At least 2 variables should be somehow related to a mixture model.
This problem might even extend to arbitrary conditional distributions, where (at least) two conditional arguments are mixture models. This claim has not been verified.

Workaround

For the simple example at hand, the problem is relieved by first updating the posterior class probabilities once using an external function and by using these posterior class probabilities in ForneyLab for determining the hidden states.

for n = 1:N
    for m = 1:M
        log_p_posterior[n,m] = log(p(z_xn)) + log(p(z_ym)) + logpdf(Gaussian(Z | mu_xn+mu_ym, 1/(1/lambda_xn+1/lambda_ym)))
    end
end
z_posterior = exp.(log_p_posterior) ./ sum(exp.(log_p_posterior))

Normalization over its dimensions leads to the posterior class probabilities of z_x and z_y. Multiple additions would increase the number of loops.

Required ForneyLab adaptations

For the implementation of these kinds of problems in ForneyLab, the implementation of (Gaussian) mixture messages is the most straightforward. For the addition node from the example the mixture message from Z should be automatically expanded based on the incoming messages. Backward messages to the individual edges (e.g. X), however, would likely require more advanced update mechanisms, such that the Gaussian mixture message of Z is properly decomposed.
One major downside of this approach is in its computational complexity. It would require some sort of node broadcasting, such that the individual mixture components are processed individually.
Furthermore, multiple additions would result in inference which no longer evolves linearly in complexity O(NM). In Optimal Mixture Approximation of the Product of Mixtures - Schrempf some methods are proposed to reduce the complexity, which are based on the sparsity of the mixture model. In ALGONQUIN: Iterating Laplace’s Method to Remove Multiple Types of Acoustic Distortion for Robust Speech Recognition - Frey 2001 and Super-human multi-talker speech recognition: A graphical modelling approach - Hershey 2010 the authors claim to have derived/be working on a version which is linear in the number of additions, O(N+M) instead of O(NM). However, it seems that no follow-up papers have been published to verify this claim.

UndefVarError: m_x_t_min not defined

I ran into this error while running the 1_state_estimation_forward_only demo as a .jl script inside the Atom Juno IDE. I'm running Ubuntu 18.04.

This is expected behavior since Julia v1.0.0, as noted here: JuliaLang/julia#28523

In order to access global variables inside blocks, they must be specified with the global keyword inside the block.

This snippet reproduces the error:

i = 10
while i > 0
    println(i -= 1)
end 

UndefVarError: i not defined

This is the fix:

i = 10
while i > 0
    global i
    println(i -= 1)
end

Note that this error does not occur with IJulia notebooks. Apparently, this behavior is defined differently for interactive evaluation contexts: JuliaLang/julia#28789.

Use PDMats.jl to work with positive-definite matrices

The current version is not very robust w.r.t. numerically unstable matrix operations (mainly inverse and Cholesky decomposition). Matrices that theoretically should be positive definite (i.e. covariance matrices) sometimes aren't due to numerical precision errors, which can crash inference algorithms.

A clean and concise way to address this would be to explicitly represent PD matrices by special types, for example based on PDMats.jl. The types in PDMats.jl explicitly track the Cholesky factor of the PD matrix at hand, and all kinds of matrix operations are overloaded to leverage that. In theory, this should eliminate exceptions related to inverting PD matrices.
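
A small sketch of what PDMats.jl offers (values are illustrative; how this would be wired into ForneyLab's update rules is a separate design question):

using PDMats, LinearAlgebra

S = PDMat([2.0 0.5; 0.5 1.0])  # stores the matrix together with its Cholesky factor
x = [0.3, -0.1]

invquad(S, x)                  # x' * inv(S) * x, computed via the Cholesky factor
whiten(S, x)                   # whitening transform based on the Cholesky factor
logdet(S)                      # log-determinant without forming an explicit inverse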

Cryptic error messages

In order to improve the usability of ForneyLab it is important to return informative error messages. Often the search for the true origin of an error requires so much in-depth knowledge of ForneyLab internals that it becomes impossible to decrypt. Let's collect such errors here, together with a short description of how they arose. Separate issues or pull requests can be opened for improvement proposals.

Cannot construct localized free energy algorithm.

I am trying to construct a free energy algorithm for the graph that corresponds to the following LDS:

x(t) = A*x(t-1)
y(t) = cT*x(t) + n(t), n(t) ~ N(0, 100)
cT = [1, 0] - transposed unit vector

The state vector x(t) consists of two components, while observations are represented by a scalar.

After specifying the recognition distribution and calling freeEnergyAlgorithm() I get the error:

ERROR: LoadError: Cannot construct localized free energy algorithm. Recognition distribution for factor with id :X_t does not factor according to local graph structure. This is likely due to a conditional dependence in the posterior distribution (see Bishop p.485). Consider wrapping conditionally dependent variables in a composite node.
Stacktrace:
 [1] #freeEnergyAlgorithm#160(::String, ::Function, ::RecognitionFactorization) at ~/.julia/dev/ForneyLab/src/engines/julia/variational_bayes.jl:113
 [2] freeEnergyAlgorithm(::RecognitionFactorization) at ~/.julia/dev/ForneyLab/src/engines/julia/variational_bayes.jl:109 (repeats 2 times)
 [3] top-level scope at none:0
in expression starting at untitled-29303387388e5e8f070c66f5e9432ef8:31

Code example:

using ForneyLab

n_samples = 100
A = [1 1; 0 1] # Transition matrix
u = [0.5, 1] # Control vector
c = zeros(2); c[1] = 1 # unit vector (observation vector)
x = [A*[x_t, 1] for x_t in 1:n_samples] # real state
y = [c'x[t] + sqrt(100)*randn() for t in 1:n_samples] # noisy observations of position

# Graph definition
g = FactorGraph()

@RV m_x_t_min
@RV v_x_t_min
@RV x_t_min ~ GaussianMeanVariance(m_x_t_min, v_x_t_min)

@RV x_t = A*x_t_min
c = zeros(2); c[1] = 1;
@RV m_y_t
@RV y_t ~ GaussianMeanPrecision(m_y_t, 0.01)
DotProduct(y_t, c, x_t)

placeholder(m_x_t_min, :m_x_t_min, dims=(2,))
placeholder(v_x_t_min, :v_x_t_min, dims=(2, 2))

placeholder(m_y_t, :m_y_t)

ForneyLab.draw(g)

# Specify recognition factorization
q = RecognitionFactorization(x_t, x_t_min, ids=[:X_t :X_t_min])
# Construct free energy algorithm
algoF = freeEnergyAlgorithm()

Replacing ForneyLab methods with native Julia methods if possible

It is good practice to stick to native Julia methods as much as possible, so that the current inference algorithms in ForneyLab can easily interface with other Julia packages. For example, the cholinv method of ForneyLab sometimes prevents the flow of Flux variables through the FFG, but when it is replaced with Julia's inv() method, the problem disappears.

Bug with multiplication node.

The following code snippet aims at building the inference algorithm for the SSM of the form:
x_t = x_{t-1} + N(0, 1)
z_t = x_t + 2*u_t
y_t = z_t + N(0, 1)

using ForneyLab

graph = FactorGraph()
@RV u_t ~ GaussianMeanPrecision(placeholder(:m_η), placeholder(:w_η))
@RV x_t_prev ~ GaussianMeanPrecision(placeholder(:m_x_t_prev), placeholder(:w_x_t_prev))
@RV x_t ~ GaussianMeanPrecision(x_t_prev, 1.0)
@RV z_t = x_t + u_t*2.0
@RV y_t ~ GaussianMeanPrecision(z_t, 1.0)
placeholder(y_t, :y_t)

q = PosteriorFactorization([x_t, x_t_prev], ids=[:x])
algo = messagePassingAlgorithm(id=:MF, free_energy=true)
source_code = algorithmSourceCode(algo, free_energy=true)

It turns out that ForneyLab fails to build the message passing algorithm with messagePassingAlgorithm(id=:MF, free_energy=true), throwing the following error.

ERROR: LoadError: KeyError: key Variable(:clamp_2, Edges:
Edge belonging to variable clamp_2: ( clamp_2.i[out] )----( multiplication_1.i[in1] ).
) not found

This can be fixed by swapping the order of multiplication u_t*2.0 to 2.0*u_t in the model specification.

using ForneyLab

graph = FactorGraph()
@RV u_t ~ GaussianMeanPrecision(placeholder(:m_η), placeholder(:w_η))
@RV x_t_prev ~ GaussianMeanPrecision(placeholder(:m_x_t_prev), placeholder(:w_x_t_prev))
@RV x_t ~ GaussianMeanPrecision(x_t_prev, 1.0)
@RV z_t = x_t + 2.0*u_t
@RV y_t ~ GaussianMeanPrecision(z_t, 1.0)
placeholder(y_t, :y_t)

q = PosteriorFactorization([x_t, x_t_prev], ids=[:x])
algo = messagePassingAlgorithm(id=:MF, free_energy=true)
source_code = algorithmSourceCode(algo, free_energy=true)

cannot get simple "Bayes rule for Gaussians" to work

I am trying to perform inference in a simple linear Gaussian X->Y model,
where X and Y could be scalars or vectors. Essentially this is a simplification of the Kalman smoothing code from the list of demos.

Here is my code:

using ForneyLab
g = FactorGraph()

nhidden = 1 #2
nobs = 1
A = 0.2 # [1 0]
Q = 1 # eye(nhidden)
R = 0.1 # 0.1*eye(nobs)
y_data = 1.2

@RV x ~ GaussianMeanVariance(zeros(nhidden), Q)
@RV obs_noise ~ GaussianMeanVariance(zeros(nobs), R)
@RV y = x + obs_noise
placeholder(y, :y) # add clamping

algo = Meta.parse(sumProductAlgorithm(x))
eval(algo) # Load algorithm
data = Dict(:y     => y_data)
marginals = step!(data);

I get this error:

ERROR: LoadError: MethodError: no method matching ruleSPAdditionIn1PVG(::Message{PointMass,Univariate}, ::Nothing, ::Message{GaussianMeanVariance,Multivariate})
Closest candidates are:
  ruleSPAdditionIn1PVG(::Message{PointMass,V<:Union{Multivariate, Univariate}}, ::Nothing, ::Message{F<:Gaussian,V<:Union{Multivariate, Univariate}}) where {F<:Gaussian, V<:Union{Multivariate, Univariate}} at /home/kpmurphy/.julia/packages/ForneyLab/4DRfg/src/engines/julia/update_rules/addition.jl:68
Stacktrace:
 [1] step!(::Dict{Symbol,Float64}, ::Dict{Any,Any}, ::Array{Message,1}) at ./none:5
 [2] step! at ./none:3 [inlined] (repeats 2 times)

I also tried this code:

@RV x ~ GaussianMeanVariance(zeros(nhidden), Q)
@RV y ~ GaussianMeanVariance(x, R)
placeholder(y, :y) # add clamping

but that gives this error

ERROR: LoadError: MethodError: no method matching prod!(::ProbabilityDistribution{Multivariate,GaussianMeanVariance}, ::ProbabilityDistribution{Univariate,GaussianMeanVariance})
Closest candidates are:
  prod!(::ProbabilityDistribution{Univariate,PointMass}, ::ProbabilityDistribution{Univariate,F<:Gaussian}) where F<:Gaussian at /home/kpmurphy/.julia/packages/ForneyLab/4DRfg/src/factor_nodes/gaussian.jl:63
  prod!(::ProbabilityDistribution{Univariate,PointMass}, ::ProbabilityDistribution{Univariate,F<:Gaussian}, ::ProbabilityDistribution{Univariate,PointMass}) where F<:Gaussian at /home/kpmurphy/.julia/packages/ForneyLab/4DRfg/src/factor_nodes/gaussian.jl:63
  prod!(::ProbabilityDistribution{Univariate,F1<:Gaussian}, ::ProbabilityDistribution{Univariate,F2<:Gaussian}) where {F1<:Gaussian, F2<:Gaussian} at /home/kpmurphy/.julia/packages/ForneyLab/4DRfg/src/factor_nodes/gaussian.jl:52
  ...
Stacktrace:
 [1] *(::ProbabilityDistribution{Multivariate,GaussianMeanVariance}, ::ProbabilityDistribution{Univariate,GaussianMeanVariance}) at /home/kpmurphy/.julia/packages/ForneyLab/4DRfg/src/ForneyLab.jl:94
 [2] step!(::Dict{Symbol,Float64}, ::Dict{Any,Any}, ::Array{Message,1}) at ./none:6
 [3] step! at ./none:3 [inlined] (repeats 2 times)

What am I doing wrong?
(I am using Julia 1.1 and ForneyLab 0.9.1.)
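
For what it's worth, the first stack trace (a Univariate PointMass meeting a Multivariate Gaussian) suggests a dimensionality mix-up: zeros(nhidden) makes x multivariate while the scalar placeholder for y and the scalar Q, R stay univariate. Below is a sketch of a fully scalar version of the first model, which should dispatch consistently; it is untested against ForneyLab 0.9.1.

using ForneyLab

g = FactorGraph()

Q = 1.0  # scalar prior variance
R = 0.1  # scalar observation-noise variance

@RV x ~ GaussianMeanVariance(0.0, Q)
@RV obs_noise ~ GaussianMeanVariance(0.0, R)
@RV y = x + obs_noise
placeholder(y, :y)

algo = Meta.parse(sumProductAlgorithm(x))
eval(algo)
marginals = step!(Dict(:y => 1.2))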
