
Comments (4)

rjrosati commented on June 3, 2024

That's a good idea, and if possible it could make this a lot easier. I'm not quite sure how you could enforce index matching, etc. in Symbolics.jl, but maybe I just need to think about it more. I'd love to hear ideas if you have them. And apparently Symbolics.jl is supposed to eventually support array-valued variables, so adding rules for tensor contractions might then be pretty easy.
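(For reference, a minimal sketch of what a contraction could look like with array-valued variables, assuming concrete index ranges; the names and ranges here are purely illustrative:)

using Symbolics

@variables A[1:4, 1:4] B[1:4, 1:4]

# a contraction written as an explicit sum over concrete indices
expr = sum(A[i, j] * B[j, i] for i in 1:4, j in 1:4)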

The rough sketch I had in mind for SymbolicUtils went something like this:
Following https://symbolicutils.juliasymbolics.org/interface/, we just need to define istree, operation, and arguments for the tensor objects. And maybe more?
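(For a sense of what that interface returns on an ordinary symbolic expression:)

using SymbolicUtils

@syms a b
ex = a + 2b

istree(ex)     # true: ex is a compound term
operation(ex)  # the function +
arguments(ex)  # the subterms, a and 2b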

So I was thinking something like this, basing the rough types off of sympy.tensor.tensor

abstract type Tensor <: Number end

struct TensorIndexType
    name::Symbol
end

struct TensorIndex
    name::Symbol
    TIT::TensorIndexType
    is_up::Bool
end

# note: TensorIndexType has to be defined before it's used as a field type
struct TensorHead <: Tensor
    num_indices::Integer
    TIT::TensorIndexType
    # index symmetries, maybe more stuff
end

struct TensorExpr <: Tensor
    operation::Symbol
    args::Vector{<:Tensor}
    free_indices::Vector{TensorIndex}
end


istree(::Tensor) = false
istree(::TensorExpr) = true
operation(t::TensorExpr) = t.operation
arguments(t::TensorExpr) = t.args

I think I got that right? And then actual addition / index contraction / canon_bp, etc. would have to be expressed as @rule statements on TensorExprs.
For example, this might be addition, if I'm using the rules correctly:

function add(a::TensorExpr, b::TensorExpr)
    if all(aᵢ ∈ b.free_indices for aᵢ in a.free_indices) &&
        all(bᵢ ∈ a.free_indices for bᵢ in b.free_indices)
        # there's definitely a faster way to check this
        # addition is associative and commutative so use acrule;
        # build the result with the positional constructor
        r = @acrule ~x + ~y => TensorExpr(:+, vcat((~x).args, ~y), (~x).free_indices)
        return r(a + b)
    else
        error("incompatible indices")
    end
end
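(As a quick sanity check of the @acrule semantics on ordinary symbols, since argument order matters for a plain @rule but not for an @acrule:)

using SymbolicUtils

@syms a b
r  = @rule   ~x + 2*~y => ~y
ra = @acrule ~x + 2*~y => ~y

r(2a + b)   # may return nothing, depending on how + orders its arguments
ra(2a + b)  # matches with ~y = a regardless of argument order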


Potatoasad commented on June 3, 2024

Can we implement the current system using just Symbolics.jl, without getting into the SymbolicUtils part of it?

They seem to provide a way to "register" functions so they are treated as primitive symbols/functions. That sounds like it might help:

https://juliasymbolics.github.io/Symbolics.jl/dev/manual/functions/
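(A minimal sketch of what registration looks like per that page; the function f here is purely illustrative:)

using Symbolics

# a plain Julia function we want treated as an opaque primitive
f(x, y) = x^2 + y
@register_symbolic f(x, y)

@variables a b
f(a, b)  # stays as the unevaluated term f(a, b) instead of expanding to a^2 + b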

But one would still have to write the canon_bp rules using the SymbolicUtils API, probably.

If you have an idea on how to begin to implement this in symbolicutils.jl I'm very curious, and would love to hear a very rough sketch.


Potatoasad commented on June 3, 2024

This is actually an excellent starting point, I think. It would probably be better to implement this using SymbolicUtils, then; this starting point looks very clear and elegant to me. I imagine the index lowering and raising problems may be too finicky to implement in Symbolics.jl. I've coded up a minimum working example of the above, and one thing I've noticed is that SymbolicUtils is not very easy to work with and the error messages are not at all helpful.

Starting from the code you have provided, the next step would be to have something like A(μ,ν) + B(μ,ν) evaluate to some sort of TensorExpr with the operation + as the head.

Aside: The operation for a TensorExpr needs to be a Function and not a Symbol
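(This matches what SymbolicUtils itself does; a quick check on an ordinary term:)

using SymbolicUtils

@syms a b
ex = a + b
operation(ex)          # the function +, not the symbol :+
operation(ex) === (+)  # true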

I was trying to set it up with the following code.

#Define Index Type
L = TensorIndexType(:Lorentz)

#Define Indices
μ = TensorIndex(:μ, L, true)
ν = TensorIndex(:ν, L, true)
ρ = TensorIndex(:ρ, L, true)
σ = TensorIndex(:σ, L, true)

#Define Tensor Heads
A = TensorHead(2,L)
B = TensorHead(2,L)

#Do the operation (this currently not implemented)
C = A(μ,ν) + B(μ,ν)

Now one would need a way to define a Tensor with some indices, like A(μ,ν), and have it work with add.

The issue is that A(μ,ν) would then have to be a TensorExpr (to work with add), and then the question becomes: what is the operation property of this TensorExpr? We could define a new function AssignIndices and call that the operation, but I'm not sure that's the direction we should take here.

I propose we create a new struct TensorInstance <: Tensor. This is an instance of the TensorHead, with specific indices provided such as A(μ,ν).

TensorHead need not be a subtype of Tensor, since Tensor <: Number and a TensorHead is not technically a number until we provide it with some indices. E.g. A does not act like a number, but A(μ,ν) does.

We can create an abstract type AbstractTensor and have + work with just Tensor:

abstract type AbstractTensor end
abstract type Tensor <: Number end

struct TensorIndexType
    name::Symbol
end

struct TensorIndex
    name::Symbol
    TIT::TensorIndexType
    is_up::Bool
end

struct TensorHead <: AbstractTensor
    num_indices::Integer
    TIT::TensorIndexType
    # index symmetries, maybe more stuff
end

struct TensorInstance <: Tensor
    tensorhead::TensorHead
    free_indices::Vector{TensorIndex}
end

function (A::TensorHead)(indices::Vararg{TensorIndex})::TensorInstance
    ## Error check to see if the right number of indices is provided
    ## If we allow Tensorhead to be a parametric type like
    ## TensorHead{num_indices}, we can use dispatch to get rid of the 
    ## error check
    if length(indices) == A.num_indices
        TensorInstance(A,collect(indices))
    else
        error("Number of Indices provided doesn't match the 
            number of indices of the the tensor")
    end
end

struct TensorExpr <: Tensor
    operation::Function
    args::Vector{Number}
    free_indices::Vector{TensorIndex}
end

using SymbolicUtils
import Base.+, Base.-, Base.*

free_indices(a::TensorExpr) = a.free_indices
free_indices(a::TensorInstance) = a.free_indices

index_matching(a::Tensor, b::Tensor) = (all(aᵢ ∈ free_indices(b) for aᵢ in free_indices(a)) &&
                                        all(bᵢ ∈ free_indices(a) for bᵢ in free_indices(b)))


function (+)(a::Tensor, b::Tensor)
    if index_matching(a, b)
        # positional constructor: a plain struct has no keyword constructor
        TensorExpr(+, vcat(a, b), free_indices(a))
    else
        error("incompatible indices")
    end
end

function (-)(a::Tensor, b::Tensor)
    if index_matching(a, b)
        TensorExpr(-, vcat(a, b), free_indices(a))
    else
        error("incompatible indices")
    end
end

# scalar multiplication; tensor*tensor contraction is not defined yet
(*)(a::Number, b::Tensor) = TensorExpr(*, vcat(a, b), free_indices(b))
(*)(a::Tensor, b::Number) = TensorExpr(*, vcat(a, b), free_indices(a))
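(As an aside on the parametric-type comment inside the call overload above, here's a minimal sketch of that idea, using a hypothetical TensorHeadN name so it doesn't clash with the struct already defined; L, μ, ν are as in the earlier snippet:)

# Hypothetical variant: encode the arity in the type, so a wrong index
# count fails at dispatch time instead of via a runtime length check.
struct TensorHeadN{N} <: AbstractTensor
    TIT::TensorIndexType
end

(A::TensorHeadN{N})(indices::Vararg{TensorIndex,N}) where {N} =
    TensorInstance(TensorHead(N, A.TIT), collect(indices))

g = TensorHeadN{2}(L)
g(μ, ν)   # ok: exactly two indices
# g(μ)    # MethodError: no method matching a one-index call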

Then I tested this implementation by running:

#Define Index Type
L = TensorIndexType(:Lorentz)

#Define Indices
μ = TensorIndex(:μ, L, true)
ν = TensorIndex(:ν, L, true)
ρ = TensorIndex(:ρ, L, true)
σ = TensorIndex(:σ, L, true)

#Define Tensor Heads
A = TensorHead(2,L)
B = TensorHead(2,L)


C = A(μ,ν) + A(μ,ν)

C |> display #Basically gives A(μ,ν) + A(μ,ν) as a TensorExpr

r = @rule( ~x => ~x*10 )

r(C) |> display #Basically gives 10*(A(μ,ν) + A(μ,ν)) as a TensorExpr

r2 = @rule( ~x + ~x => 2*~x )

r2(C) |> display #Returns nothing, which means it was unable to match this expression

Hence 3*A(μ,ν) + 4*B(μ,ν) is basically implemented as an operation.
However, SymbolicUtils is not able to match some simple expressions. This may be because we still need some additional functions implemented, and I'm looking into how to implement them.
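(My guess at the missing piece, going by the interface docs linked earlier: for @rule to destructure a TensorExpr, the tree interface has to be defined on it. A hedged sketch:)

import SymbolicUtils: istree, operation, arguments

# Without these, the pattern matcher can't look inside a TensorExpr,
# so a pattern like ~x + ~x has nothing to destructure.
istree(t::TensorExpr) = true
istree(t::TensorInstance) = false
operation(t::TensorExpr) = t.operation
arguments(t::TensorExpr) = t.args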

Please do critique any design choices and/or the SymbolicUtils implementation here.

If you want, I could make a PR with the minimal working example and you can examine that code.
Alternatively, we could test this in a completely new repo before finalising the structure and what needs to be done, and add a finished backend to SymbolicTensors.jl when it is ready, whatever is better.


rjrosati commented on June 3, 2024

Wow, awesome work. Yeah, I think a PR should work well; as far as I know nobody is using this repo in production, so we can just commit to master. I'll tag the current state just in case someone needs it.
I think you now officially have more experience actually using SymbolicUtils than me, since I've just been reading the docs, haha. It's a shame that the error messages aren't very good. On Zulip, people are saying that its public API is at least guaranteed not to change, but that a lot of internals are still being worked on.

One thing I am just now realizing is that we probably want tensors with mixed index types (think changing bases and things like that), so maybe TensorHead should have been:

struct TensorHead <: AbstractTensor
    index_types::Vector{TensorIndexType}
    # index symmetries, maybe more stuff
end

In practice, the num_indices field of TensorHead is just the length of the index_types vector, so there's no need to store it either.
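(I.e., a one-line derived accessor:)

num_indices(h::TensorHead) = length(h.index_types)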

Also, great idea with TensorInstance, that makes more sense than packing it all into TensorHead.
I'm not sure exactly how to get SymbolicUtils to distribute coefficients and things like that; from what I'm reading, we might need to be careful about defining symtype(::TensorExpr). One thing I really didn't like about sympy's tensor module is that it didn't treat fully contracted tensors like scalars (e.g. 1/(A(-a)*A(a)) would error), but we might be able to avoid that by telling SymbolicUtils the appropriate symtype. Another SymbolicUtils function we probably have to define is similarterm, but I need to read up more on how that works.
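(A hedged sketch of the symtype idea, assuming free_indices is empty exactly when an expression is fully contracted:)

import SymbolicUtils: symtype

# Fully contracted tensor expressions behave like real scalars;
# everything else keeps its tensor type.
symtype(t::TensorExpr) = isempty(t.free_indices) ? Real : TensorExpr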

But yeah, awesome work. We'll probably have to play around with the exact implementation for a while before simplification is working well, but this is great progress.

