tensorkit.jl's Issues

contractcheck does not work together with braiding symbols

contractcheck doesn't seem capable of dealing with planar contractions containing the τ symbol. The following provides an MWE:

sp = ℂ^2
T = TensorMap(rand, ComplexF64, sp, sp)

a = @planar opt=true T[1;2]*T[2;1]
b = @planar opt=true T[1;4]*T[3;2]*τ[4 1;2 3]

c = @planar contractcheck=true opt=true T[1;2]*T[1;2]    # works as expected
b = @planar contractcheck=true opt=true T[1;4]*T[3;2]*τ[4 1;2 3]    # ERROR: UndefVarError: `τ` not defined

`ComplexSpace` constructor from sector-degeneracy pairs similar to that of `GradedSpace`?

At some point I needed a routine to increase the degeneracies in a given space while preserving its sectors. The (seemingly) simplest way to do this fails for the ComplexSpace case, since it doesn't have a constructor that takes sector-degeneracy pairs as input.

using TensorKit

function expand_degeneracies(V::ElementarySpace; fact=1.5)
    return Vect[sectortype(V)](s => ceil(Int, dim(V, s) * fact) for s in sectors(V))
end
V1 = Z2Space(0 => 2, 1 => 2)
expand_degeneracies(V1)
Rep[ℤ₂](0=>3, 1=>3)
V2 = ℂ^4
expand_degeneracies(V2)
ERROR: MethodError: no method matching ComplexSpace(::Base.Generator{TensorKit.OneOrNoneIterator{Trivial}, var"#8#9"{Float64, ComplexSpace}})

Closest candidates are:
  ComplexSpace(::Any, ::Any)
   @ TensorKit ~/.julia/packages/TensorKit/j71BN/src/spaces/complexspace.jl:9
  ComplexSpace()
   @ TensorKit ~/.julia/packages/TensorKit/j71BN/src/spaces/complexspace.jl:12
  ComplexSpace(::AbstractDict; kwargs...)
   @ TensorKit ~/.julia/packages/TensorKit/j71BN/src/spaces/complexspace.jl:21
  ...

Would it make sense to add such a constructor for ComplexSpace, since it already behaves like a GradedSpace in most cases anyway?

Also, even then the CartesianSpace case is a bit tricky, since Vect[sectortype(::CartesianSpace)] == ComplexSpace.
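
For illustration, a minimal sketch of what such a constructor could do (hypothetical, not part of TensorKit; it simply collapses the Trivial-sector degeneracies into a total dimension):

using TensorKit

# hypothetical helper: build a ComplexSpace from sector => degeneracy pairs,
# where the only admissible sector is Trivial()
function complexspace_from_pairs(pairs)
    d = sum(deg for (sector, deg) in pairs if sector == Trivial())
    return ComplexSpace(d)
end

complexspace_from_pairs((Trivial() => 4,))   # returns ℂ^4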

more general catdomain and catcodomain?

Is it possible to support more general catdomain and catcodomain? Currently only one axis can be concatenated; however, to support MPS/MPO addition one has to concatenate two axes.
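
For reference, a possible workaround for the two-axis case (a minimal sketch, mirroring the direct-sum-via-isometries pattern that also appears in the MPO-addition issue further down; the tensor names and dimensions here are only illustrative):

using TensorKit

A = TensorMap(rand, ComplexF64, ℂ^2 ⊗ ℂ^3, ℂ^2)   # MPO-like tensor: left ⊗ physical ← right
B = TensorMap(rand, ComplexF64, ℂ^4 ⊗ ℂ^3, ℂ^5)

WL_A = isometry(space(A, 1) ⊕ space(B, 1), space(A, 1))
WL_B = leftnull(WL_A)
WR_A = isometry(space(A, 3)' ⊕ space(B, 3)', space(A, 3)')'
WR_B = rightnull(WR_A)

# C has both virtual axes concatenated: (ℂ^6 ⊗ ℂ^3) ← ℂ^7
@tensor C[-1 -2; -3] := WL_A[-1, 1] * A[1, -2, 3] * WR_A[3, -3] + WL_B[-1, 1] * B[1, -2, 3] * WR_B[3, -3]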

step cannot be zero

I think empty blocks still cause issues, because the following fails somewhere deep in Strided.jl:

t = TensorMap(rand,ComplexF64,ProductSpace(U₁Space(0=>12, 1=>0, -1=>12, -2=>12)) ← (U₁Space(0=>1)' ⊗ U₁Space(0=>12, 1=>12, -1=>12)))
mul!(similar(t,ComplexF64),t,1/norm(t))

Problem with gradient when using tensors with Float64 and ComplexF64 entries

I am trying to compute the gradient of a function that takes a matrix as input and contracts it with another tensor. If both tensors have either Float64 or ComplexF64 values, the Zygote gradient works. However, if one has Float64 and the other ComplexF64 entries, it fails and returns an InexactError: Float64() error. Below I provide an MWE, which has the same behaviour as the actual function I need.

using TensorKit, Zygote, LinearAlgebra

normDiff(a, b) = norm(a - b);

# both Float64: works
A = TensorMap(randn, Float64, ComplexSpace(2), ComplexSpace(2));
B = TensorMap(randn, Float64, ComplexSpace(2), ComplexSpace(2));
Zygote.gradient((x, y) -> normDiff(x, y), A, B)

# both ComplexF64: works
A = TensorMap(randn, ComplexF64, ComplexSpace(2), ComplexSpace(2));
B = TensorMap(randn, ComplexF64, ComplexSpace(2), ComplexSpace(2));
Zygote.gradient((x, y) -> normDiff(x, y), A, B)

# mixed Float64 / ComplexF64: fails with InexactError: Float64(...)
A = TensorMap(randn, Float64, ComplexSpace(2), ComplexSpace(2));
B = TensorMap(randn, ComplexF64, ComplexSpace(2), ComplexSpace(2));
Zygote.gradient((x, y) -> normDiff(x, y), A, B)

libblas not available on nightly

Trying to add TensorKit on julia nightly, I encounter the following error:

ERROR: The following 1 direct dependency failed to precompile:

TensorKit [07d1fe3e-3e46-537d-9eac-e9e13d0d4cec]

Failed to precompile TensorKit [07d1fe3e-3e46-537d-9eac-e9e13d0d4cec] to /home/jb6888/.julia/compiled/v1.9/TensorKit/jl_nB8hAb.
ERROR: LoadError: UndefVarError: libblas not defined
Stacktrace:
 [1] include(mod::Module, _path::String)
   @ Base ./Base.jl:427
 [2] include(x::String)
   @ TensorKit ~/.julia/packages/TensorKit/GctVn/src/TensorKit.jl:6
 [3] top-level scope
   @ ~/.julia/packages/TensorKit/GctVn/src/TensorKit.jl:119
 [4] include
   @ ./Base.jl:427 [inlined]
 [5] include_package_for_output(pkg::Base.PkgId, input::String, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt64}}, source::Nothing)
   @ Base ./loading.jl:1423
 [6] top-level scope
   @ stdin:1
in expression starting at /home/jb6888/.julia/packages/TensorKit/GctVn/src/auxiliary/linalg.jl:5
in expression starting at /home/jb6888/.julia/packages/TensorKit/GctVn/src/TensorKit.jl:6
in expression starting at stdin:1
Stacktrace:
 [1] pkgerror(msg::String)
   @ Pkg.Types ~/packages/julias/julia-latest/share/julia/stdlib/v1.9/Pkg/src/Types.jl:68
 [2] precompile(ctx::Pkg.Types.Context, pkgs::Vector{String}; internal_call::Bool, strict::Bool, warn_loaded::Bool, already_instantiated::Bool, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
   @ Pkg.API ~/packages/julias/julia-latest/share/julia/stdlib/v1.9/Pkg/src/API.jl:1427
 [3] precompile
   @ ~/packages/julias/julia-latest/share/julia/stdlib/v1.9/Pkg/src/API.jl:1060 [inlined]
 [4] #precompile#225
   @ ~/packages/julias/julia-latest/share/julia/stdlib/v1.9/Pkg/src/API.jl:1057 [inlined]
 [5] precompile (repeats 2 times)
   @ ~/packages/julias/julia-latest/share/julia/stdlib/v1.9/Pkg/src/API.jl:1057 [inlined]
 [6] top-level scope
   @ REPL[3]:1

This seems to be because BLAS.libblas has been removed:

julia> using LinearAlgebra.BLAS

julia> BLAS.libblas
ERROR: UndefVarError: libblas not defined
Stacktrace:
 [1] getproperty(x::Module, f::Symbol)
   @ Base ./Base.jl:31
 [2] top-level scope
   @ REPL[6]:1

julia> VERSION
v"1.9.0-DEV.432"

Permute for Non-abelian tensor is slow.

I found that length(fusiontrees(f1, f2)) would be very large for large-index SU₂ tensors in some cases, which slows down the permute operation. The easiest way to solve this problem would be parallelizing the _add_general_kernel! function. Is there any better solution to this problem?
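
For context, a small illustration of the kind of call that becomes slow (a sketch only; the spaces are made up, and the permute(t, p1, p2) signature is the one used by the TensorKit versions this issue refers to):

using TensorKit

V = SU₂Space(0=>4, 1//2=>4, 1=>4)
t = TensorMap(rand, Float64, V ⊗ V ⊗ V, V ⊗ V)
# every output block is assembled from many (f1, f2) fusion-tree pairs,
# so this permutation is much more expensive than for abelian symmetries
@time permute(t, (1, 3, 5), (2, 4))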

Support for A <: AbstractMatrix

Is there any particular reason the interface for Tensor requires A <: DenseMatrix for the storage type? Does this package assume column-major indexing, for example? If not, would a PR removing that requirement and instead requiring A <: AbstractMatrix be welcome?

norm of empty symmetric tensors

Currently this

norm(TensorMap(zeros,ComplexF64,ℂ[U₁](1=>1),ℂ[U₁](0=>1)))

errors. Would it be correct to make this return 0?

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Feature request: `convert` methods for `AdjointTensorMap` -> `TensorMap` and changing `eltype`

Possible implementations:

Base.convert(::Type{TensorMap}, t::TensorKit.AdjointTensorMap) = copy(t)
function Base.convert(::Type{TensorMap{S, N1, N2, G, A, F1, F2}},
                      t::TensorKit.AdjointTensorMap{S, N1, N2, G, A, F1, F2}
                     ) where {S, N1, N2, G, A, F1, F2}
    return convert(TensorMap, t)
end

function Base.convert(::Type{TensorMap{S, N1, N2, G, Matrix{E1}, F1, F2}},
                      t::TensorMap{S, N1, N2, G, Matrix{E2}, F1, F2}
                     ) where {S, N1, N2, G, E1, E2, F1, F2}
    return copyto!(similar(t, E1), t)
end

function Base.convert(::Type{TensorMap{S, N1, N2, G, TensorKit.SectorDict{G, Matrix{E1}}, F1, F2}},
                      t::TensorMap{S, N1, N2, G, TensorKit.SectorDict{G, Matrix{E2}}, F1, F2}
                     ) where {S, N1, N2, G, E1, E2, F1, F2}
    return copyto!(similar(t, E1), t)
end

`fuse` inconsistent in handling dual spaces

Encountered the following inconsistent/inconvenient behaviour when fusing vector spaces: fuse(a, b...) automatically results in a non-dual vector space, while fuse(a) simply returns a. I think it might be more convenient to ensure that fuse either always returns a non-dual space, or has a keyword to switch between the outputs.

julia> using TensorKit

julia> a = U1Space(1 => 1)
Rep[U₁](1=>1)

julia> fuse(a * a)
Rep[U₁](2=>1)

julia> fuse(a)
Rep[U₁](1=>1)

julia> fuse(a')
Rep[U₁](1=>1)'

julia> fuse(a' * a)
Rep[U₁](0=>1)

In my particular case, I am iteratively fusing more and more spaces together to determine the maximal virtual spaces of an MPS, and now have to insert a weird special case to catch when a single dual space is passed in.
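
A possible workaround (a minimal sketch, not part of TensorKit) is to force a non-dual result by always fusing with a trivial one-dimensional space:

using TensorKit

# fuse(V') alone returns a dual space, but fusing with oneunit(V) forces
# the non-dual representative carrying the same fused sectors
fuse_nondual(V::ElementarySpace) = fuse(V ⊗ oneunit(V))

fuse_nondual(U1Space(1 => 1)')   # Rep[U₁](-1=>1) rather than Rep[U₁](1=>1)'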

error when using `svd` with truncation

It seems that SVD with truncation is broken due to changes in the iterator interface in Julia.
Specifically, _truncate! contains calls to start, next and done, which no longer exist (Julia v1.2). As far as I understand, all of these were replaced by iterate.

Here is an MWE:

S = U₁Space(n=>1 for n in 1:10)
M = TensorMap(rand,Float64, S, S)
svd(M; trunc=truncerr(1e-10))

I might try to fix this myself at some point in case you don't get to it before I have time for that (but at the moment I have to go do other work).

fuse tensor legs?

Currently there is no obvious way to fuse the tensor legs of a TensorMap or Tensor object.
It would be ideal if there were a built-in function for this functionality.
As an example, for the TensorMap object A

A = TensorMap(rand, ComplexF64, ℂ^2*ℂ^2*ℂ^2, ℂ^4)

where A is a TensorMap((ℂ^2 ⊗ ℂ^2 ⊗ ℂ^2) ← ProductSpace(ℂ^4)).
One might expect something like:

A_fused = fuse(A, (1,2))

so that A_fused is a TensorMap((ℂ^4 ⊗ ℂ^2) ← ProductSpace(ℂ^4)).
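
For reference, one way to achieve this with the existing tools (a minimal sketch; A_fused and F are illustrative names) is to contract with an explicit fusing isomorphism:

using TensorKit

V = ℂ^2
A = TensorMap(rand, ComplexF64, V ⊗ V ⊗ V, ℂ^4)
F = isomorphism(fuse(V ⊗ V), V ⊗ V)            # isomorphism ℂ^4 ← (ℂ^2 ⊗ ℂ^2)
@tensor A_fused[-1 -2; -3] := F[-1, 1, 2] * A[1, 2, -2, -3]
# A_fused is now a TensorMap((ℂ^4 ⊗ ℂ^2) ← ℂ^4)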

show throws sectormismatch

The following tensor
t = TensorMap(rand,ComplexF64,U₁Space(0=>12, 1=>0, -1=>12, -2=>12),(U₁Space(0=>1)' ⊗ U₁Space(0=>12, 1=>12, -1=>12)));

can be created, but when trying to see its contents I get a SectorMismatch.

Totally unrelated, but norm(t) works while norm(t') fails, because norm() is defined on general abstract tensor maps while assuming the layout of a plain TensorMap. I think it should be something like (linalg.jl:133)

LinearAlgebra.norm(t::AdjointTensorMap{<:EuclideanSpace}, p::Real) = norm(t.parent,p);
LinearAlgebra.norm(t::TensorMap{<:EuclideanSpace}, p::Real) = ...

Long compile times for (n~12)-index tensors

First, thank you for this awesome package as well as the documentation. I went through it today and learned quite a few things!

I am not particularly sure if the below is an issue or not, but here it is:

I try to make a matrix product state, using a vector (e.g. ground state of some spin chain that has been found by exact diagonalization of hamiltonian in a U(1) charge sector). I get very large, rapidly growing, compile times of hundreds of seconds even for L=12:

L = 8 takes ~ 10 seconds
L = 10 takes ~ 25 seconds
L = 12 takes ~ 210 seconds
(took so long I stopped the compilation for larger!)

I understand this is not an ideal way of writing this function, since it requires an (L+2)-index tensor (3-index tensors should be enough), so longer compile times are expected for larger-index tensors. So, my question/issue is: are these compile times too large, or expected?

Here is the function, which you can test with vector2mps(L, rand(binomial(L, div(L,2))))

function vector2mps(L::Int, v::Vector{T}; m::Int=div(L,2)) where {T<:Number}

    V0 = U₁Space(U₁(0)=>1)    # dummy space for start
    VL = U₁Space(U₁(m)=>1)    # dummy space for end   
    Vd = U₁Space(U₁(x)=>1 for x=0:1)   # physical spaces

    # is there a better way to initialize?
    A = TensorMap(TensorKit.SectorDict(U₁(m)=>reshape(v, length(v), 1)),
              V0 ⊗ prod(Vd for _=1:L) ⊗ VL)

    mps = Vector{TensorMap{U₁Space,2,1}}()
    for x = 1:L-1
        U,S,Vt = svd(permuteind(A, Tuple(1:2), Tuple(3:L-x+3)))
        push!(mps, U)
        A = S*Vt
    end
    push!(mps, permuteind(A, (1,2), (3,)))
    mps
end

TensorMap is not bitstype.

I am using TensorKit to construct tensor networks. I want to parallelize my code using MPI. However, isbitstype(TensorMap) returns false, so I can't use MPI.reduce to gather the data calculated by TensorKit. Is there any solution to this problem?
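
One possible workaround is to reduce the plain array data rather than the TensorMap itself. The following is only a hedged sketch (the tensor, space, and variable names are illustrative), not an official recommendation:

using MPI, TensorKit

MPI.Init()
comm = MPI.COMM_WORLD

t = TensorMap(rand, Float64, ℂ^2 ⊗ ℂ^2, ℂ^2)
buf = convert(Array, t)                    # plain isbits-eltype array, MPI-friendly
total = MPI.Allreduce(buf, +, comm)        # elementwise sum across all ranks
t_sum = TensorMap(total, codomain(t), domain(t))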

Inconsistent behaviour for tensor contraction

Hi,

I have come across an issue after updating from TensorKit v0.10.0 to v0.11.0, which uses TensorOperations v4. In the code I construct a hermitian MPO out of a direct sum of two other MPOs, both of which are non-hermitian. In TensorKit v0.10.0 this worked fine, but after updating, the resulting MPO is no longer hermitian, as it should be. Below is an MWE for the direct sum of the MPO on a single site, which already shows a discrepancy between the two versions.

using TensorKit

# initialize MPO tensor AC and BC
AC = TensorMap(reshape([1 +2 ; -2 1], (1, 2, 2, 1)), ComplexSpace(1) ⊗ ComplexSpace(2), ComplexSpace(2) ⊗ ComplexSpace(1));
BC = TensorMap(reshape([1 -2 ; +2 1], (1, 2, 2, 1)), ComplexSpace(1) ⊗ ComplexSpace(2), ComplexSpace(2) ⊗ ComplexSpace(1));

# take sum of MPOs
isoLA = isometry(space(AC, 1) ⊕ space(BC, 1), space(AC, 1));
isoLB = leftnull(isoLA);
isoRA = isometry(space(AC, 4)' ⊕ space(BC, 4)', space(AC, 4)')';
isoRB = rightnull(isoRA);
@tensor CC[-1 -2; -3 -4] := isoLA[-1, 1] * AC[1, -2, -3, 4] * isoRA[4, -4] + isoLB[-1, 1] * BC[1, -2, -3, 4] * isoRB[4, -4];
CC = convert(Array, CC);
display(reshape(CC, size(CC, 1) * size(CC, 2), size(CC, 3) * size(CC, 4)))

Running the code with [07d1fe3e] TensorKit v0.10.0 produces the output

4×4 Matrix{Float64}:
  1.0  2.0  0.0   0.0
  0.0  0.0  1.0  -2.0
 -2.0  1.0  0.0   0.0
  0.0  0.0  2.0   1.0

while running it with [07d1fe3e] TensorKit v0.11.0 produces

4×4 Matrix{Float64}:
 0.0  0.0  0.0   0.0
 0.0  0.0  2.0  -4.0
 0.0  0.0  0.0   0.0
 0.0  0.0  4.0   2.0

I have checked that the individual tensors are the same for both versions of TensorKit, but so far I couldn't figure out what causes the difference.

transpose can hang

the following hangs

t = Tensor(rand,ComplexF64,Rep[SU₂](0=>1)); transpose(t,(),(1,))

A quick'n'dirty fix may be the following commit:
c61b531

Support for permutational index symmetry

In #50 (comment) I found

symmetries under index permutations are currently not supported. In general, it's quite hard to take them into account, in particular in a way that leads to a large speedup.

I just wanted to come back to it (given that this statement is from 2021) and ask: Has there been any work in this direction in the meantime?

FWIW, I think even just support in the sense of figuring out that if $f_{ab} = f_{ba}$ then it is enough to compute and store the elements for $b > a$ would be valuable (which, especially for higher-dimensional tensors, can lead to significant memory and processing-time savings).

Edge cases for empty `ProductSpace`s

The following throws an error:

julia> blockdim(ComplexSpace(2), Trivial())
2

julia> blockdim(one(ComplexSpace(2)), Trivial())
ERROR: MethodError: no method matching fusiontrees(::Tuple{}, ::Trivial)

Closest candidates are:
  fusiontrees(::Tuple{Vararg{I, N}}, ::I, ::Tuple{Vararg{Bool, N}}) where {N, I<:Sector}
   @ TensorKit ~/Projects/People/Sylvain/dev/TensorKit/src/fusiontrees/iterator.jl:4
  fusiontrees(::Tuple{I, Vararg{I}}, ::I) where I<:Sector
   @ TensorKit ~/Projects/People/Sylvain/dev/TensorKit/src/fusiontrees/iterator.jl:8

Stacktrace:
 [1] blockdim(P::ProductSpace{ComplexSpace, 0}, c::Trivial)
   @ TensorKit ~/Projects/People/Sylvain/dev/TensorKit/src/spaces/productspace.jl:182

Similar results for graded spaces etc. It seems like this was introduced in 9ad2804, and is fixed by just replacing Tuple{I,Vararg{I}} with Tuple{Vararg{I}} in this line. I think this got introduced because Aqua was complaining?

More efficient support for tsvd

See tensors/factorizations.jl, lines 415:425:

for (c, b) in blocks(t)
    U, Σ, V = _svd!(b, alg)
    Udata[c] = U
    Vdata[c] = V
    if @isdefined Σdata # cannot easily infer the type of Σ, so use this construction
        Σdata[c] = Σ
    else
        Σdata = SectorDict(c=>Σ)
    end
    dims[c] = length(Σ)
end

This code gets executed unconditionally, independently of the truncation parameter, so the full svd is always computed. Is there a way to only calculate the biggest singular values, until truncdim, without calculating the rest? Similar to what the LowRankApprox.jl package does.

creating a GenericRepresentationSpace with only one sector

Hi,

This is a minor issue, but in the past it was possible to do something like U₁Space(1=>1), which now fails. One has to call U₁Space((1=>1,)) instead.
This could be fixed by defining GenericRepresentationSpace{G}(d1::Pair; kwargs...).
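
A minimal sketch of the suggested method (assuming the GenericRepresentationSpace type of the TensorKit version in question) could look like:

import TensorKit: GenericRepresentationSpace

# wrap a single Pair into a one-element tuple and forward to the existing tuple constructor
GenericRepresentationSpace{G}(d1::Pair; kwargs...) where {G} =
    GenericRepresentationSpace{G}((d1,); kwargs...)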

deepcopy of TensorMap causes Julia to crash (used in e.g. `exp`)

Hi,
I started playing with your package in order to add U(1) charge conservation to my MPS code and it looks really promising. So first of all - thanks!
I encountered an issue with the exp function, which seems to actually originate from deepcopy of a tensor. This happens on both Linux and macOS. Am I doing something wrong?

MWE:

using TensorKit
S = U₁Space(-1=>1,1=>1)
W = S ⊗ S => S ⊗ S
t = TensorMap(rand,Float64,W)
exp(t) # (or just deepcopy(t) ) 

This results in

Unreachable reached at 0x7ff30576b5d5

signal (4): Illegal instruction
in expression starting at no file:0
deepcopy at ./deepcopy.jl:28
copy at /home/USER/.julia/packages/TensorKit/Y6AiJ/src/tensors/tensor.jl:171
exp at /home/USER/.julia/packages/TensorKit/Y6AiJ/src/tensors/linalg.jl:24
jl_fptr_trampoline at /buildworker/worker/package_linux64/build/src/gf.c:1864
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2219
do_call at /buildworker/worker/package_linux64/build/src/interpreter.c:323
eval_value at /buildworker/worker/package_linux64/build/src/interpreter.c:411
eval_stmt_value at /buildworker/worker/package_linux64/build/src/interpreter.c:362 [inlined]
eval_body at /buildworker/worker/package_linux64/build/src/interpreter.c:773
jl_interpret_toplevel_thunk_callback at /buildworker/worker/package_linux64/build/src/interpreter.c:885
unknown function (ip: 0xfffffffffffffffe)
unknown function (ip: 0x7ff316563c5f)
unknown function (ip: 0xffffffffffffffff)
jl_interpret_toplevel_thunk at /buildworker/worker/package_linux64/build/src/interpreter.c:894
jl_toplevel_eval_flex at /buildworker/worker/package_linux64/build/src/toplevel.c:764
jl_toplevel_eval_in at /buildworker/worker/package_linux64/build/src/toplevel.c:793
eval at ./boot.jl:328
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2219
eval_user_input at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.1/REPL/src/REPL.jl:85
macro expansion at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.1/REPL/src/REPL.jl:117 [inlined]
#26 at ./task.jl:259
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2219
jl_apply at /buildworker/worker/package_linux64/build/src/julia.h:1571 [inlined]
start_task at /buildworker/worker/package_linux64/build/src/task.c:572
unknown function (ip: 0xffffffffffffffff)
Allocations: 18209101 (Pool: 18205573; Big: 3528); GC: 41
Illegal instruction (core dumped)

here is the version info for the linux machine:

Julia Version 1.1.0
Commit 80516ca202 (2019-01-21 21:24 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, haswell)

Vectors/Covectors

Hi there,

Thanks for all your work producing this package. Forgive me if this is a silly question as I am fairly new to tensor networks and my maths knowledge is very lacking.
I wish to construct what is effectively a vector that lives in some representation space of a certain group. Specifically, I would like to construct a vector living in the space

space = (U1Space(-1=>1) ⊕ U1Space(1=>1)) ⊗ (U1Space(-1=>1) ⊕ U1Space(1=>1))

Of course this product space is isomorphic to the space U1Space(-2=>1, 0=>2, 2=>1), however I require the product structure.

Using the Tensor constructor like Tensor(data, space) does not allow me to have elements of charge 2 or -2, only of charge 0. The underlying data I want in my tensor is something like

TensorKit.SortedVectorDict{U1Irrep, Matrix{ComplexF64}} with 3 entries:
  0  => [0.0+0.0im; 0.0 + 0.0im;;]
  2  => [0.0+0.0im;;]
  -2 => [0.0+0.0im;;]

however, when providing this as the data argument to Tensor(data, space), the blocks corresponding to charge -2 and 2 are lost without warning. To be honest, I don't really understand what a Tensor (i.e. a TensorMap with no domain) is in the context of symmetric tensors, so please let me know if there is a good reason for this (something to do with conservation of charge?).

So my question is: is this the intended behaviour? And if so, is there any way to achieve what I want using the tools provided by TensorKit? If the first is true and the second false, do you think adding new data types like Vector and Covector would be a worthwhile endeavour?
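
For reference, one common way to represent such a charged "vector" in TensorKit (a hedged sketch; whether it fits this use case depends on the surrounding code) is as a TensorMap whose domain is the one-dimensional space carrying the desired total charge:

using TensorKit

V = U1Space(-1=>1, 1=>1)
# a "vector" in V ⊗ V with total charge 2: a map from the charge-2 space into V ⊗ V
v2 = TensorMap(rand, ComplexF64, V ⊗ V, U1Space(2=>1))
# a Tensor (empty domain) can only carry total charge 0, which is why the ±2 blocks were dropped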

Cheers,
Jack

Confusing/bugged? index convention for braiding tensors.

In the following code, which contracts two tensors in various ways depicted below, I'd expect all diagrams to be equivalent.

Nevertheless, I get a SpaceMismatch for the d-type diagram for reasons that totally evade me. I'm not sure if this is intended behaviour or a bug, but either way I think it deserves clarification.

using TensorKit, TensorOperations, Test

sp = ℂ^2
T = TensorMap(rand, ComplexF64, sp, sp)

#the normal diagram
a = @planar opt=true T[a;b]*T[b;a]

#the diagram with one winding, both conventions for the labeling of the braiding
b = @planar opt=true T[d;a]*T[b;c]*τ[c b;a d]
c = @planar opt=true T[d;a]*T[b;c]*τ[a c;d b]
@test a ≈ b ≈ c

#the diagrams with one pulling trough --> two taus --> both conventions lead to 4 diagrams
d = @planar opt=true T[f;a]*T[c;d]*τ[d b;c e]*τ[e b;a f]   #this one gives a space mismatch for reasons that evade me !
e = @planar opt=true T[f;a]*T[c;d]*τ[d b;c e]*τ[a e;f b]
f = @planar opt=true T[f;a]*T[c;d]*τ[c d;e b]*τ[e b;a f]
g = @planar opt=true T[f;a]*T[c;d]*τ[c d;e b]*τ[a e;f b]

[attached image: potential_bug]

Resulting error for the d-type diagram :

ERROR: SpaceMismatch("(ℂ^2 ⊗ ℂ^2) ← ProductSpace{ComplexSpace, 0}() ≠ permute((ℂ^2 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^2)[(1, 2), (3, 4)] * ℂ^2 ← ℂ^2[(2, 1), ()], ((1, 2), ())")
Stacktrace:
 [1] planarcontract!(::TensorMap{ComplexSpace, 2, 0, Trivial, Matrix{ComplexF64}, Nothing, Nothing}, ::TensorKit.BraidingTensor{ComplexSpace, Matrix{Float64}}, ::Tuple{Tuple{Int64, Int64}, Tuple{Int64, Int64}}, ::TrivialTensorMap{ComplexSpace, 1, 1, Matrix{ComplexF64}}, ::Tuple{Tuple{Int64, Int64}, Tuple{}}, ::Tuple{Tuple{Int64, Int64}, Tuple{}}, ::One, ::Zero)
   @ TensorKit ~/.julia/dev/TensorKit/src/tensors/braidingtensor.jl:232
 [2] _planarcontract!(::Any, ::Any, ::Any, ::Any, ::Any, ::Any, ::Any, ::Any)
   @ TensorKit ~/.julia/dev/TensorKit/src/planar/postprocessors.jl:57
 [3] top-level scope
   @ REPL[652]:1

Not understanding the symmetry part of the tutorial

Hello TensorKit Developers,

I saw
https://jutho.github.io/TensorKit.jl/stable/man/tutorial/#Symmetries
which contains

julia> V1 = ℤ₂Space(0=>3,1=>2)
Rep[ℤ₂](0=>3, 1=>2)

julia> dim(V1)
5

and I have difficulty understanding it.

What does "0=>3,1=>2" mean? And why is dim(V1) = 5? I posted on https://discourse.julialang.org/t/why-the-dimension-of-v1-space-0-3-1-2-is-5/59960 as well.

Thank you very much. I am not sure if this is the right place to ask this question.

\otimes+TAB in PyJulia

Hi,

I am using PyJulia to call Julia code from Python 3, and I have found an issue when trying to type the tensor product. Namely, \otimes+TAB doesn't work, nor does copy-pasting from Julia (both tried in a terminal on an Ubuntu machine). Is there any solution for this?

I found this in the PyJulia documentation suggesting that it might not be possible, but a workaround would be highly appreciated.
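
For what it's worth, a possible workaround on the TensorKit side (a hedged sketch) is to avoid the unicode ⊗ altogether, since * also builds product spaces:

using TensorKit

# same space as ComplexSpace(2) ⊗ ComplexSpace(2), but typed without unicode
V = ComplexSpace(2) * ComplexSpace(2)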

Thanks!

braidingtensor can't deduce deducible links

Chained braiding tensors cannot always be correctly deduced, even though in principle they can be.

tau[... 2;-3 ...] * tau [... 1;2 ...]

If the macro analyzes the first tau, it gives up, while actually the second tau shows that 1 - which it knows - is the same space as 2, and therefore the same space as -3.

There currently seems to be no way to control the order of deduction (it gets stored in a dict, so all order is lost). I will also look into this.

planar scalars

t = TensorMap(rand,ComplexF64,ℂ^2,ℂ^2);
@planar2 y[-1;-2] := t[-1;1]*t[1;-2]*exp(3*4)

fails
