FEMBase.jl

Package author: Jukka Aho (@ahojukka5, [email protected])

FEMBase.jl is a base package for FEM codes. It implements all the basic data structures in an efficient way and aims to be the rock-solid backbone on which you can build your very own FEM solver. The basic principle is that FEMBase is a tiny, well-optimized and efficient subset of a larger finite element method software with pre- and postprocessors. FEMBase.jl is used in JuliaFEM.jl.

In order to keep the package stable, all operations that are not closely related to the basic operations are left out. This encourages the development of a highly modular FEM ecosystem, where features are implemented in their own packages. The chosen strategy is intentional: pre- and postprocessing, physical models, file formats and the like evolve over time, while the basic principles stay. We want those evolving things to live in their own packages as much as possible.

What FEMBase.jl does

FEMBase implements all kinds of basic data types like Problem, Element, Node, IntegrationPoint, Analysis and so on, so that the user can define the discretized geometry and integration scheme. Each element has a topology, like Poi1, Seg2, Tri3, Quad4, Tet4 or Hex8 for isoparametric linear elements with only corner nodes, or Seg3, Tri6, Quad8, Tet10 or Hex20 for isoparametric quadratic elements with mid-edge nodes. There are also elements with interior nodes, like Tri7 and Quad9.
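
For instance, elements of different topologies are constructed from their node connectivity (a minimal sketch; the node numbering here is arbitrary):

using FEMBase

el_linear    = Element(Quad4, (1, 2, 3, 4))            # linear element, corner nodes only
el_quadratic = Element(Tri6, (1, 2, 3, 4, 5, 6))       # quadratic element with mid-edge nodes
el_volume    = Element(Hex8, (1, 2, 3, 4, 5, 6, 7, 8)) # linear volume element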

Model data is represented in fields. The data type can be a scalar, vector, tensor or something user-defined for specific needs. Data can be defined on an analysis, problem, element, node or integration point. Data can depend on time, and it can be interpolated with respect to time using different interpolation schemes.

Data can also be interpolated in the spatial direction using basis functions, which at the moment are Lagrange basis functions.
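
As a small example of the field mechanism (following the usage shown in the issues below), a time-dependent nodal temperature can be defined and inspected like this:

using FEMBase

element = Element(Seg2, (1, 2))
# Nodal temperatures at two time instants, one value per node:
update!(element, "temperature", 0.0 => (0.0, 0.0))
update!(element, "temperature", 1.0 => (1.0, 2.0))
element.fields["temperature"]
# DVTV{2,Float64}(Pair{Float64,Tuple{Float64,Float64}}[0.0=>(0.0, 0.0), 1.0=>(1.0, 2.0)])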

What FEMBase.jl doesn't do

FEMBase.jl doesn't solve the system of equations. You can define the model and data, integrate and interpolate, but you need to write your own solver. As a consequence, it doesn't write results anywhere either, so you need to implement your own results writer or use one of the existing solutions like XdmfWriter.jl.

FEMBase.jl doesn't have any physical models. FEMBase doesn't know anything about displacements, temperatures, velocities or pressures. FEMBase.jl assembles a problem, but it doesn't define it, so you need to build your physical model in some other package or use one of the existing solutions like HeatTransfer.jl. Things related to geometry, like calculating Jacobians, are supported.

FEMBase does not parse meshes from other formats. For that, you need to implement your own parser or use one of the existing solutions, like AbaqusReader.jl, AsterReader.jl or Gmsh.jl.

What FEMBase.jl is planning to do

Given data and basis functions, we should be able to interpolate any kind of data naturally. The current implementation assumes that data is defined at nodes, as in standard FE methods. Then there are things like hierarchical elements, NURBS, the material point method and so on, which should also be possible to implement on top of FEMBase.

Contributing

Contributions, at both the code and idea level, are welcome and highly appreciated. If you have ideas for how to improve the package, raise an issue or drop me an email.

Contributors

ahojukka5, femtocleaner[bot], kristofferc, rezarastak, sebastianpech


Issues

Consider using symbols instead of strings for fields

Currently, fields are indexed using Strings (haskey(element, "flux")). For cases like this, using Symbols is usually better since Symbols are interned and therefore represented by their pointer. This means comparing symbols is just a pointer comparison and hashing them is faster than for strings. For example, the columns in a DataFrame are indexed using Symbols.
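
A quick way to see the difference in isolation (a sketch using plain dictionaries rather than FEMBase fields):

using BenchmarkTools

d_str = Dict("flux" => 1.0)
d_sym = Dict(:flux => 1.0)

@btime haskey($d_str, "flux")  # hashes the string contents
@btime haskey($d_sym, :flux)   # Symbols are interned, so this is essentially a pointer comparison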

Give nodes, elements and integration points names instead of numbers?

Most FEM packages "name" nodes and elements by some running integer j. However, in e.g. Code Aster, each node or element has a name. By default, nodes are prefixed with N plus a running number, and elements with M plus a running number.

Advantages

I see a couple of benefits of this kind of naming approach:

  • Explicit is better than implicit. Currently, we find the dof number from the node id number, which is a somewhat inflexible approach. By having a name, e.g. :coupling_node, we explicitly say "do not use the id number to calculate anything", and an arbitrary dof mapping/handler must be used to find the dofs corresponding to that node, dofs[:coupling_node_1].
  • The name can carry additional information about nodes or elements. For example, we could have :coupling_node_1, :coupling_node_2, and so on. When importing several meshes with integer node ids, collisions are very likely because the same node ids are used in different bodies, and renumbering is needed anyway. In that case we could prefix nodes and elements, e.g. :body1_N1, :body1_N2, :body2_N1, and so on.
  • The name also makes the type explicit. Now 5 can mean node 5, element 5 or integration point 5. If we have :N5, :E5 and :IP5, it is immediately clear which one is a node, which an element and which an integration point. One more advantage is handling ABAQUS surfaces, which ABAQUS defines using an element id plus S1 ... S4; in the current JuliaFEM implementation we have to invent some new explicit id number for them. Using names, we could have :E1S1, :E1S2, :E1S3, :E1S4.
  • ABAQUS uses negative node ids internally, which hints that they, too, have found the usual positive id range insufficient for all needs. For example, if we need to automatically generate internal/virtual nodes or elements during the simulation, we need a private namespace to avoid name collisions. This could easily be done by prefixing the automatically generated entities.

Disadvantages

I cannot think of any, but there probably are some. At the very least, we can no longer calculate anything from the node id number, no dim*(j-1)+i, where j is the node number and i is the local dof number; but I think that is actually something we (should) want.
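
To make the trade-off concrete, a minimal sketch (the names follow the examples above; the dof layout is an assumption):

# Implicit scheme used today: global dof computed from node number j and local dof i.
dof(j, i; dim=3) = dim*(j - 1) + i
dof(5, 2)   # node 5, second component -> global dof 14

# Explicit name-based alternative: dofs are looked up from a dof handler, never computed.
dofs = Dict(:coupling_node_1 => [10, 11, 12],
            :body1_N1        => [1, 2, 3])
dofs[:coupling_node_1]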

Changes to parsers

In AsterReader, we can directly assign names to the mesh; in Code Aster meshes, nodes are usually prefixed with N and elements with M, as described above. In AbaqusReader, we just have to give some prefix to nodes and elements.

What do you think?

Package broken

(@v1.5) pkg> add FEMBase
   Updating registry at `~/.julia/registries/General`
   Updating git-repo `https://github.com/JuliaRegistries/General.git`
  Resolving package versions...
Updating `~/.julia/environments/v1.5/Project.toml`
  [fbcbbc08] + FEMBase v0.3.1
Updating `~/.julia/environments/v1.5/Manifest.toml`
  [49dc2e85] + Calculus v0.5.1
  [fbcbbc08] + FEMBase v0.3.1
  [353fb843] + FEMBasis v0.2.0
  [be8e8821] + FEMQuad v0.3.2

(@v1.5) pkg> test FEMBase
    Testing FEMBase
Status `/tmp/jl_yvKAwY/Project.toml`
  [fbcbbc08] FEMBase v0.3.1
  [353fb843] FEMBasis v0.2.0
  [be8e8821] FEMQuad v0.3.2
  [a759f4b9] TimerOutputs v0.5.6
  [37e2e46d] LinearAlgebra
  [2f01184e] SparseArrays
  [10745b16] Statistics
  [8dfed614] Test
Status `/tmp/jl_yvKAwY/Manifest.toml`
  [49dc2e85] Calculus v0.5.1
  [fbcbbc08] FEMBase v0.3.1
  [353fb843] FEMBasis v0.2.0
  [be8e8821] FEMQuad v0.3.2
  [a759f4b9] TimerOutputs v0.5.6
  [2a0f44e3] Base64
  [8ba89e20] Distributed
  [b77e0a4c] InteractiveUtils
  [8f399da3] Libdl
  [37e2e46d] LinearAlgebra
  [56ddb016] Logging
  [d6f4376e] Markdown
  [de0858da] Printf
  [9a3f8284] Random
  [9e88b42a] Serialization
  [6462fe0b] Sockets
  [2f01184e] SparseArrays
  [10745b16] Statistics
  [8dfed614] Test
  [4ec0a83e] Unicode
ERROR: LoadError: LoadError: Evaluation into the closed module `Calculus` breaks incremental compilation because the side effects will not be permanent. This is likely due to some other module mutating `Calculus` with `eval` during precompilation - don't do this.
Stacktrace:
 [1] eval at ./boot.jl:331 [inlined]
 [2] simplify(::Expr) at /home/jukka/.julia/packages/Calculus/mbqhh/src/symbolic.jl:96
 [3] calculate_interpolation_polynomials(::Expr, ::Array{Float64,2}) at /home/jukka/.julia/packages/FEMBasis/hrK6J/src/create_basis.jl:25
 [4] create_basis(::Symbol, ::String, ::Tuple{Tuple{Float64},Tuple{Float64}}, ::Expr) at /home/jukka/.julia/packages/FEMBasis/hrK6J/src/create_basis.jl:48
 [5] top-level scope at /home/jukka/.julia/packages/FEMBasis/hrK6J/src/lagrange_segments.jl:4
 [6] include(::Function, ::Module, ::String) at ./Base.jl:380
 [7] include at ./Base.jl:368 [inlined]
 [8] include(::String) at /home/jukka/.julia/packages/FEMBasis/hrK6J/src/FEMBasis.jl:4
 [9] top-level scope at /home/jukka/.julia/packages/FEMBasis/hrK6J/src/FEMBasis.jl:16
 [10] include(::Function, ::Module, ::String) at ./Base.jl:380
 [11] include(::Module, ::String) at ./Base.jl:368
 [12] top-level scope at none:2
 [13] eval at ./boot.jl:331 [inlined]
 [14] eval(::Expr) at ./client.jl:467
 [15] top-level scope at ./none:3
in expression starting at /home/jukka/.julia/packages/FEMBasis/hrK6J/src/lagrange_segments.jl:4
in expression starting at /home/jukka/.julia/packages/FEMBasis/hrK6J/src/FEMBasis.jl:16
ERROR: LoadError: Failed to precompile FEMBasis [353fb843-c566-51e6-ba49-78b3e3d5ebb5] to /home/jukka/.julia/compiled/v1.5/FEMBasis/x3cNT_EipM7.ji.
Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] compilecache(::Base.PkgId, ::String) at ./loading.jl:1305
 [3] _require(::Base.PkgId) at ./loading.jl:1030
 [4] require(::Base.PkgId) at ./loading.jl:928
 [5] require(::Module, ::Symbol) at ./loading.jl:923
 [6] include(::Function, ::Module, ::String) at ./Base.jl:380
 [7] include(::Module, ::String) at ./Base.jl:368
 [8] top-level scope at none:2
 [9] eval at ./boot.jl:331 [inlined]
 [10] eval(::Expr) at ./client.jl:467
 [11] top-level scope at ./none:3
in expression starting at /home/jukka/.julia/packages/FEMBase/qGDOl/src/FEMBase.jl:19
ERROR: LoadError: Failed to precompile FEMBase [fbcbbc08-f1bf-5204-9233-b69f5d396135] to /home/jukka/.julia/compiled/v1.5/FEMBase/6HUoq_EipM7.ji.
Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] compilecache(::Base.PkgId, ::String) at ./loading.jl:1305
 [3] _require(::Base.PkgId) at ./loading.jl:1030
 [4] require(::Base.PkgId) at ./loading.jl:928
 [5] require(::Module, ::Symbol) at ./loading.jl:923
 [6] include(::String) at ./client.jl:457
 [7] top-level scope at none:6
in expression starting at /home/jukka/.julia/packages/FEMBase/qGDOl/test/runtests.jl:4
ERROR: Package FEMBase errored during testing

get_integration_points for Seg2 not working anymore

julia> get_integration_points(Element(Seg2, (1, 2)))
ERROR: MethodError: no method matching FEMBase.Point{IntegrationPoint}(::Int64, ::Float64, ::Float64)
Closest candidates are:
  FEMBase.Point{IntegrationPoint}(::Any, ::Any, ::Any, ::Any, ::Any) where P<:FEMBase.AbstractPoint at C:\Users\jahx06\.julia\dev\FEMBase\src\types.jl:9
  FEMBase.Point{IntegrationPoint}(::Any, ::Any, ::Tuple) at C:\Users\jahx06\.julia\dev\FEMBase\src\types.jl:66
  FEMBase.Point{IntegrationPoint}(::Any, ::Any, ::Array{T,1} where T) at C:\Users\jahx06\.julia\dev\FEMBase\src\types.jl:70
Stacktrace:
 [1] (::getfield(FEMBase, Symbol("##31#32")))(::Tuple{Int64,Tuple{Float64,Float64}}) at .\none:0
 [2] iterate at .\generator.jl:47 [inlined]
 [3] collect(::Base.Generator{Base.Iterators.Enumerate{Base.Iterators.Zip{Tuple{Tuple{Float64,Float64},Tuple{Float64,Float64}}}},getfield(FEMBase, Symbol("##31#32"))}) at .\array.jl:606
 [4] get_integration_points(::Element{Seg2}) at C:\Users\jahx06\.julia\dev\FEMBase\src\elements.jl:377
 [5] top-level scope at none:0

Updating fields with dictionaries is not consistent

I just realized that we get two different kinds of results when updating fields with dictionaries. First, when we already have a DVTV field and then update it with a dictionary:

using FEMBase
T = Dict(1 => 1.0, 2 => 2.0)
element = Element(Seg2, (1, 2))
update!(element, "temperature", 0.0 => (0.0, 0.0))
update!(element, "temperature", 1.0 => T)
element.fields["temperature"]

# output

DVTV{2,Float64}(Pair{Float64,Tuple{Float64,Float64}}[0.0=>(0.0, 0.0), 1.0=>(1.0, 2.0)])

However, if we create a new field:

update!(element, "temperature2", 0.0 => T)
element.fields["temperature2"]

# output

DVTVd{Float64}(Pair{Float64,Dict{Int64,Float64}}[0.0=>Dict(2=>2.0,1=>1.0)])

It cannot be like this; it is sure to cause confusion. Because it is very unlikely that the user actually wants to create a DVTVd, we should always create a DVTV, and creating a DVTVd should require some special attention, maybe

temp_field = field(0.0 => T)
update!(element, "temperature", temp_field)

TimerOutputs

Loading FEMBase fails with the following error:

ERROR: LoadError: ArgumentError: Package FEMBase does not have TimerOutputs in its dependencies:
- If you have FEMBase checked out for development and have
  added TimerOutputs as a dependency but haven't updated your primary
  environment's manifest file, try `Pkg.resolve()`.
- Otherwise you may need to report an issue with FEMBase

Time-dependent element connectivity?

Would it make sense to make connectivity time-dependent? If we defined it using a field, i.e.

update!(element, :connectivity, 0.0 => (1, 2, 3, 4))
update!(element, :connectivity, 1.0 => (5, 6, 7, 8))

we could easily implement different mesh refinement strategies.

Tri3 -> Topology + Basis

Currently, we define elements using the style

element = Element(Tri3, [1, 2, 3])

and it is somewhat implicitly assumed that the basis of the element is a nodal basis, e.g. N(xi, eta) = (xi, eta, 1-xi-eta) in this particular case. We should extend this definition by giving the basis explicitly, with the nodal basis as the default. Then we could have Tri3{CG}, Tri3{Morley}, Tri3{DKT} and so on. This could be implemented in several ways; here are a couple of options:

abstract type AbstractBasis end
struct CG <: AbstractBasis end
struct Morley <: AbstractBasis end
abstract type AbstractTopology end
struct Tet10{T<:AbstractBasis} <: AbstractTopology end
abstract type AbstractElement end

struct Element{T<:AbstractTopology} <: AbstractElement
    # rest of stuff ...
    topology :: T
end

Element(Tet10{CG}())

# output

Element{Tet10{CG}}(Tet10{CG}())

Another way is

struct Tet4 <: AbstractTopology end

struct Element2{T<:AbstractTopology, B<:AbstractBasis} <: AbstractElement
    # rest of stuff ...
    topology :: T
    basis :: B
end

Element2(Tet4(), CG())

# output

Element2{Tet4,CG}(Tet4(), CG())

I don't know which one is better, or whether there is a third, better way to think about this. In any case, we can make the change so that element = Element(Tri3, [1, 2, 3]) still works and has the CG basis as the default:

function Element2(::Type{T}, connectivity) where {T <: AbstractTopology}
    return Element2(T(), CG())
end

Element2(Tet4, [1, 2, 3, 4])

# output

Element2{Tet4,CG}(Tet4(), CG())

Single element assemble + override default assemble operation

So, if we introduce a "local buffer" containing workspace/memory for assembling one element without any additional memory allocations, we could go back to assembling one element at a time.

It would be cool if we could make dispatching work so that we can override or extend the assembly of a single element when we have some special case to take care of. Would it work if we had something like:

function assemble_stiffness_matrix!(problem, assembly, other_stuff, ::Val{:E1})
    # my special assembly procedure
end

where :E1 is the name of the element.
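
At the call site the assembler would then construct the value type from the element name, e.g. assemble_stiffness_matrix!(problem, assembly, other_stuff, Val(:E1)); this is a sketch of the proposal above, not existing FEMBase API.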

We should also construct everything so that we can assemble the mass matrix, stiffness matrix, force vector and so on, either separately or all together.

Enhancements for fields

It would make sense to add some extra information to fields. I have some ideas:

  • a flag to show whether all components of a vector or tensor field should be active (see issue #47)
  • some fields may depend on time but still not be interpolable, at least not linearly; we should have options to define how to interpolate in the time direction (nearest neighbor, linear, quadratic) and also how to extrapolate when outside the time span
  • the possibility to store "dual" variables needed to perform AD
  • the possibility to store "increment" variables, i.e. if we have `displacement` but actually solve the increment \delta u, which is then added to the displacement, u_t = u_{t-1} + \delta u, once converged (see the sketch below)
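
A minimal sketch of the increment idea from the last bullet (names and call forms are assumptions, not FEMBase API):

# Accumulate a converged increment δu into the displacement field, u_t = u_{t-1} + δu.
u_prev = Dict(1 => [0.0, 0.0], 2 => [0.0, 0.0])   # displacement at the previous time step
du     = Dict(1 => [0.1, 0.0], 2 => [0.2, 0.0])   # converged increment δu
u_new  = Dict(k => u_prev[k] + du[k] for k in keys(u_prev))
# update!(element, "displacement", t => u_new)    # store back as a new time step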

README does not attract new users

Explain the main goals of the package. Why is this kind of package needed? README.md should be written so that the reader gets interested in the package. Maybe advertise how easily some basic tasks can be performed?

Element function accepts incompatible input arguments

The Element function allows the creation of a Quad8-type element with four nodes:

julia> element2 = Element(Quad8, [1,2,3,4])
FEMBase.Element{FEMBasis.Quad8}(-1, [1, 2, 3, 4], FEMBase.Point{FEMBase.IntegrationPoint}[], Dict{String,FEMBase.AbstractField}(), FEMBasis.Quad8())

It might be good to check the compatibility of the input arguments here.
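
A minimal sketch of such a check (the node counts are hard-coded here for illustration; a real fix would query the topology type in FEMBasis):

# Expected node counts for a few topologies (illustrative values only).
const NNODES = Dict(:Seg2 => 2, :Tri3 => 3, :Quad4 => 4, :Quad8 => 8, :Hex8 => 8)

function check_connectivity(topology::Symbol, connectivity)
    n = NNODES[topology]
    length(connectivity) == n ||
        error("$topology expects $n nodes, got $(length(connectivity))")
    return connectivity
end

check_connectivity(:Quad8, [1, 2, 3, 4])   # throws an informative error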

Abstract stress calculation from integration point

At the integration point, we need to have a place for a material model, and we need to define a way to ask for the stress state and Jacobian there, given some strain or displacement. The representation should be abstract enough that we can have very different approaches to material modeling, e.g. microscale models attached to an integration point. In the same spirit as #43, we could then have the possibility to override or extend the stress calculation for particular integration points.
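
As a rough sketch of what such an abstraction could look like (all names here are hypothetical, not FEMBase API):

abstract type AbstractMaterial end

struct LinearElastic <: AbstractMaterial
    E::Float64    # Young's modulus
    nu::Float64   # Poisson's ratio
end

# Given a strain, return the stress and the material Jacobian dσ/dε (1D toy version).
function stress_and_jacobian(mat::LinearElastic, strain)
    return mat.E*strain, mat.E
end

stress_and_jacobian(LinearElastic(210.0e3, 0.3), 1.0e-3)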

Perhaps tweak the way loads are defined for specific dimensions

Right now, a load is applied to a set of dimensions by appending the dimension number to the name of the load in the field string, for example

update!(support_elements, "displacement load 1", 0.0)
update!(support_elements, "displacement load 2", 0.0)

means we apply a load in dimensions 1 and 2.

In the problem this is then checked using something like

if haskey(element, "displacement load")
    T = element("displacement load", ip, time)
    f_ext += w*vec(T*N)
end

for i=1:dim
    if haskey(element, "displacement load $i")
        b = element("displacement load $i", ip, time)
        f_ext[i:dim:end] += w*vec(b*N)
    end
end

I would argue that it would be better to specify the dimensions in some kind of vector rather than in the string, for example:

update!(support_elements, "displacement load", 1, 1.0) # dim 1
update!(support_elements, "displacement load", 2:3, 1.0) # dim 2, 3
update!(support_elements, "displacement load", [1, 3], 1.0) # dim 1, 3
update!(support_elements, "displacement load", 1.0) # dim 1, 2 ,(3) all dimension by default

and the code using the load would get the dimensions it is applied to and the value as

b, dim = element("displacement load", ip, time)

This would require fewer dictionary lookups and be a bit more "Julian".

Changes to 0.3.0

Some of these changes probably break backward compatibility. New ideas are welcome.

  • Give connectivity as a tuple, along with other things that do not need calculations.

  • Element id required; change problem.elements from an array to a dict? When writing output we need some ordering anyway, and id-numbering of elements is quite standard in other FEM codes.

  • Naming: element.properties -> element.basis.

  • More intelligent way to calculate global dofs. Give the unknown field name of a problem as a tuple of symbols: (:u1, :u2, :u3) for a vector field, :T for a temperature field, and so on. Or ("Displacement 1", "Displacement 2"). For hybrid problems, e.g. displacement + pressure, have ((:u1, :u2, :u3), :p). The question is how to pass this information to BoundaryProblems and how to update elements after the global solution (see the sketch after this list).

  • A separate algorithm to calculate global dofs for each element before assembly.

  • Just like element.fields, solver and problem should have an equivalent way to create and update fields.
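
As mentioned in the dof-numbering item above, a rough sketch of computing global dofs from a tuple of unknown field names (the component-wise layout and names are assumptions, not FEMBase API):

unknowns = (:u1, :u2, :u3)   # unknown field given as a tuple of component names
connectivity = (4, 7, 9)     # element connectivity (global node numbers)

# Component-wise global dof for each (node, component) pair of the element.
dofs = Dict((node, comp) => length(unknowns)*(node - 1) + i
            for node in connectivity for (i, comp) in enumerate(unknowns))

dofs[(7, :u2)]   # == 3*(7-1) + 2 == 20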

Fields converting Tensor variables to Arrays

How to reproduce:

  1. Update a value at an integration point:
    update!(ip, variable, time => value),
    where value <: AbstractTensor.
    typeof(value)
    [ Info: Tensors.SymmetricTensor{2,3,Float64,6}

  2. Check the realized type
    typeof(ip(variable, time))
    [ Info: Array{Float64,2}

Simulation time increases in long simulations

In my test setup the tool was not moving at all, so I just ran a similar loop about 500 times. The first run takes about 0.5 seconds while the last one takes 2 seconds. The only thing that changes is that new values are updated to the fields at the end of each increment. My guess is that querying data from fields gets slower as the field contains more time increments. The field is first interpolated in the time direction here, and I don't immediately see any reason why fetching the newest value from the field should slow down as the size of the data vector grows.
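
A sketch to test that hypothesis directly: store several hundred time steps in one field and time a query of the newest value (the setup mirrors the benchmark in the next issue; whether this reproduces the slowdown is exactly the open question):

using BenchmarkTools, FEMBase

short = Element(Seg2, (1, 2))
long  = Element(Seg2, (1, 2))
for t in 0.0:9.0;   update!(short, "temperature", t => (t, t + 1.0)); end
for t in 0.0:499.0; update!(long,  "temperature", t => (t, t + 1.0)); end

ip = get_integration_points(short)
@btime short("temperature", $ip, 9.0)     # field with 10 stored time steps
@btime long("temperature", $ip, 499.0)    # field with 500 stored time steps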

Performance problem with interpolate

The current interpolate has a bit too much overhead to be acceptable.

For example

X = Dict(
    1 => [0.0, 0.0, 0.0],
    2 => [1.0, 0.0, 0.0],
    3 => [1.0, 1.0, 0.0],
    4 => [0.0, 1.0, 0.0],
    5 => [0.0, 0.0, 1.0],
    6 => [1.0, 0.0, 1.0],
    7 => [1.0, 1.0, 1.0],
    8 => [0.0, 1.0, 1.0])

u = Dict(
    1 => [0.0, 0.0, 0.0],
    2 => [-1/3, 0.0, 0.0],
    3 => [-1/3, -1/3, 0.0],
    4 => [0.0, -1/3, 0.0],
    5 => [0.0, 0.0, 1.0],
    6 => [-1/3, 0.0, 1.0],
    7 => [-1/3, -1/3, 1.0],
    8 => [0.0, -1/3, 1.0])

element = Element(Hex8, (1, 2, 3, 4, 5, 6, 7, 8))
update!(element, "geometry", 0.0 => X)
update!(element, "displacement", 0.0 => u)
update!(element, "youngs modulus", 288.0)
update!(element, "poissons ratio", 1/3)

ip = get_integration_points(element)

using BenchmarkTools

Benchmarking this gives

julia> @btime element("youngs modulus", $ip, 1.0)
  87.709 ns (2 allocations: 32 bytes)

This is a bit long and there are two allocations.

I tried storing the field using two vectors instead of a dictionary, on the assumption that the number of keys is small, with moderate success. Using d54d5d3 I got

julia> @btime element("youngs modulus", $ip, 1.0)
  65.823 ns (2 allocations: 32 bytes)
