FastAI.jl's Issues

Two FluxTraining entries in Project.toml

Hi,
FluxTraining may be entered twice in Project.toml, causing an error when I try to run Pkg.add.
Please see the error below.
Thank you.
-Charles

julia> Pkg.add(url="https://github.com/FluxML/FastAI.jl")
Updating git-repo https://github.com/FluxML/FastAI.jl
ERROR: Could not parse project: TOML Parser error:
/tmp/jl_BdmL3c/Project.toml:52:14 error: key already has a value
FluxTraining = "0.2"
^
Stacktrace:
[1] pkgerror(::String, ::Vararg{String, N} where N)
@ Pkg.Types /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Pkg/src/Types.jl:55
[2] read_project(f_or_io::String)
@ Pkg.Types /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Pkg/src/project.jl:134
[3] read_package(path::String)
@ Pkg.Types /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Pkg/src/Types.jl:434
[4] resolve_projectfile!(ctx::Pkg.Types.Context, pkg::Pkg.Types.PackageSpec, project_path::String)
@ Pkg.Types /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Pkg/src/Types.jl:701
[5] (::Pkg.Types.var"#44#45"{Pkg.Types.Context, Pkg.Types.PackageSpec, String})(repo::LibGit2.GitRepo)
@ Pkg.Types /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Pkg/src/Types.jl:671
[6] with(f::Pkg.Types.var"#44#45"{Pkg.Types.Context, Pkg.Types.PackageSpec, String}, obj::LibGit2.GitRepo)
@ LibGit2 /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LibGit2/src/types.jl:1150
[7] handle_repo_add!(ctx::Pkg.Types.Context, pkg::Pkg.Types.PackageSpec)
@ Pkg.Types /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Pkg/src/Types.jl:623
[8] handle_repos_add!(ctx::Pkg.Types.Context, pkgs::Vector{Pkg.Types.PackageSpec})
@ Pkg.Types /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Pkg/src/Types.jl:690
[9] add(ctx::Pkg.Types.Context, pkgs::Vector{Pkg.Types.PackageSpec}; preserve::Pkg.Types.PreserveLevel, platform::Base.BinaryPlatforms.Platform, kwargs::Base.Iterators.Pairs{Symbol, Base.TTY, Tuple{Symbol}, NamedTuple{(:io,), Tuple{Base.TTY}}})
@ Pkg.API /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Pkg/src/API.jl:183
[10] add(pkgs::Vector{Pkg.Types.PackageSpec}; io::Base.TTY, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ Pkg.API /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Pkg/src/API.jl:79
[11] add(pkgs::Vector{Pkg.Types.PackageSpec})
@ Pkg.API /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Pkg/src/API.jl:77
[12] #add#22
@ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Pkg/src/API.jl:74 [inlined]
[13] add
@ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Pkg/src/API.jl:74 [inlined]
[14] add(; name::Nothing, uuid::Nothing, version::Nothing, url::String, rev::Nothing, path::Nothing, mode::Pkg.Types.PackageMode, subdir::Nothing, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ Pkg.API /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Pkg/src/API.jl:97
[15] top-level scope
@ REPL[3]:1
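
Until the duplicate compat entry is removed upstream, one possible workaround (a rough sketch, not an official fix) is to clone the repository, delete the second FluxTraining compat line from Project.toml by hand, and develop the local copy:

using Pkg

# Clone FastAI.jl to a local path (path is illustrative).
run(`git clone https://github.com/FluxML/FastAI.jl /tmp/FastAI.jl`)

# After manually removing the duplicated `FluxTraining = "0.2"` line from
# /tmp/FastAI.jl/Project.toml, use the fixed local copy:
Pkg.develop(path="/tmp/FastAI.jl")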

Guides for block level customization

Hi,
Currently, the customization tutorial is based on the low-level API. It would be nice to have similar guides based on the block-level interface, covering material such as how to write custom blocks for different use cases and which functions need to be implemented for a new block to work properly. It appears to be a big jump from using existing blocks to customizing everything.

Personally, I'm struggling to figure out how to make new blocks work properly by looking at the source code, so I'm really looking forward to such guides.
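
For reference, here is the rough shape such a guide could take. This is only a sketch based on reading the source; the exact set of required methods (checkblock, mockblock, etc.) is an assumption that should be verified against the block interface:

using FastAI

# A hypothetical custom block; the struct and both methods below are
# illustrative, not part of the documented API.
struct BoundingBox <: FastAI.Block end

# `checkblock` should return `true` when `data` is a valid observation for the block.
FastAI.checkblock(::BoundingBox, data) = data isa NTuple{4,Float64}

# `mockblock` should return a random valid observation, useful for testing.
FastAI.mockblock(::BoundingBox) = (0.0, 0.0, rand(), rand())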

Error in reproducing the "Data containers" tutorial: "key :nothing not found"

I ran the following code from the Data containers tutorial in a Jupyter notebook in VS Code.

using FastAI
import FastAI: Image
data, _ = loaddataset("imagenette2-160", (Image, Label))

image, class = obs = getobs(data, 1)
@show class
image

The image cannot be displayed properly; the error output is below.

KeyError: key :nothing not found

Stacktrace:
  [1] getindex(h::Dict{Symbol, Int64}, key::Symbol)
    @ Base ./dict.jl:481
  [2] _parse_color
    @ ~/.julia/packages/Crayons/u3AH8/src/crayon.jl:143 [inlined]
  [3] #Crayon#1
    @ ~/.julia/packages/Crayons/u3AH8/src/crayon.jl:171 [inlined]
  [4] encodeimg(::ImageInTerminal.SmallBlocks, colordepth::ImageInTerminal.TermColor256, img::PermutedDimsArray{RGB{FixedPointNumbers.N0f8}, 2, (2, 1), (2, 1), Matrix{RGB{FixedPointNumbers.N0f8}}}, maxheight::Int64, maxwidth::Int64)
    @ ImageInTerminal ~/.julia/packages/ImageInTerminal/aJGpm/src/encodeimg.jl:65
  [5] imshow(io::IOContext{IOBuffer}, img::PermutedDimsArray{RGB{FixedPointNumbers.N0f8}, 2, (2, 1), (2, 1), Matrix{RGB{FixedPointNumbers.N0f8}}}, colordepth::ImageInTerminal.TermColor256, maxsize::Tuple{Int64, Int64})
    @ ImageInTerminal ~/.julia/packages/ImageInTerminal/aJGpm/src/imshow.jl:31
  [6] imshow(io::IOContext{IOBuffer}, img::PermutedDimsArray{RGB{FixedPointNumbers.N0f8}, 2, (2, 1), (2, 1), Matrix{RGB{FixedPointNumbers.N0f8}}}, colordepth::ImageInTerminal.TermColor256)
    @ ImageInTerminal ~/.julia/packages/ImageInTerminal/aJGpm/src/imshow.jl:26
  [7] show(io::IOContext{IOBuffer}, mime::MIME{Symbol("text/plain")}, img::PermutedDimsArray{RGB{FixedPointNumbers.N0f8}, 2, (2, 1), (2, 1), Matrix{RGB{FixedPointNumbers.N0f8}}})
    @ ImageInTerminal ~/.julia/packages/ImageInTerminal/aJGpm/src/ImageInTerminal.jl:67
  [8] limitstringmime(mime::MIME{Symbol("text/plain")}, x::PermutedDimsArray{RGB{FixedPointNumbers.N0f8}, 2, (2, 1), (2, 1), Matrix{RGB{FixedPointNumbers.N0f8}}})
    @ IJulia ~/.julia/packages/IJulia/e8kqU/src/inline.jl:43
  [9] display_mimestring
    @ ~/.julia/packages/IJulia/e8kqU/src/display.jl:71 [inlined]
 [10] display_dict(x::PermutedDimsArray{RGB{FixedPointNumbers.N0f8}, 2, (2, 1), (2, 1), Matrix{RGB{FixedPointNumbers.N0f8}}})
    @ IJulia ~/.julia/packages/IJulia/e8kqU/src/display.jl:102
 [11] #invokelatest#2
    @ ./essentials.jl:716 [inlined]
 [12] invokelatest
    @ ./essentials.jl:714 [inlined]
 [13] execute_request(socket::ZMQ.Socket, msg::IJulia.Msg)
    @ IJulia ~/.julia/packages/IJulia/e8kqU/src/execute_request.jl:112
 [14] #invokelatest#2
    @ ./essentials.jl:716 [inlined]
 [15] invokelatest
    @ ./essentials.jl:714 [inlined]
 [16] eventloop(socket::ZMQ.Socket)
    @ IJulia ~/.julia/packages/IJulia/e8kqU/src/eventloop.jl:8
 [17] (::IJulia.var"#15#18")()
    @ IJulia ./task.jl:423

Note that I can successfully display a plain color matrix (i.e., an image) such as rand(RGB{Colors.N0f8}, 3, 3).
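
As a possible workaround (an untested sketch), copying the lazy PermutedDimsArray into a plain Matrix may sidestep the failing ImageInTerminal code path, since plain color matrices display fine:

# `image` is the PermutedDimsArray returned above; materializing it into a
# plain Matrix{RGB{N0f8}} avoids the lazy-array display path (assumption).
img = collect(image)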

FastAI seems very slow compared to "vanilla" Flux

When I try to train a simple ResNet on the CIFAR10 dataset, FastAI seems very slow compared to Flux (≈ 9-19 times slower).
It could be a garbage collection problem: with Flux I can use a batch size of 512, while with FastAI I can't exceed 128 without getting an out-of-memory error.

FastAI code:

using FastAI
using ResNet9 # Pkg.add(url = "https://github.com/a-r-n-o-l-d/ResNet9.jl", rev="v0.1.1")

data, blocks = loaddataset("cifar10", (Image, Label))
method = ImageClassificationSingle(blocks)
model = resnet9(inchannels=3, nclasses=10, dropout=0.0)
learner = methodlearner(method, data; 
    lossfn=Flux.crossentropy,
    callbacks=[ToGPU()],
    batchsize=16,
    model=model,
    optimizer=Descent())

@time fitonecycle!(learner, 5, 1f-3, pct_start=0.5, divfinal=100, div=100)

Flux code:

using Flux
using Flux: DataLoader, onehotbatch
using Augmentor
using MLDatasets
using ParameterSchedulers
using ParameterSchedulers: Scheduler
using ResNet9 # Pkg.add(url = "https://github.com/a-r-n-o-l-d/ResNet9.jl", rev="v0.1.1")

normpip = SplitChannels() |> PermuteDims(3, 2, 1) |> ConvertEltype(Float32)

labels = CIFAR10.classnames() .|> Symbol

function datasets(batchsize)
    train = let
        x = CIFAR10.traintensor() |> CIFAR10.convert2image
        y = map(i -> labels[i + 1], CIFAR10.trainlabels())
        DataLoader((x, y), batchsize = batchsize, shuffle = true, partial = false)
    end

    test = let
        x = CIFAR10.testtensor() |> CIFAR10.convert2image
        y = map(i -> labels[i + 1], CIFAR10.testlabels())
        DataLoader((x, y), batchsize = batchsize)
    end
    
    train, test
end

function minibatch(x, y)
    h, w, n = size(x)
    xb = Array{Float32}(undef, w, h, 3, n)
    augmentbatch!(CPUThreads(), xb, x, normpip)
    yb = onehotbatch(y, labels)
    xb, yb
end

function train!(model, optimiser, nepochs)
    loss_hist = []
    loss(x, y) = Flux.crossentropy(model(x), y)
    ps = params(model)
    for e in 1:nepochs
        # Training phase
        tloss = 0
        trainmode!(model)
        for (x, y) in train
            x, y = minibatch(x, y) |> gpu
            gs = gradient(ps) do
                l = loss(x, y)
                tloss += l
                l
            end
            Flux.Optimise.update!(optimiser, ps, gs)
        end
        tloss /= length(train)
        # Validation phase
        testmode!(model)
        vloss = 0
        for (x, y) in test
            x, y = minibatch(x, y) |> gpu
            vloss += loss(x, y)
        end
        vloss /= length(test)
        push!(loss_hist, (tloss, vloss))
    end
    
    loss_hist
end

train, test = datasets(16)
nepochs = 5
s = Triangle(λ0 = 1f-5, λ1 = 1f-3, period = nepochs * length(train))
opt = Scheduler(s, Descent())
model = resnet9(inchannels = 3, nclasses = 10, dropout = 0.0) |> gpu
@time train!(model, opt, nepochs)

Results on a RTX 2080 Ti:
FastAI:
1841.008685 seconds (3.92 G allocations: 212.561 GiB, 59.59% gc time, 0.00% compilation time)
Flux:
98.444806 seconds (106.49 M allocations: 16.643 GiB, 3.58% gc time, 2.58% compilation time)

Results on a Quadro P5000:
FastAI:
1574.714976 seconds (3.92 G allocations: 212.473 GiB, 11.08% gc time)
Flux:
177.416636 seconds (105.55 M allocations: 16.639 GiB, 2.05% gc time, 1.42% compilation time)

Functions `getcoltypes` and `gettransformdict` are not exported properly

When following the Tabular Classification tutorial, I encountered the following error: UndefVarError: getcoltypes not defined.

The reason is that these symbols are not exported in "Tabular.jl" and thus not exported by FastAI through @reexport using .Tabular.

I can make a PR. However, I am not clear about your original intention. Do you want FastAI.getcoltypes or Tabular.getcoltypes? The latter works now without changing the source code.
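
In the meantime, a workaround (a small sketch of the suggestion above) is to bind the unexported helpers from the submodule, so the rest of the tutorial code runs unchanged:

using FastAI

# Access the unexported helpers through their submodule instead of relying on
# FastAI's exports; the tutorial's argument lists stay the same.
const getcoltypes = FastAI.Tabular.getcoltypes
const gettransformdict = FastAI.Tabular.gettransformdict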

Makie 0.17 support

When using Makie.jl 0.17, there are some errors cropping up (see https://github.com/FluxML/FastAI.jl/runs/6404954707?check_suite_focus=true) in the code for showing multiple observations, here:

# Add titles to named blocks
M.Label(grid[0, 1], "", tellwidth = false, textsize = 25)
for (i, title) in enumerate(header)
    M.Label(grid[1, i], title, tellwidth = false, textsize = 25)
end

This code tries to add titles to the beginning of each column of a previously created GridLayout.

Data Block API

See #136 .

It would be nice to have an API for easily constructing learning methods as manually implementing all the methods can get tedious and the resulting methods don't compose well.

The API would be similar to fastai's data block API with the main difference that it is limited to learning methods, i.e. it keeps data container creation and task-specific data encoding separate.

It would be based on Blocks, which represent a kind of data, and Encodings, transformations that encode data and are optionally invertible, allowing outputs to be decoded.

API

Best to give an example of what using it would look like. Below are reimplementations of some of FastAI.jl's computer vision methods.

ImageClassificationSingle(sz, classes) = Method(
    blocks=(Image{2}(), Label(classes)),
    encodings=[
        ProjectiveTransforms(sz),
        ImagePreprocessing(),
        OneHot()
    ]
)

ImageSegmentation(sz, classes) = Method(
    blocks=(Image{2}(), Mask{2}(classes)),
    encodings=[
        ProjectiveTransforms(sz),
        ImagePreprocessing(),
        OneHot()
    ]
)

SiameseSimilarity(sz) = Method(
    blocks=((Image{2}(), Image{2}()), Label([true, false])),
    encodings=[
        ProjectiveTransforms(sz),
        ImagePreprocessing(),
    ],
)

Given just these short definitions, plus the block and encoding definitions and the right interfaces in place, it would be possible to derive the following:

  • core interface (encoding, decoding, incl. buffered versions)
  • validation of input data
  • plotting interface
  • model building based on input and target blocks
  • loss functions based on target block

By grouping functionality by block or encoding, it would be much easier to compose and reuse different steps.
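
To make the idea more concrete, here is a toy sketch (with stand-in types, not the actual FastAI.jl blocks) of how dispatching on block types could derive a loss function and a model:

using Flux

# Toy stand-ins for the proposed block types, purely to illustrate the dispatch.
struct ImageTensorBlock; nchannels::Int end
struct OneHotBlock; classes::Vector{String} end

# Loss derived from the target block: one-hot targets -> cross-entropy.
blocklossfn(::OneHotBlock) = Flux.Losses.logitcrossentropy

# Model derived from input and output blocks plus a backbone.
blockmodel(in::ImageTensorBlock, out::OneHotBlock, backbone) =
    Chain(backbone, GlobalMeanPool(), Flux.flatten, Dense(64 => length(out.classes)))

# Usage with a dummy backbone that produces 64 feature maps:
backbone = Chain(Conv((3, 3), 3 => 64, relu; pad=1))
model = blockmodel(ImageTensorBlock(3), OneHotBlock(["cat", "dog"]), backbone)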

Improve projective data augmentation

Currently augmentation on images and keypoints is restricted to a random crop by default. fastai uses much more aggressive augmentation, and so should we. This should be straightforward to implement using DataAugmentation.jl's composable ProjectiveTransform.

The result should be something mirroring the functionality of fastai's aug_transforms.

To accomplish this, the following affine transformations should be added to DataAugmentation.jl:

  • Vertical and horizontal flip
  • Random rotation
  • Random warp
  • Random lighting
  • Random zoom

And then a helper in FastAI.jl that creates the augmentations which can be passed to ProjectiveTransforms.
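
A rough sketch of what such a helper could look like, assuming the transforms above exist in DataAugmentation.jl (the constructor names and the composition with |> are illustrative and may not match the final API):

using DataAugmentation

# Compose a set of affine augmentations; FlipX, Rotate, Zoom and Maybe are
# assumed to be provided by DataAugmentation.jl once the list above is done.
function augs_projection_sketch(; max_rotate = 10, max_zoom = 1.5)
    return Maybe(FlipX()) |> Rotate(max_rotate) |> Zoom((1.0, max_zoom))
end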

Discriminative learning rates

Discriminative learning rates means using different learning rates for different parts of a model, so-called layer groups. This is used in fastai when fine-tuning models.
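
This is not implemented yet; conceptually it amounts to keeping one optimizer per layer group, as in this plain-Flux sketch (the grouping and learning rates are illustrative):

using Flux

# Toy model split into two "layer groups": a backbone and a head.
backbone = Chain(Dense(10 => 32, relu), Dense(32 => 32, relu))
head = Dense(32 => 2)
model = Chain(backbone, head)

# Discriminative learning rates: small rate for the backbone, larger for the head.
opt_backbone = Descent(1e-4)
opt_head = Descent(1e-2)

x, y = rand(Float32, 10, 8), Flux.onehotbatch(rand(1:2, 8), 1:2)
gs = gradient(() -> Flux.logitcrossentropy(model(x), y), Flux.params(model))
Flux.Optimise.update!(opt_backbone, Flux.params(backbone), gs)
Flux.Optimise.update!(opt_head, Flux.params(head), gs)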

Learning method: Image segmentation

A LearningMethod for image segmentation. See https://docs.fast.ai/tutorial.vision.html#Points for fastai equivalent.

Status

Implementation:

  • LearningTask and LearningMethod definition
  • core interface
  • training interface
  • testing interface
  • plotting interface
  • loadtaskdata for canonical dataset store

Documentation:

  • complete reference in docstring
  • used in a tutorial

Tests:

  • LearningMethod test suite

Keypoint Regression example in documentation

I was following the KeypointRegression example from the documentation and ran into the following two errors:

julia> using FastAI.FilePathsBase, FastAI.StaticArrays, FastAI.DLPipelines
ERROR: UndefVarError: DLPipelines not defined
Stacktrace:
 [1] top-level scope
   @ ~/.julia/packages/CUDA/Uurn4/src/initialization.jl:52

and

julia> task = BlockTask(
           (Image{2}(), Keypoints{2}(1)),
           (
               ProjectiveTransforms(sz, buffered=true, augmentations=augs_projection(max_warp=0)),
               ImagePreprocessing(),
               KeypointPreprocessing(sz),
           )
       )
ERROR: UndefVarError: FlipX not defined
Stacktrace:
 [1] augs_projection(; flipx::Bool, flipy::Bool, max_zoom::Float64, max_rotate::Float64, max_warp::Int64)
   @ FastAI.Vision ~/.julia/packages/FastAI/GAIqz/src/Vision/encodings/projective.jl:200
 [2] top-level scope
   @ REPL[7]:1
 [3] top-level scope
   @ ~/.julia/packages/CUDA/Uurn4/src/initialization.jl:52

I think the first error can be solved simply by removing the FastAI.DLPipelines import.

Regarding the second error, FlipX alone cannot be found in the REPL either, but with the DataAugmentation. prefix it works.

julia> DataAugmentation.FlipX
FlipX (generic function with 1 method)

julia> FlipX
ERROR: UndefVarError: FlipX not defined
Stacktrace:
 [1] top-level scope
   @ :0
 [2] top-level scope
   @ ~/.julia/packages/CUDA/Uurn4/src/initialization.jl:52

I'd be happy to submit a PR fixing both issues. Is this prefix ^ also how you would solve it or do you prefer something different?

Julia version: 1.7.2
FastAI version: 0.4.0
Platform: macOS on M1

Tutorial errors out

The tutorial to train a model from scratch errors out and complains about a wrong keyword argument:

DataLoader(::Any, ::Any; collate, buffered, partial, useprimary) at ~\.julia\packages\DataLoaders\uGlPg\src\DataLoaders.jl:54 got unsupported keyword argument "validbsfactor"
DataLoader(::Any) at ~\.julia\packages\DataLoaders\uGlPg\src\DataLoaders.jl:54 got unsupported keyword argument "validbsfactor"

I followed the stack trace to DLPipelines.jl and DataLoaders.jl and can't find validbsfactor anywhere. What does it do?

`LoadError` in some pages of documentation.

I was getting myself familiar with data containers and data recipes, but the corresponding documentation page here shows a LoadError for the first few cells. I've also seen it on the introduction page.

Some maintenance and housekeeping things

We should do a full code review so the package matches the Flux code style and approach a bit better. There are also several places that need cleanup, e.g. leftover logging statements.

MLUtils.jl transition

In the medium-term, FastAI.jl will move to depend on MLUtils.jl instead of MLDataPattern.jl + LearnBase.jl + DataLoaders.jl.

Data container transformations

groupobs, joinobs, and the like in transformations.jl. These have already been copied over to MLUtils.jl in JuliaML/MLUtils.jl#21. Functions here can be removed once FastAI.jl depends on MLUtils.jl.

DataLoader

Path to replacing the dependency on DataLoaders.jl. Support for parallel data iterators has been merged into MLUtils.jl (JuliaML/MLUtils.jl#33) but is not released yet, and since it uses a different task scheduler than DataLoaders.jl, possible performance regressions on full training workloads still need to be benchmarked.

If that is not an issue and MLUtils.jl releases a stable DataLoader with feature parity of DataLoaders.jl, the dependency on DataLoaders.jl will be dropped.
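
For reference, the replacement would presumably look something like the following sketch (the parallel keyword is the merged-but-unreleased feature referenced above):

using MLUtils

# A DataLoaders.jl-style collated, shuffled, parallel loader via MLUtils.jl.
data = (rand(Float32, 28, 28, 1, 1000), rand(1:10, 1000))
loader = DataLoader(data; batchsize = 64, shuffle = true, collate = true, parallel = true)

for (x, y) in loader
    # training step goes here
end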

Timeseries learning usecase

Hi,

https://github.com/timeseriesAI/tsai is a TS package on top of FastAI python.

Just adding this to the wishlist. I might have some time to work on it (hopefully) for an upcoming project (or maybe will have to use the python package depending on cycles).

The salient question is: are the mid-level abstractions sufficiently generic for the time-series use case, and if not, what needs to change to support it?

And how would this fit into the higher-level FasterAI interfaces? Just musing out loud.

I'll try to dig into that package this week to see what sort of functionality it needs.

Contributions?

I am really interested in this project and want to contribute, but I am currently not sure how the project is planned. Is there any way I can help?

N-dimensional CNN models

fastai has this already for certain models to support audio and time series processing. We should be able to do the same, and even better, because Flux's Conv and pooling layers can (theoretically) handle an arbitrary number of dimensions.
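
For example, a 1-D convolutional model for audio or time-series data is just Flux's Conv with 1-D kernel sizes (a sketch; layer sizes are arbitrary):

using Flux

# A small 1-D CNN: input shaped (timesteps, channels, batch).
model = Chain(
    Conv((7,), 1 => 16, relu; pad = 3),
    MaxPool((2,)),
    Conv((5,), 16 => 32, relu; pad = 2),
    GlobalMeanPool(),
    Flux.flatten,
    Dense(32 => 10),
)

x = rand(Float32, 128, 1, 4)   # 128 timesteps, 1 channel, batch of 4
model(x)                       # 10×4 output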

Register Everything

FastAI.jl already has some functionality for discovering features as explained in this tutorial. This allows users to

  • find a dataset for a given task (e.g. find a dataset that can be loaded with (Image, Label) blocks)
  • find a supervised learning task for a given data modality (e.g. given (Image, Label), find the ImageClassificationSingle function)

These are two examples of a more general idea: a list of features with rich information that can be queried. This list can be extended by sub- and external modules to add functionality and make it available through a common interface.
Here I propose the implementation of a more general Registry that is more flexible in what information can be stored and has a consistent interface for querying and extending.

There could be registries for the following groups of features (and more):

  • Dataset: defines how a remote dataset is made available. For example imagenette2 can be downloaded
  • Dataset recipe: a recipe is a way to load a dataset into a data container ready for a learning task. For example imagenette2 can be loaded into a (Image{2}(), Label(...)) data container.
  • Learning task: find high-level constructors based on blocks (i.e. what findlearningmethods does currently)
  • Encodings: find encodings that are used with a certain block, e.g. all Encodings that work on Images
  • Architectures: find (possibly pretrained) model architectures that can work with some block data, e.g. convolutional architectures can take in ImageTensors.

Having these registries would make it easier

  • for users to find functionality relevant to their use case
  • for third-party (e.g. domain) modules to extend base FastAI.jl
  • to generate no-code interfaces populated with registry data where you can select functionality from dropdowns
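
To sketch the idea (the entry fields and query API below are purely illustrative, not a proposed implementation):

# A registry as a vector of richly-typed entries plus a query function.
Base.@kwdef struct DatasetEntry
    id::String
    blocks::Tuple
    size::String = "?"
    downloadfn::Function = () -> error("not implemented")
end

const DATASETS = DatasetEntry[]
register!(entry::DatasetEntry) = push!(DATASETS, entry)

# Query by a predicate over the entry fields, e.g. all datasets whose blocks
# match a given pair:
finddatasets(pred) = filter(pred, DATASETS)
# finddatasets(e -> e.blocks == (Image, Label))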

Prototype

As an idea for the API, I've created a prototype registry that stores dataset information for the fastai datasets.

(screenshot: the prototype dataset registry)

The information is incomplete for now, but we can already do some querying based on the columns:

(screenshot: querying the registry by column values)

Then select an entry and load it, downloading if it is not available:

(screenshot: loading a selected registry entry)

Complete columns can also be extracted and worked with:

(screenshot: extracting a registry column)


Having something like this just for the datasets or models may also be relevant to MLDatasets.jl or Metalhead.jl @CarloLucibello @darsnack

Documentation plans

With the new Pollen.jl frontend having been adopted in #203, I am taking the opportunity to think about changes and additions to the documentation.

The term Reader refers to someone who is reading the documentation.
I'll also be referencing the terms Tutorial, How-To, Reference, and Background, so if you're not familiar with this system for organizing documentation, please read https://diataxis.fr/.

Structural changes

Domain documentation

With #240 turning domain-specific functionality into subpackages, FastAI.jl has moved toward a design of one core with multiple domain extensions. I think this is also beneficial for Readers who consult the docs for help with a problem in some domain they want to solve.

Each domain (e.g. computer vision, tabular, time series, text) will have its own page group in the docs menu, which should include the following pages:

  • Overview: gives a short background of the topic, links to related tutorials and also gives a short reference of learning tasks (e.g. TabularRegression), of the kinds of data it deals with (i.e. Blocks) and relevant data processing steps (i.e. Encodings) for those blocks.
  • Beginner Tutorial: for every domain, there should be at least one Tutorial that guides the Reader through a simple use case (e.g. single-label image classification). It should use the high-level interface (loaddataset, ready-made learning task, tasklearner) and link frequently to other pages with more detailed Reference and Background information.
  • Reference: An overview of the API of the domain (sub)module. Each exported symbol should have a comprehensive docstring that gives a short description, explains required and optional arguments, and includes an Examples section that briefly covers some use cases in a How-To fashion.

Documentation for a domain module may also contain

  • more Tutorials: tutorials for intermediate and advanced use cases in the domain are a great way for Readers to engage with the library and possibly learn something about the domain as well
  • How-Tos: these should tell you how to perform common tasks, e.g. using augmentations in computer vision tasks.
  • Background: this can be used to explain topics related to the domain, design choices made when implementing the library and other topics that don't fit into the other categories.
  • Task pages: these can go into more detail about a specific learning task. Each should start with a 5-10 line end-to-end example, and then walk down the ladder of abstraction, showing the kinds of data and encodings being used.

General documentation

Next to the domain-specific docs, the domain-agnostic parts of FastAI.jl, like concepts, interfaces, training, data handling etc. should be documented.
Good examples from domain submodules should be used in tutorials and how-tos to set explanations into context.

Additions

APIs overview

FastAI.jl has a lot of API layers that build on top of each other, and a page that summarizes these in a neat diagram would be nice.

API tour

As a more interactive tour through the API and how pieces relate, I have long been thinking of something organized as follows: the tour starts with a high-level, 5-line example (as in the README), and gives some context for what is happening. Then, you can "drill down" into each of the lines and it'll give you the extended version using APIs one layer below. Consider the following high-level example:

data, blocks = loaddataset("imagenette2-320", (Image, Label))
task = ImageClassificationSingle(blocks)
learner = tasklearner(task, data)
fitonecycle!(learner)
plotpredictions(task, learner)

We could then drill down on each line, e.g. the first would take us to the following, expanded code:

path = datasetpath("imagenette2-160")
data = inputs, targets = Datasets.loadfolderdata(
    path,
    filterfn=isimagefile,
    loadfn=(loadfile, parentname))
classes = unique(eachobs(targets))
blocks = (Image{2}(), Label(classes))

We could again drill down on relevant lines, demystifying the API at every step, showing the Reader how they could use their custom components and linking to relevant material everywhere.
For some more examples of "drilling down" from high-level one-liners, see this older post under the heading "API flexibility".

Extending

Every interface that is extensible should have documentation describing how to do so. Since most interfaces belong to the core FastAI.jl (i.e. not a domain library), this should be part of the general documentation.

  • Reference for how to implement the interface. This is best put under an "Extending" section in the abstract type's docstring, which should give an overview and link to necessary functions to implement. Each of these functions should have a more detailed "Extending" section.
  • Where possible, testing utilities like test_encoding that perform automated checks on the interface's invariants should be provided
  • Examples of extending an interface can also be featured in How-Tos or tutorials

Contributing

To make it easier to contribute and decrease maintainer burden, a contributing section should be part of the docs. It should clarify the following topics

  • Community standard and contribution process, e.g. ColPrac
  • coding style guidelines
  • how to implement interfaces
  • how the code is organized, especially that of domain submodules
  • how tests are written using InlineTest.jl and ReTest.jl
  • how to add documentation and run the docs interactively
  • PR template/checklist

Other content

(This is copied from FluxML/FluxML-Community-Call-Minutes#35)

  • Tutorials
    • FastAI.jl for fast.ai users: Multi-part tutorial series to help fast.ai users get started with FastAI.jl
      • (Part 1) Julia Basics: Syntax basics, array programming
      • (Part 2) Flux.jl vs. PyTorch: Differences between the frameworks, code comparisons for building a model
      • (Part 3) FastAI.jl vs. fast.ai: Differences shown by comparing the code for a basic finetuning task. Pointers to more resources.
    • Using parts of the API separately: Explains how FastAI.jl is built on many decoupled packages and that you don't have to use all of them. For example, showing how to use the LearningMethod machinery with a regular Flux.jl training loop and, inversely, using a Learner but with a custom data iterator and no learning method.
    • Serving predictions on a web server: Reuses the trained model from the serialization tutorial and shows how to package it into a small HTTP server that can be used to get predictions.
    • Implementing callbacks: Go from using callbacks to implementing your own callbacks, and explore how several existing callbacks are implemented. (Basic version here)
    • Siamese image similarity: Showcase different parts of FastAI.jl's APIs to implement an image similarity learning task (original fast.ai tutorial), FastAI.jl#31
    • Progressive resizing: Explain the method and implement it by building on the presizing tutorial. Train a vision model using it.
    • Transfer learning: Explain transfer learning, backbones, pretrained models and the techniques used to successfully finetune them.
  • How-to
    • Implement callbacks: Checklist for implementing callbacks.
    • Evaluate models: Measuring performance on trained models
  • Reference
    • FastAI.jl vs. fast.ai cheatsheet: Compare concepts and their equivalents in both libraries.
    • Packages: Overview of packages that FastAI.jl depends on for different parts of its API: Flux.jl, DLPipelines.jl, DataAugmentation.jl, DataLoaders.jl, Metalhead.jl, ...

Update for Flux 0.13

Flux will stop exporting params, which I think causes this error in tests here:

getbatch: Error During Test at /home/runner/.julia/packages/ReTest/WnRIG/src/ReTest.jl:517
  Got exception outside of a @test
  UndefVarError: params not defined
  Stacktrace:
    [1] paramsrec(m::Function)
      @ FluxTraining ~/.julia/packages/FluxTraining/PJMOa/src/learner.jl:140

MIT vs Apache license

fastai seems to be under an Apache license; I'm not sure you can port it and license it under MIT.

Support for non-supervised learning tasks

So right now, much of the data block magic is specialized on supervised learning tasks, e.g. blocklossfn, blockmodel and the visualizations. It would be nice to better support other kinds of tasks as well. The underlying APIs are already pretty general, e.g. encode has no constraints on the number of blocks or encodings. It's more that, right now, BlockMethod assumes a tuple (inputblock, targetblock) and uses that to determine what block the model output is and what blocks are in a batch for visualization.

For the FastAI Q&A, @darsnack prepared a variational autoencoder tutorial where the data is really just one image that is itself the target, and we ended up having to overwrite some methods to get it to work.

So the thing that mostly differs between learning tasks is how each block of data is used. Below I summarize the different kinds of blocks that need to be known to use the block method APIs.

  • sampleblock i.e. what are the blocks of one sample
  • outputblock i.e. what block is output by the model
  • xblock i.e. what block is input to the model
  • yblock i.e. what block does the loss function compare outputblock to
  • showpredblocks i.e. what blocks are compared when using showpredictions
  • showoutputblocks i.e. what blocks are compared when using showoutputs

For a basic supervised learning method, we have

  • sampleblock = (inputblock, targetblock) = method.blocks
  • outputblock = yblock (can be overwritten)
  • xblock, yblock = encode(sampleblock)
  • predblock = decode(yblock) (usually same as targetblock)
  • showoutputblocks = (xblock, yblock, outputblock)
  • showpredblocks = (inputblock, targetblock, predblock) = decode(showoutputblocks)

For the autoencoder example, we have

  • sampleblock = inputblock = targetblock = predblock
  • outputblock = xblock = yblock = encode(sampleblock)
  • showpredblocks = (inputblock, predblock)
  • showoutputblocks = (xblock, outputblock)

This is how the blocks are used:

  • core interface
    • encode() = encode(sampleblock)
    • encodeinput() = encode(inputblock)
    • encodetarget() = encode(targetblock)
    • decodeypred() = decode(outputblock)
    • decodex() = decode(xblock)
    • decodey() = decode(yblock)
  • training interface
    • methodlossfn() = blocklossfn(outputblock, yblock)
    • methodmodel() = blockmodel(xblock, outputblock, blockbackbone(xblock))
  • interpretation interface
    • showpredictions() = showblocks(showpredblocks)
    • showoutputs() = showblocksinterpretable(showoutputblocks)
  • testing interface
    • mocksample() = mockblock(sampleblock)
    • mockmodel() = mockmodel(xblock, outputblock)
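
One possible direction (only a sketch; none of this is an existing FastAI.jl API) is to turn each of these block roles into a function of the task type, so that non-supervised tasks can specialize only the roles that differ:

# Sketch: block roles as overridable functions of the task type.
abstract type AbstractBlockTaskSketch end

struct SupervisedTaskSketch <: AbstractBlockTaskSketch
    blocks::Tuple   # (inputblock, targetblock)
end

struct AutoencoderTaskSketch <: AbstractBlockTaskSketch
    block           # a single block that is both input and target
end

sampleblock(t::SupervisedTaskSketch)  = t.blocks
sampleblock(t::AutoencoderTaskSketch) = t.block

# For supervised tasks the target block plays the role of yblock ...
yblock(t::SupervisedTaskSketch)  = t.blocks[2]
# ... while for an autoencoder the single block plays both roles.
yblock(t::AutoencoderTaskSketch) = t.block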

Add `TableDataset`

We need a TableDataset data container to work with tabular data. This data container should implement the getobs/nobs interface from LearnBase.jl, and it should wrap any "table" that satisfies the interface from Tables.jl.
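
A minimal sketch of such a container, assuming the getobs/nobs methods from LearnBase.jl and row access through Tables.jl (the real implementation may differ, e.g. by avoiding the row materialization):

import LearnBase
using Tables

# Wrap any Tables.jl table as a vector of rows and expose the data container interface.
struct TableDataset{T}
    rows::Vector{T}
end
TableDataset(table) = TableDataset(Tables.rowtable(table))

LearnBase.nobs(ds::TableDataset) = length(ds.rows)
LearnBase.getobs(ds::TableDataset, idx) = ds.rows[idx]

# Usage with any Tables.jl source, e.g. a NamedTuple of columns:
ds = TableDataset((a = 1:5, b = rand(5)))
LearnBase.getobs(ds, 3)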

Colab integration

Ideally, all the documentation pages based off of Jupyter notebooks should have a button to launch them in Colab.

For example, fastai's documentation has such a button:
(screenshot: the "Open in Colab" button in fastai's documentation)

The first step would be a tutorial on how to set up Colab with Julia and install the dependencies.

See this Discourse post that shows how to get Julia running on Colab.

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Move datasets to MLDatasets.jl

In the long term, we'd like most of the src/datasets code to move to MLDatasets.jl. To make this happen, we need a refactor of MLDatasets.jl to be more extensible and build on top of LearnBase.jl. Below is the structure envisioned for MLDatasets.jl:

  1. Low-level API: structs for different types of I/O (e.g. FileDataset) that support reading from the underlying I/O via getobs and nobs from LearnBase.jl
  2. High-level API: specific datasets (e.g. CIFAR10) implemented using the low-level API

To achieve this goal, we need to complete the following stages:

  • Move data containers (e.g. FileDataset) to MLDatasets.jl
  • Move data container transformations (e.g. mapobs, groupobs, etc.) to MLDataPattern.jl (these transformations apply generically to any iterator of observations, not just data containers)
  • Refactor existing data sets in MLDatasets.jl to utilize the low-level APIs
  • Move FastAI.jl datasets to MLDatasets.jl
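
As a sketch of the first stage (names illustrative), a low-level file-backed container could look like this:

import LearnBase

# Store file paths plus a loading function and implement getobs/nobs over them.
struct FileDataset{F}
    paths::Vector{String}
    loadfn::F
end

LearnBase.nobs(ds::FileDataset) = length(ds.paths)
LearnBase.getobs(ds::FileDataset, idx::Int) = ds.loadfn(ds.paths[idx])

# e.g. wrap every file in a folder with a user-provided loader:
# ds = FileDataset(readdir("images/"; join = true), loadimagefile)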

Keypoint regression example: The input graph contains at least one loop

Context:
FastAI v0.4.3
Flux v0.13.3
Julia 1.7.2 on mac M1

In the final fitonecycle!(learner, 5) step of the keypoint regression example, the following error occurs in topological_sort_by_dfs() in Graphs.jl: The input graph contains at least one loop.

Below are the error and MWE (reduced version of the tutorial in the documentation).

The error
ERROR: The input graph contains at least one loop.
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:33
  [2] topological_sort_by_dfs(::Type{Graphs.IsDirected{Graphs.SimpleGraphs.SimpleDiGraph{Int64}}}, g::Graphs.SimpleGraphs.SimpleDiGraph{Int64})
    @ Graphs ~/.julia/packages/Graphs/zrMoC/src/traversals/dfs.jl:65
  [3] topological_sort_by_dfs(g::Graphs.SimpleGraphs.SimpleDiGraph{Int64})
    @ Graphs ~/.julia/packages/SimpleTraits/l1ZsK/src/SimpleTraits.jl:331
  [4] (::FluxTraining.var"#16#17"{Learner})()
    @ FluxTraining ~/.julia/packages/FluxTraining/bday3/src/callbacks/execution.jl:9
  [5] ignore
    @ ~/.julia/packages/Zygote/DkIUK/src/lib/utils.jl:25 [inlined]
  [6] handle(runner::FluxTraining.LinearRunner, event::FluxTraining.Events.EpochBegin, phase::TrainingPhase, learner::Learner)
    @ FluxTraining ~/.julia/packages/FluxTraining/bday3/src/callbacks/execution.jl:8
  [7] (::FluxTraining.var"#handlefn#77"{Learner, TrainingPhase})(e::FluxTraining.Events.EpochBegin)
    @ FluxTraining ~/.julia/packages/FluxTraining/bday3/src/training.jl:102
  [8] runepoch(epochfn::FluxTraining.var"#67#68"{Learner, TrainingPhase, DataLoaders.GetObsParallel{DataLoaders.BatchViewCollated{FastAI.TaskDataset{Tuple{MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, SubArray{Int64, 1, Vector{Int64}, Tuple{Vector{Int64}}, false}, LearnBase.ObsDim.Undefined}, MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadannotfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, SubArray{Int64, 1, Vector{Int64}, Tuple{Vector{Int64}}, false}, LearnBase.ObsDim.Undefined}}, SupervisedTask{NamedTuple{(:input, :target, :sample, :encodedsample, :x, :y, :ŷ, :pred), Tuple{Image{2}, Keypoints{2, 1}, Tuple{Image{2}, Keypoints{2, 1}}, Tuple{Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}}, Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, Keypoints{2, 1}}}}, Tuple{ProjectiveTransforms{2, NamedTuple{(:training, :validation, :inference), Tuple{DataAugmentation.BufferedThreadsafe, DataAugmentation.BufferedThreadsafe, DataAugmentation.Sequence{Tuple{DataAugmentation.CroppedProjectiveTransform{DataAugmentation.ScaleKeepAspect{2}, DataAugmentation.PadDivisible}, DataAugmentation.PinOrigin}}}}}, ImagePreprocessing{FixedPointNumbers.N0f8, 3, ColorTypes.RGB{FixedPointNumbers.N0f8}, Float32}, KeypointPreprocessing{2, Float32}}}, Training}}}}, learner::Learner, phase::TrainingPhase)
    @ FluxTraining ~/.julia/packages/FluxTraining/bday3/src/training.jl:104
  [9] epoch!
    @ ~/.julia/packages/FluxTraining/bday3/src/training.jl:22 [inlined]
 [10] (::FastAI.var"#154#156"{Tuple{Pair{TrainingPhase, DataLoaders.GetObsParallel{DataLoaders.BatchViewCollated{FastAI.TaskDataset{Tuple{MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, SubArray{Int64, 1, Vector{Int64}, Tuple{Vector{Int64}}, false}, LearnBase.ObsDim.Undefined}, MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadannotfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, SubArray{Int64, 1, Vector{Int64}, Tuple{Vector{Int64}}, false}, LearnBase.ObsDim.Undefined}}, SupervisedTask{NamedTuple{(:input, :target, :sample, :encodedsample, :x, :y, :ŷ, :pred), Tuple{Image{2}, Keypoints{2, 1}, Tuple{Image{2}, Keypoints{2, 1}}, Tuple{Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}}, Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, Keypoints{2, 1}}}}, Tuple{ProjectiveTransforms{2, NamedTuple{(:training, :validation, :inference), Tuple{DataAugmentation.BufferedThreadsafe, DataAugmentation.BufferedThreadsafe, DataAugmentation.Sequence{Tuple{DataAugmentation.CroppedProjectiveTransform{DataAugmentation.ScaleKeepAspect{2}, DataAugmentation.PadDivisible}, DataAugmentation.PinOrigin}}}}}, ImagePreprocessing{FixedPointNumbers.N0f8, 3, ColorTypes.RGB{FixedPointNumbers.N0f8}, Float32}, KeypointPreprocessing{2, Float32}}}, Training}}}}, Pair{ValidationPhase, DataLoaders.GetObsParallel{DataLoaders.BatchViewCollated{FastAI.TaskDataset{Tuple{MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, Vector{Int64}, LearnBase.ObsDim.Undefined}, MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadannotfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, Vector{Int64}, LearnBase.ObsDim.Undefined}}, SupervisedTask{NamedTuple{(:input, :target, :sample, :encodedsample, :x, :y, :ŷ, :pred), Tuple{Image{2}, Keypoints{2, 1}, Tuple{Image{2}, Keypoints{2, 1}}, Tuple{Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}}, Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, Keypoints{2, 1}}}}, Tuple{ProjectiveTransforms{2, NamedTuple{(:training, :validation, :inference), Tuple{DataAugmentation.BufferedThreadsafe, DataAugmentation.BufferedThreadsafe, DataAugmentation.Sequence{Tuple{DataAugmentation.CroppedProjectiveTransform{DataAugmentation.ScaleKeepAspect{2}, DataAugmentation.PadDivisible}, DataAugmentation.PinOrigin}}}}}, ImagePreprocessing{FixedPointNumbers.N0f8, 3, ColorTypes.RGB{FixedPointNumbers.N0f8}, Float32}, KeypointPreprocessing{2, Float32}}}, Validation}}}}}, Learner, Int64})()
    @ FastAI ~/.julia/packages/FastAI/sjHxr/src/training/onecycle.jl:36
 [11] withcallbacks(f::FastAI.var"#154#156"{Tuple{Pair{TrainingPhase, DataLoaders.GetObsParallel{DataLoaders.BatchViewCollated{FastAI.TaskDataset{Tuple{MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, SubArray{Int64, 1, Vector{Int64}, Tuple{Vector{Int64}}, false}, LearnBase.ObsDim.Undefined}, MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadannotfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, SubArray{Int64, 1, Vector{Int64}, Tuple{Vector{Int64}}, false}, LearnBase.ObsDim.Undefined}}, SupervisedTask{NamedTuple{(:input, :target, :sample, :encodedsample, :x, :y, :ŷ, :pred), Tuple{Image{2}, Keypoints{2, 1}, Tuple{Image{2}, Keypoints{2, 1}}, Tuple{Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}}, Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, Keypoints{2, 1}}}}, Tuple{ProjectiveTransforms{2, NamedTuple{(:training, :validation, :inference), Tuple{DataAugmentation.BufferedThreadsafe, DataAugmentation.BufferedThreadsafe, DataAugmentation.Sequence{Tuple{DataAugmentation.CroppedProjectiveTransform{DataAugmentation.ScaleKeepAspect{2}, DataAugmentation.PadDivisible}, DataAugmentation.PinOrigin}}}}}, ImagePreprocessing{FixedPointNumbers.N0f8, 3, ColorTypes.RGB{FixedPointNumbers.N0f8}, Float32}, KeypointPreprocessing{2, Float32}}}, Training}}}}, Pair{ValidationPhase, DataLoaders.GetObsParallel{DataLoaders.BatchViewCollated{FastAI.TaskDataset{Tuple{MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, Vector{Int64}, LearnBase.ObsDim.Undefined}, MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadannotfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, Vector{Int64}, LearnBase.ObsDim.Undefined}}, SupervisedTask{NamedTuple{(:input, :target, :sample, :encodedsample, :x, :y, :ŷ, :pred), Tuple{Image{2}, Keypoints{2, 1}, Tuple{Image{2}, Keypoints{2, 1}}, Tuple{Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}}, Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, Keypoints{2, 1}}}}, Tuple{ProjectiveTransforms{2, NamedTuple{(:training, :validation, :inference), Tuple{DataAugmentation.BufferedThreadsafe, DataAugmentation.BufferedThreadsafe, DataAugmentation.Sequence{Tuple{DataAugmentation.CroppedProjectiveTransform{DataAugmentation.ScaleKeepAspect{2}, DataAugmentation.PadDivisible}, DataAugmentation.PinOrigin}}}}}, ImagePreprocessing{FixedPointNumbers.N0f8, 3, ColorTypes.RGB{FixedPointNumbers.N0f8}, Float32}, KeypointPreprocessing{2, Float32}}}, Validation}}}}}, Learner, Int64}, learner::Learner, callbacks::Scheduler)
    @ FastAI ~/.julia/packages/FastAI/sjHxr/src/training/utils.jl:79
 [12] #153
    @ ~/.julia/packages/FastAI/sjHxr/src/training/onecycle.jl:33 [inlined]
 [13] withfields(f::FastAI.var"#153#155"{Tuple{Pair{TrainingPhase, DataLoaders.GetObsParallel{DataLoaders.BatchViewCollated{FastAI.TaskDataset{Tuple{MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, SubArray{Int64, 1, Vector{Int64}, Tuple{Vector{Int64}}, false}, LearnBase.ObsDim.Undefined}, MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadannotfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, SubArray{Int64, 1, Vector{Int64}, Tuple{Vector{Int64}}, false}, LearnBase.ObsDim.Undefined}}, SupervisedTask{NamedTuple{(:input, :target, :sample, :encodedsample, :x, :y, :ŷ, :pred), Tuple{Image{2}, Keypoints{2, 1}, Tuple{Image{2}, Keypoints{2, 1}}, Tuple{Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}}, Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, Keypoints{2, 1}}}}, Tuple{ProjectiveTransforms{2, NamedTuple{(:training, :validation, :inference), Tuple{DataAugmentation.BufferedThreadsafe, DataAugmentation.BufferedThreadsafe, DataAugmentation.Sequence{Tuple{DataAugmentation.CroppedProjectiveTransform{DataAugmentation.ScaleKeepAspect{2}, DataAugmentation.PadDivisible}, DataAugmentation.PinOrigin}}}}}, ImagePreprocessing{FixedPointNumbers.N0f8, 3, ColorTypes.RGB{FixedPointNumbers.N0f8}, Float32}, KeypointPreprocessing{2, Float32}}}, Training}}}}, Pair{ValidationPhase, DataLoaders.GetObsParallel{DataLoaders.BatchViewCollated{FastAI.TaskDataset{Tuple{MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, Vector{Int64}, LearnBase.ObsDim.Undefined}, MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadannotfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, Vector{Int64}, LearnBase.ObsDim.Undefined}}, SupervisedTask{NamedTuple{(:input, :target, :sample, :encodedsample, :x, :y, :ŷ, :pred), Tuple{Image{2}, Keypoints{2, 1}, Tuple{Image{2}, Keypoints{2, 1}}, Tuple{Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}}, Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, Keypoints{2, 1}}}}, Tuple{ProjectiveTransforms{2, NamedTuple{(:training, :validation, :inference), Tuple{DataAugmentation.BufferedThreadsafe, DataAugmentation.BufferedThreadsafe, DataAugmentation.Sequence{Tuple{DataAugmentation.CroppedProjectiveTransform{DataAugmentation.ScaleKeepAspect{2}, DataAugmentation.PadDivisible}, DataAugmentation.PinOrigin}}}}}, ImagePreprocessing{FixedPointNumbers.N0f8, 3, ColorTypes.RGB{FixedPointNumbers.N0f8}, Float32}, KeypointPreprocessing{2, Float32}}}, Validation}}}}}, Learner, Int64, Scheduler}, x::Learner; kwargs::Base.Pairs{Symbol, Flux.Optimise.ADAM, Tuple{Symbol}, NamedTuple{(:optimizer,), Tuple{Flux.Optimise.ADAM}}})
    @ FastAI ~/.julia/packages/FastAI/sjHxr/src/training/utils.jl:53
 [14] fitonecycle!(learner::Learner, nepochs::Int64, maxlr::Float64; phases::Tuple{Pair{TrainingPhase, DataLoaders.GetObsParallel{DataLoaders.BatchViewCollated{FastAI.TaskDataset{Tuple{MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, SubArray{Int64, 1, Vector{Int64}, Tuple{Vector{Int64}}, false}, LearnBase.ObsDim.Undefined}, MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadannotfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, SubArray{Int64, 1, Vector{Int64}, Tuple{Vector{Int64}}, false}, LearnBase.ObsDim.Undefined}}, SupervisedTask{NamedTuple{(:input, :target, :sample, :encodedsample, :x, :y, :ŷ, :pred), Tuple{Image{2}, Keypoints{2, 1}, Tuple{Image{2}, Keypoints{2, 1}}, Tuple{Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}}, Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, Keypoints{2, 1}}}}, Tuple{ProjectiveTransforms{2, NamedTuple{(:training, :validation, :inference), Tuple{DataAugmentation.BufferedThreadsafe, DataAugmentation.BufferedThreadsafe, DataAugmentation.Sequence{Tuple{DataAugmentation.CroppedProjectiveTransform{DataAugmentation.ScaleKeepAspect{2}, DataAugmentation.PadDivisible}, DataAugmentation.PinOrigin}}}}}, ImagePreprocessing{FixedPointNumbers.N0f8, 3, ColorTypes.RGB{FixedPointNumbers.N0f8}, Float32}, KeypointPreprocessing{2, Float32}}}, Training}}}}, Pair{ValidationPhase, DataLoaders.GetObsParallel{DataLoaders.BatchViewCollated{FastAI.TaskDataset{Tuple{MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, Vector{Int64}, LearnBase.ObsDim.Undefined}, MLDataPattern.DataSubset{FastAI.Datasets.MappedData{typeof(loadannotfile), SubArray{String, 1, Vector{String}, Tuple{Vector{Int64}}, false}}, Vector{Int64}, LearnBase.ObsDim.Undefined}}, SupervisedTask{NamedTuple{(:input, :target, :sample, :encodedsample, :x, :y, :ŷ, :pred), Tuple{Image{2}, Keypoints{2, 1}, Tuple{Image{2}, Keypoints{2, 1}}, Tuple{Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}}, Bounded{2, FastAI.Vision.ImageTensor{2}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, FastAI.Vision.KeypointTensor{2, Float32, 1}}, Bounded{2, Keypoints{2, 1}}}}, Tuple{ProjectiveTransforms{2, NamedTuple{(:training, :validation, :inference), Tuple{DataAugmentation.BufferedThreadsafe, DataAugmentation.BufferedThreadsafe, DataAugmentation.Sequence{Tuple{DataAugmentation.CroppedProjectiveTransform{DataAugmentation.ScaleKeepAspect{2}, DataAugmentation.PadDivisible}, DataAugmentation.PinOrigin}}}}}, ImagePreprocessing{FixedPointNumbers.N0f8, 3, ColorTypes.RGB{FixedPointNumbers.N0f8}, Float32}, KeypointPreprocessing{2, Float32}}}, Validation}}}}}, wd::Float64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ FastAI ~/.julia/packages/FastAI/sjHxr/src/training/onecycle.jl:32
 [15] fitonecycle!(learner::Learner, nepochs::Int64, maxlr::Float64) (repeats 2 times)
    @ FastAI ~/.julia/packages/FastAI/sjHxr/src/training/onecycle.jl:25
 [16] top-level scope
    @ REPL[41]:1
 [17] top-level scope
    @ ~/.julia/packages/CUDA/GGwVa/src/initialization.jl:52

MWE

using DelimitedFiles: readdlm
using FastAI
using FastAI.FilePathsBase, FastAI.StaticArrays
import FastAI.DataAugmentation

data = (
    mapobs(i->rand(Float32, 300, 300), 1:100),
    mapobs(i->[50, 50], 1:100)
)
traindata = datasubset(data, (1:70))
validdata = datasubset(data, (71:100))

sz = (224, 224)
task = SupervisedTask(
    (Image{2}(), Keypoints{2}(1)),
    (
        ImagePreprocessing(),
        KeypointPreprocessing(sz),
    )
)

model = taskmodel(task, Models.xresnet18());
traindl, validdl = FastAI.taskdataloaders(traindata, validdata, task, 16)

import Flux
learner = Learner(
    model,
    (traindl, validdl),
    Flux.ADAM(),
    tasklossfn(task),
    ToGPU())

fitonecycle!(learner, 1)

Problem in ResNet50 backbone of "Image segmentation" example

I suspect that the following code line from the Image segmentation example

backbone = Metalhead.ResNet50(pretrain=true).layers[1:end-3]

is not doing what is intended since ResNet50 from Metalhead (https://github.com/darsnack/Metalhead.jl/tree/darsnack/vision-refactor) returns a 2 item Chain (backbone and head), and the 1:end-3 indexing returns an empty Chain.

Funnily enough, with the model returned by methodmodel (basically a Conv((1,1), 3=>32)), the example still works and is able to produce some image segmentation (does it just work like a pixel color indexer?).

I'd say the expected code should be something like Metalhead.ResNet50(pretrain=true).layers[1].layers, and I would open a PR, but I'm not sure since, with that change, the example fails later in the training loop.
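
For reference, a quick way to check what the indexing returns (a sketch mirroring the calls from the example; only meaningful if the two-item Chain structure described above is correct):

using Metalhead

model = Metalhead.ResNet50(pretrain = true)
length(model.layers)               # 2: the convolutional backbone and the classifier head
model.layers[1:end-3]              # empty Chain, so no pretrained layers are reused
backbone = model.layers[1].layers  # the proposed fix: the backbone's layers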

Better visualization/interpretation API

The block plotting API can be improved. Right now, it is

  • focused on supervised tasks, and
  • specific to Makie.jl plotting

Based on blocks we should be able to create an interface that is backend-agnostic (i.e. plot with Makie.jl, show as text, or as image) and usable for different tasks.

Backend-agnostic interface

The backend-agnostic functions are prefixed with show-:

  • showsample(method, sample): Show an unencoded sample.
  • showprediction(method, input, targetpred): Show an unencoded input together with the target and the decoded prediction
  • showxy(method, x, y): Show an encoded sample, decoded until interpretable
  • showoutput(method, x, y, output): Show an encoded sample and a model output, decoded until interpretable

For every function, there is a version that shows multiples, i.e. a vector of unencoded data or a collated batch of encoded data:

  • showsamples, showpredictions, showxys/showbatch, showoutputs

The corresponding blocks (e.g. what makes up a sampleblock) can be derived from the learning method. The high-level functions are all lowered to the generic showblock(display, block, data). For example, showsample(method, sample) will lower to showblock(display, sampleblock, sample).

Here, display is a backend for showing. Possible backends are:

  • Makie.jl for plotting
  • Textual

Block types implement a method that indicates that they support a backend. A backend is then automatically chosen given all block types and considering

  • which backends the block types support: the backend must be supported by all blocks
  • what display options are available: e.g. in a text-only environment, we can't show Makie figures;
  • if there are multiple backends that fulfill these conditions, choose the one with highest fidelity, e.g. Makie > Text

Concrete examples

To give some context, let's take an image classification learning method as an example. Here we have sample = (image, label) and sampleblock = (Image{2}(), Label(classes)).

  • showsample(method, sample) hence calls showblock(display, (Image{2}(), Label(classes)), (image, label))

    • display here will be the Makie backend, as both Image and Label should be showable through Makie.
    • For Makie, by default, each block in a tuple of blocks will get its own axis, however some simplifications can be made, e.g. by overwriting showblock(::MakieBackend, ::Tuple{<:Image, <:Label}, _) we could implement a method where the label is put as a title instead of having its own axis. Same logic applies for a segmentation task where the mask could be drawn over an image.
    • In a text-only environment, the blocks could also be rendered as text, e.g. using ImageInTerminal.jl to render the image.
  • showxy calls showblockinterpretable(encodings, (ImageTensor{2}(3), OneHotTensor{0}(classes)), (x, y))

    • It applies decodings to the blocks until they are interpretable:
      • ImageTensor is not interpretable by itself and is decoded back into an Image
      • OneHotTensor could be decoded back into a Label and rendered as text, or a showblock could be defined for OneHotTensor{0} that visualizes it as a barplot of probabilities/logits.
    • After the decodings, showblockinterpretable then defers to showblock(display, (Image{2}(), OneHotTensor{0}(classes)), (imagedecoded, y))

Backends

A Makie backend that plots data and creates static figures akin to the current plot* functions

A text backend that draws to a terminal, using fancy printing packages like PrettyTables.jl, ImageInTerminal.jl and UnicodePlots.jl

An interactive Makie backend that augments the figure with interactive sliders, e.g. to show slices of a heatmap. This is probably out of scope for FastAI.jl itself but could live in an extension package.

Makie crashes precompilation

This looks like a Makie bug affecting FastAI.

I try to precompile FastAI, but it segfaults:

(FastAI) pkg> status
      Status `~/tmp/FastAI/Project.toml`
  [5d0beca9] FastAI v0.1.0

julia> using FastAI
[ Info: Precompiling FastAI [5d0beca9-ade8-49ae-ad0b-a3cf890e669f]
WARNING: Method definition Type##kw(Any, Type{Base.MPFR.BigFloat}, Base.Irrational{:log4π}) in module IrrationalConstants at irrationals.jl:180 overwritten in module LogExpFunctions on the same line (check for duplicate calls to `include`).
  ** incremental compilation may be fatally broken for this module **
...
signal (11): Segmentation fault
in expression starting at ~/.julia/packages/FastAI/4mXj2/src/FastAI.jl:11

The line that triggers the segfault: using Makie

I confirm the same error when using only Makie:

(FastAI) pkg> status
      Status `~/tmp/FastAI/Project.toml`
  [ee78f7c6] Makie v0.15.0

julia> using Makie

signal (11): Segmentation fault

Can anybody else reproduce with this version of Makie?

Thank you!

@opus111 I just wanted to leave a note to say thanks for working on this. Please let me know if I can help in any way. You might be interested in checking out SwiftAI, which uses a functional approach to creating a fastai-like library with little if any mutation:

https://github.com/fastai/swiftai

We have a discord here if you want to chat in real-time about anything:

https://discord.com/invite/xnpeRdg

If this project gets popular I'd be happy to create a forum category for it if that would be helpful.

Visualization functions are not working

It appears as if the visualization functions provided under interpretation/method.jl don't come into scope.

When following the keypoint regression tutorial and running: julia> showbatch(method, (xs, ys))
I get:

ERROR: UndefVarError: showbatch not defined
Stacktrace:
 [1] top-level scope
   @ REPL[27]:1

This happens for every show... function in method.jl. Using more explicit importing or namespace syntax doesn't seem to help.
I'm on Windows 10 and working in VS Code 1.62.3 (Julia extension v1.5.6). I'm currently on Julia v1.7.0, but the issue arose on 1.6.x as well. I also get the issue when working in a Pluto.jl notebook.

Julia Version 1.7.0
Commit 3bf9d17731 (2021-11-30 12:12 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: 11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, tigerlake)
Environment:
  JULIA_EDITOR = code
  JULIA_NUM_THREADS = 14
pkg> status
  [336ed68f] CSV v0.8.5
  [052768ef] CUDA v3.5.0
  [13f3f980] CairoMakie v0.6.6
  [5d0beca9] FastAI v0.2.0
  [916415d5] Images v0.24.1
  [ee78f7c6] Makie v0.15.3
  [90137ffa] StaticArrays v1.2.13
  [bd369af6] Tables v1.6.0

FasterAI: a roadmap to user-friendly, high-level interfaces

FasterAI is the working name for a new high-level interface for FastAI.jl with the goal of making it easier for beginner users to get started by

  • reducing the amount of code needed for common tasks,
  • making it harder to do something wrong,
  • aiding discoverability of functionality and content without overwhelming the user; and
  • providing user-friendly feedback when an error occurs

Motivation

The current documentation examples are decently short and do a good job of showcasing the mid-level APIs. However, they can be daunting for beginner users and could be made much shorter with only a few convenience interfaces.

using FastAI
path = datasetpath("imagenette2-160")
data = loadfolderdata(
    path,
    filterfn=isimagefile,
    loadfn=(loadfile, parentname))
classes = unique(eachobs(data[2]))
method = BlockMethod(
    (Image{2}(), Label(classes)),
    (
        ProjectiveTransforms((128, 128), augmentations=augs_projection()),
        ImagePreprocessing(),
        OneHot()
    )
)
learner = methodlearner(method, data, Models.xresnet18(), callbacks...)
fitonecycle!(learner, 10)
xs, ys = makebatch(method, data, 1:8)
ypreds = learner.model(xs)
plotpredictions(method, xs, ypreds, ys)

The logic can be reduced to the following operations, each of which constitutes a step in the basic workflow. A new, high-level interface (codenamed "FasterAI") should make each operation a one-liner.

  1. Dataset: downloading and loading a dataset
  2. Learning method: creating a learning method
  3. Learner: creating a Learner
  4. Training: training the Learner
  5. Visualization: visualizing results after training

The existing abstractions are well-suited for building high-level user-friendly interfaces on top. The above example, and all the learning methods in the docs, could then be written in 5 lines:

data, classes = ImageClfFolders(labelfn=parentname)(datasetpath("imagenette2-160"))
method = ImageClassification((128, 128), classes)
learner = methodlearner(method, data, Models.xresnet18(), callbacks...)
fitonecycle!(learner, 10)
plotpredictions(method, learner)

Importantly, a good deal of customization is retained, and every line can be replaced by the parts in the original example to offer full customizability without affecting the other lines. If you want to change the dataset you're using, only change the first line. If you want to use different training hyperparameters, change line 4 and so on...

It is important to note that there are no changes required to the existing APIs, so FasterAI will be easy to implement while not breaking existing functionality.

Ideas

Following are ideas for improving each of the above steps

Dataset

For some dataset formats like the basic image classification, the loadfolderdata helper already makes it possible to write one-liners for loading a dataset:

data = loadfolderdata(datasetpath("imagenette2-160"), filterfn=isimagefile, loadfn=(loadfile, parentname))

For others, this isn't always possible. Consider the multi-label classification and segmentation examples from the quickstart docs:

# Multi-label classification: images referenced in a CSV file with space-separated labels
df = loadfile(joinpath(path, "train.csv"))
data = (
    mapobs(f -> loadfile(joinpath(path, "train", f)), df.fname),  # images
    map(labelstr -> split(labelstr, ' '), df.labels),              # labels
)

# Segmentation: images and masks in parallel folders, class names in codes.txt
classes = readlines(open(joinpath(path, "codes.txt")))
data = (
    loadfolderdata(joinpath(path, "images"), filterfn=isimagefile, loadfn=loadfile),
    loadfolderdata(joinpath(path, "labels"), filterfn=isimagefile, loadfn=f -> loadmask(f, classes))
)

And even for the single-label classification case, you have to manually call unique(eachobs(data[2])). These steps could be intimidating for users unfamiliar with the data container API.

One solution for this would be to create dataset recipes that encapsulate the logic for loading a data container stored in a common format, along with its metadata. The recipes could still allow for some customization through arguments while keeping a consistent API for every dataset. A recipe is just a configuration object that can be called, for example on a path, returning a data container and metadata:

data, metadata = DatasetRecipe(args...; kwargs...)(path)

data, classes = ImageClfFolders(labelfn=parentname)(datasetpath("imagenette2-160"))
data, classes = SegmentationFolders(
    labelfile=p"codes.txt",
    imagefolder="images",
    labelfolder="labels")(datasetpath("camvid_tiny"))
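
To make the recipe idea concrete, the image-classification-from-folders case could be implemented roughly as below. This is a hedged sketch; the actual DatasetRecipe interface may differ, and the struct and field names are only illustrative:

# Hypothetical recipe implementation: a configuration struct plus a callable that turns
# a path into a data container and metadata (here, the class names).
struct ImageClfFolders
    labelfn    # maps a file path to its label, e.g. `parentname`
end
ImageClfFolders(; labelfn = parentname) = ImageClfFolders(labelfn)

function (recipe::ImageClfFolders)(path)
    data = loadfolderdata(
        path,
        filterfn = isimagefile,
        loadfn = (loadfile, recipe.labelfn))
    classes = unique(eachobs(data[2]))
    return data, classes
end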

Additionally, the recipe approach makes it easier to document the data loading process: each recipe's docstring describes the expected format and the configuration options. The recipes also make it possible to give user-friendly error messages when the file format differs from what is expected.

Looking further, this dataset recipe abstraction could also improve other parts of the workflow.

  • dataset selection: it would be possible to associate every dataset in the fastai dataset collection with one or more named recipes configured for their specific use case. These could be queried for a dataset, so you can find common ways a dataset is used and quickly load it. Note that one dataset needs to be able to have multiple recipes associated with it: for example, pascal_2007 is originally an object detection dataset but can also be used for multi-label image classification.
  • learning method selection: it should also be possible to find datasets and recipes that can be used for a specific learning method.
    This can be done by associating (partial) data block information with the recipes, and full data block information with the recipes for concrete fastai datasets. For example, the container returned by ImageClfFolders will always have blocks of types (Image{2}, Label). The recipes associated with concrete datasets like "dogscats" could optionally carry the full block information, i.e. (Image{2}(), Label(["dogs", "cats"])). This functionality is inspired by the MLJ model zoo, which provides similar features (though based on scientific types).

API examples (see also code above for recipe examples)

  • datasets(method): For a BlockMethod, list compatible datasets in the fastai datasets collection and the different recipes that are compatible for use with the learning method.
  • recipes(datasetname): Given a name of a dataset in the fastai dataset collection, list the different ways it can be loaded.

Learning method

The first thing to introduce here is a collection of function wrappers for creating common learning methods. As with the dataset recipes above, this allows documenting them, throwing helpful errors, and constraining the number of mistakes possible when adapting an existing example.

ImageClassificationSingle(sz::NTuple{N}, classes) where N = BlockMethod(
    (Image{N}(), Label(classes)),
    (ProjectiveTransforms(sz), ImagePreprocessing(), OneHot())
)

ImageClassificationMulti(sz::NTuple{N}, classes) where N = BlockMethod(
    (Image{N}(), LabelMulti(classes)),
    (ProjectiveTransforms(sz), ImagePreprocessing(), OneHot())
)

ImageSegmentation(sz::NTuple{N}, classes) where N = BlockMethod(
    (Image{N}(), Mask{N}(classes)),
    (ProjectiveTransforms(sz), ImagePreprocessing(), OneHot())
)

Additionally, there could be a function for discovering these learning methods given just block types:

julia> learningmethods(Tuple{Image, Mask})
    [ImageSegmentation,]
julia> learningmethods(Tuple{Image, Any})
    [ImageSegmentation, ImageClassificationSingle, ImageClassificationMulti]
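
A possible way to implement this discovery is a small registry mapping block types to the learning method constructors above; learningmethods would then filter it by subtyping. The registry below is a hedged sketch, not existing API:

# Hypothetical registry of learning method constructors and the block types they produce.
const METHOD_REGISTRY = [
    Tuple{Image, Label}      => ImageClassificationSingle,
    Tuple{Image, LabelMulti} => ImageClassificationMulti,
    Tuple{Image, Mask}       => ImageSegmentation,
]

# Return all registered constructors whose block types match the query
# (Julia tuples are covariant, so Tuple{Image, Mask} <: Tuple{Image, Any}).
learningmethods(query::Type) = [m for (blocks, m) in METHOD_REGISTRY if blocks <: query]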

Learner

methodlearner is already a good high-level function that takes care of many things.

Training

fitonecycle!, finetune! and lrfind are high-level enough one-liners for many training use cases. Maybe add fitflatcos! for fastai feature parity.

Visualization

plotpredictions already exists and compares predictions vs. targets for supervised tasks. For usability, plotpredictions, plotoutputs, and plotsamples need convenience functions that take a learner directly:

plotpredictions(method, learner)
plotoutputs(method, learner)
plotsamples(method, learner)
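
A rough idea of how such a convenience method could be composed from the existing pieces is shown below. This is a hedged sketch: how a data container is obtained from the Learner is an assumption here, as is the batch size handling.

# Hypothetical convenience wrapper: take a batch from the learner's validation data,
# run the model on it and defer to the existing plotpredictions method.
function plotpredictions(method, learner; n = 4)
    validdata = learner.data.validation    # assumption: validation data is reachable like this
    xs, ys = makebatch(method, validdata, 1:n)
    ypreds = learner.model(xs)
    return plotpredictions(method, xs, ypreds, ys)
end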

Dataset recipes

With #151, FastAI.jl is getting high-level interfaces for searching datasets (finddatasets) and loading datasets into task-specific data containers (loaddataset). There is also a new DatasetRecipe that encapsulates configuration for loading a data container and the block information from a path. These recipes can be registered with a dataset so that they can be found using the above high-level functions.

The fastai dataset collection comes with quite a lot of datasets, so only a few have recipes yet. This issue tracks the progress on adding recipes to all the datasets. Contributions of recipe types and recipe configs for datasets are welcome.

See src/datasets/recipes.jl for example recipe implementations and src/datasets/fastairegistry for how recipes are registered. listdatasources() gives you a list of all dataset sources and datasetpath(name) downloads them and returns the download folder.
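
For orientation, usage of these high-level functions could look roughly like this. The keyword names and return values are assumptions based on the description above, not a confirmed API:

julia> finddatasets(blocks=(Image, Label))   # datasets and recipes usable for image classification

julia> data, blocks = loaddataset("imagenette2-160", (Image, Label))   # download and load in one call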

Progress

Datasets that can be used for multiple tasks have those tasks listed below. Otherwise, a checked dataset means that at least one recipe is already implemented.

  • CUB_200_2011
  • bedroom (not sure how the folders are laid out)
  • caltech_101
  • cifar10
  • cifar100
  • food-101
  • imagenette-160
  • imagenette-320
  • imagenette
  • imagenette2-160
  • imagenette2-320
  • imagenette2
  • imagewang-160
  • imagewang-320
  • imagewang
  • imagewoof-160
  • imagewoof-320
  • imagewoof
  • imagewoof2-160
  • imagewoof2-320
  • imagewoof2
  • mnist_png
  • mnist_var_size_tiny
  • oxford-102-flowers
  • oxford-iiit-pet
  • stanford-cars
  • ag_news_csv
  • amazon_review_full_csv
  • amazon_review_polarity_csv
  • dbpedia_csv
  • giga-fren
  • imdb
  • sogou_news_csv
  • wikitext-103
  • wikitext-2
  • yahoo_answers_csv
  • yelp_review_full_csv
  • yelp_review_polarity_csv
  • biwi_head_pose
  • camvid
  • pascal-voc
  • pascal_2007
    • multi-label image classification ((Image{2}, LabelMulti))
    • object detection
  • pascal_2012
  • siim_small
  • skin-lesion
  • tcga-small
  • adult_sample
  • biwi_sample
  • camvid_tiny
  • dogscats
  • human_numbers
  • imdb_sample
  • mnist_sample
  • mnist_tiny
  • movie_lens_sample
  • planet_sample
  • planet_tiny
  • coco_sample
  • coco-train2017
  • coco-val2017
  • coco-test2017
  • coco-unlabeled2017
  • coco-image_info_test2017
  • coco-image_info_unlabeled2017
  • coco-annotations_trainval2017
  • coco-stuff_annotations_trainval2017
  • coco-panoptic_annotations_trainval2017

Drop DLPipelines.jl

There is little reason for this to be a separate package now; it is more likely that FastAI.jl will turn into a smaller core with domain-specific extensions, so the interfaces in DLPipelines.jl can be moved into core FastAI.jl. This also removes the transitive dependency on DataLoaders.jl once #196 is complete.

fastai parity

This issue tracks the progress on fastai parity.

Last updated 2021/08/11

Datasets

Data pipelines

  • create data pipelines from data block information (BlockMethod)
    • visualizations based on blocks
  • fast, parallelized data loading (DataLoaders.jl)
  • fast, composable affine augmentations for images, masks and keypoints (DataAugmentation.jl)
  • advanced data augmentation
    • MixUp
    • CutMix

Models

Training

Applications

  • computer vision
    • image classification
      • single-label
      • multi-label
    • image segmentation
    • image keypoint regression
  • tabular
    • classification
    • regression
  • recommender systems
  • natural language processing

Use JpegTurbo.jl to load .jpg images

JpegTurbo.jl can speed up image loading quite a bit, and previous ImageIO.jl versions had severe performance regressions on Julia 1.7, which should be fixed if JpegTurbo.jl is used.
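
For reference, decoding a file with JpegTurbo.jl directly is a one-liner (the path below is a placeholder):

using JpegTurbo
img = jpeg_decode("path/to/image.jpg")   # decodes the .jpg into an image matrix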
