image-creator's People

Contributors: benoit74, rgaudin
image-creator's Issues

Prevent download bomb

The standard use case of image-creator is to be fed arbitrary links to content that will be stored in the target image.
We don't care what those files contain, but given that the frontend service will not validate them either, we should protect ourselves (or, more precisely, our running host) against easy filesystem-size attacks.

It would be easy for an attacker to serve a file from a lying server that sends a reasonable Content-Length but enormous amounts of data when fetched.

Our downloader should stop/halt once the expected size has been reached.
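A minimal sketch of that guard, assuming the downloader consumes the HTTP body as an iterator of chunks (the function and exception names are hypothetical):

```python
class SizeExceededError(Exception):
    """Raised when a stream delivers more bytes than announced."""


def copy_with_limit(chunks, dest, expected_size):
    """Write chunks to dest, aborting as soon as expected_size is exceeded.

    The check happens before each write, so a lying server can inflate
    the target by at most one chunk's worth of disk space.
    """
    written = 0
    for chunk in chunks:
        written += len(chunk)
        if written > expected_size:
            raise SizeExceededError(
                f"server sent more than the announced {expected_size} bytes"
            )
        dest.write(chunk)
    return written
```

Checking against the announced Content-Length (rather than trusting it) is what defeats the attack: the header is only used as an upper bound, never as a promise.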

Add keep-latest-versions for OCI Images

OCI Images (as well as their filters) should allow a keep_latest_versions: entry that is an int defaulting to 0.
This sets how many different versions of a matching image are kept, based on the tag.
For a single image (filenames end in :{version}), versions are compared naturally and only the N highest ones are kept.

To keep a single copy:

oci_images:
  keep_latest_versions: 1

This is somewhat similar to the max_num option we already have but doesn't require adding an entry for each image.
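A sketch of the per-image pruning described above, assuming tags are "name:version" strings and versions are compared naturally (numeric parts as numbers, so 1.10 > 1.9); the function name is an assumption:

```python
import re
from collections import defaultdict


def keep_latest(tags, keep_latest_versions):
    """Return the tags to keep: the N naturally-highest versions per image."""

    def natural_key(version):
        # split "1.10" into [1, 10] so numeric parts compare as numbers
        return [int(p) if p.isdigit() else p for p in re.split(r"[.\-]", version)]

    by_image = defaultdict(list)
    for tag in tags:
        name, _, version = tag.rpartition(":")
        by_image[name].append(version)

    kept = []
    for name, versions in by_image.items():
        versions.sort(key=natural_key, reverse=True)
        kept.extend(f"{name}:{v}" for v in versions[:keep_latest_versions])
    return kept
```

With keep_latest_versions: 1, only the single highest version of each matching image survives in the cache.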

Add support for shrink

The YAML should allow setting a shrink option that would reduce the produced image to the minimum required size.

(How to) Download container images

It seems we agreed that we want to download container images in the form of (compressed?) tarballs.

These tarballs would then, at first RPI start (first usage?):

  • be loaded into the local container image repository
  • be run to create a container

This approach follows the docker save and docker load workflow.

But a few clarifications are still needed IMO to answer the following questions:

  • How to download a tarball if the container runtime engine (docker) does not run in image-creator? Do we/Should we have a script which can deal with the online image repository?
  • How should we ensure that the cache works (see #1) and stays in sync with upstream online image repositories?
  • Considering that compressing these container image tarballs would be beneficial (less data to download, less data to write to the SD image/card), it seems we might need to precompute these tarballs and store them somewhere... but how & where precisely?

Investigate very slow downloads from worker to Kiwix

Worker downloads at very slow speed (up to 50Mbps only) while running Ookla's speedtest at the same time shows that the server can handle 700Mbps (at this time) and the client (in docker) can handle close to 1Gbps.

Progress reporting might be an issue

Prevent expansion bomb

The standard use case of image-creator is to be fed arbitrary links to content that will be stored in the target image.
We don't care what those files contain, but given that the frontend service will not validate them either, we should protect ourselves (or, more precisely, our running host) against easy filesystem-size attacks.

It would be easy for an attacker to provide an archive and lie about its expanded size (as it is provided and not computed), resulting in enormous amounts of data being extracted.

Our archive expander should stop/halt once the expected size has been reached.

See #15
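A sketch of a capped extractor for tar archives, checking each member's size from the archive index before writing anything (names are assumptions):

```python
import tarfile


class ExpansionExceededError(Exception):
    """Raised when an archive expands beyond its declared size."""


def safe_extract(archive_path, dest_dir, expected_size):
    """Extract a tar archive, aborting once cumulative member sizes
    exceed expected_size.

    The check runs before each member is written, so a lying declared
    size cannot cause more than expected_size bytes to hit the disk.
    """
    extracted = 0
    with tarfile.open(archive_path) as tar:
        for member in tar:
            extracted += member.size
            if extracted > expected_size:
                raise ExpansionExceededError(
                    f"archive expands beyond the declared {expected_size} bytes"
                )
            tar.extract(member, dest_dir)
```

Note that for compressed archives the member sizes describe the decompressed data, which is exactly the figure the attacker lies about, so this cap must be enforced during extraction rather than by inspecting the download.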

Implement Basic program structure

image-creator creates images by running a number of tasks in order, via a State Machine or anything procedural.

  • parse inputs
  • download base image
  • resize image
  • attach image
  • resize p3
  • resize p3 fs
  • mount p3
  • download into p3
    • container images
    • content files [optionally extracting]
  • unmount p3
  • mount p1
  • write offspot.yaml
  • unmount p1
  • detach loop-device
  • shrink image?
  • compress?

This ticket would implement stub tasks for those.
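A minimal sketch of that skeleton: the ordered task names with a runner that executes a stub for each, so real implementations can be filled in one by one (all names are assumptions):

```python
# Ordered task names mirroring the list above; each is a stub for now.
TASKS = [
    "parse_inputs", "download_base_image", "resize_image", "attach_image",
    "resize_p3", "resize_p3_fs", "mount_p3", "download_into_p3",
    "unmount_p3", "mount_p1", "write_offspot_yaml", "unmount_p1",
    "detach_loop_device", "shrink_image", "compress_image",
]


def run_pipeline(handlers, context):
    """Run each named task in order, stopping on the first exception.

    handlers maps task names to callables taking the shared context;
    missing tasks fall back to a no-op stub.
    """
    completed = []
    for name in TASKS:
        handler = handlers.get(name, lambda ctx: None)  # stub by default
        handler(context)
        completed.append(name)
    return completed
```

Because the runner only knows task names, a later state-machine refactor (retries, skipping, resuming) can replace the plain loop without touching the task implementations.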

Implement Downloader

Most of the image-creator's work consists of downloading files:

  • base-image
  • container images
  • content to be placed on the card

The need is basic for now: given a URL, write it to a specified location on disk, reporting progress.
It should also be somewhat resilient to network hiccups.
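A sketch of the hiccup-resilience part, with the network access injected as a callable so the retry logic is independent of the HTTP library chosen (names and the restart-from-scratch behaviour are assumptions; a real version would resume with a Range request):

```python
import time


def download_with_retries(fetch, dest, attempts=3, delay=1.0, on_progress=None):
    """Write the chunks produced by fetch() to dest, reporting progress.

    On a network error (OSError) the transfer is restarted from scratch,
    up to `attempts` times, sleeping `delay` seconds between tries.
    """
    for attempt in range(1, attempts + 1):
        try:
            written = 0
            dest.seek(0)
            dest.truncate()  # restart: drop any partial data
            for chunk in fetch():
                dest.write(chunk)
                written += len(chunk)
                if on_progress:
                    on_progress(written)
            return written
        except OSError:
            if attempt == attempts:
                raise
            time.sleep(delay)
```

Reporting progress per chunk (rather than per file) is what lets the caller distinguish a slow transfer from a stalled one.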

Better Cache Eviction Process

On each run, the cache manager walks the cache to evict outdated entries, checking that the remote digest is still the same as the one on record.

Although it's a light operation (mostly a HEAD request), it may take a significant time as the cache grows (since we want it to be very large).
OCI Images involve a heavier/longer process, but cached images are kept to a reasonable number.
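The walk described above can be sketched as a pure function, with the HEAD request abstracted behind a digest-fetching callable (names and the entry layout are assumptions):

```python
def evict_outdated(entries, fetch_digest):
    """Return only the cache entries whose remote digest still matches
    the digest on record.

    entries maps URL -> {"digest": ...}; fetch_digest is a wrapper
    around a HEAD request returning the current remote digest.
    """
    kept = {}
    for url, entry in entries.items():
        if fetch_digest(url) == entry["digest"]:
            kept[url] = entry
    return kept
```

Because this issues one request per entry, its runtime grows linearly with the cache, which is exactly why a smarter eviction process (or the check-after throttle below) becomes attractive as the cache grows.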

Unable to use two versions of same OCI Image

Due to OCI Image files being saved as /data/images/ghcr.io_offspot_file-manager.tar for instance, it's impossible to create an image with both version 1.0 and version 1.1: one of them will overwrite the other.
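One possible fix, sketched here as an assumption, is to include the tag in the on-disk name so each version gets a distinct file:

```python
def oci_cache_filename(ref):
    """Map an OCI reference to a unique on-disk name, keeping the tag
    so two versions of the same image no longer collide."""
    # a ":" only counts as a tag separator in the last path segment
    # (it could otherwise be a registry port, e.g. host:5000/img)
    if ":" in ref.rsplit("/", 1)[-1]:
        name, _, tag = ref.rpartition(":")
    else:
        name, tag = ref, "latest"
    return f"{name.replace('/', '_')}_{tag}.tar"
```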

Add check-after

Add to the cache policy a new check-after param for each entry: a number of seconds (or a parse-able timespan) controlling how much time should pass between attempts to check an entry for staleness.

This will be useful for entries under our control, or those we know won't change, to avoid checking on each run when the chances of change are close to zero.
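A sketch of the decision the eviction walk would make per entry, assuming each entry records when it was last verified (field names are assumptions):

```python
import time


def needs_check(entry, now=None):
    """Return True when an entry's freshness should be re-verified.

    entry carries `last_checked_on` (epoch seconds) and an optional
    `check_after` interval in seconds; 0 means check on every run,
    preserving today's behaviour as the default.
    """
    now = now if now is not None else time.time()
    check_after = entry.get("check_after", 0)
    return now - entry.get("last_checked_on", 0) >= check_after
```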

Specify the card/project configuration format & workflow

To configure the image creation process, a few things will have to be given. In a nutshell:

  • Meta information like for example: timezone, UI language, admin credentials, ...
  • Content: container images, containers, application configs & data (input+output)

It seems that JSON is the format which should be used to store this. So far we could reuse/rebase on:

The question of the naming has to be clarified as well:

  • Kiwix tends to use "card configuration"
  • OLIP tends to use "project configuration"

To provide such a configuration to the image creator we could use:

  • a fs path OR
  • a URL

Considering that this input will be generated by other software, and that the content part will/should probably be reused in other parts of the toolchain, IMO the reader/writer should be generic and provided as a library.

Remark: obviously, things will evolve while image-creator is being developed, but we'd still better have a pretty good agreement on the basics before we start.
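The "fs path OR URL" input could be handled by a single loader, sketched here under the assumption that the configuration is JSON (the function name is hypothetical):

```python
import json
from pathlib import Path
from urllib.parse import urlparse


def load_config(source):
    """Load the card/project configuration from a fs path or a URL."""
    parsed = urlparse(str(source))
    if parsed.scheme in ("http", "https"):
        from urllib.request import urlopen
        with urlopen(str(source)) as resp:
            return json.load(resp)
    # anything else is treated as a filesystem path
    return json.loads(Path(source).read_text())
```

Keeping this loader in a shared library, as suggested above, lets the frontend that writes the configuration and the creator that reads it agree on a single implementation.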

Cache directory should be configurable

Currently it is not: the cache directory sits in the (image) output directory. This brings problems:

  • It is not configurable, and there are various scenarios where this is not wanted
  • It is "hidden", i.e. if users do not look closely at the fs, they can miss the fact that there is a cache (just had a user report about that)
  • The way the installer behaves is implicit

IMO the current behaviour is an appropriate default. But it should be visible/configurable in the UI/cmd. I would propose to put it below the output directory file picker: once the output directory has been set, configure the cache directory automatically (if not already configured manually).

This ticket is a follow-up to offspot/kiwix-hotspot#623.

Add keep-latest-zim-versions for files

Similar to #24 except this applies to files, and the version is computed based on _YYYY-MM.{suffix} and compared alphabetically.
Its main usage would be to keep only one ZIM version (or a different number) in cache, but it can of course be used with any URL matching the period pattern.
Naming it ZIM makes it clear IMO that it matches our ZIM version pattern.
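A sketch of that pruning, exploiting the fact that YYYY-MM periods sort correctly as plain text (the regex and names are assumptions on the pattern described above):

```python
import re
from collections import defaultdict

# matches the ZIM-style period suffix, e.g. "wikipedia_en_all_2023-06.zim"
PERIOD_RE = re.compile(r"^(?P<stem>.+)_(?P<period>\d{4}-\d{2})(?P<suffix>\.\w+)$")


def keep_latest_periods(filenames, keep):
    """Keep only the `keep` most recent period-versioned files per stem.

    Files not matching the period pattern pass through untouched.
    """
    groups = defaultdict(list)
    passthrough = []
    for name in filenames:
        m = PERIOD_RE.match(name)
        if m:
            groups[(m["stem"], m["suffix"])].append(name)
        else:
            passthrough.append(name)
    kept = list(passthrough)
    for names in groups.values():
        # YYYY-MM compares correctly as text, so a plain sort suffices
        kept.extend(sorted(names, reverse=True)[:keep])
    return kept
```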

Implement image file manipulation

Besides downloading #9, manipulating the image is key. It's a collection of operations that all work together toward a single goal: open, fill and close an image.

  • resize existing image file (base-image). This is done via qemu-img
  • attach file image as loop-device
  • resize partition on the loop-device
  • resize partition filesystem of a device
  • mount a loop-device partition
  • unmount a loop-device partition
  • detach a loop-device
  • shrink an image file (qemu-img)

Implement download cache on disk

The creator needs a download cache on disk because:

  • On-prem users should not have to redownload everything if they recreate an image (from the same configuration, for example)
  • SaaS runs should be as quick as possible. Without a cache, even with high-speed connections, downloading 500GB of data can take a significant time.

That said, it should not be mandatory and its location should be configurable, see #11.

Kiwix-hotspot has a very simple cache which already provides this feature. It could be reused, but I wonder whether it would not be a better idea to have a more sophisticated solution allowing more things, like for example:

  • Multiple evictions strategies
  • LRU cache

Implement Progress Reporting

image-creator being a machine tool to be used by other systems, as well as being task-oriented and sensitive to external failures (downloads), it should be able to report its status (where it is in the state machine), what it's doing and overall progress in a machine-readable format.

As a crucial SPOF, it should also allow post-mortem investigations.

Progress report and History, although regularly mixed for simplicity, respond to different needs and constraints:

  • History is to be used by humans, so text format is preferred
  • Progress Report is to be used by machines so JSON is preferred
  • Textual Progress Report should also be possible

Options for it include stdout/stderr output, output to a file or socket, or a file or TCP socket to read or query status.

Let's keep in mind that it will mostly be used from within a container (I suppose).

At this moment, my proposal would be:

  • Textual history + progress to stdout.
  • JSON live progress to a specified file.

stdout is very easy to integrate and work with; it's how such tools are usually consumed, so history can easily be read/fetched/archived.
Progress would be dynamic, using a CR character so that it doesn't pollute history (not visible in docker logs).

Using a specified file for machine-readable progress has several advantages:

  • easy to consume
  • passive, synchronous. no need to query or wait for answer
  • it's atomic: you just read the whole file instead of wondering where you picked up the stream or how much you should read from the socket.
  • Previous point is super useful when parsing JSON
  • can be set to a RAMfs (/dev/shm for instance) so you're not hitting IOs
  • easy to bind and reuse when using containers

It's important that long-lasting tasks report both to the history and periodically on the progress to avoid the common ambiguity of the last running task: is it still running or is the process stalled?
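The "atomic file" property above has to be enforced by the writer; a sketch of the write-temp-then-rename pattern that guarantees readers never see a half-written JSON document (names and payload fields are assumptions):

```python
import json
import os
import tempfile


def write_progress(path, step, percent, detail=""):
    """Atomically replace the JSON progress file.

    Writing to a temp file in the same directory and renaming it over
    the target means a concurrent reader always sees either the old or
    the new document, never a truncated one.
    """
    payload = {"step": step, "percent": percent, "detail": detail}
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as fh:
            json.dump(payload, fh)
        os.replace(tmp, path)  # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp)
        raise
```

The same-directory requirement matters: os.replace is only atomic within one filesystem, which also keeps the /dev/shm idea workable by placing both files there.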
