
lcfs's Introduction

LCFS Storage Driver for Docker


tl;dr: Every time you build, pull or destroy a Docker container, you are using a storage driver. Current storage drivers like Device Mapper, AUFS, and Overlay2 implement container behavior using file systems designed to run a full OS. We are open-sourcing a file system that is purpose-built for the container lifecycle. We call this new file system Layer Cloning File System (LCFS). Because it is designed only for containers, it is up to 2.5x faster to build an image and up to almost 2x faster to pull an image. We're looking forward to working with the container community to improve and expand this new tool.

Overview

Layer Cloning FileSystem (LCFS) is a new filesystem purpose-built to be a Docker storage driver. All Docker images are constructed of layers using storage drivers (graph drivers) like AUFS, OverlayFS, and Device Mapper. As a design principle, LCFS treats layers as first-class citizens. The LCFS filesystem operates directly on top of block devices, as opposed to merging separate filesystems. LCFS thereby aims to manage storage directly at the layer level of container images, eliminate the overhead of a second filesystem that then has to be merged, and optimize for density.

LCFS will also support the snapshot driver interface being defined by containerd.

The future direction is to enhance LCFS with cluster-level operations, provide richer container statistics, and pave the way towards content integrity in container images.

  • cluster operations: where image pulls can be cooperatively satisfied by images across a group of servers instead of being isolated to a single server, as it is today.
  • statistics: answering what are the most popular layers, and so on.
  • content-integrity: ensuring container content has not been altered. See OCI scope.

The LCFS filesystem is an open source project, and we welcome feedback and collaboration. The driver is currently experimental.

Design Principles

Today, running containers on the same server is often limited by side effects that come from mapping container behavior over general filesystems. The approach impacts the entire lifecycle: building, launching, reading data, and exiting containers.

Historically, filesystems were built with the expectation that content is read/writeable. However, Docker images are constructed using many read-only layers and a single read-write layer. As more containers are launched using the same image (like fedora/apache), reading a file within a container requires traversing (up to) all of the other containers running that image.

The design principles are:

  • layers are managed directly: inherently understand layers, their different states, and be able to directly track and manage layers.
  • clone independence: create and run container images as independent entities, from an underlying filesystem perspective. Each new instantiation of the same Docker image is an independent clone, at the read-only layer.
  • containers in clusters: optimize for clustered operations (cooperative 'pull'), optimize for common data patterns (coalesce writes, data that is ephemeral), and avoid inheriting behavior that overlaps (when to use the graph-database).

Performance Goals and Architecture

An internal, filesystem-level measure of success is to make layer creation and management independent of the size of the image and the number of layers. An external measure of success is to make launching and terminating one hundred containers a constant-time operation.

Additional performance considerations:

  • page cache: non-filesystem storage drivers create multiple copies of the same image layers in memory. This leaves less host memory for containers. We should not.
  • inodes: some union filesystems create multiple inodes per file, leading to inode exhaustion in build scenarios and at scale. We should not.
  • space management: a lot can be done to improve garbage collection and space management, automatically removing orphaned layers. We should do this.

Measured Performance

The current experimental release of LCFS is compared against several of the top storage drivers. These tests were run against a local repository to remove network variability.

The table below shows how long it takes LCFS to complete some common Docker operations compared to other storage drivers, using an Ubuntu 14.04 system with a single SATA disk. Times are measured in seconds, and the number in parentheses shows how much longer (as a percentage) the comparison driver takes than LCFS.

Test                     LCFS        AUFS               Device Mapper      Overlay            Overlay2
docker pull gourao/fio   8.831s      10.413s (18%)      13.520s (53%)      11.301s (28%)      10.523s (19%)
docker pull mysql        13.359s     16.438s (23%)      24.998s (87%)      19.170s (43%)      16.252s (22%)
docker build             221.539s    572.677s (159%)    561.403s (153%)    549.851s (148%)    551.893s (149%)

Create / Destroy: The diagram below depicts the time to create and destroy 20, 40, 60, 80, and 100 fedora/apache containers. The image was pulled before the test. The cumulative times measured: LCFS at 44 seconds, Overlay at 237 seconds, Overlay2 at 246 seconds, AUFS at 285 seconds, Btrfs at 487 seconds, and Device Mapper at 556 seconds.

Build: The diagram below depicts the time to build the Docker sources using various storage drivers. The individual times measured: Device Mapper at 1512 seconds, Btrfs at 956 seconds, AUFS at 574 seconds, Overlay at 914 seconds, Overlay2 at 567 seconds, and LCFS at 437 seconds.

Architecture

The LCFS filesystem is user-level, written in C, POSIX-compliant, and integrated into Linux and macOS via FUSE. It does not require any kernel modifications, which makes it a portable filesystem.
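
To illustrate what a user-level FUSE filesystem looks like, below is a minimal, generic sketch against the libfuse 3 API. It is a hello-world style skeleton for illustration only, not LCFS code; the lc_demo_* names are hypothetical. LCFS registers the same kind of operation table, backed by its own layer-aware on-disk structures.

#define FUSE_USE_VERSION 31
#include <fuse.h>   /* build with something like: gcc demo.c -D_FILE_OFFSET_BITS=64 -I/usr/include/fuse3 -lfuse3 */
#include <string.h>
#include <errno.h>
#include <sys/stat.h>

/* Report a single empty root directory. */
static int lc_demo_getattr(const char *path, struct stat *st,
                           struct fuse_file_info *fi) {
    (void) fi;
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
        return 0;
    }
    return -ENOENT;
}

/* List the (empty) root directory. */
static int lc_demo_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                           off_t offset, struct fuse_file_info *fi,
                           enum fuse_readdir_flags flags) {
    (void) offset; (void) fi; (void) flags;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0, 0);
    filler(buf, "..", NULL, 0, 0);
    return 0;
}

static const struct fuse_operations lc_demo_ops = {
    .getattr = lc_demo_getattr,
    .readdir = lc_demo_readdir,
};

int main(int argc, char *argv[]) {
    /* fuse_main() parses the mount point from argv and runs the request loop
     * entirely in user space; no kernel module is needed beyond the stock
     * fuse driver, which is what makes this approach portable. */
    return fuse_main(argc, argv, &lc_demo_ops, NULL);
}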

To start to explain the layers-first design, let us compare launching three containers that use Fedora. In the following diagram, the left side shows a snapshot-based storage driver (like Device Mapper or Btrfs). Each container boots from its own read-write (rw) layer and read-only (init) layers. Good.

However, OS filesystems have historically been built to expect content to be read-writeable, often using snapshots of the first container's init layer to create the second and third. One side effect is that almost all operations from the third container must then traverse the lower init layers of the first two containers. This slows down nearly all file operations, including reading from a file, as more containers are launched.

LCFS vs Snapshot driver diagram

In the above diagram, the right side shows that LCFS also presents a unified view (mount) for three containers running the Fedora image. The design goal is to unchain how containers access their own content. First, launching the second container results in a new init clone (not a snapshot). Internally, access to the second container's (init) filesystem does not require tracking backward to an original (snapshot's) parent. The net effect is that read and modify operations from successive containers do not depend on prior containers.

Separately, LCFS itself is implemented as a single filesystem. It takes in (hardware) devices and puts one filesystem over those drives. For more on the LCFS architecture and future TODOs, please see:

  • layout: how LCFS formats devices, handles inodes, and handles internal data structures.
  • layers: how layers are created, locked, cloned/snapshotted and committed.
  • file operations: how LCFS supports file operations and file locking.
  • caching: how LCFS caches and accesses metadata (inodes, directories, etc) and future work.
  • space management: how LCFS handles allocation, tracking, placement, and I/O coalescing.
  • crash consistency: how LCFS deals with abnormal shutdowns (crashes etc.)
  • cli: Commands for various operations.
  • LCFS Design: LCFS design highlights.

Installing LCFS

You can install LCFS by executing the script below (assuming your storage device is /dev/sdb):

# curl -fsSL http://lcfs.portworx.com/lcfs-setup.sh | sudo DEV=/dev/sdb bash

For a manual installation, first install the LCFS filesystem and then the LCFS v2 graph driver plugin; detailed instructions are described here.

Licensing

The Layer Cloning Filesystem (LCFS) is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.

Contributing

Want to collaborate and contribute? Here are instructions to get started contributing code.

lcfs's People

Contributors

adityadani, disrani-px, erickhan, ferrantim, fred-love, gourao, jjobi, jrivera-px, kerneltime, lucj, michael-px, michaelferranti, robhaswell, rodrigc, stealthybox, wilkins-meister


lcfs's Issues

Make lcfs v2 plugin smaller

As it stands today, around 74 MB is downloaded from Docker Hub over the network while installing the plugin.
This is saved as a temporary file in LCFS. After it is extracted to the plugin directory, it runs inside a CentOS 7.2.1511 container.
That container itself creates around 8,000 files (inodes) in LCFS and around 300 MB of data in LCFS.

Explore using Debian golang as a base image.

All of this is done just to run a binary of about 10 MB. I am wondering whether this could be made smaller if the plugin were converted to C.

PS: The overhead is a lot smaller with the V1 plugin model.

Error while installing on Ubuntu 18.04 Virtualbox VM

Hi, I created an Ubuntu 18.04 VM on VirtualBox, attached a virtual hard disk to it, and then tried to install lcfs on that disk. Here is the error I am getting:

root@pensu-VirtualBox:~# curl -fsSL http://lcfs.portworx.com/lcfs-setup.sh | sudo DEV=/dev/sdb bash
Unable to find image 'centos:latest' locally
latest: Pulling from library/centos
aeb7866da422: Pull complete 
Digest: sha256:67dad89757a55bfdfabec8abd0e22f8c7c12a1856514726470228063ed86593b
Status: Downloaded newer image for centos:latest
Preparing...                          ########################################
Updating / installing...
lcfs-0.5.0-0                          ########################################
Connecting to device, please wait...
Note: LCFS device file exists. Using existing device file /dev/sdb without modifying.
lcfs[2187]: Build: 3e2810a7e7150a231e03d9e75c565c330484cc13 Release: 0.5.0-3e2810a
lcfs[2187]: Maximum memory allowed for data pages 512 MB
lcfs[2187]: Formatting /dev/sdb, size 16106127360
lcfs[2187]: /dev/sdb mounted at /var/lib/docker
lcfs[2187]: /dev/sdb mounted at /lcfs
lcfs: linux.h:43: lc_lockOwned: Assertion `(lock == ((void *)0)) || lock->__data.__writer || (!exclusive && lock->__data.__nr_readers)' failed.

And it hangs there forever; any idea why this could be happening? TIA. @jjobi

Reserve space for free extent map

When LCFS is unmounted, it writes the free-space extent map to disk. Make sure space for that is available by reserving some space ahead of time. Applications will start getting ENOSPC when the file system is almost 90% full, but we also need to make sure the file system can shut down properly.
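
A minimal sketch of the kind of reservation described above; the constant and function names here are hypothetical, not taken from the LCFS source.

#include <errno.h>
#include <stdint.h>

#define LC_EXTMAP_RESERVE 1024          /* blocks held back for the free-extent map (illustrative value) */

/* Application allocations fail early with ENOSPC, while internal callers
 * (such as the unmount path writing out the extent map) may dip into the
 * reserved blocks. Error handling and locking are omitted for brevity. */
static int lc_allocBlocks(uint64_t *freeBlocks, uint64_t count, int internal) {
    uint64_t reserve = internal ? 0 : LC_EXTMAP_RESERVE;

    if (*freeBlocks < count + reserve)
        return -ENOSPC;
    *freeBlocks -= count;
    return 0;
}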

Implement worker queue for LCFS

As of now, almost every task in LCFS is performed by FUSE threads, with the exception of two global threads that purge page caches and flush dirty pages. With worker queue support, we could offload many tasks that can run concurrently and/or in the background without holding up FUSE threads.
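
One possible shape for such a worker queue, sketched with plain pthreads; all names are illustrative, not the actual LCFS code. FUSE request threads would call lc_wqSubmit() and return to servicing requests, while a small pool of threads created with pthread_create() runs lc_wqWorker().

#include <pthread.h>
#include <stdlib.h>

struct lc_task {
    void (*fn)(void *arg);          /* work to perform */
    void *arg;
    struct lc_task *next;
};

struct lc_workqueue {
    pthread_mutex_t lock;
    pthread_cond_t  wake;
    struct lc_task *head, *tail;
};

/* Called from a FUSE thread: hand off work and return immediately. */
static void lc_wqSubmit(struct lc_workqueue *wq, void (*fn)(void *), void *arg) {
    struct lc_task *task = malloc(sizeof(*task));

    if (task == NULL)
        return;                     /* error handling omitted in this sketch */
    task->fn = fn;
    task->arg = arg;
    task->next = NULL;
    pthread_mutex_lock(&wq->lock);
    if (wq->tail)
        wq->tail->next = task;
    else
        wq->head = task;
    wq->tail = task;
    pthread_cond_signal(&wq->wake);
    pthread_mutex_unlock(&wq->lock);
}

/* Body of each worker thread: drain the queue in the background. */
static void *lc_wqWorker(void *data) {
    struct lc_workqueue *wq = data;
    struct lc_task *task;

    for (;;) {
        pthread_mutex_lock(&wq->lock);
        while (wq->head == NULL)
            pthread_cond_wait(&wq->wake, &wq->lock);
        task = wq->head;
        wq->head = task->next;
        if (wq->head == NULL)
            wq->tail = NULL;
        pthread_mutex_unlock(&wq->lock);
        task->fn(task->arg);
        free(task);
    }
    return NULL;
}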

Replace NaiveDiffDriver

Currently, NaiveDiffDriver is used for finding changes made in a layer compared to its parent. Implement a better scheme similar to those used by union file systems.

Group free extents into buckets based on size rather than a single list

As of now, free-space extents are kept in a single list, which can make finding free space in the file system inefficient. It would be better to group them into buckets so that space allocation can quickly find free extents large enough to satisfy a request.
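
An illustrative sketch of the bucketing idea using power-of-two size classes; the structure and function names are hypothetical, not from the LCFS code.

#include <stdint.h>
#include <stddef.h>

#define LC_EXTENT_BUCKETS 32

struct lc_extent {
    uint64_t start;                 /* first free block */
    uint64_t count;                 /* number of free blocks */
    struct lc_extent *next;
};

/* Free extents grouped by size class instead of one long list. */
static struct lc_extent *lc_freeBuckets[LC_EXTENT_BUCKETS];

/* Size class of an extent length: floor(log2(count)). */
static int lc_extentBucket(uint64_t count) {
    int bucket = 0;

    while ((count >>= 1) && (bucket < LC_EXTENT_BUCKETS - 1))
        bucket++;
    return bucket;
}

/* Start in the size class of the request and fall back to larger classes,
 * so allocation no longer scans every free extent in the file system.
 * The count check is still needed because the first class can hold extents
 * slightly smaller than the request. */
static struct lc_extent *lc_findFreeExtent(uint64_t count) {
    for (int b = lc_extentBucket(count); b < LC_EXTENT_BUCKETS; b++) {
        for (struct lc_extent *e = lc_freeBuckets[b]; e; e = e->next) {
            if (e->count >= count)
                return e;
        }
    }
    return NULL;
}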

Block cache sizing

Layers in an image, and all containers running on top of them, share a common block page cache. Size it based on the amount of data stored across all of them.

LCFS needs to be tuned for local volumes

Docker creates local volumes in the root layer, and LCFS is not optimized for those. The current assumption is that everything in the root layer is Docker metadata for managing image and container layers, but volumes may add large amounts of data to it.

docker commit speed with lcfs

Ciao!

How are you?

I often build 6 containers at once - about 15 hours with overlay2.
Each built container is about 50 gigabytes. That is OK.
But when I commit them all at once (in parallel), none of them has committed even after 20 hours. It looks like it is either blocked or just very slow.

Is lcfs faster with docker commit? Because it is very bad for me (on HDD, not SSD, because the images are huge - for LEDE and OpenWrt).

Thanks,
Patrik

Provide a tool for cleaning up orphaned layers

The following sequence of operations happen while creating/removing an image layer.

  • Creating an image layer:
  1. Create a layer, populate with data.
  2. Update a bunch of files in /var/lib/docker about the new image (the last one is repositories.json).
  • Removing an image layer:
  1. Update a bunch of files in /var/lib/docker (which also replaces repositories.json with new data).
  2. Remove the layer corresponding to the image being deleted.

If the system crashes after step 1 of either of the above operations, there will be an orphaned image layer in the graphdriver. A utility is needed to clean up such orphaned layers.

See #14 on how Docker keeps track of layers.

Can't compile (or run) benchmarks

I am having the following problem:

kir@kd:~$ go get github.com/portworx/lcfs
package github.com/portworx/lcfs: no buildable Go source files in /home/kir/go/src/github.com/portworx/lcfs
kir@kd:~$ cd go/src/github.com/portworx/lcfs/testing
kir@kd:~/go/src/github.com/portworx/lcfs/testing$ go get ./...
# cd .; git clone https://github.com/portworx/px-test /home/kir/go/src/github.com/portworx/px-test
Cloning into '/home/kir/go/src/github.com/portworx/px-test'...
fatal: could not read Username for 'https://github.com': terminal prompts disabled
package github.com/portworx/px-test/graph: exit status 128

So, is there a way to reproduce the benchmarks you refer to in the README?

ARM builds?

How feasible would it be to build an ARM bundle? I would really like to try this on a Pi to see if it brings any improvements.

Stuck at yum install some package for lcfs storage driver

Hi,
When I use lcfs as the Docker storage driver, installing some packages with yum gets stuck. When I switch to devicemapper, start the same image, and install the same package, it installs successfully. Why? I am not sure whether lcfs is usable and feasible.

# yum install unzip
Loaded plugins: fastestmirror, ovl
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Resolving Dependencies
--> Running transaction check
---> Package unzip.x86_64 0:6.0-19.el7 will be installed
# lsb_release
LSB Version:	:core-4.1-amd64:core-4.1-noarch
Distributor ID:	CentOS
Description:	CentOS Linux release 7.4.1708 (Core) 
Release:	7.4.1708
Codename:	Core

# docker info
Containers: 5
 Running: 1
 Paused: 0
 Stopped: 4
Images: 5
Server Version: 18.06.1-ce
Storage Driver: portworx/lcfs
 Build Version: 1.0
 Library Version: 1.0
Logging Driver: json-file
Cgroup Driver: systemd

Replace sequential lists with better data structures (hash lists, B-trees etc)

Currently, many things in LCFS are tracked using sequential lists (layers, files in small directories, dirty pages of a small file, the extent map of a fragmented file, the allocated extent map of layers, the free extent map of the file system, extended attributes, hardlinks in a layer, etc.). Switch these to better data structures for better performance.
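
As one example of the suggested replacements, a simple hash list for inode lookup might look like the sketch below; the structure and function names are illustrative only, not the actual LCFS data structures.

#include <stdint.h>
#include <stddef.h>

#define LC_ICACHE_SIZE 1024

struct lc_inode {
    uint64_t ino;                   /* inode number */
    struct lc_inode *hnext;         /* next inode in the same hash chain */
    /* ... inode payload ... */
};

/* Per-layer inode hash table instead of one sequential list. */
static struct lc_inode *lc_icache[LC_ICACHE_SIZE];

static inline uint32_t lc_inodeHash(uint64_t ino) {
    return (uint32_t)(ino % LC_ICACHE_SIZE);
}

/* Expected O(1) lookup: walk only the chain for this hash bucket. */
static struct lc_inode *lc_lookupInode(uint64_t ino) {
    for (struct lc_inode *inode = lc_icache[lc_inodeHash(ino)];
         inode; inode = inode->hnext) {
        if (inode->ino == ino)
            return inode;
    }
    return NULL;
}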

lcfs crash

I ran the latest LCFS (git commit 99b157d) and hit this assertion while running "make" on the Docker daemon sources:

# ./lcfs daemon /dev/sda /var/lib/docker /lcfs -f -m -r -t -p -v
.....
lcfs: fs.c:308: lc_removeLayers: Assertion `zfs->fs_super->sb_flags & LC_SUPER_INIT' failed.
Aborted (core dumped)

This is what I saw on the client:

Step 42/43 : ENTRYPOINT hack/dind
 ---> Running in 137aeada4f4d
 ---> 98b9d7df394f
Removing intermediate container 137aeada4f4d
Step 43/43 : COPY . /go/src/github.com/docker/docker
 ---> ba3aea4d5aa7
Successfully built ba3aea4d5aa7
open /var/lib/docker/image/portworx/lcfs/.tmp-repositories.json026490387: no such file or directory
Makefile:113: recipe for target 'build' failed
make: *** [build] Error 1

and here are dockerd logs:

INFO[9097] Layer sha256:d173fd590788d5c19e591594e8d6f28524d41626bbfb4d717228dc50db6b5674 cleaned up 
INFO[9104] time="2017-08-25T14:58:09Z" level=error msg="err software caused connection abort\n"   plugin=de91e35b7c8c14d0bb3190c519dd10eb0d5a5b88b4e78f91f635d4367c52f5b0
ERRO[9104] Error removing mounted layer 3ff6814c3cb6dc6e75732aa9738d29fd62bd7cc182b07f9050c5905a6a4d73a5: GraphDriver.Remove: software caused connection abort 
ERRO[9104] Failed to release RWLayer: GraphDriver.Remove: software caused connection abort 
ERRO[9104] failed to unmount previous build image sha256:2baf993621acd4dc745c97ff3127609dbf68ec73c9fedb8278fa029763d177f1: GraphDriver.Remove: software caused connection abort 
INFO[9104] time="2017-08-25T14:58:09Z" level=error msg="err transport endpoint is not connected\n"   plugin=de91e35b7c8c14d0bb3190c519dd10eb0d5a5b88b4e78f91f635d4367c52f5b0
ERRO[9104] Failed to unmount RWLayer: GraphDriver.Put: transport endpoint is not connected 
ERRO[9104] failed to unmount previous build image sha256:e1573ce8644fc2781917e5edf824b154f1492476f202b0f35340dc7119ad743f: GraphDriver.Put: transport endpoint is not connected 
INFO[9104] time="2017-08-25T14:58:09Z" level=error msg="err transport endpoint is not connected\n"   plugin=de91e35b7c8c14d0bb3190c519dd10eb0d5a5b88b4e78f91f635d4367c52f5b0
ERRO[9104] Failed to unmount RWLayer: GraphDriver.Put: transport endpoint is not connected 
ERRO[9104] failed to unmount previous build image : GraphDriver.Put: transport endpoint is not connected 
INFO[9104] time="2017-08-25T14:58:09Z" level=error msg="err transport endpoint is not connected\n"   plugin=de91e35b7c8c14d0bb3190c519dd10eb0d5a5b88b4e78f91f635d4367c52f5b0
ERRO[9104] Failed to unmount RWLayer: GraphDriver.Put: transport endpoint is not connected 
ERRO[9104] failed to unmount previous build image sha256:bdf6818dde703755f0a724e9a72579529fa66cdde79b1a5a9037184894dd6108: GraphDriver.Put: transport endpoint is not connected 
INFO[9104] time="2017-08-25T14:58:09Z" level=error msg="err transport endpoint is not connected\n"   plugin=de91e35b7c8c14d0bb3190c519dd10eb0d5a5b88b4e78f91f635d4367c52f5b0
ERRO[9104] Failed to unmount RWLayer: GraphDriver.Put: transport endpoint is not connected 
ERRO[9104] failed to unmount previous build image sha256:98b9d7df394f644af4a772e74a710f196aacd92806756881f797b2a98b52cb2e: GraphDriver.Put: transport endpoint is not connected 

Please let me know if there's anything else I can provide.

start lcfs with a storage size cap option

When lcfs is started, it should take in a size option and not exceed that capacity limit. Image layers that are not in use should be automatically purged (perhaps using some sort of garbage collection scheme).

Provide a utility to collect logs from LCFS

LCFS displays its logs on stdout. There is no way to collect those logs when LCFS is run in daemon mode. Provide a method to collect LCFS logs on demand.

Provide crash consistency

As of now, if the system or lcfs crashes, the whole file system is wiped (this does not happen when lcfs is cleanly unmounted).
Implement a scheme so that the file system stays intact even across abnormal shutdowns. A checkpointing scheme may be preferred over a journaling scheme.

Docker resets to AUFS after a reboot

After I reboot my system (Ubuntu), Docker goes back to using AUFS as the driver. Consequently, I lose my previous images.

LCFS needs to be retained across boots.

How to point device for "lcfs daemon"

Hi, I built lcfs following the manual installation instructions. Starting lcfs requires pointing it at a device such as /dev/sdb. I point it at a device, but get an error:

# lcfs daemon /dev/vdb /var/lib/docker /search/odin/lcfs
open: Device or resource busy
lcfs[19268]: Failed to open /dev/vdb

When I switch to a file instead, I get this error:

# lcfs daemon /search/odin/xuezhiyou/lcfs.dev /var/lib/docker /search/odin/lcfs
lcfs[25702]: Device is too small. Minimum size required is 40MB
# fdisk -l

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00099a84

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    83886079    41942016   83  Linux

Disk /dev/vdb: 859.0 GB, 858993459200 bytes, 1677721600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x1ab19eeb

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb2            2048      411647      204800   83  Linux

What should I do?

Provide a utility for enabling/disabling various stats

LCFS tracks many stats (file system operations, memory usage, types of files, etc.). It also supports CPU profiling with gperftools. All of these are enabled/disabled at build time. It would be good to provide a utility that can enable/disable them selectively at run time.

Issues on installing lcfs on ubuntu 18.04

Hello,

I tried to install using the bash script and I get the following error. RUNNING IT AS ROOT

curl -fsSL http://lcfs.portworx.com/lcfs-setup.sh | DEV=/dev/sdc bash
Unable to find image 'centos:latest' locally
latest: Pulling from library/centos
aeb7866da422: Pull complete
Digest: sha256:67dad89757a55bfdfabec8abd0e22f8c7c12a1856514726470228063ed86593b
Status: Downloaded newer image for centos:latest
Preparing...                          ########################################
Updating / installing...
lcfs-0.5.0-0                          ########################################
[ ok ] Stopping docker (via systemctl): docker.service.
Connecting to device, please wait...
Note: LCFS device file exists. Using existing device file /dev/sdc without modifying.
lcfs[12772]: Build: 3e2810a7e7150a231e03d9e75c565c330484cc13 Release: 0.5.0-3e2810a
lcfs[12772]: Maximum memory allowed for data pages 802 MB
lcfs[12772]: Formatting /dev/sdc, size 16106127360
lcfs[12772]: /dev/sdc mounted at /var/lib/docker
lcfs[12772]: /dev/sdc mounted at /lcfs
Error: failed to start docker. Unable to get the full path to root (/var/lib/docker): failed to canonicalise path for /var/lib/docker: lstat /var/lib/docker: transport endpoint is not connected

I also tried installing it manually, but I'm getting errors there as well. Under "Build the lcfs file system", while executing make, I get the following:

root@docker-runner-110:/mnt/lcfs/lcfs# make
gcc -g -msse4.2 -Wall -D_FILE_OFFSET_BITS=64 -I/usr/include/fuse3 -I/usr/local/include/fuse3 -c -o cli.o cli.c
In file included from /usr/include/x86_64-linux-gnu/urcu/wfcqueue.h:28:0,
                 from /usr/include/x86_64-linux-gnu/urcu-call-rcu.h:35,
                 from /usr/include/x86_64-linux-gnu/urcu.h:133,
                 from includes.h:34,
                 from cli.c:1:
linux.h: In function ‘lc_lockOwned’:
linux.h:42:43: error: ‘struct __pthread_rwlock_arch_t’ has no member named ‘__writer’; did you mean ‘__writers’?
     assert((lock == NULL) || lock->__data.__writer ||
linux.h:43:40: error: ‘struct __pthread_rwlock_arch_t’ has no member named ‘__nr_readers’; did you mean ‘__readers’?
                (!exclusive && lock->__data.__nr_readers));
linux.h:42:43: error: ‘struct __pthread_rwlock_arch_t’ has no member named ‘__writer’; did you mean ‘__writers’?
     assert((lock == NULL) || lock->__data.__writer ||
linux.h:43:40: error: ‘struct __pthread_rwlock_arch_t’ has no member named ‘__nr_readers’; did you mean ‘__readers’?
                (!exclusive && lock->__data.__nr_readers));
<builtin>: recipe for target 'cli.o' failed
make: *** [cli.o] Error 1

I'm using gcc 7.6.

Global LRU for block cache

There is no global LRU maintained for the block cache, and that could turn out to be a problem for certain workloads. Pages are purged from the block cache when usage is above the limit, based on a count tracking the number of hits on each page, but that does not account for recently inserted pages. Figure out a better scheme, possibly without adding more global locking.
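
For illustration, a global LRU could be a doubly linked list protected by a single lock, as in the hypothetical sketch below (none of these names are from the LCFS source). Note that the single lc_lruLock is exactly the kind of extra global locking the issue wants to avoid, so a real design might shard the list or use an approximate, clock-style scheme instead.

#include <pthread.h>
#include <stddef.h>

struct lc_page {
    struct lc_page *lprev, *lnext;  /* position in the global LRU list */
    /* ... block number, hit count, data ... */
};

static struct lc_page *lc_lruHead, *lc_lruTail;
static pthread_mutex_t lc_lruLock = PTHREAD_MUTEX_INITIALIZER;

/* Move a page to the head on every hit, so recently inserted or recently
 * used pages are protected from immediate eviction. */
static void lc_lruTouch(struct lc_page *page) {
    pthread_mutex_lock(&lc_lruLock);
    if (page != lc_lruHead) {
        /* unlink from the current position (lprev/lnext are NULL for new pages) */
        if (page->lprev) page->lprev->lnext = page->lnext;
        if (page->lnext) page->lnext->lprev = page->lprev;
        if (page == lc_lruTail) lc_lruTail = page->lprev;
        /* relink at the head */
        page->lprev = NULL;
        page->lnext = lc_lruHead;
        if (lc_lruHead) lc_lruHead->lprev = page;
        lc_lruHead = page;
        if (lc_lruTail == NULL) lc_lruTail = page;
    }
    pthread_mutex_unlock(&lc_lruLock);
}

/* Eviction under memory pressure removes the least recently used page
 * from the tail instead of relying only on per-page hit counts. */
static struct lc_page *lc_lruEvict(void) {
    pthread_mutex_lock(&lc_lruLock);
    struct lc_page *victim = lc_lruTail;
    if (victim) {
        lc_lruTail = victim->lprev;
        if (lc_lruTail) lc_lruTail->lnext = NULL;
        else lc_lruHead = NULL;
        victim->lprev = victim->lnext = NULL;
    }
    pthread_mutex_unlock(&lc_lruLock);
    return victim;
}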

Provide a method to start Docker for Mac with LCFS

Docker runs in a Linux VM on the Mac. This VM can be accessed by running the following command:

"screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty"

It should be possible to install LCFS in that VM and start Docker with LCFS as the graphdriver. Provide a method to automate this for the end user.

Provide a method to disable syncer which takes checkpoints

The syncer takes checkpoints of LCFS frequently to keep it crash consistent. This may trigger I/O and short intervals during which LCFS is inaccessible. If somebody does not need LCFS to be crash consistent, they should be able to disable the syncer completely.

Inode cache sizing

Each layer maintains a private cache of inodes. Size this based on the size of the dataset (i.e., the number of inodes in the layer).
