
action-docker-layer-caching's Introduction

Docker Layer Caching in GitHub Actions

Enable Docker layer caching by adding a single line to your GitHub Actions workflow. This GitHub Action speeds up building Docker images in your workflow.

You can run `docker build` and `docker-compose build` in your GitHub Actions workflow and use the cache with no special configuration; multi-stage builds are also supported.

This GitHub Action uses the `docker save` / `docker load` commands and the @actions/cache library.

⚠️ Deprecation Notice for v0.0.4 and older ⚠️

The author had not anticipated a large number of layers being cached, so those versions process all layers in parallel. (#12)
Please update to v0.0.5 or later, which limits concurrency, to avoid overloading the cache service.

Example workflows

Docker Compose

name: CI

on: push

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2

    # Pull the latest image to build, and avoid caching pull-only images.
    # (docker pull is faster than caching in most cases.)
    - run: docker-compose pull

    # In this step, the action records the images that already exist;
    # the post-run cache is created without them.
    # It also restores the cache if one exists.
    - uses: satackey/action-docker-layer-caching@v0.0.11
      # Ignore the failure of a step and avoid terminating the job.
      continue-on-error: true

    - run: docker-compose up --build

    # Finally, "Post Run satackey/action-docker-layer-caching@v0.0.11",
    # which is the process of saving the cache, will be executed.

docker build

name: CI

on: push

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2

    # In this step, the action records the images that already exist;
    # the post-run cache is created without them.
    # It also restores the cache if one exists.
    - uses: satackey/action-docker-layer-caching@v0.0.11
      # Ignore the failure of a step and avoid terminating the job.
      continue-on-error: true

    - name: Build the Docker image
      run: docker build . --file Dockerfile --tag my-image-name:$(date +%s)

    # Finally, "Post Run satackey/action-docker-layer-caching@v0.0.11",
    # which is the process of saving the cache, will be executed.

Inputs

See action.yml for details.

By default, the cache is separated by the workflow name. You can also set the cache key manually, like the official actions/cache action.

    - uses: satackey/action-docker-layer-caching@v0.0.11
      # Ignore the failure of a step and avoid terminating the job.
      continue-on-error: true
      with:
        key: foo-docker-cache-{hash}
        restore-keys: |
          foo-docker-cache-

Note: You must include {hash} in the key input. ({hash} is replaced by the hash value of the docker image when the action is executed.)

action-docker-layer-caching's People

Contributors

btkostner, dependabot[bot], rcowsill, satackey, woa7


action-docker-layer-caching's Issues

Avoid caching images that are retrieved with `pull`

Is your feature request related to a problem? Please describe.
The Post Run satackey/action-docker-layer-caching step is quite slow for our builds because it uploads a bunch of images that we already use pull for.

Describe the solution you'd like
Since we pull them anyway, it would be good not to cache them at all. Is there a way to cache only the images that are not retrieved through pull?

Describe alternatives you've considered
The alternative at the moment is to not use caching at all, because the cache upload takes longer than the original build normally does.
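The README's Docker Compose example already hints at a partial workaround: images pulled before the caching action runs are recorded as pre-existing and excluded from the cache created in the post step. A minimal sketch (the version tag is illustrative):

```yaml
steps:
  - uses: actions/checkout@v2

  # Pull registry images first, so the action treats them as pre-existing
  # and leaves them out of the cache it saves in the post step.
  - run: docker-compose pull

  - uses: satackey/action-docker-layer-caching@v0.0.11
    continue-on-error: true

  - run: docker-compose up --build
```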

Cannot set concurrency

Describe the bug
According to action.yml, we can set the concurrency. However, setting this parameter appears to have no effect.

concurrency:
description: The number of concurrency when restoring and saving layers
required: true
default: '4'

To Reproduce
Steps to reproduce the behavior:

      - uses: satackey/action-docker-layer-caching@v0.0.11
        with:
          concurrency: 2

Expected behavior
concurrency: 2 should appear in the build output.

Debug logs

Run satackey/action-docker-layer-caching@v0.0.11
with:
key: docker-layer-caching-CI-{hash}
restore-keys: docker-layer-caching-CI-
concurrency: 4
skip-save: false

Runner Environment (please complete the following information):

  • OS: ubuntu-latest
  • Action version: v0.0.11

Increases pipeline times instead of decreasing it

Describe the bug
Prior to adding layer caching build took ~2 minutes and 45 seconds.

After adding it, the build time takes 2 minutes and 23 seconds.

However, the added step itself (satackey/action-docker-layer-caching) takes 2 minutes.

So all in all, this increases the overall build time, instead of decreasing it. Any ideas why?
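If the cache upload dominates the run time, one lever visible in the action's inputs (the debug logs elsewhere on this page show a `skip-save: false` default) is to restore the cache on every run but only save it where it pays off. A sketch, assuming `skip-save` accepts a string boolean and the default branch is `master`; the version tag is illustrative:

```yaml
- uses: satackey/action-docker-layer-caching@v0.0.11
  continue-on-error: true
  with:
    # Restore on every run, but only re-upload layers on pushes to master.
    skip-save: ${{ github.ref != 'refs/heads/master' }}
```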

Workflow fails because `action-docker-layer-caching` fails with `GitHub Actions services are currently unavailable`

Describe the bug

I am using this action as described in the documentation to cache Docker layers and speed up building multiple Docker images for Node.js applications.
However, from time to time (and more often these days), I see errors in either the layer-downloading step or the post step saying GitHub Actions services are currently unavailable, which make the workflow fail.

To Reproduce

Run the workflow with the action satackey/action-docker-layer-caching@v0.0.4

My workflow is as follows:

on:
  push:
    branches:
      - master
    paths:
      - packages/**
name: Kubernetes Application
jobs:
  docker-build:
    name: Docker Images
    runs-on: ubuntu-latest
    strategy:
      matrix:
        module: [myimage1, myimage2, myimage3, myimage4, myimage5]
    steps:
      - uses: actions/checkout@v2
      - name: Env Setup
        run: |
          echo ::set-env name=GIT_TAG::$(echo "`cat packages/${{ matrix.module }}/package.json | jq -r '.version'`-${GITHUB_SHA::8}")
      - name: Docker Layer Caching
        uses: satackey/action-docker-layer-caching@v0.0.4
        with:
          key: ${{ matrix.module }}-docker-cache-{hash}
          restore-keys: |
            ${{ matrix.module }}-docker-cache-
      - uses: docker/build-push-action@v1
        with:
          registry: ${{ secrets.DOCKER_REGISTRY_URL }}
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
          repository: ${{ matrix.module }}
          dockerfile: packages/${{ matrix.module }}/Dockerfile
          build_args: VERSION=${{ env.GIT_TAG }}
          add_git_labels: true
          tag_with_ref: true
          tags: ${{ env.GIT_TAG }}

Expected behavior

Workflow does not fail even if some layers cannot be downloaded or saved to the cache.

Debug logs

Error seen in downloading step

Error: getCacheEntry failed: GitHub Actions services are currently unavailable. Try again later. Activity Id: 6b09e32f-887a-4e27-98c5-c6060c2c1c26
    at /home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.4/dist/ttsc-dist/main.js/index.js:45495:15
    at Generator.throw (<anonymous>)
    at rejected (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.4/dist/ttsc-dist/main.js/index.js:45426:65)
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
##[error]Error: getCacheEntry failed: GitHub Actions services are currently unavailable. Try again later. Activity Id: 6b09e32f-887a-4e27-98c5-c6060c2c1c26

Errors seen in post step:

Start storing layer cache: layer-myimage1-docker-cache-9fed80be61e73bca64cb95f027a1da7d2c5d68ed5932436465b16fb13d36ceea
##[error]Unexpected error: Error: reserveCache failed: GitHub Actions services are currently unavailable. Try again later. Activity Id: 6b095012-887a-4e27-98c5-c6060c2c1c26
Error: reserveCache failed: GitHub Actions services are currently unavailable. Try again later. Activity Id: 6b095012-887a-4e27-98c5-c6060c2c1c26
    at /home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.4/dist/ttsc-dist/post.js/index.js:45520:15
    at Generator.throw (<anonymous>)
    at rejected (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.4/dist/ttsc-dist/post.js/index.js:45451:65)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
##[error]Error: reserveCache failed: GitHub Actions services are currently unavailable. Try again later. Activity Id: 6b095012-887a-4e27-98c5-c6060c2c1c26
##[error]Unexpected error: Error: reserveCache failed: GitHub Actions services are currently unavailable. Try again later. Activity Id: 6b095010-887a-4e27-98c5-c6060c2c1c26
##[error]Unexpected error: Error: reserveCache failed: GitHub Actions services are currently unavailable. Try again later. Activity Id: 49140aa7-150c-41fb-b468-c22336268aab

Runner Environment (please complete the following information):

  • OS: ubuntu-latest
  • Action version: v0.0.4

Additional context
Add any other context about the problem here.

Restored images are treated as new images

Describe the bug
Images restored when loading the cache are detected as new images in the 'post' step.

This forces the cache to be saved again, even if there were no new images created between the 'main' and 'post' step. The root cache key changes too, so a new root cache gets uploaded. All the layers fail to upload as they are already cached.

Steps to reproduce the behavior (in a workflow which builds a docker image and uses action-docker-layer-caching):

  1. Set new key/restore-keys on the workflow's action-docker-layer-caching (to make a new cache)
  2. Run the workflow
  3. Check that the build succeeded and saved a new root cache and layers
  4. Run the workflow again without changing anything
  5. Check that the docker build pulled everything from cache
  6. Observe that the 'post' step for the cache action saved and reuploaded the same images that it restored earlier

Expected behavior
If no new images were created the 'post' step exits early with the message: "There is no image to save."

Debug logs
See this workflow run: https://github.com/rcowsill/NodeGoat/runs/1501640716. The Use cache (docker layers) step loads the cache, and in the Prepare images step the cached images are all used. No new images are created.

Despite that, the Post Use cache (docker layers) step saves and uploads the images that were restored from the previously existing cache.

Runner Environment (please complete the following information):

  • OS: ubuntu-18.04.5
  • Action version: v0.0.8

Fails with docker images pulled by sha

Docker provides a way for referring to images by their SHA hash instead of tags, like this:

$ docker pull pierrezemb/gostatic@sha256:e28d48e17840c5104b5133c30851ac45903b1d2f268d108c4cd0884802c9c87e
$ docker images|grep gostatic
pierrezemb/gostatic                                                              <none>                         4569615e9ed0        2 weeks ago
      7.72MB

Notice the <none> part.

Now, apparently this causes the docker layer caching action to run the equivalent of the following under the hood:

$ docker history -q 'pierrezemb/gostatic:<none>'
Error response from daemon: invalid reference format

In the logs of Github build I see:

docker history -q pierrezemb/gostatic:<none>

Error: The process '/usr/bin/docker' failed with exit code 1
    at ExecState._setResult (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.5/dist/ttsc-dist/post.js/index.js:1300:25)
    at ExecState.CheckComplete (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.5/dist/ttsc-dist/post.js/index.js:1283:18)
    at ChildProcess.<anonymous> (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.5/dist/ttsc-dist/post.js/index.js:1183:27)
    at ChildProcess.emit (events.js:210:5)
    at maybeClose (internal/child_process.js:1021:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:283:5)
##[error]Error: The process '/usr/bin/docker' failed with exit code 1
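A possible (untested) workaround: give the digest-pulled image a normal `repository:tag` reference so it no longer appears as `<none>` when the action enumerates images. The tag name below is arbitrary:

```yaml
- run: |
    docker pull pierrezemb/gostatic@sha256:e28d48e17840c5104b5133c30851ac45903b1d2f268d108c4cd0884802c9c87e
    # Re-tag the digest reference so the image has a valid repository:tag
    docker tag pierrezemb/gostatic@sha256:e28d48e17840c5104b5133c30851ac45903b1d2f268d108c4cd0884802c9c87e gostatic:pinned
```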

Feature Request: Alternative, larger cache disk space

My workflow fails due to an out-of-disk-space error during the Post Run step. I am wondering whether there is something simple I can do, like pointing to another place where I can store and retrieve the cached layers.

I wish I could use your product, but because we build 3 large images it seems that we run out of disk space. Thank you.

Output here:

System.IO.IOException: No space left on device
   at System.IO.FileStream.WriteNative(ReadOnlySpan`1 source)
   at System.IO.FileStream.FlushWriteBuffer()
   at System.IO.FileStream.Flush(Boolean flushToDisk)
   at System.IO.StreamWriter.Flush(Boolean flushStream, Boolean flushEncoder)
   at System.Diagnostics.TextWriterTraceListener.Flush()
   at GitHub.Runner.Common.HostTraceListener.WriteHeader(String source, TraceEventType eventType, Int32 id)
   at GitHub.Runner.Common.HostTraceListener.TraceEvent(TraceEventCache eventCache, String source, TraceEventType eventType, Int32 id, String message)
   at System.Diagnostics.TraceSource.TraceEvent(TraceEventType eventType, Int32 id, String message)
   at GitHub.Runner.Worker.Worker.RunAsync(String pipeIn, String pipeOut)
   at GitHub.Runner.Worker.Program.MainAsync(IHostContext context, String[] args)
System.IO.IOException: No space left on device
   at System.IO.FileStream.WriteNative(ReadOnlySpan`1 source)
   at System.IO.FileStream.FlushWriteBuffer()
   at System.IO.FileStream.Flush(Boolean flushToDisk)
   at System.IO.StreamWriter.Flush(Boolean flushStream, Boolean flushEncoder)
   at System.Diagnostics.TextWriterTraceListener.Flush()
   at GitHub.Runner.Common.HostTraceListener.WriteHeader(String source, TraceEventType eventType, Int32 id)
   at GitHub.Runner.Common.HostTraceListener.TraceEvent(TraceEventCache eventCache, String source, TraceEventType eventType, Int32 id, String message)
   at System.Diagnostics.TraceSource.TraceEvent(TraceEventType eventType, Int32 id, String message)
   at GitHub.Runner.Common.Tracing.Error(Exception exception)
   at GitHub.Runner.Worker.Program.MainAsync(IHostContext context, String[] args)
Unhandled exception. System.IO.IOException: No space left on device
   at System.IO.FileStream.WriteNative(ReadOnlySpan`1 source)
   at System.IO.FileStream.FlushWriteBuffer()
   at System.IO.FileStream.Flush(Boolean flushToDisk)
   at System.IO.StreamWriter.Flush(Boolean flushStream, Boolean flushEncoder)
   at System.Diagnostics.TextWriterTraceListener.Flush()
   at System.Diagnostics.TraceSource.Flush()
   at GitHub.Runner.Common.TraceManager.Dispose(Boolean disposing)
   at GitHub.Runner.Common.TraceManager.Dispose()
   at GitHub.Runner.Common.HostContext.Dispose(Boolean disposing)
   at GitHub.Runner.Common.HostContext.Dispose()
   at GitHub.Runner.Worker.Program.Main(String[] args)
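GitHub-hosted runners have limited free disk, and the post step writes every saved image out as tarballs, so large builds can run out of space. One mitigation to try, sketched here (whether it helps depends on which images the action decides to save), is to prune unneeded images before the job ends:

```yaml
- name: Build images
  run: docker-compose build

# Delete dangling images so the post-run save step has less to write to disk.
- name: Prune images before cache save
  run: docker image prune --force
```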

Caching across runs

Describe the bug

Using the following configuration, the cache works great across jobs in the same run (even with the default values of key and restore-keys), but when the workflow is re-run, the cache is never found:

  job1:
    name: job1
    runs-on: ubuntu-20.04

    steps:
    - uses: actions/checkout@v2

    - uses: satackey/action-docker-layer-caching
      with:
        key: images-docker-cached-${{ github.workflow }}
        restore-keys: |
          images-docker-cached-

    - name: Build images
      run: |
        cd docker-compose/
        docker-compose -f docker-compose-images.yaml build

Output:

Root cache could not be found. aborting.

Don't know if it's related with: #49

The post step works fine and again puts all the layers in the cache (which can be used by the next job in the same run), but they are never found in the next run.

Expected behavior
The cache can be used across different runs.

Runner Environment:

  • OS: Ubuntu 20.04.2

Am I doing something wrong?

Thanks
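Note that the README states the `key` input must include `{hash}`; the configuration above omits it, which may be why restores never match on later runs. A sketch of the documented form (the version tag is illustrative):

```yaml
- uses: satackey/action-docker-layer-caching@v0.0.11
  with:
    key: images-docker-cached-${{ github.workflow }}-{hash}
    restore-keys: |
      images-docker-cached-${{ github.workflow }}-
      images-docker-cached-
```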

Can't use in a workflow with a comma in its name

Describe the bug
You can't use this action in a workflow whose name contains a comma.

Debug logs

Run satackey/action-docker-layer-caching@v0.0.8
  with:
    key: docker-layer-caching-Build, Test & Deploy-{hash}
    restore-keys: docker-layer-caching-Build, Test & Deploy-
    concurrency: 4
    skip-save: false
  env:
    DOCKER_IMAGE_NAME: ghcr.io/fredagscafeen/web
ValidationError: Key Validation Error: docker-layer-caching-Build, Test & Deploy-{hash} cannot contain commas.
    at checkKey (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.8/dist/ttsc-dist/main.js/index.js:41761:15)
    at Object.<anonymous> (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.8/dist/ttsc-dist/main.js/index.js:41784:13)
Error: ValidationError: Key Validation Error: docker-layer-caching-Build, Test & Deploy-{hash} cannot contain commas.
    at Generator.next (<anonymous>)
    at /home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.8/dist/ttsc-dist/main.js/index.js:41718:71
    at new Promise (<anonymous>)
    at module.exports.692.__awaiter (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.8/dist/ttsc-dist/main.js/index.js:41714:12)
    at Object.restoreCache (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.8/dist/ttsc-dist/main.js/index.js:41774:12)
    at LayerCache.restoreRoot (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.8/dist/ttsc-dist/main.js/index.js:33245:45)
    at LayerCache.restore (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.8/dist/ttsc-dist/main.js/index.js:33227:45)
    at main (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.8/dist/ttsc-dist/main.js/index.js:44116:42) {
  name: 'ValidationError'
}

Runner Environment (please complete the following information):

  • OS: ubuntu-latest
  • Action version: v0.0.8
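Until the action sanitizes the key it derives from the workflow name, a workaround is to set the `key` input explicitly so the comma-containing name is never used. A sketch (key name arbitrary, version tag illustrative):

```yaml
- uses: satackey/action-docker-layer-caching@v0.0.8
  continue-on-error: true
  with:
    # An explicit key avoids the default, which embeds the workflow name
    # ("Build, Test & Deploy") and its forbidden comma.
    key: build-test-deploy-docker-cache-{hash}
    restore-keys: |
      build-test-deploy-docker-cache-
```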

Can't reproduce build

In order to use this action, I need to verify that it was indeed built from the published source code.
However, when I try to build it with ncc, the output is different.
How can I reproduce the build?

ENOENT: no such file or directory, stat 'image-layers/xxxxxxx/layer.tar'

Hello,

First, thank you for the great action that you have created. We have been facing a strange error lately on version 0.0.4 and the same error after upgrading to 0.0.8. This action is running on linux (ubuntu-latest).

Here is the issue that we are facing; I was wondering whether it rings a bell for you?


The issue can be seen with this build: https://github.com/criteo/JVips/runs/1441932624

Can you please help me find out what I can check further to solve that error? In the meantime I set the continue-on-error flag in https://github.com/clems4ever/JVips/commit/59a3bd7737df4dd2b261af2124009a370be3ee1c but I'd like to have your point of view on this error anyway.

Thank you in advance.

Does it remove discarded layers from cache? [Question]

Does the whole cache get hit at once, so automatic removal won't work with it, or how does it discard image layers that have been replaced on rebuild?
Will it just fill up all of the remaining cache space and then get deleted altogether?

The tar command fails on a Windows runner

Describe the bug
The tar command fails on a Windows runner.

To Reproduce
Steps to reproduce the behavior:

  1. Go to https://github.com/exivity/rabbitmq-windows-docker/runs/966529913
  2. Click on Post Run satackey/action-docker-layer-caching@v0.0.5
  3. Expand the step starting with sh -c docker save [...]
  4. See error

Expected behavior
The Post Run to succeed

Debug logs

sh -c docker save '1ea7049bf5c6' '1ea7049bf5c6' '598c98f7b656' '912539758c76' 'dcbc260c6b09' 'a3cb6ed51c1b' 'e3de158569a9' '399de6e28bb0' '8809fb27bbb2' '4df9dd644716' '148c3b76ab54' '2b250bc046c1' 'e5e57296b9ca' '1ff1fd5e1fa2' '866b450c8b64' '68d6ccbac8d9' '7ca015404fc3' '318b2224022d' '5b43130792e4' '6c12dc0a117c' '987b1d5e0abf' 'exivity/rabbitmq:3.8.6' '1ea7049bf5c6' '598c98f7b656' '912539758c76' 'dcbc260c6b09' 'a3cb6ed51c1b' 'e3de158569a9' '399de6e28bb0' '8809fb27bbb2' '4df9dd644716' '148c3b76ab54' '2b250bc046c1' 'e5e57296b9ca' '1ff1fd5e1fa2' '866b450c8b64' '68d6ccbac8d9' '7ca015404fc3' '318b2224022d' '5b43130792e4' '6c12dc0a117c' '987b1d5e0abf' | tar xf - -C d:\a\rabbitmq-windows-docker\rabbitmq-windows-docker\.action-docker-layer-caching-docker_images\image
  "C:\Program Files\Git\bin\sh.exe" -c "docker save '1ea7049bf5c6' '1ea7049bf5c6' '598c98f7b656' '912539758c76' 'dcbc260c6b09' 'a3cb6ed51c1b' 'e3de158569a9' '399de6e28bb0' '8809fb27bbb2' '4df9dd644716' '148c3b76ab54' '2b250bc046c1' 'e5e57296b9ca' '1ff1fd5e1fa2' '866b450c8b64' '68d6ccbac8d9' '7ca015404fc3' '318b2224022d' '5b43130792e4' '6c12dc0a117c' '987b1d5e0abf' 'exivity/rabbitmq:3.8.6' '1ea7049bf5c6' '598c98f7b656' '912539758c76' 'dcbc260c6b09' 'a3cb6ed51c1b' 'e3de158569a9' '399de6e28bb0' '8809fb27bbb2' '4df9dd644716' '148c3b76ab54' '2b250bc046c1' 'e5e57296b9ca' '1ff1fd5e1fa2' '866b450c8b64' '68d6ccbac8d9' '7ca015404fc3' '318b2224022d' '5b43130792e4' '6c12dc0a117c' '987b1d5e0abf' | tar xf - -C d:\a\rabbitmq-windows-docker\rabbitmq-windows-docker\.action-docker-layer-caching-docker_images\image"
  tar: d\:arabbitmq-windows-dockerrabbitmq-windows-docker.action-docker-layer-caching-docker_imagesimage: Cannot open: No such file or directory
  tar: Error is not recoverable: exiting now
  write /dev/stdout: The pipe is being closed.
  Error: The process 'C:\Program Files\Git\bin\sh.exe' failed with exit code 2
      at ExecState._setResult (d:\a\_actions\satackey\action-docker-layer-caching\v0.0.5\dist\ttsc-dist\post.js\index.js:1300:25)
      at ExecState.CheckComplete (d:\a\_actions\satackey\action-docker-layer-caching\v0.0.5\dist\ttsc-dist\post.js\index.js:1283:18)
      at ChildProcess.<anonymous> (d:\a\_actions\satackey\action-docker-layer-caching\v0.0.5\dist\ttsc-dist\post.js\index.js:1183:27)
      at ChildProcess.emit (events.js:210:5)
      at maybeClose (internal/child_process.js:1021:16)
      at Socket.<anonymous> (internal/child_process.js:430:11)
      at Socket.emit (events.js:210:5)
      at Pipe.<anonymous> (net.js:659:12)
  ##[error]Error: The process 'C:\Program Files\Git\bin\sh.exe' failed with exit code 2

Runner Environment (please complete the following information):

  • OS: windows-2019
  • Action version: v0.0.5

Additional context
It seems that the arguments to the tar command are not properly escaped on Windows, which causes all directories to be concatenated (where path separators are expected). I think the relevant code is here: https://github.com/satackey/action-docker-layer-caching/blob/master/src/LayerCache.ts#L59

Restoring cache on Windows fails with "...is not a valid parent..." message

Note:
For tracking purposes only; this is caused by an issue affecting Docker itself. See: moby/moby#41829


Describe the bug
On Windows, restoring the cache fails with an error message such as: image sha256:6795354d435f[...] is not a valid parent for sha256:bfeb117a8139[...]

To Reproduce
Use this action in a workflow that builds an image FROM mcr.microsoft.com/windows/nanoserver:1809 (other windows images are affected too, eg windows/servercore:ltsc2019)

Note: the image needs to be built in the workflow, pulled images cache/restore without issues.

Expected behavior
Cache to load successfully

Debug logs

"C:\Program Files\Git\bin\sh.exe" -c "tar cf - . | docker load"
Loaded image: test_project_scratch:latest
image sha256:6795354d435f89ae3a76f71e9c4e8e8b29b70a69fefaba2b4d13581e20c19d71 is not a valid parent for sha256:bfeb117a81391290c69b33f5b1598f21a5497288247ea0b300900a5a93d7d5d9
Error: The process 'C:\Program Files\Git\bin\sh.exe' failed with exit code 1
Error: Error: The process 'C:\Program Files\Git\bin\sh.exe' failed with exit code 1
    at ExecState._setResult (D:\a\action-docker-layer-caching\action-docker-layer-caching\action-dlc\dist\ttsc-dist\main.js\index.js:1035:25)

(Taken from https://github.com/rcowsill/action-docker-layer-caching/runs/1640192546?check_suite_focus=true)

Runner Environment (please complete the following information):

  • OS: Microsoft Windows Server 2019 10.0.17763 Datacenter
  • Action version: v0.0.11

How to clear a cache?

I'm playing with action-docker-layer-caching and loving it.

However, now my cache has grown too big! I have one 2 GB image that I no longer need, but the cache-restoring step wastes time loading it. How can I reset my caches and start anew?
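The Actions cache service offers no manual delete (unused entries are evicted automatically after about a week), so the usual trick is to version the cache key: bumping the suffix abandons the old cache and starts an empty one. A sketch (key name and version tag illustrative):

```yaml
- uses: satackey/action-docker-layer-caching@v0.0.11
  continue-on-error: true
  with:
    # Bump v2 -> v3 to abandon the old cache and start fresh.
    key: docker-cache-v2-{hash}
    restore-keys: |
      docker-cache-v2-
```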

Request was blocked due to exceeding usage of resource 'Count' in namespace. Microsoft.TeamFoundation.Framework.Server.RequestBlockedException

We're getting intermittent failing builds in our project: https://github.com/pirate/ArchiveBox/runs/1328177855 due to what looks like rate-limiting.

Is this indeed rate-limiting of some API call, and if so, can we pay to raise the limits somehow?

Cache already exists: Error: reserveCache failed: Cache already exists. Scope: refs/heads/master, Key: layer-docker-layer-caching-Test workflow-a999e9c532683eea4e357c25d5ca0b8962198287270e2697ecd05c39c06e3e6c, Version: 5192f72cb1ac063c20e96d66c297c3edba75ec90ea8ada0c23da6d323a178d87
Stored layer cache: {"key":"layer-docker-layer-caching-Test workflow-a999e9c532683eea4e357c25d5ca0b8962198287270e2697ecd05c39c06e3e6c","cacheId":-1}
Start storing layer cache: {"layerId":"71ef3f3b3628b91e38dc9f125c6b30570b37c8a0a71ea8f5c1c43c97a54fe6c9","key":"layer-docker-layer-caching-Test workflow-71ef3f3b3628b91e38dc9f125c6b30570b37c8a0a71ea8f5c1c43c97a54fe6c9"}
Error: Unexpected error: Error: reserveCache failed: {"$id":"1","innerException":null,"message":"Request was blocked due to exceeding usage of resource 'Count' in namespace ''.","typeName":"Microsoft.TeamFoundation.Framework.Server.RequestBlockedException, Microsoft.TeamFoundation.Framework.Server","typeKey":"RequestBlockedException","errorCode":0,"eventId":3000}
Start storing layer cache: {"layerId":"a1ed3daf83da98e06aa3766fa12fc6d4e2b6cacf2a58cd6609b217734b3698b1","key":"layer-docker-layer-caching-Test workflow-a1ed3daf83da98e06aa3766fa12fc6d4e2b6cacf2a58cd6609b217734b3698b1"}
Error: Unexpected error: Error: reserveCache failed: {"$id":"1","innerException":null,"message":"Request was blocked due to exceeding usage of resource 'Count' in namespace ''.","typeName":"Microsoft.TeamFoundation.Framework.Server.RequestBlockedException, Microsoft.TeamFoundation.Framework.Server","typeKey":"RequestBlockedException","errorCode":0,"eventId":3000}
Start storing layer cache: {"layerId":"e375b26300c04b097ea0c712fdd5746bfbc62f2ca46ad5620f5a88fe2dc2f7e2","key":"layer-docker-layer-caching-Test workflow-e375b26300c04b097ea0c712fdd5746bfbc62f2ca46ad5620f5a88fe2dc2f7e2"}

(this is not urgent, we can live without docker layer caching temporarily)

Thanks!

[Security] Workflow release.yml is using vulnerable action satackey/push-prebuilt-action

The workflow release.yml references the action satackey/push-prebuilt-action at v0.2.0-beta3. However, this reference is missing commit 0c027b66503f3857cb4e5cfb71633cc54dbd1ec6, which may contain a fix for a vulnerability.
The fix missing from that version of the action could be related to:
(1) a CVE fix
(2) an upgrade of a vulnerable dependency
(3) a fix for a secret leak, among others.
Please consider updating the reference to the action.

Error: The process '/usr/bin/docker' failed with exit code 1

The following occurs on an attempt to save the cache:

Error: The process '/usr/bin/docker' failed with exit code 1
    at ExecState._setResult (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.8/dist/ttsc-dist/post.js/index.js:1300:25)
Error: Error: The process '/usr/bin/docker' failed with exit code 1
    at ExecState.CheckComplete (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.8/dist/ttsc-dist/post.js/index.js:1283:18)
    at ChildProcess.<anonymous> (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.8/dist/ttsc-dist/post.js/index.js:1183:27)
    at ChildProcess.emit (events.js:210:5)
    at maybeClose (internal/child_process.js:1021:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:283:5)
  builder-liquibase:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - uses: satackey/action-docker-layer-caching@v0.0.8

The build log appears to contain no additional useful information.

It may be relevant that I'm using the cache across 2 jobs, but I'm not certain of that.

Cache post run results in file name too long error on Windows


https://github.com/prisma/prisma-client-go/runs/1070606307

Describe the bug
A Windows action run fails with a "file name too long" error.

To Reproduce
run a windows action

Expected behavior
succeed

Debug logs

"C:\Program Files\Git\bin\sh.exe" -c "docker save '57e8659cfbe3' '57e8659cfbe3' 'b0a4bbcf733c' 'aa010ec86e43' '3e2306a9e69a' '170e5eed2561' '1fcb9726982c' '4ad344606eba' '363820071cb5' '363820071cb5' '07f56b823f92' '3096552b55c7' 'eb1ae8874308' '8fc9e243f6ac' '076b819af1b1' 'be1b4ed60961' 'c969c043a907' 'b6fb5fc8fc3f' '262eb6b25268' '1fcb9726982c' '4ad344606eba' '4ad344606eba' '4ad344606eba' 'integration:latest' '57e8659cfbe3' 'b0a4bbcf733c' 'aa010ec86e43' '3e2306a9e69a' '170e5eed2561' '1fcb9726982c' '4ad344606eba' 'golang:1.13' '4ad344606eba' | tar xf - -C ."
tar: f84e061c1b08d5b5c406eb8e6b392d60db53a246f24dad44cbe287c827acf2e4/layer.tar: Cannot create symlink to '..\\e6d34c02c003006582cb6abd097c3c1bbd3c583d2800e640ff04cdf645b5d615\\layer.tar': File name too long
tar: f4fdc24f3d20f364752816cb7dc2bb21e0ba6367cbd6cfe846422c67f3066ddf/layer.tar: Cannot create symlink to '..\\363a20c3cc7ea6f5c97f3907383603ec140a4bfb166bea1dc6e385fbe840f800\\layer.tar': File name too long
tar: c521749a4bd1422c0e9dcc5c2d8bc9a06f9d075aebbc800c0a3abec8ca156c1a/layer.tar: Cannot create symlink to '..\\814c4412fd8c41b88e7fc1d05da67d24103c50c24f3a53be8c2fd0329ff98cb2\\layer.tar': File name too long
tar: ae587939341bfafb404e90b0642c0df2867aaebe3a3965cdf0754fc65eb2783a/layer.tar: Cannot create symlink to '..\\2f7914e58cf2ed2ba7bc7672908fd8602648c25089a2efc6a46c1bfb939ec0ac\\layer.tar': File name too long
tar: 6ebbd9e839ed2e5e6ac9b2d853363e0119539d1ca4c39e6741a0289bf0bb046d/layer.tar: Cannot create symlink to '..\\8eeffc8140192a406c195601ca9a8143df661c0f0c136733cff5a7b837cbc0be\\layer.tar': File name too long
tar: 6d377303e4c7ad08d49b64f3d99f2ddf8e08190dd8cb1fdb7562e0001c372463/layer.tar: Cannot create symlink to '..\\997f17eb7110b3f397f5273ba11c447e85ad1ed91a208ad1fdc17db386e63c18\\layer.tar': File name too long
tar: 6c5b1bd5e2f5522aba7662223d13a98bf1c34785a260f248cc5d77e7370a0d59/layer.tar: Cannot create symlink to '..\\2705ca6221af391723cfab4bf3540a5bd229f552190a67e9e168e05b6fe3b1ee\\layer.tar': File name too long
tar: 655c054642cc83e59a3b894e5b4254deec6c04d4b757d5057f3d029788e8ded8/layer.tar: Cannot create symlink to '..\\4c1d96bc66f2acb88bc40795a303f317daff2b2b2f617ba98b1d2e1a762dffe7\\layer.tar': File name too long
tar: 5a31b519666696f0ceb4f4e67aeb1a9dffc6f41d2140978cfdc58e578f64dea9/layer.tar: Cannot create symlink to '..\\dca8603104db128a47f183ca638d580d871b5fc107143bd3ffe19130de686211\\layer.tar': File name too long
tar: 55b0db4f36d5e6f5f58e5fbb0a1cc9e0b42f831d77a74a65942846a8e3d17311/layer.tar: Cannot create symlink to '..\\e49be28a0162855ec9267c1a6b0b28b8131bd8eb21fab7b737474393677a96ee\\layer.tar': File name too long
tar: 4e9c6bee90ac5afe0a3f126352aa8bd2ac6a5c1c27ec537e9a9e856b73515dc2/layer.tar: Cannot create symlink to '..\\8507ab3b7f9450cebfd7f5a8909141ba84af4760b25be71c921fb32846cab31a\\layer.tar': File name too long
tar: 2b59e1cc041594149a367b0eb4fce42abca8eecc2b30395f685a095baa2f5b93/layer.tar: Cannot create symlink to '..\\2dc7aa7ba3c7d0a39552250998cfdc42ab131f9eb2a8ef78d506496b55231627\\layer.tar': File name too long
tar: 15b21088ba8ab0cb2e14c8f6aa29889ddb88f6eea05e9b68583f0d546db5ea94/layer.tar: Cannot create symlink to '..\\78c82bc0408ad7dc44b6eb1213747bee9bfe7c3696fb2713a34389f510d0ce52\\layer.tar': File name too long
tar: 070eebd009d890e2855e5bb0f4f4e7024fb34cd6d9739c04412c7eaab5a81f7c/layer.tar: Cannot create symlink to '..\\473dbcaaf6d6608092934f70e10461a76ca73a97b13ebdc3def6b22992bd0e5b\\layer.tar': File name too long
tar: 0382a3a452c02bb011218a9c3b3d99491f888b75c353260896a9626b77e62d20/layer.tar: Cannot create symlink to '..\\235f2f7108ecde4a6302fab5ee81f8c9cc5e4afa1dddc3cdc46d7eed4db164bd\\layer.tar': File name too long
tar: Exiting with failure status due to previous errors
Error: The process 'C:\Program Files\Git\bin\sh.exe' failed with exit code 2
    at ExecState._setResult (D:\a\_actions\satackey\action-docker-layer-caching\v0.0.8\dist\ttsc-dist\post.js\index.js:1300:25)
    at ExecState.CheckComplete (D:\a\_actions\satackey\action-docker-layer-caching\v0.0.8\dist\ttsc-dist\post.js\index.js:1283:18)
##[error]Error: The process 'C:\Program Files\Git\bin\sh.exe' failed with exit code 2
    at ChildProcess.<anonymous> (D:\a\_actions\satackey\action-docker-layer-caching\v0.0.8\dist\ttsc-dist\post.js\index.js:1183:27)
    at ChildProcess.emit (events.js:210:5)
    at maybeClose (internal/child_process.js:1021:16)
    at Socket.<anonymous> (internal/child_process.js:430:11)
    at Socket.emit (events.js:210:5)
    at Pipe.<anonymous> (net.js:659:12)

Runner Environment (please complete the following information):

  • OS: [e.g. windows-latest]
  • Action version: [e.g. v0.0.8]

Additional context

docker load is slow

In my case, using this action makes restoring the cache slower than a regular docker build that downloads all layers from Docker Hub without any cache.
Downloading is quite fast; however, docker load takes much longer.

getCacheEntry failed: Cache service responded with 503

Describe the bug

When fetching the cache, the action fails with:

Error: Error: getCacheEntry failed: Cache service responded with 503

To Reproduce
Just run the action

Expected behavior
Should not fail and fetch the cache

Debug logs

## Config:
with:
    key: app-cache-fb9da27a0a6ac8bbeebbcffaba3bdc42183122de70eccdb3b20726f2d55a62a2
    restore-keys: app-cache-fb9da27a0a6ac8bbeebbcffaba3bdc42183122de70eccdb3b20726f2d55a62a2
    concurrency: 4
    skip-save: false

## Error logs
Run satackey/[email protected]
Error: getCacheEntry failed: Cache service responded with 503
    at /home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.11/dist/ttsc-dist/main.js/index.js:44587:15
    at Generator.next (<anonymous>)
    at fulfilled (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.11/dist/ttsc-dist/main.js/index.js:44503:58)
Error: Error: getCacheEntry failed: Cache service responded with 503

Runner Environment (please complete the following information):

  • OS: [e.g. ubuntu-18.04]
  • Action version: [e.g. v0.0.11]

Additional context
Same problem when saving the cache

Unable to reserve cache with key: ReserveCacheError: Unable to reserve cache with key app-cache-fb9da27a0a6ac8bbeebbcffaba3bdc42183122de70eccdb3b20726f2d55a62a2-root, another job may be creating this cache.
Stored root cache, key: app-cache-fb9da27a0a6ac8bbeebbcffaba3bdc42183122de70eccdb3b20726f2d55a62a2-root, id: -1
cache key already exists, aborting.

Add a filter parameter - to select for the images to cache

Is your feature request related to a problem? Please describe.
Not all images need caching, and they can take up valuable room.
Some images are loaded before the cache is restored, such as the images used in services.
Excluding those images would reduce the cache size and speed up the save/load process.

Describe the solution you'd like
When doing docker save $(docker images -q) you can provide a list of the image IDs to save.
If we could provide a filter at that point, we would only save the images we actually need, so instead of this list:

REPOSITORY                                    TAG                       IMAGE ID       CREATED         SIZE
codacy/codacy-prospector                      2.3.9                     2ed875f0c09d   2 days ago      267MB
codacy/codacy-analysis-cli                    latest                    9d412c689197   2 days ago      421MB
snyk/snyk                                     maven-3-jdk-11            882d05531c51   3 days ago      734MB
codacy/codacy-rubocop                         5.1.6                     88f504735018   7 days ago      166MB
codacy/codacy-tsqllint                        1.2.4                     92b7ddf6589f   7 days ago      91.8MB
codacy/codacy-pylint-python3                  2.2.16                    504237065f5b   7 days ago      161MB
api                                           11                        413eb2357edb   9 days ago      512MB
codacy/codacy-bandit                          2.3.1                     5c95af379274   12 days ago     264MB
codacy/codacy-pmd                             3.11.0                    bb50d6e2a005   2 weeks ago     162MB
tomcat                                        8.5-jdk11-corretto        ae7b0918b39c   4 weeks ago     473MB
codacy/codacy-remark-lint                     2.5.59                    ef1454679208   6 weeks ago     155MB
codacy/codacy-bundler-audit                   4.2.12                    8d9828916ae8   2 months ago    86MB
redis                                         5-alpine                  caa6655434a5   2 months ago    29.7MB
codacy/codacy-spotbugs                        1.10.5                    999b0c7e7945   2 months ago    128MB
amazonlinux                                   xray                      b88042e00023   3 months ago    395MB
codacy/codacy-checkstyle                      2.3.0                     5b7b8322d896   4 months ago    118MB
codacy/codacy-hadolint                        1.5.2                     04e2b64ed099   4 months ago    9.31MB
codacy/codacy-metrics-pmd                     0.2.2                     8b5299fc6999   4 months ago    111MB
codacy/codacy-shellcheck                      2.4.1                     509fffd67432   4 months ago    33.5MB
codacy/codacy-sqlint                          2.3.2                     f21a8ce8c730   4 months ago    148MB
codacy/codacy-jackson-linter                  5.2.2                     d06c7b65bd22   4 months ago    104MB
codacy/codacy-brakeman                        1.3.2                     f95377ddb330   4 months ago    146MB
codacy/codacy-pylint                          3.2.1                     582388b48c67   4 months ago    242MB
codacy/codacy-codenarc                        0.4.2                     88739f96d424   4 months ago    111MB
mysql                                         5.6                       c580203d8753   4 months ago    302MB
codacy/codacy-metrics-cloc                    0.2.4                     0f47e48065ab   7 months ago    197MB
amazon/amazon-ecs-local-container-endpoints   latest                    b005329f50b1   7 months ago    180MB
codacy/codacy-duplication-pmdcpd              2.0.145                   e121e0516f79   11 months ago   132MB
codacy/codacy-metrics-rubocop                 0.2.2                     ee82a4e0ba6a   11 months ago   269MB
codacy/codacy-metrics-radon                   0.1.66                    19f7a80d4001   11 months ago   154MB
codacy/codacy-duplication-flay                2.0.154                   ba95f1e54cfa   11 months ago   166MB
codacy/codacy-pmdjava                         2.0.0-pmdlegacy.57fdbf2   dee6d8e1c0e5   20 months ago   231MB
boxfuse/flyway                                5.2.4-alpine              df49c5fb49bb   2 years ago     115MB
memcached                                     1.5.10-alpine             fb0df03da449   2 years ago     8.97MB

A filter would be added at this point:

docker save $(docker images -q --filter=reference='codacy/*')

and we would get this list instead:

REPOSITORY                         TAG                       IMAGE ID       CREATED         SIZE
codacy/codacy-prospector           2.3.9                     2ed875f0c09d   2 days ago      267MB
codacy/codacy-analysis-cli         latest                    9d412c689197   2 days ago      421MB
codacy/codacy-rubocop              5.1.6                     88f504735018   7 days ago      166MB
codacy/codacy-tsqllint             1.2.4                     92b7ddf6589f   7 days ago      91.8MB
codacy/codacy-pylint-python3       2.2.16                    504237065f5b   7 days ago      161MB
codacy/codacy-bandit               2.3.1                     5c95af379274   12 days ago     264MB
codacy/codacy-pmd                  3.11.0                    bb50d6e2a005   2 weeks ago     162MB
codacy/codacy-remark-lint          2.5.59                    ef1454679208   6 weeks ago     155MB
codacy/codacy-bundler-audit        4.2.12                    8d9828916ae8   2 months ago    86MB
codacy/codacy-spotbugs             1.10.5                    999b0c7e7945   2 months ago    128MB
codacy/codacy-checkstyle           2.3.0                     5b7b8322d896   4 months ago    118MB
codacy/codacy-hadolint             1.5.2                     04e2b64ed099   4 months ago    9.31MB
codacy/codacy-metrics-pmd          0.2.2                     8b5299fc6999   4 months ago    111MB
codacy/codacy-shellcheck           2.4.1                     509fffd67432   4 months ago    33.5MB
codacy/codacy-sqlint               2.3.2                     f21a8ce8c730   4 months ago    148MB
codacy/codacy-jackson-linter       5.2.2                     d06c7b65bd22   4 months ago    104MB
codacy/codacy-brakeman             1.3.2                     f95377ddb330   4 months ago    146MB
codacy/codacy-pylint               3.2.1                     582388b48c67   4 months ago    242MB
codacy/codacy-codenarc             0.4.2                     88739f96d424   4 months ago    111MB
codacy/codacy-metrics-cloc         0.2.4                     0f47e48065ab   7 months ago    197MB
codacy/codacy-duplication-pmdcpd   2.0.145                   e121e0516f79   11 months ago   132MB
codacy/codacy-metrics-rubocop      0.2.2                     ee82a4e0ba6a   11 months ago   269MB
codacy/codacy-metrics-radon        0.1.66                    19f7a80d4001   11 months ago   154MB
codacy/codacy-duplication-flay     2.0.154                   ba95f1e54cfa   11 months ago   166MB
codacy/codacy-pmdjava              2.0.0-pmdlegacy.57fdbf2   dee6d8e1c0e5   20 months ago   231MB
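In workflow terms, the requested feature might look like the following. The `filter` input shown here is hypothetical; the action does not currently accept it.

```yaml
- uses: satackey/[email protected]
  continue-on-error: true
  with:
    # Hypothetical input: only save images whose reference matches the filter,
    # mirroring docker images --filter=reference='codacy/*'
    filter: reference=codacy/*
```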

Layer Cache not found

I use the action with the following configuration to make sure that PR caches do not affect master:

    - uses: satackey/action-docker-layer-caching@6b09a11416d285a6bf2a9d1ce2484c878f7c985e
      with:
        key: dlc_${{ github.ref }}_{hash}
        restore-keys: |
          dlc_refs/heads/master_
          dlc_${{ github.ref }}_

However, when the action run on a new PR, I get the following error:

Run satackey/action-docker-layer-caching@6b09a11416d285a6bf2a9d1ce2484c878f7c985e
  with:
    key: dlc_refs/pull/32/merge_{hash}
    restore-keys: dlc_refs/heads/master_
      dlc_refs/pull/32/merge_
    concurrency: 4
Received 14363 of 14363 (100.0%), 0.7 MBs/sec
Cache Size: ~0 MB (14363 B)
/bin/tar --use-compress-program zstd -d -xf /home/runner/work/_temp/6c9bf4f4-3bb5-420a-9886-8bdb81eed2d9/cache.tzst -P -C /home/runner/work/ridewithto_cloudrun/ridewithto_cloudrun
Error: Layer cache not found: {"id":"bfc566732015e74554c940abfdedb5ca215be6a392cad0f189b171298527cc09"}
##[error]Error: Layer cache not found: {"id":"bfc566732015e74554c940abfdedb5ca215be6a392cad0f189b171298527cc09"}
    at LayerCache.restoreSingleLayerBy (/home/runner/work/_actions/satackey/action-docker-layer-caching/6b09a11416d285a6bf2a9d1ce2484c878f7c985e/dist/ttsc-dist/main.js/index.js:33173:19)
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
    at async Promise.all (index 1)
    at async LayerCache.restoreLayers (/home/runner/work/_actions/satackey/action-docker-layer-caching/6b09a11416d285a6bf2a9d1ce2484c878f7c985e/dist/ttsc-dist/main.js/index.js:33162:58)
    at async LayerCache.restore (/home/runner/work/_actions/satackey/action-docker-layer-caching/6b09a11416d285a6bf2a9d1ce2484c878f7c985e/dist/ttsc-dist/main.js/index.js:33140:42)
    at async main (/home/runner/work/_actions/satackey/action-docker-layer-caching/6b09a11416d285a6bf2a9d1ce2484c878f7c985e/dist/ttsc-dist/main.js/index.js:43988:25)

Changing the key prefix from dlc to something else works, but it's not a practical workaround.

Can we improve slow download time?

Hello 👋

Firstly, thank you for work on this 💙 We're using it all over the place at Exercism and it's proving to be a brilliant tool.

One thing I'm noticing though is that the more it's used, the slower it gets to download things. On a repo I'm working on at the moment, it takes over 5 minutes to download the data and load it into Docker. This time seems to be increasing linearly with each use, which scares me a little! I've tried experimenting with different concurrency levels, but to no avail.

I'm wondering if you know of any way to improve things, either for me as a user, or any ideas about how we could speed up/improve the action itself?

Could we maybe set expiries on the cached data, removing layers that haven't been used in a while? This could happen either in the clean up phase of the action, or as a stand-alone clean-up action that could run daily?
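The stand-alone daily cleanup idea could be sketched as a scheduled workflow that prunes action caches with the GitHub CLI. This is only a sketch under assumptions: it uses the `gh cache delete` subcommand (available in recent GitHub CLI versions) and simply deletes everything rather than expiring only stale layers, so a real policy would need to filter by last-used time.

```yaml
name: Prune action caches
on:
  schedule:
    - cron: '0 3 * * 0'   # weekly, Sunday 03:00 UTC
jobs:
  prune:
    runs-on: ubuntu-latest
    permissions:
      actions: write      # required to delete caches
    steps:
      # Sketch: deletes every cache in the repository.
      - run: gh cache delete --all --repo ${{ github.repository }}
        env:
          GH_TOKEN: ${{ github.token }}
```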

There's a few of us at Exercism that would happily contribute to making things better if you want us to submit a PR, etc, but I'm wondering if you had any ideas/thoughts/direction regarding how we could improve this?

Thank you!
Jeremy

Reduce concurrency when storing and restoring layers

Describe the bug

storeLayers and restoreLayers should limit the number of concurrent calls to the cache service, otherwise it can hit throttling limits. Right now, this action processes all layers in parallel, resulting in a large number of calls to the cache service in a very short time period.

To Reproduce
I don't have a good repro, but we have seen at least one user of this action try to cache 200+ layers. Those requests are all sent within about 0.1 seconds :)

Expected behavior

The action should limit the number of concurrent store / restore requests to the cache service.
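The expected behavior can be sketched in shell: instead of launching one request per layer all at once, a pool of at most 4 workers drains the layer list. The `echo` here is just a stand-in for the real cache-service upload.

```shell
# Sketch of limited concurrency: xargs -P caps the number of workers at 4,
# so 200+ layers no longer produce 200+ simultaneous requests.
printf '%s\n' layer1 layer2 layer3 layer4 layer5 layer6 \
  | xargs -n 1 -P 4 sh -c 'echo "stored $0"'
```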

Debug logs
N/A

Runner Environment (please complete the following information):

  • OS: Any
  • Action version: v0.0.4

Additional context

Error/failure when uploading a cache that already exists

Describe the bug
The post step is marked as failed when attempting to upload a cache that already exists.

It looks like the cache API is now returning a different error message when the cache already exists. It used to say:

Cache already exists: Error: reserveCache failed: Cache already exists. Scope: [...], Key: [...], Version: [...]

Now it says:

Error: Unexpected error: ReserveCacheError: Unable to reserve cache with key [...], another job may be creating this cache.

The LayerCache searches for "Cache already exists" in error messages to ignore that case, but the new message doesn't match that.
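The fix amounts to matching both message variants. A minimal shell sketch of the broadened check (the actual fix belongs in the action's TypeScript sources; the function name here is illustrative):

```shell
# Treat both the old and the new "cache already exists" messages as non-fatal.
is_cache_exists_error() {
  printf '%s' "$1" | grep -Eq 'Cache already exists|Unable to reserve cache'
}

is_cache_exists_error 'ReserveCacheError: Unable to reserve cache with key x, another job may be creating this cache.' \
  && echo 'non-fatal: cache already exists'
```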

To Reproduce

  1. Run a workflow using this action to cache docker layers
  2. Push a change that will cause some new layers to build
  3. Run the workflow again with the same cache keys
  4. Observe the error message when already-cached files are uploaded

Expected behavior
The action should detect the new Unable to reserve cache message and treat it as non-fatal.

Debug logs
https://github.com/satackey/action-docker-layer-caching/runs/1507974881

Runner Environment (please complete the following information):

  • OS: ubuntu-18.04
  • Action version: v0.0.10

"Root cache could not be found. aborting." and "Error: The process '/usr/bin/docker' failed with exit code 1"

Describe the bug
Creating the cache and filling it throws errors.

To Reproduce
Use example from README
Run action
Observe errors

Expected behavior
Working cache

Debug logs
https://github.com/valentijnscholten/django-DefectDojo/runs/1167379775?check_suite_focus=true

Run satackey/[email protected]
  with:
    key: docker-layer-caching-Docker Compose Actions Workflow-{hash}
    restore-keys: docker-layer-caching-Docker Compose Actions Workflow-
    concurrency: 4
    skip-save: false
Root cache could not be found. aborting.

and

1s
Post job cleanup.
Error: The process '/usr/bin/docker' failed with exit code 1
    at ExecState._setResult (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.8/dist/ttsc-dist/post.js/index.js:1300:25)
    at ExecState.CheckComplete (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.8/dist/ttsc-dist/post.js/index.js:1283:18)
    at ChildProcess.<anonymous> (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.8/dist/ttsc-dist/post.js/index.js:1183:27)
    at ChildProcess.emit (events.js:210:5)
    at maybeClose (internal/child_process.js:1021:16)
    at Socket.<anonymous> (internal/child_process.js:430:11)
    at Socket.emit (events.js:210:5)
    at Pipe.<anonymous> (net.js:659:12)
Error: Error: The process '/usr/bin/docker' failed with exit code 1

If applicable, add debug logs to help explain your problem.
Learn more about enabling the debug log at https://docs.github.com/en/actions/configuring-and-managing-workflows/managing-a-workflow-run#enabling-debug-logging

Runner Environment (please complete the following information):

  • OS: ubuntu-latest
  • Action version: 0.0.8

Additional context
workflow file: https://github.com/valentijnscholten/django-DefectDojo/actions/runs/272961568/workflow

quiet mode

The output is quite verbose. Requesting a user option to reduce it, something like a 'quiet' mode (like docker build . --quiet).

GITHUB_TOKEN permissions used by this action

At https://github.com/step-security/secure-workflows we are building a knowledge-base (KB) of GITHUB_TOKEN permissions needed by different GitHub Actions. When developers try to set minimum token permissions for their workflows, they can use this knowledge-base instead of trying to research permissions needed by each GitHub Action they use.

Below you can see the KB of your GITHUB Action.

name: Docker Layer Caching # satackey/action-docker-layer-caching
# GITHUB_TOKEN not used

If you think this information is not accurate, or if in the future your GitHub Action starts using a different set of permissions, please create an issue at https://github.com/step-security/secure-workflows/issues to let us know.

This issue is automatically created by our analysis bot, feel free to close after reading :)

References:

GitHub asks users to define workflow permissions, see https://github.blog/changelog/2021-04-20-github-actions-control-permissions-for-github_token/ and https://docs.github.com/en/actions/security-guides/automatic-token-authentication#modifying-the-permissions-for-the-github_token for securing GitHub workflows against supply-chain attacks.

Setting minimum token permissions is also checked for by Open Source Security Foundation (OpenSSF) Scorecards. Scorecards recommend using https://github.com/step-security/secure-workflows so developers can fix this issue in an easier manner.

[Security] Workflow release.yml is using vulnerable action satackey/push-prebuilt-action

The workflow release.yml is referencing action satackey/push-prebuilt-action using references v0.2.0-beta3. However this reference is missing the commit 0c027b66503f3857cb4e5cfb71633cc54dbd1ec6 which may contain fix to the some vulnerability.
The vulnerability fix that is missing by actions version could be related to:
(1) CVE fix
(2) upgrade of vulnerable dependency
(3) fix to secret leak and others.
Please consider to update the reference to the action.

Ability to skip saving cache on condition

It would be useful when you don't want to lose time saving the cache.

For example, I want to skip saving the cache when a previous step failed.
I can specify an if: condition such as success(), meaning the cache should only be saved when the previous steps were successful.
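For reference, the action already exposes a static skip-save input (it appears in the debug logs elsewhere on this page), but that is evaluated up front, so it cannot depend on the outcome of later steps. A hypothetical sketch of the requested behavior, where skip-save-if is an input that does not exist today:

```yaml
- uses: satackey/[email protected]
  continue-on-error: true
  with:
    # Hypothetical input: evaluated in the post run, so the cache is only
    # saved when all preceding steps succeeded.
    skip-save-if: ${{ !success() }}
```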

Warning: The `save-state` command is deprecated and will be disabled soon. Please upgrade to using Environment Files.

Describe the bug

Warning is printed by GHA:

Warning: The save-state command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/

To Reproduce
Steps to reproduce the behavior:

Just run this action on any pipeline

Expected behavior

No deprecation and breakage warning

Debug logs
Example run: https://github.com/nathan815/threes-scorekeeper/actions/runs/3973442441/jobs/6812100616

Runner Environment (please complete the following information):

  • OS: ubuntu-latest
  • Action version: [e.g. v0.0.4] Current runner version: '2.300.2'

NEW FORK

⚠️ Deprecation Notice for v0.0.11 and older ⚠️

Both this repository and the underlying push-prebuilt-action seem to be abandoned, and this repo throws a couple of deprecation warnings. I have addressed those and published a new release. I would be happy to add others as maintainers if I wind up lacking time.

New repo

https://github.com/jpribyl/action-docker-layer-caching

Usage

- name: Docker Layer Caching2
  uses: jpribyl/[email protected]

Marketplace

https://github.com/marketplace/actions/docker-layer-caching2

Underlying build action

https://github.com/jpribyl/push-prebuilt-action

No space left on device

I build plenty of PHP extensions and I think the total is larger than 5 GB.

I get the following output:

Error response from daemon: write /var/lib/docker/tmp/docker-export-122249911/afb84dc4c4395339966/layer.tar: no space left on device
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors

And nothing gets saved. Is there a way around this somehow, e.g. only saving named images? Only 50% of the images were saved.
Could one increase the cache size somehow?

Running with caching is slower than without a cache

Hi,
The image below shows that this plugin makes the build noticeably slower. When I run without the plugin, I get better results.
I'm sure the problem is on my side; what am I doing wrong?
Thanks

(attached image: build-time comparison)

This is the code:

jobs:
  build-push-gcr:
    name: Build and Push to GCP
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - uses: satackey/[email protected]

        continue-on-error: true
      - uses: google-github-actions/setup-gcloud@master
        with:
          service_account_key: ${{ secrets.GCLOUD_SERVICE_KEY }}
          project_id: ${{ secrets.PROJECT_ID }}

      - name: Build Docker Image
        run: DOCKER_BUILDKIT=1 docker build -t ${{ inputs.APP_NAME }}/${{ inputs.BRANCH_NAME }} -f ./infra/build/${{ inputs.APP_NAME }}/Dockerfile .

      - name: Configure Docker Client
        run: |-
          gcloud auth configure-docker --quiet
          gcloud auth configure-docker us-central1-docker.pkg.dev --quiet

      - name: Push Docker Image to Container Registry (GCR)
        run: |-
          docker tag ${{ inputs.APP_NAME }}/${{ inputs.BRANCH_NAME }} gcr.io/${{ secrets.PROJECT_ID }}/${{ inputs.APP_NAME }}/${{ inputs.BRANCH_NAME }}
          docker push gcr.io/${{ secrets.PROJECT_ID }}/${{ inputs.APP_NAME }}/${{ inputs.BRANCH_NAME }}

Cannot upload FROM scratch image layers

Describe the bug
CI jobs that build FROM scratch images somehow cannot upload the cache in a post-step.

To Reproduce
Steps to reproduce the behavior:

  1. Create a job that builds a FROM scratch Docker image.
  2. Add caching with this GitHub Action.
  3. See the error in post-step.

Expected behavior
Post-job should upload any created image layers normally.

Debug logs
See this job, for example, which builds the following Dockerfile:

Post job cleanup.
/bin/sh -c docker save '3f268f2d8ff3' '3f268f2d8ff3' '2b70a2fe443f' '514da35d4c88' 'fefd7f81fbe8' '95639df9477e' '95639df9477e' '692f51822250' '3db404ba035e' 'cdcfd23e4908' '826fd23f771f' '732c2936ac34' '0de5f11e842a' 'f6b52a897a7a' '049db6664b91' 'b1e3431dab8c' 'd52551e23000' 'f60ff88094b2' '5a8f3a9c56ec' 'ab6d7b76b1b9' '7137860ffc6b' '738215c4d2d1' 'd46674d6250e' 'e7ee0c7daedd' '682140a99ed0' '4f49052e8448' 'f3bcd0fe7760' '5eca3e4a1aa2' 'c4e362ce9805' '4f7bef47a1b7' '4f7bef47a1b7' '4f7bef47a1b7' 'instrumentisto/medea:build-49' '3f268f2d8ff3' '2b70a2fe443f' '514da35d4c88' 'fefd7f81fbe8' 'rust:1.48' '4f7bef47a1b7' | tar xf - -C .
Error response from daemon: empty export - not implemented
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors
Error: The process '/bin/sh' failed with exit code 2
    at ExecState._setResult (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.11/dist/ttsc-dist/post.js/index.js:1035:25)
    at ExecState.CheckComplete (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.11/dist/ttsc-dist/post.js/index.js:1018:18)
    at ChildProcess.<anonymous> (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.11/dist/ttsc-dist/post.js/index.js:912:27)
    at ChildProcess.emit (events.js:210:5)
Error: Error: The process '/bin/sh' failed with exit code 2
    at maybeClose (internal/child_process.js:1021:16)
    at Socket.<anonymous> (internal/child_process.js:430:11)
    at Socket.emit (events.js:210:5)
    at Pipe.<anonymous> (net.js:659:12)

Cache not working with BuildKit

Describe the bug
Cache not working when building with DOCKER_BUILDKIT=1

To Reproduce
Steps to reproduce the behavior:

  1. Build an image with BuildKit (put DOCKER_BUILDKIT=1 before the build command)
  2. Rebuild the image
  3. The image is built from scratch; the cache is not used.

Expected behavior
The image is built from the cache.

Debug logs
In the logs I can see:

  • the cache is saved in the first iteration
  • it is successfully fetched in the second, but the image does not use it.

Runner Environment (please complete the following information):

Additional context
But it works fine with the default builder.
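This looks consistent with how BuildKit works: its build cache lives in BuildKit's own store and is not exported by docker save, so this action's save/load approach only benefits the classic builder. A workaround sketch, assuming the image tag is a placeholder, is to pin the classic builder for the cached build:

```yaml
- uses: satackey/[email protected]
  continue-on-error: true

# Build with the classic builder so the layers restored by docker load are reused.
- run: DOCKER_BUILDKIT=0 docker build -t myimage .
```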

All data uploaded in root cache (empty layer caches) on Windows

Describe the bug
In jobs running on windows, the code that splits out the layers into separate caches is bypassed. All the layer content is left in the root cache. Separated layer caches are created, but each one only contains an empty folder.

To Reproduce
Can be seen in the CI workflow of this repo.

The root cache is 96MB:
https://github.com/satackey/action-docker-layer-caching/runs/1515471574?check_suite_focus=true#step:6:83

Notice that all the layer caches are 30 bytes:
https://github.com/satackey/action-docker-layer-caching/runs/1515471574?check_suite_focus=true#step:6:132

Expected behavior
Layer files are moved out of the root cache and saved in their own layer cache.

Runner Environment (please complete the following information):

  • OS: Microsoft Windows Server 2019 10.0.17763 Datacenter
  • Action version: v0.0.10

Additional context
It looks like this is caused by the \ directory separators used on Windows. recursive-readdir returns paths with \ on Windows, but moveLayerTarsInDir expects /.
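A one-line normalization illustrates the likely fix: convert the Windows separators to / before the paths are compared. This is a sketch only; the actual fix belongs in the action's TypeScript sources, and the path shown is made up.

```shell
# recursive-readdir yields paths like 'cache\f84e061c\layer.tar' on Windows;
# normalize to forward slashes before matching against the expected layout.
printf '%s\n' 'cache\f84e061c\layer.tar' | tr '\\' '/'
```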

ReserveCacheError: Unable to reserve cache with key

Randomly (as it seems) multiple instances of the error appear:

Unable to reserve cache with key: ReserveCacheError: Unable to reserve cache with key ..., another job may be creating this cache.

Is it something I should be concerned about?

The action is used like

      - uses: satackey/[email protected]
        continue-on-error: true

      - run: docker build -t ... .

`docker load` fails with "evalSymlinksInScope: too many links"

Describe the bug
docker load fails as follows:

/bin/sh -c tar cf - . | docker load
evalSymlinksInScope: too many links in /var/lib/docker/tmp/docker-import-485111148/fbf06cbdaebb44ef253d774ca00108ac5e04fc0eb46b5587361e0d958e3f6bce/layer.tar
Error: The process '/bin/sh' failed with exit code 1
    at ExecState._setResult (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.5/dist/ttsc-dist/main.js/index.js:1300:25)
    at ExecState.CheckComplete (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.5/dist/ttsc-dist/main.js/index.js:1283:18)
    at ChildProcess.<anonymous> (/home/runner/work/_actions/satackey/action-docker-layer-caching/v0.0.5/dist/ttsc-dist/main.js/index.js:1183:27)
    at ChildProcess.emit (events.js:210:5)
    at maybeClose (internal/child_process.js:1021:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:283:5)
Error: Error: The process '/bin/sh' failed with exit code 1

To Reproduce
This happens on all builds in this repo

Debug logs
A sample run can be seen here

The config file is here

Additional context
I am intending to cycle the cache-key next, but wanted to log this as that will probably fix the symptom not the cause.
