
Volumes Backup and Share Extension


Extension Screenshot

🚀 This extension was originally created by Felipe Cruz

Features

  • Export a volume (the sketch after this list shows the rough manual equivalent):
    • To a compressed file in your local filesystem
    • To an existing local image
    • To a new local image
    • To a new image in Docker Hub (or another registry)
  • Import data into a new container or into an existing container:
    • From a compressed file in your local filesystem
    • From an existing image
    • From an existing image in Docker Hub (or another registry)
  • Transfer a volume via SSH to another host that runs Docker Desktop or Docker engine.
  • Clone a volume
  • Empty a volume
  • Delete a volume
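
Export and import work roughly the way the original vackup script does: a throwaway container mounts the volume and tars its contents. A minimal manual sketch, not the extension's exact implementation (volume and file names are placeholders; the vackup-volume layout follows the convention described in the issues below):

  # Export: archive the volume's contents from a disposable container
  docker run --rm -v myvol:/vackup-volume -v "$PWD":/backup busybox \
    tar czf /backup/myvol.tar.gz -C / vackup-volume

  # Import: unpack the archive back into a (new or existing) volume
  docker run --rm -v myvol:/vackup-volume -v "$PWD":/backup busybox \
    tar xzf /backup/myvol.tar.gz -C /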

Installation

The recommended way to install the extension is from the Marketplace in Docker Desktop. You can also install it with the Docker Extensions CLI, targeting either a published release (e.g. 1.0.0) or a branch (e.g. main):

  docker extension install docker/volumes-backup-extension:main
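
Or, pinning a published release:

  docker extension install docker/volumes-backup-extension:1.0.0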

Run Locally

Clone the project

  git clone https://github.com/docker/volumes-backup-extension.git

Go to the project directory

  cd volumes-backup-extension

Build the extension

  docker build -t docker/volumes-backup-extension:latest .

Install the extension

  docker extension install docker/volumes-backup-extension:latest

Developing the frontend

  cd ui
  npm install
  npm start

This starts a development server that listens on port 3000.

You can now tell Docker Desktop to use this as the frontend source. In another terminal run:

  docker extension dev ui-source docker/volumes-backup-extension:latest http://localhost:3000

In order to open the Chrome Dev Tools for your extension when you click on the extension tab, run:

  docker extension dev debug docker/volumes-backup-extension:latest

Each subsequent click on the extension tab will also open Chrome Dev Tools. To stop this behaviour, run:

  docker extension dev reset docker/volumes-backup-extension:latest

Contributors

alyxpractice, atomist-bot, atomist[bot], benja-m-1, cdupuis, felipecruz91, frezzle, gtardif, lalit3716, lucbpz, mcapell, sscsps


Issues

Allow import from non-exported vackups

At the moment, when importing a tar.gz into a volume, the compressed file must contain a folder named vackup-volume that holds the actual content to import.

This needs to be fixed so any content inside a .tar.gz can be imported.
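
In the meantime, a manual workaround is to re-wrap the archive so its contents sit under a top-level vackup-volume folder (file names are placeholders):

  # Re-wrap an arbitrary .tar.gz into the layout the extension expects
  mkdir -p staging/vackup-volume
  tar xzf original.tar.gz -C staging/vackup-volume
  tar czf importable.tar.gz -C staging vackup-volume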

Sort volumes by size correctly

Describe the bug
Sorting by size is not correct.

Add the steps to reproduce
Steps to reproduce the behavior:

  1. Go to the Volumes Backup & Share extension.
  2. Click on the Size header column.
  3. See the volumes are sorted incorrectly.

Describe the expected behavior
The volumes should be sorted by their actual size, taking the unit into account rather than comparing the numeric value alone.


Output of docker extension version:

Client Version: v0.2.13
Server API Version: 0.3.0


Linux: Installation of extension fails with error

Hi,

I'm unable to install the volumes-backup-extension. Error msg is:

  [steve@cosmos volumes-backup-extension]$ docker extension install docker/volumes-backup-extension:latest
  Extensions can install binaries, invoke commands and access files on your machine. Are you sure you want to continue? [y/N] y
  Image not available locally, pulling docker/volumes-backup-extension:latest...
  Extracting metadata and files for the extension "docker/volumes-backup-extension:latest"
  Installing service in Desktop VM...
  Setting additional compose attributes
  Installing Desktop extension binary "docker-credentials-client" on host...
  Desktop extension binary "docker-credentials-client" installed
  Installing Desktop extension UI for tab "Volumes Backup & Share"...
  Extension UI tab "Volumes Backup & Share" added.
  rename /tmp/412012752-ext-install/docker_volumes-backup-extension /home/steve/.docker/desktop/extensions/docker_volumes-backup-extension: invalid cross-device link
  Removing extension docker/volumes-backup-extension:latest...
  Removing extension containers...
  Extension containers removed
  VM service socket forwarding stopped
  Extension UI tab Volumes Backup & Share removed
  Extension image docker/volumes-backup-extension:latest removed
  Extension "Volumes Backup & Share" removed
  installation could not be completed due to: rename /tmp/412012752-ext-install/docker_volumes-backup-extension /home/steve/.docker/desktop/extensions/docker_volumes-backup-extension: invalid cross-device link

I also tried to build from source, but hit the same error. The kernel boot arg overlay.metacopy=N (or echo N | sudo tee /sys/module/overlay/parameters/metacopy) doesn't help :-(

("invalid cross-device link" is EXDEV: rename(2) cannot move a file between filesystems, so the install fails when /tmp and ~/.docker/desktop/extensions live on different mounts.)

Getting `Internal server error` when trying to export to Windows network volume

Describe the bug
When trying to export a volume to a Windows network share, I get an Internal Server Error message.

Add the steps to reproduce
Steps to reproduce the behavior:

  1. Go to extension view
  2. Click Export volume
  3. Set a Windows network share as the target

Describe the expected behavior
The export should succeed. Instead, I got the error: Failed to backup volume ttn-locator-backend_postgres-data to \\Orzhova\home\Drive\Backups\Master-Thesis-INM\ttn-locator-db: {"message":"Internal Server Error"}. HTTP status code: 500


Output of docker extension version:

Client Version: v0.2.19
Server API Version: 0.3.4

Output of docker version:

Client:
 Cloud integration: v1.0.31
 Version:           23.0.5
 API version:       1.42
 Go version:        go1.19.8
 Git commit:        bc4487a
 Built:             Wed Apr 26 16:20:14 2023
 OS/Arch:           windows/amd64
 Context:           default

Server: Docker Desktop 4.19.0 (106363)
 Engine:
  Version:          23.0.5
  API version:      1.42 (minimum version 1.12)
  Go version:       go1.19.8
  Git commit:       94d3ad6
  Built:            Wed Apr 26 16:17:45 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.20
  GitCommit:        2806fc1057397dbaeefbea0e4e17bddfbd388f38
 runc:
  Version:          1.1.5
  GitCommit:        v1.1.5-0-gf19387a
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Include extension console logs
No log gets output in the chrome debugger view, sadly


Additional Option to name imported volumes/images by folder names

Is your feature request related to a problem? Please describe.
TBD

Describe the solution you'd like
Currently, the extension assigns random volume names when importing folders (to avoid naming conflicts) and offers the option to rename before importing. Could an additional option be added to name imported volumes/images after the folder name as well?

Describe alternatives you've considered
N/A

Additional context
Related to this reddit thread - https://www.reddit.com/r/docker/comments/x11m4n/comment/iq8x2bn/?utm_source=share&utm_medium=web2x&context=3

Add support for exporting backups to S3-compatible storage

Is your feature request related to a problem? Please describe.

I would like to back up my volumes to an S3-compatible storage solution.

Describe the solution you'd like

I would like to put my credentials in a configuration file, and the extension would configure itself to send the backup to S3-compatible storage (not necessarily on AWS; it could be MinIO, for instance).

Describe alternatives you've considered

Uploading over SSH, but that is not always practical since the server has a limited amount of resources.

Schedule backups

Is your feature request related to a problem? Please describe.
Schedule a backup, say every day or week.


Multi-threaded compressions for exported images and volumes

Is your feature request related to a problem? Please describe.
N/A

Describe the solution you'd like
Support for multi-threaded compressions for exported images and volumes.
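
For hand-rolled exports, multi-threaded compression can already be approximated with zstd's -T0 flag (use all cores). A sketch, assuming a volume named myvol and an Alpine helper container that installs zstd on the fly:

  # Stream the volume as a tar and compress it on all cores with zstd
  docker run --rm -v myvol:/volume-data -v "$PWD":/backup alpine sh -c \
    "apk add --no-cache zstd && tar -C /volume-data -cf - . | zstd -T0 -o /backup/myvol.tar.zst"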

Describe alternatives you've considered
N/A

Additional context
N/A

Easier re-push

Is your feature request related to a problem? Please describe.

No

Describe the solution you'd like

Once I've pushed a volume to a registry, I would love the extension to remember that, so I can re-push more easily without having to remember where I pushed the volume the first time.

Describe alternatives you've considered

None

See remote volumes

Is your feature request related to a problem? Please describe.

I don't know which of my repositories on hub are my volumes.

Describe the solution you'd like

The same way we can see remote images in the Images view, I would <3 to see a list of the volumes I've pushed to Hub, so that I can easily pull them on a new machine, for example.

Describe alternatives you've considered

  • Keep a list of my volumes on a napkin
  • name my volumes repositories with some naming scheme

None of those are great.


Extension “Volumes Backup & Share” fails with “Internal Server Error HTTP status code: 500”

Issue type: Unable to backup volume using extension “Volumes Backup & Share”
OS Version / Build: Windows 11 Version 22H2 (Build: 22621.674)
App version: v.4.13.1
Steps to reproduce:
“Volumes Backup & Share” > “Export volume” > “To: Local File” > “.tar.zst” > “Directory” > “Select directory” > “” returns
Failed to backup volume to C:\Users\Desktop: {“message”:“Internal Server Error”}. HTTP status code: 500

Diagnostic ID:
6FD65E53-DD8F-44DB-A774-357A2EE71651/20221102203305

Hello,

I'm trying to export a volume using the Docker extension “Volumes Backup & Share”, but it fails regardless of destination:

  • C:
  • C:\Users\Desktop
  • D: (external USB)

with the error:
Failed to backup volume to C:\Users\Desktop: {“message”:“Internal Server Error”}. HTTP status code: 500

The container the volume belongs to isn’t started.

Would appreciate any and all help!

Best Regards - TheSwede86

issue linking to volumes exported to GHCR registry

While I can export a volume to my personal GitHub account with the Docker Desktop VB&S interface, it seems I am unable to export a volume to an organization-owned container image in GitHub Container Registry (as a GitHub Package), even though I have full admin privileges to repositories in that GitHub organization.

I'm not entirely sure this is an issue with the VB&S extension, but the GitHub documentation for Configuring access to container images for an organization states that I should be able to assign read, write, or admin roles and configure access. I cannot.

The documentation indicates in Step 3 that I should 'search for and select my package'. My package is not visible from the organization view. Further, I cannot connect to the organization's repository from my personal account, even though my GitHub ID has full admin access to the repo within the organization.

I have configured a classic PAT as described in Authenticating to the Container registry, with every possible scope allowed, to no avail.

I cannot 'Choose a registry' from the organization's Packages page either; the only option is 'Learn More'.

I stumbled across the following note on the Packages page of my personal account (where the volumes were exported to), which makes me think I should be able to export a volume to a specific organization via the Docker Desktop interface:
"Note: To connect a repository to your container image, the namespace for the repository and container image on GitHub must be the same. For example, they should be owned by the same user or organization"

MacOS export to import on Linux needs `--platform linux/amd64`

Describe the bug
I've successfully exported a backup to my repository on Docker Hub (to keep a version history of the data) from a macOS laptop.
I'm now pulling that image down onto a production Linux machine (which doesn't have Docker Desktop) via docker pull.
I was going to use Bret's vackup tool at this point to get the data into a volume on the Linux machine.
Unfortunately, I can't use the image that I've pulled, because the Mac built the image for platform linux/arm64/v8 while the Linux machine expects linux/amd64.

I've tried to set the DOCKER_DEFAULT_PLATFORM=linux/amd64 environment variable, but Docker Desktop or the plugin doesn't seem to read it.

Is there a way to build the image with a --platform linux/amd64 or docker buildx build --platform linux/amd64 to enable compatibility between hosts?

The error:

# on target linux host
docker pull username/image_1
docker run -it username/image_1

WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested
exec /bin/sh: exec format error

or

docker run --platform linux/amd64 -it username/image_1 /bin/sh 
Unable to find image 'username/image_1:latest' locally
latest: Pulling from username/image_1
Digest: sha256:d43c98efcb5429dff9f8053bd43eff6ce2c2710b73978da28c72bf49aae7a3de
Status: Image is up to date for username/image_1:latest
WARNING: image with reference username/image_1 was found but does not match the specified platform: wanted linux/amd64, actual: linux/arm64/v8
docker: Error response from daemon: image with reference username/image_1 was found but does not match the specified platform: wanted linux/amd64, actual: linux/arm64/v8.
See 'docker run --help'.

Describe the expected behavior
Expect the volume to be readable on a linux host, when the backup is made from a MacOS host.
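
Until multi-platform export is supported, one possible manual workaround on the Linux host is to extract the files without ever running the image, so the architecture mismatch never comes into play. A sketch, assuming the pushed image keeps the volume contents under a directory such as /volume-data (that path is an assumption from the vackup lineage; inspect the image to confirm):

  docker pull username/image_1
  id=$(docker create username/image_1)      # create only; nothing executes
  docker cp "$id":/volume-data ./restored   # copy the data out of the image
  docker rm "$id"
  # load the files into a volume using a native helper image
  docker run --rm -v myvol:/dest -v "$PWD/restored":/src alpine cp -a /src/. /dest/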

Output of docker extension version:

docker extension version
Client Version: v0.2.16
Server API Version: 0.3.0

Output of docker version:

Client:
 Cloud integration: v1.0.29
 Version:           20.10.21
 API version:       1.41
 Go version:        go1.18.7
 Git commit:        baeda1f
 Built:             Tue Oct 25 18:01:18 2022
 OS/Arch:           darwin/arm64
 Context:           default
 Experimental:      true

Server: Docker Desktop 4.15.0 (93002)
 Engine:
  Version:          20.10.21
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.18.7
  Git commit:       3056208
  Built:            Tue Oct 25 17:59:41 2022
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.6.10
  GitCommit:        770bd0108c32f3fb5c73ae1264f7e503fe7b2661
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Stop containers during backup

Automatically stop and start the containers that are consuming the volume you want to back up, to ensure data integrity.
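
A shell sketch of the idea, using docker ps's volume filter (the volume name is a placeholder):

  # Stop every container using the volume, back it up, then restart them
  VOL=myvol
  ids=$(docker ps -q --filter volume="$VOL")
  [ -n "$ids" ] && docker stop $ids
  docker run --rm -v "$VOL":/volume-data -v "$PWD":/backup busybox \
    tar czf "/backup/$VOL.tar.gz" -C /volume-data .
  [ -n "$ids" ] && docker start $ids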

Please, onboard me :)

As a user who is pretty new to Docker, or not really used to dealing with volumes, I wish the extension told me what I can do with it when I open it for the first time. When I complete (or skip) the onboarding, the extension should save that state so that I'm not shown the onboarding again.

The onboarding could be a way to explain

  • What can be done from the extension
  • When to use it
  • And how useful it is

Transfer to Host fails with: "shell operators are not allowed when executing commands through SDK APIs"

Describe the bug
Upon initiating a Transfer to Host to move a volume to a remote server, the transfer fails with "shell operators are not allowed when executing commands through SDK APIs".


Add the steps to reproduce
Steps to reproduce the behavior:

  1. Follow the exact steps outlined in the animated image under "Transfer a volume to another Docker host" at https://www.docker.com/blog/back-up-and-share-docker-volumes-with-this-extension/, using a verified public key.
  2. The plugin connects to the remote host and lists the destination volumes.
  3. Upon selecting a destination volume and clicking the Transfer button, the transfer fails and the above error is shown.

Output of docker extension version:

docker/volumes-backup-extension:1.1.4

Output of docker version:

Docker Desktop 4.22.0 (117440) 

Include extension console logs

[listVolumes] took 124.60000002384186 ms.
TransferDialog.tsx:102 Transferring data from source volume wordpress-docker_wp_db_data to destination volume docker-wordpress-themecd_wp_db_data in host [email protected]...

No further output is shown in the console; however, the above error is displayed in Docker Desktop.
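
As a stopgap while this bug exists, the transfer can be done by hand by piping a tar stream over SSH between the two engines, which avoids the SDK's shell-operator restriction entirely (volume names and host are placeholders):

  # Stream the source volume to the remote engine and unpack it there
  docker run --rm -v SOURCE_VOL:/from alpine tar cf - -C /from . \
    | ssh user@remote.host "docker run --rm -i -v DEST_VOL:/to alpine tar xf - -C /to"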

Volumes from remote engines

Is your feature request related to a problem? Please describe.
It would be great to be able to choose a Docker context and start seeing my volumes from remote engines, rather than only my local machine.


cli?

Is your feature request related to a problem? Please describe.

No

Describe the solution you'd like

I would <3 a CLI for this, installed along with the extension.

Describe alternatives you've considered

Using the UI, but sometimes I prefer the terminal


Support multiple archive format

Is your feature request related to a problem? Please describe.
I compressed a folder with the macOS Finder, which creates a zip file, but I couldn't import it and had to create a gzipped tar manually.
Even then, the file extension was .tgz instead of .tar.gz.

Describe the solution you'd like
I wish I could import:

  • zip files
  • gzip files with any extension
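
Until then, a Finder-created zip can be repacked into a .tar.gz by hand (file names are placeholders; per the import issue above, the contents may also need to sit under a top-level vackup-volume folder):

  # Convert a zip archive into a .tar.gz the extension accepts
  unzip -d staging my-volume.zip
  tar czf my-volume.tar.gz -C staging .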

Add the name of the volume to be removed in the confirmation popup

Is your feature request related to a problem? Please describe.
I removed a volume, and immediately after confirming I clicked on another row to remove a second one. In the meantime a row had disappeared (I guess the one for the removed volume), and I wasn't sure I had clicked on the right one anymore.

Describe the solution you'd like
Add the name of the volume in the popup

Describe alternatives you've considered
I closed the popup and did it again.


Transferring larger volumes results in `ERR_CHILD_PROCESS_STDIO_MAXBUFFER` error.

Describe the bug
When transferring a 218.9 MB volume from one host to another, I get this error after 30 seconds or so:

Failed to clone volume SOURCE to destinaton volume DESTINATION: Exit code: ERR_CHILD_PROCESS_STDIO_MAXBUFFER

I believe this is because you're using exec instead of spawn: exec buffers the child process's entire output in memory and aborts once it exceeds maxBuffer, whereas spawn streams it.

In volumes-backup-extension/ui/src/components/TransferDialog.tsx, I think line 112:

const transferredOutput = await ddClient.docker.cli.exec("run", [

needs to change to the spawn command. Otherwise you can't transfer larger payloads.

Sorry about that.

Drag and drop a tar.gz over the app to import it

Is your feature request related to a problem? Please describe.
I was importing a tar.gz and mechanically dragged it over Docker Desktop (with the extension open), and realised that this is not possible.

Describe the solution you'd like
When dropping a file on the app, I wish the import dialog would open.

Describe alternatives you've considered
None; just clicking the button and navigating to the folder where my tar.gz is.


Keep volume labels when cloning

Is your feature request related to a problem? Please describe.
Request from @jmaitrehenry:

When you clone a volume created by Compose, is it possible to keep the Compose labels on it?
When I clone the cloned volume back to the original name, Compose complains that it is not a volume it created:

WARN[0000] volume "xxx_mysql_data" already exists but was not created by Docker Compose. Use `external: true` to use an existing volume 
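
A possible manual workaround today is to create the destination volume with the original Compose labels before copying the data into it (project and volume names below are placeholders; Compose also sets a com.docker.compose.version label, which may need to match as well):

  # Recreate the volume with Compose's labels, then copy the data back
  docker volume create \
    --label com.docker.compose.project=xxx \
    --label com.docker.compose.volume=mysql_data \
    xxx_mysql_data
  docker run --rm -v xxx_mysql_data_clone:/from -v xxx_mysql_data:/to \
    alpine cp -a /from/. /to/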


Installation fails with error

Description

Installation fails with error

Failed to install extension: installation could not be completed due to: executing 'docker --context desktop-linux pull docker/volumes-backup-extension:1.1.2' : exit status 1: Error response from daemon: Head "https://registry-1.docker.io/v2/docker/volumes-backup-extension/manifests/1.1.2": EOF

Steps to reproduce
Steps to reproduce the behavior:

  1. Open Docker desktop dashboard
  2. Go to "Add Extensions"
  3. Find "Volumes Backup & Share"
  4. Click "Install"
  5. Observe error message as above

Expected behavior

Installation succeeds

Output of docker extension version:

Client Version: v0.2.17
Server API Version: 0.3.3

Output of docker version:

Client:
 Cloud integration: v1.0.29
 Version:           20.10.22
 API version:       1.41
 Go version:        go1.18.9
 Git commit:        3a2c30b
 Built:             Thu Dec 15 22:28:41 2022
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true

Server: Docker Desktop 4.16.2 (95914)
 Engine:
  Version:          20.10.22
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.18.9
  Git commit:       42c8b31
  Built:            Thu Dec 15 22:26:14 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.14
  GitCommit:        9ba4b250366a5ddde94bb7c9d1def331423aa323
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

docker/volumes-backup-extension:1.1.4 extension becomes useless without an internet connection

I am working in an environment without internet access, and I am trying to import jenkins.tar (containing the Jenkins files on this computer) into Docker as a volume using the docker/volumes-backup-extension:1.1.4 extension. Importing works when there is an internet connection, but without one the extension returns an Internal Server Error (500). All of the extension's images are already installed in Docker. What is the problem? Can you help me?

Backup all volumes that belong to an entire stack

Is your feature request related to a problem? Please describe.
Back up all volumes that belong to an entire stack (e.g. a Compose project) in one operation.
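
A shell sketch of the idea, looping over every volume labeled with a given Compose project (the project name is a placeholder):

  # Back up each volume belonging to one Compose project
  for vol in $(docker volume ls -q --filter label=com.docker.compose.project=myproj); do
    docker run --rm -v "$vol":/volume-data -v "$PWD":/backup busybox \
      tar czf "/backup/$vol.tar.gz" -C /volume-data .
  done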


Don't start stopped containers when cloning a volume

Describe the bug
When you clone a volume used by a stopped container, that container is started automatically. I think this may be dangerous without a warning and without a way to prevent the stopped container from being started.

