
S3 point in time restore

This is the repository for s3-pit-restore, a point in time restore tool for Amazon S3.

The typical scenario in which you may need this tool is when you have enabled versioning on an S3 bucket and want to restore some or all of its files to a certain point in time, whether to the local file system, the same S3 bucket, or a different S3 bucket.

Doing this with the web interface is time consuming: the Amazon S3 web management gui doesn't offer a simple way to do this at scale.

With this tool you can easily restore a repository to a point in time with a simple command like:

  • To local file-system:
     $ s3-pit-restore -b my-bucket -d restored-bucket-local -t "06-17-2016 23:59:50 +2"
    
  • To s3 bucket:
     $ s3-pit-restore -b my-bucket -B restored-bucket-s3 -t "06-17-2016 23:59:50 +2"
    

Choosing the correct date and time to restore to is simply a matter of clicking the Versions: Show button in the S3 web gui and browsing the version timestamps that appear.

Installing

With pip install:

$ pip3 install s3-pit-restore

or clone the repository and launch:

$ python3 setup.py install

Requirements

  • Python 3
  • AWS credentials available in the environment
    • This can be accomplished in various ways:
      • Environment Variables:
        • AWS_ACCESS_KEY_ID
        • AWS_SECRET_ACCESS_KEY
        • AWS_DEFAULT_REGION
      • Your ~/.aws/ files
        • Configured with aws configure
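For example, the environment-variable route can be set up for the current shell session like this (the key values below are placeholders, not real credentials):

```shell
# Export AWS credentials for this shell session only
# (replace the placeholder values with your own keys)
export AWS_ACCESS_KEY_ID="AKIA_PLACEHOLDER"
export AWS_SECRET_ACCESS_KEY="SECRET_PLACEHOLDER"
export AWS_DEFAULT_REGION="us-east-1"
```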

Usage

s3-pit-restore can do a lot of interesting things. The most basic is restoring an entire bucket to a previous state:

Restore to local file-system

  • Restore to local file-system directory restored-bucket-local
     $ s3-pit-restore -b my-bucket -d restored-bucket-local -t "06-17-2016 23:59:50 +2"
    
    • -b gives the source bucket name to be restored from
    • -d gives the local folder to restore to (if it doesn't exist it will be created)
    • -t gives the target date to restore to. Note: The timestamp must include the timezone offset.
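Per object key, a point-in-time restore boils down to picking the newest version whose timestamp does not exceed the target. A minimal sketch of that selection rule (illustrative only; `pick_version` is a hypothetical helper, not the tool's actual code):

```python
from datetime import datetime, timezone

def pick_version(versions, target):
    """Return the newest version with LastModified <= target, or None.

    `versions` mimics the entries boto3's list_object_versions returns
    (only 'VersionId' and 'LastModified' are used here). Sketch only,
    not s3-pit-restore's actual implementation.
    """
    candidates = [v for v in versions if v["LastModified"] <= target]
    if not candidates:
        return None
    return max(candidates, key=lambda v: v["LastModified"])

versions = [
    {"VersionId": "v1", "LastModified": datetime(2016, 6, 1, tzinfo=timezone.utc)},
    {"VersionId": "v2", "LastModified": datetime(2016, 6, 10, tzinfo=timezone.utc)},
    {"VersionId": "v3", "LastModified": datetime(2016, 6, 20, tzinfo=timezone.utc)},
]
# "06-17-2016 23:59:50 +2" expressed in UTC
target = datetime(2016, 6, 17, 21, 59, 50, tzinfo=timezone.utc)
print(pick_version(versions, target)["VersionId"])  # v2
```

This also shows why the timezone offset in -t matters: the comparison is against timezone-aware version timestamps.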

Restore to s3 bucket

  • Restore to same bucket:

     $ s3-pit-restore -b my-bucket -B my-bucket -t "06-17-2016 23:59:50 +2"
    
    • -B gives the destination bucket to restore to. Note: Use the same bucket name to restore back to the source bucket.
  • Restore to a different bucket:

     $ s3-pit-restore -b my-bucket -B restored-bucket-s3 -t "06-17-2016 23:59:50 +2"
    
  • Restore to s3 bucket with a custom virtual prefix, using the -P flag (a restored object src_obj will get the key new-restored-path/src_obj["Key"]):

     $ s3-pit-restore -b my-bucket -B restored-bucket-s3 -P new-restored-path -t "06-17-2016 23:59:50 +2"
    
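The key mapping described above can be expressed in one line (illustrative only; `dest_key` is a hypothetical helper, not part of the tool):

```python
def dest_key(src_key, dest_prefix):
    # How a restored object's key is composed when a destination
    # prefix (-P) is given, per the description above. Sketch only.
    return f"{dest_prefix}/{src_key}"

print(dest_key("photos/2016/img.jpg", "new-restored-path"))
# new-restored-path/photos/2016/img.jpg
```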

Other common options for both the cases

  • Another thing it can do is to restore a subfolder (prefix) of a bucket:

     $ s3-pit-restore -b my-bucket -d my-restored-subfolder -p mysubfolder -t "06-17-2016 23:59:50 +2"
    
    • -p gives a prefix to isolate when reading the source bucket (-P is the equivalent for the destination bucket/folder)
  • You can also speed up the transfer, if you have the bandwidth, by using more parallel workers (--max-workers flag):

     $ s3-pit-restore -b my-bucket -d my-restored-subfolder -p mysubfolder -t "06-17-2016 23:59:50 +2" --max-workers 100
    
  • If you want to restore a well-defined time span, you can use a starting (-f) and an ending (-t) timestamp (a month in this example):

     $ s3-pit-restore -b my-bucket -d my-restored-subfolder -p mysubfolder -f "05-01-2016 00:00:00 +2" -t "06-01-2016 00:00:00 +2"
    

Command line options

usage: s3-pit-restore [-h] -b BUCKET [-B DEST_BUCKET] [-d DEST] [-p PREFIX] [-P DEST_PREFIX] [-t TIMESTAMP] [-f FROM_TIMESTAMP] [-e] [-v] [-u ENDPOINT_URL] [--dry-run] [--debug] [--test] [--max-workers MAX_WORKERS]
                      [--sse {AES256,aws:kms}]

options:
  -h, --help            show this help message and exit
  -b BUCKET, --bucket BUCKET
                        s3 bucket to restore from
  -B DEST_BUCKET, --dest-bucket DEST_BUCKET
                        s3 bucket where recovering to
  -d DEST, --dest DEST  path where recovering to on local
  -p PREFIX, --prefix PREFIX
                        s3 path to restore from
  -P DEST_PREFIX, --dest-prefix DEST_PREFIX
                        s3 path to restore to
  -t TIMESTAMP, --timestamp TIMESTAMP
                        final point in time to restore at
  -f FROM_TIMESTAMP, --from-timestamp FROM_TIMESTAMP
                        starting point in time to restore from
  -e, --enable-glacier  enable recovering from glacier
  -v, --verbose         print verbose informations from s3 objects
  -u ENDPOINT_URL, --endpoint-url ENDPOINT_URL
                        use another endpoint URL for s3 service
  --dry-run             execute query without transferring files
  --debug               enable debug output
  --test                s3 pit restore testing
  --max-workers MAX_WORKERS
                        max number of concurrent download requests
  --sse {AES256,aws:kms}
                        Specify server-side encryption

Docker Usage

# make a new local dir in your current path
mkdir restore

# restore a point in time copy under the restore dir you just created
docker run -ti --rm --name=s3-pit-restore -v ${PWD}/restore:/tmp -e AWS_ACCESS_KEY_ID=[AWS_ACCESS_KEY_ID] -e AWS_SECRET_ACCESS_KEY=[AWS_SECRET_ACCESS_KEY] angelocompagnucci/s3-pit-restore -b [Bucket] -p [Prefix] -d /tmp -t "01-25-2018 10:59:50 +2"

Testing

s3-pit-restore comes with a testing suite. You can run it with:

Restore to local file-system test cases:

`$ ./s3-pit-restore -b my-bucket -d /tmp/ --test`

Restore to s3 bucket test cases:

`$ ./s3-pit-restore -b my-bucket -B restore-bucket-s3 -P restore-path --test` (make sure you have s3 bucket `restore-bucket-s3`)

Run all the test cases:

`$ ./s3-pit-restore -b my-bucket -B restore-bucket-s3 -d /tmp/ -P restore-path --test`


s3-pit-restore's Issues

Restore from s3 bucket to s3 bucket in different account

Hi,

Is there a way to authenticate to two different accounts, or to one account using two different assume roles? In other words, is there a way to restore an S3 bucket to a target S3 bucket in a different account (directly, without a stopover like syncing to local and then syncing to the target bucket)?

No versions matching criteria, exiting ...

I am getting this error --> "No versions matching criteria, exiting ..." when I try to run the following command.

%sh s3-pit-restore -b -B -p /people -t "10-06-2021 11:31:46 -5"

I do see I have versions for the file which I am trying to recover.

      | people.txt   | txt | V1 | September 30, 2021, 11:39:47 (UTC-05:00) | 23.0 B | Standard
      | peopleV2.txt | txt | V3 | October 6, 2021, 11:58:32 (UTC-05:00)    | 47.0 B | Standard
  --> | peopleV2.txt | txt | V2 | October 6, 2021, 11:31:46 (UTC-05:00)   | 47.0 B | Standard
  --> | peopleV2.txt | txt | V1 | September 30, 2021, 13:27:30 (UTC-05:00) | 45.0 B | Standard

I did try following because I am just trying to recover peopleV2.txt but no luck

%sh s3-pit-restore -b -B -p /people /peopleV2.txt -t "10-06-2021 11:31:46 -5"

Thank you any help would be appreciated.

  • Dhams

Restoring latest deleted files (no date range)

Hi, I'd like to suggest to add an option to restore all files that have been deleted from a bucket, but without the need to specify a date/range.

It would be important to list only the latest version before the delete marker (or with flags to restore all versions maybe).

would it be too hard to implement in this tool?

cheers

[Question] s3-pit-restore performance for bucket sync

Hello,
I'm trying to use the tool to restore a snapshot of a 2-terabyte source bucket to a new, empty one.

First experiments show a sync duration of about 26 hours with the default threading configuration. I didn't observe much improvement when I increased the number of threads. The sync runs from an EC2 instance where I didn't observe any CPU or network contention.

Is this the expected level of performance for a point-in-time restore? I want to use the tool in a disaster recovery procedure and this exceeds the targeted RTO.

Regards

ModuleNotFoundError: No module named 'boto3'

When attempting to install via the "clone the repository and launch" method, I get the following error(s):

$ s3-pit-restore
Traceback (most recent call last):
  File "[my-env]/bin/s3-pit-restore", line 32, in <module>
    import os, sys, time, signal, argparse, boto3, botocore, \
ModuleNotFoundError: No module named 'boto3'

When running the install, I do get the following output (note the warning):

$ python3 setup.py install
[my-env]/lib/python3.7/distutils/dist.py:274: UserWarning: Unknown distribution option: 'install_requires'
  warnings.warn(msg)

I don't have the same problem when installing via pip.

[General question] Restoring huge buckets (>20TB)

Hi all

I've been trying to restore an entire bucket to a specific state (eg: Monday 08-26-2021 23:59:59 +2).
I tried to restore the entire bucket to this state using the following command:

/usr/local/bin/s3-pit-restore -b [bucketname] -d [bucketname] -f "08-26-2021 00:00:00 +2" -t "08-26-2021 23:59:59 +2" --max-workers 100 -v

I noticed that this command copies every S3 object as a new version.
I'm trying to copy only objects that were modified after 26 August in this case; is this possible?

What exactly am I doing wrong in my case?

Thanks in advance.

thank you!

this tool just saved me countless hours, thanks!

Allow to specify endpoint_url

For those who use other implementations like Ceph Rados Gateway, we need to set another endpoint.
The AWS CLI and its config file still don't allow that:
aws/aws-cli#1270

So the program should pass that argument to the boto3.client method.

Unexpected No versions matching criteria, exiting ...

Hello,

Description

When restoring from one bucket to another, the script terminates when there is no "Versions" key found in the returned page response, therefore not all objects are parsed. See here:

    for page in page_iterator:
        if not "Versions" in page:
            print("No versions matching criteria, exiting ...", file=sys.stderr)
            sys.exit(1)

Findings

It appears that pages containing only system-defined metadata objects (prefixed with META/) have no Versions key, so the script terminates after the first occurrence of such a metadata object.

Debugging logs:

"2021-01-29 12:48:29+00:00" j3JYMecjFXTUPwplWAbok50eenEnyOKE 95871 STANDARD 9fff-ff26d7643d203b550019ca2c2b468a96371b7b6d3261a8f656d5b4165aec
No versions matching criteria, exiting ...:
{'ResponseMetadata': {'RequestId': '*#HJGOGHSEGJ)SE', 'HostId': 'aasdasfegdsuhgsiughwei+ppIc7najtrVm7rgew+siduhgruwgoasdfgr=', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-id-2': 'adf38grhe8oosf83ohg83jdsog+aouht2973aoifoq+9823hrusdfhusighwoadfa=', 'x-amz-request-id': '2903jtgoisd0923jfgj', 'date': 'Wed, 08 Dec 2021 13:18:36 GMT', 'content-type': 'application/xml', 'transfer-encoding': 'chunked', 'server': 'AmazonS3'}, 'RetryAttempts': 0}, 'IsTruncated': True, 'KeyMarker': 'META/blob-14a66a39-6b4a-441c-a7a2-f4578149f2adfb599b1c-908a-4099-8b0e-f76a085388e81614621839103.refs', 'VersionIdMarker': 'Nkq0cltacQKCj.BVqXxxO0jSbH_FRY.1', 'NextKeyMarker': 'META/blob-642b6e0b-e2b6-48c0-b813-fa1fc6b1c450791a1974-9a16-450f-a721-722b7133099d1617901500524.refs', 'NextVersionIdMarker': '8PkM3_XQ66Z0aKtsRCx76oV4ZvpbF4.J', 'DeleteMarkers': [{'Owner': {'ID': '4c0b4ac898185165ae8d411f9504f560757bc2c12975bc4362beab1ef7ff48ef9348yovmq8y4tvn73284ny57ov8ua984yv57n8hu6o8au4pmvy5nouamtvvh5'}, 'Key': 'META/blob-14a66a39-6b4a-441c-a7a2-f4578149f2adfb6332c2-157e-4220-a5f0-faa9a451ca9c1621011821559.refs', 'VersionId': 'uL1jQNJnIHTFMmEsaGb.vK04G_BBZgUZ', 'IsLatest': True, 'LastModified': datetime.datetime(2021, 7, 28, 2, 6, 44, tzinfo=tzutc())}, {'Owner': {'ID': '4c0b4ac898185165ae8d411f9504f560757bc2c12975bc4362beab1ef7ff48ef9348yovmq8y4tvn73284ny57ov8ua984yv57n8hu6o8au4pmvy5nouamtvvh5'}, 'Key': 'META/blob-14a66a39-6b4a-441c-a7a2-f4578149f2adfcb01084-d675-46e2-8f7b-8af7e72ede911614708239101.refs', 'VersionId': 'qwQ9mQQuASj5URJCE1rV1aXGpyepez4d', 'IsLatest': True, 'LastModified': datetime.datetime(2021, 4, 15, 2, 3, 14, tzinfo=tzutc())}, {'Owner': {'ID': '4c0b4ac898185165ae8d411f9504f560757bc2c12975bc4362beab1ef7ff48ef9348yovmq8y4tvn73284ny57ov8ua984yv57n8hu6o8au4pmvy5nouamtvvh5'}, 'Key': 'META/blob-14a66a39-6b4a-441c-a7a2-f4578149f2adfd772a88-39e7-4ad0-8eb8-8ca55c4fbc3b1615313017232.refs', 'VersionId': 'lc16qRhgb0Y3d0GxPMWdIRZub4oPLFBQ', 'IsLatest': True, 'LastModified': 
datetime.datetime(2021, 4, 15, 2, 3, 14, tzinfo=tzutc())}, {'Owner': {'ID': '4c0b4ac898185165ae8d411f9504f560757bc2c12975bc4362beab1ef7ff48ef9348yovmq8y4tvn73284ny57ov8ua984yv57n8hu6o8au4pmvy5nouamtvvh5'}, 'Key': 'META/blob-14a66a39-6b4a-441c-a7a2-f4578149f2adfef13c46-994a-43c2-a0a5-70e5e0ab1af51616177023004.refs', 'VersionId': 'LgfI4YiKFSHabFyfCeu7dbiPq5_EF5xG', 'IsLatest': True, 'LastModified': datetime.datetime(2021, 4, 15, 2, 3, 14, tzinfo=tzutc())}, {'Owner': {'ID': '4c0b4ac898185165ae8d411f9504f560757bc2c12975bc4362beab1ef7ff48ef9348yovmq8y4tvn73284ny57ov8ua984yv57n8hu6o8au4pmvy5nouamtvvh5'}, 'Key': 'META/blob-14a66a39-6b4a-441c-a7a2-f4578149f2adff3a47fc-eff4-4d73-97cb-c87b1ee9fa721623690209551.refs', 'VersionId': 'JaNl3NHeWU6cpNRl8M3xYYwL5xhFW3P2', 'IsLatest': True, 'LastModified': datetime.datetime(2021, 7, 28, 2, 6, 44, tzinfo=tzutc())}, {'Owner': {'ID': '4c0b4ac898185165ae8d411f9504f560757bc2c12975bc4362beab1ef7ff48ef9348yovmq8y4tvn73284ny57ov8ua984yv57n8hu6o8au4pmvy5nouamtvvh5'}, 'Key': 'META/blob-14a66a39-6b4a-441c-a7a2-f4578149f2adffb10d89-ebb2-452a-a5d8-c7b2fb1fa7281626195834667.refs', 'VersionId': 'q3bC3T8fzuy1VJqpf.Vt3tDJy0fJjCWe', 'IsLatest': True, 'LastModified': datetime.datetime(2021, 7, 28, 2, 6, 45, tzinfo=tzutc())}, {'Owner': {'ID': '4c0b4ac898185165ae8d411f9504f560757bc2c12975bc4362beab1ef7ff48ef9348yovmq8y4tvn73284ny57ov8ua984yv57n8hu6o8au4pmvy5nouamtvvh5'}, 'Key': 'META/blob-14a66a39-6b4a-441c-a7a2-f4578149f2adffecc54d-7efe-490c-b177-e9fe801223e51626887084452.refs', 'VersionId': 'hlPplVp8P9sVKUQsU3wpKdlq3D1Ho0LS', 'IsLatest': True, 'LastModified': datetime.datetime(2021, 7, 28, 2, 6, 45, tzinfo=tzutc())}, {'Owner': {'ID': '4c0b4ac898185165ae8d411f9504f560757bc2c12975bc4362beab1ef7ff48ef9348yovmq8y4tvn73284ny57ov8ua984yv57n8hu6o8au4pmvy5nouamtvvh5'}, 'Key': 'META/blob-44171f0d-3914-4e91-89d1-e626a6a4c97f00d505c5-0da5-4c20-9837-f68b0da3998a1583263322042.refs', 'VersionId': 'fryHGZwi7e25JdhUHdOsu92OtjSN7hz4', 'IsLatest': True, 'LastModified': 
datetime.datetime(2020, 3, 4, 2, 0, 3, tzinfo=tzutc())}, {'Owner': {'ID': '4c0b4ac898185165ae8d411f9504f560757bc2c12975bc4362beab1ef7ff48ef9348yovmq8y4tvn73284ny57ov8ua984yv57n8hu6o8au4pmvy5nouamtvvh5'}, 'Key': 'META/blob-44171f0d-3914-4e91-89d1-e626a6a4c97f00d505c5-0da5-4c20-9837-f68b0da3998a1583306522024.refs', 'VersionId': 'HIQ1pbyHHIUXSFULXwWkAvZL0qJFPJ21', 'IsLatest': True, 'LastModified': datetime.datetime(2020, 3, 5, 2, 0, 35, tzinfo=tzutc())}, {'Owner': {'ID': '4c0b4ac898185165ae8d411f9504f560757bc2c12975bc4362beab1ef7ff48ef9348yovmq8y4tvn73284ny57ov8ua984yv57n8hu6o8au4pmvy5nouamtvvh5'}, 'Key': 'META/blob-44171f0d-3914-4e91-89d1-e626a6a4c97f084a906d-bf84-4e97-ba2c-c1f7da5b52f21590191463794.refs', 'VersionId': 'wnCNcISNgqEA2MxsOrfvx_x.OHVpMOOO', 'IsLatest': True, 'LastModified': datetime.datetime(2020, 5, 23, 2, 0, 3, tzinfo=tzutc())}, {'Owner': {'ID': '4c0b4ac898185165ae8d411f9504f560757bc2c12975bc4362beab1ef7ff48ef9348yovmq8y4tvn73284ny57ov8ua984yv57n8hu6o8au4pmvy5nouamtvvh5'}, 'Key': 'META/blob-44171f0d-3914-4e91-89d1-e626a6a4c97f084a906d-bf84-4e97-ba2c-c1f7da5b52f21590234663791.refs', 'VersionId': 'DcbAcNLgCgyU_QJ9NSwZa3lgCW3ebNF3', 'IsLatest': True, 'LastModified': datetime.datetime(2020, 5, 24, 2, 0, 4, tzinfo=tzutc())}, {'Owner': {'ID': '4c0b4ac898185165ae8d411f9504f560757bc2c12975bc4362beab1ef7ff48ef9348yovmq8y4tvn73284ny57ov8ua984yv57n8hu6o8au4pmvy5nouamtvvh5'}, 'Key': 'META/blob-44171f0d-3914-4e91-89d1-e626a6a4c97f084a906d-bf84-4e97-ba2c-c1f7da5b52f21590277863791.refs', 'VersionId': 'CVShkB9mof22YtmiDLnOTVY87hXJS6dm', 'IsLatest': True, 'LastModified': datetime.datetime(2020, 5, 24, 2, 0, 4, tzinfo=tzutc())}, {'Owner': {'ID': '4c0b4ac898185165ae8d411f9504f560757bc2c12975bc4362beab1ef7ff48ef9348yovmq8y4tvn73284ny57ov8ua984yv57n8hu6o8au4pmvy5nouamtvvh5'}

Expectations

  • Generate a warning that the metadata object is not versioned and will therefore be skipped.
  • Implement an additional option to filter out certain prefixes (e.g. META/).
  • Other ideas?
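A sketch of what the requested behaviour could look like (hypothetical `iter_versions` helper, not the tool's code): warn and skip pages that lack a Versions key, and exit only if no page had one:

```python
import sys

def iter_versions(page_iterator):
    """Yield version records, warning instead of exiting on pages that
    contain only delete markers or unversioned metadata objects.
    Sketch of the requested behaviour, not the tool's actual code."""
    found_any = False
    for page in page_iterator:
        if "Versions" not in page:
            print("warning: page has no Versions key, skipping ...", file=sys.stderr)
            continue
        found_any = True
        yield from page["Versions"]
    if not found_any:
        print("No versions matching criteria, exiting ...", file=sys.stderr)
        sys.exit(1)

# Demo with fake pages: the delete-marker-only page is skipped,
# the versioned page is used.
pages = [{"DeleteMarkers": []}, {"Versions": [{"Key": "a", "VersionId": "v1"}]}]
print([v["Key"] for v in iter_versions(iter(pages))])  # ['a']
```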

Execution Details

s3-pit-restore version: The one created with python3 setup.py install
How I run it: s3-pit-restore --dry-run --verbose -b <source_bucket> B <destination_bucket> -t "02-20-2021 14:15:00 +1"

Objects larger than 5GB cannot be restored & the error message is misleading

When attempting to PIT restore files today I came across this error:

"2021-04-30 13:45:31+00:00" l035JNl1kPeWvKi1Zftbxs0ulNx59SqH 0 STANDARD VREAnalysisResult/69e521de-8942-4296-ae4f-a6252c240563/average_power_quantiles_by_generator_1H.nc ERROR: An error occurred (InvalidRequest) when calling the CopyObject operation: The specified copy source is larger than the maximum allowable size for a copy source: 5368709120

However, the file in question isn't above 5 GB (the limit for copy_object); it's only 6 MB. It appears that the file that actually failed to restore is VREAnalysisResult/65424f29-4f5e-4a9b-9a3a-107d08e6d99e/average_power_quantiles_by_generator_10T.nc (which is 5.7 GB), which makes the error a little misleading.

This error is caused by the use of Boto3's copy_object instead of copy, apparently.

Doesn't work with Glacier/Deep archive objects that have a delete Marker

I have found when trying to use the tool to do the actual restore request that it doesn't seem to always work.

When attempting to restore Glacier/Deep Archive objects, I have been getting "botocore.exceptions.ClientError: An error occurred (404) when calling the HeadObject operation: Not Found", mostly due to objects in Glacier that have a delete marker.

This seems to be due to:

https://github.com/madisoft/s3-pit-restore/blob/master/s3-pit-restore#L199

The Object resource (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#object) has no option for a version. The code is likely looking up the object to get its restore value, but it can't find it since a version can't be specified.

Might consider dropping the actual restore process and just use the tool to identify what needs to be restored and recommend s3 batch operations for Glacier/DeepArchive (which is how I am currently using the tool).

Option to use "delete versions" instead of create a new version of object?

Let say we have a bucket with a file abc.jpg, with 3 versions of 2021-01-01, 2022-01-01 and 2023-01-01.
Current version is 2023-01-01 and I want to restore to the 2022-01-01 version.
Using s3-pit-restore on the same bucket and same folder, it eventually created a new version based on 2022-01-01 and overwrote the current file.
What we want to do is to remove the 2023-01-01 version instead of creating a new version, so there will be only 2 versions of abc.jpg (2021-01-01 and 2022-01-01).

Is that possible?
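The tool doesn't do this today, but the rollback-by-deletion idea can be sketched: select the versions newer than the target and delete each one by VersionId (e.g. with boto3's delete_object, which permanently removes that version). A minimal, illustrative selection helper (hypothetical names throughout):

```python
from datetime import datetime, timezone

def versions_to_delete(versions, target):
    """Versions strictly newer than `target`; deleting these by VersionId
    (e.g. s3.delete_object(Bucket=..., Key=..., VersionId=...)) would make
    the target-time version current again. Illustrative sketch only."""
    return [v for v in versions if v["LastModified"] > target]

versions = [
    {"VersionId": "v2021", "LastModified": datetime(2021, 1, 1, tzinfo=timezone.utc)},
    {"VersionId": "v2022", "LastModified": datetime(2022, 1, 1, tzinfo=timezone.utc)},
    {"VersionId": "v2023", "LastModified": datetime(2023, 1, 1, tzinfo=timezone.utc)},
]
target = datetime(2022, 6, 1, tzinfo=timezone.utc)
print([v["VersionId"] for v in versions_to_delete(versions, target)])  # ['v2023']
```

Unlike copying an old version on top, this is destructive: the deleted versions cannot be recovered afterwards.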

Incorrect docker command example in README

The docker command in the README seems to be incorrect:

docker run -ti --rm --name=s3-pit-restore -v {$PWD}/restore:/tmp -e AWS_ACCESS_KEY_ID=[AWS_ACCESS_KEY_ID] -e AWS_SECRET_ACCESS_KEY=[AWS_ACCESS_KEY_ID] angelocompagnucci/s3-pit-restore:latest s3-pit-restore -b [Bucket] -p [Prefix] -d /tmp -t "01-25-2018 10:59:50 +2"

results in s3-pit-restore: error: unrecognized arguments: s3-pit-restore

Replacement for a backup tool?

Hi guys,

Great project! Sorry to post a question here, but I was wondering if this + rclone could be a viable backup solution? We're currently doing incremental backups to an S3 bucket via duplicati. But if this can reliably restore to a point in time, it seems like a much cleaner solution.

Are there any issues with large numbers of files, file sizes etc? Or anything else that would make it unsuitable for backups?

Cheers

docker build fails on alpine 3.7

The pip install command doesn't work (anymore) in the alpine 3.7 base image:

[+] Building 21.1s (5/5) FINISHED                                                                                                                              
 => [internal] load build definition from Dockerfile                                                                                                      0.0s
 => => transferring dockerfile: 724B                                                                                                                      0.0s
 => [internal] load .dockerignore                                                                                                                         0.1s
 => => transferring context: 56B                                                                                                                          0.0s
 => [internal] load metadata for docker.io/library/python:3-alpine3.7                                                                                     1.7s
 => [1/2] FROM docker.io/library/python:3-alpine3.7@sha256:35f6f83ab08f98c727dbefd53738e3b3174a48b4571ccb1910bae480dcdba847                               4.7s
 => => resolve docker.io/library/python:3-alpine3.7@sha256:35f6f83ab08f98c727dbefd53738e3b3174a48b4571ccb1910bae480dcdba847                               0.0s
 => => sha256:00be2573e9f79754b17954ba7a310a5f70c25b8f5bb78375e27e9e86d874877e 6.13kB / 6.13kB                                                            0.0s
 => => sha256:48ecbb6b270eb481cb6df2a5b0332de294ec729e1968e92d725f1329637ce01b 2.11MB / 2.11MB                                                            0.5s
 => => sha256:692f29ee68fa6bab04aa6a1c6d8db0ad44e287e5ff5c7e1d5794c3aabc55884d 308.48kB / 308.48kB                                                        0.3s
 => => sha256:6439819450d10d1aae92561f3ffff722137aada46d509644e8de4ca82bb26b07 25.90MB / 25.90MB                                                          3.7s
 => => sha256:35f6f83ab08f98c727dbefd53738e3b3174a48b4571ccb1910bae480dcdba847 2.04kB / 2.04kB                                                            0.0s
 => => sha256:014f52b0e7ae4fcd43201bfa4c4e0320c8517b611d7daa0e41ba33a0cb1fab80 1.37kB / 1.37kB                                                            0.0s
 => => sha256:3c7be240f7bfb19ec575d8547832a9f20b95eec9b4cc94fe717dd047ad661159 230B / 230B                                                                0.5s
 => => sha256:ca4b349df8ed83a59776df8f3868ece2783aa1ee2e9f052c9c9f3b54ae51a593 1.81MB / 1.81MB                                                            1.0s
 => => extracting sha256:48ecbb6b270eb481cb6df2a5b0332de294ec729e1968e92d725f1329637ce01b                                                                 0.2s
 => => extracting sha256:692f29ee68fa6bab04aa6a1c6d8db0ad44e287e5ff5c7e1d5794c3aabc55884d                                                                 0.1s
 => => extracting sha256:6439819450d10d1aae92561f3ffff722137aada46d509644e8de4ca82bb26b07                                                                 0.5s
 => => extracting sha256:3c7be240f7bfb19ec575d8547832a9f20b95eec9b4cc94fe717dd047ad661159                                                                 0.0s
 => => extracting sha256:ca4b349df8ed83a59776df8f3868ece2783aa1ee2e9f052c9c9f3b54ae51a593                                                                 0.1s
 => ERROR [2/2] RUN pip3 --no-cache-dir install s3-pit-restore awscli                                                                                    14.6s
------                                                                                                                                                         
 > [2/2] RUN pip3 --no-cache-dir install s3-pit-restore awscli:                                                                                                
#5 1.411 Collecting s3-pit-restore                                                                                                                             
#5 1.813   Downloading https://files.pythonhosted.org/packages/dd/e0/386a00877de74e63c74d7a507ee0d2252b7d799e6b97d20c44d85170d3cb/s3-pit-restore-0.9.tar.gz    
#5 2.141 Collecting awscli                                                                                                                                     
#5 3.166   Downloading https://files.pythonhosted.org/packages/00/34/1565d40bfbedaa6fd7f73890abe9e40132572000ce4373cbb84fb9336ea4/awscli-1.27.120-py3-none-any.whl (4.1MB)
#5 4.366 Collecting boto3 (from s3-pit-restore)
#5 5.138   Downloading https://files.pythonhosted.org/packages/c0/d6/2b64714f43fd5fb29e8231b47f6d1994773ddec170384c61dc04c72ba4a2/boto3-1.26.120-py3-none-any.whl (135kB)
#5 5.185 Collecting botocore==1.29.120 (from awscli)
#5 6.132   Downloading https://files.pythonhosted.org/packages/69/28/a9abd92153dbbaec1c4560df0a58aac0079c2c54872cc504016c9df21bf5/botocore-1.29.120-py3-none-any.whl (10.7MB)
#5 7.688 Collecting PyYAML<5.5,>=3.10 (from awscli)
#5 7.805   Downloading https://files.pythonhosted.org/packages/a0/a4/d63f2d7597e1a4b55aa3b4d6c5b029991d3b824b5bd331af8d4ab1ed687d/PyYAML-5.4.1.tar.gz (175kB)
#5 7.989   Installing build dependencies: started
#5 11.16   Installing build dependencies: finished with status 'done'
#5 11.16   Getting requirements to build wheel: started
#5 12.74   Getting requirements to build wheel: finished with status 'done'
#5 12.74     Preparing wheel metadata: started
#5 13.02     Preparing wheel metadata: finished with status 'done'
#5 13.03 Collecting s3transfer<0.7.0,>=0.6.0 (from awscli)
#5 13.07   Downloading https://files.pythonhosted.org/packages/5e/c6/af903b5fab3f9b5b1e883f49a770066314c6dcceb589cf938d48c89556c1/s3transfer-0.6.0-py3-none-any.whl (79kB)
#5 13.16 Collecting rsa<4.8,>=3.1.2 (from awscli)
#5 13.22   Downloading https://files.pythonhosted.org/packages/e9/93/0c0f002031f18b53af7a6166103c02b9c0667be528944137cc954ec921b3/rsa-4.7.2-py3-none-any.whl
#5 13.24 Collecting docutils<0.17,>=0.10 (from awscli)
#5 13.30   Downloading https://files.pythonhosted.org/packages/81/44/8a15e45ffa96e6cf82956dd8d7af9e666357e16b0d93b253903475ee947f/docutils-0.16-py2.py3-none-any.whl (548kB)
#5 13.41 Collecting colorama<0.4.5,>=0.2.5 (from awscli)
#5 13.47   Downloading https://files.pythonhosted.org/packages/44/98/5b86278fbbf250d239ae0ecb724f8572af1c91f4a11edf4d36a206189440/colorama-0.4.4-py2.py3-none-any.whl
#5 13.48 Collecting jmespath<2.0.0,>=0.7.1 (from boto3->s3-pit-restore)
#5 13.53   Downloading https://files.pythonhosted.org/packages/31/b4/b9b800c45527aadd64d5b442f9b932b00648617eb5d63d2c7a6587b7cafc/jmespath-1.0.1-py3-none-any.whl
#5 13.54 Collecting urllib3<1.27,>=1.25.4 (from botocore==1.29.120->awscli)
#5 13.63   Downloading https://files.pythonhosted.org/packages/7b/f5/890a0baca17a61c1f92f72b81d3c31523c99bec609e60c292ea55b387ae8/urllib3-1.26.15-py2.py3-none-any.whl (140kB)
#5 13.68 Collecting python-dateutil<3.0.0,>=2.1 (from botocore==1.29.120->awscli)
#5 13.73   Downloading https://files.pythonhosted.org/packages/36/7a/87837f39d0296e723bb9b62bbb257d0355c7f6128853c78955f57342a56d/python_dateutil-2.8.2-py2.py3-none-any.whl (247kB)
#5 13.78 Collecting pyasn1>=0.1.3 (from rsa<4.8,>=3.1.2->awscli)
#5 13.88   Downloading https://files.pythonhosted.org/packages/14/e5/b56a725cbde139aa960c26a1a3ca4d4af437282e20b5314ee6a3501e7dfc/pyasn1-0.5.0-py2.py3-none-any.whl (83kB)
#5 13.90 Collecting six>=1.5 (from python-dateutil<3.0.0,>=2.1->botocore==1.29.120->awscli)
#5 13.94   Downloading https://files.pythonhosted.org/packages/d9/5a/e7c31adbe875f2abbb91bd84cf2dc52d792b5a01506781dbcf25c91daf11/six-1.16.0-py2.py3-none-any.whl
#5 14.28 Exception:
#5 14.28 Traceback (most recent call last):
#5 14.28   File "/usr/local/lib/python3.7/site-packages/pip/_internal/cli/base_command.py", line 176, in main
#5 14.28     status = self.run(options, args)
#5 14.28   File "/usr/local/lib/python3.7/site-packages/pip/_internal/commands/install.py", line 346, in run
#5 14.28     session=session, autobuilding=True
#5 14.28   File "/usr/local/lib/python3.7/site-packages/pip/_internal/wheel.py", line 886, in build
#5 14.28     assert have_directory_for_build
#5 14.28 AssertionError
#5 14.41 You are using pip version 19.0.1, however version 23.1.2 is available.
#5 14.41 You should consider upgrading via the 'pip install --upgrade pip' command.
------
Dockerfile:16
--------------------
  14 |               org.label-schema.schema-version="v0.9"
  15 |     
  16 | >>> RUN pip3 --no-cache-dir install s3-pit-restore awscli
  17 |     
  18 |     ENTRYPOINT [ "s3-pit-restore" ]
--------------------
error: failed to solve: process "/bin/sh -c pip3 --no-cache-dir install s3-pit-restore awscli" did not complete successfully: exit code: 2

Works with alpine:3.17 though.

--sse option doesn't work when installing via pip

Duplicate of #14

The --sse option doesn't work when installing from pip, resulting in error

s3-pit-restore: error: unrecognized arguments: --sse AES256

Recommended solution is to clone this repo and build from master branch.

git clone https://github.com/angeloc/s3-pit-restore.git && cd s3-pit-restore
python3 setup.py install

`--sse` option does not work

When i run the command with the --sse option I get the error:

s3-pit-restore: error: unrecognized arguments: s3-pit-restore --sse AWS:KMS

Is it possible to use for KMS encrypted data?
