Comments (8)
Hmm, this seems cryptic.

> Although the error message refers to the me-south-1 region, the AWS region is US West (Northern California) us-west-1

Just to make sure I understand: does this mean your S3 bucket is located in `us-west-1`, not `me-south-1`?
Could you maybe try explicitly setting a region-specific `AWS_ENDPOINT` as per https://docs.aws.amazon.com/general/latest/gr/s3.html and see if that makes a difference? E.g. `AWS_ENDPOINT="s3.us-west-1.amazonaws.com"`.
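For illustration, a minimal sketch of what that could look like in the backup service's compose file (only the `AWS_ENDPOINT` line is the suggestion here; the service name is a placeholder and your existing image, credential and volume settings are assumed to stay as they are):

```yml
services:
  backup:
    # ...existing image, volumes and credential variables stay unchanged
    environment:
      # region-specific endpoint from the AWS endpoint list linked above
      AWS_ENDPOINT: "s3.us-west-1.amazonaws.com"
```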
I just tried reproducing this and I am wondering if the `admin info` output is a red herring because of the following:

When trying to back up against a newly created AWS S3 bucket in `us-west-1` with the configuration you provided, I am:

- able to create a successful backup to that bucket
- seeing the same error message when running `mc admin info backup-target --debug` no matter if I set `AWS_ENDPOINT` or not

This makes me think `mc admin info` does strange things when working against an S3 bucket, but it's not necessarily related to your backup failing.

As a next step, could I suggest setting `MC_GLOBAL_OPTIONS="--debug"` and trying to run the backup itself? Maybe this gives us a hint about what's going on here.
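In case it helps, a sketch of how that could look in the same service definition (everything else is assumed to stay as in your current configuration):

```yml
services:
  backup:
    # ...existing configuration unchanged
    environment:
      # debug flag for the mc calls the backup runs, as suggested above
      MC_GLOBAL_OPTIONS: "--debug"
```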
Hello @m90, thanks for the quick reply!
> Hmm, this seems cryptic.
>
> > Although the error message refers to the me-south-1 region, the AWS region is US West (Northern California) us-west-1
>
> Just to make sure I understand: does this mean your S3 bucket is located in `us-west-1`, not `me-south-1`?
Yes, it is!
> Could you maybe try explicitly setting a region-specific `AWS_ENDPOINT` as per https://docs.aws.amazon.com/general/latest/gr/s3.html and see if that makes a difference? E.g. `AWS_ENDPOINT="s3.us-west-1.amazonaws.com"`.
I did this test and it didn't work either.
> I just tried reproducing this and I am wondering if the `admin info` output is a red herring because of the following:
>
> When trying to back up against a newly created AWS S3 bucket in `us-west-1` with the configuration you provided, I am:
>
> - able to create a successful backup to that bucket
> - seeing the same error message when running `mc admin info backup-target --debug` no matter if I set `AWS_ENDPOINT` or not
It really is a red herring!
> This makes me think `mc admin info` does strange things when working against an S3 bucket, but it's not necessarily related to your backup failing.
>
> As a next step, could I suggest setting `MC_GLOBAL_OPTIONS="--debug"` and trying to run the backup itself? Maybe this gives us a hint about what's going on here.
It was a great idea to set `MC_GLOBAL_OPTIONS="--debug"`. Running the backup with `--debug` enabled, I get several responses like this one:
```
mc: <DEBUG> HTTP/1.1 200 OK
Connection: close
Content-Type: application/xml
Date: Wed, 18 Aug 2021 12:37:14 GMT
Server: AmazonS3
X-Amz-Access-Point-Alias: false
X-Amz-Bucket-Region: us-west-1
X-Amz-Id-2: aFp23kaxEkf/VGV3ZZsUFpqJ3wTFUOB61kqiAz0vLqxcZM0HoEhtikOadOj/7BYdlINKt4NfG5Y=
X-Amz-Request-Id: GVXWJ18KRJRV0HN6
```
But one of them is an error:
```
mc: <DEBUG> GET /bucketname/?object-lock= HTTP/1.1
Host: s3.dualstack.us-west-1.amazonaws.com
User-Agent: MinIO (linux; amd64) minio-go/v7.0.11 mc/RELEASE.2021-06-13T17-48-22Z
Authorization: AWS4-HMAC-SHA256 Credential=**REDACTED**/20210818/us-west-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20210818T123713Z
Accept-Encoding: gzip
mc: <DEBUG> HTTP/1.1 404 Not Found
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Wed, 18 Aug 2021 12:37:13 GMT
Server: AmazonS3
X-Amz-Id-2: vZLZIDs3uFZlziisQZQdEt4sGDMdo7Joq4zkDkwbho3gH1aDUVioTk+FcVzSbcdIWzbZ9M7Rars=
X-Amz-Request-Id: GVXHJ6PF92E54RH5
162
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>ObjectLockConfigurationNotFoundError</Code><Message>Object Lock configuration does not exist for this bucket</Message><BucketName>bucketname</BucketName><RequestId>GVXHJ6PF92E54RH5</RequestId><HostId>vZLZIDs3uFZlziisQZQdEt4sGDMdo7Joq4zkDkwbho3gH1aDUVioTk+FcVzSbcdIWzbZ9M7Rars=</HostId></Error>
0
```
Do you recognize this response?
I think the ObjectLock request returning a 404 is expected, at least that's what I am seeing as well when I test the setup.
Following this request, I will see another two requests being logged in the successful case:
- HEAD / HTTP/1.1
- PUT /test.tar.gz HTTP/1.1
after which the backup has successfully finished.
Do you see any of these in your case too or does it just stop after the 404 you mentioned above?
> I think the ObjectLock request returning a 404 is expected, at least that's what I am seeing as well when I test the setup.
>
> Following this request, I will see another two requests being logged in the successful case:
>
> - HEAD / HTTP/1.1
> - PUT /test.tar.gz HTTP/1.1
>
> after which the backup has successfully finished.
>
> Do you see any of these in your case too or does it just stop after the 404 you mentioned above?
After the ObjectLock 404, the responses that follow are successful, but nothing arrives in S3:
```
mc: <DEBUG> HEAD /bucketname/ HTTP/1.1
Host: s3.dualstack.us-west-1.amazonaws.com
User-Agent: MinIO (linux; amd64) minio-go/v7.0.11 mc/RELEASE.2021-06-13T17-48-22Z
Authorization: AWS4-HMAC-SHA256 Credential=**REDACTED**/20210818/us-west-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c4429x5c1c149afbf4c3496fb92427ae82e4649b934ca495991b7852b855
X-Amz-Date: 20210818T172043Z
mc: <DEBUG> HTTP/1.1 200 OK
Connection: close
Content-Type: application/xml
Date: Wed, 18 Aug 2021 17:20:44 GMT
Server: AmazonS3
X-Amz-Access-Point-Alias: false
X-Amz-Bucket-Region: us-west-1
X-Amz-Id-2: NPxc5uPZ1uVR7XsdKV2KwOCXw/BhJMGCMAxqDQz3bdawS15bDtHAXmjS3k3Jwh4ITx7IPDAgUyw=
X-Amz-Request-Id: P4PEQF4FXEXY1TX8
mc: <DEBUG> TLS Certificate found:
mc: <DEBUG> >> Country: US
mc: <DEBUG> >> Organization: DigiCert Inc
mc: <DEBUG> >> Expires: 2022-07-24 23:59:59 +0000 UTC
mc: <DEBUG> TLS Certificate found:
mc: <DEBUG> >> Country: IE
mc: <DEBUG> >> Organization: Baltimore
mc: <DEBUG> >> Expires: 2025-05-10 12:00:00 +0000 UTC
mc: <DEBUG> Response Time: 6.518198ms
`backupname-2021-08-18T17-19-15.tar.gz` -> `backup-target/bucketname/backupname-2021-08-18T17-19-15.tar.gz`
mc: <DEBUG> POST /bucketname/backupname-2021-08-18T17-19-15.tar.gz?uploads= HTTP/1.1
Host: s3.dualstack.us-west-1.amazonaws.com
User-Agent: MinIO (linux; amd64) minio-go/v7.0.11 mc/RELEASE.2021-06-13T17-48-22Z
Content-Length: 0
Authorization: AWS4-HMAC-SHA256 Credential=**REDACTED**/20210818/us-west-1/s3/aws4_request, SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
Content-Type: application/gzip
X-Amz-Content-Sha256: UNSIGNED-PAYLOAD
X-Amz-Date: 20210818T172043Z
Accept-Encoding: gzip
mc: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Date: Wed, 18 Aug 2021 17:20:44 GMT
Server: AmazonS3
X-Amz-Id-2: T1Ja1PG6otffBGeE2WiGQzRIIUVXi/Q4MAGWlufp7W2i8wr15edn7NeumwrwW6xyGQQiPQ=
X-Amz-Request-Id: P4PF0FDHNF47F23V
mc: <DEBUG> TLS Certificate found:
mc: <DEBUG> >> Country: US
mc: <DEBUG> >> Organization: DigiCert Inc
mc: <DEBUG> >> Expires: 2022-07-24 23:59:59 +0000 UTC
mc: <DEBUG> TLS Certificate found:
mc: <DEBUG> >> Country: IE
mc: <DEBUG> >> Organization: Baltimore
mc: <DEBUG> >> Expires: 2025-05-10 12:00:00 +0000 UTC
mc: <DEBUG> Response Time: 27.745137ms
Killed
Upload finished.
[INFO] Cleaning up
removed 'backupname-2021-08-18T17-19-15.tar.gz'
[INFO] Backup finished
Will wait for next scheduled backup.
```
One thing that may be important: the volume tested above is almost 2 GB.
In another test, with a 120.8 MB volume, the ObjectLock error also appeared, but the upload succeeded.
It's kind of odd: I am seeing the very same output, just not the line that says `Killed`.

> One thing that may be important: the volume tested above is almost 2 GB.

I wonder if the `Killed` is coming from the 25M memory limit here. I have been using the same setup in production for a while now, but the biggest backup I am doing currently is roughly 1GB. Removing the memory limit should not have a real-world impact, although it might create scary metrics when looking at `docker stats`: #9
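For reference, a hedged sketch of what raising the cap (rather than dropping it) could look like; the exact key depends on whether you run plain compose (`mem_limit`) or swarm mode (`deploy.resources.limits.memory`), and the value below is only an assumption sized for a backup of roughly 2 GB:

```yml
services:
  backup:
    # ...existing configuration unchanged
    deploy:
      resources:
        limits:
          # raised from the 25M in the example configuration; leave enough headroom
          # for mc to buffer the multipart upload of the ~2 GB archive
          memory: 512M
```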
Yeah! I removed the memory limit and it worked!
But it still seems sensible to me to set a limit for such a heavy task.
> I wonder if the `Killed` is coming from the 25M memory limit here. I have been using the same setup in production for a while now, but the biggest backup I am doing currently is roughly 1GB. Removing the memory limit should not have a real-world impact, although it might create scary metrics when looking at `docker stats`: #9
I will take a careful look at that thread.
In any case, I really appreciate your attention, thank you very much!
Glad to know you've got it working. I'll add a note to the documentation about potentially having to increase this limit then.
Also, thank you for your detailed reports, this has been very helpful 🆒