Comments (10)

fenos commented on August 28, 2024

Are you running storage behind a reverse proxy?

If yes, you'd need to add the S3_PROTOCOL_PREFIX=/storage/v1 environment variable.
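
For reference, a minimal sketch of where that variable goes in the storage service of a docker-compose file (the image name here is an assumption; match it to your own setup):

storage:
  image: supabase/storage-api
  environment:
    # Only needed when clients reach storage through a proxied path prefix
    S3_PROTOCOL_PREFIX: /storage/v1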

Obeyed commented on August 28, 2024

Hey @fenos

I'm self-hosting and tried to follow the steps you wrote above, but I'm seeing the SignatureDoesNotMatch error:

{
    "raw": "{\"metadata\":{},\"code\":\"SignatureDoesNotMatch\",\"httpStatusCode\":403,\"userStatusCode\":400}",
    "name": "Error",
    "message": "The request signature we calculated does not match the signature you provided. Check your key and signing method.",
    "stack": "Error: The request signature we calculated does not match the signature you provided. Check your key and signing method.\n    at Object.SignatureDoesNotMatch (/app/dist/internal/errors/codes.js:140:39)\n    at Object.<anonymous> (/app/dist/http/plugins/signature-v4.js:72:34)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"
}

Standard and Resumable uploads work.

At one point I was getting the following error:

Missing S3 Protocol Access Key ID or Secret Key Environment variables

But that seems to have been my mistake in not restarting the containers properly, because I'm not seeing it anymore. Now I get the SignatureDoesNotMatch error.

Any pointers on what I could be doing wrong?

Is there anything more I can share that could help?

Storage config

These are the environment variables for the storage container:

STORAGE_BACKEND: s3

# Removing this seems to have no effect
# S3_PROTOCOL_PREFIX: /storage/v1
S3_PROTOCOL_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
S3_PROTOCOL_ACCESS_KEY_SECRET: ${AWS_SECRET_ACCESS_KEY}

AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}

AWS_DEFAULT_REGION: ${S3_REGION}
REGION: ${S3_REGION}

GLOBAL_S3_BUCKET: ${S3_GLOBAL_S3_BUCKET}

TENANT_ID: ${TENANT_ENVIRONMENT}
IS_MULTITENANT: "false"

TUS_URL_PATH: /upload/resumable
TUS_URL_EXPIRY_MS: 86400000
UPLOAD_SIGNED_URL_EXPIRATION_TIME: 86400000

Kong config

My Kong config for the storage-v1 path is as follows:

Notice the /storage/v1 in "Forwarded: host=$(headers.host)/storage/v1;proto=http".

## Storage routes: the storage server manages its own auth
- name: storage-v1
  _comment: "Storage: /storage/v1/* -> http://storage:5000/*"
  connect_timeout: 60
  write_timeout: 3600
  read_timeout: 3600
  url: http://storage:5000/
  routes:
    - name: storage-v1-all
      strip_path: true
      paths:
        - /storage/v1/
      request_buffering: false
      response_buffering: false
  plugins:
    - name: cors
    - name: request-transformer
      config:
        add:
          headers:
            - "Forwarded: host=$(headers.host)/storage/v1;proto=http"

Python Boto3

Here's how I'm doing the upload from a client application with Python and boto3. I couldn't find anything about using Python in the docs, so there was a lot of guessing and trying.

import os
import boto3
from botocore.config import Config

# Kong gateway URL
gateway_uri = os.getenv("GATEWAY_URI")

s3_client = boto3.client(
    "s3",
    region_name="eu-west-1",
    # assuming this requires the `/storage/v1` prefix, otherwise Kong can't route it
    endpoint_url=f"{gateway_uri}/storage/v1/s3",
    aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
    # `addressing_style: "path"` is boto3's equivalent of forcePathStyle
    config=Config(s3={"addressing_style": "path"}),
)

# then do an upload (file_path, bucket_name, and remote_file_path defined elsewhere)
s3_client.upload_file(
    Filename=file_path,
    Bucket=bucket_name,
    Key=remote_file_path,
)

fenos commented on August 28, 2024

Hello,
for a self-hosted Storage setup you'll need to pass the following env vars, which set the AWS credentials that you should use in your client:

https://github.com/supabase/storage/blob/master/docker-compose.yml#L46-L47
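
Those lines set the S3 protocol credentials that storage validates SigV4 signatures against; with the example values used later in this thread they look like:

S3_PROTOCOL_ACCESS_KEY_ID: supa-storage
S3_PROTOCOL_ACCESS_KEY_SECRET: secret1234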

Let me know if that works.

kallebysantos commented on August 28, 2024

Hello, for a self-hosted Storage setup you'll need to pass the following env vars, which set the AWS credentials that you should use in your client:

https://github.com/supabase/storage/blob/master/docker-compose.yml#L46-L47

Let me know if that works.

If you check the docker-compose that I provided above, it already has these vars. The problem is that when I call supabase-storage from the aws-sdk it throws an error.
Do I need to change my secrets to a specific value? The documentation is not clear about self-hosted projects.
How did you get these values? Do I need to create them from the Minio dashboard?

fenos commented on August 28, 2024

If you are not running behind a reverse proxy, don't set S3_PROTOCOL_PREFIX, and the URL you use to connect to storage shouldn't contain /storage/v1; use http://localhost:5000/s3, for example:

import { S3Client } from "@aws-sdk/client-s3";

const s3Client = new S3Client({
  logger: console,
  forcePathStyle: true,
  region: "stub",
  endpoint: "http://localhost:5000/s3",
  // endpoint: "http://localhost:9000", // direct MinIO endpoint, for comparison
  credentials: {
    accessKeyId: "supa-storage",
    secretAccessKey: "secret1234",
  },
});
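
A quick way to sanity-check that client (a sketch; ListBucketsCommand is a standard @aws-sdk/client-s3 command):

import { ListBucketsCommand } from "@aws-sdk/client-s3";

// If the endpoint and credentials are right, this round-trip succeeds;
// a SignatureDoesNotMatch here points at a host/prefix mismatch.
const { Buckets } = await s3Client.send(new ListBucketsCommand({}));
console.log(Buckets);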

kallebysantos commented on August 28, 2024

Are you running storage behind a reverse proxy?

If yes, you'd need to add the S3_PROTOCOL_PREFIX=/storage/v1 environment variable.

I'm running the default self-host template from the supabase repo, which uses Kong as the API gateway. I just cloned the repo and tried to call it with @aws-sdk.

Following your instructions I added the following to my docker-compose.s3.yml file, in the storage service:

# ...
S3_PROTOCOL_PREFIX: /storage/v1 # Added
# ...
S3_PROTOCOL_ACCESS_KEY_ID: supa-storage 
S3_PROTOCOL_ACCESS_KEY_SECRET: secret1234
AWS_ACCESS_KEY_ID: supa-storage
AWS_SECRET_ACCESS_KEY: secret1234

and then request with:

import { S3Client, CreateBucketCommand } from "@aws-sdk/client-s3";

const s3Client = new S3Client({
  forcePathStyle: true,
  region: "stub",
  endpoint: "http://localhost:5000/storage/v1/s3",
  credentials: {
    accessKeyId: "supa-storage",
    secretAccessKey: "secret1234",
  },
});

const createBucketRequest = new CreateBucketCommand({
  Bucket: "test-bucket",
  ACL: "public-read",
});

// Send the request (this is the call that returns the 403 below)
await s3Client.send(createBucketRequest);

But still got the same error.

Storage container logs
{
  "level": 40,
  "time": "2024-06-04T11:41:52.996Z",
  "pid": 1,
  "hostname": "6acfe7891859",
  "reqId": "req-1",
  "tenantId": "stub",
  "project": "stub",
  "type": "request",
  "req": {
    "region": "stub",
    "traceId": "req-1",
    "method": "PUT",
    "url": "/s3/test-bucket/",
    "headers": {
      "host": "storage:5000",
      "x_forwarded_proto": "http",
      "x_forwarded_host": "localhost",
      "x_forwarded_port": "8000",
      "x_real_ip": "172.20.0.1",
      "content_length": "186",
      "content_type": "application/xml",
      "user_agent": "aws-sdk-js/3.583.0 ua/2.0 os/linux#5.15.146.1-microsoft-standard-WSL2 lang/js md/nodejs#20.11.1 api/s3#3.583.0",
      "accept": "*/*"
    },
    "hostname": "storage:5000",
    "remoteAddress": "172.20.0.13",
    "remotePort": 39424
  },
  "res": {
    "statusCode": 403,
    "headers": {
      "content_type": "application/xml; charset=utf-8",
      "content_length": "268"
    }
  },
  "responseTime": 1094.2835690006614,
  "error": {
    "raw": "{\"metadata\":{},\"code\":\"SignatureDoesNotMatch\",\"httpStatusCode\":403,\"userStatusCode\":400}",
    "name": "Error",
    "message": "The request signature we calculated does not match the signature you provided. Check your key and signing method.",
    "stack": "Error: The request signature we calculated does not match the signature you provided. Check your key and signing method.\n    at Object.SignatureDoesNotMatch (/app/dist/storage/errors.js:107:41)\n    at Object.<anonymous> (/app/dist/http/plugins/signature-v4.js:36:36)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"
  },
  "resources": [
    "/test-bucket"
  ],
  "msg": "stub | PUT | 403 | 172.20.0.13 | req-1 | /s3/test-bucket/ | aws-sdk-js/3.583.0 ua/2.0 os/linux#5.15.146.1-microsoft-standard-WSL2 lang/js md/nodejs#20.11.1 api/s3#3.583.0"
}
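
Note the host ("storage:5000") and url ("/s3/test-bucket/") that storage saw, versus the host and path the client actually signed ("localhost:5000" and "/storage/v1/s3/test-bucket/"). A much-simplified sketch of why that produces SignatureDoesNotMatch (real SigV4 also covers the query string, more headers, timestamps, and the payload hash):

// Simplified canonical request: method, path, and Host header only.
function canonicalRequest(method: string, path: string, host: string): string {
  return [method, path, `host:${host}`].join("\n");
}

const signedByClient = canonicalRequest("PUT", "/storage/v1/s3/test-bucket/", "localhost:5000");
const recomputedByStorage = canonicalRequest("PUT", "/s3/test-bucket/", "storage:5000");
console.log(signedByClient === recomputedByStorage); // false -> SignatureDoesNotMatch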

fenos commented on August 28, 2024

Right! If you use Kong you'll need to add this snippet to the storage route:
https://github.com/supabase/cli/blob/develop/internal/start/templates/kong.yml#L111-L115
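
That snippet is the same request-transformer plugin shown in the Kong config earlier in this thread; adapted to this setup it looks roughly like this (host and prefix may differ in your setup):

plugins:
  - name: request-transformer
    config:
      add:
        headers:
          - "Forwarded: host=$(headers.host)/storage/v1;proto=http"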

fenos commented on August 28, 2024

Just to be more specific, you'll need to update this line in the supabase repo you have linked: https://github.com/supabase/supabase/blob/master/docker/volumes/api/kong.yml#L64

The host will be the host you use to access Kong (in development it will be localhost), and the port will be the port you access Kong on, e.g. 5000.
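
Putting that together, a client sketch (assuming Kong is exposed at localhost:8000, which matches the x_forwarded_port in the logs above; substitute whatever host and port you actually reach Kong on):

import { S3Client } from "@aws-sdk/client-s3";

// The endpoint host:port must be the one the client reaches Kong on,
// because SigV4 signs the Host header that storage later reconstructs
// from the Forwarded header Kong injects.
const s3Client = new S3Client({
  forcePathStyle: true,
  region: "stub",
  endpoint: "http://localhost:8000/storage/v1/s3",
  credentials: {
    accessKeyId: "supa-storage",
    secretAccessKey: "secret1234",
  },
});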

@kallebysantos let me know if this works for you; I'm quite confident it will, since we use the same setup in the supabase CLI.

kallebysantos commented on August 28, 2024

Just to be more specific, you'll need to update this line in the supabase repo you have linked: https://github.com/supabase/supabase/blob/master/docker/volumes/api/kong.yml#L64

The host will be the host you use to access Kong (in development it will be localhost), and the port will be the port you access Kong on, e.g. 5000.

@kallebysantos let me know if this works for you; I'm quite confident it will, since we use the same setup in the supabase CLI.

So I need to update both the storage and auth sections, right?

I followed the other steps you provided but it didn't work; I'll try to redo everything in a fresh installation of Supabase.

Thank you, I'll let you know 🙏

I just don't have much time right now; someone on my team decided not to move forward with Supabase on this project 😭😭 and I need to redo a whole month of work in another stack.

But this S3 feature will be very useful in other projects.

fenos commented on August 28, 2024

So I need to update both the storage and auth sections, right?

Only the storage one.

It will work since we use the same setup on the supabase cli.

I'll be closing this issue for now; however, if you need any more help on this, please feel free to comment below and I can re-open it.
