
arsenal's People

Contributors

adrienverge, alexanderchan-scality, alexandre-merle, anurag4dsb, benzekrimaha, bert-e, bourgoismickael, dependabot[bot], dora-korpar, electrachong, francoisferrand, ghivert, ilkescality, ironman-machine, jmunoznaranjo, jonathan-gramain, kerkesni, killiang, laurenspiegel, miniscruff, nicolas2bert, philipyoo, ploki, rachedbenmustapha, rahulreddy, tcarmet, thibaultriviere, tmacro, vrancurel, williamlardier


arsenal's Issues

top level keys in policies are not whitelisted

The policy validator should return false for this policy:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:ListAllMyBuckets", "s3:PutBucket"],
        "Resource": "arn:aws:s3:::*"
    }],
    "Condition": {"DateGreaterThan": { "aws:CurrentTime": "2013-06-30T00:00:00Z"}}
}

In the above policy, Condition is not a valid top-level key; a Condition element belongs inside a Statement.
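A minimal sketch of the missing whitelist check (the function and the allowed set are illustrative, not Arsenal's actual validator code):

```javascript
// Hypothetical top-level key whitelist for a policy document.
// Any key outside this set should make validation fail.
const ALLOWED_TOP_LEVEL_KEYS = new Set(['Version', 'Statement', 'Id']);

function hasOnlyAllowedTopLevelKeys(policy) {
    return Object.keys(policy).every(key => ALLOWED_TOP_LEVEL_KEYS.has(key));
}
```

With this check, the policy above would be rejected because of its top-level Condition key.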

incorrect authorization header: errors incompatible with AWS

When the Authorization header is incorrect, we return errors that are incompatible with AWS.
On ceph/s3-tests, the following tests fail:

on bucket:

FAIL: s3tests.functional.test_headers.test_bucket_create_bad_authorization_unreadable
expect <Code>AccessDenied</Code><Message>Anonymous access is forbidden for this operation</Message>

on object:

FAIL: s3tests.functional.test_headers.test_object_create_bad_authorization_unreadable
expect <Code>AccessDenied</Code><Message>Access Denied</Message>

COMPAT: Increase Signature V2 Expires param

  • In AWS, there is no real Signature Version 2 expiration limitation (Expires values up to 2038 are accepted).

  • Our current limitation is 1 hour and 1 second.

  • To pass the ceph tests, we should accept an Expires parameter up to 100 000 000 ms (about 1 day and 4 hours) in the future.

  • For security reasons, we don't want to mirror AWS (i.e. no limitation at all).
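The proposed check could be sketched like this (names are hypothetical; only the 100 000 000 ms window comes from the issue):

```javascript
// Reject a v2 Expires query parameter more than ~1 day and 4 hours
// in the future, instead of the current 1 hour and 1 second limit.
const MAX_EXPIRES_WINDOW_MS = 100000000;

function isExpiresAcceptable(expiresEpochSeconds, nowMs = Date.now()) {
    const expiresMs = expiresEpochSeconds * 1000; // Expires is in epoch seconds
    return expiresMs - nowMs <= MAX_EXPIRES_WINDOW_MS;
}
```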

On ceph/s3-tests, these tests fail:

  • ERROR: s3tests.functional.test_s3.test_multipart_upload
  • ERROR: s3tests.functional.test_s3.test_abort_multipart_upload
  • FAIL: s3tests.functional.test_s3.test_bucket_head
  • FAIL: s3tests.functional.test_s3.test_object_raw_authenticated
  • FAIL: s3tests.functional.test_s3.test_object_raw_response_headers
  • FAIL: s3tests.functional.test_s3.test_object_raw_authenticated_bucket_acl
  • FAIL: s3tests.functional.test_s3.test_object_raw_authenticated_object_acl
  • FAIL: s3tests.functional.test_s3.test_object_raw_authenticated_bucket_gone
  • FAIL: s3tests.functional.test_s3.test_object_raw_authenticated_object_gone
  • FAIL: s3tests.functional.test_s3.test_object_raw_put_authenticated

HMAC on GETLOG_RESPONSE

There's a problem with the HMAC on GETLOG_RESPONSE. I know you are aware of it, but it's usually better to register every known bug as a GitHub issue.

Cleanup Kinetic Library

I've made a lot of comments on PR #24 relating to typos, OOP design, etc.
The library deserves another cleanup/smaller refactor, which I'm ready to supervise.

invalid Authorization header: errors incompatible with AWS

When the Authorization header is invalid, we return errors that are incompatible with AWS.

These ceph/s3-tests are failing:

FAIL: s3tests.functional.test_headers.test_object_create_bad_authorization_invalid_aws2
FAIL: s3tests.functional.test_headers.test_object_create_bad_authorization_invalid_aws4
FAIL: s3tests.functional.test_headers.test_bucket_create_bad_authorization_invalid_aws2
FAIL: s3tests.functional.test_headers.test_bucket_create_bad_authorization_invalid_aws4

Node.js 4 / ES6

Node.js 4.x comes with ES6 and new, useful syntax elements. See this page for a quick overview. For instance, let and const prevent error-prone variable usage; there is also the class syntax:

class MyClass extends BaseClass {
  constructor(x) {
    super();     // required before using `this` in a derived class
    this.x = x;
  }
  getX() { return this.x; }
}

instead of:

function MyClass(x) {
    this.x = x;
}
MyClass.prototype.getX = function () { return this.x; };

I think starting to code with this new standard is the right thing to do, both for readability and forward compatibility. The problem is: if we do so, our code would lose backward compatibility with Node 0.x.

What do you think? I believe IronMan-S3 and IronMan-Data are already Node 4 compatible; what about the others?

Flaky test on Public CircleCI

The following unit test is flaky: due to resource constraints on the public CircleCI, it sometimes goes over the defined timeout of 5500 ms:
✓ Sould distribute uniformly with a maximum of 20% of deviation (4876ms)

Should we increase this timeout to take the public CI's resources into account?

Continuous integration

We currently have no way to assess PR viability without running every test ourselves. I know that a Jenkins build is being set up, but in the meantime, having a Travis or something similar would be a huge boon when it comes to code review.

Kinetic lib does not work

It used to work last week; now it doesn't. To reproduce, use the Kinetic lib to create a Kinetic message, then import the raw message (using parse). You'll get an INTERNAL_ERROR.

@AntoninCoulibaly Could you write unit tests validating the behavior of the send and parse functions for the different types of messages (GET, PUT, FLUSH, etc.), so that we make sure new commits don't break anything? It would be greatly appreciated :-)

Listing Implementation Issue

Versions of the product: all
Affects: the bucketfile and bucketclient backends

We should be able to list with both a 'prefix' and a 'marker' while having a delimiter.

Right now, _getStartIndex() masks the prefix if you have a marker.

E.g.

?prefix=X11/&marker=X11%2FResConfigP.h

gives the following file names:

Mon Aug 29 2016 23:53:50 GMT+0200 (CEST)  3.6 kB         ResourceI.h
Mon Aug 29 2016 23:53:50 GMT+0200 (CEST)  2.9 kB         SM/SM.h
Mon Aug 29 2016 23:53:50 GMT+0200 (CEST)  11.0 kB        SM/SMlib.h
Mon Aug 29 2016 23:53:50 GMT+0200 (CEST)  4.7 kB         SM/SMproto.h
Mon Aug 29 2016 23:53:50 GMT+0200 (CEST)  5.1 kB         SelectionI.h
Mon Aug 29 2016 23:53:50 GMT+0200 (CEST)  17.0 kB        Shell.h
Mon Aug 29 2016 23:53:50 GMT+0200 (CEST)  0.2 kB         ShellI.h
Mon Aug 29 2016 23:53:50 GMT+0200 (CEST)  12.4 kB        ShellP.h
Mon Aug 29 2016 23:53:50 GMT+0200 (CEST)  29.7 kB        StringDefs.h
Mon Aug 29 2016 23:53:50 GMT+0200 (CEST)  3.9 kB         Sunkeysym.h
Mon Aug 29 2016 23:53:50 GMT+0200 (CEST)  4.2 kB         ThreadsI.h
Mon Aug 29 2016 23:53:50 GMT+0200 (CEST)  16.8 kB        TranslateI.h
Mon Aug 29 2016 23:53:51 GMT+0200 (CEST)  2.3 kB         VarargsI.h
Mon Aug 29 2016 23:53:51 GMT+0200 (CEST)  2.7 kB         Vendor.h
Mon Aug 29 2016 23:53:51 GMT+0200 (CEST)  3.5 kB         VendorP.h
Mon Aug 29 2016 23:53:51 GMT+0200 (CEST)  19.7 kB        X.h
Mon Aug 29 2016 23:53:51 GMT+0200 (CEST)  13.2 kB        XF86keysym.h
Mon Aug 29 2016 23:53:51 GMT+0200 (CEST)  30.3 kB        XKBlib.h
Mon Aug 29 2016 23:53:51 GMT+0200 (CEST)  3.9 kB         XWDFile.h
Mon Aug 29 2016 23:53:51 GMT+0200 (CEST)  4.5 kB         Xalloca.h
Mon Aug 29 2016 23:53:51 GMT+0200 (CEST)  2.9 kB         Xarch.h
Mon Aug 29 2016 23:53:51 GMT+0200 (CEST)  2.5 kB         Xatom.h
Mon Aug 29 2016 23:53:51 GMT+0200 (CEST)  3.7 kB         Xauth.h
Mon Aug 29 2016 23:53:51 GMT+0200 (CEST)  20.8 kB        Xcms.h
Mon Aug 29 2016 23:53:51 GMT+0200 (CEST)  2.3 kB         Xdefs.h

As we can see, some CommonPrefixes are listed as files.

The bug is located here: https://github.com/scality/Arsenal/blob/master/lib/algos/list/delimiter.js

It seems there are other inconsistencies in the code.

Please provide consistent functional (for bucketfile) and end-to-end (for metadata) test scenarios.
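As a reference for the expected semantics (a minimal sketch, not the Arsenal implementation): keys are filtered by prefix, skipped up to the marker, then grouped into CommonPrefixes at the first delimiter after the prefix.

```javascript
// Illustrative listing semantics for prefix + marker + delimiter.
function listWithDelimiter(keys, { prefix = '', marker = '', delimiter = '' }) {
    const contents = [];
    const commonPrefixes = new Set();
    keys.filter(k => k.startsWith(prefix) && k > marker).forEach(key => {
        const rest = key.slice(prefix.length);
        const idx = delimiter ? rest.indexOf(delimiter) : -1;
        if (idx === -1) {
            contents.push(key); // no delimiter after the prefix: a plain entry
        } else {
            // delimiter found: group under a common prefix, not a file
            commonPrefixes.add(prefix + rest.slice(0, idx + delimiter.length));
        }
    });
    return { Contents: contents, CommonPrefixes: [...commonPrefixes] };
}
```

With prefix=X11/, marker=X11/ResConfigP.h and delimiter=/, a key such as X11/SM/SM.h lands under the common prefix X11/SM/ rather than in Contents.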

Do not mix `class Kinetic` and Kinetic messages

Currently there is one single, big Kinetic class: if we import it into another project, we can only deal with one Kinetic message at a time. But we clearly need to handle multiple messages (called Kinetic Protocol Data Units, or Kinetic PDUs) concurrently.

A solution could be to have both a Kinetic class and a KineticPDU class (the latter representing single messages).

@AntoninCoulibaly @MichaelZapataScality What do you think?

Don't use `Kinetic.errors.XXX` for client API

Kinetic errors (such as Kinetic.errors.SUCCESS, Kinetic.errors.HMAC_FAILURE, Kinetic.errors.NOT_AUTHORIZED...) are special, protocol-defined values meant to be embedded in Kinetic messages. They are not meant to be returned to client code calling the library.

Some Kinetic lib functions (e.g. send and _parse) should be modified to report failures in a Node.js way (either throw an exception or take a callback).
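A hedged sketch of the suggested shape (function and field names here are illustrative, not the lib's actual API): keep protocol codes inside the message, and surface failures through a standard error-first callback.

```javascript
// Protocol status stays in the PDU; callers get a plain Error.
function send(pdu, callback) {
    const status = pdu.statusCode; // protocol-defined code from the message
    if (status !== 'SUCCESS') {
        return callback(new Error(`kinetic request failed: ${status}`));
    }
    return callback(null, pdu);
}
```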

Provide a network-client class that provides default behaviors for failover and bootstraplist management.

We currently need a common failover mechanism, as well as a common default behavior for managing the bootstrap list.

A few issues could be solved by this:

1

In order to enforce the locality of our components (i.e., S3 connects to the BucketD running on the same host), we are currently forced to set only one item in the bootstrap list of the associated component's client lib, because the list is shuffled and does not track which component is closest.

2

When installing locally, the Federation config templates set the bucketclient's bootstrap list to the host itself by default, without any care for the port (a feature specific to local installs). This means that either configuring Metadata not to use the default port, or killing the one Metadata instance that owns the default port, renders the whole Metadata cluster useless (since every S3 tries to connect to port 9000). Configuring the full bootstrap list with ports would help solve this by giving access to secondary servers.

@scality/team-ironman-core Discussion of this issue is really needed/welcome!

invalid amz-date header: errors incompatible with AWS

When the amz-date header is unreadable, we return an error incompatible with AWS.
On ceph/s3-tests, these tests fail:

Ceph test creating unreadable x-amz-date 'X-Amz-Date': '\x07'

  • FAIL: s3tests.functional.test_headers.test_object_create_bad_amz_date_unreadable_aws4
  • FAIL: s3tests.functional.test_headers.test_bucket_create_bad_amz_date_unreadable_aws4

Ceph test creating x-amz-date in past 'X-Amz-Date': '20100707T215304Z'

  • FAIL: s3tests.functional.test_headers.test_object_create_bad_amz_date_before_today_aws4
  • FAIL: s3tests.functional.test_headers.test_bucket_create_bad_amz_date_before_today_aws4

Ceph test creating x-amz-date in future 'X-Amz-Date': '20300707T215304Z'

  • FAIL: s3tests.functional.test_headers.test_object_create_bad_amz_date_after_today_aws4
  • FAIL: s3tests.functional.test_headers.test_bucket_create_bad_amz_date_after_today_aws4

Ceph test creating x-amz-date before epochTime 'X-Amz-Date': '19500707T215304Z'

  • FAIL: s3tests.functional.test_headers.test_object_create_bad_amz_date_before_epoch_aws4
  • FAIL: s3tests.functional.test_headers.test_bucket_create_bad_amz_date_before_epoch_aws4

Ceph test creating x-amz-date after 9999 'X-Amz-Date': '99990707T215304Z'

  • FAIL: s3tests.functional.test_headers.test_object_create_bad_amz_date_after_end_aws4
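The failing cases above all reduce to validating the x-amz-date value. A sketch of such a sanity check (illustrative only; AWS's actual error codes and skew window may differ):

```javascript
// x-amz-date must match the ISO 8601 basic format and fall within an
// allowed clock-skew window around the server time.
const AMZ_DATE_RE = /^(\d{4})(\d{2})(\d{2})T(\d{2})(\d{2})(\d{2})Z$/;

function checkAmzDate(value, nowMs = Date.now(), skewMs = 15 * 60 * 1000) {
    const m = AMZ_DATE_RE.exec(value);
    if (!m) {
        return { valid: false, reason: 'unreadable' };
    }
    const ts = Date.UTC(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6]);
    if (Math.abs(ts - nowMs) > skewMs) {
        return { valid: false, reason: 'outside allowed clock skew' };
    }
    return { valid: true };
}
```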

Policy with ${aws:username} is not evaluating properly. Policy with 'red' does.

GA6.2.0-beta5-rc3
CentOS 7.2

My use-case is to simulate a 'home' directory, where user 'red' cannot see the contents of user 'black's folder, as on a Linux filesystem.

I created a bucket 'mybucket' with:

  • mybucket/home/red/file1
  • mybucket/home/red/file2
  • mybucket/home/black/file1
  • mybucket/home/black/file2

and hoped that the policy below would prevent 'red' from seeing files under mybucket/home/black and vice versa, but I get AccessDenied when trying to list anything under the bucket as either the 'red' or 'black' user.

If I replace occurrences of ${aws:username} with red in the policy below and attach it to the 'red' user, I get the expected results: user red can only see mybucket/home/red/fileX and gets access denied on mybucket/home/black/fileX.

{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "AllowGroupToSeeBucketList",
     "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
     "Effect": "Allow",
     "Resource": ["arn:aws:s3:::*"]},
    {"Sid": "AllowRootLevelListingOfThisBucketAndHomePrefix",
     "Action": ["s3:ListBucket"],
     "Effect": "Allow",
     "Resource": ["arn:aws:s3:::mybucket"],
     "Condition": {"StringEquals":{"s3:prefix": ["","home/"],"s3:delimiter":["/"]}}},
    {"Sid": "AllowListBucketofASepecificUserPrefix",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket"],
      "Condition": {"StringLike": {"s3:prefix":["home/${aws:username}/*"]}}},
    {"Sid": "AllowUserFullAcccesstoJustSpecificUserPrefix",
      "Action": ["s3:*"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket/home/${aws:username}",
                   "arn:aws:s3:::mybucket/home/${aws:username}/*"]}
  ]
}

See: https://github.com/scality/IronMan-TS/blob/master/IAM-Vault.md#giving-a-user-a-home-directory
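The symptom (literal 'red' works, the variable form does not) suggests the policy-variable substitution step is missing or broken. An illustrative sketch of that step (not the Vault implementation):

```javascript
// Replace ${...} policy variables with values from the request context
// before matching resources or condition values. Unknown variables are
// left untouched here (real IAM treats them more strictly).
function substitutePolicyVariables(value, context) {
    return value.replace(/\$\{([^}]+)\}/g, (match, name) =>
        Object.prototype.hasOwnProperty.call(context, name)
            ? context[name]
            : match);
}
```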

Query auth compat

  1. For v4, the query params are not sorted correctly if there is a lowercase query param.

  2. For v4, if a signed 'header' is actually a query param, the value is not picked up.

  3. For v2, if there is a date in the header, the header date is used in the string to sign rather than the Expires date.

  4. For v2, x-amz query params (like x-amz-acl) are not included in the canonicalizedAmzHeaders.
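For item 4, a sketch of what merging x-amz query params into the v2 canonicalization could look like (illustrative helper, not the actual Arsenal code):

```javascript
// Build canonicalizedAmzHeaders from both x-amz-* headers and
// x-amz-* query parameters, lowercased, sorted, newline-terminated.
function canonicalizedAmzHeaders(headers, query) {
    const merged = {};
    [headers, query].forEach(src => {
        Object.keys(src).forEach(name => {
            const lower = name.toLowerCase();
            if (lower.startsWith('x-amz-')) {
                merged[lower] = String(src[name]).trim();
            }
        });
    });
    return Object.keys(merged).sort()
        .map(name => `${name}:${merged[name]}\n`)
        .join('');
}
```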

Cannot delete group or policies after deleting all users

I have an account 'acct2' (regular Vault account, no SSO) that had 3 users assigned to a group, plus various policies. I wanted to 'clean out' the account and delete everything.

  • I was able to delete all the users successfully
  • I am unable to delete the group or policy now

See output below.

Interestingly, the error message below says "The error message describes these entities."...but no error message is displayed.

[devops@ws2 ~]$ aws --endpoint-url http://app1.lab.local:8600 iam list-users
{
    "Users": []
}
[devops@ws2 ~]$ aws --endpoint-url http://app1.lab.local:8600 iam list-groups
{
    "Groups": [
        {
            "Path": "/", 
            "CreateDate": "2016-08-25T22:13:09Z", 
            "GroupId": "EHJCYAYXDUG02QSWM6BCCJ1RN20QTK0G", 
            "Arn": "arn:aws:iam::060034739872:group/DirAccess", 
            "GroupName": "DirAccess"
        }
    ]
}
[devops@ws2 ~]$ aws --endpoint-url http://app1.lab.local:8600 iam list-policies
{
    "Policies": [
        {
            "PolicyName": "godmode", 
            "CreateDate": "2016-08-25T22:20:45Z", 
            "AttachmentCount": 1, 
            "IsAttachable": true, 
            "PolicyId": "XAJFIW9UGEK5ZAAREC59PNVKAH5XS5AS", 
            "DefaultVersionId": "v1", 
            "Path": "/", 
            "Arn": "arn:aws:iam::060034739872:policy/godmode", 
            "UpdateDate": "2016-08-25T22:20:45Z"
        }, 
        {
            "PolicyName": "homedir", 
            "CreateDate": "2016-08-25T22:12:54Z", 
            "AttachmentCount": 1, 
            "IsAttachable": true, 
            "PolicyId": "5XH3KUVA1XO6TO2XNWGSECF83WN2B4UE", 
            "DefaultVersionId": "v1", 
            "Path": "/", 
            "Arn": "arn:aws:iam::060034739872:policy/homedir", 
            "UpdateDate": "2016-08-25T22:12:54Z"
        }
    ]
}
[devops@ws2 ~]$  aws --endpoint-url http://app1.lab.local:8600 iam delete-group --group-name DirAccess

An error occurred (DeleteConflict) when calling the DeleteGroup operation: The request was rejected because it attempted to delete a resource that has attached subordinate entities. The error message describes these entities.
[devops@ws2 ~]$ 
[devops@ws2 ~]$ aws --endpoint-url http://app1.lab.local:8600 iam delete-policy --policy-arn arn:aws:iam::060034739872:policy/godmode

An error occurred (DeleteConflict) when calling the DeletePolicy operation: The request was rejected because it attempted to delete a resource that has attached subordinate entities. The error message describes these entities.
[devops@ws2 ~]$ aws --endpoint-url http://app1.lab.local:8600 iam delete-policy --policy-arn arn:aws:iam::060034739872:policy/homedir

An error occurred (DeleteConflict) when calling the DeletePolicy operation: The request was rejected because it attempted to delete a resource that has attached subordinate entities. The error message describes these entities.
[devops@ws2 ~]$ 

I checked to make sure the policy was not attached to the group; it is not:

[devops@ws2 ~]$ aws --endpoint-url http://app1.lab.local:8600 iam list-group-policies --group-name DirAccess
{
    "PolicyNames": []
}
[devops@ws2 ~]$ 

Auth v4 canonical request

The computation of the canonical request should normalize the request path. For example,

GET /example/.. HTTP/1.1
Host:example.amazonaws.com
X-Amz-Date:20150830T123600Z

should result in:

GET
/

host:example.amazonaws.com
x-amz-date:20150830T123600Z

host;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

This fix needs to be made against rel/6.2.

Note: this behavior is not used for S3 requests, but it applies to every other service.

Different libs -> different projects

There are important drawbacks with having all our "common modules" (kinetic, linter conf, etc.) in the same repo.

  • Everything is packaged together, so only one common versioning is possible. This is a real problem if, for instance:
    • IronMan-Vault project A depends on Arsenal/kinetic v1.0 and Arsenal/otherlib v2.0
    • IronMan-Data project B depends on Arsenal/kinetic v2.0 and Arsenal/otherlib v1.0
  • If a project (let's say Vault) only requires Arsenal/eslintrc, it will still have Arsenal as a dependency and will have to download/include all the other libs in Arsenal.
  • Having different projects in the same git repo mixes commits and makes reviewing harder.

I'd rather have a different GitHub project for each lib. It would prevent trouble in the future, and it's still easy to do now.

auth v4 canonical request

Query parameters for a POST request should not interfere with the payloadChecksum. For example,

POST /?Param1=value1 HTTP/1.1
Host:example.amazonaws.com
X-Amz-Date:20150830T123600Z

should generate:

POST
/
Param1=value1
host:example.amazonaws.com
x-amz-date:20150830T123600Z

host;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

whereas we currently generate this payload hash: 9095672bbd1f56dfc5b65f3e153adc8731a4a654192329106275f4c7b24d0b6e

auth v4 canonical request

Header values should be trimmed in the canonical request. For example,

GET / HTTP/1.1
Host:example.amazonaws.com
My-Header1: value1
My-Header2: "a   b   c"
X-Amz-Date:20150830T123600Z

should result in:

GET
/

host:example.amazonaws.com
my-header1:value1
my-header2:"a b c"
x-amz-date:20150830T123600Z

host;my-header1;my-header2;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

Notice that the value of the my-header2 header was trimmed.

This fix should be made against rel/6.2.
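The trimming shown above could be sketched like this (illustrative helper; SigV4 canonicalization trims leading/trailing whitespace and collapses runs of spaces):

```javascript
// Canonicalize a header value: trim the ends and collapse internal
// runs of spaces, so '"a   b   c"' becomes '"a b c"'.
function trimHeaderValue(value) {
    return value.trim().replace(/ +/g, ' ');
}
```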

S3 server crashes if the access key and secret key are missing

GA6.2.0-beta5-rc3

I made a mistake and had an empty access key and secret key; sending a request to S3 using the AWS CLI crashes the server.

[devops@ws2 S3]$ cat ~/.aws/credentials 
[default]
# acct1/devops
#aws_access_key_id = 1D8ZM2NS0R8V47L474OW
#aws_secret_access_key = mK1tWVk0SL0KNqo7WWT+NbuuxltuPyr9PbXkwJth
#
# acct1
#aws_access_key_id = QYODQNKJ4AQUCC2U7WDS
#aws_secret_access_key = 7PK0rwX5DB6IsVCLIaMS1QAqIbUJTRaN=vRiCEj7
# acct2
aws_access_key_id = UYERP648I7SZLGBUUIB2
aws_secret_access_key = OeIFnKGxiUIqyHE8F1O5GyWym42oFAfTiN5RQgTQ 
# acct3
aws_access_key_id =
aws_secret_access_key =
[devops@ws2 S3]$  aws --endpoint-url http://s3.lab.local s3 ls

An error occurred (502) when calling the ListBuckets operation: Bad Gateway
[devops@ws2 S3]$ 
{"name":"S3","clientIP":"::ffff:10.0.0.11","clientPort":23311,"httpMethod":"GET","httpURL":"/","time":1473384972656,"req_id":"5b5ca79e53067eb1ece5","level":"info","message":"received request","hostname":"app1.lab.local","pid":2545}
{"name":"S3","error":"Cannot read property 'MissingSecurityHeader' of undefined","stack":"TypeError: Cannot read property 'MissingSecurityHeader' of undefined\n    at Object.headerAuthCheck.check (/home/scality/S3/node_modules/arsenal/lib/auth/v2/headerAuthCheck.js:45:29)\n    at Object.auth.setAuthHandler.auth.check (/home/scality/S3/node_modules/arsenal/lib/auth/auth.js:41:43)\n    at Object.auth.setAuthHandler.auth.check.auth.doAuth.vault.authenticateV2Request [as doAuth] (/home/scality/S3/node_modules/arsenal/lib/auth/auth.js:66:22)\n    at Object.callApiMethod (api.js:75:21)\n    at routerGET (routeGET.js:31:13)\n    at checkUnsuportedRoutes (routes.js:33:16)\n    at routes (routes.js:71:12)\n    at Server.<anonymous> (server.js:61:17)\n    at emitTwo (events.js:87:13)\n    at Server.emit (events.js:172:7)\n    at HTTPParser.parserOnIncoming [as onIncoming] (_http_server.js:528:12)\n    at HTTPParser.parserOnHeadersComplete (_http_common.js:103:23)","level":"fatal","message":"caught error","hostname":"app1.lab.local","pid":2545}
{"name":"S3","level":"error","message":"shutdown of worker due to exception","hostname":"app1.lab.local","pid":2545}
{"name":"S3","workerId":277,"level":"error","message":"worker disconnected. making sure exits","hostname":"app1.lab.local","pid":42}
{"name":"S3","workerId":277,"level":"error","message":"worker exited.","hostname":"app1.lab.local","pid":42}
{"name":"S3","workerId":286,"level":"error","message":"new worker forked","hostname":"app1.lab.local","pid":42}
{"name":"S3","bootstrap":["app1.lab.local"],"https":false,"level":"info","message":"bucketclient configuration","hostname":"app1.lab.local","pid":2594}
{"name":"S3","host":"app1.lab.local","port":8500,"https":false,"level":"info","message":"vaultclient configuration","hostname":"app1.lab.local","pid":2594}
{"name":"S3","level":"warn","message":"scality kms selected but unavailable, using file backend","hostname":"app1.lab.local","pid":2594}
{"name":"S3","https":false,"level":"info","message":"Https server configuration","hostname":"app1.lab.local","pid":2594}
{"name":"S3","address":"::","port":8000,"pid":2594,"level":"info","message":"server started","hostname":"app1.lab.local"}

Error messages should not have periods

At least some of them. For example:

From AWS:

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>totallymadeupwer3</BucketName><RequestId>F01609D2FA7999DC</RequestId><HostId>s10elYCsWc4AQtNuhXcD7L/3pJfJCHZhmiQQWKs/E5IGZwE8HniZvQFkqsg2AvIsnWKYyibYnH4=</HostId></Error>

Custom error descriptions

Currently we are not setting any description on errors that change depending on the context. Error descriptions are not required by clients, but they offer good insight into what went wrong with a request.
For example:

aws iam set-default-policy-version --policy-arn arn:aws:iam::878965906153:policy/my-policy --version-id v235555
A client error (NoSuchEntity) occurred when calling the SetDefaultPolicyVersion operation: 
Policy arn:aws:iam::878965906153:policy/my-policy version v235555 does not exist
or is not attachable.

This would require a setup where we can set custom descriptions on the error objects without polluting the global scope.

[kinetic]Avoid buffer concatenation

It's an expensive operation that adds a copy of the data to return (which can be as large as 256 MB on our system!).

The code that causes the problem is in the send() method.
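One copy-free alternative (an illustrative sketch, not the lib's code): write each part to the stream in order instead of concatenating them into one large buffer first.

```javascript
// socket is any Writable stream; the three parts of a PDU are written
// sequentially, so the (possibly 256 MB) value chunk is never copied.
function sendPDU(socket, header, protobufBytes, chunk) {
    socket.write(header);
    socket.write(protobufBytes);
    if (chunk && chunk.length > 0) {
        socket.write(chunk);
    }
}
```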

Circular dependency within the module

The errors part of Arsenal is referenced inside some files through the index.js that serves as our external entry point. This creates a circular dependency that can cause strange behaviour when importing those files together with the errors themselves, as was exposed in #118.

empty/invalid/unreadable/none Date header: errors incompatible with AWS

When the Date header is empty/invalid/unreadable/missing, we return an error incompatible with AWS.
On ceph/s3-tests, these tests are failing:

on object with V2 auth:

  • FAIL: s3tests.functional.test_headers.test_object_create_bad_date_empty_aws2
  • FAIL: s3tests.functional.test_headers.test_object_create_bad_date_invalid_aws2
  • FAIL: s3tests.functional.test_headers.test_object_create_bad_date_unreadable_aws2
  • FAIL: s3tests.functional.test_headers.test_object_create_bad_date_none_aws2

on bucket with V2 auth:

  • FAIL: s3tests.functional.test_headers.test_bucket_create_bad_date_empty_aws2
  • FAIL: s3tests.functional.test_headers.test_bucket_create_bad_date_invalid_aws2
  • FAIL: s3tests.functional.test_headers.test_bucket_create_bad_date_unreadable_aws2
  • FAIL: s3tests.functional.test_headers.test_bucket_create_bad_date_none_aws2

on object with V4 auth:

  • FAIL: s3tests.functional.test_headers.test_object_create_bad_amz_date_empty_aws4
  • FAIL: s3tests.functional.test_headers.test_object_create_bad_amz_date_invalid_aws4
  • FAIL: s3tests.functional.test_headers.test_object_create_bad_amz_date_unreadable_aws4
  • FAIL: s3tests.functional.test_headers.test_object_create_bad_amz_date_none_aws4

on bucket with V4 auth:

  • FAIL: s3tests.functional.test_headers.test_bucket_create_bad_amz_date_empty_aws4
  • FAIL: s3tests.functional.test_headers.test_bucket_create_bad_amz_date_invalid_aws4
  • FAIL: s3tests.functional.test_headers.test_bucket_create_bad_amz_date_unreadable_aws4
  • FAIL: s3tests.functional.test_headers.test_bucket_create_bad_amz_date_none_aws4

Create some common configuration management lib

Currently, the common parts of the configuration of our different projects are rewritten in each project.

The result is that the same configuration bits appear completely differently in each project's configuration structure. This has actually confused some people trying to read the different configuration files.

Such common configuration should have the same format, and thus be written only once. I'm suggesting that we write a small config-management lib that at least covers the logging concern and generates a werelogs configuration object from a given configuration section. It would help harmonize configuration formats, and could easily bring more flexibility to our logging configuration.

Organize repo by utility rather than one lib and one test directory

Since this repo is supposed to be a collection of unrelated tools, it is odd that the code is mixed together in one place (the lib directory) and the tests in another (the test directory). I propose that each utility have its own parent directory, containing its own lib and tests directories.

authv4 sort query params

In creating the canonical request, we are not correctly sorting query params that mix capital and lowercase letters.
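SigV4 requires sorting by character code point, so uppercase names sort before lowercase ones. A sketch (illustrative helper; the likely bug would be a locale-aware comparison):

```javascript
// Byte-wise ordering for canonical query parameter names:
// Array.prototype.sort's default comparison uses UTF-16 code units,
// which matches code-point order for ASCII names ('Zoo' < 'acl').
function sortQueryParamNames(names) {
    return [...names].sort();
}
```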

Kinetic implementation must be Buffer-based.

Strings are not well handled by protobufJS (in certain cases it throws a TypeError on strings whose length is not a multiple of 4), and buffers will allow us to really match the spec byte for byte.

Refactor Auth API

We need to refactor the auth API so as to:

  1. Provide a simple, synchronous API
  2. Avoid using closures/callbacks for simple computations
  3. Avoid depending on any HTTP framework (remove any reference to http.request, keep only required bits of data)
  4. Provide an 'AwsServiceName' to the signing requests, as this may be included depending on the service.

The planned API is as follows:

auth: {
    client: {
        generateV4Headers: function (query, method, uri, payload, secretKey) -> { headersDict, errorObject },
    },
    server: {
        prepareV2: function (QueryString, Headers) -> { authParamsObject, errorObject },
        prepareV4: function (QueryString, Headers) -> { authParamsObject, errorObject },
        checkV2Signature: function(authParamsObject, secretKeyValue) -> bool,
        checkV4Signature: function(authParamsObject, secretKeyValue) -> bool,
    },
}

This is the general idea. The client would use the client API, and the server could use the server API in three steps:

  1. prepare auth params for actual auth
  2. retrieve auth information from whatever storage is used
  3. compute and check signature using results from step 1+2

Admittedly, the proposed API is missing a potential options object, or at least an AWSServiceName to use.

Policy evaluation, action should be case insensitive

During policy evaluation, we do not handle lowercase actions in policies. Per the AWS documentation:

The name must match an action that is supported by the service. The prefix and the action name are case insensitive. For example, iam:ListAccessKeys is the same as IAM:listaccesskeys
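The fix boils down to comparing actions case-insensitively (a minimal sketch; real matching also has to handle wildcards like s3:*):

```javascript
// Case-insensitive action comparison: lowercase both sides first,
// so 'IAM:listaccesskeys' matches 'iam:ListAccessKeys'.
function actionMatches(policyAction, requestAction) {
    return policyAction.toLowerCase() === requestAction.toLowerCase();
}
```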

undefined error issue

Making sure this issue gets logged so that we don't forget about it. While fixing Vault with PR https://github.com/scality/IronMan-Vault/pull/228,
we noticed that if we mess up the case (uppercase/lowercase) of an Arsenal error property name, we just return undefined to the calling function. Then we don't know whether we got undefined because there was no error or because the Arsenal error property does not exist.
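One way to fail fast on such typos (an illustrative guard, not the Arsenal API) is to check the property exists before handing it to the caller:

```javascript
// Throw on an unknown error name instead of silently returning
// undefined, so a misspelled property is caught immediately.
function getArsenalError(errors, name) {
    if (!Object.prototype.hasOwnProperty.call(errors, name)) {
        throw new TypeError(`unknown Arsenal error: ${name}`);
    }
    return errors[name];
}
```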
