
s3-credentials's Introduction

s3-credentials

A tool for creating credentials for accessing S3 buckets

For project background, see s3-credentials: a tool for creating credentials for S3 buckets on my blog.

Installation

pip install s3-credentials

Basic usage

To create a new S3 bucket and output credentials that can be used with only that bucket:

% s3-credentials create my-new-s3-bucket --create-bucket
Created bucket:  my-new-s3-bucket
Created user: s3.read-write.my-new-s3-bucket with permissions boundary: arn:aws:iam::aws:policy/AmazonS3FullAccess
Attached policy s3.read-write.my-new-s3-bucket to user s3.read-write.my-new-s3-bucket
Created access key for user: s3.read-write.my-new-s3-bucket
{
    "UserName": "s3.read-write.my-new-s3-bucket",
    "AccessKeyId": "AKIAWXFXAIOZOYLZAEW5",
    "Status": "Active",
    "SecretAccessKey": "...",
    "CreateDate": "2021-11-03 01:38:24+00:00"
}

The tool can do a lot more than this. See the documentation for details.

Documentation

Full documentation is available at https://s3-credentials.readthedocs.io/

s3-credentials's People

Contributors

h4kor, matthiask, simonw, tomdyson


s3-credentials's Issues

`--auth filepath` option for authentication from a file

Now that I'm building time-limited credentials in #27 it's getting pretty inconvenient to pass them as the --access-key and --secret-key and --session-key arguments.

I'm going to support a new option called --credentials which, if provided, is treated as the path (or - for stdin) to a JSON or INI file containing credentials.

The idea is that this will work:

% s3-credentials create mybucket --duration 15m > creds.json
% s3-credentials list-bucket mybucket --credentials=creds.json
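
A minimal sketch of how that option could read credentials, assuming a JSON file shaped like the output of the create command (the load_credentials helper is illustrative, not the tool's actual implementation):

import json
import sys


def load_credentials(path):
    # "-" means read the JSON credentials from standard input
    if path == "-":
        return json.load(sys.stdin)
    with open(path) as fp:
        return json.load(fp)


creds = load_credentials("creds.json")
# These keys match the JSON emitted by "s3-credentials create"
access_key = creds["AccessKeyId"]
secret_key = creds["SecretAccessKey"]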

Mechanism for running tests against a real AWS account

The tests for this project currently run against mocks - which is good, because I don't like the idea of GitHub Action tests hitting real APIs.

But... this project is about building securely against AWS. As such, automated tests that genuinely exercise a live AWS account (and check that the resulting permissions behave as expected) would be incredibly valuable for growing my confidence that this tool works as advertised.

These tests would need quite a high level of administrative access, because they need to be able to create users, roles etc.

I don't like the idea of storing my own AWS administrator account credentials in a GitHub Actions secret though. I think I'll write these tests such that they can be run outside of GitHub Actions, maybe configured via environment variables that allow other project contributors to run tests against their own accounts.
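
As a sketch, those tests could be gated behind an opt-in environment variable so they never run by accident (the variable name here is hypothetical):

import os

import pytest


# Only run when explicitly enabled; the variable name is a placeholder
@pytest.mark.skipif(
    not os.environ.get("S3_CREDENTIALS_TEST_AWS"),
    reason="Set S3_CREDENTIALS_TEST_AWS=1 to run tests against a real AWS account",
)
def test_create_against_real_account():
    ...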

list-buckets option to get extra --details about buckets

The S3 security best practices in https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html#security-best-practices-prevent suggest:

Use the ListBuckets API to scan all of your Amazon S3 buckets. Then use GetBucketAcl, GetBucketWebsite, and GetBucketPolicy to determine whether the bucket has compliant access controls and configuration.

list-buckets could do this with an extra --details option (since it adds 3 new API calls per bucket).
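
A sketch of what those extra calls might look like with boto3 - note that get_bucket_website and get_bucket_policy raise a ClientError for buckets that have no website configuration or policy, so those errors need to be caught:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    details = {"acl": s3.get_bucket_acl(Bucket=name)["Grants"]}
    for key, method in [("website", s3.get_bucket_website), ("policy", s3.get_bucket_policy)]:
        try:
            details[key] = method(Bucket=name)
        except ClientError:
            # No website configuration / no bucket policy on this bucket
            details[key] = None
    print(name, details)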

`--duration` option to create time-limited credentials (using `sts.assume_role()`)

See #26 for the research. It looks like the way to do this is:

  1. Ensure a dedicated role with arn:aws:iam::aws:policy/AmazonS3FullAccess exists - if it does not, create it. It needs to have a known name - I propose using s3-credentials.AmazonS3FullAccess here, and also populating the Description field. The role needs to be assumable by the current account, see AssumeRolePolicyDocument example in #26 (comment)
  2. Call sts.assume_role() against that role, passing in as a policy the same inline policy document used for non-expiring credentials, using the code in policies.py.
  3. Return the AccessKeyId, SecretAccessKey AND the SessionToken - all three are needed to make authenticated calls.
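
A sketch of steps 2 and 3 with boto3, assuming the role from step 1 already exists and re-using the read_write() function from policies.py shown further down this page (the account ID and bucket name are placeholders):

import json

import boto3

sts = boto3.client("sts")
response = sts.assume_role(
    # The dedicated role created in step 1 (placeholder account ID)
    RoleArn="arn:aws:iam::123456789012:role/s3-credentials.AmazonS3FullAccess",
    RoleSessionName="s3.read-write.my-bucket",
    # Scope the temporary credentials down with the same inline policy document
    Policy=json.dumps(read_write("my-bucket")),
    DurationSeconds=15 * 60,
)
credentials = response["Credentials"]
# All three values are needed to make authenticated calls
print(credentials["AccessKeyId"], credentials["SecretAccessKey"], credentials["SessionToken"])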

`s3-credentials list-users` command

Feels useful to have.

boto3 recipe:

import boto3

iam = boto3.client('iam')
# list_users is paginated - iterate over every page of users
paginator = iam.get_paginator('list_users')
for response in paginator.paginate():
    print(response)

Expose functionality as a Python library

There is enough useful logic in here that it would be good to have it work as a stable, documented Python library (similar to sqlite-utils).

The logic in the create command is the most interesting here.

Option to specify a custom JSON policy file

Based on #11 I'm now thinking that there is value in applying custom policies - since that way people can tweak the policies used and share them with others.

Maybe a --policy policy.json option would be useful?

One challenge: the need to hard-code the name of the bucket into that policy. So perhaps it supports the absolute dumbest template system ever, like literally replacing $!BUCKET_NAME!$ in the JSON with the name of the bucket.
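
A sketch of that replacement, assuming the $!BUCKET_NAME!$ token:

import json


def load_custom_policy(path, bucket):
    # The dumbest template system ever: a literal string replacement
    with open(path) as fp:
        text = fp.read()
    return json.loads(text.replace("$!BUCKET_NAME!$", bucket))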

`s3-credentials create` command

This is the command which creates a user and returns credentials for a specified bucket, optionally creating the bucket as well.

See initial design notes in #1.

Design the read-only S3 bucket policy

Need to pick the actions I'm going to bake into that policy. Spun out from #15.

Current policy is:

def read_only(bucket):
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": ["arn:aws:s3:::{}".format(bucket)],
            },
            {
                "Effect": "Allow",
                "Action": "s3:GetObject*",
                "Resource": ["arn:aws:s3:::{}/*".format(bucket)],
            },
        ],
    }

Design the read-write policy

Need to pick the actions I'm going to bake into that policy. Spun out from #15.

Current policy:

def read_write(bucket):
    # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_rw-bucket.html
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": ["arn:aws:s3:::{}".format(bucket)],
            },
            {
                "Effect": "Allow",
                "Action": "s3:*Object",
                "Resource": ["arn:aws:s3:::{}/*".format(bucket)],
            },
        ],
    }

`s3-credentials delete-user` command

I wanted to clean up all of the users and buckets I made while testing this tool.

Deleting buckets is easy enough with the aws s3 tool:

aws s3 rb s3://simonw-test-bucket-10 --force

Deleting users is harder:

aws iam delete-user --user-name s3.read-only.simonw-test-bucket-11
An error occurred (DeleteConflict) when calling the DeleteUser operation:
Cannot delete entity, must delete policies first.

I'm going to build an s3-credentials delete-user command which deletes the inline policies first.
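
A sketch of that cleanup order with boto3 - inline policies and access keys have to be deleted before the user can be (the user name is a placeholder):

import boto3

iam = boto3.client("iam")
username = "s3.read-only.simonw-test-bucket-11"

# Inline policies must go first, otherwise DeleteUser raises DeleteConflict
for response in iam.get_paginator("list_user_policies").paginate(UserName=username):
    for policy_name in response["PolicyNames"]:
        iam.delete_user_policy(UserName=username, PolicyName=policy_name)

# Access keys must be deleted too
for response in iam.get_paginator("list_access_keys").paginate(UserName=username):
    for key in response["AccessKeyMetadata"]:
        iam.delete_access_key(UserName=username, AccessKeyId=key["AccessKeyId"])

iam.delete_user(UserName=username)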

Stable documented output formats

Before 1.0 I want to have stable output formats - in particular for the create command.

I want to provide:

  • JSON (the current default is OK here)
  • INI-style config that can be written to ~/.aws
  • Shell syntax that sets environment variables
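
A sketch of what the INI and shell formats could look like, given the JSON shown under Basic usage above (the profile name and exact formatting are assumptions):

def format_ini(creds, profile="default"):
    # INI-style block suitable for an ~/.aws/credentials file
    return "[{}]\naws_access_key_id = {}\naws_secret_access_key = {}\n".format(
        profile, creds["AccessKeyId"], creds["SecretAccessKey"]
    )


def format_env(creds):
    # Shell syntax that sets environment variables
    return "export AWS_ACCESS_KEY_ID='{}'\nexport AWS_SECRET_ACCESS_KEY='{}'\n".format(
        creds["AccessKeyId"], creds["SecretAccessKey"]
    )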

Research creating expiring credentials using `sts.assume_role()`

The initial reason for creating this tool was that I wanted to be able to create long-lived (never expiring) tokens for the kinds of use-cases described in this post: https://simonwillison.net/2021/Nov/3/s3-credentials/

Expiring credentials are fantastic for all sorts of other use-cases. It would be great if this tool could optionally create those instead of creating long-lived credentials.

This would mean the tool didn't have to create users at all (when used in that mode) - it could create a role and then create temporary access credentials for that role using sts.assume_role(): https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html#STS.Client.assume_role

Manually test --prefix against litestream.io

Originally posted by @simonw in #39 (comment)

Splitting this into a separate issue mainly so I can clearly document how to use Litestream in the comments here.

Goal is to confirm that S3 credentials created using s3-credentials create ... --prefix litestream-test/ can be used with Litestream to back up a SQLite database to that path within the bucket.

--dry-run option

I want to be able to see what the tool is going to do (including the policy documents) without actually calling the AWS APIs.

`delete-bucket` command

I already have delete-user - this would be a similar utility but for deleting buckets. Mainly so I don't have to remember how to do it with awscli.
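
A sketch of the boto3 equivalent of aws s3 rb --force - empty the bucket first, then delete it (this ignores object versions; the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")
bucket = "simonw-test-bucket-10"

# A bucket must be empty before it can be deleted
for response in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
    objects = [{"Key": obj["Key"]} for obj in response.get("Contents", [])]
    if objects:
        s3.delete_objects(Bucket=bucket, Delete={"Objects": objects})

s3.delete_bucket(Bucket=bucket)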

Support configuring the bucket as a website

It would be useful to have an opt-in option for saying "this bucket should be configured as a website" - because setting that up without a tool is quite fiddly.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteAccessPermissionsReqd.html has the details:

When you configure a bucket as a static website, if you want your website to be public, you can grant public read access. To make your bucket publicly readable, you must disable block public access settings for the bucket and write a bucket policy that grants public read access.

See #20 for "block public access" setting, and #19 for bucket policies.
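
The underlying boto3 call is put_bucket_website; a sketch, with assumed index and error document names (making the site public additionally needs the "block public access" and bucket policy changes referenced above):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="my-new-s3-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)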

Apply jdub policy suggestions

https://github.com/simonw/s3-credentials/blob/main/s3_credentials/policies.py

My suggestions:

  • specify individual actions explicitly (no wildcards)
  • separate permissions by resource (Buckets vs. Objects)
  • Sid is unnecessary

Your read/write policy is good, but instead of *Object, list GetObject and PutObject.

Your read-only policy would be better written like your read/write policy, one section for the bucket permission (ListBucket), one for the object permission (which should be GetObject, no wildcard).

Your write-only policy is great as is.

You may want to add additional permissions to let clients set ACLs. But if it's all simple object-by-object stuff, these very simple policies are great.

Originally posted by @jdub in #7 (comment)
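
As a sketch, here is the read-only policy rewritten along those lines - explicit actions, permissions separated by resource, no Sid:

def read_only(bucket):
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": ["arn:aws:s3:::{}".format(bucket)],
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": ["arn:aws:s3:::{}/*".format(bucket)],
            },
        ],
    }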

Seek feedback from AWS experts

I'm not an AWS expert. I would feel a lot more comfortable if some AWS experts could review this tool and make sure that what it is doing makes sense and there are no unpleasant flaws in the approach it is taking.

Use `sts get-caller-identity` for `whoami`

I believe this will work even if you don't have GetUser permission.

Saw this in https://aws-blog.de/2021/08/iam-what-happens-when-you-assume-a-role.html

https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html says:

No permissions are required to perform this operation. If an administrator adds a policy to your IAM user or role that explicitly denies access to the sts:GetCallerIdentity action, you can still perform this operation. Permissions are not required because the same information is returned when an IAM user or role is denied access.
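
The boto3 call is a one-liner, and the response includes the UserId, Account and Arn:

import boto3

sts = boto3.client("sts")
identity = sts.get_caller_identity()
# Works even without iam:GetUser permission
print(identity["UserId"], identity["Account"], identity["Arn"])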

`list-user-policies` Tests failing in Python 3.6

E             At index 4 diff: "call().get_user_policy(PolicyName='policy-one', UserName='one')"
              != "call().get_user_policy(UserName='one', PolicyName='policy-one')"

Looks like there's no guarantee of the ordering of the parameters when they are pretty-printed here:

assert [str(c) for c in boto3.mock_calls] == [
    "call()",
    "call('iam')",
    "call().get_paginator('list_users')",
    "call().get_paginator('list_user_policies')",
    "call().get_user_policy(UserName='one', PolicyName='policy-one')",

Initial research

The goal of this tool is to provide a CLI for creating IAM access credentials - an access key and a secret key - that are restricted to reading from a specific bucket, writing to a specific bucket, or both.

The goal is to never have to go through the manual process described in dogsheep/dogsheep-photos#4 ever again.

Research bucket policies

An optional flag for attaching bucket policies to the new S3 bucket. These are just like IAM user policies, but attached to the bucket itself.

Originally posted by @zacaytion in #7 (comment)

I need to research bucket policies to fully understand what kinds of things they are useful for and how they should be supported by this tool.
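
For reference, attaching a bucket policy is a single API call; a sketch using an illustrative public-read policy:

import json

import boto3

s3 = boto3.client("s3")
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::my-new-s3-bucket/*"],
        }
    ],
}
s3.put_bucket_policy(Bucket="my-new-s3-bucket", Policy=json.dumps(policy))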
