s3pypi's Introduction

S3PyPI

S3PyPI is a CLI for creating a Python Package Repository in an S3 bucket.

Why?

The official Python Package Index (PyPI) is a public repository of Python software. It's used by pip to download packages.

If you work at a company, you may wish to publish your packages somewhere private instead, and still have them be accessible via pip install. This requires hosting your own repository.

S3PyPI enables hosting a private repository at a low cost. It requires only an S3 bucket for storage, and some way to serve files over HTTPS (e.g. Amazon CloudFront).

Publishing packages and index pages to S3 is done using the s3pypi CLI. Creating the S3 bucket and CloudFront distribution is done using a provided Terraform configuration, which you can tailor to your own needs.

Getting started

Installation

Install s3pypi using pip:

$ pip install s3pypi

Setting up S3 and CloudFront

Before you can start using s3pypi, you must set up an S3 bucket for storing packages, and a CloudFront distribution for serving files over HTTPS. Both of these can be created using the Terraform configuration provided in the terraform/ directory:

$ git clone https://github.com/gorilla-co/s3pypi.git
$ cd s3pypi/terraform/

$ terraform init
$ terraform apply

You will be asked to enter your desired AWS region, S3 bucket name, and domain name for CloudFront. You can also enter these in a file named config.auto.tfvars:

region = "eu-west-1"
bucket = "example-bucket"
domain = "pypi.example.com"

DNS and HTTPS

The Terraform configuration assumes that a Route 53 hosted zone exists for your domain, with a matching (wildcard) certificate in AWS Certificate Manager. If your certificate is a wildcard certificate, add use_wildcard_certificate = true to config.auto.tfvars.

Distributed locking with DynamoDB

To ensure that concurrent invocations of s3pypi do not overwrite each other's changes, the objects in S3 can be locked via an optional DynamoDB table (using the --lock-indexes option). To create this table, add enable_dynamodb_locking = true to config.auto.tfvars.
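
For example, once the table exists, locking can be enabled per invocation (a usage sketch; see s3pypi --help for the authoritative flags):

$ s3pypi upload dist/* --bucket example-bucket --lock-indexes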

Basic authentication

To enable basic authentication, add enable_basic_auth = true to config.auto.tfvars. This will attach a Lambda@Edge function to your CloudFront distribution that reads user passwords from AWS Systems Manager Parameter Store. Users and passwords can be configured using the put_user.py script:

$ basic_auth/put_user.py pypi.example.com alice
Password:

This creates a parameter named /s3pypi/pypi.example.com/users/alice. Passwords are hashed with a random salt, and stored as JSON objects:

{
  "password_hash": "7364151acc6229ec1468f54986a7614a8b215c26",
  "password_salt": "RRoCSRzvYJ1xRra2TWzhqS70wn84Sb_ElKxpl49o3Y0"
}
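
At request time, the Lambda@Edge function presumably recomputes the salted hash of the supplied password and compares it against the stored value. A minimal Python sketch of that idea, assuming a generic salted-hash scheme (the actual construction used by put_user.py may differ):

import hashlib
import hmac

def verify(password: str, record: dict) -> bool:
    # Recompute the salted hash and compare in constant time.
    # NOTE: this hash construction is an assumption, not s3pypi's actual scheme.
    digest = hashlib.sha1((record["password_salt"] + password).encode()).hexdigest()
    return hmac.compare_digest(digest, record["password_hash"])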

Terraform module

The Terraform configuration can also be included in your own project as a module:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}

provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

module "s3pypi" {
  source = "github.com/gorilla-co/s3pypi//terraform/modules/s3pypi"

  bucket = "example-bucket"
  domain = "pypi.example.com"

  use_wildcard_certificate = true
  enable_dynamodb_locking  = true
  enable_basic_auth        = true

  providers = {
    aws.us_east_1 = aws.us_east_1
  }
}

Migrating from s3pypi 0.x to 1.x

Existing resources created using the CloudFormation templates from s3pypi 0.x can be imported into Terraform and removed from CloudFormation. For example:

$ terraform init
$ terraform import module.s3pypi.aws_s3_bucket.pypi example-bucket
$ terraform import module.s3pypi.aws_cloudfront_distribution.cdn EDFDVBD6EXAMPLE
$ terraform apply

In this new configuration, CloudFront uses the S3 REST API endpoint as its origin, not the S3 website endpoint. This allows the bucket to remain private, with CloudFront accessing it through an Origin Access Identity (OAI). To make this work with your existing S3 bucket, all <package>/index.html objects must be renamed to <package>/. You can do so using the provided script:

$ scripts/migrate-s3-index.py example-bucket
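
The rename amounts to a copy-and-delete per object. A minimal boto3 sketch of the same idea (this is not the bundled script; example-bucket is a placeholder):

import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"

# Rename every <package>/index.html object to <package>/ via copy + delete.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/index.html"):
            new_key = key[: -len("index.html")]
            s3.copy_object(Bucket=bucket, Key=new_key,
                           CopySource={"Bucket": bucket, "Key": key})
            s3.delete_object(Bucket=bucket, Key=key)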

To instead keep using the old configuration with a publicly accessible S3 website endpoint, pass the following options when uploading packages:

$ s3pypi upload ... --index.html --s3-put-args='ACL=public-read'

Example IAM policy

The s3pypi CLI requires the following IAM permissions to access S3 and (optionally) DynamoDB. Replace example-bucket with your S3 bucket name.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::example-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::example-bucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/example-bucket-locks"
    }
  ]
}

Usage

Distributing packages

You can now use s3pypi to upload packages to S3:

$ cd /path/to/your-project/
$ python setup.py sdist bdist_wheel

$ s3pypi upload dist/* --bucket example-bucket [--prefix PREFIX]

See s3pypi --help for a description of all options.

Installing packages

Install your packages using pip by pointing the --extra-index-url to your CloudFront domain. If you used --prefix while uploading, then add the prefix here as well:

$ pip install your-project --extra-index-url https://pypi.example.com/PREFIX/

Alternatively, you can configure the index URL in ~/.pip/pip.conf:

[global]
extra-index-url = https://pypi.example.com/PREFIX/

Roadmap

Currently there are no plans to add new features to s3pypi. If you have any ideas for new features, check out our contributing guidelines on how to get these on our roadmap.

Contact

Got any questions or ideas? We'd love to hear from you. Check out our contributing guidelines for ways to offer feedback and contribute.

License

Copyright (c) Gorillini NV. All rights reserved.

Licensed under the MIT License.

s3pypi's Issues

Access Denied While Installing Python Package due to package version

In the project, we are using automated versioning, so packages are pushed to the private repo with versions like 0.1.0.post36+gec2b198.
Whenever we install such a package with
pip install <package_name>==0.1.0.post36+gec2b198 --extra-index-url https://private-pip-repo
the package is not installed and we get an access denied error.

However, we are able to install the package with
pip install <package_name>==0.1.0.post36 --extra-index-url https://private-pip-repo
Basically, the (+) sign in the package version causes an issue while parsing; index.html needs to be fixed.

When we tested with these changes, things worked properly.
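
For context, PEP 503 expects file links in index pages to be properly URL-encoded, which turns the + of a local version into %2B. A hypothetical one-line fix when generating the links:

from urllib.parse import quote

filename = "package_name-0.1.0.post36+gec2b198-py3-none-any.whl"
# Percent-encode the href so pip requests the right key (+ becomes %2B).
link = '<a href="{}">{}</a>'.format(quote(filename), filename)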

Windows MSYS - put_package hangs without using binary read mode

This issue appears when using MSYS on Windows to deploy a package. When attempting to upload the tar.gz file, the code uses a file object but does not specify binary read mode. Without this, the process hangs on Windows. The solution seems to be simply adding mode='rb' when opening the file.
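
In other words, the suggested fix is a one-line change where the archive is opened (a sketch with a hypothetical file name):

# Open the distribution in binary read mode so the upload doesn't hang on Windows/MSYS.
with open("dist/mypackage-1.0.0.tar.gz", mode="rb") as f:
    data = f.read()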

PutObject AccessDenied

I have Admin privileges in my AWS account.

I have boto3 installed in a python virtual env so it's isolated.

I have Block Public Access set to On on my S3 Bucket.

I have my ~/.aws/credentials && ~/.aws/config set up properly

At the AWS command line I can upload a file to my S3 bucket.

In a python3 interactive command shell, I can walk through a boto3 example and use the put_object method to upload a file.

Using an alternative Python script (pip install pypiprivate, in a separate venv because it uses two libraries at different version levels than yours), I also successfully uploaded a file to my S3 bucket.


If I open up public access, I have no trouble at all issuing s3pypi --bucket bucket_name. Works like a charm. I'll leave it there, set the bucket back to private, and try the same command, and the script will inform me that the package already exists in the repository, so the script has access.

If I try to upload a new package, or a new version of the same package, I'll get the Access Denied error.

(s3pypi_venv) > s3pypi --bucket my_bucket

Traceback (most recent call last):
  s3pypi_venv/bin/s3pypi", line 10, in
  s3pypi/main.py", line 61, in main
  s3pypi/main.py", line 24, in create_and_upload_package
  s3pypi/storage.py", line 55, in put_package
  boto3/resources/factory.py", line 520, in do_action
  boto3/resources/action.py", line 83, in call
  botocore/client.py", line 357, in _api_call
  botocore/client.py", line 661, in _make_api_call

botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

Compatibility with tox

I ran into an issue when trying to use s3pypi in a tox project.

Out of the box, s3pypi leans on shell expansion (dist/*) to upload multiple files to an S3 bucket. Tox does not seem to support this shell expansion, at least on Ubuntu using bash.

I looked into ways to get tox to support this, and the answer is to use "bash exec s3pypi" as your command in the tox config, which feels kinda clunky.

Another option would be to hard-code file paths in your tox config. Since the files themselves change names every time the version changes, you really don't want to hard-code the file names there.

Ideally, if I have a tox "publish" command, I should just have to add it to the deps section and execute s3pypi, pointing to my dist folder.

I am proposing the following changes (a sketch follows below):
If an arg is a file, behavior is unchanged.
If an arg is a directory, try to expand the args with all of the files in that directory, but ignore errors, since some of the contents may not be valid wheel or tar files.

It is also a goal to implement this in a way that does not require changes in the future if we discover more file types that need to be uploaded.

I will prep the pull request.
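
A directory-expansion helper along the proposed lines might look like this sketch (not the actual patch):

from pathlib import Path

def expand_dist_args(args):
    # Pass files through unchanged; expand directories into the files they contain.
    for arg in args:
        path = Path(arg)
        if path.is_dir():
            yield from sorted(p for p in path.iterdir() if p.is_file())
        else:
            yield path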

Fails to push to S3

s3pypi runs python setup.py, and its stdout does not match the expected pattern.

Reproduce

git clone https://github.com/dbarnett/python-helloworld.git
cd python-helloworld/
s3pypi --bucket pypi.example.com

Error

Traceback (most recent call last):
  File "/usr/local/bin/s3pypi", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/site-packages/s3pypi/cli.py", line 21, in main
    package = Package.create(args.wheel)
  File "/usr/local/lib/python2.7/site-packages/s3pypi/package.py", line 53, in create
    raise RuntimeError(stdout)
RuntimeError: running sdist
running egg_info
creating helloworld.egg-info
writing helloworld.egg-info/PKG-INFO
writing top-level names to helloworld.egg-info/top_level.txt
writing dependency_links to helloworld.egg-info/dependency_links.txt
writing entry points to helloworld.egg-info/entry_points.txt
writing manifest file 'helloworld.egg-info/SOURCES.txt'
reading manifest file 'helloworld.egg-info/SOURCES.txt'
writing manifest file 'helloworld.egg-info/SOURCES.txt'
running check
creating helloworld-0.1
creating helloworld-0.1/helloworld
creating helloworld-0.1/helloworld.egg-info
making hard links in helloworld-0.1...
hard linking README -> helloworld-0.1
hard linking setup.py -> helloworld-0.1
hard linking helloworld/__init__.py -> helloworld-0.1/helloworld
hard linking helloworld/main.py -> helloworld-0.1/helloworld
hard linking helloworld.egg-info/PKG-INFO -> helloworld-0.1/helloworld.egg-info
hard linking helloworld.egg-info/SOURCES.txt -> helloworld-0.1/helloworld.egg-info
hard linking helloworld.egg-info/dependency_links.txt -> helloworld-0.1/helloworld.egg-info
hard linking helloworld.egg-info/entry_points.txt -> helloworld-0.1/helloworld.egg-info
hard linking helloworld.egg-info/top_level.txt -> helloworld-0.1/helloworld.egg-info
Writing helloworld-0.1/setup.cfg
creating dist
Creating tar archive
removing 'helloworld-0.1' (and everything under it)
running bdist_wheel
running build
running build_py
creating build
creating build/lib
creating build/lib/helloworld
copying helloworld/__init__.py -> build/lib/helloworld
copying helloworld/main.py -> build/lib/helloworld
installing to build/bdist.macosx-10.11-x86_64/wheel
running install
running install_lib
creating build/bdist.macosx-10.11-x86_64
creating build/bdist.macosx-10.11-x86_64/wheel
creating build/bdist.macosx-10.11-x86_64/wheel/helloworld
copying build/lib/helloworld/__init__.py -> build/bdist.macosx-10.11-x86_64/wheel/helloworld
copying build/lib/helloworld/main.py -> build/bdist.macosx-10.11-x86_64/wheel/helloworld
running install_egg_info
Copying helloworld.egg-info to build/bdist.macosx-10.11-x86_64/wheel/helloworld-0.1-py2.7.egg-info
running install_scripts
creating build/bdist.macosx-10.11-x86_64/wheel/helloworld-0.1.dist-info/WHEEL

known boto issue inconsistent with writeup instructions

Boto has a problem with buckets with dots in the name.

boto/boto#2836

It sounds like boto3 does not have this problem.

The directions on

https://novemberfive.co/blog/opensource-pypi-package-repository-tutorial/

imply that the bucket name could (or should) contain dots, matching the hostname from which you pip install packages. I needed to change the bucket name to one with no dots in order to successfully upload a package.

Unable to install the package using --extra-index-url

Hi, this follows some comments in #34.

Essentially, I'm trying to install the uploaded package using:

pip install --upgrade {mypackage} --extra-index-url https://pypi-index-bucket.com

But get timeout responses:

Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fee0c644710>, 'Connection to pyp-index-bucket.com timed out. (connect timeout=15)')': /mypackage/

However, if I link the index.html directly:

pip install predict --find-links https://s3.eu-west-2.amazonaws.com/pypi-index-bucket.com/mypackage/index.html mypackage

I can install the package without issues. But this then would defeat the purpose of s3pypi...

Any hint as to what could be going wrong? I've followed the steps, and at the end placed the CNAMERecord output in Route 53 as an A record.

Thanks!

PublishS3PyPiPackages role is missing s3:PutObjectAcl

I find that without this, PutObject fails with Access Denied. Removing the ACL from the PutObject succeeds. Adding this permission to the role succeeds. Also there were some warnings that the policy editor was generating that I fixed. This is my resulting policy with no warnings:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetBucketLocation",
            "Resource": "arn:aws:s3:::pypi.dev.proctorlp.com"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::pypi.dev.proctorlp.com",
                "arn:aws:s3:::pypi.dev.proctorlp.com/*"
            ]
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}

Alpha, beta releases are not properly handled

Alpha or beta releases of a library, in the form of 1.23.0a0 or 1.23.0b0, are wrongly picked up as the release itself by the following line:

https://github.com/novemberfiveco/s3pypi/blob/4851ca304386b7e673f8c652ed4e2fa4c8519901/s3pypi/package.py#L97

The regular expression in that line only considers the case where a release consists exclusively of digits.

This means that, if I make a beta release 1.23.0b0 of a library and then want to release 1.23.0, I can't because s3pypi tells me that this version already exists.

Also, the library should provide a way to undo a release. Currently, if I release version 1.23.0b0 and then delete the builds from the S3 bucket, that's not enough to re-release that version, because s3pypi only looks at the generated HTML file to check for existing releases.

No sha256 hash of packages

Hi,

Right now, index.html doesn't contain the sha256 hashes of published packages. This causes an issue when we export requirements from pyproject.toml (poetry export --format="requirements.txt" > requirements.txt) and install them with pip; we use this approach to install packages inside a Docker image.

It would be great to calculate the hashes and include them in index.html.
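
For reference, PEP 503 lets an index embed the hash as a URL fragment on each file link. A minimal sketch of computing and embedding it (file_link is a hypothetical helper, not s3pypi's code):

import hashlib

def file_link(path: str, name: str) -> str:
    # Hash the distribution file and attach the digest per PEP 503.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return '<a href="{0}#sha256={1}">{0}</a>'.format(name, digest)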

Authenticated Access using Lambda@Edge

Hi,

I have a need for an access-restricted PyPi server on AWS. However, I cannot limit access to certain networks.

Based on your project, I came up with an infrastructure setup that allows access to the S3 bucket only via a CloudFront distribution, or if access was granted by a publish or user admin managed policy; the CloudFront distribution checks every incoming call in a Lambda@Edge function for the presence of HTTP basic auth credentials and validates them against a user store in the S3 bucket.

What seemed straightforward in the beginning turned a bit more complicated when I realized that I had to split the infrastructure definition into multiple CloudFormation stacks for deployment into multiple regions. (CloudFront requires that the Lambda@Edge function is deployed in N. Virginia; company policy requires the S3 bucket to be deployed in Frankfurt.)

If you are interested, I can clean up my changes / additions and prepare a pull request at some point next week. (They certainly need some cleaning up - this was the first time I used CloudFormation.) Of course, it's also fine if you think maintaining CloudFormation stacks and code for too many deployment scenarios would overly complicate your setup.

Cheers!
Christoph

Does S3PyPI support extras?

I am still using v0.11.0 of s3pypi. Does it support extra dependencies? My library has boto3 as an extra dependency, so my expectation is that pip install my-package from s3pypi will not include boto3, and pip install my-package[boto3] will install boto3. However, when I try the latter command I get "no matches found".

Feature request: Support for pip search?

This is a cool project. pip install works as advertised, and I am curious if it's possible to support pip search as well?

I don't know the technical details of the protocol used by pip, but thought I should ask anyway. 😃

Wheel name/version parsing incorrect when versions contain '-' or build tags are used

For a wheel following the official naming convention:

{distribution}-{version}(-{build tag})?-{python tag}-{abi tag}-{platform tag}.whl

The version in the wheel metadata will be {version}-{build tag}. The Package constructor (self.name, self.version = name.rsplit("-", 1)) will interpret this as name = distribution-version and version = build tag.

This is easy to see when you use the --tag-date option in a setup.py.
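
A parser keyed to the official pattern avoids the ambiguity, since build tags must start with a digit. A hypothetical sketch (not s3pypi's implementation):

import re

# {distribution}-{version}(-{build tag})?-{python tag}-{abi tag}-{platform tag}.whl
WHEEL_RE = re.compile(
    r"^(?P<distribution>.+?)-(?P<version>[^-]+)"
    r"(?:-(?P<build>\d[^-]*))?"
    r"-(?P<python>[^-]+)-(?P<abi>[^-]+)-(?P<platform>[^-]+)\.whl$"
)

def parse_wheel_filename(filename):
    m = WHEEL_RE.match(filename)
    if m is None:
        raise ValueError("not a valid wheel filename: %s" % filename)
    return m.group("distribution"), m.group("version")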

Allow to set any ACL, not just "public-read" and "private"

As far as I can see, there is currently a flag that lets you toggle between the "private" and "public-read" ACLs. However, I'm facing the issue that I'm deploying my package from a Jenkins instance running in a different AWS account, so the account that owns the bucket does not have sufficient permissions on the object. It would be easy to set the bucket-owner-full-control ACL if this package exposed that argument directly instead of converting it to a flag.

s3pypi cli using invalid keyword argument for argparse.ArgumentParser

(build) admin@ip-192-168-2-109:~/lxml-3.4.4$ s3pypi --bucket my_bucket
Traceback (most recent call last):
  File "/home/admin/build/bin/s3pypi", line 11, in <module>
    sys.exit(main())
  File "/home/admin/build/lib/python3.5/site-packages/s3pypi/cli.py", line 14, in main
    p = argparse.ArgumentParser(prog=__prog__, version=__version__)
TypeError: __init__() got an unexpected keyword argument 'version'

https://docs.python.org/3/library/argparse.html#argparse.ArgumentParser
According to the docs, version is not a valid keyword argument for the argparse.ArgumentParser constructor. Instead, it's supposed to be used within the context of adding an argument, like so:

p.add_argument('--version', action='version', version='{} {}'.format(__prog__, __version__))

Python-2 and Python-3 on the same machine

In package.py a distribution is created with:
cmd = ['python', 'setup.py', 'sdist', '--formats', 'gztar']
stdout = check_output(cmd).decode().strip()

If s3pypi runs on a machine that has both Python 2 and 3, this will usually default to Python 2.

I don't think it causes any major issues, but it feels unsanitary while building Python 3 packages. Could we add an option to explicitly use Python 3?
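
One common fix is to invoke the interpreter that is running s3pypi itself, rather than whatever python resolves to on PATH; a sketch:

import sys
from subprocess import check_output

# sys.executable is the interpreter running s3pypi, so builds use the same Python.
cmd = [sys.executable, 'setup.py', 'sdist', '--formats', 'gztar']
stdout = check_output(cmd).decode().strip()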

Unable to install package from s3pypi

I've just set up my S3 PyPI as explained in this article from novemberfive. I was able to publish a package to s3pypi using the command line utility, and I can see it in my bucket. When I try to install from there I get a pip error:

$ pip install my-project --extra-index-url pypi.example.com
Collecting my-project
  Url 'pypi.example.com/my-project/' is ignored. It is either a non-existing path or lacks a specific scheme.
  Could not find a version that satisfies the requirement my-project (from versions: )
No matching distribution found for my-project

Does this mean there is something wrong with the folder structure in my bucket? I can see that the folder for my project exists in the bucket and it contains a tar.gz file and an index.html.

Troubles with installing my private package from private S3 index built with S3PyPi

Hi all,
First of all I would like to thank you for this wonderful tool. Building a private S3 index was so easy using S3PyPI (following your tutorial).

Actually I'm trying to install a homemade package from my private S3 PyPI index using pipenv.

I'm using:

  • python 3.7.3
  • pip 19.2.1
  • pipenv 2018.11.26

The install command line:
pipenv install my-homemade-package --extra-index-url https://my-private-index

I'm facing the issue below and do not understand what I'm doing wrong (the final error asks whether the index was even reachable).

Installing my-homemade-package…
Adding my-homemade-package to Pipfile's [packages]…
✔ Installation Succeeded 
Pipfile.lock (67b9a8) out of date, updating to (8e084c)…
Locking [dev-packages] dependencies…
✔ Success! 
Locking [packages] dependencies…
✘ Locking Failed! 
[pipenv.exceptions.ResolutionFailure]:   File "/home/redhouane/.local/lib/python3.7/site-packages/pipenv/resolver.py", line 69, in resolve
[pipenv.exceptions.ResolutionFailure]:       req_dir=requirements_dir
[pipenv.exceptions.ResolutionFailure]:   File "/home/redhouane/.local/lib/python3.7/site-packages/pipenv/utils.py", line 726, in resolve_deps
[pipenv.exceptions.ResolutionFailure]:       req_dir=req_dir,
[pipenv.exceptions.ResolutionFailure]:   File "/home/redhouane/.local/lib/python3.7/site-packages/pipenv/utils.py", line 480, in actually_resolve_deps
[pipenv.exceptions.ResolutionFailure]:       resolved_tree = resolver.resolve()
[pipenv.exceptions.ResolutionFailure]:   File "/home/redhouane/.local/lib/python3.7/site-packages/pipenv/utils.py", line 395, in resolve
[pipenv.exceptions.ResolutionFailure]:       raise ResolutionFailure(message=str(e))
[pipenv.exceptions.ResolutionFailure]:       pipenv.exceptions.ResolutionFailure: ERROR: ERROR: Could not find a version that matches my-homemade-package
[pipenv.exceptions.ResolutionFailure]:       No versions found
[pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
  First try clearing your dependency cache with $ pipenv lock --clear, then try the original command again.
 Alternatively, you can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation.
  Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: ERROR: Could not find a version that matches my-homemade-package
No versions found
Were https://pypi.org/simple or https://my-private-index.my-domaine.com/my-homemade-package/ reachable?

I did try the approaches recommended in the stack trace:

First:
1 - Try clearing your dependency cache with pipenv lock --clear
2 - Try the original command again (the issue persists)

Second:
1 - Use pipenv install --skip-lock to bypass this mechanism
2 - run pipenv graph to inspect the situation (No dependency related to my private package is listed)

Third:
Try pipenv lock --pre if it is a pre-release dependency

Fourth:
I tried to install my homemade package from my private index using pip with the following command line:
pip install my-homemade-package --extra-index-url https://my-private-index.my-domaine.com/
But it still does not work, with the following error message:

WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f9b67abb278>: Failed to establish a new connection: [Errno -2] Name or service not known')': /my-homemade-package/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f9b67abb550>: Failed to establish a new connection: [Errno -2] Name or service not known')': /my-homemade-package/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f9b67d0e5c0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /my-homemade-package/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f9b67a9fef0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /my-homemade-package/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f9b67a9ff98>: Failed to establish a new connection: [Errno -2] Name or service not known')': /my-homemade-package/
ERROR: Could not find a version that satisfies the requirement my-homemade-package (from versions: none)
ERROR: No matching distribution found for my-homemade-package

P.S.: I did try to add my private index URL as an enabled source within my Pipfile with the following lines:

[[source]]
url = "https://my-private-index.my-domaine.com/"
verify_ssl = true
name = "my-private-index"

but, one more time, the issue still persists.

Did you ever face a similar issue?
Thanks in advance for your help,

Red1

Force removes other platforms

Hello,
I am creating packages in my bucket for Linux, Mac, and Windows.
If the package exists for Linux and Mac, but not for Windows, calling s3pypi on Windows complains that the version already exists. When I call it with --force, the correct files are uploaded, but the versions from the other platforms are removed from index.html and I have to restore them manually.

[bug] It is possible to overwrite existing files when using `--dist-path` and having multiple versions in your dist directory.

Right now, it is possible to implicitly overwrite the package version files when uploading packages using --dist-path and having multiple package versions in your dist directory.

This is because s3pypi checks only the version of the very last file found in the dist directory when checking for conflict with existing package versions. This check should happen independently for each package file.
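
A sketch of the proposed per-file behavior (parse_version, version_exists, and upload are hypothetical helpers, not s3pypi's API):

def upload_all(dist_files, index, force=False):
    for dist_file in dist_files:
        version = parse_version(dist_file)  # hypothetical helper
        if version_exists(index, version) and not force:
            # Check each file independently instead of only the last one found.
            raise SystemExit(f"{dist_file}: version {version} already exists")
        upload(dist_file)  # hypothetical helper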

dataclasses need backporting to be used on python 3.6

Hi,

While trying to use s3pypi, I ran into an issue where the dataclasses module was being imported, but on Python 3.6 we need to load the backported dataclasses package instead. I will have a pull request that does this shortly. Thanks!

s3pypi --acl bucket-owner-full-control --s3-put-args ServerSideEncryption=AES256 --region us-west-2 --s3-endpoint-url https://bucket.vpce-an-endpointhashwashere.s3.us-east-2.vpce.amazonaws.com --bucket my-bucket-name dist/*
Traceback (most recent call last):
  File "/home/user/myproject/.tox/publish/bin/s3pypi", line 5, in <module>
    from s3pypi.__main__ import main
  File "/home/user/myproject/.tox/publish/lib/python3.6/site-packages/s3pypi/__main__.py", line 9, in <module>
    from s3pypi import __prog__, __version__, core
  File "/home/user/myproject/.tox/publish/lib/python3.6/site-packages/s3pypi/core.py", line 5, in <module>
    from dataclasses import dataclass
ModuleNotFoundError: No module named 'dataclasses'

Wheel name regex fails

https://github.com/novemberfiveco/s3pypi/blob/6ac00dd636b826fda08c58306b2adff21bd210cd/s3pypi/package.py#L54

While debugging publishing with s3pypi on my build server, I finally discovered that the issue was this regex not matching the python setup.py output on my build server.

The output does contain creating build/bdist.linux-x86_64/wheel/PACKAGENAME-0.2.16.dist-info/WHEEL, but that is the end of the output.

The build server does successfully build an installable wheel. I'm not sure whether this is an issue with the output verbosity of running python setup.py [ARGS] on the build server or something else, but the build server does have up-to-date setuptools and wheel packages installed, and this seems to be more an issue of the regex searching for a wheel name in inconsistent command output.

Add option for specifying S3 endpoint + credentials

Thanks for your tool, which is awesome. I would like to use it for managing internal company Python packages using Minio, an S3-like server which is 100% compatible with the original AWS S3 API. The only thing is that I have to specify the API endpoint to boto, as with any AWS S3 related tool. I would like to know if you are OK with adding the following options to the CLI:

  • --endpoint URL
  • AWS credentials, which are mandatory for uploading to Minio S3.

It would still be compatible with your solution, just adding option parsing and updating the boto resource creation as follows:

import boto3
from botocore.client import Config

s3 = boto3.resource('s3',
                    endpoint_url='minio_url',
                    aws_access_key_id='YOUR-ACCESSKEYID',
                    aws_secret_access_key='YOUR-SECRETACCESSKEY',
                    config=Config(signature_version='s3v4'),
                    region_name='us-east-1')

If this is no bother, I would gladly update the tool myself and submit a pull request.

Having Trouble Logging

Hey

I'm trying out this library for the first time. I hit an error on the regex that looks for a wheel, but I can't see from the trace what string is being passed. So I added the verbose flag as suggested, but the logger doesn't seem to have a handler set up.

neil~/Documents/AMICUS/Code/global_toolkit/toolkit$ s3pypi --bucket

/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'long_description_content_type'
warnings.warn(msg)
Traceback (most recent call last):
  File "/usr/local/bin/s3pypi", line 11, in
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/s3pypi/main.py", line 52, in main
    create_and_upload_package(args)
  File "/usr/local/lib/python2.7/dist-packages/s3pypi/main.py", line 20, in create_and_upload_package
    package = Package.create(args.wheel, args.sdist)
  File "/usr/local/lib/python2.7/dist-packages/s3pypi/package.py", line 87, in create
    files.append(os.path.basename(Package._find_wheel_name(stdout)))
  File "/usr/local/lib/python2.7/dist-packages/s3pypi/package.py", line 60, in _find_wheel_name
    raise RuntimeError('Wheel name not found! (use --verbose to view output)')
RuntimeError: Wheel name not found! (use --verbose to view output)

neil~/Documents/AMICUS/Code/global_toolkit/toolkit$ s3pypi --bucket --verbose
No handlers could be found for logger "root"
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'long_description_content_type'
warnings.warn(msg)
Traceback (most recent call last):
  File "/usr/local/bin/s3pypi", line 11, in
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/s3pypi/main.py", line 52, in main
    create_and_upload_package(args)
  File "/usr/local/lib/python2.7/dist-packages/s3pypi/main.py", line 20, in create_and_upload_package
    package = Package.create(args.wheel, args.sdist)
  File "/usr/local/lib/python2.7/dist-packages/s3pypi/package.py", line 87, in create
    files.append(os.path.basename(Package._find_wheel_name(stdout)))
  File "/usr/local/lib/python2.7/dist-packages/s3pypi/package.py", line 60, in _find_wheel_name
    raise RuntimeError('Wheel name not found! (use --verbose to view output)')
RuntimeError: Wheel name not found! (use --verbose to view output)
neil~/Documents/AMICUS/Code/global_toolkit/toolkit$

Unable to install packages when `--private` option is set

I started using the --private option to submit my packages:

AWS_PROFILE=production s3pypi --bucket pypi.my-company.io --force --private

But every time I did that, when installing the package I got the following message:

pip install my-package --extra-index-url https://pypi.my-company.io
Collecting my-package
  Could not find a version that satisfies the requirement my-package (from versions: )
No matching distribution found for my-package

If I submit it without the --private, it installs correctly.
Is this a bug? or am I doing something wrong?

Cannot figure out how to configure whitelist

I realize this is likely an issue with CloudFront and how to write the config file... but I'm trying to use the s3-pypi-with-waf.json template and it's throwing this error:

An error occurred (ValidationError) when calling the CreateStack operation: Parameters: [WhitelistedCIDRBlock] must have value

Any attempts at modifying it have not been successful.

Outdated wheel version constraint

I have a project using s3pypi and some other packages for building and publishing releases. One of those packages depends on a newer version of wheel, so I'm currently unable to upgrade.

The current stable version of s3pypi (0.11.1) depends on wheel 0.33.6. This version of wheel is over two and a half years old. Since then there have been 10 releases. Is it possible to support a more modern version of wheel, say 0.37.x?

Is there a way to add an index.html for the entire bucket?

Hi guys,
Thanks for your great work.
I'm wondering if there is a way to add an index.html for the entire bucket.
Right now, s3pypi only generates one index.html for each package; is there a way to generate one for the entire S3 bucket?

publishing broken as of 0.9.1 release

s3pypi --bucket pypi.mydomainbucket --private --force
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help

error: invalid command 'bdist_wheel'

Allow uploading built files directly

s3pypi creates the package using a fixed command every time it runs. This doesn't work with tools like poetry, which hide the complexity of setup.py and produce dists using a different command (poetry build). Can we instead provide an option like --file <file-to-upload> (as an example) that skips the create fn call?

add an option to specify the aws profile

This is a feature request to add a --profile option to the command line tool so I can specify the AWS profile I want to use.

Currently, this is what I'm getting:

s3pypi --bucket pypi.mywebsite.com --profile production
usage: s3pypi [-h] --bucket BUCKET [--secret SECRET] [--region REGION]
              [--force] [--no-wheel] [--bare] [--private]
s3pypi: error: unrecognized arguments: --profile production

Thank you!
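
As a workaround until such a flag exists, boto3 honors the AWS_PROFILE environment variable (the same pattern used in another issue above):

$ AWS_PROFILE=production s3pypi --bucket pypi.mywebsite.com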

bytes decoding issue

(build) admin@ip-192-168-2-109:~/lxml-3.4.4$ s3pypi --bucket my_bucket
/usr/lib/python3.5/distutils/dist.py:261: UserWarning: Unknown distribution option: 'bugtrack_url'
  warnings.warn(msg)
Traceback (most recent call last):
  File "/home/admin/build/bin/s3pypi", line 11, in <module>
    sys.exit(main())
  File "/home/admin/build/lib/python3.5/site-packages/s3pypi/cli.py", line 22, in main
    package = Package.create(args.wheel)
  File "/home/admin/build/lib/python3.5/site-packages/s3pypi/package.py", line 50, in create
    match = re.search('^making hard links in (.+)\.\.\.$', stdout, flags=re.MULTILINE)
  File "/usr/lib/python3.5/re.py", line 173, in search
    return _compile(pattern, flags).search(string)
TypeError: cannot use a string pattern on a bytes-like object

I changed line 50 of /home/admin/build/lib/python3.5/site-packages/s3pypi/package.py from

match = re.search('^making hard links in (.+)\.\.\.$', stdout, flags=re.MULTILINE)

to

match = re.search('^making hard links in (.+)\.\.\.$', stdout.decode("utf-8"), flags=re.MULTILINE)

and then its able to continue.

s3pypi 0.11.0 can't load aws creds from env

I always get:

Traceback (most recent call last):
  File "/dev/pypi-virtual-env/bin/s3pypi", line 8, in <module>
    sys.exit(main())
  File "/dev/pypi-virtual-env/lib/python3.9/site-packages/s3pypi/__main__.py", line 75, in main
    create_and_upload_package(args)
  File "/dev/pypi-virtual-env/lib/python3.9/site-packages/s3pypi/__main__.py", line 25, in create_and_upload_package
    storage.put_package(package, args.dist_path)
  File "/dev/pypi-virtual-env/lib/python3.9/site-packages/s3pypi/storage.py", line 60, in put_package
    self._object(package, filename).put(
  File "/dev/pypi-virtual-env/lib/python3.9/site-packages/boto3/resources/factory.py", line 520, in do_action
    response = action(self, *args, **kwargs)
  File "/dev/pypi-virtual-env/lib/python3.9/site-packages/boto3/resources/action.py", line 83, in __call__
    response = getattr(parent.meta.client, operation_name)(*args, **params)
  File "/dev/pypi-virtual-env/lib/python3.9/site-packages/botocore/client.py", line 391, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/dev/pypi-virtual-env/lib/python3.9/site-packages/botocore/client.py", line 719, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

all good with 0.10.1 version.

Root index page is empty with --prefix

Hi there,

Thanks for the project - using latest RC.

No big deal, but with --prefix the root index page seems to be empty.

I've tested without, and with the index at the root of the S3 bucket, it's generated correctly.

Thanks,

David

Not handling setup --fullname returning multiple lines

When s3pypi's package.py calls my setup.py (because dist_path is empty), setup.py returns:

['pytds']
pfi-python-tds-1.10.0-5-g032f

s3pypi then attempts to use that whole thing to find files to upload, which doesn't work.

I'm not a setup expert, but from what I could tell, my setup.py is okay. I even tried replacing ['pytds'] with a call to setuptools.find_packages(), but got the same result.

setup(name='pfi-python-tds',
      version=version.get_git_version(),
      description='(Forked by PFI) Python DBAPI driver for MSSQL using pure Python TDS (Tabular Data Stream) protocol implementation',
      author='Mikhail Denisenko (Jason Holladay)',
      author_email='[email protected]',
      url='https://bitbucket.org/proctorfinancial/aws-fork-pytds',
      license="MIT",
      packages=find_packages(),
      package_dir={'': 'src'},
      classifiers=[
          'Development Status :: 4 - Beta',
          'Programming Language :: Python',
          'Programming Language :: Python :: 2.7',
          'Programming Language :: Python :: 3.3',
          'Programming Language :: Python :: 3.4',
          'Programming Language :: Python :: 3.5',
      ],
      zip_safe=True,
      install_requires=requirements,
      )

I did this to fix it:

names = list(filter(lambda line: not line.startswith('['),
                    check_output(cmd + ["--fullname"]).decode().strip()
                    .splitlines()))
if len(names) != 1:
    raise ValueError("Only one package name is allowed")
name = names[0]

Unable to list --extra-index-url in requirements.txt

I have a module that has a dependency from a private repository, so when I list the reference to the private repository using --extra-index-url, s3pypi fails to read it as part of install_requires in setup.py, with the following error:

error in test setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers; Invalid requirement, parse error at "'--extra-'"
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/s3pypi/package.py", line 74, in create
    stdout = check_output(cmd).decode().strip()
  File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 395, in check_output
    **kwargs).stdout
  File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 487, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/usr/local/opt/python/bin/python3.7', 'setup.py', 'sdist', '--formats', 'gztar', 'bdist_wheel']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/s3pypi", line 10, in
    sys.exit(main())
  File "/usr/local/lib/python3.7/site-packages/s3pypi/main.py", line 52, in main
    create_and_upload_package(args)
  File "/usr/local/lib/python3.7/site-packages/s3pypi/main.py", line 20, in create_and_upload_package
    package = Package.create(args.wheel, args.sdist)
  File "/usr/local/lib/python3.7/site-packages/s3pypi/package.py", line 76, in create
    raise RuntimeError(e.output.rstrip())
RuntimeError: b''

Feature Request: Invalidate CloudFront Cache

I would appreciate a parameter like --cloud-front-distribution-id=AOEUAOEUAOEU; if provided, it would create a cache invalidation in CloudFront to make sure all requests obtain the most recent package index.

An additional option that is absolutely not required, but could be interesting, would be waiting for the cache invalidation to complete, or just outputting the invalidation ID so that a script can wait for it.
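
For reference, the requested behavior maps onto CloudFront's CreateInvalidation API; a minimal boto3 sketch (distribution ID copied from the example above):

import time

import boto3

cloudfront = boto3.client("cloudfront")

# Invalidate all paths so CloudFront serves the freshly uploaded index pages.
response = cloudfront.create_invalidation(
    DistributionId="AOEUAOEUAOEU",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),
    },
)
print(response["Invalidation"]["Id"])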

bdist_wheel

Is there an easy way to upload a wheel package (if a C extension exists, compile it and include it in the binary distribution) instead of the default source distribution?

I see that in line 38 of s3pypi/package.py, the command that is effectively run is

python setup.py sdist --formats gztar

but what I really want (or at least want to have the option of doing) is

python setup.py bdist_wheel

SSL Certificate - Failure?

I was able to successfully publish packages to S3, and I can go in and view the packages from S3 in the AWS console, but when I try to do a

pip install {project} --extra-index-url https://pypi.example.com/

(project name and actual URL removed)

I receive the error below:

Could not fetch URL https://pypi.example.com/{project}/: There was a problem confirming the ssl certificate: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:581) - skipping
Could not find a version that satisfies the requirement {project} (from versions: )
No matching distribution found for {project}

I'm not sure if this is indeed an SSL certificate issue, or a "Could not find a version that satisfies the requirement..." issue...

I use versioneer for my projects...

Any idea?

I did go and update pip and virtualenv to the newest versions before attempting.

Feature Request: force version override

Some warnings/errors still let the package build and publish complete. Below is an example:

package init file 'actions/tests/__init__.py' not found (or not a regular file)

Stopping on such warnings would probably be the best default behavior. However, you may want to force the build to publish anyway with --force.
If such a build did happen, you may want to --force a new build to override the current (same) version.

Unable to restrict access using CloudFront WAF template

@rubenvdb @mdwint I added a list of static IP ranges to the WhitelistCIDR parameter using the CloudFront WAF template. However, I'm able to install the pip package from the public network (outside the restricted IP ranges), defeating the benefit of a private S3 pip server.

I did follow the exact steps mentioned in the blog to create the CloudFormation stack and S3 bucket.
I have hosted the resources in eu-west-2.

Could you please advise as to what is causing this issue?

Can only ever get latest package from pip

Once I upload a new version of a package to my s3pypi, I can only ever get the latest version after that. Looking at the repo in S3, I can see the entire history of tar.gz files, but the HTML file lists only the most recent version. This is the first Python package I've ever published, so I could be making a beginner's mistake here.

$ pip install pcns-common==0.9 --extra-index-url <my-repo-url>
Could not find a version that satisfies the requirement pcns-common==0.9 (from versions: 0.10)
No matching distribution found for pcns-common==0.9

setup.py

'''pcns-common setup'''

from setuptools import setup

setup(
    name='pcns-common',
    version='0.10',
    author='Andrew O\'Hara',
    author_email='<my email>',
    packages=['pcns.common'],
    install_requires=[
        'aws-xray-sdk',
        'boto'
    ]
)

Is this a problem with s3pypi or my own setup.py?
