
sigstore's Introduction

sigstore framework


sigstore/sigstore contains common Sigstore code: that is, code shared by infrastructure (e.g., Fulcio and Rekor) and Go language clients (e.g., Cosign and Gitsign).

This library currently provides:

  • A signing interface (support for ECDSA, Ed25519, RSA, and DSSE (in-toto))
  • OpenID Connect Fulcio client code

The following KMS systems are available:

  • AWS Key Management Service
  • Azure Key Vault
  • HashiCorp Vault
  • Google Cloud Platform Key Management Service

For example code, look at the relevant test code for each main code file.

Fuzzing

The fuzzing tests are within https://github.com/sigstore/sigstore/tree/main/test/fuzz

Security

Should you discover any security issues, please refer to Sigstore's security process.

For container signing, you want cosign.


sigstore's Issues

Hook up device flow to sigstore CLI.

We can do it easily with a flag, but we might also want to get smarter about defaulting. If there's no terminal or we don't have a browser to open, we could fall back to this.

Release 1.0 planning

  • implement improved e2e testing #46
  • Allow the OOB authentication flow when we can't open a browser. #62
  • implement dog food signing / verification
  • Update upstream projects that import / vendor from sigstore/sigstore

Adding more authentication options to Azure KMS

Description

Hi!

Great job you have been doing here! 😊

When looking at cosign I noticed that the Azure KMS integration only supports a service principal configured through the environment, but there are lots of other methods available from the Azure SDK.

Is there a specific reason to only support one of them, or would you be open to a PR that adds support for MSI (Managed Service Identity) and Azure CLI (if you are authenticated using the CLI, use the credentials from there)?

I would be willing to try and add some additional support if you want! 👍
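For illustration, a minimal sketch of what a broader credential chain could look like, assuming the azidentity package from the newer Azure SDK for Go; the credential types are the SDK's, while wiring the credential into the Key Vault signing client is omitted here and would be the actual PR work.

// Sketch only: build a credential that covers env-based service principals,
// MSI, and Azure CLI login, instead of environment variables alone.
package main

import (
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)

func main() {
	// DefaultAzureCredential chains EnvironmentCredential, ManagedIdentityCredential,
	// and AzureCLICredential (among others) and uses the first one that succeeds.
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("building Azure credential: %v", err)
	}
	_ = cred // pass to a Key Vault keys client to sign / verify
}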

Proposal: Add Jake Sanders to the maintainers list!

Description
He's the #2 contributor to this repository, and has implemented a large amount of refactoring and feature work. We should add him as a maintainer!

The CODEOWNERS file here just maps to a team though, so let's track the decision here and then manually add on GitHub.

Unable to add multiple signature layers using Vault KMS

Description

It looks like there is a bug in the read, modify, write process when using Vault as the KMS.
cosign sign -key hashivault://key repo_path/image:tag

I've ensured that registry immutability is turned off and have used both AWS ECR and Docker Hub registries for troubleshooting.

Here is a summary of what I have tested:

  • Cosign signature manifest is successfully created using Vault if no signature currently exists in the repo for the specified image
  • Using a different Vault key to apply another signature to the manifest does not update the signature manifest
  • Using a local key cosign sign -key cosign.key ... will successfully add signature layers to an existing Vault key signature manifest
  • Creating a signature manifest first with a local key then signing again with Vault key results in no changes

This makes me believe that there must be a bug somewhere in the read/modify/write process when using Vault as a KMS.

Looking at the debug output I do notice that there isn't any HTTP 201 Created output at the very end of the signing process when using Vault to add an additional signature to an existing manifest. So maybe it just isn't running the push process?

Blob Signer

Description

Sign (detached) blobs on gcs/s3/cloud storage.

sigstore sign (-key or not for keyless) s3://bucket/object
Signing object hash abcdef123...
Uploading signature to s3://bucket/abcdef123.sig
Signature in Transparency Log at 3453.
sigstore download-verify -key sigstore.pub gcs://bucket/object
Downloaded object with hash abcdef123...
Signature found at gcs://bucket/abcdef123.sig
Signature verified!
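To make the signing half of this concrete, here is a rough sketch under stated assumptions: a freshly generated local ECDSA key stands in for -key (or keyless), and the bucket upload and transparency-log steps are left as comments. It uses the existing signature package rather than any blob-specific API.

// Illustrative sketch of detached blob signing; CLI plumbing, cloud-storage
// upload, and Rekor submission are assumed and not shown.
package main

import (
	"bytes"
	"crypto"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"log"
	"os"

	"github.com/sigstore/sigstore/pkg/signature"
)

func main() {
	blob, err := os.ReadFile("object") // e.g. fetched from s3://bucket/object
	if err != nil {
		log.Fatal(err)
	}

	priv, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) // stand-in for -key / KMS / keyless
	sv, err := signature.LoadSignerVerifier(priv, crypto.SHA256)
	if err != nil {
		log.Fatal(err)
	}

	sig, err := sv.SignMessage(bytes.NewReader(blob))
	if err != nil {
		log.Fatal(err)
	}

	digest := sha256.Sum256(blob)
	fmt.Printf("Signing object hash %x\n", digest)

	// Upload the signature as <hash>.sig next to the object and record it in Rekor.
	if err := os.WriteFile(fmt.Sprintf("%x.sig", digest), sig, 0o644); err != nil {
		log.Fatal(err)
	}
}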

Hashivault KMS provider public key vs. verify

There's an issue with the hashivault provider that I haven't quite been able to pin down yet. I noticed it while testing sigstore/cosign#278, so the bug may not actually be here.

The existing PR works when you sign/verify with the provider API itself, but does not work when you verify against the exported public key. That is:

cosign sign -key hashivault://foo followed by cosign verify -key hashivault://foo works

but

cosign sign -key hashivault://foo followed by cosign public-key -key hashivault://foo > hk.pub && cosign verify -key hk.pub does not work.

Include code coverage for tests

Description
Right now the tests don't have coverage metrics reported. Reporting coverage would help identify what a new feature does and doesn't exercise.

oauth: include response_modes_supported as part of the providerClaims

Description

I propose to add a new providerClaim response_modes_supported (as optional) in https://github.com/sigstore/sigstore/blob/main/pkg/oauth/oidc/pkce.go#L42. When present, we could use its value to set the response_mode as part of the auth URL here: https://github.com/sigstore/sigstore/blob/main/pkg/oauth/oidc/pkce.go#L84.

Perhaps we could even prefer using response_mode=form_post over other response modes, as detailed here: https://openid.net/specs/oauth-v2-form-post-response-mode-1_0.html#FormPostResponseExample

As described in [OAuth 2.0 Multiple Response Type Encoding Practices](https://openid.net/specs/oauth-v2-form-post-response-mode-1_0.html#OAuth.Responses) [OAuth.Responses], there are security implications to encoding response values in the query string and in the fragment value. Some of these concerns can be addressed by using the Form Post Response Mode. In particular, it is safe to return Authorization Response parameters whose default Response Modes are the query encoding or the fragment encoding using the form_post Response Mode.

Any thoughts?
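To make the proposal concrete, a hedged sketch (the struct and function names are illustrative, not the exact ones in pkce.go): parse response_modes_supported from the provider metadata and, when form_post is advertised, add response_mode=form_post to the auth URL.

package oidc

import (
	"golang.org/x/oauth2"
)

// providerClaims mirrors the relevant fields of the OIDC discovery document.
type providerClaims struct {
	AuthorizationEndpoint  string   `json:"authorization_endpoint"`
	ResponseModesSupported []string `json:"response_modes_supported,omitempty"` // proposed optional claim
}

// authCodeURL builds the PKCE auth URL, preferring form_post over the
// query/fragment encodings when the provider says it supports it.
func authCodeURL(cfg *oauth2.Config, claims providerClaims, state, challenge string) string {
	opts := []oauth2.AuthCodeOption{
		oauth2.SetAuthURLParam("code_challenge", challenge),
		oauth2.SetAuthURLParam("code_challenge_method", "S256"),
	}
	for _, m := range claims.ResponseModesSupported {
		if m == "form_post" {
			opts = append(opts, oauth2.SetAuthURLParam("response_mode", "form_post"))
			break
		}
	}
	return cfg.AuthCodeURL(state, opts...)
}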

consider moving core sign & verify functions from cosign to sigstore

TL;DR: I propose transferring the core crypto functions of cosign to project Sigstore.

Motivation

We aim to make sigstore the common home for cryptographic functions. Cosign calls all of the crypto-specific functions (i.e., generating ECDSA keys, signing, validating, etc.) from its keys.go file. If we moved those ECDSA-related functions out, cosign could simply import the sigstore project and use them, and other non-container-specific projects could import them too. For cosign itself, this would likely look like a legacy change with no discernible benefit. I think it would be better to handle these functions (like GenerateKeyPair(), LoadECDSAPrivateKey(), LoadPublicKey(), etc.) inside sigstore itself.

We could find neither an active discussion issue nor a PR for this. It is a great opportunity to get used to the Sigstore project.

Implementation

func GeneratePrivateKey() (*ecdsa.PrivateKey, error)

func GenerateKeyPair(options *GenerateKeyPairOptions) (*KeyPair, error)

func LoadPublicKey(options *PublicKeyOptions) (PublicKey, error)

func LoadPrivateKey(options *PrivateKeyOptions) (signature.ECDSASignerVerifier, error)

func KeyToPem(pub crypto.PublicKey) ([]byte, error)

func CertToPem(c *x509.Certificate) []byte

func PemToECDSAKey(raw []byte) (*ecdsa.PublicKey, error)

Example Usages:

package main

import (
	"context"

	"github.com/sigstore/sigstore/pkg/signature"
)

func main() {
	// Generate a new key pair, protecting the private key with a passphrase
	// function (pf() is assumed to be defined elsewhere in the proposal).
	kp, _ := signature.GenerateKeyPair(&signature.GenerateKeyPairOptions{
		PassFunc: pf(),
	})
	_ = kp

	content := "foo"

	// Load an existing public key from disk.
	pub, _ := signature.LoadPublicKey(&signature.PublicKeyOptions{
		Path: "cosign.pub",
	})

	// Load the matching encrypted private key.
	key, _ := signature.LoadPrivateKey(&signature.PrivateKeyOptions{
		Path: "cosign.key",
		Pass: []byte("foo"),
	})

	sig, _, _ := key.Sign(context.TODO(), []byte(content))

	_ = pub.Verify(context.TODO(), []byte(content), sig)

	// ...
}

Build fails (swag)

$ make
....
# github.com/sigstore/sigstore/pkg/httpclients
pkg/httpclients/fulcioclient.go:53:5: cannot use swag.String(models.CertificateRequestPublicKeyAlgorithmEcdsa) (type *string) as type string in field value
Makefile:52: recipe for target 'client' failed
make: *** [client] Error 2

add artifact hash

It should be possible to also send the digest instead of the artifact itself.

Plan verify operation

[WIP] still fleshing this out, but wanted to open this up for others to provide input.

Key focus should be the end UX. This needs to be as simple as we can possibly make it for end users. They should be able to just say 'verify' to consider something 'trusted', with zero setup and as few manual steps as possible. Having said that, those who want to dig deeper should be able to, but the majority of users won't want to and will consider it an annoyance to have to perform any checks themselves (we know this from the number of users who currently don't even bother to verify with existing tools; users will even mash away at Y/n prompts with very little attention or patience). This should be as seamless as possible.

I am imagining a system where projects can set their own policy and maintain it for their users. We provide project owners / maintainers the ability to bootstrap and control their own policy.

The CLI will allow a project owner to seed a new project with the first signed project policy entry that can be stored in rekor and the project repository.

sigstore --init github.com/johndoe/widget_proj

{
  "project": "github.com/janedoe/widget_proj",
  "owner": "[email protected]",
  "policy_version": 2.1,
  "active_since": "2012-08-06T00:00:00.000Z",
  "sign_algorithm": "ecdsa-p384",
  "sign_threshold": 2,
  "maintainers": [
    {
      "name": "Jessica Doe",
      "email": [
        "[email protected]",
        "[email protected]"
      ],
      "active_since": "2014-06-25T00:00:00.000Z",
      "allow_device_flow_auth": true
    },
    {
      "name": "Justin Doe",
      "email": [
        "[email protected]"
      ],
      "active_since": "2017-02-09T00:00:00.000Z",
      "allow_device_flow_auth": false
    },
    {
      "name": "James Doe",
      "email": [
        "[email protected]"
      ],
      "active_since": "2020-10-09T00:00:00.000Z",
      "allow_device_flow_auth": false
    }
  ]
}

Some more details

project: the namespace of the project (this could be a PyPI link, a crates.io URL, etc.)
owner: the individual who controls the project and performs the genesis signing of the project's policy and later policy updates.
policy_version: version number for the policy
sign_algorithm: self-explanatory (projects should be able to pivot as they need; algorithm agility)
sign_threshold: the number of maintainers needed to sign an artifact for it to be considered 'trusted'. Users could possibly override this for releases they consider pre-release, etc. (or raise it for prod-bound releases)

  • maintainers: users who can sign an artifact
    • email: the maintainer's OIDC account(s)
    • active_since: can prevent a user from being a threshold element for signing past releases
    • allow_device_flow_auth: whether the account can be used as a device-flow / unattended signer

Some other fields we could include: for example, a genesis_key with which the project owner bootstraps trust using an offline key stored in a YubiKey / HSM.

Remove operational client code, respin as just a client library.

We are starting to run into some pains from circular dependencies (Rekor imports from sigstore/sigstore, and in turn sigstore/sigstore imports from Rekor's CLI). In light of the fact that this code base is not gaining traction as a client, but is becoming more widely utilized as a client library, all client code such as sign / verify should be removed and refactored into /examples.

We should also better document how to use this library for those who wish to build clients, making it a lightweight SDK. Tests should also be improved and focused on this being a library only.

In turn this should make releases more stable for those consuming this as a client library (as we won't have client specific features being proposed).

e2e tests

e2e tests are not ideal right now; we only have them in the fulcio and cosign repos.

We should add them here too where we can. We should also figure out some kind of ok-to-test strategy so we can run tests with secrets before PRs go in.

Workload Identity Federation is not working with GCP KMS support

Description

Recently, we (w/@Dentrax @erkanzileli) added other key management system support to Kyverno while verifying image signatures.1 Then, I tried this feature on GCP while using GCP KMS and GKE. To achieve this I took advantage of Workload Identity Federation2. To enable this I've used the following commands:

🎗 Cross-ref: kyverno/website#376

$ export PROJECT_ID=$(gcloud config get-value project)
$ export CLUSTER_NAME="gke-wif"
$ gcloud container clusters create $CLUSTER_NAME \
    --workload-pool=$PROJECT_ID.svc.id.goog --num-nodes=2
$ export GSA_NAME=kyverno-sa
$ gcloud iam service-accounts create $GSA_NAME
$ gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:${PROJECT_ID}.svc.id.goog[kyverno/kyverno]" \
  ${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
$ gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --role roles/cloudkms.admin \
  --member serviceAccount:${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
$ kubectl annotate serviceaccount \
  --namespace kyverno \
  kyverno \
  iam.gke.io/gcp-service-account=${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com

Then I tried it with Kyverno, but it didn't work as I expected. So I decided to do a small test with the google/cloud-sdk:slim image: when I ran a Pod with this image, everything worked fine.

kubectl run -it --rm \
  --image google/cloud-sdk:slim \
  --serviceaccount kyverno \
  --namespace kyverno \
  workload-identity-test

(Screenshot: gcloud working inside the workload-identity-test Pod.)

cc: @JimBugwadia @dlorenc @cpanato

Footnotes

  1. https://github.com/kyverno/kyverno/pull/2607

  2. https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity

Initial Flags

  • --file
  • --fulcio (default to fulcio.dev)
  • --email (needed for challenge, can also be populated in config)

Pluggable KMS integrations

Description

This is something I've been marinating on for some time, but was driven to open a tracking issue by #384

Problem: baking in every KMS provider under the sun into sigstore/sigstore itself incurs a significant overhead on all downstream projects regardless of whether they even use this functionality.

I'll sketch out some ideas in a follow-up comment; one possible shape is sketched below.
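As a starting point, one illustrative shape (not the library's current API): a small registry keyed by key-URI scheme, so each provider lives in its own module, registers itself from init(), and downstream projects pull in only the providers they need via blank imports, similar to database/sql drivers.

// Sketch of a pluggable KMS registry; all names here are hypothetical.
package kms

import (
	"context"
	"fmt"
	"strings"
	"sync"

	"github.com/sigstore/sigstore/pkg/signature"
)

// ProviderInit builds a SignerVerifier for a key reference such as "awskms://...".
type ProviderInit func(ctx context.Context, keyResourceID string) (signature.SignerVerifier, error)

var (
	mu        sync.RWMutex
	providers = map[string]ProviderInit{}
)

// Register is called from a provider package's init(), e.g. for "gcpkms://".
func Register(scheme string, init ProviderInit) {
	mu.Lock()
	defer mu.Unlock()
	providers[scheme] = init
}

// Get resolves a key URI to a SignerVerifier using whichever providers were
// compiled in via blank imports.
func Get(ctx context.Context, keyResourceID string) (signature.SignerVerifier, error) {
	mu.RLock()
	defer mu.RUnlock()
	for scheme, init := range providers {
		if strings.HasPrefix(keyResourceID, scheme) {
			return init(ctx, keyResourceID)
		}
	}
	return nil, fmt.Errorf("no KMS provider registered for %q", keyResourceID)
}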

WASM Signing

WASM signing.

It is possible to embed x509 signing into a custom WASM header.

Failed to verify signature in DSSE

Description

As part of fuzz testing, DSSE verify is failing for this data: "\xb3"

panic: failed to verify signature

goroutine 1 [running]:
github.com/sigstore/sigstore/pkg/signature/dsse.Fuzz({0x7fa1488e6000, 0x1, 0x0})
	/home/sammy/go/src/github.com/naveensrinivasan/sigstore/pkg/signature/dsse/fuzz.go:56 +0x757
go-fuzz-dep.Main({0xc0000dff68, 0x1, 0x575300})
	go-fuzz-dep/main.go:36 +0x15b
main.main()
	github.com/sigstore/sigstore/pkg/signature/dsse/go.fuzz.main/main.go:15 +0x3b
exit status 2

AWS tests fail intermittently

Noticed a few intermittent failures with the AWS tests

https://github.com/sigstore/sigstore/pull/111/checks?check_run_id=3741853919

=== RUN   TestAWS/TestVerify
    aws_test.go:160: 
        	Error Trace:	aws_test.go:160
        	Error:      	Expected nil, but got: looking up key: getting public key: SerializationError: failed to unmarshal response error
        	            		status code: 500, request id: 
        	            	caused by: UnmarshalError: failed decoding error message
        	            		00000000  3c 21 44 4f 43 54 59 50  45 20 48 54 4d 4c 20 50  |<!DOCTYPE HTML P|
        	            	00000010  55 42 4c 49 43 20 22 2d  2f 2f 57 33 43 2f 2f 44  |UBLIC "-//W3C//D|
        	            	00000020  54 44 20 48 54 4d 4c 20  33 2e 32 20 46 69 6e 61  |TD HTML 3.2 Fina|
        	            	00000030  6c 2f 2f 45 4e 22 3e 0a  3c 74 69 74 6c 65 3e 35  |l//EN">.<title>5|
        	            	00000040  30 30 20 49 6e 74 65 72  6e 61 6c 20 53 65 72 76  |00 Internal Serv|
        	            	00000050  65 72 20 45 72 72 6f 72  3c 2f 74 69 74 6c 65 3e  |er Error</title>|
        	            	00000060  0a 3c 68 31 3e 49 6e 74  65 72 6e 61 6c 20 53 65  |.<h1>Internal Se|
        	            	00000070  72 76 65 72 20 45 72 72  6f 72 3c 2f 68 31 3e 0a  |rver Error</h1>.|
        	            	00000080  3c 70 3e 54 68 65 20 73  65 72 76 65 72 20 65 6e  |<p>The server en|
        	            	00000090  63 6f 75 6e 74 65 72 65  64 20 61 6e 20 69 6e 74  |countered an int|
        	            	000000a0  65 72 6e 61 6c 20 65 72  72 6f 72 20 61 6e 64 20  |ernal error and |
        	            	000000b0  77 61 73 20 75 6e 61 62  6c 65 20 74 6f 20 63 6f  |was unable to co|
        	            	000000c0  6d 70 6c 65 74 65 20 79  6f 75 72 20 72 65 71 75  |mplete your requ|
        	            	000000d0  65 73 74 2e 20 45 69 74  68 65 72 20 74 68 65 20  |est. Either the |
        	            	000000e0  73 65 72 76 65 72 20 69  73 20 6f 76 65 72 6c 6f  |server is overlo|
        	            	000000f0  61 64 65 64 20 6f 72 20  74 68 65 72 65 20 69 73  |aded or there is|
        	            	00000100  20 61 6e 20 65 72 72 6f  72 20 69 6e 20 74 68 65  | an error in the|
        	            	00000110  20 61 70 70 6c 69 63 61  74 69 6f 6e 2e 3c 2f 70  | application.</p|
        	            	00000120  3e 0a                                             |>.|
        	            	
        	            	caused by: invalid character '<' looking for beginning of value
        	Test:       	TestAWS/TestVerify

Improve design of OAuth success HTML page

Description

Currently, when you successfully get through an OAuth flow, you're met with this page:

(Screenshot: the current OAuth success page.)

Good stuff:

  • it's small and simple
  • it's fast to load
  • it's clear about what to do next

Bad stuff:

  • it's black Times New Roman on white background (not Sigstore branding)

I think we have an opportunity to make this page feel really cohesive with the rest of the Sigstore brand. We shouldn't take this as an opportunity to load the page down with a bunch of CSS/JS/images, but I think there's improvements we could make, like:

  • add a Sigstore logo
  • use Sigstore's font/coloring
  • add a button to close, or maybe even "press space to close" -- JS won't let you close a window without some user action, but I think a keyboard event should work. Edit: browsers no longer allow this

After #425 we'll only have one copy of this HTML, here:

InteractiveSuccessHTML = `<html>
<title>Sigstore Auth</title>
<body>
<h1>Sigstore Auth Successful</h1>
<p>You may now close this page.</p>
</body>
</html>

Add KeyID (public key fingerprint) to signature in in-toto attestation

Description

The DSSE wrapped signer does not populate the KeyID field of the DSSE payload:

Sig: base64.StdEncoding.EncodeToString(sig),

As a result, running cosign attest or using the dsse.WrappedSigner leaves the keyid empty. This was produced by a cosign attest:

{
  "payloadType": "application/vnd.in-toto+json",
  "payload": "eyJfdHlwZSI6Imh0dHBzOi8vaW4tdG90by5pby9TdGF0ZW1lbnQvdjAuMSIsInByZWRpY2F0ZVR5cGUiOiJodHRwczovL3Nsc2EuZGV2L3Byb3ZlbmFuY2UvdjAuMiIsInN1YmplY3QiOlt7Im5hbWUiOiJnY3IuaW8vYXNyYS1hbGkvYnVzeWJveC9kZW1vIiwiZGlnZXN0Ijp7InNoYTI1NiI6IjM3ZTUyODc5NDU3NzRmMjdiNDE4Y2U1NjdjZDc3ZjRiYmM5ZWY0NGExYmNkMWEyMzEyMzY5ZjMxZjljY2U1NjcifX1dLCJwcmVkaWNhdGUiOnsiYnVpbGRlciI6eyJpZCI6Imh0dHBzOi8vZ2l0aHViLmNvbS9BdHRlc3RhdGlvbnMvR2l0SHViSG9zdGVkQWN0aW9uc0B2MSJ9LCJidWlsZFR5cGUiOiJodHRwczovL2dpdGh1Yi5jb20vQXR0ZXN0YXRpb25zL0dpdEh1YkFjdGlvbnNXb3JrZmxvd0B2MSIsImludm9jYXRpb24iOnsiY29uZmlnU291cmNlIjp7InVyaSI6ImdpdCthc3JhYS9zbHNhLW9uLWdpdGh1Yi10ZXN0LmdpdCIsImRpZ2VzdCI6eyJTSEExIjoiNDUwNjI5MGUyZThmZWIxZjM0YjI3YTA0NGY3Y2M4NjNjODMwZWY2YiJ9LCJlbnRyeVBvaW50IjoiVGVzdCBTTFNBIn0sImVudmlyb25tZW50Ijp7ImFyY2giOiJhbWQ2NCIsImVudiI6eyJHSVRIVUJfRVZFTlRfTkFNRSI6IndvcmtmbG93X2Rpc3BhdGNoIiwiR0lUSFVCX1JVTl9JRCI6IjE4ODgyMTYxNTkiLCJHSVRIVUJfUlVOX05VTUJFUiI6IjY5In19fSwibWF0ZXJpYWxzIjpbeyJ1cmkiOiJnaXQrYXNyYWEvc2xzYS1vbi1naXRodWItdGVzdC5naXQiLCJkaWdlc3QiOnsiU0hBMSI6IjQ1MDYyOTBlMmU4ZmViMWYzNGIyN2EwNDRmN2NjODYzYzgzMGVmNmIifX1dfX0=",
  "signatures": [
    {
      "keyid": "",
      "sig": "MEUCIGoyFXzEOqVRbK1/0Ep4sfwiWZ77nRj9WirRsCKrIAYgAiEAoTMViCICvK5z5cBEcWuWOj85f8OKkoyCCeSVihU1lSo="
    }
  ]
}

This would be nice to have so we can identify which key was used to produce the signature, rather than brute-force verifying against them all (let's say, after fetching rekor entries by the Subject.Digest of the in-toto attestation). But I'm aware that keyid is optional.

Right now I have a signed in-toto attestation and want to search the log for the correct rekor entry. I can only search rekor in-toto attestations reliably with Subject.Digest (since the entire DSSE envelope is not stored, and the hash of the signed payload isn't canonical), so I may get multiple rekor entries of in-toto attestations. I want to select the correct one programmatically. If I can match certs with keyids, I can narrow down without comparing attestation payloads or iterating through verification with the signing certs in the rekor entries.
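For reference, a minimal sketch of one way the wrapped signer could populate keyid: a hex SHA-256 fingerprint over the DER-encoded SubjectPublicKeyInfo of the signing key. The helper name is hypothetical; DSSE leaves the keyid format up to the producer.

package main

import (
	"crypto"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"fmt"
)

// publicKeyFingerprint returns a hex SHA-256 over the DER-encoded
// SubjectPublicKeyInfo, one common choice for a DSSE keyid.
func publicKeyFingerprint(pub crypto.PublicKey) (string, error) {
	der, err := x509.MarshalPKIXPublicKey(pub)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(der)
	return hex.EncodeToString(sum[:]), nil
}

func main() {
	priv, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	keyid, _ := publicKeyFingerprint(priv.Public())
	fmt.Println(keyid) // value to place in the envelope's "keyid" field
}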

community : Contributor ladder

Description

I am opening this to ask if there's a contributor ladder defined for sigstore.
How do I become an org member?

I would be happy to help with PR reviews here, hoping to work towards maintainership.

Previous contributions: mainly fuzzing sigstore and integrating with OSS-Fuzz.

PRs in sigstore

  1. #214
  2. #213
  3. #212
  4. #197
  5. #178
  6. #177
  7. #173
  8. #170
  9. #169
  10. #168
  11. #165
  12. #164
  13. #160
  14. #158
  15. #157
  16. #148
  17. #146
  18. #127

oss-fuzz and actively maintaining the oss-fuzz issues

  1. google/oss-fuzz#6890
  2. google/oss-fuzz#6927
  3. google/oss-fuzz#6964

Issues in sigstore

https://github.com/sigstore/sigstore/issues?q=is%3Aissue+author%3Anaveensrinivasan

PRs in cosign

  1. sigstore/cosign#1141
  2. sigstore/cosign#1020
  3. sigstore/cosign#1001
  4. sigstore/cosign#971
  5. sigstore/cosign#968
  6. sigstore/cosign#944
  7. sigstore/cosign#124
  8. sigstore/cosign#121
  9. sigstore/cosign#120
  10. sigstore/cosign#119

Issues in cosign

https://github.com/sigstore/cosign/issues?q=is%3Aissue+author%3Anaveensrinivasan+

PRs in rekor

https://github.com/sigstore/rekor/pulls?q=author%3Anaveensrinivasan

Issues in rekor

https://github.com/sigstore/rekor/issues?q=author%3Anaveensrinivasan

cc @lukehinds @dlorenc @bobcallaway

"redirect_uri did not match URI from initial request" error during OOB OAuth flow

Description

Generating ephemeral keys...
Retrieving signed certificate...
error opening browser: exec: "xdg-open": executable file not found in $PATH
Go to the following link in a browser:

         https://oauth2.gcp.xxx.zzz/auth/auth?access_type=online&client_id=sigstore&code_challenge=dMlz-Gh7syHeNJNMaKMubERNcAffSULqQFXa3vuFkvs&code_challenge_method=S256&nonce=1ysOo0owUAagezb55qlL5GmUwAY&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=openid+email&state=1ysOo1W6KfBbqN732aqVOOn5SRL
Enter verification code: k7hpum4hsddl5viwyea52ks5t
error: signing [ghcr.io/mmmm/sigstore-thw:latest]: getting signer: getting key from Fulcio: retrieving cert: oauth2: cannot fetch token: 400 Bad Request
Response: {"error":"invalid_request","error_description":"redirect_uri did not match URI from initial request."}

After investigating, I think it is because

cfg.RedirectURL = oobRedirectURI

changes cfg.RedirectURL on a copy of cfg rather than on the value passed in, such that when the code is used after doOobFlow() returns, the configs do not match, resulting in the error message seen above.
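A minimal sketch of the suspected pass-by-value problem, reusing names like doOobFlow and oobRedirectURI from the issue text; the surrounding code is hypothetical, not the actual implementation.

package main

import (
	"fmt"

	"golang.org/x/oauth2"
)

const oobRedirectURI = "urn:ietf:wg:oauth:2.0:oob"

// doOobFlow receives cfg by value, so this assignment mutates a copy; the
// caller's RedirectURL is unchanged, and the later token exchange sends a
// different redirect_uri than the one used in the auth request.
func doOobFlow(cfg oauth2.Config) string {
	cfg.RedirectURL = oobRedirectURI
	return cfg.AuthCodeURL("state")
}

// Passing a pointer (or returning the modified config) keeps both steps in sync.
func doOobFlowFixed(cfg *oauth2.Config) string {
	cfg.RedirectURL = oobRedirectURI
	return cfg.AuthCodeURL("state")
}

func main() {
	cfg := oauth2.Config{ClientID: "sigstore", RedirectURL: "http://localhost:0/callback"}
	_ = doOobFlow(cfg)
	fmt.Println(cfg.RedirectURL) // still the localhost callback
	_ = doOobFlowFixed(&cfg)
	fmt.Println(cfg.RedirectURL) // now the OOB URI, matching the auth request
}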

Pypi Signing

pypi/warehouse#3356 (comment)

Looks like you can sign PyPI artifacts and upload ".asc" files next to them. These are typically supposed to be PGP ASCII-armored signatures, but I don't think this is enforced.

Document Library

We should provide some nice developer docs and examples on how to integrate with sigstore/sigstore.

verification of `gcpkms://` requires overly broad permissions

Description

I was looking at the permissions needed by the ClusterImagePolicy -> ConfigMap reconciler to deal with KMS, and it seems to require cloudkms.cryptoKeys.get, where I'd expect it to only need cloudkms.cryptoKeyVersions.viewPublicKey.

I can understand the signing path requiring more capabilities, but for things like the admission controller and cosign verify flows, it should be doable by folks that only have public key access.

cc @dekkagaijin @imjasonh
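To illustrate why viewPublicKey should be enough for verification, a hedged sketch using the Google Cloud KMS Go client directly: fetching the PEM public key only requires GetPublicKey on the CryptoKeyVersion. The resource name below is a placeholder, and the sigstore gcpkms code may be structured differently.

package main

import (
	"context"
	"fmt"
	"log"

	kms "cloud.google.com/go/kms/apiv1"
	kmspb "google.golang.org/genproto/googleapis/cloud/kms/v1"
)

func main() {
	ctx := context.Background()
	client, err := kms.NewKeyManagementClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Name of a specific CryptoKeyVersion (placeholder values).
	name := "projects/my-proj/locations/global/keyRings/my-ring/cryptoKeys/my-key/cryptoKeyVersions/1"
	resp, err := client.GetPublicKey(ctx, &kmspb.GetPublicKeyRequest{Name: name})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Pem) // PEM-encoded public key: enough for cosign verify
}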

https://www.sigstore.dev/ returns 404

curl -I https://www.sigstore.dev/
HTTP/2 404 
[...]
curl --http1.1 -I https://www.sigstore.dev/
HTTP/1.1 404 Not Found
[...]

Interestingly, the page content looks right, but the 404 return code (instead of the normal 200 OK) throws off my linkchecker -r 1 --check-extern https://reproducible-builds.org/reports/2021-09/

This bug was found while working on reproducible builds for openSUSE.

Use ed25519 for KMS's that support it? E.g. Vault

Description

I know work was done to move from ed25519 to ecdsa in cosign here, but for KMSes that do support ed25519, I wonder if we can have an option to use it?

My use case: we use HashiCorp Vault with ed25519 keys to produce signatures for artifacts that are compatible with minisign, and I'd like to be able to use cosign with the same keys that already exist to take advantage of our auditing and access control.

The current implementation of hashivault actually allows you to produce a signature from an ed25519 transit key, since the API call is the same. However, you can't get a public key from an ed25519 transit key, since it's not PEM-encoded:

❯ cosign sign --key hashivault://cosigntest --upload=false --payload=hello.txt ubuntu:latest
Using payload from: hello.txt
vOmbkC7KNLYrr0/hIsis8YrqXp/RuR36Zqsuuyp92l66jIgsCdxJLMxWHxmZjz8SoR0Kx3CWUeOfSCGUWRaQAQ==

❯ cosign public-key --key hashivault://cosigntest
error: PEM decoding failed

Getting the public-key command to work correctly should be pretty straightforward, since Vault returns the key type, so we could encode ed25519 keys differently; but to get verification to work, it would also require changing the verify command to understand what kind of public key the PEM contains (ed25519 vs ecdsa).
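A minimal sketch of the encoding piece, assuming we already have the raw 32-byte ed25519 public key that Vault returns in its transit key read response: wrap it as PKIX/PEM so public-key has something standard to emit.

package main

import (
	"crypto/ed25519"
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

// ed25519ToPEM wraps a raw ed25519 public key as a PKIX "PUBLIC KEY" PEM block.
func ed25519ToPEM(raw []byte) ([]byte, error) {
	der, err := x509.MarshalPKIXPublicKey(ed25519.PublicKey(raw))
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "PUBLIC KEY", Bytes: der}), nil
}

func main() {
	pub, _, _ := ed25519.GenerateKey(nil) // stand-in for the key bytes from Vault
	pemBytes, _ := ed25519ToPEM(pub)
	fmt.Printf("%s", pemBytes)
}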

What are your thoughts on supporting both key types? Personally I would love to use ed25519 to be more in line with minisign/signify, but I understand the current lack of KMS and FIPS support for ed25519.

I have some local work I'd be happy to turn into a PR if we agree on an approach. Thanks!

Add support for `aws-us-gov` partitions in AWS KMS key ARNs

Description
Currently attempting to sign an artifact using cosign sign --key awskms:///arn:aws-us-gov:kms:us-gov-west-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab localhost:5000/test/ubi8:8.5 -d will result in the following error:

Error: signing [localhost:5000/test/ubi8:8.5]: getting signer: reading key: kms get: invalid awskms format "awskms:///arn:aws-us-gov:kms:us-gov-west-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab"
main.go:46: error during command execution: signing [localhost:5000/test/ubi8:8.5]: getting signer: reading key: kms get: invalid awskms format "awskms:///arn:aws-us-gov:kms:us-gov-west-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab"

Changes should be made here to permit the case where the ARN partition is aws-us-gov rather than aws.
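A hedged sketch of the kind of pattern change needed; the variable name and exact regex in the real awskms parser may differ, but the idea is simply to allow aws-us-gov as an alternative partition.

package main

import (
	"fmt"
	"regexp"
)

// arnRE accepts awskms:// references whose ARN partition is aws or aws-us-gov.
var arnRE = regexp.MustCompile(
	`^awskms://([^/]*)/arn:(aws|aws-us-gov):kms:([a-z0-9-]+):\d{12}:(key|alias)/(.+)$`)

func main() {
	ref := "awskms:///arn:aws-us-gov:kms:us-gov-west-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab"
	fmt.Println(arnRE.MatchString(ref)) // true once the partition alternation is in place
}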

This MR should resolve it #289

Invitation from the OpenSSF's Great MFA Distribution project

Question
Hello! My name is CRob and I work with the Developer Best Practices Working Group of the
Linux Foundation's Open Source Security Foundation (OpenSSF),
https://github.com/ossf/wg-best-practices-os-developers, on its "Great
Multi-Factor Authentication (MFA) Distribution Project",
https://github.com/ossf/great-mfa-project.

We'd like to give your project free MFA hardware tokens from
Google and GitHub, for use by your maintainers. We'd especially
like to give them to any of your maintainers who aren't already
using any. Our goal is to help improve the security of open source
software (OSS)/Free Software projects. For example, these tokens
can counter attacks that release source code updates and/or packages
using stolen passwords.

By 2021-12-20 and preferably much sooner, please let me know:

  1. If you want any tokens, and if so...
  2. How many Titan tokens from Google (up to 5)
  3. How many Yubikey tokens from GitHub (up to 5)
  4. The private email address to send codes to
    (this email must not go to the public, as these are use-once
    codes that can be used to get the tokens)
  5. If you could use more, how many more.

We would send you coupon codes and validation codes to the private
email address. You would then distribute those codes to the
maintainers you choose. The recipients would use the coupon codes
and validation codes to "buy" the tokens from the Google Store
and/or GitHub Shop, who would ship the tokens directly to recipients.
These codes are use-once, so make sure you can keep the codes private
until they're used by the intended person.

Important: The Google coupon codes must be used by 2021-12-31
on the Google Store or they expire.

How can you trust us? You don't need to. You would get the MFA
tokens from Google and GitHub; we're simply offering codes to make
them no-cost. We'll provide some documentation on how to use them,
but you don't need to use our documents.

To qualify, each token recipient must:

  1. Be a maintainer or contributor to this critical open source software (OSS)
    project, or to another OSS project that this project depends on
    (the dependency may be indirect).
  2. Try to use an MFA token once they receive the token.
    We'd like recipients to use MFA tokens from then on, but at least try.
  3. Not reuse the token between different people (the token must not be shared).
  4. Consider providing feedback to us (so we can try to fix problems).

We also need each project that receives coupon codes and/or validation codes
to tell us these numbers (preferably within 30 days of getting the codes):

  1. How many tokens did you distribute from just Google? From just GitHub?
  2. How many people received tokens from just Google? From just GitHub?
    From both?
  3. How many people didn’t have hardware tokens they used for OSS who
    received tokens from just Google? From just GitHub? From both?

We ask for this information so we can tell others some simple
measures of success. We don't need nor want the names of any
individuals participating. It's fine to ask the people who got the
codes for that information and provide a best-effort summary.

The MFA tokens are shipped from the US. They can be shipped
internationally, but there are various limitations on where each
can be shipped.

In particular, we can't ship somewhere if that is forbidden
(sanctioned) under US law. So at this time we are unable to ship
to individuals in China, Afghanistan, Russia, Ukraine, North Korea,
Iran, Sudan, and Syria. Sorry about that. See the Google and
GitHub sites for more shipping information. More sanction information
is available at
https://home.treasury.gov/policy-issues/financial-sanctions/sanctions-programs-and-country-information.

For more information including how-tos and other setup information
can be found at the "Great Multi-Factor Authentication (MFA)
Distribution Project" site: https://github.com/ossf/great-mfa-project.
