oauth2-proxy's Introduction

OAuth2 Proxy

A reverse proxy and static file server that provides authentication using Providers (Google, Keycloak, GitHub and others) to validate accounts by email, domain or group.

Note: This repository was forked from bitly/OAuth2_Proxy on 27/11/2018. Versions v3.0.0 and up are from this fork and will have diverged from any changes in the original fork. A list of changes can be seen in the CHANGELOG.

Note: This project was formerly hosted as pusher/oauth2_proxy but has been renamed as of 29/03/2020 to oauth2-proxy/oauth2-proxy. Going forward, all images shall be available at quay.io/oauth2-proxy/oauth2-proxy and binaries will be named oauth2-proxy.

Sign In Page

Installation

  1. Choose how to deploy:

    a. Using a Prebuilt Binary (current release is v7.6.0)

    b. Using Go to install the latest release

    $ go install github.com/oauth2-proxy/oauth2-proxy/v7@latest

    This will install the binary into $GOPATH/bin. Make sure you include $GOPATH/bin in your $PATH, otherwise your system won't find binaries installed via go install.

    c. Using a Prebuilt Docker Image (AMD64, PPC64LE, ARMv6, ARMv7, and ARM64 available)

    d. Using a Pre-Release Nightly Docker Image (AMD64, PPC64LE, ARMv6, ARMv7, and ARM64 available)

    e. Using the official Kubernetes manifest (Helm)

    Prebuilt binaries can be validated by extracting the file and verifying it against the sha256sum.txt checksum file provided for each release starting with version v3.0.0.

    sha256sum -c sha256sum.txt 2>&1 | grep OK
    oauth2-proxy-x.y.z.linux-amd64: OK
    
  2. Select a Provider and Register an OAuth Application with a Provider

  3. Configure OAuth2 Proxy using config file, command line options, or environment variables

  4. Configure SSL or deploy behind an SSL endpoint (example provided for Nginx)

Security

If you are running a version older than v6.0.0, we strongly recommend you update to a current version. See the open redirect vulnerability for details.

Docs

Read the docs on our Docs site.

OAuth2 Proxy Architecture

Images

From v7.6.0 and up, the base image has been changed from Alpine to GoogleContainerTools/distroless. This image comes with even fewer installed dependencies and thus should improve security. It is also slightly smaller than the Alpine-based image. For debugging purposes, and for those who really need it (i.e. ARMv6), we still provide images based on Alpine. The tags of these images are suffixed with -alpine.

Since 2023-11-18 we provide nightly images. These images are built and pushed nightly to quay.io/oauth2-proxy/oauth2-proxy-nightly from master. They should be considered alpha and therefore should not be used for production purposes unless you know what you're doing.

Getting Involved

If you would like to reach out to the maintainers, come talk to us in the #oauth2-proxy channel in the Gophers slack.

Contributing

Please see our Contributing guidelines. For releasing see our release creation guide.

oauth2-proxy's Issues

Session state's email stored in clear in the cookie

Expected Behavior

Some enterprises require that no information be exposed through the cookie value when a cookie-secret is provided.

Current Behavior

However, the current implementation is just base64 encoding the email value, thus exposing the information. In the same cookie, the Session State encrypts the accessToken, IDToken, ExpiresOn and RefreshToken.

Possible Solution

During the EncryptString step of the session_state, when a cookie.Cipher is provided, also encrypt the account info (email and user) together with the rest of the SessionState fields.
At the same time, the Decode of the SessionState should take this into consideration and decrypt the email and user.
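
For illustration, a minimal standalone sketch of the idea (the helper below is hypothetical, not the project's actual EncryptString code): the email value is sealed with AES-GCM in the same way the other SessionState fields would be when a cipher is configured.

package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"io"
)

// encryptField seals a single session field (here the email) so it is no
// longer recoverable with a plain base64 decode of the cookie value.
func encryptField(key []byte, plaintext string) (string, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return "", err
	}
	sealed := gcm.Seal(nonce, nonce, []byte(plaintext), nil)
	return base64.StdEncoding.EncodeToString(sealed), nil
}

func main() {
	// In oauth2_proxy the key would be derived from cookie-secret.
	key := make([]byte, 32)
	if _, err := io.ReadFull(rand.Reader, key); err != nil {
		panic(err)
	}
	enc, err := encryptField(key, "user@example.com")
	if err != nil {
		panic(err)
	}
	fmt.Println("encrypted email:", enc)
}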

Steps to Reproduce (for bugs)

  1. Configure the oauth2_proxy with a cookie-secret and go through the auth process.
  2. Take the generated cookie (cookie-name), apply a base64 decode, and you will see the email value in the clear.

Context

Some enterprises do not allow exposure of any useful information in the cookies.

Your Environment

  • Version used: bitly's oauth2_proxy with @JoelSpeed's PR's

--authenticated-emails-file fully ignored

I set up oauth2-proxy through the stable Helm chart (helm install stable/oauth2-proxy).
Through the answer.yaml I made the following overrides to the defaults:

---
  config: 
    clientID: "<id>.apps.googleusercontent.com"
    clientSecret: "<secret>"
    cookieSecret: "<cookie-secret>"
  extraArgs: 
    provider: "google"
  authenticatedEmailsFile:
    enabled: true
    restricted_access: |-
      [email protected]

The pod is spun up with the following params:

spec:
  containers:
  - args:
    - --email-domain=*
    - --http-address=0.0.0.0:4180
    - --provider=google
    - --upstream=file:///dev/null
    - --authenticated-emails-file=/etc/oauth2-proxy/authenticated-emails-list

Expected Behavior

After calling a protected domain I should be forwarded to the identity provider and after a successful authentication process redirected to my application.
oauth2-proxy should consider which email addresses are whitelisted with the --authenticated-emails-file flag

Current Behavior

oauth2-proxy does not respect the settings inside --authenticated-emails-file. Every email address that has been authenticated through Google is valid and grants access to the application.

Possible Solution

I guess it's a misconfiguration from my side.

Steps to Reproduce (for bugs)

  1. Install the stable oauth2-proxy Helm chart with the parameters listed above
  2. Create a simple web deployment
  3. Expose the web application through two ingress objects
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$request_uri
    nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2/auth
spec:
  rules:
  - host: grafana.mydomain.tld
    http:
      paths:
      - backend:
          serviceName: prometheus-grafana
          servicePort: 80
        path: /
  tls:
  - hosts:
    - grafana.mydomain.tld
    secretName: grafana-tls-ingress
---
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: grafana.mydomain.tld
    http:
      paths:
      - backend:
          serviceName: oauth2-proxy
          servicePort: 80
        path: /oauth2
  tls:
  - hosts:
    - grafana.mydomain.tld
  4. Open https://grafana.mydomain.tld
  5. I am forwarded (if not already logged in) to Google; after that I get redirected back to my application.

Context

Log output from the oauth2-proxy pod:

51.xx.xx.xx - [email protected] [23/Feb/2019:13:47:50 +0000] grafana.mydomain.tld GET - "/oauth2/auth" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36" 202 0 0.000
51.xx.xx.xx - [email protected] [23/Feb/2019:13:47:50 +0000] grafana.mydomain.tld GET - "/oauth2/auth" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36" 202 0 0.000
51.xx.xx.xx - [email protected] [23/Feb/2019:13:47:50 +0000] grafana.mydomain.tld GET - "/oauth2/auth" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36" 202 0 0.000

From the pod's shell:

cat /etc/oauth2-proxy/authenticated-emails-list
[email protected]

Your Environment

  • Rancher 2.1.6
  • Version used:
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.6", GitCommit:"b1d75deca493a24a2f87eb1efde1a569e52fc8d9", GitTreeState:"clean", BuildDate:"2018-12-16T04:30:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
  • oauth2-proxy 0.9.1 Helm chart
  • quay.io/pusher/oauth2_proxy:v3.1.0 image

Skip validation of upstream certificate

Even with -ssl-insecure-skip-verify it is not possible to use a self-signed certificate for the upstream.

Expected Behavior

Upstream with self-signed certificate (such as default kubernetes dashboard deployment) should work if oauth2-proxy is configured with -ssl-insecure-skip-verify.

Current Behavior

Oauth2-proxy still tries to verify the certificate and fails with the following error message:

reverseproxy.go:395: http: proxy error: x509: certificate has expired or is not yet valid

Possible Solution

This issue was reported here in the bitly/oauth2_proxy repository and has a pull request with a proposed solution here.

Steps to Reproduce (for bugs)

I have followed this guide but modified it a bit.

  1. Install minikube, kubectl and helm
  2. Clone this repo
  3. Checkout the branch self-signed-dashboard
  4. Follow the quick start
  5. Check the logs of the oauth2-proxy pod

Context

The default dashboard deployment uses self-signed certificates and this is better than no TLS at all.
I think this is a common use case, especially in a kubernetes environment.

Your Environment

Versions used:

  • Kubernetes v1.13.3
  • Minikube v0.34.1
  • Oauth2-proxy chart 0.9.1
  • Oauth2-proxy 3.1.0

store group email address in header

I've set up Google groups to authenticate users. But currently the x-forwarded-user is the user's email address instead of the email address of the group that the user belongs to. I need the group email address to intercept calls in a reverse proxy server. Is there a way to get the group email address/user?

Expected Behavior

I want to access the group name, for example via an X-Forwarded-Group-Name header.

Current Behavior

I can only access X-Forwarded-User, which contains the name of the single user.

Possible Solution

Context

Your Environment

Oauth2-proxy 2.2.0
CentOS 7

proxy_pass http upstream causes 414 Request-URI Too Long: Buffer overflow detected

Expected Behavior

Using the Nginx auth request with a 401 redirect to oauth2_proxy with an http upstream service, I should be redirected to the site after login.

Current Behavior

After going through the oauth2 sign-in flow, I receive 414 Request-URI Too Long: Buffer overflow detected

Steps to Reproduce (for bugs)

Nginx config file for affected site

upstream itwatchdog {
        server 192.168.200.203:80;
}
server {
        listen 80;
        listen [::]:80;
        server_name itwatchdog.mydomain.net;
        return 301 https://$server_name$request_uri;
}
server {
        listen *:443;

        server_name itwatchdog.mydomain.net;
        ssl_certificate /etc/letsencrypt/live/itwatchdog.mydomain.net/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/itwatchdog.mydomain.net/privkey.pem;
        include snippets/ssl-params.conf;
        include snippets/error_pages.conf;

        location /oauth2/ {
        proxy_pass       http://127.0.0.1:4180;
        proxy_set_header Host                    $host;
        proxy_set_header X-Real-IP               $remote_addr;
        proxy_set_header X-Scheme                $scheme;
        proxy_set_header X-Auth-Request-Redirect $request_uri;
        }

        location / {

        auth_request /oauth2/auth;
        error_page 401 = /oauth2/start?rd=$request_uri;

        # pass information via X-User and X-Email headers to backend,
        # requires running with --set-xauthrequest flag
        auth_request_set $user   $upstream_http_x_auth_request_user;
        auth_request_set $email  $upstream_http_x_auth_request_email;
        proxy_set_header X-User  $user;
        proxy_set_header X-Email $email;

        # if you enabled --cookie-refresh, this is needed for it to work with auth_request
        auth_request_set $auth_cookie $upstream_http_set_cookie;
        add_header Set-Cookie $auth_cookie;


        proxy_pass  http://itwatchdog;
        proxy_set_header        Host            $host;
        proxy_set_header        X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_intercept_errors on;
        proxy_connect_timeout   180;
        proxy_send_timeout      180;
        proxy_read_timeout      180;
        }
}

Context

I started running into this issue with this sub-domain after downloading the latest oauth2_proxy 3.1.0, upgrading from the original bitly/oauth2_proxy 2.2.

I can access the site properly if I either:

  1. Remove the auth_request from nginx and bypass oauth2_proxy
  2. Change the upstream to port 443 and the proxy_pass to https://itwatchdog

Your Environment

Ubuntu 18.04
nginx 1.14.0
oauth2_proxy 3.1.0

G-Suite Groups with users outside domain unable to be authorized

Expected Behavior

All members inside the specified G-Suite group should be able to authenticate.

Current Behavior

Users in the group belonging to the same G-Suite organization are able to authenticate, but users outside the domain or regular Gmail users cannot.

Steps to Reproduce (for bugs)

  1. Set up oauth2_proxy with Google group authorization as described in the readme.
  2. Add a user to the group with a G-Suite account in the same domain as the group oauth2_proxy is checking.
  3. Add users not from inside your organization, such as a regular Gmail user.
  4. Try to access the protected services behind oauth2_proxy using both accounts.

Context

/var/log/syslog output

04:31:13 google.go:189: error fetching user: googleapi: Error 404: Resource Not Found: userKey, notFound
04:31:13 oauthproxy.go:754: 127.0.0.1:50424 ("192.168.0.96") Permission Denied: "[email protected]" is unauthorized
04:31:13 oauthproxy.go:498: ErrorPage 403 Permission Denied Invalid Account

Your Environment

Ubuntu 18.04 with nginx 1.14.0

  • Version used:
    3.1.0

Refreshing Behavior is under-defined for OIDC provider

The refresh behavior isn't clear with respect to how cookie_refresh affects tokens. Maybe the problem is the overloading of the word "refresh" in this case, but I'm not sure.

The documentation says:

It's recommended to refresh sessions on a short interval (1h) with cookie-refresh setting which validates that the account is still authorized.

This sounds like there's some sort of forced interaction with an identity provider to verify authorization. In the OAuth2 case, this sounds a lot like access tokens are supposed to be used to verify a user's info by testing to see if the access token is still authorized. I'm not sure what it means in the OIDC case.

In reality, all cookie_refresh seems to do is write out a new cookie to the user agent (new cookie expiry, new signature) via SaveSession.

Secondly, the refresh for the OIDC provider is not based on the expiration of the identity token.

Expected Behavior

For sure, the ID Token's expiration, not the access token's expiration, should be used to determine when to refresh the tokens.

I'm not sure what cookie_refresh should mean, acquire new ID tokens?

Current Behavior

ValidateSessionState relies on the IdToken being valid, but the expiry is derived from the oauth2 token response via the expires_in field, which is the lifetime of the access token. I think that can be a different time than the expiration of the ID token (though you would think a reasonable provider would keep access token and ID token lifetimes the same - I can't verify that is the case).

CookieRefresh doesn't actually force a refresh of the tokens in the proxy (e.g. using the refresh token to acquire a new ID token) when the period has passed. It does generate a new valid cookie with an extended expiration (and of course, signs it).
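
To make the expected behaviour concrete, here is a small illustrative sketch (hypothetical names, not the proxy's actual code) of a refresh decision driven by the ID token's exp claim and the configured refresh interval, rather than by the access token's expires_in:

package main

import (
	"fmt"
	"time"
)

type session struct {
	CreatedAt     time.Time
	IDTokenExpiry time.Time // taken from the ID token's exp claim, not expires_in
}

// needsRefresh is true once the refresh interval has elapsed or the ID token
// would expire within the safety margin.
func needsRefresh(s session, refreshInterval, safetyMargin time.Duration, now time.Time) bool {
	if refreshInterval > 0 && now.Sub(s.CreatedAt) >= refreshInterval {
		return true
	}
	return now.Add(safetyMargin).After(s.IDTokenExpiry)
}

func main() {
	now := time.Now()
	s := session{CreatedAt: now.Add(-12 * time.Minute), IDTokenExpiry: now.Add(3 * time.Minute)}
	fmt.Println(needsRefresh(s, 10*time.Minute, 5*time.Minute, now)) // true: refresh and fetch a new ID token
}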

Context

My ID tokens expire every 15 minutes. I was hoping I could use cookie_refresh to acquire new ID tokens every 10 minutes, guaranteeing that, when passing the token downstream, the ID token is still valid for at least 5 minutes.

It seems like my expires_in and exp are not the same values with my identity provider, with my access_tokens expiring after my identity tokens, though I'm having trouble verifying. This causes ValidateSessionState to fail because RefreshSessionIfNeeded doesn't think the token needs refreshing.

Your Environment

It seems like my expires_in and exp are not the same values with my identity provider, with my access_tokens expiring after my identity tokens, though I'm having trouble verifying.

OIDC Cookie refresh not working with nginx auth_request when cookies are split.

Expected Behavior

When a session with split cookies is refreshed, all cookie parts should be returned to the user.

Current Behavior

When using cookie refresh with the nginx auth_request directive, nginx only returns the first chunk of the split cookie, but not the other ones.

Initial authentication is fine, after POSTing the auth code to the callback endpoint, both cookies are returned, just during the auth_request it only returns one.

@JoelSpeed you mentioned in one of the related PRs that you have this solution (big Azure JWTs that required cookie splitting) in production successfully with nginx auth_request. Have you noticed any of that behavior, or do you have any special nginx config for that (aside from the one at the bottom of the README here)?

nginx should receive multiple Set-Cookie headers for the split cookie in the auth_request; however, when dumping the $http_upstream_set_cookie variable, it contains only the cookie-name-0 part, not the others. It seems like nginx doesn't iterate through multiple headers with the same name here?

Any suggestions that might point me in the right direction?

Running On GKE

Ideally, I would like oauth2_proxy to work exposed to the internet on GKE with a GCP ingress, pointing on to a different service upstream.

Expected Behavior

Set up as a sidecar for the deployment of interest, pointing to the container of interest as an upstream.

Current Behavior

The service is marked as unhealthy by the ingress.

Possible Solution

Somehow make the root path return a 200.

Steps to Reproduce (for bugs)

  1. Create a GKE cluster
  2. Create an image with an oauth2_proxy sidecar
  3. Point the service to the sidecar
  4. Point the ingress to the service.

Context

Your Environment

  • Version used:

security reports?

I think I found a security issue (it could be in our fork, but it doesn't yet seem like it).

I could not find anything mentioning how to report security issues privately; can you please add some details?

Expected Behavior

If I find a security issue in the project I can go to the repo and find some info on how to report it.

Current Behavior

I am left guessing.

Possible Solution

Create a file security.md with details of how to report security vulnerabilities and link to it from readme.md.

Steps to Reproduce (for bugs)

Context

Potentially report a security issue (if confirmed it is not a mess-up in our fork or my configuration).

Your Environment

hey I got this building on windows woot!

  • Version used:

login loop using ingress nginx, auth_request always returning 401

Expected Behavior

Go to the ingress hostname for the first time, be greeted with google login.
Select google account, redirect to application setup to be behind hostname.

Current Behavior

Go to the ingress hostname for the first time, be greeted with google login.
Select google account..... Select google account.... Select Google account.

I can see in ingress nginx that the auth_request requests are always returning a 401.

"/oauth2/auth" HTTP/1.1 "Go-http-client/1.1" 401 21 0.000
2019/02/25 02:37:26 oauthproxy.go:796: 100.97.114.11:35790 ("54.79.36.100") Cookie "_oauth2_proxy" not present

Steps to Reproduce (for bugs)

I am using the helm chart, here is the values.yaml deployed:

config:
  clientID: "asdfasdfasdfasdf.apps.googleusercontent.com"
  clientSecret: "asdf"
  cookieSecret: "asdfasdf=="
  configFile: |-
    pass_basic_auth = false
    pass_access_token = true
    set_authorization_header = true
    pass_authorization_header = true

image:
  repository: "quay.io/pusher/oauth2_proxy"
  tag: "v3.1.0"
  pullPolicy: "IfNotPresent"

extraArgs:
  provider: "google"
  email-domain: "example.com.au"
  whitelist-domain: ".stuff.example.com.au"
  upstream: "file:///dev/null"
  http-address: "0.0.0.0:4180"

authenticatedEmailsFile:
  enabled: false
  template: ""
  restricted_access: ""

ingress:
  enabled: true
  path: /
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
  hosts:
    - oauth2.k8s-url.com.au
  tls:
    - secretName: wildcard.k8s-url.com.au
      hosts:
        - oauth2.k8s-url.com.au

I then have the ingress annotation config of:

    nginx.ingress.kubernetes.io/auth-response-headers: Authorization
    nginx.ingress.kubernetes.io/auth-signin: https://oauth2.k8s-url.com.au/oauth2/start?rd=https://$host$request_uri$is_args$args
    nginx.ingress.kubernetes.io/auth-url: https://oauth2.k8s-url.com.au/oauth2/auth
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    #nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$escaped_request_uri
    #nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2/auth
    nginx.ingress.kubernetes.io/configuration-snippet: |
      auth_request_set $name_upstream_1 $upstream_cookie_name_1;

      access_by_lua_block {
        if ngx.var.name_upstream_1 ~= "" then
          ngx.header["Set-Cookie"] = "name_1=" .. ngx.var.name_upstream_1 .. ngx.var.auth_cookie:match("(; .*)")
        end
      }

I have gone through many iterations to get to this point.

Context

I am trying to use this auth proxy without luck. To me it just looks like the nginx auth-request always returns a 401. I realise this may be an ingress nginx issue but I thought I would start here.

Your Environment

kops 1.11 maintained k8s cluster, k8s version 1.11.6

  • Version used: "v3.1.0"

user-configured redirect URL clobbered in oauthproxy.go

see line 155 of oauthproxy.go (link)

We literally overwrite whatever the value is with fmt.Sprintf("%s/callback", opts.ProxyPrefix) even if the user provided a redirect URL they want to use.

I have a PR for this which I tested and confirmed to resolve the issue. Will link.
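
For illustration, a minimal sketch of the kind of guard such a fix could introduce (the option and function names here are hypothetical, not the project's actual fields): the default callback path is only derived from the proxy prefix when no redirect URL was configured.

package main

import (
	"fmt"
	"net/url"
)

type options struct {
	RedirectURL string // user-provided, may be empty
	ProxyPrefix string // e.g. "/oauth2"
}

// resolveRedirectPath keeps a user-configured redirect URL and only falls back
// to "<proxy-prefix>/callback" when none was provided.
func resolveRedirectPath(opts options) (string, error) {
	if opts.RedirectURL != "" {
		u, err := url.Parse(opts.RedirectURL)
		if err != nil {
			return "", err
		}
		return u.Path, nil
	}
	return fmt.Sprintf("%s/callback", opts.ProxyPrefix), nil
}

func main() {
	p, _ := resolveRedirectPath(options{ProxyPrefix: "/oauth2"})
	fmt.Println(p) // /oauth2/callback
	p, _ = resolveRedirectPath(options{RedirectURL: "https://example.com/custom/callback", ProxyPrefix: "/oauth2"})
	fmt.Println(p) // /custom/callback
}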

unauthorized on kubernetes dashboard with google provider (invalid bearer token, Token has been invalidated)

Hi. We are trying to use oauth2_proxy instead of lua-resty-openidc (there is a problem with refresh tokens there).
kube-apiserver was configured with OIDC:

  • --oidc-client-id=XXX.apps.googleusercontent.com
  • --oidc-groups-claim=hd
  • --oidc-username-claim=email
  • --oidc-issuer-url=https://accounts.google.com

We built an image from the oidc branch as lbvffvbl/oauth2_proxy:v0.0.2
(the following was added to the Dockerfile:
RUN apt-get update
RUN apt-get install -y ca-certificates
)
but after configuring oauth2_proxy we see Unauthorized.
In the logs of kube-apiserver I see:

Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, Token has been invalidated]]

our deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: dashboard-proxy
  name: dashboard-proxy
  namespace: kube-system
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: dashboard-proxy
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: dashboard-proxy
    spec:
      containers:
      - args:
        - --email-domain=domain.com
        - --upstream=http://kubernetes-dashboard.kube-system.svc.cluster.local
        - --http-address=0.0.0.0:8080
        - --provider=google
        - --redirect-url=https://HOST/oauth2/callback
        - --pass-authorization-header=true
        - --pass-basic-auth=false
        - --pass-access-token=false
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          valueFrom:
            secretKeyRef:
              key: OAUTH2_PROXY_CLIENT_ID
              name: oauth
        - name: OAUTH2_PROXY_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              key: OAUTH2_PROXY_CLIENT_SECRET
              name: oauth
        - name: OAUTH2_PROXY_COOKIE_SECRET
          valueFrom:
            secretKeyRef:
              key: OAUTH2_PROXY_COOKIE_SECRET
              name: oauth
        image: lbvffvbl/oauth2_proxy:v0.0.2
        imagePullPolicy: Always
        name: oauth-proxy
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /creds
          name: google-service-account
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  namespace: kube-system
spec:
  rules:
  - host: HOST
    http:
      paths:
      - backend:
          serviceName: dashboard-proxy
          servicePort: 80
        path: /
  tls:
  - hosts:
    - HOST
    secretName: secret-tls

What are we doing wrong?

Nonce in login.gov should be per-session

Expected Behavior

The login.gov provider in #55 creates a nonce that is used for the lifetime of the proxy, rather than creating one per session, which would be the proper behavior.

Current Behavior

The login.gov provider creates the nonce during the LoginGovProvider initialization and then uses it for the life of the proxy.

Possible Solution

The code will either need to be restructured significantly so that session state is created during the authorization redirect, or perhaps some additional session state storage system could be plugged in to store the nonce that could be looked up during token redemption.

Actually, https://openid.net/specs/openid-connect-core-1_0.html#NonceNotes suggests that we might be able to find some sort of session cookie that is common to both queries and hash that for the nonce. Not sure if we have already created such a thing yet or not, though. Need to get more familiar with the golang proxy stuff.
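
As a rough illustration of the per-session idea (hypothetical helpers, not the provider's actual code), the nonce could be derived from a secret generated for each authorization redirect and kept with the session, so the same value can be recomputed and checked at token redemption:

package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
)

// newSessionSecret is generated once per authorization redirect and would be
// stored in the session (or CSRF) cookie.
func newSessionSecret() (string, error) {
	b := make([]byte, 32)
	if _, err := io.ReadFull(rand.Reader, b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

// nonceFor hashes the per-session secret, as the OIDC nonce notes suggest, so
// the nonce differs for every session but can be re-derived for comparison.
func nonceFor(sessionSecret string) string {
	sum := sha256.Sum256([]byte(sessionSecret))
	return hex.EncodeToString(sum[:])
}

func main() {
	secret, err := newSessionSecret()
	if err != nil {
		panic(err)
	}
	fmt.Println("nonce for this session:", nonceFor(secret))
}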

Steps to Reproduce (for bugs)

If you alter the code to print out the nonce anywhere it is used, you will see it is the same every time.

Context

The nonce can be used to mitigate token replay attacks: https://openid.net/specs/openid-connect-core-1_0.html#NonceNotes

Your Environment

This works during any login.gov authorization/authentication using the LoginGovProvider.

  • Version used: #55

Possible to use "Dynamic" callback URL?

Thanks so much for taking over the maintenance of the oauth2_proxy repository.

I am a beginner and have a (perhaps) stupid question - is it possible to have the one oauth2_proxy instance authenticate a domain with several web services (e.g. app1.example.com, app2.example.com)? Or would I need one oauth2_proxy per webservice?

I would prefer to use a self-hosted GitLab instance as the OAuth2 provider; however, when I add a service I need to specify a specific callback URL, and I can't figure out how to make this dependent on the originally requested URL. For example, I go to app1.example.com, get redirected to the oauth2_proxy/GitLab login, and after a successful login I'm served the requested app1.example.com page. Had I requested app2.example.com instead, that would have been the URL served after a successful login, etc.

Proxy "Fixing" URLs containing double slashes

I noticed that oauth2_proxy will attempt to "fix" urls that contain two consecutive slashes by issuing a 301 redirect to the fixed URL. In the request log this looks as follows:

IP - PERSON [13/Feb/2019:15:09:28 +0000] adminpanel.homefully.tech POST - "/management-application//graphql" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36" 301 0 0.000

(notice the dash between POST and the path is the upstream - which is empty in this case, indicating that oauth2_proxy takes action itself)

I believe that this is not the ideal behaviour in all cases (see in context below for two examples)

I understand that changing the default behaviour of the proxy to not fix urls might cause more bugs than it would fix. It would be great however, if there was a way to disable this "url repair" mode via a commandline argument, such as --no-url-repair.
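
For what it's worth, the redirect looks like the standard path cleaning done by Go's http.ServeMux (that this is where the proxy's 301 comes from is an assumption, not confirmed against the proxy's code). A small standalone demo of that behaviour:

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "upstream reached:", r.URL.Path)
	})
	srv := httptest.NewServer(mux)
	defer srv.Close()

	// Don't follow redirects, so the 301 issued by ServeMux is visible.
	client := &http.Client{CheckRedirect: func(*http.Request, []*http.Request) error {
		return http.ErrUseLastResponse
	}}
	resp, err := client.Post(srv.URL+"/management-application//graphql", "application/json", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// Prints: 301 /management-application/graphql
	fmt.Println(resp.StatusCode, resp.Header.Get("Location"))
}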

Expected Behavior

oauth2_proxy features an argument --no-url-repair. If this is set, URLs are not rewritten

Current Behavior

all urls are "fixed" and a redirect is issued. this causes some applications to be not operable behind oauth2_proxy.

Context

So far, I've had two situations, in which this behaviour is causing pain:

  • issuing a 301 redirect will convert POST into GET requests (RFC 2616 section 10.3.2). Even if the upstream would have been able to handle the request accordingly - clients executing such a link might break. (I am encountering such a behaviour right now)
  • There are cases in which double slashes in URLs are used by design. Thumbor is an image resizing proxy that expects the URL of the image to be resized to be part of the request path. A normal Thumbor URL would look like this: http://myhost.com/unsafe/300x200/http://myimagehost.com/image1.jpg . Because of oauth2_proxy's behaviour, it becomes impossible to put a Thumbor instance behind this proxy.

It's worth noting that this is a purely pragmatic approach to the issue. URLs are not used as they should be in both cases - but existing tools out there depend on this behaviour and break due to the redirect.

BTW: Thanks a lot for this great tool! I find it enormously helpful! :)

Structured logging

Structured logging to a json line format will greatly help log collection and processing in tools like elastic, splunk, etc.

Expected Behavior

Log lines should be done in json, so the fields end up as first class citizens within the log and allows easy log ingestion into splunk, elastic, etc. The log levels and location should be configurable so other agents can pick them up and manage them as needed. It should also allow easy management/uploading, etc through a sidecar.

Current Behavior

Logs are sent to stdout and are not easily parsable from tools like splunk agent.

Possible Solution

Logs should be generated in json.
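
As a rough illustration of the proposal (the field names are made up, not the proxy's actual log fields), a request log line emitted as JSON could look like this:

package main

import (
	"encoding/json"
	"os"
	"time"
)

// requestLog is one structured log record; every field becomes a first-class
// attribute for tools like Splunk or Elastic.
type requestLog struct {
	Timestamp time.Time `json:"timestamp"`
	RemoteIP  string    `json:"remote_ip"`
	User      string    `json:"user"`
	Host      string    `json:"host"`
	Method    string    `json:"method"`
	Path      string    `json:"path"`
	Status    int       `json:"status"`
	Duration  float64   `json:"duration_seconds"`
}

func main() {
	enc := json.NewEncoder(os.Stdout)
	_ = enc.Encode(requestLog{
		Timestamp: time.Now().UTC(),
		RemoteIP:  "51.0.0.1",
		User:      "user@example.com",
		Host:      "grafana.mydomain.tld",
		Method:    "GET",
		Path:      "/oauth2/auth",
		Status:    202,
		Duration:  0.0,
	})
}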

Steps to Reproduce (for bugs)

Generate output. All data is in text lines and not in json.

Context

Most companies ingest logs into tools like splunk so they can be interpreted, aggregated, etc. Json logging makes this process very easy for ingestion and makes sure there is no data loss.

Your Environment

My current environment is docker based.

  • Version used:
    latest

Auth logging

Expected Behavior

I would like to see the ability to configure auth logging separately from request logs. This should specifically include the ability to log directly to a configured file path.

Current Behavior

All log output is sent to stdout, both requests and error logging. It appears that the error logs flow through a separate logging interface than the request logs. The only way to currently obtain these logs is to use docker logs Oauth2Proxy or configuring the docker container to log to the host syslog.

Request logs are logged:

123.123.123.123 - [email protected] [07/Feb/2019:00:01:30 +0000] domain.com GET - "/oauth2/auth" HTTP/1.0 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36" 202 0 0.000

Failed auth attempts are logged un-formatted to stdout as well:

2019/02/07 00:00:40 oauthproxy.go:754: 172.17.0.1:52548 ("123.123.123.123") Permission Denied: "[email protected]" is unauthorized

Possible Solution

Further configuration either via command line like:

  -auth-logging: Log auth requests (default true)
  -auth-logging-file: File to log auth requests to (defaults to empty for stdout)
  -auth-logging-format: Template for auth log lines (see "Auth Logging Format" paragraph below)

Or through the oauth2_proxy.cfg file:

## Log requests to stdout
auth_logging = true
auth_logging_file = /etc/oauth2/auth.log
auth_logging_format = [{{.Timestamp}}[{{.Level.}}] {{.Message}} IP: {{.RemoteIP}}. Username: {{.Username}}.

The format should include the option for all relevant info but most importantly the remote IP address of the request.

That way a failed oauth2 attempt would log something like:

[2019/02/07 00:00:40] [ERROR] Permission Denied. IP: 123.123.123.123. Username: [email protected]

Most of this logging would happen at the existing log.Printf() function calls in oauthproxy.go.
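
A minimal sketch of the idea (assumed names, not the project's logging package): a dedicated auth logger that writes to a configurable file and falls back to stdout when no path is set.

package main

import (
	"log"
	"os"
)

// newAuthLogger returns a logger for auth events only; an empty path keeps the
// current stdout behaviour, mirroring the proposed default.
func newAuthLogger(path string) (*log.Logger, error) {
	out := os.Stdout
	if path != "" {
		f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
		if err != nil {
			return nil, err
		}
		out = f
	}
	return log.New(out, "", log.LstdFlags), nil
}

func main() {
	authLog, err := newAuthLogger("") // pass e.g. /etc/oauth2/auth.log to log to a file
	if err != nil {
		panic(err)
	}
	// A line like this is the kind of record the fail2ban failregex below would match.
	authLog.Printf("[ERROR] Permission Denied. IP: %s. Username: %s", "123.123.123.123", "user@example.com")
}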

Context

I would like to get a log file of failed login attempts to redirect through fail2ban so that any attempts to login via any interface can be blocked upstream at the reverse proxy level. In this particular case I have fail2ban set up to ban IP addresses directly at the cloudflare proxy so the banned IP's don't even make it to my reverse proxy server.

Then you could create a file /etc/fail2ban/jail.d/oauth2proxy.conf

[INCLUDES]
before = common.conf

[Definition]
failregex = ^.*Permission Denied\. IP: <HOST>\. Username:.*$
ignoreregex =

And then you could add oauth2proxy as a fail2ban jail.

Plan to handle multiple providers at once?

Is there any plan to support multiple providers?

It would be useful if the service wants to authenticate two users X and Y where:

  • X has a Google account but doesn't have a Github account
  • Y doesn't have a Google account but has a Github account

This could be presented as follows (screenshot: multiple_providers_idea).

This question was asked before the fork: bitly/oauth2_proxy#509

error redeeming code Post, certificate signed by unknown authority

Moving from bitly/oauth2_proxy and I'm having the above issue. I'm using the docker image with nginx providing SSL and the nginx auth_request directive.

I get the following error after I have signed into Google:

10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36" 302 368 0.000
2019/01/19 20:42:42 oauthproxy.go:588: 172.17.0.1:38034 ("192.168.1.1") error redeeming code Post https://www.googleapis.com/oauth2/v3/token: x509: certificate signed by unknown authority

2019/01/19 20:42:42 oauthproxy.go:395: ErrorPage 500 Internal Error Internal Error

Steps to Reproduce (for bugs)

I'm running docker with the following command:

docker run --name='PusherOAuth2' --net='bridge' -e TZ="Europe/London" -e HOST_OS="Unraid" -p '4181:4181/tcp' -v '/mnt/user/appdata/PusherOAuth2/':'/etc/oauth2/':'rw' 'quay.io/pusher/oauth2_proxy' -config=/etc/oauth2/oauth2_proxy.cfg -set-xauthrequest

Here is my config:

## OAuth2 Proxy Config File

## <addr>:<port> to listen on for HTTP/HTTPS clients
http_address = "0.0.0.0:4181"

## the http url(s) of the upstream endpoint. If multiple, routing is based on path
upstreams = [
     "http://127.0.0.1:4181/oauth2/login"
]

## Log requests to stdout
request_logging = true

## The OAuth Client ID, Secret
client_id = "MY_ID"
client_secret = "A_SECRET"

## Authenticated Email Addresses File (one email per line)
authenticated_emails_file = "/etc/oauth2/emails.cfg"

## Templates
## optional directory with custom sign_in.html and error.html
#custom_templates_dir = "/etc/oauth2/templates/"

## Cookie Settings
## Name     - the cookie name
## Secret   - the seed string for secure cookies; should be 16, 24, or 32 bytes
##            for use with an AES cipher when cookie_refresh or pass_access_token
##            is set
## Domain   - (optional) cookie domain to force cookies to (ie: .yourcompany.com)
## Expire   - (duration) expire timeframe for cookie
## Refresh  - (duration) refresh the cookie when duration has elapsed after cookie was initially set.
##            Should be less than cookie_expire; set to 0 to disable.
##            On refresh, OAuth token is re-validated. 
##            (ie: 1h means tokens are refreshed on request 1hr+ after it was set)
## Secure   - secure cookies are only sent by the browser of a HTTPS connection (recommended)
## HttpOnly - httponly cookies are not readable by javascript (recommended)
cookie_name = "_oauth2_proxy"
cookie_secret = "blah"
cookie_domain = "squishedmooo.com"
cookie_expire = "168h"
cookie_refresh = "1h"
cookie_secure = true
cookie_httponly = true

An example site-conf:

server {
	listen 80;
	server_name lidarr.squishedmooo.com;
	return 301 https://$server_name$request_uri;
}

server {
  listen 443 ssl;
  server_name lidarr.squishedmooo.com;
  
  ssl on;
  ssl_certificate /config/keys/letsencrypt/fullchain.pem;
  ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
  ssl_prefer_server_ciphers on;
  add_header Strict-Transport-Security max-age=2592000;

  location /oauth2/ {
    proxy_pass       http://192.168.1.41:4181;
    proxy_set_header Host                    $host;
    proxy_set_header X-Real-IP               $remote_addr;
    proxy_set_header X-Scheme                $scheme;
    proxy_set_header X-Auth-Request-Redirect $request_uri;
  }
  location = /oauth2/auth {
    proxy_pass       http://192.168.1.41:4181;
    proxy_set_header Host             $host;
    proxy_set_header X-Real-IP        $remote_addr;
    proxy_set_header X-Scheme         $scheme;
    # nginx auth_request includes headers but not body
    proxy_set_header Content-Length   "";
    proxy_pass_request_body           off;
  }

  location / {
    auth_request /oauth2/auth;
    error_page 401 = /oauth2/sign_in;

    # pass information via X-User and X-Email headers to backend,
    # requires running with --set-xauthrequest flag
    auth_request_set $user   $upstream_http_x_auth_request_user;
    auth_request_set $email  $upstream_http_x_auth_request_email;
    proxy_set_header X-User  $user;
    proxy_set_header X-Email $email;

    # if you enabled --cookie-refresh, this is needed for it to work with auth_request
    auth_request_set $auth_cookie $upstream_http_set_cookie;
    add_header Set-Cookie $auth_cookie;

    proxy_pass http://192.168.1.41:8686/;
    # or "root /path/to/site;" or "fastcgi_pass ..." etc
  }
}

Azure provider: Unexpected audience causes invalid bearer token

I am using oauth2_proxy behind Kubernetes ingress nginx to allow users of the cluster to access its dashboard.

Pretty much everything seems to work well (e.g., the authentication bearer token is correctly forwarded to the dashboard from nginx). But the token obtained from Azure contains the wrong audience (at least from the point of view of what Kubernetes expects). I can see the following error message in the cluster master node:

E0317 17:13:21.837365       
1 authentication.go:62] Unable to authenticate the request due to an error: 
[invalid bearer token, 
[invalid bearer token, oidc: verify token: 
oidc: expected audience "spn:{redacted client ID}" got ["https://graph.windows.net"]]]

As the message says, the token returned by oauth2_proxy does not have the client ID in the audience field. Instead, it has the audience set as "aud": "https://graph.windows.net".

Following https://docs.microsoft.com/en-us/azure/active-directory/develop/v1-protocols-openid-connect-code I have created a small program that queries Azure to obtain the ID token. Instead of setting the response_type to code (as oauth2_proxy does) I used id_token and the ID token that I got back had the correct audience set ("aud": "{redacted client ID}"). The response misses the spn: prefix, but that is a known issue tracked at kubernetes/client-go#566 (it seems that missing the prefix is actually correct, and Kubernetes is expecting the old audience format).

Expected Behavior

The obtained ID token has its audience set to the client ID.

Current Behavior

The obtained ID token has its audience set to https://graph.windows.net.

Possible Solution

Perhaps allowing a flow that simply fetches the ID token (setting response_type to id_token) could solve this issue.

Context

For completeness, I had a look at how Kubernetes verifies the tokens, and this is what I found:

OIDC token verification in Kubernetes is done here:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/plugin/pkg/authenticator/token/oidc/oidc.go#L559

The verifier uses coreos/go-oidc, and the verification itself expects the client ID to be part of the audience. See:
https://github.com/coreos/go-oidc/blob/master/verify.go#L164

Your Environment

  • Version used: latest

Panic at startup on the latest docker image

Current Behavior

Updating to the latest docker image (6c9a468325f2), all my oauth2_proxy containers panic at startup.

goroutine 1 [running]:
github.com/pusher/oauth2_proxy/vendor/github.com/mreiferson/go-options.Resolve(0x8769c0, 0xc000043500, 0xc00001e600, 0xc0000692c0)
	/go/src/github.com/pusher/oauth2_proxy/vendor/github.com/mreiferson/go-options/options.go:86 +0xa3e
main.main()
	/go/src/github.com/pusher/oauth2_proxy/main.go:109 +0x148f
panic: interface conversion: *main.StringArray is not flag.Getter: missing method Get

Steps to Reproduce (for bugs)

  1. Update to the latest docker image
  2. Reup your docker-compose without changing anything
  3. docker-compose logs -f shows me the previous panic stack

Context

It seems that this bug appears on the build generated by the merge of this pull request #92. I notice that this PR adds a new option -proxy-websockets. Note that setting it to true or false does not change anything.

Your Environment

  • Version used: latest docker image (6c9a468325f2)

oauth proxy does not call my Userinfo endpoint to determine the user's email

Expected Behavior

I am using the OpenID provider satosa, which does not return the email as a claim inside the "identity token" (not sure what the correct name is) but expects the RP to call the userinfo endpoint to receive it.

This seems to be ok according to the spec? And I see a configuration option where I'm able to provide the userinfo endpoint to oauth2_proxy.

So I expect the endpoint to be called.

Current Behavior

The endpoint is not queried and oauth2_proxy errors out claiming "error redeeming code unable to update session: id_token did not contain email"

Possible Solution

Either satosa or oauth2_proxy needs to be fixed. I'm not sure which is in error here.

Steps to Reproduce (for bugs)

Run a copy of satosa with an oidc frontend and try to login using oauth2_proxy. If you want I can get you a dump of the http requests between oauth2_proxy and satosa.

Context

I'm currently unable to use oauth2_proxy to protect my services.

Your Environment

I'm running oauth2_proxy as a sidecar container in kubernetes. Satosa is deployed as a separate service in the same cluster. I can see them talking to each other.

Using libapache2-mod-auth-openidc (without any specific config) I am able to set up a proxy that will provide the email.

  • Version used:
    v3.1.0

Release links include .gz extension but files are not GZipped

Links to the binaries in the 3.0.0 release include .gz extensions, when in fact the files are just TAR archives.

Expected Behavior

$ curl -sL https://github.com/pusher/oauth2_proxy/releases/download/v3.0.0/oauth2_proxy-v3.0.0-0-g7887272.linux-amd64.go1.11.tar.gz | tar zxvf -
release/oauth2_proxy-linux-amd64

Current Behavior

$ curl -sL https://github.com/pusher/oauth2_proxy/releases/download/v3.0.0/oauth2_proxy-v3.0.0-0-g7887272.linux-amd64.go1.11.tar.gz | tar zxvf -

gzip: stdin: not in gzip format
tar: Child died with signal 13
tar: Error is not recoverable: exiting now

Possible Solution

Remove .gz from the links, or GZip the TAR files.

Steps to Reproduce (for bugs)

  1. Download one of the binaries.
  2. Attempt to gunzip it.

Context

$ file oauth2_proxy-v3.0.0-0-g7887272.linux-amd64.go1.11.tar.gz
oauth2_proxy-v3.0.0-0-g7887272.linux-amd64.go1.11.tar.gz: POSIX tar archive

Your Environment

Ubuntu Linux 18.04.

`dep ensure` fails

dep ensure gives:

Solving failure: No versions of gopkg.in/fsnotify.v1 met constraints:
	v1.2.11: unable to update checked out version: fatal: reference is not a tree: 836bfd95fecc0f1511dd66bdbf2b5b61ab8b00b6
: command failed: [git checkout 836bfd95fecc0f1511dd66bdbf2b5b61ab8b00b6]: exit status 128

This is with dep version:

dep:
 version     : v0.5.0
 build date  : 2018-07-26
 git hash    : 224a564
 go version  : go1.10.3
 go compiler : gc
 platform    : linux/amd64
 features    : ImportDuringSolve=false

I'm not sure if this is a dep issue or whether it's somehow caused by the Gopkg.toml / Gopkg.lock files having been modified as part of the fork (I noticed we changed 18F/hmacauth to mbland/hmacauth but only in .lock, not in .toml).

Adding support for groups in Azure Active Directory

Hi :)

Currently I'm using oauth2-proxy to put tools and applications behind Azure AD integration. It works very well, thanks.

However, there is currently no option to get groups from AD, and pass this in a header back onto the tools and applications to do RBAC.

There is an open pull request on the old fork - any chance we can get this reviewed and merged?

bitly/oauth2_proxy#347

Expected Behavior

Upon successful auth from Azure AD, I should get a list of groups the user is in, in AD. This list of groups should be put in a header, that can then be passed back upstream.

From the above pull request,

pass-groups flag to enable an additional X-Forwarded-Groups header that contains a pipe-separated list of groups to which the user belongs.

Current Behavior

There is no way to pass group information upstream.

Possible Solution

Review and merge bitly/oauth2_proxy#347

Steps to Reproduce (for bugs)

  1. Configure oauth2-proxy to talk to Azure AD
  2. Check there is no group information in the returned data.
  3. Check there is no option to parse, and pass group data in a header back upstream.

Context

Trying to implement RBAC on tools and applications. The tools and applications are configured with HTTP header authentication, and currently log the user in based on x-auth-request-user. I want the ability to give the user different access based on the AD group they are in.

e.g. admin, dev, read_only

Your Environment

Deployed with the official helm chart, and configured with Azure provider. Nginx (kubernetes ingress controller) is configured with auth-url and auth-signin to proxy auth to oauth2-proxy.

Config:

set-xauthrequest: true
set-authorization-header: true
provider: azure
azure-tenant: <redacted>
<+ other unrelated config>
  • Version used:
repository: "quay.io/pusher/oauth2_proxy"
tag: "v3.1.0"

Allow pass-through of valid JWT Bearer tokens with OpenID Connect/JWKS issuers

In order to enable API access where outside parties may have received a valid JWT token and would like to authenticate, it would be nice if there was a way to configure oauth2_proxy to verify those tokens, use them to construct a session, and pass through the request.

Expected Behavior

This is similar in effect to the CheckBasicAuth, but instead of Authorization: Basic [...], it's Authorization: Bearer [...], and instead of checking against a .htpasswd, you verify the token according to pre-configured OpenID Connect/JWKS issuers; using https://example.com/.well-known/openid-configuration and then the jwks attribute, or https://example.com/.well-known/jwks.json as well as the client IDs to verify the audience in the JWT.

When running Authenticate in oauthproxy.go, first you would check if there was a valid header Authorization header. Then you would construct a session representing the JWT token as much as possible. That session would not be refreshable and it would not be saved the way others are.
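
A rough sketch of the verification step using the coreos/go-oidc package (the wiring and names here are illustrative, not the proxy's actual Authenticate code):

package main

import (
	"context"
	"fmt"
	"net/http"
	"strings"

	oidc "github.com/coreos/go-oidc"
)

// bearerEmail verifies an Authorization: Bearer token against a pre-configured
// issuer and audience, returning the email claim a session could be built from.
func bearerEmail(ctx context.Context, r *http.Request, issuer, clientID string) (string, error) {
	auth := r.Header.Get("Authorization")
	raw := strings.TrimPrefix(auth, "Bearer ")
	if auth == "" || raw == auth {
		return "", fmt.Errorf("no bearer token in request")
	}
	// NewProvider discovers the jwks_uri via .well-known/openid-configuration.
	provider, err := oidc.NewProvider(ctx, issuer)
	if err != nil {
		return "", err
	}
	// The verifier checks signature, expiry and that clientID is in the audience.
	idToken, err := provider.Verifier(&oidc.Config{ClientID: clientID}).Verify(ctx, raw)
	if err != nil {
		return "", err
	}
	var claims struct {
		Email string `json:"email"`
	}
	if err := idToken.Claims(&claims); err != nil {
		return "", err
	}
	// A non-refreshable, non-persisted session would then be built from these claims.
	return claims.Email, nil
}

func main() {
	_ = bearerEmail // wiring this into the proxy's request path is out of scope here
}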

Current Behavior

Not possible

Possible Solution

I've implemented some of this in my fork:
https://github.com/lsst-dm/oauth2_proxy/pull/1/files

(This is my first time programming in Go, so I thought I'd start with an issue rather than a pull request on this specific item.)

Context

We issue tokens for API users. This more easily enables REST APIs to use ID tokens, for example, to statelessly authenticate against the server and prevent redirects which are troublesome with non-idempotent operations.

Release tar.gz files are actually just tar files

The v3.0.0 release archives have .gz extensions, but aren't, in fact, gzipped.

Expected Behavior

$ file *.gz
oauth2_proxy-v3.0.0-0-g7887272.darwin-amd64.go1.11.tar.gz:  gzip compressed data, was "oauth2_proxy-v3.0.0-0-g7887272.darwin-amd64.go1.11.tar", last modified: Thu Feb  7 21:24:45 2019, from Unix
oauth2_proxy-v3.0.0-0-g7887272.linux-amd64.go1.11.tar.gz:   gzip compressed data, was "oauth2_proxy-v3.0.0-0-g7887272.linux-amd64.go1.11.tar", last modified: Thu Feb  7 21:24:59 2019, from Unix
oauth2_proxy-v3.0.0-0-g7887272.windows-amd64.go1.11.tar.gz: gzip compressed data, was "oauth2_proxy-v3.0.0-0-g7887272.windows-amd64.go1.11.tar", last modified: Thu Feb  7 21:24:30 2019, from Unix

Current Behavior

$ file *.gz
oauth2_proxy-v3.0.0-0-g7887272.darwin-amd64.go1.11.tar.gz:  POSIX tar archive
oauth2_proxy-v3.0.0-0-g7887272.linux-amd64.go1.11.tar.gz:   POSIX tar archive
oauth2_proxy-v3.0.0-0-g7887272.windows-amd64.go1.11.tar.gz: POSIX tar archive

Possible Solution

tar -czf would probably do it.

Steps to Reproduce (for bugs)

cd /tmp
curl -L -O https://github.com/pusher/oauth2_proxy/releases/download/v3.0.0/oauth2_proxy-v3.0.0-0-g7887272.windows-amd64.go1.11.tar.gz
curl -L -O https://github.com/pusher/oauth2_proxy/releases/download/v3.0.0/oauth2_proxy-v3.0.0-0-g7887272.darwin-amd64.go1.11.tar.gz
curl -L -O https://github.com/pusher/oauth2_proxy/releases/download/v3.0.0/oauth2_proxy-v3.0.0-0-g7887272.linux-amd64.go1.11.tar.gz
file *.gz

Context

When running things like tar -xzf ${ARCHIVE_NAME} in an install / build script, tar fails.

Your Environment

$ lsb_release -a
...
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:        18.04
Codename:       bionic

Using Google Group Mailing List

Thanks for maintaining this project first of all.

OAuth-based proxy solutions have started to appeal to me. I have a scenario in which I have a regular Google group (not a G Suite-based custom domain group).

Is it possible to use Google OAuth with a regular Google group?

Current OIDC implementation requires provider to implement the /.well-known/openid-configuration endpoint

Expected Behavior

Current Behavior

The current implementation expects the OIDC provider to implement the discovery endpoint, however this is optional in the spec and unfortunately some enterprises do not implement this, such as IBM.

Possible Solution

On my fork of the bitly version, I implemented an additional provider with a fork of the go-oidc package that had an implementation for NewManualProvider, skipping the call to the discovery endpoint.

// NewManualProvider creates a provider with manually set configurations
func NewManualProvider(ctx context.Context, issuer, authURL, tokenURL, userInfoURL, jwksURL string) *Provider {
	return &Provider{
		issuer:       issuer,
		authURL:      authURL,
		tokenURL:     tokenURL,
		userInfoURL:  userInfoURL,
		remoteKeySet: newRemoteKeySet(ctx, jwksURL, time.Now),
	}
}

We could implement as a new provider or use a flag to let oauth2_proxy know that we are trying to integrate to a provider that does not have the discovery endpoint available.

Alternatively we could handle the error and proceed.

Steps to Reproduce (for bugs)

Configure the oauth2_proxy against an OIDC provider that does not have the discovery endpoint.

Context

Some enterprises do not implement or expose the /.well-known/openid-configuration endpoint

Your Environment

  • Version used: bitly's oauth2_proxy with @JoelSpeed's PR's

Support for WebSockets

I am using oauth2_proxy and one of the connections opens a WebSocket, but I am not sure if that's supported. So far I have tried to disable the proxy for the specific URL that opens the websocket (-skip-auth-regex=/api/kube/.*) but with that I receive a 403 error.

Expected Behavior

The server should receive the header Sec-WebSocket-Protocol with the bearer token and returns a 101 Connection upgrade response:

screenshot from 2019-02-14 17-37-46

Current Behavior

With the set of flags I am using I am not receiving any response, the connection just hangs. These are the flags I am using:

        - --email-domain=*
        - --cookie-secure=false
        - --cookie-secret=<redacted>
        - --client-secret=<redacted>
        - --client-id=oidckube
        - --provider=oidc
        - --oidc-issuer-url=<redacted>
        - --ssl-insecure-skip-verify
        - --http-address=0.0.0.0:4180
        - --upstream=http://kubeapps/
        - --set-authorization-header
        - --pass-authorization-header

Then, if I skip the auth for the URL of the websocket, I receive a 403:

screenshot from 2019-02-14 17-41-59

Possible Solution

I see that there are some PRs in the previous project to add support for WebSockets, like:
bitly/oauth2_proxy#554

They didn't get merged though.

Steps to Reproduce (for bugs)

  1. Deploy oauth2_proxy with the flags above
  2. Open a websocket:
    new WebSocket(
      "api/kube/apis/extensions/v1beta1/namespaces/default/deployments?watch=true&fieldSelector=metadata.name%3Dinconclusive-lake-wordpress",
      [
        "base64url.bearer.authorization.k8s.io." + token,
        "binary.k8s.io",
      ],
    );

Your Environment

I am using Keycloak as Identity Provider in a Kubernetes cluster (minikube).

  • Version used: v3.1.0

Thanks in advance!

Project README is too large!

Expected Behavior

ReadMe should provide information about the project but not necessarily contain all documentation for the entire project.

Current Behavior

ReadMe has all documentation in it and is hard to navigate

Possible Solution

Set up GitHub pages with Jekyll just-the-docs theme and break the documentation into smaller, more manageable pieces.

  • Version used: 3.0.0

Try to get email from UserInfo before giving up

Some OIDC providers do not include email in the default IDToken information. In such cases email would need to be requested separately, either via UserInfo interface or via Remote Claims.

Expected Behavior

To get the email in providers/oidc.go:createSessionState, check the ID token first; if the email is not there, request UserInfo and check there; if it's still not there, try resolving a claim for "email".
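
A small illustrative sketch of that fallback order using the coreos/go-oidc types (the function and its wiring are hypothetical, not the actual createSessionState):

package main

import (
	"context"
	"fmt"

	oidc "github.com/coreos/go-oidc"
	"golang.org/x/oauth2"
)

// emailFromSession prefers the email claim of the ID token and falls back to
// the provider's UserInfo endpoint when the claim is missing.
func emailFromSession(ctx context.Context, provider *oidc.Provider, idToken *oidc.IDToken, token *oauth2.Token) (string, error) {
	var claims struct {
		Email string `json:"email"`
	}
	if err := idToken.Claims(&claims); err == nil && claims.Email != "" {
		return claims.Email, nil // the usual case: email is in the ID token
	}
	ui, err := provider.UserInfo(ctx, oauth2.StaticTokenSource(token))
	if err != nil {
		return "", fmt.Errorf("id_token did not contain email and UserInfo failed: %v", err)
	}
	if ui.Email == "" {
		return "", fmt.Errorf("no email in id_token or UserInfo response")
	}
	return ui.Email, nil
}

func main() {
	_ = emailFromSession // wiring into the OIDC provider is out of scope here
}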

Current Behavior

If the email field is not included in the ID token, then authentication just stops with "id_token did not contain an email".

Redirect URL missing when the skip-provider-button option is used

When I use -skip-provider-button=true, it always gets redirected after login to /.
When I don't (-skip-provider-button=false), it gets redirected after login to the URL from beforehand.

Expected Behavior

Preserve the URL after login regardless of the value of the -skip-provider-button option.

Steps to Reproduce (for bugs)

e.g. I point my browser at https://my-site/some/sub/path
when I have -skip-provider-button, it will end up pointing at https://my-site/
when I don't, it will end up pointing at https://my-site/some/sub/path

Your Environment

  • Version used:
    3.0.0

Similar issues in bitly's archived repo:
bitly/oauth2_proxy#327
bitly/oauth2_proxy#586

ClearSessionCookie doesn't work for split cookies

Session cookies are split if they're too large. On logout, ClearSessionCookie should remove
all cookies, but that does not work for split cookies because of the way they are named.

Expected Behavior

ClearSessionCookie should remove all cookies set on login.

Current Behavior

When a user logs in, SetSessionCookie() is responsible for setting a cookie with user information. The cookie is
created by MakeSessionCookie(...) with provider-specific information. If the cookie is too large, it
is split into several cookies named <cookie_name>_0, <cookie_name>_1, ...
-> SET: _oauth2_proxy_0, _oauth2_proxy_1

Cookies are deleted by creating an empty cookie with the same name that has already expired; the browser then removes the
old one. ClearSessionCookie(...) calls MakeSessionCookie(...) to create the "delete" cookie. This time it is called
without a value (to create that empty cookie), resulting in a single, unsplit delete cookie.
-> UNSET: _oauth2_proxy

Possible Solution

The ClearSessionCookie() method should search the request cookies for any whose name starts with OAuthProxy.CookieName and
remove them.
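
A minimal sketch of that approach (clearSessionCookies is a simplified, hypothetical helper, not the project's actual method):

import (
	"net/http"
	"strings"
	"time"
)

// clearSessionCookies is a sketch, not the project's actual method: expire
// the unsplit cookie plus any split segments (<name>_0, <name>_1, ...) that
// the browser sent with the request.
func clearSessionCookies(rw http.ResponseWriter, req *http.Request, cookieName string) {
	expire := func(name string) {
		http.SetCookie(rw, &http.Cookie{
			Name:    name,
			Value:   "",
			Path:    "/",
			Expires: time.Now().Add(-time.Hour), // already expired, so the browser deletes it
		})
	}

	// Always clear the unsplit cookie.
	expire(cookieName)

	// Also clear any split segments present on this request.
	for _, c := range req.Cookies() {
		if strings.HasPrefix(c.Name, cookieName+"_") {
			expire(c.Name)
		}
	}
}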

Steps to Reproduce (for bugs)

We added a test to oauthproxy_test.go:

func TestClearSplittedCookie(t *testing.T) {
	p := OAuthProxy{CookieName: "oauth2"}
	var rw = httptest.NewRecorder()
	req := httptest.NewRequest("GET", "/", nil)

	// An unrelated cookie plus two split session cookie segments,
	// named <cookie_name>_0 and <cookie_name>_1 as described above.
	req.AddCookie(&http.Cookie{
		Name:  "test1",
		Value: "test1",
	})
	req.AddCookie(&http.Cookie{
		Name:  "oauth2_0",
		Value: "oauth2_0",
	})
	req.AddCookie(&http.Cookie{
		Name:  "oauth2_1",
		Value: "oauth2_1",
	})

	p.ClearSessionCookie(rw, req)
	header := rw.Header()

	assert.Equal(t, 3, len(header["Set-Cookie"]), "should have 3 set-cookie header entries")
}

func TestClearNotSplittedCookie(t *testing.T) {
	p := OAuthProxy{CookieName: "oauth2", CookieDomain: "abc"}
	var rw = httptest.NewRecorder()
	req := httptest.NewRequest("GET", "/", nil)

	// A single, unsplit session cookie alongside an unrelated cookie.
	req.AddCookie(&http.Cookie{
		Name:  "test1",
		Value: "test1",
	})
	req.AddCookie(&http.Cookie{
		Name:  "oauth2",
		Value: "oauth2",
	})

	p.ClearSessionCookie(rw, req)
	header := rw.Header()

	assert.Equal(t, 1, len(header["Set-Cookie"]), "should have 1 set-cookie header entry")
}

Context

This bug prevents a proper logout: if the user reloads the page, she will be logged in again.

Your Environment

  • Version used:

The bug is present in current versions.

How to run as a docker container

I am trying to run this as a docker container, here is my command:

docker run -P quay.io/pusher/oauth2_proxy \
    --cookie-secure=false \
    --upstream="http://internal.website.com/" \
    --http-address="127.0.0.1:4180" \
    --redirect-url="http://internal.website.com/oauth2/callback" \
    --cookie-secret=<secret> \
    --client-id=<google OAuth client id> \
    --client-secret=<google Oauth secret>

The output tells me it's listening on 127.0.0.1:4180, but when I curl that address I get:
Failed to connect to 127.0.0.1 port 4180: Connection refused

How do I get to the home page shown at the top of the README?

BitBucket support

I work for an org that uses BitBucket Cloud for all of our repos. It'd be nice if oauth2_proxy could be used with BitBucket.

Expected Behavior

It'd be nice if oauth2_proxy had a BitBucket provider.

Current Behavior

oauth2_proxy can't be used with BitBucket right now. BitBucket supports OAuth2, but not OpenID Connect, so a new provider would be needed.

Possible Solution

Add BitBucket as a provider? I'd be willing to spend some time writing a PR for this, but it may take me a while and I lack confidence in my skills.
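
As a rough idea of what such a provider would need to do, here is a sketch against Bitbucket Cloud's public OAuth2 and REST endpoints; it is not written against oauth2_proxy's provider interface, and the client credentials and callback URL are placeholders:

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"golang.org/x/oauth2"
	"golang.org/x/oauth2/bitbucket"
)

func main() {
	conf := &oauth2.Config{
		ClientID:     "client-id",        // from a Bitbucket OAuth consumer
		ClientSecret: "client-secret",
		Endpoint:     bitbucket.Endpoint, // authorize + token URLs for Bitbucket Cloud
		Scopes:       []string{"account", "email"},
		RedirectURL:  "https://example.com/oauth2/callback",
	}

	// After the user authorizes, Bitbucket redirects back with ?code=...,
	// which would be exchanged for a token:
	tok, err := conf.Exchange(context.Background(), "auth-code-from-callback")
	if err != nil {
		log.Fatal(err)
	}

	// Look up the user's email addresses to decide whether to allow access.
	client := conf.Client(context.Background(), tok)
	resp, err := client.Get("https://api.bitbucket.org/2.0/user/emails")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var emails struct {
		Values []struct {
			Email     string `json:"email"`
			IsPrimary bool   `json:"is_primary"`
		} `json:"values"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&emails); err != nil {
		log.Fatal(err)
	}
	for _, e := range emails.Values {
		fmt.Println(e.Email, e.IsPrimary)
	}
}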

Steps to Reproduce (for bugs)

N/A

Context

The org I work for uses BitBucket cloud for all of our repos. I'd like to deploy various internal tools for developers to access and authorize them using their BitBucket credentials.

Your Environment

Running the container as non-root

Expected Behavior

The container only contains a single binary, and therefore should be able to run without the need for root.

Current Behavior

The container runs as root, as there is no USER defined in the Dockerfile. This is not best practice, and opens unnecessary conversations with people who utter the word 'security' every few words ;)

Possible Solution

Add a USER line in the Dockerfile.

Steps to Reproduce (for bugs)

  1. Deploy oauth2-proxy using the helm chart
  2. kubectl exec -i -t <pod> -- ash
  3. whoami shows root

Context

Trying to use oauth2-proxy in a client environment running in Azure. Currently using Kubernetes. Client security team are very conscious of containers running as root.

Your Environment

Azure AKS, Kubernetes cluster, oauth2-proxy deployed using official helm chart found at https://github.com/helm/charts/tree/master/stable/oauth2-proxy

  • Version used:
    aks: Kubernetes v1.11.6
    repository: "quay.io/pusher/oauth2_proxy"
    tag: "v3.1.0"

-skip-auth-regex flag does not work when using a redirect

The -skip-auth-regex flag values are only validated against the request path and not the rd= path

Expected Behavior

If used with redirects, the path that gets validated should be the redirect path.

Current Behavior

When integrated with nginx-controller as part of Kubernetes, the request path is always the auth endpoint path, such as /dev/oauth/auth, which never matches the redirect path of the application being protected.

Possible Solution

Alter the IsWhitelistedRequest() method to validate the redirect path instead of req.URL.Path when the request has a redirect. Potentially use p.GetRedirect(req) to determine whether a redirect is available.
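
A sketch of that kind of check (isWhitelisted and its parameters are illustrative, not the actual oauth2_proxy code): match the compiled skip-auth regexes against the rd parameter when one is present, otherwise against the request path.

import (
	"net/http"
	"net/url"
	"regexp"
)

// isWhitelisted is a sketch of the proposed behaviour: when the request
// carries an rd= redirect parameter (as set by nginx auth_request
// integrations), match the skip-auth regexes against that path rather than
// against the auth endpoint's own path.
func isWhitelisted(req *http.Request, skipAuthRegexps []*regexp.Regexp) bool {
	path := req.URL.Path
	if rd := req.URL.Query().Get("rd"); rd != "" {
		if u, err := url.Parse(rd); err == nil && u.Path != "" {
			path = u.Path
		}
	}
	for _, re := range skipAuthRegexps {
		if re.MatchString(path) {
			return true
		}
	}
	return false
}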

Your Environment

  • Version used: 3.1.0

oauth2 does not honour http_proxy environment variable

The oauth2_proxy command line should detect environment variables such as http_proxy and https_proxy and use them when accessing the redirection URLs.

We are running oauth2_proxy behind a corporate proxy and it is having trouble reaching our corporate GitHub server. I do not see any option to provide http_proxy info in the oauth2.conf file.

oauth2_proxy should honour the (system-level) environment variables and use them when initiating HTTP requests.

To reproduce: run oauth2_proxy from a host behind a corporate proxy and use a redirection URL such as google.com or github.com that needs to be routed via the proxy.
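
For what it's worth, Go's default HTTP transport already honours HTTP_PROXY, HTTPS_PROXY and NO_PROXY via http.ProxyFromEnvironment; that behaviour is only lost when a custom Transport is built without setting Proxy. A minimal sketch of a proxy-aware client:

package main

import (
	"log"
	"net/http"
)

func main() {
	// http.ProxyFromEnvironment reads HTTP_PROXY, HTTPS_PROXY and NO_PROXY
	// (and their lowercase variants) from the environment.
	client := &http.Client{
		Transport: &http.Transport{
			Proxy: http.ProxyFromEnvironment,
		},
	}

	resp, err := client.Get("https://github.com")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status)
}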

Context

Redhat Linux 6.

  • Version used: v3.1.0

Break the fork relationship to bitly/oauth2_proxy

The repo is a fork of the bitly repo, which is bad for a few reasons:

  1. you end up in an incorrect tree (if you follow the forks you end up at an outdated repo); the root of the tree should be this repo
  2. forked repos are not searchable on GitHub

2 is the most annoying, as you cannot use GitHub's search capability to quickly scan the repos.

Expected Behavior

pusher/oauth2_proxy is the root repo and not a fork, and I can search its codebase with GitHub's native search capabilities.

Current Behavior

I cannot search the codebase.

Possible Solution

  1. contact GitHub support and ask them to break the fork relationship (but keep the issues and PRs intact)
  2. clone the entire repo (including PR refs), copy the issues etc., delete the repo and then recreate it.

Obviously 1 is pretty easy, non-destructive and painless; 2 is a right PITA :)

Steps to Reproduce (for bugs)

Context

Your Environment

  • Version used:

Support for Traefik

I'd like to ask whether any of you has experience configuring oauth2-proxy with Traefik. Is it supported out of the box?

oauth2_proxy configured with oidc provider quietly does not sign in

Context

I am trying to use oauth2_proxy to add Okta authentication to a browser-based application (Kibana).

Following all the information I could gather from the README, Issues, and PRs, I've done my best to configure oauth2_proxy for this purpose. I've gotten to the point that attempts to access my application are properly intercepted by oauth2_proxy and I see the sign-in page. But at this point clicking the button to sign in doesn't kick off the authentication process and I don't know why.

Expected Behavior

I would expect to see some information in the oauth2_proxy console output that would help me debug the problem. Some information that would either positively or negatively confirm that my configuration is correct, that my OIDC provider is providing the info that oauth2_proxy needs, and some trace information about the requests oauth2_proxy is seeing both in terms of proxying and in terms of actions initiated from its own UI (the sign-in page).

Current Behavior

I see two pieces of information logged: a message about a missing cookie and a generic message dumping the routes that have been hit:

2019/02/14 18:59:12 oauthproxy.go:796: 172.16.4.250:51816 Cookie "_oauth2_proxy_my_app" not present
172.16.4.250 - - [14/Feb/2019:18:59:12 +0000] my.subdomains.platforms.my_company.com GET - "/" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.81 Safari/537.36" 403 2523 0.000
2019/02/14 18:59:14 oauthproxy.go:796: 172.16.4.250:51816 Cookie "_oauth2_proxy_my_app" not present
172.16.4.250 - - [14/Feb/2019:18:59:14 +0000] my.subdomains.platforms.my_company.com GET - "/oauth2/start?rd=%2F" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.81 Safari/537.36" 403 2542 0.000

Possible Solution

A couple thoughts come to mind:

  1. It would be great if there were a way to enable verbose logging, with oauth2_proxy dumping out a lot more information about how it's been configured and what it's attempting/failing to do at runtime (a rough sketch of what this could look like follows this list).
  2. If there are any known causes of this particular symptom ("clicking the sign-in button does nothing"), it would be great to add them to a Troubleshooting section of the README. As someone new to oauth2_proxy, this problem feels like a dead end.
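
Purely as an illustration of point 1, request-level debug logging could look roughly like this middleware (hypothetical, not an existing oauth2_proxy feature):

import (
	"log"
	"net/http"
)

// debugLogger is illustrative only: wrap a handler so every request the
// proxy sees is logged before being served, which makes "clicking sign-in
// does nothing" cases easier to trace.
func debugLogger(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("DEBUG %s %s host=%s referer=%q", r.Method, r.URL.RequestURI(), r.Host, r.Referer())
		next.ServeHTTP(w, r)
	})
}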

Steps to Reproduce (for bugs)

  1. I am running oauth2_proxy in a container in Kubernetes, using the helm chart published at stable/oauth2_proxy
  2. At my ingress, I route traffic on the route "/logs" to the oauth2_proxy, so I set a proxy-prefix of "/logs/oauth2". (I'm putting this proxy in front of Kibana for an EFK stack, hence "/logs".)
  3. Here's the values.yaml file I use to configure the proxy:
# Oauth client configuration specifics
config:
  # OAuth client ID
  clientID: "my.okta.client.id"
  # OAuth client secret
  clientSecret: "my.okta.client.secret"
  # Create a new secret with the following command
  # python -c 'import os,base64; print base64.b64encode(os.urandom(16))'
  cookieSecret: "G06IvASrTGSPWUm8l9EEKg=="
  # Custom configuration file: oauth2_proxy.cfg
  # configFile: |-
  #   pass_basic_auth = false
  #   pass_access_token = true
  configFile: ""

service:
  port: 80

extraArgs:
  email-domain: "my-company.com"
  upstream: "http://my-service"
  provider: "oidc"
  oidc-issuer-url: "https://my-company.okta.com"
  cookie-name: "_oauth2_proxy_my_app"
  cookie-secure: "false"
  cookie-httponly: "false"
  proxy-prefix: "/logs/oauth2"
  • Note that the config section gets loaded into Kubernetes secrets which ultimately are fed to the binary via environment variables OAUTH2_PROXY_CLIENT_ID, OAUTH2_PROXY_CLIENT_SECRET, and OAUTH2_PROXY_COOKIE_SECRET.
  • Note that the extraArgs section gets turned directly into arguments passed to oauth2_proxy as --key=value pairs.

Your Environment

Running oauth2_proxy in a kubernetes pod behind an Istio ingress on AWS EKS.

  • Version used:

3.1.0
docker image: https://quay.io/repository/pusher/oauth2_proxy?tag=v3.1.0

Better Docker image tags

It'd be great to tag images by major version (e.g. 3), major + minor version (e.g. 3.1) and major + minor + patch version (e.g. 3.1.0). Currently, only major + minor + patch version tags are present on Quay.

Cookie "_oauth2_proxy" not present

When setting --pass-authorization-header=true, the proxy returns a 502 error with the following message in the logs:
oauthproxy.go:764 redacted_ip:port ("redacted_ip") Cookie "_oauth2_proxy" not present

Here are the parameters passed to the proxy:

"--provider=oidc",
"--client-id=<redacted>",
"--client-secret=<redacted>",
"--redirect-url=https://target.my_organisation.com/oauth2/callback",
"--oidc-issuer-url=https://keycloak.my_organisation.com/auth/realms/master",
"--email-domain=my_organisation.com",
"--upstream=http://127.0.0.1:9090",
"--http-address=0.0.0.0:3000",
"--cookie-secret=<redacted>",
"--pass-authorization-header=true",
"--set-authorization-header=true",
"--cookie-domain=.my_organization.com",
"--cookie-secure=false"

I am trying to pass the authorization header to the Kubernetes dashboard.

oauth2_proxy stripping out content-length header?

Expected Behavior

When a request is proxied by oauth2_proxy, all headers are proxied to the upstream.

Current Behavior

If I make an authenticated GET request with a content-length: 0 header, the header is not sent to the upstream.

Interestingly, if I tell oauth2_proxy to skip auth for the affected route (e.g. skip-auth-regex: /api), then the proxy does pass the headers through.

Possible Solution

Is there anything I can do as a client to tell oauth2_proxy to pass along all headers to the upstream? Or is a code change needed?

Context

In my particular case, this is a breaking issue. I'm proxying the Kibana dashboard behind oauth2_proxy and the XHR request it makes to fetch data fails with a 403 when the content-length header is missing.

Your Environment

Running oauth2_proxy in a kubernetes pod behind an Istio ingress on AWS EKS.

  • Version used:

3.1.0
docker image: https://quay.io/repository/pusher/oauth2_proxy?tag=v3.1.0

--whitelist-domain doesn't respect protocol on redirect

I'm using oauth2_proxy with AWS Cognito. Previously I was using bitly/OAuth2_Proxy with multiple oauth2_proxy pods in k8s to protect all the internal services; now I'm trying to replace those multiple pods with the --whitelist-domain setting available in v3.1.0 of this fork.

The problem I'm having is that oauth2_proxy is setting HTTP as the protocol on the redirect URL, ignoring the callbacks defined in Cognito, which all use HTTPS.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=oidc
        - --email-domain=*
        - --upstream=file:///dev/null
        - --http-address=0.0.0.0:4180
        - --scope=openid
        - --cookie-secure=false
        - --oidc-issuer-url=https://cognito-idp.us-east-1.amazonaws.com/POOL_ID
        - --whitelist-domain=.mydomain.com
        - --login-url=https://oauth.mydomain.com/oauth2/authorize
        - --profile-url=https://oauth.mydomain.com/oauth2/userInfo
        - --redeem-url=https://oauth.mydomain.com/oauth2/token
        env:
           .....
        image: quay.io/pusher/oauth2_proxy:v3.1.0
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP

screenshot 2019-02-11 at 11:52:36
screenshot 2019-02-11 at 11:55:20
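
For context, when TLS terminates at the ingress the proxy only sees plain HTTP, so the external scheme typically has to be recovered from the X-Forwarded-Proto header; a sketch of that recovery (illustrative only, not oauth2_proxy's code):

import "net/http"

// redirectBase is illustrative only: rebuild the externally visible base URL
// of a request that arrived through a TLS-terminating ingress, preferring the
// X-Forwarded-Proto header over the scheme the proxy itself sees.
func redirectBase(req *http.Request) string {
	scheme := "http"
	if req.TLS != nil {
		scheme = "https"
	}
	if proto := req.Header.Get("X-Forwarded-Proto"); proto != "" {
		scheme = proto
	}
	return scheme + "://" + req.Host
}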

  • Version used:

3.1.0
