
ack-ram-authenticator's Introduction

ACK RAM Authenticator for Kubernetes

A tool to use AlibabaCloud RAM credentials to authenticate to a Kubernetes cluster.

Why do I want this?

If you are an administrator running a Kubernetes cluster on AlibabaCloud, you already need to manage AlibabaCloud RAM credentials to provision and update the cluster. By using AlibabaCloud RAM Authenticator for Kubernetes, you avoid having to manage a separate credential for Kubernetes access.

If you are building a Kubernetes installer on AlibabaCloud, AlibabaCloud RAM Authenticator for Kubernetes can simplify your bootstrap process. You won't need to somehow smuggle your initial admin credential securely out of your newly installed cluster. Instead, you can create a dedicated KubernetesAdmin role at cluster provisioning time and set up Authenticator to allow cluster administrator logins.

How do I use it?

Assuming you have a cluster running in AlibabaCloud and you want to add AlibabaCloud RAM Authenticator for Kubernetes support, you need to:

  1. Create a RAM role or user you'll use to identify users.
  2. Create a CRD in your cluster to store the mapping between RAM roles and Kubernetes users.
  3. Configure the mapping relationship between RAM identities and RBAC permissions.
  4. Run the Authenticator server as a DaemonSet.
  5. Configure your API server to talk to Authenticator.
  6. Set up kubectl to use Authenticator tokens.

1. Create a RAM role

First, you must create one or more RAM roles that will be mapped to users/groups inside your Kubernetes cluster. The easiest way to do this is to log into the RAM Console:

  • Choose the "RAM Roles" / "Create RAM Role" option.
  • Select "Alibaba Cloud Account" as the type of trusted entity, and select "Current Alibaba Cloud Account" as the trusted Alibaba Cloud account.
  • Enter a name under "RAM Role Name" and click "OK".

This will create a RAM role with no permissions that can be assumed by authorized users/roles in your account. Note the AlibabaCloud Resource Name (ARN) of your role, which you will need below.
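
If you prefer the CLI to the console, a role with an equivalent trust policy can also be created with the aliyun command-line tool. The following is a minimal sketch, assuming the aliyun CLI is installed and configured; replace <your-account-id> with your account ID and adjust the role name as needed:

# trust policy allowing identities in your own account to assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "1",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Principal": { "RAM": [ "acs:ram::<your-account-id>:root" ] }
    }
  ]
}
EOF

# create the role via the RAM CreateRole API
aliyun ram CreateRole \
  --RoleName KubernetesAdmin \
  --Description "Kubernetes administrator role for ack-ram-authenticator" \
  --AssumeRolePolicyDocument "$(cat trust-policy.json)"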

You can also skip this step and use:

  • An existing role (such as a cross-account access role).
  • A RAM user (see mapUsers below).

2. Create a CRD

The Authenticator server uses a custom resource definition (CRD) to store the mapping between RAM roles and Kubernetes users.

You can create this CRD with kubectl apply -f ramidentitymapping.yaml; see ramidentitymapping.yaml for its definition.

3. Configure the mapping relationship between RAM identities and RBAC permissions

You need to configure the mapping relationship between RAM identities and RBAC permissions. First, create a RAM identity mapping with kubectl apply -f example-ramidentitymapping.yaml (see example-ramidentitymapping.yaml). Then bind the mapped Kubernetes user to RBAC permissions with kubectl apply -f example-binding.yaml (see example-binding.yaml).

[NOTE] Modify example-binding.yaml and example-ramidentitymapping.yaml as prompted in those files before applying them.
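
For illustration, a minimal binding sketch is shown below. The subject name (the hypothetical user "alice") must match the username set in your RAM identity mapping; the referenced ClusterRole is the built-in read-only role:

# give the mapped Kubernetes user "alice" read-only access cluster-wide
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: alice-view
subjects:
- kind: User
  name: alice                       # must match the username in your RAM identity mapping
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                        # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io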

4. Run the server

The server is meant to run on each of your master nodes as a DaemonSet with host networking so it can expose a localhost port.

For a sample ConfigMap and DaemonSet configuration, see example.yaml. You can run the server with kubectl apply -f example.yaml.

The example.yaml file includes an init container that uses the ack-ram-authenticator binary to generate a certificate, key, and kubeconfig file, and mounts these files onto the host.

NOTE: Replace <your-cluster-id> with your actual cluster ID in example.yaml.
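
The essential properties of that DaemonSet are host networking (so the API server can reach the webhook on localhost), scheduling onto master nodes, and host path mounts for the state directory. The following trimmed-down sketch is for illustration only; the image, command-line flags, and ConfigMap wiring should be taken from example.yaml:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ack-ram-authenticator
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: ack-ram-authenticator
  template:
    metadata:
      labels:
        k8s-app: ack-ram-authenticator
    spec:
      hostNetwork: true                      # expose the localhost webhook port
      nodeSelector:
        node-role.kubernetes.io/master: ""   # run on master nodes only
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: ack-ram-authenticator
        image: <ack-ram-authenticator-image> # copy the image and args from example.yaml
        volumeMounts:
        - name: state
          mountPath: /var/ack-ram-authenticator
      volumes:
      - name: state
        hostPath:
          path: /var/ack-ram-authenticator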

(Optional) Pre-generate a certificate, key, and kubeconfig

If you're building an automated installer, you can also pre-generate the certificate, key, and webhook kubeconfig files using ack-ram-authenticator init --cluster-id=<your-cluster-id>, and remove the init container from example.yaml.

Run the following command to place the generated files in the expected paths: mkdir -p /var/ack-ram-authenticator /etc/kubernetes/ack-ram-authenticator && mv cert.pem key.pem /var/ack-ram-authenticator && mv ack-ram-authenticator.kubeconfig /etc/kubernetes/ack-ram-authenticator/kubeconfig.yaml

Run the following command to give the generated files the correct ownership: chown -R 65534:65534 /var/ack-ram-authenticator /etc/kubernetes/ack-ram-authenticator

You can run this on each master node prior to starting the API server. You could also generate them before provisioning master nodes and install them in the appropriate host paths.
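
Put together, a pre-generation step on a master node might look like the following sketch (the individual commands are the same as above):

#!/bin/sh
set -e

# generate cert.pem, key.pem, and ack-ram-authenticator.kubeconfig in the working directory
ack-ram-authenticator init --cluster-id=<your-cluster-id>

# move the generated files to the paths the server and the API server webhook expect
mkdir -p /var/ack-ram-authenticator /etc/kubernetes/ack-ram-authenticator
mv cert.pem key.pem /var/ack-ram-authenticator
mv ack-ram-authenticator.kubeconfig /etc/kubernetes/ack-ram-authenticator/kubeconfig.yaml

# give the files the ownership the server expects
chown -R 65534:65534 /var/ack-ram-authenticator /etc/kubernetes/ack-ram-authenticator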

If you do not pre-generate files, ack-ram-authenticator server will generate them on demand.

5. Configure your API server to talk to the server

The Kubernetes API integrates with ACK RAM Authenticator for Kubernetes using a token authentication webhook. When you run ack-ram-authenticator server, it will generate a webhook configuration file and save it onto the host filesystem. You'll need to add a single additional flag to your API server configuration:

--authentication-token-webhook-config-file=/etc/kubernetes/ack-ram-authenticator/kubeconfig.yaml

On many clusters, the API server runs as a static pod. You can add the flag to /etc/kubernetes/manifests/kube-apiserver.yaml. Make sure the host directory /etc/kubernetes/ack-ram-authenticator/ is mounted into your API server pod. You can run sh example-configure-api-server.sh to automate this (see example-configure-api-server.sh). You may also need to restart the kubelet daemon on your master node to pick up the updated static pod definition:

systemctl restart kubelet.service

Note: When you restart the ack-ram-authenticator service or regenerate the certificate, key, and kubeconfig files, you need to run sh example-configure-api-server.sh again.
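
For reference, the relevant fragment of /etc/kubernetes/manifests/kube-apiserver.yaml after the change might look like the following sketch (the flag and host path are the ones described above; example-configure-api-server.sh performs an equivalent edit):

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # ... existing flags ...
    - --authentication-token-webhook-config-file=/etc/kubernetes/ack-ram-authenticator/kubeconfig.yaml
    volumeMounts:
    # ... existing mounts ...
    - name: ack-ram-authenticator
      mountPath: /etc/kubernetes/ack-ram-authenticator
      readOnly: true
  volumes:
  # ... existing volumes ...
  - name: ack-ram-authenticator
    hostPath:
      path: /etc/kubernetes/ack-ram-authenticator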

6. Set up kubectl to use authentication tokens provided by ACK RAM Authenticator for Kubernetes

This requires a 1.10+ kubectl binary to work. If you see Please enter Username: when trying to use kubectl, you need to update to a newer kubectl.

Finally, once the server is set up you'll want to authenticate! You will still need a kubeconfig that has the public data about your cluster (cluster CA certificate, endpoint address). The users section of your configuration, however, should include an exec section (refer to the v1.10 docs):

# [...]
users:
    - name: "<your-user-name>"
      user:
        exec:
            command: ack-ram-tool
            args:
                - credential-plugin
                - get-token
                - --cluster-id
                - <your-cluster-id>
                - --api-version
                - v1beta1
                - --log-level
                - error
            apiVersion: client.authentication.k8s.io/v1beta1
            provideClusterInfo: false
            interactiveMode: Never
preferences: {}

This means the kubeconfig is entirely public data and can be shared across all Authenticator users. It may make sense to upload it to a trusted public location such as AlibabaCloud OSS.

Make sure you have the ack-ram-tool binary installed. See ack-ram-tool for installation and configuration instructions.

To authenticate, run kubectl --kubeconfig /path/to/kubeconfig [...]. kubectl will exec the ack-ram-tool binary with the parameters supplied in your kubeconfig, which will generate a token and pass it to the API server.

How does it work?

It works using the RAM sts:GetCallerIdentity API endpoint. This endpoint returns information about whatever RAM credentials you use to connect to it.

Client side (ack-ram-tool credential-plugin get-token)

We use this API in a somewhat unusual way by having the Authenticator client generate and pre-sign a request to the endpoint. We serialize that request into a token that can pass through the Kubernetes authentication system.
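
You can inspect such a token by running the credential plugin directly. It prints a standard client.authentication.k8s.io ExecCredential object, with the serialized pre-signed request carried in .status.token. A sketch, assuming jq is installed and using the same flags as the kubeconfig above:

ack-ram-tool credential-plugin get-token \
  --cluster-id <your-cluster-id> \
  --api-version v1beta1 \
  --log-level error | jq -r '.status.token'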

Server side (ack-ram-authenticator server)

The token is passed through the Kubernetes API server and into the Authenticator server's /authenticate endpoint via a webhook configuration. The Authenticator server validates all the parameters of the pre-signed request to make sure nothing looks funny. It then submits the request to the real https://sts.aliyuncs.com server, which validates the client's HMAC signature and returns information about the user. Now that the server knows the RAM identity of the client, it translates this identity into a Kubernetes user and groups via a simple static mapping.
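
The webhook exchange itself uses the standard Kubernetes TokenReview API: the API server POSTs a TokenReview containing the bearer token to the /authenticate endpoint, and the Authenticator answers with the resolved identity. A sketch of the two payloads (the API version and field values are illustrative; older setups may use authentication.k8s.io/v1beta1):

# request from kube-apiserver to the Authenticator webhook
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "TokenReview",
  "spec": {
    "token": "<serialized pre-signed sts:GetCallerIdentity request>"
  }
}

# response from the Authenticator after validating the request against STS
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "TokenReview",
  "status": {
    "authenticated": true,
    "user": {
      "username": "kubernetes-admin",
      "groups": ["system:masters"]
    }
  }
}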

What is a cluster ID?

The Authenticator cluster ID is a unique-per-cluster identifier that prevents certain replay attacks. Specifically, it prevents one Authenticator server (e.g., in a dev environment) from using a client's token to authenticate to another Authenticator server in another cluster.

Troubleshooting

If authentication fails, there are a few possible problems to check for (a quick verification sketch follows this list):

  • Make sure your base RAM credentials are available in your shell.

  • Make sure the target role allows your source account access (in the role trust policy).

  • Make sure your source principal (user/role/group) has a RAM policy that allows sts:AssumeRole for the target role.

  • Make sure you don't have any explicit deny policies attached to your user or group that would prevent the sts:AssumeRole.
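
A quick way to verify the first points from a shell, assuming the aliyun CLI is configured with your base credentials (adjust the role ARN to your own):

# confirm your base RAM credentials work at all
aliyun sts GetCallerIdentity

# confirm you are allowed to assume the target role
aliyun sts AssumeRole \
  --RoleArn acs:ram::000000000000:role/KubernetesAdmin \
  --RoleSessionName authenticator-test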

Full Configuration Format

The client and server have the same configuration format. They can share the same exact configuration file, since there are no secrets stored in the configuration.

# a unique-per-cluster identifier to prevent replay attacks (see above)
clusterID: c1234abcde-xxxx

# default RAM role to assume for `ack-ram-authenticator token`
defaultRole: acs:ram::000000000000:role/KubernetesAdmin

# server listener configuration
server:
  # localhost port where the server will serve the /authenticate endpoint
  port: 21362 # (default)

  # state directory for generated TLS certificate and private keys
  stateDir: /var/ack-ram-authenticator # (default)

  # output `path` where a generated webhook kubeconfig will be stored.
  generateKubeconfig: /etc/kubernetes/ack-ram-authenticator.kubeconfig # (default)

  # each mapRoles entry maps a RAM role to a username and set of groups
  # Each username and group can optionally contain template parameters:
  #  1) "{{AccountID}}" is the 16-digit account ID.
  #  2) "{{SessionName}}" is the role session name.
  mapRoles:
  # statically map acs:ram::000000000000:role/KubernetesAdmin to cluster admin
  - roleARN: acs:ram::000000000000:role/KubernetesAdmin
    username: kubernetes-admin
    groups:
    - system:masters

  # each mapUsers entry maps a RAM user to a static username and set of groups
  mapUsers:
  # map RAM user Alice in account 000000000000 to user "alice" in group "system:masters"
  - userARN: acs:ram::000000000000:user/Alice
    username: alice
    groups:
    - system:masters

Community, discussion, contribution, and support

You are welcome to open new issues and pull requests.


ack-ram-authenticator's Issues

I ran into a problem fetching the client

Hi, I've run into a problem.

Problem

When I run go get -u -v github.com/AliyunContainerService/ack-ram-authenticator/cmd/ack-ram-authenticator, I get the following error:

go: finding k8s.io/api latest
go: finding k8s.io/apimachinery latest
go: finding github.com/aliyun/alibaba-cloud-sdk-go latest
go: github.com/AliyunContainerService/ack-ram-authenticator/cmd/ack-ram-authenticator imports
	github.com/christopherhein/go-version: github.com/christopherhein/[email protected]: parsing go.mod:
	module declares its path as: go.hein.dev/go-version
	        but was required as: github.com/christopherhein/go-version

Environment

My Go version is: go version go1.13 darwin/amd64
My go env output:

GO111MODULE="on"
GOARCH="amd64"
GOBIN="/Users/shaoxuan/Documents/development/golang/bin"
GOCACHE="/Users/shaoxuan/Library/Caches/go-build"
GOENV="/Users/shaoxuan/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY="gitlab.com/lighthouse-china/.*"
GONOSUMDB="gitlab.com/lighthouse-china/.*"
GOOS="darwin"
GOPATH="/Users/shaoxuan/Documents/development/golang"
GOPRIVATE="gitlab.com/lighthouse-china/.*"
GOPROXY="https://goproxy.io,direct"
GOROOT="/usr/local/opt/go/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/opt/go/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/dev/null"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/3c/vsrd0psn7m971b3fgyqhqylm0000gn/T/go-build652009459=/tmp/go-build -gno-record-gcc-switches -fno-common"

Can I use this with Aliyun managed k8s service?

Problem

As I understand it, with the Aliyun managed k8s service I won't be able to access or configure the master nodes, so it seems I would not be able to complete the README step "Configure your API server to talk to the server".

My use case

I want to enable our team members to use kubectl to access our cluster by assuming a role (cross-account).

Questions

  1. Is there a workaround I can do about this?
  2. Is there an alternative to ack-ram-authenticator?
  3. If the answer is "no" for both 1 and 2, do I have to use "Dedicated Kubernetes" (the 3-master-node cluster service) to achieve my use case?
