k8s-prom-hpa's Introduction

k8s-prom-hpa

Autoscaling is an approach to automatically scaling workloads up or down based on resource usage. Autoscaling in Kubernetes has two dimensions: the Cluster Autoscaler, which deals with node scaling operations, and the Horizontal Pod Autoscaler (HPA), which automatically scales the number of pods in a deployment or replica set. The Cluster Autoscaler and the Horizontal Pod Autoscaler together can be used to dynamically adjust the computing power as well as the level of parallelism that your system needs to meet SLAs. While the Cluster Autoscaler is highly dependent on the underlying capabilities of the cloud provider that's hosting your cluster, the HPA can operate independently of your IaaS/PaaS provider.

The Horizontal Pod Autoscaler feature was first introduced in Kubernetes v1.1 and has evolved a lot since then. Version 1 of the HPA scaled pods based on observed CPU utilization and, later on, memory usage. Kubernetes 1.6 introduced a new Custom Metrics API that gives the HPA access to arbitrary metrics, and Kubernetes 1.7 introduced the aggregation layer, which allows third-party applications to extend the Kubernetes API by registering themselves as API add-ons. The Custom Metrics API, together with the aggregation layer, made it possible for monitoring systems like Prometheus to expose application-specific metrics to the HPA controller.

The Horizontal Pod Autoscaler is implemented as a control loop that periodically queries the Resource Metrics API for core metrics like CPU/memory and the Custom Metrics API for application-specific metrics.
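
Once the Metrics Server and the custom metrics adapter described in this guide are installed, a quick way to confirm that the control loop has something to query is to check that both API groups are registered with the aggregation layer (a minimal check; the APIService names assume the standard group/version naming):

kubectl get apiservices | grep -E 'metrics.k8s.io|custom.metrics.k8s.io'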

Overview

What follows is a step-by-step guide to configuring HPA v2 for Kubernetes 1.9 or later. You will install the Metrics Server add-on that supplies the core metrics, and then use a demo app to showcase pod autoscaling based on CPU and memory usage. In the second part of the guide you will deploy Prometheus and a custom API server, register the custom API server with the aggregation layer, and then configure the HPA with custom metrics supplied by the demo application.

Before you begin you need to install Go 1.8 or later and clone the k8s-prom-hpa repo in your GOPATH:

cd $GOPATH
git clone https://github.com/stefanprodan/k8s-prom-hpa

Setting up the Metrics Server

The Kubernetes Metrics Server is a cluster-wide aggregator of resource usage data and the successor of Heapster. The Metrics Server collects CPU and memory usage for nodes and pods by polling the kubernetes.summary_api. The Summary API is a memory-efficient API for passing data from the Kubelet/cAdvisor to the Metrics Server.

(Diagram: Metrics Server)

While the first version of the HPA needed Heapster to provide CPU and memory metrics, in HPA v2 and Kubernetes 1.8 only the Metrics Server is required, with the horizontal-pod-autoscaler-use-rest-clients flag switched on. The HPA REST client is enabled by default in Kubernetes 1.9. GKE 1.9 comes with the Metrics Server pre-installed.
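
On self-hosted clusters you can check whether the flag is set on the controller manager. A minimal sketch, assuming a kubeadm-style setup where the static pod manifest lives at the default path:

#check if the HPA controller is configured to use the metrics API clients
grep horizontal-pod-autoscaler-use-rest-clients \
  /etc/kubernetes/manifests/kube-controller-manager.yaml

#if the flag is missing or set to false, add it to the kube-controller-manager command:
#  --horizontal-pod-autoscaler-use-rest-clients=true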

Deploy the Metrics Server in the kube-system namespace:

kubectl create -f ./metrics-server

After one minute the Metrics Server starts reporting CPU and memory usage for nodes and pods.

View nodes metrics:

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .

View pods metrics:

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods" | jq .

Auto Scaling based on CPU and memory usage

You will use a small Golang-based web app to test the Horizontal Pod Autoscaler (HPA).

Deploy podinfo to the default namespace:

kubectl create -f ./podinfo/podinfo-svc.yaml,./podinfo/podinfo-dep.yaml

Access podinfo with the NodePort service at http://<K8S_PUBLIC_IP>:31198.
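
A quick smoke test from any machine that can reach the node (this only prints the HTTP status code, so it works regardless of the exact response body the podinfo version returns):

curl -sS -o /dev/null -w '%{http_code}\n' http://<K8S_PUBLIC_IP>:31198/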

Next, define an HPA that maintains a minimum of two replicas and scales up to ten if the average CPU utilization goes over 80% or if memory usage goes over 200Mi (CPU utilization is expressed as a percentage of the CPU requests defined in the pod spec):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 200Mi

Create the HPA:

kubectl create -f ./podinfo/podinfo-hpa.yaml
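
The CPU Utilization target is evaluated against the CPU requests declared in the pod spec, so the target deployment must set resource requests for the HPA to report a percentage. A minimal way to check this (the jsonpath assumes a single container in the pod template):

kubectl get deployment podinfo \
  -o jsonpath='{.spec.template.spec.containers[0].resources.requests}'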

After a couple of seconds the HPA controller contacts the metrics server and then fetches the CPU and memory usage:

kubectl get hpa

NAME      REFERENCE            TARGETS                      MINPODS   MAXPODS   REPLICAS   AGE
podinfo   Deployment/podinfo   2826240 / 200Mi, 15% / 80%   2         10        2          5m

In order to increase the CPU usage, run a load test with rakyll/hey:

#install hey
go get -u github.com/rakyll/hey

#do 10K requests
hey -n 10000 -q 10 -c 5 http://<K8S_PUBLIC_IP>:31198/
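
While hey is running, you can watch the HPA and the deployment react in a second terminal:

kubectl get hpa podinfo --watch

#and, in another terminal, the replica count on the deployment itself
kubectl get deployment podinfo --watch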

You can monitor the HPA events with:

$ kubectl describe hpa

Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  7m    horizontal-pod-autoscaler  New size: 4; reason: cpu resource utilization (percentage of request) above target
  Normal  SuccessfulRescale  3m    horizontal-pod-autoscaler  New size: 8; reason: cpu resource utilization (percentage of request) above target

Remove podinfo for the moment. You will deploy it again later on in this tutorial:

kubectl delete -f ./podinfo/podinfo-hpa.yaml,./podinfo/podinfo-dep.yaml,./podinfo/podinfo-svc.yaml

Setting up a Custom Metrics Server

In order to scale based on custom metrics you need two components: one that collects metrics from your applications and stores them in the Prometheus time series database, and a second one that extends the Kubernetes Custom Metrics API with the metrics supplied by the collector, the k8s-prometheus-adapter.

(Diagram: Custom Metrics Server)

You will deploy Prometheus and the adapter in a dedicated namespace.

Create the monitoring namespace:

kubectl create -f ./namespaces.yaml

Deploy Prometheus v2 in the monitoring namespace:

If you are deploying to GKE you might get an error saying Error from server (Forbidden): error when creating. This guide will help you resolve that issue: RBAC on GKE.

kubectl create -f ./prometheus
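
Before moving on, it's worth confirming that Prometheus is up and reachable. A minimal check, assuming the manifests create the Prometheus workload and service in the monitoring namespace (the service name below is an assumption, so list the services first to confirm it):

kubectl -n monitoring get pods,svc

#optionally, forward the Prometheus UI to http://localhost:9090
#(port-forwarding a service requires a reasonably recent kubectl)
kubectl -n monitoring port-forward svc/prometheus 9090:9090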

Generate the TLS certificates needed by the Prometheus adapter:

touch metrics-ca.key metrics-ca.crt metrics-ca-config.json
make certs
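
The make target is expected to generate a CA and a serving certificate and store them in a Kubernetes secret that the adapter deployment mounts; the exact secret name depends on the repo's Makefile, so a loose way to verify it was created is:

kubectl -n monitoring get secrets | grep -i serving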

Deploy the Prometheus custom metrics API adapter:

kubectl create -f ./custom-metrics-api
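
Give the adapter a few seconds to start, then verify that it registered itself with the aggregation layer (the APIService name below assumes the usual <version>.<group> convention) and that its pod is healthy:

kubectl get apiservice v1beta1.custom.metrics.k8s.io
kubectl -n monitoring get pods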

List the custom metrics provided by Prometheus:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .

Get the FS usage for all the pods in the monitoring namespace:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/kubelet_container_log_filesystem_used_bytes" | jq .

Auto Scaling based on custom metrics

Create podinfo NodePort service and deployment in the default namespace:

kubectl create -f ./podinfo/podinfo-svc.yaml,./podinfo/podinfo-dep.yaml

The podinfo app exposes a custom metric named http_requests_total. The Prometheus adapter removes the _total suffix and marks the metric as a counter metric.
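
You can also look at the raw counter on the app itself; a quick check, assuming podinfo serves its Prometheus metrics under /metrics on the same NodePort:

curl -s http://<K8S_PUBLIC_IP>:31198/metrics | grep http_requests_total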

Get the total requests per second from the custom metrics API:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-6b86c8ccc9-kv5g9",
        "apiVersion": "/__internal"
      },
      "metricName": "http_requests",
      "timestamp": "2018-01-10T16:49:07Z",
      "value": "901m"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-6b86c8ccc9-nm7bl",
        "apiVersion": "/__internal"
      },
      "metricName": "http_requests",
      "timestamp": "2018-01-10T16:49:07Z",
      "value": "898m"
    }
  ]
}

The m suffix represents milli-units, so, for example, 901m means 901 milli-requests, or roughly 0.9 requests per second.
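
If you want to cross-check these numbers against Prometheus itself, you can run roughly the kind of rate query the adapter performs. A sketch, assuming the Prometheus port-forward from the previous section is still running and that your scrape config attaches a kubernetes_pod_name label (adjust the label names to your setup):

curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(http_requests_total[5m])) by (kubernetes_pod_name)' | jq .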

Create an HPA that will scale up the podinfo deployment if the average number of requests per pod goes over 10 per second:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests
      target:
        type: AverageValue
        averageValue: 10

Deploy the podinfo HPA in the default namespace:

kubectl create -f ./podinfo/podinfo-hpa-custom.yaml

After a couple of seconds the HPA fetches the http_requests value from the metrics API:

kubectl get hpa

NAME      REFERENCE            TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
podinfo   Deployment/podinfo   899m / 10   2         10        2          1m

Apply some load on the podinfo service with 25 requests per second:

#install hey
go get -u github.com/rakyll/hey

#do 10K requests rate limited at 25 QPS
hey -n 10000 -q 5 -c 5 http://<K8S-IP>:31198/healthz

After a few minutes the HPA begins to scale up the deployment:

kubectl describe hpa

Name:                       podinfo
Namespace:                  default
Reference:                  Deployment/podinfo
Metrics:                    ( current / target )
  "http_requests" on pods:  9059m / 10
Min replicas:               2
Max replicas:               10

Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  2m    horizontal-pod-autoscaler  New size: 3; reason: pods metric http_requests above target

At the current request rate the deployment will never reach the maximum of 10 pods: roughly 25 requests per second spread over three pods is about 8 requests per second per pod, so three replicas are enough to keep the per-pod rate under the 10 RPS target.

After the load test finishes, the HPA scales the deployment down to its initial number of replicas:

Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  5m    horizontal-pod-autoscaler  New size: 3; reason: pods metric http_requests above target
  Normal  SuccessfulRescale  21s   horizontal-pod-autoscaler  New size: 2; reason: All metrics below target

You may have noticed that the autoscaler doesn't react immediately to usage spikes. By default, the metrics sync happens once every 30 seconds, and scaling up or down can only happen if there was no rescaling within the last 3-5 minutes. In this way, the HPA prevents rapid execution of conflicting decisions and gives the Cluster Autoscaler time to kick in.
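
These timings are controlled by kube-controller-manager flags. The flag names below applied to the Kubernetes releases this guide targets (they were later superseded by per-HPA behavior settings), so treat them as a reference rather than a recipe; the values shown simply match the defaults described above:

#kube-controller-manager flags that shape HPA reaction time
--horizontal-pod-autoscaler-sync-period=30s
--horizontal-pod-autoscaler-upscale-delay=3m
--horizontal-pod-autoscaler-downscale-delay=5m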

Conclusions

Not all systems can meet their SLAs by relying on CPU/memory usage metrics alone; most web and mobile backends require autoscaling based on requests per second to handle traffic bursts. For ETL apps, autoscaling could be triggered by the job queue length exceeding some threshold, and so on. By instrumenting your applications with Prometheus and exposing the right metrics for autoscaling, you can fine-tune your apps to better handle bursts and ensure high availability.


k8s-prom-hpa's Issues

Metric Server HPA not working compatible

I'm using mongodb-exporter to store/query the metrics via Prometheus. I have set up a custom metrics server and it is storing values for that exporter.

Here is evidence that the prometheus-exporter and the custom metrics server work together.

Query : kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/mongodb_mongod_wiredtiger_cache_bytes"

Result : {"kind":"MetricValueList","apiVersion":"custom.metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/%2A/mongodb_mongod_wiredtiger_cache_bytes"},"items":[{"describedObject":{"kind":"Pod","namespace":"monitoring","name":"mongo-exporter-2-prometheus-mongodb-exporter-68f95fd65d-dvptr","apiVersion":"/v1"},"metricName":"mongodb_mongod_wiredtiger_cache_bytes","timestamp":"TTTTT","value":"0"}]}

In my case, when I create an HPA for this custom metric from the mongo exporter, the HPA returns this error:

failed to get mongodb_mongod_wiredtiger_cache_bytes utilization: unable to get metrics for resource mongodb_mongod_wiredtiger_cache_bytes: no metrics returned from resource metrics API

What is the main issue in my case? I have checked all the configs and the flow looks fine, but where is my mistake?

Help

Thanks :)

Unable to get metrics from node_exporter

Hi,
I am building a custom metrics server by following this GitHub repo's instructions. Thanks, it works. Now I would like to deploy node_exporter from https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus/manifests/node-exporter to get node-level information, but I don't see any metrics from node_exporter, such as node_memory_MemFree or node_memory_MemTotal, when I invoke kubectl get --raw /metrics. Do I need any additional components?

Thanks.

cannot get anything when curl

When it finished, I ran kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq and got:
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": []
}
The resources list is empty. From the pod, I get the logs:
Response Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}}

Error from server (NotFound): the server could not find the metric http_requests for pods

Hi, how do I add the http_requests metrics? Thanks.

[root@kube-master custom-metrics-apiserver]# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
Error from server (NotFound): the server could not find the metric http_requests for pods
[root@kube-master custom-metrics-apiserver]# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .|grep http_requests
[root@kube-master custom-metrics-apiserver]#
[root@kube-master custom-metrics-apiserver]# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .|grep fs_usage_bytes
"name": "pods/fs_usage_bytes"
[root@kube-master custom-metrics-apiserver]#

Based on bandwidth expansion

I don't know how to autoscale based on bandwidth in the cluster. Most of the examples are about CPU and memory. I can't write this YAML file. Please give me some help.

Metrics for JVM

I installed the custom metrics server but I do not see all the metrics I need, especially JVM metrics. In addition, it appears the image does not support using a custom config.yaml. Below is the adapter-config ConfigMap I created specifically for JVM.

apiVersion: v1
kind: ConfigMap
metadata:
  name: adapter-config
  namespace: monitoring
data:
  config.yaml: |
    - seriesQuery: '{__name__=~"jvm_.*",kubernetes_name!="",kubernetes_namespace!=""}'
      resources:
        overrides:
          kubernetes_namespace:
            resource: namespace
          kubernetes_name:
            resource: pod
      name:
        matches: "^jvm_(.*)"
      metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)

However, when I use the config in deployment.yaml, the pod crashes, reporting that it does not recognize the flag.

    spec:
      serviceAccountName: custom-metrics-apiserver
      containers:
      - name: custom-metrics-apiserver
        image: quay.io/coreos/k8s-prometheus-adapter-amd64:v0.2.0
        args:
        - /adapter
        - --secure-port=6443
        - --tls-cert-file=/var/run/serving-cert/serving.crt
        - --tls-private-key-file=/var/run/serving-cert/serving.key
        - --logtostderr=true
        - --prometheus-url=http://prometheus.monitoring.svc:9090/
        - --metrics-relist-interval=30s
        - --rate-interval=5m
        - --v=10
        - --config=/etc/adapter/config.yaml
panic: unknown flag: --config

goroutine 1 [running]:
main.main()
        /go/src/github.com/directxman12/k8s-prometheus-adapter/cmd/adapter/adapter.go:41 +0x114

Can you assist on how I can expose jvm metrics to custom.metrics.k8s.io?

Thanks,
Christian M.

unable to get metric http_requests: no metrics returned from custom metrics API

Hi,
I followed the instructions and deployed the podinfo app, and I am able to scale the app on http_requests. However, whenever I run my own app or just nginx, I get the error "unable to get metric http_requests: no metrics returned from custom metrics API". The deployment files are exactly the same; I just changed the port number, health checks and labels.

[ec2-user@ip-192-168-100-253 nginx]$ kubectl get hpa
NAME               REFERENCE            TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
nginx-custom-hpa   Deployment/nginx     <unknown>/10   2         10        2          10m
podinfo            Deployment/podinfo   888m/10        2         10        2          23m

[ec2-user@ip-192-168-100-253 nginx]$ kubectl describe hpa nginx-custom-hpa
Name:                       nginx-custom-hpa
Namespace:                  default
Labels:                     <none>
Annotations:                <none>
CreationTimestamp:          Mon, 19 Aug 2019 08:32:58 +0000
Reference:                  Deployment/nginx
Metrics:                    ( current / target )
  "http_requests" on pods:  <unknown> / 10
Min replicas:               2
Max replicas:               10
Deployment pods:            2 current / 0 desired
Conditions:
  Type           Status  Reason               Message
  ----           ------  ------               -------
  AbleToScale    True    SucceededGetScale    the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetPodsMetric  the HPA was unable to compute the replica count: unable to get metric http_requests: no metrics returned from custom metrics API
Events:
  Type     Reason                        Age                   From                       Message
  ----     ------                        ----                  ----                       -------
  Warning  FailedComputeMetricsReplicas  8m18s (x12 over 11m)  horizontal-pod-autoscaler  failed to get object metric value: unable to get metric http_requests: no metrics returned from custom metrics API
  Warning  FailedGetPodsMetric           63s (x41 over 11m)    horizontal-pod-autoscaler  unable to get metric http_requests: no metrics returned from custom metrics API

Is my app running??

[ec2-user@ip-192-168-100-253 nginx]$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.100.0.1       <none>        443/TCP          21d
nginx-svc    NodePort    10.100.195.60    <none>        9899:31199/TCP   13m
podinfo      NodePort    10.100.112.143   <none>        9898:31198/TCP   24m


[ec2-user@ip-192-168-101-39 ~]$ curl -I http://10.100.195.60:9899
HTTP/1.1 200 OK
Server: nginx/1.17.3

[ec2-user@ip-192-168-100-253 ~]$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-7b8c9bc5c9-xgppt",
        "apiVersion": "/v1"
      },
      "metricName": "http_requests",
      "timestamp": "2019-08-19T08:48:38Z",
      "value": "888m"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-7b8c9bc5c9-zkkbd",
        "apiVersion": "/v1"
      },
      "metricName": "http_requests",
      "timestamp": "2019-08-19T08:48:38Z",
      "value": "888m"
    }
  ]
}

I would like to know why I am not able to see http_requests for the nginx app.

couldn't see container_* api metrics

Hello,
I'm running Kubernetes 1.10. I followed the instructions here and installed the metrics server, custom-metrics-api, and Prometheus. I'm able to autoscale based on http_requests, but I cannot do so based on container_network_receive_bytes_total. I noticed that the http_requests metrics are exposed at "/apis/custom.metrics.k8s.io/v1beta1"; see the output below:
[xyz@master k8s-prom-hpa]$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1|jq .|grep http_requests
"name": "pods/http_requests_count",
"name": "pods/http_requests",
"name": "namespaces/http_requests",
"name": "jobs.batch/http_requests",
"name": "pods/http_requests_sum",
"name": "jobs.batch/http_requests_bucket",
"name": "jobs.batch/http_requests_count",
"name": "jobs.batch/http_requests_sum",
"name": "namespaces/http_requests_count",
"name": "namespaces/http_requests_sum",
"name": "pods/http_requests_bucket",
"name": "namespaces/http_requests_bucket",

but this is the case for "container_network_receive_bytes_total"

[xyz@master k8s-prom-hpa]$ kubectl get --raw container_network_receive_bytes_total|jq .|grep network
"name": "pods/network_udp_usage",
"name": "pods/network_tcp_usage",

In Prometheus I can see "container_network_receive_bytes_total" with data. Actually, none of the metrics starting with "container_" in Prometheus get exposed at /apis/custom.metrics.k8s.io/v1beta1...; any help is really appreciated.

my system information:
[xyz@master k8s-prom-hpa]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:14:26Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

[xyz@master k8s-prom-hpa]$ kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
custom.metrics.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
metrics.k8s.io/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
stable.example.com/v1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1

[yaoky@master k8s-prom-hpa]$ kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default php-apache-8699449574-8hlqf 1/1 Running 0 3h
default podinfo-65fc48f955-9xwdd 1/1 Running 0 22h
default podinfo-65fc48f955-fnzx4 1/1 Running 0 22h
default vtc1-deployment-6f864dbf54-lpq2d 1/1 Running 0 22h
default vtc2-deployment-7f65859fc9-nbt2x 1/1 Running 0 22h
kube-system etcd-master 1/1 Running 1 23h
kube-system heapster-cbfb87c9c-sqknr 1/1 Running 0 23h
kube-system kube-apiserver-master 1/1 Running 0 23h
kube-system kube-controller-manager-master 1/1 Running 2 23h
kube-system kube-dns-86f4d74b45-mgb44 3/3 Running 3 23h
kube-system kube-proxy-52ctt 1/1 Running 0 23h
kube-system kube-proxy-cgbb4 1/1 Running 1 23h
kube-system kube-proxy-fw46f 1/1 Running 0 23h
kube-system kube-proxy-rb4l5 1/1 Running 0 23h
kube-system kube-scheduler-master 1/1 Running 2 23h
kube-system metrics-server-6fbfb84cdd-cs7hk 1/1 Running 0 23h
kube-system monitoring-grafana-69df66f668-87qkl 1/1 Running 0 23h
kube-system monitoring-influxdb-78d4c6f5b6-jk59b 1/1 Running 0 23h
kube-system weave-net-4qvhg 2/2 Running 0 23h
kube-system weave-net-6mg8p 2/2 Running 0 23h
kube-system weave-net-9bvqg 2/2 Running 0 23h
kube-system weave-net-z9mmz 2/2 Running 3 23h
monitoring custom-metrics-apiserver-7dd968d85-sw7fl 1/1 Running 0 22h
monitoring prometheus-7dff795b9f-6jpd5 1/1 Running 0 22h

Makefile:14: recipe for target 'gencerts' failed

Below error reported when doing custom metric:

~/go/k8s-prom-hpa$ sudo make certs
Generating TLS certs
Using default tag: latest
latest: Pulling from cfssl/cfssl
Digest: sha256:525005bc4e39d61a2302490329e414eab13ee7ec3edf261df5da68ff65cb506b
Status: Image is up to date for cfssl/cfssl:latest
Can't load /home/ubuntu/.rnd into RNG
139813639434688:error:2406F079:random number generator:RAND_load_file:Cannot open file:../crypto/rand/randfile.c:88:Filename=/home/ubuntu/.rnd
Generating a RSA private key
.................+++++
........................................................................................................+++++
writing new private key to 'metrics-ca.key'

Failed to load config file: {"code":5200,"message":"could not read configuration file"}Failed to parse input: unexpected end of JSON input
Makefile:14: recipe for target 'gencerts' failed
make: *** [gencerts] Error 1

Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)

kube-api config
--requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem
--requestheader-allowed-names=aggregator
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User \

kubectl top nodes

Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)

kubectl logs -f pod/metrics-server-7cccb5464-n8wwn -n kube-system -c metrics-server

I0404 05:21:09.136867 1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
[restful] 2019/04/04 05:21:10 log.go:33: [restful/swagger] listing is available at https://:443/swaggerapi
[restful] 2019/04/04 05:21:10 log.go:33: [restful/swagger] https://:443/swaggerui/ is mapped to folder /swagger-ui/
I0404 05:21:10.729295 1 serve.go:96] Serving securely on [::]:443
E0404 06:51:15.246723 1 reflector.go:322] github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130: Failed to watch *v1.Pod: Get https://10.254.0.1:443/api/v1/pods?resourceVersion=17196317&timeoutSeconds=434&watch=true: dial tcp 10.254.0.1:443: connect: connection refused

help me,thanks!

Where does the podinfo app expose a custom metric named http_requests_total?

@stefanprodan
Hi,
In the README.md, you say "The podinfo app exposes a custom metric named http_requests_total. The Prometheus adapter removes the _total suffix and marks the metric as a counter metric."

But I can't find where that is, so could you please point it out for me:

  1. Where does the "podinfo" app expose the custom metric named "http_requests_total"?
  2. Where does the Prometheus adapter remove the _total suffix and mark the metric as a counter metric?

Thanks!

Custom metrics improvement

Hi Stefan,

First off, thank you for providing the custom metrics dependencies, which are working fine for us. But we need to add a small piece of code to this for our use case.

Currently this custom metrics image fetches the metric value from Prometheus and responds back to the HPA for autoscaling.

But we want to apply the irate function to the Prometheus metrics to fetch the TPS directly from Prometheus, like below.

sum(irate(<>[1m]))

So can you please guide us on this.

Thanks,
Surendra

HPA not showing memory, and showing wrong CPU

Steps to reproduce:

$ kubectl create -f metrics-server/
$ kubectl create -f ./podinfo/podinfo-svc.yaml,./podinfo/podinfo-dep.yaml
$ kubectl create -f ./podinfo/podinfo-hpa.yaml

Soon after, I start seeing erroneous values for the CPU (they're not 100%, I've not yet started sending any traffic); and the memory is showing as <unknown>.

$ kubectl get hpa -w
NAME      REFERENCE            TARGETS                     MINPODS   MAXPODS   REPLICAS   AGE
podinfo   Deployment/podinfo   <unknown>/200Mi, 100%/80%   2         10        4          2m10s

If I look at the logs of the metrics server I see the following:

$ kubectl logs metrics-server-68d85f76bb-kgqlr -n kube-system
E0320 14:29:47.201497       1 reststorage.go:144] unable to fetch pod metrics for pod default/podinfo-7b8c9bc5c9-d5f42: no metrics known for pod

I'm running CentOS on clusters created with kubeadm init with just the defaults.

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:35:32Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

$ docker version
Client:
 Version:           18.09.3
 API version:       1.39
 Go version:        go1.10.6
 Git commit:        142dfce
 Built:             Thu Feb 28 06:08:06 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine
 Engine:
  Version:          18.09.3
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       142dfce
  Built:            Thu Feb 28 06:03:18 2019
  OS/Arch:          linux/amd64
  Experimental:     false

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

Autoscale based on node metrics exposed to Prometheus by Node exporter

I am trying to access the values of node metrics (exposed by node exporter to Prometheus) in custom metrics API which appear e.g. jobs.batch/node_memory_MemTotal. But when I do kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/batch.jobs/node_memory_MemTotal" | jq . I get Error from server (NotFound): the server could not find the requested resource

How to configure the Istio Prometheus in the custom metric api.

Hi Team,

I have followed the README and am able to scale the pods up/down using http_requests.

Typically, I want to scale the HPA up/down based on the Istio Prometheus http_requests value. Hence, I tried to configure the Istio Prometheus in the custom metrics API server, but afterwards I was unable to get the http_requests metric values.

Just changed the Prometheus endpoint in the custom-metrics-apiserver-deployment.yaml.

 - name: custom-metrics-apiserver
    image: quay.io/coreos/k8s-prometheus-adapter-amd64:v0.2.0
    args:
    - /adapter
    - --secure-port=6443
    - --tls-cert-file=/var/run/serving-cert/serving.crt
    - --tls-private-key-file=/var/run/serving-cert/serving.key
    - --logtostderr=true
    - --prometheus-url=http://prometheus.aruntest.tk:9090/  # its a  public domain
    - --metrics-relist-interval=30s
    - --rate-int

Error:

root@ip-172-21-15-19:/k8s-prom-hpa/custom-metrics-api# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/http_requests" | j
q .
Error from server (NotFound): the server could not find the metric http_requests for pods
root@ip-172-21-15-19:
/k8s-prom-hpa/custom-metrics-api#

Making certificate is throwing error.

I am getting the below error when I run the command make certs in the project folder.

Generating TLS certs
error: cannot open .git/FETCH_HEAD: Permission denied
package github.com/cloudflare/cfssl/cmd/cfssl: exit status 1
make: *** [gencerts] Error 1

Error from server (NotFound): the server could not find the requested resource

-bash-4.2# kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .
Error from server (NotFound): the server could not find the requested resource
-bash-4.2# kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods" | jq .
Error from server (NotFound): the server could not find the requested resource


The metrics pod status is Terminated and the log has the following error/panic:
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 10.96.0.1:443: i/o timeout

Questions after following the guidance

Hi Stefanprodan, I have some questions after following your guidance:

  1. In the second part, you use Prometheus instead of the metrics server to collect the metrics. Why don't we use the metrics server? Can it only collect core metrics like CPU or memory? Could you please share your reasons with me? Thanks!

  2. For the file "prometheus-cfg.yaml": when I installed Prometheus, there was no such file. So when I write my own custom metrics server, can I use it in my Prometheus directly? If some items need to be changed, could you please point them out for me? Thanks!

metrics-server v1beta1.metrics.k8s.io failed : net/http: request canceled while waiting for connection

    1. kubectl create -f ./metrics-server

master1 kube-apiserver: E0412 22:18:30.255424 2628 available_controller.go:295] v1beta1.metrics.k8s.io failed with: Get https://10.233.53.4:443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

    2. listen port

[root@master1 1.8+]# kubectl exec -it metrics-server-7bcc5bf8f-pk865 -n kube-system sh
/ # ps -ef
PID   USER     TIME   COMMAND
    1 root       0:00 /metrics-server --source=kubernetes.summary_api:'' --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.pem
   38 root       0:00 sh
   43 root       0:00 ps -ef
/ # netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 10.233.53.4:49394       10.254.0.1:443          ESTABLISHED 1/metrics-server
tcp        0      0 10.233.53.4:52990       192.168.200.52:10255    ESTABLISHED 1/metrics-server
tcp        0      0 10.233.53.4:51762       192.168.200.53:10255    ESTABLISHED 1/metrics-server
tcp        0      0 :::443                  :::*                    LISTEN      1/metrics-server
/ #

    3. metrics-server log

[root@master1 1.8+]# kubectl logs -f metrics-server-7bcc5bf8f-pk865 -n kube-system
I0412 14:15:02.347895 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:'' --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.pem
I0412 14:15:02.347957 1 heapster.go:72] Metrics Server version v0.2.1
I0412 14:15:02.348159 1 configs.go:61] Using Kubernetes client with master "https://10.254.0.1:443" and version
I0412 14:15:02.348194 1 configs.go:62] Using kubelet port 10255
I0412 14:15:02.349072 1 heapster.go:128] Starting with Metric Sink
I0412 14:15:02.555531 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I0412 14:15:02.933037 1 heapster.go:101] Starting Heapster API server...
[restful] 2018/04/12 14:15:02 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
[restful] 2018/04/12 14:15:02 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
I0412 14:15:02.934173 1 serve.go:85] Serving securely on 0.0.0.0:443

    4. kube-apiserver config

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
User=root
ExecStart=/usr/local/bin/kube-apiserver
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
--advertise-address=192.168.200.51
--allow-privileged=true
--anonymous-auth=false
--apiserver-count=1
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-maxage=30
--audit-log-maxbackup=3
--audit-log-maxsize=100
--audit-log-path=/var/log/kubernetes/audit.log
--authorization-mode=Node,RBAC
--bind-address=0.0.0.0
--secure-port=6443
--client-ca-file=/etc/kubernetes/ssl/ca.pem
--enable-swagger-ui=true
--etcd-cafile=/etc/kubernetes/ssl/ca.pem
--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem
--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem
--etcd-servers=https://192.168.200.51:2379,https://192.168.200.52:2379,https://192.168.200.53:2379
--event-ttl=1h
--kubelet-https=true
--insecure-bind-address=192.168.200.51
--insecure-port=8080
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem
--service-cluster-ip-range=10.254.0.0/16
--service-node-port-range=30000-32000
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem
--enable-bootstrap-token-auth=true
--token-auth-file=/etc/kubernetes/token.csv
--requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.pem
--proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.pem
--proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client-key.pem
--requestheader-allowed-names=aggregator
--requestheader-group-headers=X-Remote-Group
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-username-headers=X-Remote-User
--runtime-config=admissionregistration.k8s.io/v1alpha1
--runtime-config=api/all=true
--enable-aggregator-routing=true
--v=0
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

    5. kube-controller-manager config

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager
--address=0.0.0.0
--master=http://192.168.200.51:8080
--service-cluster-ip-range=10.254.0.0/16
--cluster-name=kubernetes
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem
--root-ca-file=/etc/kubernetes/ssl/ca.pem
--leader-elect=true
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

error: configmaps "extension-apiserver-authentication" not found

I1207 09:53:35.292482 1 request.go:836] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps "extension-apiserver-authentication" not found","reason":"NotFound","details":{"name":"extension-apiserver-authentication","kind":"configmaps"},"code":404}
W1207 09:53:35.292976 1 authentication.go:231] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLE_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
Error: configmaps "extension-apiserver-authentication" not found

Not working with kube 1.9.5 : no metrics returned from heapster

Warning FailedGetResourceMetric 4s (x2 over 34s) horizontal-pod-autoscaler unable to get metrics for resource memory: no metrics returned from heapster
Warning FailedComputeMetricsReplicas 4s (x2 over 34s) horizontal-pod-autoscaler failed to get memory utilization: unable to get metrics for resource memory: no metrics returned from heapster
Maybe there is something I need to do, but I followed your readme file and didn't get it working.

cat /etc/kubernetes/manifests/kube-apiserver.manifest

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
  labels:
    k8s-app: kube-apiserver
    kubespray: v2
  annotations:
    kubespray.etcd-cert/serial: "E0B8497F43AA5A66"
    kubespray.apiserver-cert/serial: "BEE5A9B9D94E41B8"
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirst
  containers:
  - name: kube-apiserver
    image: gcr.io/google-containers/hyperkube:v1.9.5
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 800m
        memory: 2000M
      requests:
        cpu: 100m
        memory: 256M
    command:
    - /hyperkube
    - apiserver
    - --advertise-address=192.168.16.214
    - --etcd-servers=https://192.168.16.214:2379
    - --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem
    - --etcd-certfile=/etc/ssl/etcd/ssl/node-k8s-master.pem
    - --etcd-keyfile=/etc/ssl/etcd/ssl/node-k8s-master-key.pem
    - --insecure-bind-address=127.0.0.1
    - --bind-address=0.0.0.0
    - --apiserver-count=1
    - --endpoint-reconciler-type=lease
    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
    - --service-cluster-ip-range=10.233.0.0/18
    - --service-node-port-range=30000-32767
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --profiling=false
    - --repair-malformed-updates=false
    - --kubelet-client-certificate=/etc/kubernetes/ssl/node-k8s-master.pem
    - --kubelet-client-key=/etc/kubernetes/ssl/node-k8s-master-key.pem
    - --service-account-lookup=true
    - --kubelet-preferred-address-types=InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --service-account-key-file=/etc/kubernetes/ssl/service-account-key.pem
    - --secure-port=6443
    - --insecure-port=8080
    - --storage-backend=etcd3
    - --runtime-config=admissionregistration.k8s.io/v1alpha1
    - --v=2
    - --allow-privileged=true
    - --anonymous-auth=False
    - --authorization-mode=Node,RBAC
    - --feature-gates=Initializers=False,PersistentLocalVolumes=False,VolumeScheduling=False,MountPropagation=False
    - --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.pem
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --enable-aggregator-routing=False
    - --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.pem
    - --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client-key.pem
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 8080
      failureThreshold: 8
      initialDelaySeconds: 15
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes
      name: kubernetes-config
      readOnly: true
    - mountPath: /etc/ssl
      name: ssl-certs-host
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
    - mountPath: /etc/ssl/etcd/ssl
      name: etcd-certs
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: kubernetes-config
  - name: ssl-certs-host
    hostPath:
      path: /etc/ssl
  - name: usr-share-ca-certificates
    hostPath:
      path: /usr/share/ca-certificates
  - hostPath:
      path: /etc/ssl/etcd/ssl
    name: etcd-certs

Custom Metrics self-link for other services ?

In your README, you demo a way to fetch custom metrics related to Pods. How do I fetch the same for other resources such as nodes, replica sets, etc.?

Ex:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .

{
  "name": "namespaces/kube_deployment_status_replicas_updated",
  "singularName": "",
  "namespaced": false,
  "kind": "MetricValueList",
  "verbs": [
    "get"
  ]
},
{
  "name": "replicasets.extensions/kube_replicaset_status_observed_generation",
  "singularName": "",
  "namespaced": true,
  "kind": "MetricValueList",
  "verbs": [
    "get"
  ]
},
{
  "name": "pods/memory_failcnt",
  "singularName": "",
  "namespaced": true,
  "kind": "MetricValueList",
  "verbs": [
    "get"
  ]
},
{
  "name": "daemonsets.extensions/kube_daemonset_status_number_available",
  "singularName": "",
  "namespaced": true,
  "kind": "MetricValueList",
  "verbs": [
    "get"
  ]
}

What is the self-link for "daemonsets.extensions/kube_daemonset_status_number_available" and "replicasets.extensions/kube_replicaset_status_observed_generation"?

error: You must be logged in to the server (Unauthorized)

Hi, thank you very much for your documentation. I deployed custom-metrics-api in my cluster and didn't get an error during the deployment process. But when I executed the 'kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .' command, I got the following error:

error: You must be logged in to the server (Unauthorized)

logs:

I0613 03:59:15.327246       1 request.go:897] Response Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}}
I0613 03:59:15.327513       1 authorization.go:73] Forbidden: "/", Reason: ""
I0613 03:59:15.327730       1 wrap.go:42] GET /: (12.371276ms) 403 [[Go-http-client/2.0] 177.245.72.64:21169]
W0613 03:59:24.780796       1 x509.go:172] x509: subject with cn=front-proxy-client is not in the allowed list: [aggregator]
E0613 03:59:24.780859       1 authentication.go:62] Unable to authenticate the request due to an error: [x509: subject with cn=front-proxy-client is not allowed, x509: certificate signed by unknown authority]
I0613 03:59:24.781005       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (696.518µs) 401 [[kube-controller-manager/v1.13.6 (linux/amd64) kubernetes/abdda3f/system:serviceaccount:kube-system:resourcequota-controller] 177.245.72.64:18907]
W0613 03:59:26.187626       1 x509.go:172] x509: subject with cn=front-proxy-client is not in the allowed list: [aggregator]
E0613 03:59:26.187716       1 authentication.go:62] Unable to authenticate the request due to an error: [x509: subject with cn=front-proxy-client is not allowed, x509: certificate signed by unknown authority]
I0613 03:59:26.187866       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (822.578µs) 401 [[kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f] 177.245.72.64:18907]
W0613 03:59:29.071907       1 x509.go:172] x509: subject with cn=front-proxy-client is not in the allowed list: [aggregator]
E0613 03:59:29.071978       1 authentication.go:62] Unable to authenticate the request due to an error: [x509: subject with cn=front-proxy-client is not allowed, x509: certificate signed by unknown authority]
I0613 03:59:29.072126       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (669.407µs) 401 [[kube-apiserver/v1.13.6 (linux/amd64) kubernetes/abdda3f] 10.103.236.179:36320]
I0613 03:59:29.923974       1 request.go:897] Request Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}}
I0613 03:59:29.924301       1 round_trippers.go:386] curl -k -v -XPOST  -H "Content-Type: application/json" -H "Accept: application/json, */*" -H "User-Agent: adapter/v0.0.0 (linux/amd64) kubernetes/$Format" -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtb25pdG9yaW5nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImN1c3RvbS1tZXRyaWNzLWFwaXNlcnZlci10b2tlbi1yd2I1bSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJjdXN0b20tbWV0cmljcy1hcGlzZXJ2ZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2ZGE5M2UwYS04ZDhmLTExZTktOTBkYy1mODBmNDFmMjdkYTEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6bW9uaXRvcmluZzpjdXN0b20tbWV0cmljcy1hcGlzZXJ2ZXIifQ.xyuVdgLY7GxpozNMPGqUFpOhe2xTlKFtAH62xmgoSRjw3dx2LAMQwdYVcPRJJhEnYL5fadsQpENCbbO21v229RJFd3ZSuNbFzqjCf5Zi_SP8c2XIGyPQtkOnBxJK1RfcisLsAxt-FfFP-m5OZ33okRKXVyb6tZj3qK08YPHdD9WlVYSpdlTg8aK_GPlwWbmSftn4A4K7iGXzKb936trjO9SdT3aTz2sYY7PzkzKAt1w2M48Vge8P0UJvUnD1mGZ3T2fUYFGMtBmQe598Cx3wDssVjw2Nm8_QFtkGkgzIW2HvIkFwblNcjztF5-6qcMu4HkoSZjSS76w0BlJhrj2nag" 'https://50.96.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews'
I0613 03:59:29.935204       1 round_trippers.go:405] POST https://50.96.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews 201 Created in 10 milliseconds
I0613 03:59:29.935264       1 round_trippers.go:411] Response Headers:
I0613 03:59:29.935281       1 round_trippers.go:414]     Content-Length: 260
I0613 03:59:29.935295       1 round_trippers.go:414]     Date: Thu, 13 Jun 2019 04:01:00 GMT
I0613 03:59:29.935308       1 round_trippers.go:414]     Content-Type: application/json
I0613 03:59:29.935406       1 request.go:897] Response Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}}
I0613 03:59:29.935612       1 authorization.go:73] Forbidden: "/", Reason: ""
I0613 03:59:29.935817       1 wrap.go:42] GET /: (12.205904ms) 403 [[Go-http-client/2.0] 10.103.236.179:36294]
W0613 03:59:30.791929       1 x509.go:172] x509: subject with cn=front-proxy-client is not in the allowed list: [aggregator]
E0613 03:59:30.791994       1 authentication.go:62] Unable to authenticate the request due to an error: [x509: subject with cn=front-proxy-client is not allowed, x509: certificate signed by unknown authority]
I0613 03:59:30.792143       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (696.288µs) 401 [[kube-controller-manager/v1.13.6 (linux/amd64) kubernetes/abdda3f/system:serviceaccount:kube-system:generic-garbage-collector] 177.245.72.64:18907]
W0613 03:59:32.541963       1 x509.go:172] x509: subject with cn=front-proxy-client is not in the allowed list: [aggregator]
E0613 03:59:32.542042       1 authentication.go:62] Unable to authenticate the request due to an error: [x509: subject with cn=front-proxy-client is not allowed, x509: certificate signed by unknown authority]
I0613 03:59:32.542186       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1?timeout=32s: (672.224µs) 401 [[kube-controller-manager/v1.13.6 (linux/amd64) kubernetes/abdda3f/system:serviceaccount:kube-system:generic-garbage-collector] 177.245.72.64:18907]
I0613 03:59:34.996385       1 authorization.go:73] Forbidden: "/", Reason: ""
I0613 03:59:34.996529       1 wrap.go:42] GET /: (386.223µs) 403 [[Go-http-client/2.0] 177.253.180.64:49696]

Looking at the log, it seems like a certificate issue.

My cluster was deployed manually, not using kubeadm. When I created the cluster, I generated the following certificate:

admin.csr
admin-key.pem
admin.pem
apiserver.csr
apiserver-key.pem
apiserver.pem
ca.csr
ca-key.pem
ca.pem
controller-manager.csr
controller-manager-key.pem
controller-manager.pem
front-proxy-ca.csr
front-proxy-ca-key.pem
front-proxy-ca.pem
front-proxy-client.csr
front-proxy-client-key.pem
front-proxy-client.pem
kubelet-key.pem
kubelet.pem
sa.key
sa.pub
scheduler.csr
scheduler-key.pem
scheduler.pem

Then I tried to change the CA certificate in the Makefile, re-executed make certs, and finally redeployed custom-metrics-api, but I still have this problem. Is there a solution?

Kubernetes Version:

[root@k8s-master01 k8s-prom-hpa]# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.6", GitCommit:"abdda3f9fefa29172298a2e42f5102e777a8ec25", GitTreeState:"clean", BuildDate:"2019-05-08T13:53:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.6", GitCommit:"abdda3f9fefa29172298a2e42f5102e777a8ec25", GitTreeState:"clean", BuildDate:"2019-05-08T13:46:28Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

