lens's People

Contributors

aleksfront, chenhunghan, dependabot[bot], dex4er, dmitriynoa, gabriel-mirantis, iciclespider, iku-turso, ixrock, jakolehm, jansav, jim-docker, jkroepke, jnummelin, jweak, k8slens-bot, marcbachmann, miskun, msa0311, nachasic, nevalla, nokel81, ocdi, panuhorsmalahti, pashevskii, pauljwil, samitiilikainen, stevejr, vshakirova, wangyangjun


lens's Issues

New column on Storage page

The proposal is to add a new column, Available Size / Free Size or similar (to the right of the Size column), showing how much space is left in each PVC. The kubelet Prometheus metric kubelet_volume_stats_available_bytes{persistentvolumeclaim="pvc-ID"} can be used for this.
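If the matching capacity metric is also scraped, the column could even show a percentage. A hedged PromQL sketch (both series are standard kubelet volume stats; the label matcher is illustrative):

    # free bytes per PVC, as reported by the kubelet
    kubelet_volume_stats_available_bytes{persistentvolumeclaim="pvc-ID"}

    # free space as a percentage of capacity
    100 * kubelet_volume_stats_available_bytes
        / kubelet_volume_stats_capacity_bytes

PromQL matches the two series on their shared labels, so the division yields one percentage per PVC.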

download users kubeconfig

As an admin, I want to download a user's kubeconfig so that I can use it with kubectl without first logging in as that user.

Unable to connect to clusters using aws-iam-authenticator

Describe the bug
The AWS IAM authenticator program generates temporary tokens, which are injected into a context in order to provide authentication validated against an active IAM account. To support this, the users section of the kubeconfig file calls a command (supported by kubectl). Unfortunately, I don't seem to be able to authenticate to these clusters using Lens.

To Reproduce
Steps to reproduce the behavior:

  1. Set up an EKS-based cluster in AWS
  2. Follow the instructions for setting up local kubectl access
  3. Add cluster to Lens
  4. See error Invalid kubeconfig context snip-production: cannot access cluster (snip-production)

Expected behavior
Lens should connect to the cluster by executing the command stored in the config file, just like kubectl does.

Screenshots
Example config file:

    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1alpha1
        args:
          - token
          - '-i'
          - production-<snip>
        command: aws-iam-authenticator
        env: null
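One way to narrow this down (a debugging suggestion, not part of the original report): run the exec plugin manually with the same arguments. If it prints an ExecCredential JSON, the authenticator and IAM setup are fine, and the problem is Lens not executing exec-based auth plugins:

    aws-iam-authenticator token -i production-<snip>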

Environment (please complete the following information):

  • Lens Version: 2.1.3
  • OS: OSX 10.14.5
  • Installation method (e.g. snap or AppImage in Linux): DMG file

Logs:
When you run the application executable from command line you will see some logging output. Please paste them here:
No logs produced when running application with open -W -F -a Lens

Kubeconfig:
See above


kube-state-metrics installation for lens metrics is missing PSP

Describe the bug
Installing kube-state-metrics via Cluster -> Settings -> Features -> Metrics -> Install fails because it does not install Pod Security Policies. The install in the UI itself succeeds, but the resulting kube-state-metrics pods fail to run because of PSP.

To Reproduce
Steps to reproduce the behavior:

  1. Have cluster that enforces PSP
  2. Install metrics
  3. Observe that the pods are not running because of PSP violations (the required policy is missing)

Expected behavior
Pods start OK with PSP.
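A minimal sketch of what the Metrics installer could additionally ship (the policy name, granted privileges, and RBAC wiring below are assumptions, not the actual Lens manifests):

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: lens-metrics
    spec:
      privileged: false
      seLinux:
        rule: RunAsAny
      runAsUser:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      volumes:
        - secret
        - configMap
        - emptyDir
        - persistentVolumeClaim
    ---
    # RBAC so the metrics service accounts may "use" the policy
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: lens-metrics-psp
    rules:
      - apiGroups: ['policy']
        resources: ['podsecuritypolicies']
        verbs: ['use']
        resourceNames: ['lens-metrics']

A RoleBinding in the lens-metrics namespace would then grant this ClusterRole to the prometheus and kube-state-metrics service accounts.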

Environment (please complete the following information):

  • Lens Version: 2.1.3
  • OS: Mojave
  • Download from website

preempted message is confusing

Other event messages start with a verb ("Killing", "Created"), but when a pod is preempted the message starts with "by ".

I know this comes straight out of Kubernetes, but it could be improved in Lens.

Terminal broken when using the Snap version

Describe the bug
The terminal does not work, and after a few clicks on the refresh button an error appears.

To Reproduce
Steps to reproduce the behavior:

  1. Go to your Lens app
  2. Click on Terminal button in bottom
  3. Scroll down to '....'
  4. See error

Expected behavior
The terminal should open, as it does in the web version of Lens.

Screenshots
[screenshot]

Environment (please complete the following information):

  • Lens Version: 2.2.0
  • OS: Ubuntu 18.10
  • Installation method (e.g. snap or AppImage in Linux): snap


login as user

From the Users view, allow me (as an admin) to log in to Lens as a given user.

Clicking "open in shell" fails if command line already has something

  1. Clicking "open in shell" does not clear the existing command line before entering the kubectl command. Maybe always open a new tab and bring it into view automatically?

  2. If you already have a shell opened inside a pod, clicking "open in shell" will issue kubectl inside that pod (and it fails).

  3. The terminal tab is not visible after clicking "open in shell". It seems that nothing happens, so you might click multiple times -> you get multiple commands in the hidden window, which you then need to open manually.

  4. When you have multiple containers with short names, the "open shell" menu is really difficult to reach, as it's a hover that is

  • hard to get to appear on a big screen,
  • easy to accidentally dismiss, as the hover window is fairly small

[screenshot: kubernetes_dashboard___kontena_lens-1]

Cannot connect to cluster using OIDC

Describe the bug
When connecting to a cluster that has OIDC enabled, Kontena says that the cluster is offline.

To Reproduce
Steps to reproduce the behavior:

  1. Open Kontena
  2. Select a cluster with OIDC authentication
  3. A message stating 'Cluster is offline' appears

Expected behavior
Kontena should pass the login message through to the user so that they can authenticate. A retry button should be available after that completes.

Screenshots
[screenshot]

Environment (please complete the following information):

  • Lens Version:
  • OS: Windows
  • Installation method (e.g. snap or AppImage in Linux): Installer

Logs:

 info: Checking for update
info: initializing server for kubernetes.docker.internal:6443 on port 9600
info: initializing server for kubernetes.docker.internal:6443 on port 9601
info: initializing server for kube-api-p-rg-api-productio-b7d5a5-ffeac4a7.hcp.westus.azmk8s.io:443 on port 9602
info: initializing server for kube-api-s-rg-api-staging-319a50-ae3ee23b.hcp.westus.azmk8s.io:443 on port 9603
info: Update for version 2.0.9 is not available (latest version: 2.0.9, downgrade is disallowed).


issue with using Kontena Lens with AKS cluster

Hi Team,

We are facing an issue connecting Kontena to an AKS cluster.

Steps to reproduce :

  1. Start an AKS cluster, install Kontena locally, connect to it, and enable metrics
  2. Once metrics are enabled, all the required resources are created in the lens-metrics namespace without any errors
  3. Node-level metrics are scraped correctly, but when we try to get pod-level metrics it fails, saying metrics are not available yet
  4. Looking at the Kontena logs, this seems to be a bug:
error: server https://aks:443 stderr: [ERROR] -> [KUBE-REQUEST]: Request failed with status code 404 {
 "method": "get",
 "url": "https://aks:443/api/v1/namespaces/lens-metrics/services/prometheus:80/proxy/api/v1/query_range",
 "headers": {
   "Accept": "application/json, text/plain, /",
   "Content-Type": "application/json",
   "User-Agent": "axios/0.19.0"
 },
 "params": {
   "query": "sum(node_filesystem_size_bytes{mountpoint=\"/\"}) by (kubernetes_node)",
   "start": "1569985980",
   "end": "1569989580",
   "step": "60",
   "kubernetes_namespace": "kontena-stats"
 }
}

We faced the same issue when installing the prometheus operator on the cluster. The workaround is to change https to http; after executing the command below against the prometheus operator, everything works fine:

kubectl -n monitoring get servicemonitor prometheus-prometheus-oper-kubelet -o yaml | sed 's/https/http/' | kubectl replace -f - 

This may be a hint for the issue.

All log output is responding with invalid value

invalid value for "since": parsing time "-62135596800" as "2006-01-02": cannot parse "-62135596800" as "2006"

[screenshot]

HTTP/1.1 200 OK
Audit-Id: 89f65505-9cfe-4f30-b4b8-a8078546432a
Content-Length: 109
Content-Type: text/plain
Date: Mon, 03 Dec 2018 13:52:53 GMT
X-Powered-By: Express

invalid value for "since": parsing time "-62135596800" as "2006-01-02": cannot parse "-62135596800" as "2006"

Request has empty since time:

curl "https://lens.int.foundationsoft.com/api-kube/api/v1/namespaces/registry/pods/stolon-sentinel-76d8c58b74-hz6x2/log?container=stolon-sentinel^&timestamps=true^&tailLines=1000^&sinceTime="

This is lens 1.3.0 shipped with Pharos 2.1.0-beta.1.
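For comparison (a debugging sketch, not part of the original report), the same request should parse if the empty sinceTime parameter is dropped or given a valid RFC3339 timestamp:

    curl "https://lens.int.foundationsoft.com/api-kube/api/v1/namespaces/registry/pods/stolon-sentinel-76d8c58b74-hz6x2/log?container=stolon-sentinel&timestamps=true&tailLines=1000&sinceTime=2018-12-03T00:00:00Z"

This suggests Lens should omit the sinceTime parameter entirely when it has no value, rather than sending it empty.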

Connection problem to self-signed tls certificates

I have a problem connecting to clusters that have self-signed certificates.
I can connect to them with kubectl, and I've put
insecure-skip-tls-verify: true
in the kubeconfig, but Lens doesn't seem to honor it.
It reports:
Unable to connect to the server: x509: certificate is valid for INTERNALIP, not PUBLICIP
How can I fix this?
Thanks
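For reference, this is where the flag lives in a kubeconfig (the cluster name and server below are placeholders); kubectl honors it per cluster entry, so Lens arguably should too:

    clusters:
    - cluster:
        server: https://PUBLICIP:6443
        insecure-skip-tls-verify: true
      name: my-cluster

Note that insecure-skip-tls-verify cannot be combined with a certificate-authority entry for the same cluster.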

Minikube + snap permissions

Describe the bug
When using the snap edition, I can't connect to my minikube cluster using the generated kubeconfig, because lens cannot read the CA file.

To Reproduce

  1. Create a minikube cluster
  2. Attempt to use the new minikube-generated kubeconfig in the "add a new cluster" configuration

Expected behavior
Cluster connects successfully

Actual behavior
Cluster attempts to load forever

Environment (please complete the following information):

  • Lens Version: 2.0.9
  • OS: Ubuntu Budgie 19.04
  • Installation method (e.g. snap or AppImage in Linux): snap

Logs:
When you run the application executable from command line you will see some logging output. Please paste them here:

liamdawson@crow ~/Downloads> kontena-lens --debug --verbose
CLUSTER STORE, MIGRATION: 2.0.0-beta.2
Store data now: {"_options":{"configName":"lens-cluster-store","fileExtension":"json","projectSuffix":"nodejs","clearInvalidConfig":true,"accessPropertiesByDotNotation":false,"projectVersion":"2.0.9","migrations":{},"cwd":"/home/liamdawson/snap/kontena-lens/21/.config/Lens"},"events":{"_events":{},"_eventsCount":0},"path":"/home/liamdawson/snap/kontena-lens/21/.config/Lens/lens-cluster-store.json"}
info: SNAP env is defined, updater is disabled
dumping kc: {
  apiVersion: 'v1',
  kind: 'Config',
  preferences: {},
  'current-context': 'cygnus-dev',
  clusters: [ { name: 'cygnus-dev', cluster: [Object] } ],
  contexts: [ { name: 'cygnus-dev', context: [Object] } ],
  users: [ { name: 'cygnus-dev', user: [Object] } ]
}
(node:11938) UnhandledPromiseRejectionWarning: Error: EACCES: permission denied, open '/home/liamdawson/.minikube/ca.crt'
    at Object.openSync (fs.js:447:3)
    at Object.func (electron/js2c/asar.js:155:31)
    at Object.func [as openSync] (electron/js2c/asar.js:155:31)
    at Object.readFileSync (fs.js:349:35)
    at Object.fs.readFileSync (electron/js2c/asar.js:597:40)
    at Object.fs.readFileSync (electron/js2c/asar.js:597:40)
    at e.<anonymous> (/snap/kontena-lens/21/resources/app.asar/webpack:/src/main/context-handler.ts:44:24)
    at /snap/kontena-lens/21/resources/app.asar/main.js:1:270277
    at Object.next (/snap/kontena-lens/21/resources/app.asar/main.js:1:270382)
    at o (/snap/kontena-lens/21/resources/app.asar/main.js:1:269128)
    at processTicksAndRejections (internal/process/task_queues.js:89:5)
(node:11938) UnhandledPromiseRejectionWarning: Error: EACCES: permission denied, open '/home/liamdawson/.minikube/ca.crt'
    at Object.openSync (fs.js:447:3)
    at Object.func (electron/js2c/asar.js:155:31)
    at Object.func [as openSync] (electron/js2c/asar.js:155:31)
    at Object.readFileSync (fs.js:349:35)
    at Object.fs.readFileSync (electron/js2c/asar.js:597:40)
    at Object.fs.readFileSync (electron/js2c/asar.js:597:40)
    at e.<anonymous> (/snap/kontena-lens/21/resources/app.asar/webpack:/src/main/context-handler.ts:44:24)
    at /snap/kontena-lens/21/resources/app.asar/main.js:1:270277
    at Object.next (/snap/kontena-lens/21/resources/app.asar/main.js:1:270382)
    at o (/snap/kontena-lens/21/resources/app.asar/main.js:1:269128)
    at processTicksAndRejections (internal/process/task_queues.js:89:5)
(node:11938) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:11938) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:11938) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
(node:11938) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
user-open error: Object does not implement the interface

Kubeconfig:
Quite often the problems are caused by malformed kubeconfig which the application tries to load. Please share your kubeconfig, remember to remove any secret and sensitive information.

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/liamdawson/.minikube/ca.crt
    server: https://192.168.39.82:8443
  name: cygnus-dev
contexts:
- context:
    cluster: cygnus-dev
    user: cygnus-dev
  name: cygnus-dev
current-context: cygnus-dev
kind: Config
preferences: {}
users:
- name: cygnus-dev
  user:
    client-certificate: /home/liamdawson/.minikube/client.crt
    client-key: /home/liamdawson/.minikube/client.key

Additional context
It looks like snap prohibits access to hidden files in the root of a user's home directory, and I think the only current workaround is classic confinement. (https://forum.snapcraft.io/t/access-to-specific-hidden-file-path-in-users-home/6948/21)
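A possible user-side workaround (an assumption, not verified against the snap): regenerate the kubeconfig with the certificates embedded, e.g. via minikube start --embed-certs or kubectl config view --flatten --minify, so Lens never needs to read files under ~/.minikube. The resulting kubeconfig carries the data inline:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <base64 CA certificate>
        server: https://192.168.39.82:8443
      name: cygnus-dev
    users:
    - name: cygnus-dev
      user:
        client-certificate-data: <base64 client certificate>
        client-key-data: <base64 client key>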

Labels, annotations, and selectors should be clickable

I'm looking at this service
[screenshot]

And I see the label app: rxindicator

Down below I even see the selector app=rxindicator but no way to get to the pods it represents.

It seems to me that clicking a label, annotation, or selector should take you to a universal resource list page with that selector applied and show all matching resources.


Barring that, paying special attention to Services with a selector, those ones should definitely be linked to the Pods list with that selector applied.

Assign Pharos PRO license

I am on Kontena Lens 1.9.1, and the Assign License button for Kontena Pharos is not working (clicking it does not trigger any action). At the same time, the same button for the Lens license works. Is there a newer tag of registry.pharos.sh/kontenapharos/lens:1.9.1 in the registry?
There are 2 console messages:
-> /api/watch?nodes=3347199&pods=3347199&events=3347199 was interrupted while the page was loading.
-> [mobx.array] Attempt to read an array index (0) that is out of bounds (0). Please check length first. Out of bound indices will not be tracked by MobX

Firefox UI bug

The sub-menu for various items is missing on all pages when Lens is opened in Firefox. In Chrome it works normally.

Firefox:
[screenshot]
Chrome:
[screenshot]

Deploying Prometheus with Lens reports errors

Describe the bug
I've clicked on the Install button for metrics to have Lens deploy Prometheus. It does seem to be working as I do get metrics data in Lens now, however the StatefulSet for Prometheus is in a constant state of alarm with the following error:

create Pod prometheus-0 in StatefulSet prometheus failed error: pods "prometheus-0" is forbidden: error looking up service account lens-metrics/prometheus: serviceaccount "prometheus" not found
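A possible manual workaround (assuming the installer simply omitted the ServiceAccount; the name and namespace are taken from the error above) is to create it yourself with kubectl apply -f:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: prometheus
      namespace: lens-metrics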

To Reproduce
Steps to reproduce the behavior:

  1. Go to settings
  2. Click on Install for Metrics
  3. Go to StatefulSets
  4. See error

Expected behavior
Prometheus to deploy without errors.

Environment (please complete the following information):

  • Lens Version: 2.1.2
  • OS: Ubuntu Studio 19.10
  • Installation method: snap
  • Kubernetes Cluster: 1.15.3 Digital Ocean Hosted

Kubeconfig:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.ondigitalocean.com
  name: do-nyc3-pubserv
contexts:
- context:
    cluster: do-nyc3-pubserv
    user: do-nyc3-pubserv-admin
  name: do-nyc3-pubserv
current-context: do-nyc3-pubserv
kind: Config
preferences: {}
users:
- name: do-nyc3-pubserv-admin
  user:
    token: REDACTED

kube-prometheus integration

Describe the bug
I couldn't find a way for Lens to use an existing prometheus installation. In other words, we have an installed kube-prometheus stack, but no metrics are visible in the app. Metrics appear only after installing another prometheus stack in the lens-metrics namespace (using the button in the settings).

Expected behavior
Kontena Lens should use an existing prometheus installation automatically (or with some extra configuration).

Environment (please complete the following information):

  • Lens Version: 2.1.4
  • OS: Linux
  • Installation method (e.g. snap or AppImage in Linux): AppImage

Issues loading Lens on Windows 7

Hello All,

Got the following error while running Lens 2.0.8 on Windows 7 64-bit. The loading screen never appears:

C:\Users\abcd\Downloads>"Lens Setup 2.0.8.exe"

C:\Users\abcd\Downloads>C:\Users\abcd\AppData\Local\Programs\kontena
-lens\Lens.exe

C:\Users\abcd\Downloads>
info: Checking for update
(node:7784) UnhandledPromiseRejectionWarning: Error: listen EACCES: permission d
enied 127.0.0.1:9000
at Server.setupListenHandle [as _listen2] (net.js:1209:19)
at listenInCluster (net.js:1274:12)
at doListen (net.js:1413:7)
at processTicksAndRejections (internal/process/task_queues.js:84:9)
(node:7784) UnhandledPromiseRejectionWarning: Error: listen EACCES: permission d
enied 127.0.0.1:9000
at Server.setupListenHandle [as _listen2] (net.js:1209:19)
at listenInCluster (net.js:1274:12)
at doListen (net.js:1413:7)
at processTicksAndRejections (internal/process/task_queues.js:84:9)
(node:7784) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This
error originated either by throwing inside of an async function without a catch
block, or by rejecting a promise which was not handled with .catch(). (rejection
id: 1)
(node:7784) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This
error originated either by throwing inside of an async function without a catch
block, or by rejecting a promise which was not handled with .catch(). (rejection
id: 1)
(node:7784) [DEP0018] DeprecationWarning: Unhandled promise rejections are depre
cated. In the future, promise rejections that are not handled will terminate the
Node.js process with a non-zero exit code.
(node:7784) [DEP0018] DeprecationWarning: Unhandled promise rejections are depre
cated. In the future, promise rejections that are not handled will terminate the
Node.js process with a non-zero exit code.
info: Update for version 2.0.8 is not available (latest version: 2.0.8, downgrade is disallowed).

=====

My machine configuration:
OS Name Microsoft Windows 7 Professional
Version 6.1.7601 Service Pack 1 Build 7601
Other OS Description Not Available
OS Manufacturer Microsoft Corporation
System Name C3LAP060
System Manufacturer Hewlett-Packard
System Model HP ProBook 440 G2
System Type x64-based PC
Processor Intel(R) Core(TM) i7-5500U CPU @ 2.40GHz, 2401 Mhz, 2 Core(s), 4 Logical Processor(s)
BIOS Version/Date Hewlett-Packard M73 Ver. 01.44, 13-07-2017
SMBIOS Version 2.7

Namespace needs first-class support in the UX

There are a number of issues around namespaces.

  1. When you open a terminal, the namespace selected in kubectl commands is always default, regardless of what my normal terminal kubectl namespace is (e.g. in iTerm I kubens cjohnson and in the Lens terminal it's still default). It also has no bearing on the namespace I selected in the Filters area of resource list panes
  2. Not everything is scoped/filtered by the namespace filter, even though it seems to persist. The Overview tab shows statistics about all pods, not just pods from the selected namespace. Same with the event list

I propose pushing a "namespace" dropdown right into the top of the window that globally scopes everything Lens does into that namespace. When I go into resource lists, there would be no "Filter" option since the namespace is being filtered globally, and the resource list only includes resources in the selected namespace.

If I go to the Overview pane, the statistics and events would all be scoped to the namespace.

And of course, an "(All namespaces)" option would be available which de-filters namespace (and sets namespace to default in the Terminal)
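Until then, a user-side workaround (plain kubectl; this assumes the Lens terminal reads the same kubeconfig) is to pin the namespace on the current context, which any terminal honoring the kubeconfig would pick up:

    kubectl config set-context --current --namespace=cjohnson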

run lens on master by default

For example, when doing a cordon+drain and Lens happens to be on the same node --> boom.

Lens is critical to Pharos cluster management, and I think it should run on a master node by default.

Unable to modify manifests

I am trying to update manifests within my cluster but I am getting the following error:

"TypeError: e.map is not a function"

Steps to reproduce:

1 - Open a resource within a cluster
2 - Click the edit icon to bring up the manifest editor
3 - Edit the manifest (optional; saving also fails when the manifest is unaltered)
4 - Click save or save & close

Environment:

Cluster Type: Azure Kubernetes System (AKS)
Nodes: Linux aks-agentpool-XXXX 4.15.0-1060-azure #65-Ubuntu SMP Wed Sep 18 08:55:51 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Client OS: Windows 10
Manifest example:

apiVersion: v1
kind: Pod
metadata:
  name: kube-state-metrics-54558ff88b-rfv2z
  generateName: kube-state-metrics-54558ff88b-
  namespace: lens-metrics
  selfLink: /api/v1/namespaces/lens-metrics/pods/kube-state-metrics-54558ff88b-rfv2z
  uid: 9ce60a05-e5dc-11e9-bb7d-2e2df2e8421e
  resourceVersion: '49804274'
  creationTimestamp: '2019-10-03T12:52:09Z'
  labels:
    name: kube-state-metrics
    pod-template-hash: '1011499446'
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: kube-state-metrics-54558ff88b
      uid: 9ce28372-e5dc-11e9-bb7d-2e2df2e8421e
      controller: true
      blockOwnerDeletion: true
spec:
  volumes:
    - name: kube-state-metrics-token-4r7z5
      secret:
        secretName: kube-state-metrics-token-4r7z5
        defaultMode: 420
  containers:
    - name: kube-state-metrics
      image: 'registry.pharos.sh/kontenapharos/prometheus-kube-state-metrics:v1.6.0'
      ports:
        - name: metrics
          containerPort: 8080
          protocol: TCP
      env:
        - name: KUBERNETES_PORT_443_TCP_ADDR
          value: aks-caf-7af47eaf.hcp.eastus2.azmk8s.io
        - name: KUBERNETES_PORT
          value: 'tcp://aks-caf-7af47eaf.hcp.eastus2.azmk8s.io:443'
        - name: KUBERNETES_PORT_443_TCP
          value: 'tcp://aks-caf-7af47eaf.hcp.eastus2.azmk8s.io:443'
        - name: KUBERNETES_SERVICE_HOST
          value: aks-caf-7af47eaf.hcp.eastus2.azmk8s.io
      resources:
        limits:
          cpu: 200m
          memory: 150Mi
        requests:
          cpu: 10m
          memory: 150Mi
      volumeMounts:
        - name: kube-state-metrics-token-4r7z5
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readinessProbe:
        httpGet:
          path: /healthz
          port: 8080
          scheme: HTTP
        initialDelaySeconds: 5
        timeoutSeconds: 5
        periodSeconds: 10
        successThreshold: 1
        failureThreshold: 3
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
  terminationGracePeriodSeconds: 30
  dnsPolicy: ClusterFirst
  nodeSelector:
    kubernetes.io/os: linux
  serviceAccountName: kube-state-metrics
  serviceAccount: kube-state-metrics
  securityContext: {}
  schedulerName: default-scheduler
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
  priority: 0
status:
  phase: Pending
  conditions:
    - type: PodScheduled
      status: 'False'
      lastProbeTime: null
      lastTransitionTime: '2019-10-03T12:52:09Z'
      reason: Unschedulable
      message: '0/8 nodes are available: 8 node(s) didn''t match node selector.'
  qosClass: Burstable

But all attempts give the same error.

Remove cert-manager CRDs at uninstall

The default behaviour is to remove the cert-manager CRDs at uninstall.

If cert-manager was already installed, this is definitely not the behaviour we want, because it breaks cert-manager.

Cannot open shell in pod outside of current context

When attempting to open a shell session within a pod from the Lens UI, a --context argument is not passed. Therefore, an error indicating that the pod was not found is issued unless the pod in question exists in the current-context defined in the active kubeconfig.
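A sketch of the invocation Lens would need to construct (the placeholder names are illustrative):

    kubectl --context <pod-context> --namespace <pod-namespace> exec -it <pod-name> -- sh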

Certificates - app crash

Describe the bug
Entering the Certificates tab generates an error.

To Reproduce
Steps to reproduce the behavior:

  1. In Lens menu click Configuration
  2. Click on 'Certificates'
  3. See error

Expected behavior
Certificate list appears

Screenshots
[screenshot]

Environment (please complete the following information):

  • Lens Version: 2.1.2
  • OS: Windows 10
  • Installation method (e.g. snap or AppImage in Linux): Downloaded, updated package

Logs:

 info: Checking for update
Loading kubeconfig from store with key: kubernetes-admin@kubernetes
Loading kubeconfig from store with key: aws
info: Update for version 2.1.2 is not available (latest version: 2.1.2, downgrade is disallowed).
(node:14608) [DEP0123] DeprecationWarning: Setting the TLS ServerName to an IP address is not permitted by RFC 6066. This will be ignored in a future version.
(node:14608) [DEP0123] DeprecationWarning: Setting the TLS ServerName to an IP address is not permitted by RFC 6066. This will be ignored in a future version.
info: Set kubectl version 1.16.1 for cluster version v1.15.3 using version map
info: initializing server for 10.0.0.31:6443 on port 9001

Component stack:

    in ItemListLayout
    in withRouter(ItemListLayout)
    in KubeObjectListLayout
    in withRouter(KubeObjectListLayout)
    in Certificates
    in Route
    in Switch
    in ErrorBoundary
    in main
    in div
    in MainLayout
    in withRouter(MainLayout)
    in Config
    in Route
    in Switch
    in Switch
    in ErrorBoundary
    in Router
    in b
    in App

Error stack:

TypeError: Cannot read property 'map' of undefined
    at Certificate.getConditions (http://d23c6a2d-5ad8-424f-9c5a-0b1858582a66.localhost:9000/app.js?9b761ca2cabeed8704f5:2:2268604)
    at renderTableContents (http://d23c6a2d-5ad8-424f-9c5a-0b1858582a66.localhost:9000/app.js?9b761ca2cabeed8704f5:2:2274894)
    at http://d23c6a2d-5ad8-424f-9c5a-0b1858582a66.localhost:9000/app.js?9b761ca2cabeed8704f5:2:1964232
    at Array.map (<anonymous>)
    at ItemListLayout.renderList (http://d23c6a2d-5ad8-424f-9c5a-0b1858582a66.localhost:9000/app.js?9b761ca2cabeed8704f5:2:1963920)
    at ItemListLayout.render (http://d23c6a2d-5ad8-424f-9c5a-0b1858582a66.localhost:9000/app.js?9b761ca2cabeed8704f5:2:1964947)
    at allowStateChanges (http://d23c6a2d-5ad8-424f-9c5a-0b1858582a66.localhost:9000/app.js?9b761ca2cabeed8704f5:2:24218)
    at http://d23c6a2d-5ad8-424f-9c5a-0b1858582a66.localhost:9000/app.js?9b761ca2cabeed8704f5:2:77995
    at trackDerivedFunction (http://d23c6a2d-5ad8-424f-9c5a-0b1858582a66.localhost:9000/app.js?9b761ca2cabeed8704f5:2:31388)
    at Reaction.track (http://d23c6a2d-5ad8-424f-9c5a-0b1858582a66.localhost:9000/app.js?9b761ca2cabeed8704f5:2:36439)

Kubeconfig:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: XXX
    server: https://10.0.0.31:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: XXX
    client-key-data: XXX

Additional context
I have a second cluster configured (EKS), and it works there.

The Kubernetes cluster runs on:

 cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

Terminal funkiness

When I open a terminal, it doesn't seem to source my .zshrc, even though it does load zsh. One of my completion files throws an error, compdef not defined, which indicates it didn't source the compinit line that would have defined it. Furthermore, something is not properly coupled: the namespace in the launched terminal is default even when my current namespace is something else. I can go into a normal iTerm window, issue kubectl commands to confirm I'm in namespace cjohnson, then go back to the Lens terminal, and the namespace is still default.

Deploying Metrics to AKS configuration

When deploying the metrics pods via lens to an AKS cluster the pods can't find nodes to run on.

Looking at it the pods have this node selector on them: kubernetes.io/os: linux

While AKS nodes are Linux, by default they don't have this label on them, causing the pod failure.

Cluster Type: Azure Kubernetes System (AKS)
Nodes: Linux aks-agentpool-XXXX 4.15.0-1060-azure #65-Ubuntu SMP Wed Sep 18 08:55:51 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Client OS: Windows 10

Error Events:

Event: prometheus-0.15ca241e30b1e6e7
Message 0/8 nodes are available: 8 node(s) didn't match node selector.
Created 1h 5m ago (2019-10-03T12:52:12Z)
Namespace lens-metrics
Reason FailedScheduling
Source default-scheduler
First seen 1h ago 2019-10-03T12:52:12Z
Last seen <1m ago 2019-10-03T13:57:15Z
Count 2931
Type Warning
Involved object Pod prometheus-0

Event: kube-state-metrics-54558ff88b-rfv2z.15ca241dac39ce92
Message 0/8 nodes are available: 8 node(s) didn't match node selector.
Created 1h 6m ago (2019-10-03T12:52:09Z)
Namespace lens-metrics
Reason FailedScheduling
Source default-scheduler
First seen 1h ago 2019-10-03T12:52:09Z
Last seen 1m ago 2019-10-03T13:57:14Z
Count 2931
Type Warning
Involved object Pod kube-state-metrics-54558ff88b-rfv2z

Once I removed the node selector from both the Prometheus StatefulSet and the metrics Deployment, it works fine.
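The manual fix can be scripted; a hedged sketch (either label the nodes or strip the selector — the workload names assume the default lens-metrics install):

    # Option 1: add the missing label to every node
    kubectl label nodes --all kubernetes.io/os=linux

    # Option 2: remove the nodeSelector from the installed workloads
    kubectl -n lens-metrics patch statefulset prometheus --type=json \
      -p='[{"op": "remove", "path": "/spec/template/spec/nodeSelector"}]'
    kubectl -n lens-metrics patch deployment kube-state-metrics --type=json \
      -p='[{"op": "remove", "path": "/spec/template/spec/nodeSelector"}]'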

Node behavior

It seems undesirable to "remove a node" from the UI.
Maybe this could be replaced with a Cordon/Drain/Uncordon option?
