Thanks for the report. This definitely seems like a bug, and not one I've encountered before.
The resource name for a Pod is extracted from the Pod definition: https://github.com/cruise-automation/k-rail/blob/master/resource/pod.go#L48
I'll look into this some more.
Would you be able to provide a simple DaemonSet that could reproduce this?
from k-rail.
Here's a sample DaemonSet:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: bad-daemonset
spec:
  selector:
    matchLabels:
      name: bad
  template:
    metadata:
      labels:
        name: bad
    spec:
      hostNetwork: true
      hostPID: true
      volumes:
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: hostroot
        hostPath:
          path: /
      containers:
      - name: bad
        image: ubuntu
        command: ["sleep", "36000"]
        imagePullPolicy: Always
        volumeMounts:
        - name: dockersock
          mountPath: "/var/run/docker.sock"
        - name: hostroot
          mountPath: "/host"
        securityContext:
          privileged: true
          capabilities:
            add: ["NET_ADMIN", "SYS_ADMIN"]
```
Fixed in #24.
I figured out what's happening.
First I deployed k-rail in debug mode:
```
$ helm template --namespace k-rail --set config.log_level="debug" deploy/helm | kubectl apply -f -
```
Debug mode prints the raw AdmissionReview requests that come in.
Then I added an exemption for the DaemonSet:
```yaml
- namespace: default
  resource_name: "bad-deployment"
```
Then I deployed the DaemonSet:
```
$ xsel -b | kubectl apply -f -
```
This is the critical log information from the AdmissionReview for a Pod created from a DaemonSet:
```json
"object": {
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "generateName": "bad-deployment-",
    "creationTimestamp": null,
    "labels": {
      "controller-revision-hash": "7c8f9cd7d",
      "name": "bad",
      "pod-template-generation": "1"
    },
    "annotations": {
      "kubernetes.io/limit-ranger": "LimitRanger plugin set: cpu request for container bad"
    },
    "ownerReferences": [
      {
        "apiVersion": "apps/v1",
        "kind": "DaemonSet",
        "name": "bad-deployment",
        "uid": "b63338e0-fc59-11e9-8b62-42010a8000b2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
```
Notice that the top-level `name` field does not exist. The `Name` field is provided by `metav1.ObjectMeta`, and its comment says the field is optional, but a client (here the DaemonSet controller) can request that a name be generated:
```go
// Name must be unique within a namespace. Is required when creating resources, although
// some resources may allow a client to request the generation of an appropriate name
// automatically. Name is primarily intended for creation idempotence and configuration
// definition.
// Cannot be updated.
// More info: http://kubernetes.io/docs/user-guide/identifiers#names
// +optional
Name string `json:"name,omitempty" protobuf:"bytes,1,opt,name=name"`
```
So for improved resource name extraction I have created a function called `GetResourceName()` that attempts to retrieve the best resource name, in this order:

- The name of the controller owner resource (the DaemonSet in this case): `.metadata.ownerReferences.name`
- The top-level `.name` field
- A `name` label: `.metadata.labels.name`

The reason I made the owner controller resource name the top priority is that it is the name of the high-level resource that a user works with most directly. The other names would come from a template spec.
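The fallback order described above can be sketched in Go with minimal stand-in types (the real fields live in `k8s.io/apimachinery`, where `Controller` is a `*bool`); this is an illustration, not the actual k-rail `GetResourceName()` implementation:

```go
package main

import "fmt"

// Minimal stand-ins for the Kubernetes ObjectMeta fields used below.
type OwnerReference struct {
	Kind       string
	Name       string
	Controller bool
}

type ObjectMeta struct {
	Name            string
	Labels          map[string]string
	OwnerReferences []OwnerReference
}

// getResourceName sketches the fallback order described above.
func getResourceName(meta ObjectMeta) string {
	// 1. The name of the controlling owner resource (the DaemonSet here).
	for _, ref := range meta.OwnerReferences {
		if ref.Controller {
			return ref.Name
		}
	}
	// 2. The top-level name field, when the Pod has one.
	if meta.Name != "" {
		return meta.Name
	}
	// 3. A "name" label, typically inherited from the template spec.
	return meta.Labels["name"]
}

func main() {
	pod := ObjectMeta{
		Labels: map[string]string{"name": "bad"},
		OwnerReferences: []OwnerReference{
			{Kind: "DaemonSet", Name: "bad-deployment", Controller: true},
		},
	}
	fmt.Println(getResourceName(pod)) // prints "bad-deployment"
}
```

With this order, the Pod in the AdmissionReview above resolves to its DaemonSet's name even though `.metadata.name` is absent.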
Thanks a lot for bringing this bug to our attention, it was a very good thing to fix.
Fix included in the v1.0 release
Hey Dustin,
I just tried to test this on v1.0-release, and I'm afraid the problem persists. Here's the config I fed to k-rail:
```yaml
- resource_name: "istio-cni-node"
  namespace: "istio-system"
  username: "*"
  group: "*"
  exempt_policies: ["pod_no_host_network"]
```
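For context, exemption matching with `*` wildcards along these lines can be sketched as follows. The glob semantics and names here are assumptions for illustration (using the standard library's `filepath.Match`), not k-rail's actual matching code; the `group` field is omitted for brevity:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// Exemption mirrors the config fields above (group omitted for brevity).
type Exemption struct {
	ResourceName string
	Namespace    string
	Username     string
}

// matches reports whether the exemption applies, treating each config
// field as a glob pattern where "*" matches anything.
func (e Exemption) matches(resourceName, namespace, username string) bool {
	for _, pair := range [][2]string{
		{e.ResourceName, resourceName},
		{e.Namespace, namespace},
		{e.Username, username},
	} {
		ok, err := filepath.Match(pair[0], pair[1])
		if err != nil || !ok {
			return false
		}
	}
	return true
}

func main() {
	ex := Exemption{
		ResourceName: "istio-cni-node",
		Namespace:    "istio-system",
		Username:     "*",
	}
	fmt.Println(ex.matches("istio-cni-node", "istio-system",
		"system:serviceaccount:kube-system:daemon-set-controller")) // true
	// An empty extracted resource name (the symptom reported below,
	// "resource": "") cannot match a concrete pattern.
	fmt.Println(ex.matches("", "istio-system", "anyone")) // false
}
```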
And here's the debug output when trying to deploy an istio-cni-node DaemonSet:
```json
{
  "enforced": true,
  "kind": "Pod",
  "level": "warning",
  "msg": "ENFORCED",
  "namespace": "istio-system",
  "policy": "pod_no_host_network",
  "resource": "",
  "time": "2019-11-11T09:03:43Z",
  "user": "system:serviceaccount:kube-system:daemon-set-controller"
}
{
  "kind": "AdmissionReview",
  "apiVersion": "admission.k8s.io/v1beta1",
  "request": {
    "uid": "e6a3f261-3121-437e-b34e-f537245f79de",
    "kind": {
      "group": "",
      "version": "v1",
      "kind": "Pod"
    },
    "resource": {
      "group": "",
      "version": "v1",
      "resource": "pods"
    },
    "requestKind": {
      "group": "",
      "version": "v1",
      "kind": "Pod"
    },
    "requestResource": {
      "group": "",
      "version": "v1",
      "resource": "pods"
    },
    "namespace": "istio-system",
    "operation": "CREATE",
    "userInfo": {
      "username": "system:serviceaccount:kube-system:daemon-set-controller",
      "uid": "81af81e6-f027-48da-ac0a-70358b49a5cc",
      "groups": [
        "system:serviceaccounts",
        "system:serviceaccounts:kube-system",
        "system:authenticated"
      ]
    },
    "object": {
      "kind": "Pod",
      "apiVersion": "v1",
      "metadata": {
        "generateName": "istio-cni-node-",
        "creationTimestamp": null,
        "labels": {
          "controller-revision-hash": "76db549475",
          "k8s-app": "istio-cni-node",
          "pod-template-generation": "1"
        },
        "annotations": {
          "kubernetes.io/psp": "istio-cni-node",
          "scheduler.alpha.kubernetes.io/critical-pod": "",
          "seccomp.security.alpha.kubernetes.io/pod": "runtime/default",
          "sidecar.istio.io/inject": "false"
        },
        "ownerReferences": [
          {
            "apiVersion": "apps/v1",
            "kind": "DaemonSet",
            "name": "istio-cni-node",
            "uid": "d6c11e2a-c5eb-4569-bb77-8ee35a83b169",
            "controller": true,
            "blockOwnerDeletion": true
          }
        ]
      },
      "spec": {
        "volumes": [
          {
            "name": "cni-bin-dir",
            "hostPath": {
              "path": "/opt/cni/bin",
              "type": ""
            }
          },
          {
            "name": "cni-net-dir",
            "hostPath": {
              "path": "/etc/cni/net.d",
              "type": ""
            }
          },
          {
            "name": "istio-cni-token-s6fzj",
            "secret": {
              "secretName": "istio-cni-token-s6fzj"
            }
          }
        ],
        "containers": [
          {
            "name": "install-cni",
            "image": "registry-internal.elpenguino.net/library/istio-install-cni:1.3.3",
            "command": [
              "/install-cni.sh"
            ],
            "env": [
              {
                "name": "CNI_NETWORK_CONFIG",
                "valueFrom": {
                  "configMapKeyRef": {
                    "name": "istio-cni-config",
                    "key": "cni_network_config"
                  }
                }
              }
            ],
            "resources": {},
            "volumeMounts": [
              {
                "name": "cni-bin-dir",
                "mountPath": "/host/opt/cni/bin"
              },
              {
                "name": "cni-net-dir",
                "mountPath": "/host/etc/cni/net.d"
              },
              {
                "name": "istio-cni-token-s6fzj",
                "readOnly": true,
                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
              }
            ],
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "IfNotPresent",
            "securityContext": {
              "capabilities": {
                "drop": [
                  "ALL"
                ]
              },
              "allowPrivilegeEscalation": false
            }
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 5,
        "dnsPolicy": "ClusterFirst",
        "nodeSelector": {
          "beta.kubernetes.io/os": "linux"
        },
        "serviceAccountName": "istio-cni",
        "serviceAccount": "istio-cni",
        "hostNetwork": true,
        "securityContext": {},
        "affinity": {
          "nodeAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": {
              "nodeSelectorTerms": [
                {
                  "matchFields": [
                    {
                      "key": "metadata.name",
                      "operator": "In",
                      "values": [
                        "wn1.kube-cluster.local"
                      ]
                    }
                  ]
                }
              ]
            }
          }
        },
        "schedulerName": "default-scheduler",
        "tolerations": [
          {
            "operator": "Exists",
            "effect": "NoSchedule"
          },
          {
            "operator": "Exists",
            "effect": "NoExecute"
          },
          {
            "key": "CriticalAddonsOnly",
            "operator": "Exists"
          },
          {
            "key": "node.kubernetes.io/not-ready",
            "operator": "Exists",
            "effect": "NoExecute"
          },
          {
            "key": "node.kubernetes.io/unreachable",
            "operator": "Exists",
            "effect": "NoExecute"
          },
          {
            "key": "node.kubernetes.io/disk-pressure",
            "operator": "Exists",
            "effect": "NoSchedule"
          },
          {
            "key": "node.kubernetes.io/memory-pressure",
            "operator": "Exists",
            "effect": "NoSchedule"
          },
          {
            "key": "node.kubernetes.io/pid-pressure",
            "operator": "Exists",
            "effect": "NoSchedule"
          },
          {
            "key": "node.kubernetes.io/unschedulable",
            "operator": "Exists",
            "effect": "NoSchedule"
          },
          {
            "key": "node.kubernetes.io/network-unavailable",
            "operator": "Exists",
            "effect": "NoSchedule"
          }
        ],
        "priority": 0,
        "enableServiceLinks": true
      },
      "status": {}
    },
    "oldObject": null,
    "dryRun": false,
    "options": {
      "kind": "CreateOptions",
      "apiVersion": "meta.k8s.io/v1"
    }
  }
}
```
@dustin-decker not sure whether I should create a new issue for this, haven't been able to re-open this one. What would you prefer I do? :)
Thanks for the debug output.
It looks like that Pod has the `.metadata.ownerReferences[0].name` field with `.metadata.ownerReferences[0].controller` set to `true`, so it should work.
I've added a unit test with the exact contents of that example Pod you posted, and it passes with the expected resource name of `istio-cni-node`: 9879eec#diff-f24c662fefa979ac39e09c202651ecbbR29
Are you certain you are on v1.0+?
I recommend pulling the latest manifests to get the new policy configurations, but you can probably just set this tag: https://github.com/cruise-automation/k-rail/blob/master/deploy/helm/values.yaml#L9
My mistake. I was using a cached version of the image that I submitted my first numeric USERID PR on. v1.0 is working fine in this regard :)
Oh, great news! We actually ended up encountering the original bug following a cluster upgrade but were able to quickly upgrade to the new release that had the patch. So thanks again for the original bug report.
Cheers