
kubearmor / kubearmor

Runtime Security Enforcement System. Workload hardening/sandboxing and implementing least-permissive policies made easy leveraging LSMs (BPF-LSM, AppArmor).

Home Page: https://kubearmor.io/

License: Apache License 2.0

Shell 10.23% C 7.14% Makefile 1.95% Go 76.45% Dockerfile 0.57% HTML 3.55% Smarty 0.10%
lsm tool security containers kubernetes policy system bpf ebpf kernel

kubearmor's People

Contributors

achrefbensaad, ankurk99, anurag-rajawat, aryan-sharma11, asifalix, daemon1024, delusionaloptimist, dku-boanlab, dqsully, elfadel, geyslan, github-actions[bot], h3llix, haytok, jatinagwal, kranurag7, nam-jaehyun, namdeirf, nyrahul, oneiro-naut, prateeknandle, primalpimmy, renovate[bot], rksharma95, rootxrishabh, seswarrajan, seungsoo-lee, shreyas220, vedratan, weirdwiz



kubearmor's Issues

gRPC connection failure

When an existing gRPC connection between KubeArmor and the LogServer breaks, there is no reconnection logic.
Thus, after a connection error, logs are no longer delivered to the LogServer.
I'll fix it soon.

Get/Describe ksp via kubectl

We need to get/describe KubeArmor security policy information the same way Cilium network policies can be inspected (e.g., `kubectl get cnp`):

  • kubectl get ksp
  • kubectl describe ksp [ksp name]
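One way to make `kubectl get ksp` print meaningful output is to add printer columns and a `ksp` short name to the KubeArmorPolicy CRD. A minimal sketch; the column names and JSON paths below are assumptions, not the actual CRD:

    # Sketch: additionalPrinterColumns so `kubectl get ksp` shows useful fields.
    # The jsonPath values are hypothetical.
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: kubearmorpolicies.security.accuknox.com
    spec:
      group: security.accuknox.com
      names:
        kind: KubeArmorPolicy
        plural: kubearmorpolicies
        shortNames:
        - ksp
      scope: Namespaced
      versions:
      - name: v1
        served: true
        storage: true
        additionalPrinterColumns:
        - name: Severity
          type: string
          jsonPath: .spec.severity
        - name: Action
          type: string
          jsonPath: .spec.action
        - name: Age
          type: date
          jsonPath: .metadata.creationTimestamp
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true

With the `shortNames` entry, `kubectl get ksp` resolves to the CRD, and `kubectl describe ksp [ksp name]` works out of the box.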

Check the feasibility of nested profiles inside containers

The purpose of the fromSource option in security_policy_specification.md is to narrow down the coverage of a security policy.
It means that we want to restrict not only the behavior of a container but also the behavior of a process in the container.
For example, we want /bin/bash in a container to access /tmp only.
To achieve this, we need to utilize nested profiles supported by AppArmor.

As a first step, we want to see an example of nested profiles applied to a container.
(there are several examples that present nested profiles in a normal case, but those examples do not work in a container)
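For reference, a nested-profile sketch along the lines of the /bin/bash example above. This is hypothetical and untested inside a container, which is exactly what this issue needs to verify:

    # Hypothetical AppArmor profile: the container profile delegates /bin/bash
    # to a child profile that only permits access under /tmp.
    profile apparmor-demo-ubuntu-1 flags=(attach_disconnected) {
      file,
      network,

      # run /bin/bash under its own child profile
      /bin/bash cx -> bash_restricted,

      profile bash_restricted {
        /bin/bash r,
        /tmp/ r,
        /tmp/** rw,
      }
    }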

Auditing allowed operations

KubeArmor provides three types of actions: Allow, Audit, and Block.

The current purpose of the 'Audit' action is to check whether policies are properly defined before using the 'Block' action.
So, we can get alerts for certain security rules with the 'Audit' and 'Block' actions.

What about the 'Allow' action? We get alerts for objects or operations that are not allowed,
but there is currently no way to get logs for accesses or operations that are explicitly allowed.

Thus, we'll add a new action called 'AllowWithAudit'.
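A policy using the proposed action might look like the following; the action name comes from this issue, while everything else mirrors the existing policy format:

    apiVersion: security.accuknox.com/v1
    kind: KubeArmorPolicy
    metadata:
      name: ksp-ubuntu-1-allow-audit-sleep
      namespace: multiubuntu
    spec:
      selector:
        matchLabels:
          container: ubuntu-1
      process:
        matchPaths:
        - path: /bin/sleep
      action:
        AllowWithAudit   # proposed: allow the access, but still emit an audit log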

Support gRPC

KubeArmor currently supports two types of logging mechanisms: standard output and a log file.
Now, it's time to adopt gRPC to send audit logs to the outside.

Apply native apparmor (or any LSM) policy using KubeArmor

KubeArmor lets one specify a YAML policy, which in turn gets converted/mapped into an AppArmor (or underlying LSM) policy spec.

It should be possible to embed a native AppArmor (or other LSM) policy as-is in the YAML, so that we do not depend on new KubeArmor extensions being added to support every AppArmor construct.

A sample YAML could be:

--- 
apiVersion: security.accuknox.com/v1
kind: KubeArmorPolicy
metadata: 
  name: ksp-mysql-dir-audit
  namespace: wordpress-mysql
spec: 
  apparmor: |
        /bin/bzip2 rm,
        /** r,
        signal peer=/usr/bin/man,
  selector: 
    matchLabels: 
      app: mysql

Notice the AppArmor-specific constructs included in native AppArmor format.

SELinux investigation & support

AWS EKS does not support AppArmor. They only support SELinux.

So, we need to investigate SELinux and support it in KubeArmor.

Extend container-aware logs with policy details

Log provided by Auditd

type=AVC msg=audit(1611571375.558:7542): apparmor="DENIED" operation="exec" profile="apparmor-demo-ubuntu-1" name="/bin/sleep" pid=2945377 comm="bash" requested_mask="x" denied_mask="x" fsuid=0 ouid=0

Container-aware log provided by KubeArmor

{
  "updatedTime": "2021-01-26T12:24:24.073028Z",
  "hostName": "ubuntu20",
  "namespaceName": "multiubuntu",
  "podName": "ubuntu-1-deployment-5fd94b7b9b-vvbk2",
  "containerID": "aa30045a08ed662534085ad49349ec120878f73ff9aa5451597e64eeaabcf030",
  "containerName": "k8s_ubuntu-1-container_ubuntu-1-deployment-5fd94b7b9b-vvbk2_multiubuntu_0b79c32b-2f4c-4da8-b771-e6baa7ebede4_0",
  "hostPid": 105116,
  "source": "bash",
  "operation": "Process",
  "resource": "/bin/sleep",
  "result": "Permission denied"
}

The next step would be a log extended with policy details:

{
  "updatedTime": "2021-01-26T12:24:24.073028Z",
  "hostName": "ubuntu20",
  "namespaceName": "multiubuntu",
  "podName": "ubuntu-1-deployment-5fd94b7b9b-vvbk2",
  "containerID": "aa30045a08ed662534085ad49349ec120878f73ff9aa5451597e64eeaabcf030",
  "containerName": "k8s_ubuntu-1-container_ubuntu-1-deployment-5fd94b7b9b-vvbk2_multiubuntu_0b79c32b-2f4c-4da8-b771-e6baa7ebede4_0",
  "hostPid": 105116,
  "ppid": 99,
  "pid": 108,
  "uid": 0,
  "policyName": "ksp-ubuntu-1-proc-path-block",
  "severity": "low",
  "type": "PolicyMatched",
  "source": "/bin/bash",
  "operation": "Process",
  "resource": "/bin/sleep 1",
  "action": "Block",
  "result": "Permission denied"
}

Telemetry with Prometheus

  • Define telemetry metrics

=== Example Metrics ===

  • (Number of) logs generated on a given host / namespace / pod / container
    → [ HostName | NamespaceName | PodName | ContainerName ]

  • (Number of) logs generated on a given policy
    → PolicyName

  • (Number of) logs with X severity or above
    → Severity

  • (Number of) logs with a given type (i.e., PolicyMatched, SystemLog)
    → Type

  • (Number of) logs with a given operation (i.e., Process, File, Network, Capabilities)
    → Operation

  • (Number of) logs with a given action (i.e., Allow, Audit, Block)
    → Action

  • (Number of) logs with a given tag (not supported yet)

  • (Number of) logs with a given label (not supported yet)

=== Example Metrics ===

=== Possible Filters ===

  • TimeRange (from YYYY-MM-DD hh:mm:ss to YYYY-MM-DD hh:mm:ss)
  • HostName (string)
  • NamespaceName (string)
  • PodName (string, narrowed down by NamespaceName)
  • ContainerName (string, narrowed down by NamespaceName)
  • PolicyName (string, possibly narrowed down by NamespaceName)
  • Severity (integer, 1-10)
  • Type (PolicyMatched, SystemLog)
  • Operation (Process, File, Network, Capabilities)
  • Action (Allow, Audit, Block)

=== Possible Filters ===

  • Produce telemetry data to monitoring systems (e.g., Prometheus)
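In the Prometheus text exposition format, such metrics might look like this; the metric and label names here are illustrative assumptions, not a final design:

    # HELP kubearmor_logs_total Number of KubeArmor logs, by source and policy context.
    # TYPE kubearmor_logs_total counter
    kubearmor_logs_total{hostName="ubuntu20",namespaceName="multiubuntu",podName="ubuntu-1",type="PolicyMatched",operation="Process",action="Block",severity="3"} 42
    kubearmor_logs_total{hostName="ubuntu20",namespaceName="multiubuntu",podName="ubuntu-1",type="SystemLog",operation="File",action="Audit",severity="1"} 7

Most of the filters above would then map to label matchers or PromQL queries, e.g. `sum by (namespaceName) (rate(kubearmor_logs_total[5m]))`, while TimeRange comes from the query window itself.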

Missing audit policy count

This is a minor issue.
When we apply security policies with the Audit action, KubeArmor doesn't count such policies.
It only counts security policies with the Allow and Block actions. I'll fix this soon.

Check the feasibility of PodPreset/PodSecurityPolicy

  • Current status

In order to apply security contexts (e.g., AppArmor profiles),
we need to specifically define them in pod definitions for Kubernetes.

Here is an example.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-1-deployment
  namespace: multiubuntu
  labels:
    deployment: ubuntu-1
spec:
  replicas: 1
  selector:
    matchLabels:
      group: group-1
      container: ubuntu-1
  template:
    metadata:
      labels:
        group: group-1
        container: ubuntu-1
      annotations:
        container.apparmor.security.beta.kubernetes.io/ubuntu-1-container: localhost/apparmor-demo-ubuntu-1
    spec:
      containers:
        - name: ubuntu-1-container
          image: 0x010/ubuntu-w-utils:latest

As you can see, we manually added an annotation to use AppArmor for container security.
This document (Link) provides more details.

  • What we want to do

We're looking for a way to automatically add security contexts into pod definitions.

For example, we want to automatically add an annotation like
"container.apparmor.security.beta.kubernetes.io/[container-name]: localhost/apparmor-[namespace name]-[container name]" in this definition.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-1-deployment
  namespace: multiubuntu
  labels:
    deployment: ubuntu-1
spec:
  replicas: 1
  selector:
    matchLabels:
      group: group-1
      container: ubuntu-1
  template:
    metadata:
      labels:
        group: group-1
        container: ubuntu-1
    spec:
      containers:
        - name: ubuntu-1-container
          image: 0x010/ubuntu-w-utils:latest

Here, we don't need to worry about the existence of an AppArmor profile.
KubeArmor will internally create the profile for you.

gRPC service of KubeArmor

KubeArmor provides a gRPC service for external clients.
It feeds logs and telemetry through this service.

There are several sub-tasks to achieve this.

  • Open gRPC service per KubeArmor
  • Calculate statistics
  • Feed logs and statistics to clients

Handling for access to the predefined host volume mount

In SELinux, if a user wants to deploy a pod that has host-volume mount options, we should handle this right after attaching the SELinux type label so that the pod can access the mounted volume.

    spec:
      containers:
        - name: ubuntu-3-container
          image: 0x010/ubuntu-selinux:latest
          volumeMounts:
          - name: homepath
            mountPath: /home
          securityContext:
            seLinuxOptions:
              type: ubuntu-3-container.process
      volumes:
      - name : homepath
        hostPath:
          path: /home
          type: Directory

In this case, we should cover the /home directory by default, and then the user can apply a policy that defines access control for its subdirectories.

Policy reset problem during security policy updates

I found an issue that security policies are sometimes gone when a new security policy is added.

Multiple structures are updated without locks during the updates of pods, containers, and security policies.
Thus, the updates sometimes overwrite some data (in particular, security policies), eventually missing some updates.

We need to properly fix KubeArmor's update logic.

Pre-install check script/bin

A pre-install script that checks the prerequisites for KubeArmor will be helpful when onboarding clusters/nodes with the KubeArmor daemonset.

  • Check AppArmor-related installation/tools
  • Check kernel config, LSM configs
  • Any other tool dependencies

Support Containerd along with Docker

While KubeArmor is designed for Kubernetes, it currently uses Docker APIs to get container information in each node.
This is because the container information is required to figure out which container triggers a certain audit log.

In the audit log, we can only see system metadata (e.g., PPID and PID).
Thus, when KubeArmor keeps track of process executions inside containers, it finds the mapping between system metadata and container context using the container information given by Docker.

As a next step, we'd like to support Containerd.

The overall logic would be the same as the current Docker engine (dockerHandler.go, dockerWatcher.go).
We also prepared empty files for Containerd (containerdHandler.go, containerdWatcher.go).

It would be helpful if someone took up this issue.

ContainerMonitor: use libbpf instead of bcc

bcc requires kernel headers and it seems unstable when we test KubeArmor on several environments (with different kernel versions). Thus, we're planning to change the base of the container monitor from bcc to libbpf.

Define system operations at a high level

We need to define high-level operations that represent a set of internal behaviors.
For example, the 'open_network_service' operation will cover the 'net_bind_service' capability internally.

Please take a look at supported_capability_list.md and the coverage of each capability.
Then, see how to categorize capabilities at a high level.

Add Severity in Policy Specification

Add the severity field in a security policy

apiVersion: security.accuknox.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-group-1-proc-path-block
  namespace: multiubuntu
spec:
  severity: medium
  selector:
    matchLabels:
      group: group-1
  process:
    matchPaths:
    - path: /bin/sleep # try sleep 1 (permission denied)
  action:
    Block

Failing kubearmor-cos-auditd during its healthcheck

On GKE, KubeArmor works fine, but kubearmor-cos-auditd fails after some time.
Something seems to cause these failures during its healthcheck.
We need to investigate this issue and fix it.

Network rules are not working on GKE

KubeArmor provides network rules (specifying protocols that we want to allow or block).
These network rules are working fine on a bare-metal environment.
However, they are not working on GKE.

No matter what network rules are defined, they are ignored on GKE.
We need to investigate this issue.

Alerts from the policy violations of network operations and capabilities

AppArmor and Auditd do not generate alerts for policy violations in network operations and capabilities,
while they do generate alerts for policy violations on object accesses (e.g., process executions and file accesses).

By updating KubeArmor's container monitor, KubeArmor needs to generate alerts for policy violations in those operations too.

Missed audit logs

  • Problem

KubeArmor works fine in terms of audit logs for blocked process executions and blocked file accesses.
However, KubeArmor does not generate audit logs for blocked network operations and capabilities.
It seems that this happens because such logs are not generated by the LSMs at this point.

  • Possible approach

When some system calls are blocked due to capability permission checks,
the information of such blocked system calls is recorded in audit logs.
We can use this information and generate container-aware audit logs.
In the case of network operations, we still need more time to think of how to get audit logs.

  • Checklist

  • Blocked network operations

  • Blocked capability permission checks

Test KubeArmor on AWS (EKS)

Check if KubeArmor works on AWS Kubernetes engines.

  • Check if the kernel primitives needed for KubeArmor are readily available
  • Test KubeArmor on AWS
  • Create a document for EKS deployment

Check if anyone can deploy KubeArmor based on the document

  • Double validation by someone else

Garbage value in the system logs of container monitor

KubeArmor's container monitor keeps track of not only the lifecycles of processes but also the syscall failures in containers.
While extracting the arguments of syscalls, there were some garbage values.

{"updatedTime":"2020-12-28T10:53:45.968956Z","hostName":"localk8s","containerID":"be074ac2bdba143df46e5f88f4ebccc2cde6e196459b3d9f2ae11ef8529e693d","containerName":"k8s_ubuntu-1-container_ubuntu-1-deployment-5fd94b7b9b-92m4p_multiubuntu_ae3175cb-f740-4feb-9664-d144c4ada0c9_0","hostPid":1796,"ppid":443,"pid":107,"tid":107,"uid":0,"comm":"bash","syscall":"SYS_CONNECT","argnum":2,"retval":-2,"data":"fd=3 sa_family=AF_UNIX sun_path=/var/run/nscd/socket\ufffd\u001f\u0016\ufffd\ufffd�\n\nN\ufffdK \u0001\ufffd� \u00080(M\ufffd\ufffd�","errorMessage":"No such file or directory"}

I'll fix this soon.

Support for Host based policies

KubeArmor currently supports pod-specific policies, i.e., one needs to specify the pod selector labels (and optionally a namespace) for policy enforcement. However, some rules apply at the host level and thus need to be handled using node selector labels alone. There is no notion of namespaces for host-based rules.

Check Cilium's host policies for reference.
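A prospective host policy could drop the namespace and pod selector in favor of a node selector. A sketch only; the kind and field names below are assumptions for illustration:

    apiVersion: security.accuknox.com/v1
    kind: KubeArmorHostPolicy   # hypothetical kind for host-level rules
    metadata:
      name: hsp-block-sleep     # note: no namespace for host-based rules
    spec:
      nodeSelector:
        matchLabels:
          kubernetes.io/hostname: ubuntu20
      process:
        matchPaths:
        - path: /bin/sleep
      action:
        Block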

Update Docker Engine API

There have been some updates in Docker Engine APIs.
To support current Kubernetes environments, we'll update the Docker Engine APIs used for container discovery.

Auto-annotating the service/pod with apparmor annotations

Currently, the AppArmor annotations have to be manually applied before deploying the service/pods.

For example, the examples/wordpress-mysql deployment contains a manually added AppArmor annotation:

  template:
    metadata:
      labels:
        app: wordpress
      annotations:
        container.apparmor.security.beta.kubernetes.io/wordpress: localhost/apparmor-wordpress

The AppArmor-related annotations should be applied automatically if the KubeArmor operator is deployed in the cluster.

Investigate container attacks to define predefined policies

So far, KubeArmor depends on user-defined security policies.
To extend its coverage, we'll investigate container attacks in the real world and develop security policies against them,
which can be then used for the baseline of container security.

Enable KubeArmor per namespace

When KubeArmor is deployed, it currently injects AppArmor annotations to all pods to put them under its control.
However, there could be some issues when AppArmor is applied to pods in certain namespaces.
Thus, similar to 'istio-injection=enabled' used by Istio, I'm planning to have the 'kubearmor=enabled' label at the namespace level to selectively inject AppArmor annotations to pods in specific namespaces only.
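With that convention, opting a namespace in would just mean labeling it (assuming the 'kubearmor=enabled' label proposed above):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: wordpress-mysql
      labels:
        kubearmor: enabled   # proposed opt-in label, analogous to istio-injection=enabled

Or, for an existing namespace, `kubectl label namespace wordpress-mysql kubearmor=enabled`.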

Specify multiple protocols/capabilities in the same rule

KubeArmor currently allows enabling/disabling network protocols and system capabilities, one protocol/cap at a time.
It should be possible to specify multiple protocols or system capabilities in the same rule.

For example, if the user wants to enable the TCP and UDP protocols (but not ICMP), the user currently has to specify two different rules, i.e., one enabling TCP and the other enabling UDP. Here is the relevant part of the policy specification:

network:
    matchProtocols:
    - protocol: [TCP|UDP|ICMP]       #---> Only TCP or UDP or ICMP could be specified.
      fromSource:
      - path: [absolute file path]
      - dir: [absolute directory path]
        recursive: [true|false]

It should be possible to specify multiple protocols in the same rule using the following (prospective) spec:

network:
    matchProtocols:
    - protocol: [TCP,UDP,ICMP]    #---> Use comma separated protocols to specify multiple protocols.
      fromSource:
      - path: [absolute file path]
      - dir: [absolute directory path]
        recursive: [true|false]

The same is true for specifying multiple system capabilities.

Feature extension for resource restriction

CPU/memory/disk limits can be defined in Docker or Kubernetes pod configurations.
Through KubeArmor, it should be possible to change such restrictions at runtime without configuration updates.

Need a security policy validator and supporting per-field/rule action/severity

When we apply security policies based on KubeArmor's CRD, we need to check two things.

  1. Syntax

CRDs basically provide syntax validation for each field, but they can only check input types or patterns.

  2. Semantics

CRDs don't provide any semantic validation, meaning that a user must define security policies without semantic errors on their own.

Here are some examples.

  • 'ownerOnly' works with the Allow action.
  • 'recursive' is only effective with directories, not paths.
  • NodeSelector cannot be defined with selector.

As a result, we need a security policy validator on the Kubernetes side to check for syntax and semantic errors
as soon as policies are applied to Kubernetes, before they reach KubeArmor.

The validator might be implemented as a custom controller along with the CRD.
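Alternatively (or in addition to a custom controller), the semantic checks could run in a validating admission webhook so that invalid policies are rejected before they are stored. A configuration sketch, with hypothetical names and service endpoints:

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: kubearmor-policy-validator     # hypothetical
    webhooks:
    - name: validate.ksp.security.accuknox.com
      rules:
      - apiGroups: ["security.accuknox.com"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["kubearmorpolicies"]
      clientConfig:
        service:
          name: kubearmor-validator        # hypothetical validator service
          namespace: kube-system
          path: /validate
      admissionReviewVersions: ["v1"]
      sideEffects: None
      failurePolicy: Fail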

Add Namespace and Pod info in audit logs

KubeArmor currently produces audit logs containing the IDs and names of containers.
I'll update KubeArmor to produce audit logs that contain {Namespace Name, Pod Name, Container ID, Container Name}.

Policy enforcement failure due to rule conflicts

Let's say that we have the following policies.

Policy A

process:
  matchPaths:
  - path: /bin/sleep
    fromSource:
    - path: /bin/bash
action:
  Allow

Policy B

process:
  matchPaths:
  - path: /bin/bash
action:
  Allow

Those policies are converted to something like this.

...
/bin/bash cx,
profile /bin/bash {
  /bin/sleep ix,
}
/bin/bash ix,
...

This causes a policy enforcement failure because of the two conflicting /bin/bash lines.

Update documents and examples

Since we're going to update the policy specification and the log format,
we also need to update documents and examples.
