
faas-netes's People

Contributors

acornies, akosveres, aleks-fofanov, alexellis, bartsmykla, bmcustodio, cpanato, dependabot[bot], elliott-beach, ericstoekl, feifeiiiiiiiiiii, ivanayov, kevin-lindsay-1, kirecek, lihaiswu, lucasroesler, martindekov, milsonian, mirroredge, nitishkumar71, rdimitrov, rgee0, rimusz, stefanprodan, viveksyngh, waterdrips, weikinhuang, welteki, zeerorg, zouyee


faas-netes's Issues

Create a Helm Chart for FaaS

A Helm chart would make deploying FaaS to Kubernetes much simpler, and it is quickly becoming the standard way to try out third-party solutions on Kubernetes...

Once FaaS on Kubernetes is somewhat stable we can then submit our chart to the official repo ... https://github.com/kubernetes/charts

I am happy to work on this unless someone else wants to give it a go...

[Question] IPv6-only support

Hi,

I'm trying to deploy faas-netes in my IPv6-only Kubernetes cluster (v1.8.2).

Every service seems to work at a glance, but a problem occurs when the Gateway pod tries to redirect to the Function pod.

I tried following this guide: https://blog.alexellis.io/first-faas-python-function/ (the hello-python function).

#  Curl the final function pod 
16:31:22 › curl '[1404:f200:f::d011:e21d]:8080' -d 'open-faas is awesome'                                                                                                                                   
Hello! You said: open-faas is awesome

# Logs when trying to curl the Gateway pod : 
2017/10/30 05:29:44 > Forwarding [POST] to /function/hello-pytho
2017/10/30 05:31:22 http: proxy error: context canceled
2017/10/30 05:31:22 < [http://faas-netesd.openfaas.svc.domain-k8s.tld:8080/function/hello-python] - 502 took 97.582934 seconds
2017/10/30 05:31:22 function=hello-python
GetHeaderCode before 502

16:35:15 › k -n  openfaas exec -it gateway-b4bc9dd89-r79m4 sh                                                                                                                                        
~ # ping faas-netesd.openfaas.svc.domain-k8s.tld
PING faas-netesd.openfaas.svc.domain-k8s.tld (1404:f200:f::66a8:6360): 56 data bytes
64 bytes from 1404:f200:f::66a8:6360: seq=0 ttl=62 time=0.608 ms

( internal pod) 
bash-4.3# curl http://faas-netesd.openfaas.svc.domain-k8s.tld:8080
404 page not found
bash-4.3# curl http://gateway.openfaas.svc.domain-k8s.tld:8080/
<a href="/ui/">Moved Permanently</a>.

#  When curling the gateway without the '-d' curl argument, we get HTTP 200:
- curl 'gateway.openfaas.svc.domain-k8s.tld:8080'/function/hello-python
2017/10/30 05:43:15 > Forwarding [GET] to /function/hello-python
2017/10/30 05:43:15 > Forwarding [GET] to /function/hello-python
2017/10/30 05:43:15 < [http://faas-netesd.openfaas.svc.domain-k8s.tld:8080/function/hello-python] - 200 took 0.004765 seconds
2017/10/30 05:43:15 function=hello-python
GetHeaderCode before 200
2017/10/30 05:43:15 < [http://faas-netesd.openfaas.svc.domain-k8s.tld:8080/function/hello-python] - 200 took 0.004765 seconds
2017/10/30 05:43:15 function=hello-python
GetHeaderCode before 200

EDIT :

I may have found the cause of the problem.

faas-netesd logs  :

2017/10/30 05:48:31 Post http://:8080/function/hello-python: EOF
2017/10/30 05:48:31 [1509342018] took 493.257782 seconds

It seems faas-netesd tries to use the ClusterIP, but in an IPv6-only cluster the ClusterIP is useless.

Is there a way to use the Service's Endpoints instead of its ClusterIP to reach the function (like nginx-ingress does)?
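
The malformed address in the faas-netesd log (`Post http://:8080/function/hello-python`) is consistent with an empty ClusterIP string being interpolated into the proxy URL. A minimal sketch of the effect (this is illustrative, not the actual faas-netes code):

```go
package main

import "fmt"

// buildFunctionURL sketches how a proxy URL might be composed from a
// service address. If the service's ClusterIP field is empty, as it can be
// on an IPv6-only cluster, the result is the malformed "http://:8080/..."
// address seen in the faas-netesd log above.
func buildFunctionURL(clusterIP string, port int, function string) string {
	return fmt.Sprintf("http://%s:%d/function/%s", clusterIP, port, function)
}

func main() {
	fmt.Println(buildFunctionURL("", 8080, "hello-python"))
}
```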

Thanks for OpenFaaS! Pretty awesome project.

Typo in HELM.md

At the bottom of the list of additional Helm chart options, there is a missing bullet point.

### Additional OpenFaaS Helm chart options:

* `functionNamespace=defaults` - to the deployed namespace, kube namespace to create function deployments in
* `async=true/false` - defaults to false, deploy nats if true
* `armhf=true/false` - defaults to false, use arm images if true (missing images for async)
* `exposeServices=true/false` - defaults to true, will always create `ClusterIP` services, and expose `NodePorts/LoadBalancer` if true (based on serviceType)
* `serviceType=NodePort/LoadBalancer` - defaults to NodePort, type of external service to use when exposeServices is set to true
* `rbac=true/false` - defaults to true, if true create roles
`ingress.enabled=true/false` - defaults to false, set to true to create ingress resources. See openfaas/values.yaml for detailed Ingress configuration.

`ingress.enabled=true/false` should have a `*` in front of it.

Enable NATS Streaming for async by default

Suggestion from @stefanprodan - to enable NATS Streaming for async by default.

Expected Behaviour

The deployment already includes NATS Streaming for async behaviour.

Current Behaviour

Optional via helm or by using separate YAML

Possible Solution

Bake it in, but document how to swap/remove in markdown/docs.

[Proposal] New Secret management in faas-netes

Context

Currently, faas-netes assumes that all secrets are related to Docker credentials, specifically that they should be mounted as ImagePullSecrets. See the current deploy handler, where it sets each named secret as an image pull secret:

	imagePullSecrets := []apiv1.LocalObjectReference{}
	for _, secret := range request.Secrets {
		imagePullSecrets = append(imagePullSecrets,
			apiv1.LocalObjectReference{
				Name: secret,
			})
	}

Image pull secrets are a special use case, but they do not actually let functions use secrets for their own unique needs, for example DB credentials. This functionality has already been implemented for Docker Swarm; see pull request 292.
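
For reference, image pull secrets attach to a pod spec like this (a minimal sketch; the image and secret names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: registry.example.com/private/image:latest
  imagePullSecrets:
  - name: regcred
```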

Proposal

Kubernetes allows for several configuration options when mounting and using secrets, more than Swarm does. I propose that we deploy functions so that the usage in Swarm and K8S is identical. Specifically, this means that secrets are deployed so that they are mounted as files at /run/secrets.

Per the documentation on kubernetes.io, this can be implemented as follows:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/run/secrets"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret

Implementing this requires a potentially backwards-incompatible change or awkward naming going forward. Concretely, I propose that:

  1. The request.Secrets field should be reserved for function secrets.

  2. When creating the DeploymentSpec for the function, the secrets list should be added to the volumes list so that the secrets are mounted as files at /run/secrets; see the example YAML above. In the source code this would look roughly like

    Containers: []Container{
        {
            VolumeMounts: []VolumeMount{
                {
                    Name:      "Secret1",
                    ReadOnly:  true,
                    MountPath: "/run/secrets",
                },
            },
        },
    },
    Volumes: []Volume{
        {
            Name: "Secret1",
            Secret: &SecretVolumeSource{
                SecretName: "Secret1",
            },
        },
    },
  3. Registry credential secrets should be passed in a new request.RegistryAuth field; this would again be a list of strings, e.g.

    RegistryAuth:
      - dockerHub
      - internalHub
      - internal-gcr
      - internal-ecr

    this would allow Kubernetes to pull from multiple private registries and is already supported by the Kubernetes API.

Problems

I believe this proposal will break the current support for ImagePullSecrets. Alternatively, we could keep supporting ImagePullSecrets via request.Secrets and add a new request.FunctionSecrets for secrets that are mounted as files, as described above. This would require refactoring the secrets support in the core gateway, and I believe it also introduces awkward naming. Consider this table of how the concepts translate between the platforms we currently support:

|               | OpenFaaS        | Swarm        | k8s              |
| ------------- | --------------- | ------------ | ---------------- |
| Registry Auth | secrets         | docker login | imagePullSecrets |
| Secret value  | functionSecrets | secrets      | secrets          |

It feels like using a new name for secrets mounted as files would be confusing in the long term.

The trickiest part of this proposal is agreeing on an API and the potential break in backwards compatibility.

Question: GKE use-case

Current Behaviour

I used Helm to set up the OpenFaaS stack and configured my function with:

gateway: http://localhost:8001/api/v1/proxy/namespaces/default/services/gateway:8080

I deployed it, but when I navigate to https://EXTERNAL_CLUSTER_IP/system/functions/FUNCTION_NAME
I get the following error:

User "system:anonymous" cannot get path "/system/functions/FUNCTION_NAME".: "No policy matched.\nUnknown user \"system:anonymous\""

Context

I'm trying to deploy a Telegram bot that uses a webhook.

I'm new to Kubernetes and I've had a really hard time deploying to GKE. I think we could all benefit from more resources covering this combination (GKE + faas-netes).

Thanks!

Async function could not be called when installed with helm and functionNamespace set.

Expected Behaviour

Deployed via the Helm chart with functionNamespace set to 'openfaas-fn', async functions should be called by the queue-worker.

Current Behaviour

The async function could not be called, as the function service could not be resolved.

Possible Solution

Define the faas_function_suffix environment variable for the queue-worker in the Helm chart.

Steps to Reproduce (for bugs)

  1. Install by helm, set functionNamespace
    helm upgrade --dry-run --install --debug --namespace openfaas --reset-values --set async=true --set ingress.enabled=true --set functionNamespace=openfaas-fn openfaas openfaas/
  2. Create a function
  3. Call as async function
  4. Check log of queue-worker pod,
    Post http://[function-name]:8080/ failed

Context

Your Environment

  • Docker version docker version (e.g. Docker 17.0.05 ):
    Client:
    Version: 17.10.0-ce
    API version: 1.30 (downgraded from 1.33)
    Go version: go1.9.1
    Git commit: f4ffd25
    Built: unknown-buildtime
    OS/Arch: darwin/amd64

Server:
Version: 17.06.1-ce
API version: 1.30 (minimum version 1.12)
Go version: go1.8.3
Git commit: 874a737
Built: Thu Aug 17 22:54:55 2017
OS/Arch: linux/amd64
Experimental: false

  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    FaaS-netes

  • Operating System and version (e.g. Linux, Windows, MacOS):
    Ubuntu Server

  • Link to your project or a code example to reproduce issue:

Add CPU based constraints

Memory-based constraints are already enabled in the k8s provider; we should extend this functionality to add CPU-based constraints, which are already defined in the gateway.
https://github.com/openfaas/faas/blob/613ac888cdb6ffad2dcbef90282f1eea90ce85a3/gateway/requests/requests.go#L47

Expected Behaviour

When a new function is deployed or a replica is created any cpu constraints set in the stack file should be applied to the deployment spec.

Current Behaviour

Currently only memory is applied; CPU is ignored.

Possible Solution

Amend func createResources(request requests.CreateFunctionRequest) to handle CPU limits passed from the gateway.
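
A minimal sketch of the mapping, assuming the gateway's FunctionResources shape from the linked requests.go (field names are a sketch, not a verbatim copy); the real implementation would build apiv1.ResourceRequirements, but the logic is the same:

```go
package main

import "fmt"

// FunctionResources mirrors the gateway request shape from requests.go
// (shown as a sketch here).
type FunctionResources struct {
	Memory string
	CPU    string
}

// resourceLimits converts the gateway request into the limits map that
// would be placed on the Kubernetes container spec, skipping empty values.
func resourceLimits(r *FunctionResources) map[string]string {
	limits := map[string]string{}
	if r == nil {
		return limits
	}
	if r.Memory != "" {
		limits["memory"] = r.Memory
	}
	if r.CPU != "" {
		limits["cpu"] = r.CPU // previously ignored; now applied
	}
	return limits
}

func main() {
	fmt.Println(resourceLimits(&FunctionResources{Memory: "256Mi", CPU: "100m"}))
}
```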

armhf (Raspberry Pi): AlertManager not ported yet

Expected Behaviour

Running

kubectl apply -f ./faas.armhf.yml,monitoring.armhf.yml,rbac.yml

should bring up pods on Raspberry Pi k8s cluster.

Current Behaviour

The alertmanager pod goes into CrashLoopBackOff.

HypriotOS/armv7: pirate@navi in ~
$ kubectl get pods -o wide
NAME                            READY     STATUS             RESTARTS   AGE       IP           NODE
alertmanager-2609462557-bf54n   0/1       CrashLoopBackOff   142        11h       10.244.2.5   tatl
faas-netesd-1317931779-wk4gw    1/1       Running            0          11h       10.244.2.3   tatl
gateway-1085489343-cgs53        1/1       Running            0          11h       10.244.1.5   tael
prometheus-4259297277-qhj0b     1/1       Running            0          11h       10.244.2.4   tatl

The alertmanager pod log shows a missing /alertmanager.yml:

HypriotOS/armv7: pirate@navi in ~      
$ kubectl logs alertmanager-2609462557-bf54n
time="2017-09-19T18:50:52Z" level=info msg="Starting alertmanager (version=0.5.1, branch=master, revision=0ea1cac51e6a620ec09d053f0484b97932b5c902)" source="main.go:101" 
time="2017-09-19T18:50:52Z" level=info msg="Build context (go=go1.7.3, user=root@fb407787b8bf, date=20161125-08:19:00)" source="main.go:102" 
time="2017-09-19T18:50:52Z" level=info msg="Loading configuration file" file="/alertmanager.yml" source="main.go:195" 
time="2017-09-19T18:50:52Z" level=error msg="Loading configuration file failed: open /alertmanager.yml: no such file or directory" file="/alertmanager.yml" source="main.go:198"

Possible Solution

Update https://github.com/alexellis/faas-netes/blob/master/monitoring.armhf.yml#L68 to

command: ["/bin/alertmanager","-config.file=/etc/alertmanager/config.yml", "-storage.path=/alertmanager"]

Steps to Reproduce (for bugs)

  1. Set up a Raspberry Pi k8s cluster
  2. kubectl apply -f ./faas.armhf.yml,monitoring.armhf.yml,rbac.yml

Context

Following example doc: https://github.com/alexellis/faas/blob/master/guide/deployment_k8s.md

Your Environment

  • Docker version docker version (e.g. Docker 17.0.05 ):
$ docker version
Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:30:54 2017
 OS/Arch:      linux/arm

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:30:54 2017
 OS/Arch:      linux/arm
 Experimental: false
  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Kubernetes v1.7.5 (FaaS-netes)

  • Operating System and version (e.g. Linux, Windows, MacOS):
    HypriotOS v1.5.0 on Raspberry Pi 3

OpenFaaS components should not be "BestEffort" pod

A BestEffort pod (one with no resource definitions) is the first candidate for eviction when a node is under resource pressure. This is never recommended for system pods.

Expected Behaviour

FaaS components should all have at least resource limits.

Current Behaviour

FaaS components currently have no resource limits.

Possible Solution

Add resource limit to Deployment.

Steps to Reproduce (for bugs)

  1. Create a kubernetes cluster on small size node
  2. Enable pod eviction
  3. faas-netes will be evicted to make room for other high priority pods

Context

I have to patch faas-netes manually.

https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes
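
Per the linked QoS documentation, a pod is classed Guaranteed only when every container sets limits equal to requests. A sketch for one of the OpenFaaS components (the values are hypothetical, not tuned recommendations):

```yaml
resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "100m"
    memory: "128Mi"
```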

Your Environment

  • Docker version docker version (e.g. Docker 17.0.05 ):
    Docker 1.12.6
  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Kubernetes 1.8.2
  • Operating System and version (e.g. Linux, Windows, MacOS):
    ARM64 Linux
  • Link to your project or a code example to reproduce issue:

HELM - Tiller Component Installation Error

Expected Behaviour

Install Server Side Tiller Component Via:
$ helm init --skip-refresh --upgrade --service-account tiller

Current Behaviour

Installation failed:
$ sudo helm init --skip-refresh --upgrade --service-account tiller
/usr/local/bin/helm: 1: /usr/local/bin/helm: ELF: not found
/usr/local/bin/helm: 2: /usr/local/bin/helm: }▒: not found
/usr/local/bin/helm: 8: /usr/local/bin/helm: Syntax error: "(" unexpected

Your Environment

  • Docker version docker version (e.g. Docker 17.0.05 ):
  • Docker 17.11ce
  • Kubernetes 1.8.4

Are you using Docker Swarm or Kubernetes (FaaS-netes)?

  • Kubernetes

Operating System and version (e.g. Linux, Windows, MacOS):

  • Linux/arm

Proposal: enable private Hub repos via Kubernetes secrets

Expected Behaviour

At deploy time we should be able to specify an additional secret name so that private Docker Hub images can be pulled.

Current Behaviour

This behaviour is supported in the Swarm provider, but unsupported in this provider.

  • Use public images

  • You can work around this with a private registry hosted in the cluster.

Possible Solution

Create secrets ahead of time with kubectl.

Add metadata when deploying via the CLI to refer to the desired secret name
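
Creating such a secret ahead of time could look like this (a sketch using kubectl's docker-registry secret type; the secret name and credentials are placeholders):

```shell
kubectl create secret docker-registry dockerhub \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email>
```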

Steps to Reproduce (for bugs)

  1. Re-tag alpine:3.6
  2. Push to Docker Hub
  3. Deploy as function
  4. Mark private
  5. Repeat: the image for this function will not be pulled.

General Secret Management

This proposal is inspired by the Docker Swarm secret management but should be generalizable to Kubernetes Secrets and possibly other third-party secret management systems.

Summary

It is common for a function to need to connect to a database or a secure API. The username and password values used to connect to these data sources should be kept secret and secured. Currently these values are either hardcoded into functions or provided as environment variables. This means the values are not encrypted at rest, and it very likely leads to secret values being checked into git repositories, putting these sensitive values at risk.

Both Docker Swarm and Kubernetes have built-in secret management systems. In particular, both provide an API for managing secrets and support mounting those secrets into services as files. Kubernetes is the more flexible, allowing you to specify the exact path where these files are mounted. Docker Swarm, on the other hand, strictly mounts secrets at /run/secrets inside the containers.

Expected behavior

OpenFaaS should support defining services that access secret values from an encrypted store.

Current Behavior

Secrets can be provided via environment variables, but these values are not encrypted at rest.

Possible solution

The end user should be able to provide secrets to the orchestration layer (docker swarm or kubernetes) and then reference those same secrets in the function. The creator of a function would simply provide a list of secrets that are required for the function during the function creation (or in the stack yaml for the cli) and the orchestration layer would be responsible for mounting those secrets as files in a standard location.

Specifically,

  1. We should standardize functions to expect secret files mounted in /run/secrets.
  2. Secret values will be created and managed independently from functions via the orchestration tool
  3. When creating a new function service, the API accepts a list of secret names as strings. The gateway then passes this forward to the orchestration layer when defining the new function service/pod.
// CreateFunctionRequest create a function in the swarm.
type CreateFunctionRequest struct {
    // Service corresponds to a Docker Service
    Service string `json:"service"`

    // Image corresponds to a Docker image
    Image string `json:"image"`

    // Network is specific to Docker Swarm - default overlay network is: func_functions
    Network string `json:"network"`

    // EnvProcess corresponds to the fprocess variable for your container watchdog.
    EnvProcess string `json:"envProcess"`

    // EnvVars provides overrides for functions.
    EnvVars map[string]string `json:"envVars"`

    // Secrets is a list of secrets required for the orchestration layer to provide
    Secrets []string `json:"secrets"`

    // RegistryAuth is the registry authentication (optional)
    // in the same encoded format as Docker native credentials
    // (see ~/.docker/config.json)
    RegistryAuth string `json:"registryAuth,omitempty"`

    // Constraints are specific to back-end orchestration platform
    Constraints []string `json:"constraints"`
}

Context

I want to use this functionality specifically to securely store database and api access credentials.

Disable Prometheus scraping

We should prevent Prometheus from scraping function services; this can be done by adding an annotation to the service metadata.

Expected Behaviour

Since the watchdog doesn't expose a /metrics endpoint, Prometheus should not try to scrape any function.

Current Behaviour

If Prometheus is configured with Kubernetes service discovery, it will try to scrape the functions, resulting in function errors and high resource usage.

Possible Solution

On function deploy, add the prometheus.io/scrape: "false" annotation to the function service.
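
The service metadata would then carry the annotation along these lines (a sketch; note the conventional key used by Prometheus Kubernetes SD configs uses a slash):

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "false"
```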

Context

The colorisebot will run into OOM after multiple calls to /metrics.

deploy fail on kubernetes: kubectl apply fail

Expected Behaviour

Successfully deploy faas-netes on Kubernetes following the deployment_k8s.md guide.

Current Behaviour

The deploy fails with the error message below:
`kubectl apply -f ./faas.yml,monitoring.yml,rbac.yml

service "faas-netesd" configured
serviceaccount "faas-controller" configured
deployment "faas-netesd" configured
service "gateway" configured
deployment "gateway" configured
service "prometheus" configured
deployment "prometheus" configured
service "alertmanager" configured
deployment "alertmanager" configured
clusterrolebinding "faas-controller" configured
Error from server (Forbidden): error when creating "rbac.yml": clusterroles.rbac.authorization.k8s.io "faas-controller" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["get"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["create"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["delete"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["update"]}] user=&{[email protected] [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/" "/apis" "/apis/" "/healthz" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]`

Possible Solution

It's similar to #41, but I'm not using Helm or minikube.
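
One commonly documented workaround for this class of error (your user cannot grant privileges it does not itself hold) is to bind your own account to cluster-admin first; the binding name and account below are placeholders:

```shell
kubectl create clusterrolebinding my-cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=<your-account-email>
```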

Steps to Reproduce (for bugs)

execute kubectl apply -f ./faas.yml,monitoring.yml,rbac.yml

Context

I have an image processing function and want to use the FaaS framework to trigger it from my web service, but I'm stuck at the deploy stage... The key error is: error when creating "rbac.yml": clusterroles.rbac.authorization.k8s.io "faas-controller" is forbidden: attempt to grant extra privileges

I'm new to Kubernetes, and I'm not sure how to grant the required privileges on my cluster.

Your Environment

  • Docker version docker version (e.g. Docker 17.0.05 ):
    docker version
    Client:
    Version: 17.09.0-ce
    API version: 1.32
    Go version: go1.8.3
    Git commit: afdb6d4
    Built: Tue Sep 26 22:40:09 2017
    OS/Arch: darwin/amd64

Server:
Version: 17.09.0-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:45:38 2017
OS/Arch: linux/amd64
Experimental: true

  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Kubernetes (FaaS-netes)
  • Operating System and version (e.g. Linux, Windows, MacOS):
    MacOS 10.12.5
  • Link to your project or a code example to reproduce issue:

add e2e test

I will add e2e tests using minikube.


Proposal: CRD/controller to manage functions

Create a CRD (Custom Resource Definition) which represents the Function abstraction, plus a controller to reconcile its state.

A Function would internally create the existing pair of:

  • Service
  • Deployment

The faas-netesd controller would be responsible for accepting the Function definition via the existing RESTful API. Then, rather than creating/updating/deleting Services and Deployments directly, we'd create/update/delete Functions.

The controller would then be responsible for CRUD on Services/Deployments.

Other concerns include:

  • migration to users with existing deployments of OpenFaaS
  • maintaining backwards compatibility, i.e. from which minimum Kubernetes version are CRDs available?
  • maintaining a least-privilege scope for RBAC

Status

Level: advanced
Status: design/PoC

Initial work / artifacts

Initial work / artifacts should begin to define and scope the work only.

Please create architectural diagrams to support design.

Proposal: configure memory/CPU limits upon deployment

Expected Behaviour

This needs to sync with the main faas project, but the idea is to provide memory/CPU limits for functions as we create their deployment spec through the Kubernetes API.

Current Behaviour

Unbounded.

Possible Solution

Add to basic Create type in faas project.

Pass values onto the spec when doing a create/update.

Helm and RBAC question

On a fresh 1.7.5 minikube, helm fails to deploy a release of openfaas due to the included RBAC rules, with the error below.

Expected Behaviour

helm install --name openfaas --set async=true ./openfaas/ should return the release details.

Current Behaviour

Error: release openfaas failed: clusterroles.rbac.authorization.k8s.io "faas-controller" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["get"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["create"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["delete"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["update"]}] user=&{system:serviceaccount:kube-system:tiller acb814ed-ab9a-11e7-ab23-1673569d708d [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]

Possible Solution

You can still install openfaas via Helm skipping use of RBAC with
helm install --name openfaas --set async=true --set rbac=false ./openfaas/
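
If you do want RBAC enabled, the usual fix (per the Helm v2 RBAC documentation) is to give Tiller a service account with sufficient privileges before installing; a sketch:

```shell
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
```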

Steps to Reproduce (for bugs)

  1. install minikube and start a new cluster (1.7.5)
  2. follow the HELM.md steps to install Helm and Tiller with RBAC

Context

This is preventing use of openfaas via Helm with default settings

Your Environment

  • Docker version docker version (e.g. Docker 17.0.05 ):
    Version: 17.09.0-ce

  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Faas-Netes

  • Operating System and version (e.g. Linux, Windows, MacOS):
    MacOS

How to set a function's resource (request/limit) for pod memory

I have an image processing function that needs 13G of memory. I created a cluster
with 16G of memory across 3 nodes, but I cannot find a way to configure my function
pod's memory or swap size. And if several requests arrive at the same time, how do I
configure things so that each function call has enough memory to complete the job?

Expected Behaviour

The image processing job has enough memory (13G) to finish and upload the result.

Current Behaviour

Since faas-cli uses default settings to deploy functions, the function's pod YAML only sets a CPU resource request of 100m. This causes the image processing job to fail with an out-of-memory error. I observe that the pod only uses 40M of memory and 0.419 CPU cores.

Possible Solution

Does faas-cli or faas-netes have a command to set the deployed function's memory/swap size? Or can I manually set the function pod's memory? If the function's pod YAML carried memory requests and limits, large jobs could complete safely.
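
Until this is exposed through the deploy API, one manual workaround is to edit the function's Deployment (kubectl edit deployment <function-name>) and set the container resources yourself; a sketch (note that Kubernetes pods are limited to RAM; swap is not managed):

```yaml
resources:
  requests:
    memory: "13Gi"
  limits:
    memory: "13Gi"
```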

Context

I can run my image processing function in local Docker with a large swap. I tried to enlarge my cluster's memory, but the root cause is that the function's pod can only use 40M of memory and 0.419 CPU cores per job request. Checking the pod's YAML, it shows resources->requests->cpu: 100m. I guess this is created by the faas-cli deploy command or default faas-netes settings. Can I use faas-cli or change the faas-netes settings to increase my function pod's memory?

Your Environment

  • Docker version docker version (e.g. Docker 17.0.05 ):
    Client:
    Version: 17.09.0-ce
    API version: 1.32
    Go version: go1.8.3
    Git commit: afdb6d4
    Built: Tue Sep 26 22:40:09 2017
    OS/Arch: darwin/amd64
    Server:
    Version: 17.09.0-ce
    API version: 1.32 (minimum version 1.12)
    Go version: go1.8.3
    Git commit: afdb6d4
    Built: Tue Sep 26 22:45:38 2017
    OS/Arch: linux/amd64
    Experimental: true
  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Kubernetes (FaaS-netes)
  • Operating System and version (e.g. Linux, Windows, MacOS):
    Local: MacOS 10.12.5, remote: GCP k8s
  • Link to your project or a code example to reproduce issue:

Fine-tune RBAC permissions

It would be great to fine-tune the RBAC permissions, which are currently broader than they need to be. Hoping for a PR from @luxas soon.

Proposal: Refactor custom limits for maximum auto-scaling limits

We currently have a max number of replicas for any function, and in Swarm we also support overriding it. This issue is about providing labels that can be passed at deploy time to specify the minimum and maximum number of replicas.

  • If min equals max, this disables auto-scaling on a per-function basis
  • This should be agnostic to Kubernetes, Swarm or any other back-end
  • We may have to expose labels through the GET /system/functions endpoint which currently lists off deployed functions.
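
One possible shape for such labels in the stack file, using hypothetical label names (the final names would need agreement across back-ends):

```yaml
functions:
  figlet:
    labels:
      com.openfaas.scale.min: "2"
      com.openfaas.scale.max: "15"
```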

Proposal: Introduce a .DEREK.yml file for managing maintainer permissions

Expected Behaviour

The latest version of Derek uses yaml format for managing maintainers and features. The repo should include a .DEREK.yml to enable the use of the latest version of Derek.

Current Behaviour

Derek is still wearing flares & a tank top and relies on MAINTAINERS.

Possible Solution

Add a .DEREK.yml which includes the current maintainers and includes comments and dco_check as enabled features.

Context

Derek is evolving, the project must too.

Your Environment

N/A

add NOTES.txt in Charts

Expected Behaviour

The Helm chart should include a NOTES.txt so that helm install prints post-install usage notes.


Enable variable timeouts for HTTP

This matches the API gateway in the faas repo:

read_timeout - seconds as an integer
write_timeout - seconds as an integer

The code can be copied from read_config.go
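A sketch of the parsing behaviour, in Python for illustration (read_config.go in the faas repo is the Go original); the default of 8 seconds is an assumption:

```python
import os

# Sketch: read read_timeout / write_timeout as integer seconds from the
# environment, falling back to a default (8s here is an assumed value).
def read_timeout_config(env=os.environ, default=8):
    def seconds(key):
        raw = env.get(key, "")
        return int(raw) if raw.isdigit() else default
    return {
        "read_timeout": seconds("read_timeout"),
        "write_timeout": seconds("write_timeout"),
    }
```

Treating any non-numeric value as the default keeps a malformed env-var from crashing the provider at start-up.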

Proposal: for non-helm deployment - use openfaas ns

Helm allows us to specify namespaces:

For OpenFaaS system components: openfaas
For OpenFaaS functions: openfaas-fn

Let's leverage the changes we made for helm for our default set of YAML files including the mounted configs for Prometheus/alertmanager.

Adjust function name regex - to allow DNS-1123

Deployment, pod and service names must conform to the DNS-1123 label requirement: they must consist of lower-case alphanumeric characters or '-', and must start and end with an alphanumeric character. The regular expression Kubernetes uses to validate names is '[a-z0-9]([-a-z0-9]*[a-z0-9])?'. If you use a deployment name that does not validate, an error is returned.

Expected Behaviour

"function1" should be valid, but isn't with our current regex.

Current Behaviour

Names containing digits are rejected; you must type something like functiontwo instead.

Possible Solution

Alter regex and unit tests in project.

Steps to Reproduce (for bugs)

faas-cli deploy --image functions/alpine --name function1 --fprocess=env

Start here:

https://github.com/openfaas/faas-netes/blob/master/handlers/deploy.go#L29
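A Python sketch of the validation the deploy handler should perform (the handler itself is Go); the regex is the standard DNS-1123 label pattern and the 63-character cap comes from the same spec:

```python
import re

# DNS-1123 label: lower-case alphanumerics or '-', must start and end
# with an alphanumeric, and be at most 63 characters long.
DNS_1123_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def valid_function_name(name):
    return len(name) <= 63 and DNS_1123_LABEL.match(name) is not None
```

With this pattern "function1" validates, covering the exact case the issue reports.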

Acceptance Criteria:

  • Must have unit test coverage for positive and negative scenarios

  • Must be tested on Kubernetes 1.8 (with console output pasted into the PR)

  • Must pass CI etc.

  • Valid RegEx needs to be documented in the readme and in the troubleshooting guide in the main faas repo.

Potential issue with env-vars and Deployment v1beta1

I believe there is a potential issue with env-vars and Deployment v1beta1 when testing on Kubernetes 1.6.7 with RBAC enabled.

Expected Behaviour

Rolling updates should honour env-vars set in the container spec

Current Behaviour

They're not honouring it.

Possible Solution

Code is present; we may need to try different versions of Kubernetes to see if that resolves things.

https://github.com/openfaas/faas-netes/blob/master/handlers/update.go#L44

Steps to Reproduce (for bugs)

Deploy:

faas-cli deploy --image functions/alpine:latest --fprocess="env" --name env-tester \
 --env env1=true

Invoke it and you'll see the env1 set to "true"

Now do a rolling-update:

faas-cli deploy --image functions/alpine:latest --fprocess="env" --name env-tester \
 --env env1=false --update=true --replace=false

In my configuration I saw that Kubernetes did not honour the env-vars and env1 was still true.
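The intended update semantics can be sketched in Python (the real update handler linked above is Go): the new env list should replace the old one wholesale, so a changed value such as env1=false wins over what is baked into the running spec.

```python
# Sketch: a rolling update should overwrite the container's env list,
# not merge with the existing one, so stale values cannot survive.
def apply_env_update(container_spec, new_env):
    spec = dict(container_spec)
    spec["env"] = [{"name": k, "value": v} for k, v in sorted(new_env.items())]
    return spec
```

If the handler instead left the old list untouched when building the patched Deployment, Kubernetes would see no spec change, which matches the behaviour observed in this issue.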

Update with cached images

It looks like this flow is not working as expected:

  • build
  • push
  • deploy

(Looks OK)

  • edit ->
  • build
  • push
  • deploy

(Change is not reflected in the pod)

Work-around

  • If you run kubectl delete manually on the svc/deployment it seems to fix when deployed again.

  • Alternatively - use a unique tag name for each push
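The second work-around can be sketched as deriving a unique tag per build, for example from a short git SHA supplied by the caller. A Python sketch under the assumption that the image reference has a simple name:tag form (no registry port):

```python
# Sketch: retag an image per build so the Deployment spec changes and
# Kubernetes rolls the pod instead of reusing the cached image.
# Naive split; assumes no registry port appears in the image name.
def unique_tag(image, short_sha):
    base = image.split(":")[0]
    return f"{base}:{short_sha}"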

Warning when deploying function

When I perform a faas-cli deploy -f function.xml against my Kubernetes cluster, I receive the message "Server returned unexpected status code 500 deployment.extension 'function name' not found". The function works fine and there are no issues using it; it appears to be a cosmetic error, but I can't determine its origin.

Expected Behaviour

I would expect the functions to deploy without the error message displaying OR provide more details around why the error was occurring.

Current Behaviour

When I perform a faas-cli deploy -f function.xml against my Kubernetes cluster, I receive the message "Server returned unexpected status code 500 deployment.extension 'function name' not found".

Possible Solution

Provide additional details regarding the error message that's being triggered

Steps to Reproduce (for bugs)

  1. From a command line, run faas-cli deploy -f functions.xml
  2. Functions deploy and message is thrown
  3. Calling functions is successful with no errors displayed

Context

Your Environment

  • Docker version docker version (e.g. Docker 17.0.05 ):

  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Faas-Netes

  • Operating System and version (e.g. Linux, Windows, MacOS):
    CentOS 7

  • Link to your project or a code example to reproduce issue:
    Examples of the code im deploying are here - https://github.com/codyde/faas-functions/

Feature: Offer native Horizontal Pod Autoscaling as an alternative to alert-manager auto-scaling

We should offer Horizontal Pod Autoscaling as an alternative to alert-manager+Prometheus auto-scaling for Kubernetes.

https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

cc/ @stefanprodan

Current scaling solution pros/cons

Solution

Auto-scaling currently works through the API Gateway so is automatic for any provider and can be tweaked with min/max scaling numbers and also varying alerts - if you want you can include NodeExporter metrics.

  • Built-in
  • Agnostic to provider

HPA2 pros/cons

  • HPA2 is Kubernetes native
  • Will not work for Swarm or other back-ends
  • Harder to configure up front

A feature flag or env-var would allow us to toggle between the two.
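One way the toggle could look, sketched in Python (the gateway is Go); the flag name `use_hpa` and the mode strings are assumptions for illustration:

```python
import os

# Sketch: a single env-var feature flag choosing the active scaler.
# "use_hpa" is a hypothetical flag name, not an existing setting.
def scaler_mode(env=os.environ):
    flag = env.get("use_hpa", "false").lower()
    return "hpa" if flag in ("1", "true", "yes") else "alertmanager"
```

Defaulting to the alert-manager path keeps existing deployments unchanged unless the flag is set explicitly.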

Remove command does not remove replicaset and pod objects

Prior to Kubernetes v1.7, deletion basically requires four separate Kubernetes API requests: the first scales the number of pods down to zero, the second removes the deployment object, the third removes the replicaset object, and the last removes the service object (exactly how kubectl works). This method leaves no orphaned objects behind; it effectively performs a proper GC after deletion.

On the other hand, on older versions of Kubernetes, by default only the deployment and service objects are removed, while all other subordinate objects (replicaset and pods) keep running on the cluster; this is definitely some kind of GC issue.
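The four-step deletion described above can be sketched as an ordered plan (Python for illustration; faas-netesd itself is Go, and the operation tuples are a hypothetical representation):

```python
# Sketch: the explicit deletion order needed on clusters older than v1.7,
# where cascading deletes do not clean up subordinate objects.
def deletion_plan(function_name):
    return [
        ("scale", "deployment", function_name, 0),   # scale pods to zero
        ("delete", "deployment", function_name),
        ("delete", "replicaset", function_name),
        ("delete", "service", function_name),
    ]
```

Executing the steps in this order mirrors kubectl's behaviour and ensures no replicaset or pod is orphaned.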

This issue has already been addressed for Kubernetes >=v1.7 #44058.

That said, it would be nice if you could make it backwards-compatible by applying the proposed changes to the faas-netesd code.

Thanks

Add homebrew as install option for Helm

Update Readme to include Homebrew option

Expected Behaviour

N/A not a code change

Current Behaviour

N/A not a code change

Possible Solution

N/A not a code change

Steps to Reproduce (for bugs)

N/A not a code change

Context

The current steps install Helm manually, even though a Homebrew package is also available.

Your Environment

MacOS

  • Link to your project or a code example to reproduce issue:

Include Helm Chart Repository

This would allow for easier deployment with Helm, and would also allow for blazing fast deployment with tools such as Kubeapps, which pull charts from repositories.

Explicitly define the DNS service domain to reduce DNS lookups

Hi Alex,

Been working to run faas-netes as a demo on top of http://play-with-k8s.com

While debugging some DNS failures, I noticed the faas gateway was swamping the DNS server with requests for possible permutations of a k8s or AWS service domain, i.e.:

.default.svc.cluster.local
.svc.cluster.local
.cluster.local
.ec2.internal

See below for more tcpdump examples.

I thought setting function_provider_url in faas.yml to simply faas-netesd (the first name the lookups try) would solve this and avoid looking up any of the other domains. But it seems it's not just for finding faas-netesd itself: the same hunting behaviour is then seen for each configured function when hit via the gateway;

15:41:53.756886 IP (tos 0x0, ttl 64, id 34988, offset 0, flags [DF], proto UDP (17), length 181)
    10.32.0.3.domain > 10.44.0.0.46904: [bad udp cksum 0x1501 -> 0xb1b0!] 58910 NXDomain q: AAAA? urlping2.default.default.svc.cluster.local. 0/1/0 ns: cluster.local. SOA ns.dns.cluster.local. hostmaster.cluster.local. 1502290800 28800 7200 604800 60 (153)

Would it be possible to add an env: item into faas.yml such as:

faas_service_lookup_domain, which in my case would be set to default.svc.cluster.local, in order to prevent this try-em-all behaviour.

I ask specifically because PWK has a default limit of 150 concurrent DNS queries, which this behaviour is DoS-ing, and I think PWK would make a nice testbed for teaching / using FaaS.

Thanks.
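The fix the env-var would enable can be sketched as follows (Python for illustration; the gateway is Go). The trailing dot makes the name fully qualified so the resolver skips its search-domain list instead of trying every permutation; the default domain here matches the one proposed above.

```python
# Sketch: build a fully qualified DNS name (trailing dot) so the resolver
# does not walk the search-domain list for every lookup.
def qualified(name, lookup_domain="default.svc.cluster.local"):
    if name.endswith("."):
        return name  # already fully qualified
    return f"{name}.{lookup_domain}."
```

A single FQDN lookup per function would keep the gateway well under PWK's 150-query limit.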
