kubeflow / kfp-tekton

Kubeflow Pipelines on Tekton

Home Page: https://developer.ibm.com/blogs/kubeflow-pipelines-with-tekton-and-watson/

License: Apache License 2.0

kubeflow-pipeline tekton tekton-pipelines mlops kubeflow python dsl hacktoberfest

kfp-tekton's Introduction

Kubeflow Pipelines on Tekton

A project bringing Kubeflow Pipelines and Tekton together. The project is driven by this design doc. The current code allows you to run Kubeflow Pipelines with a Tekton backend end to end.

  • Create your Pipeline using Kubeflow Pipelines DSL, and compile it to Tekton YAML.
  • Upload the compiled Tekton YAML to KFP engine (API and UI), and run end to end with logging and artifacts tracking enabled.
  • In KFP-Tekton V2, the SDK compiler generates the same intermediate representation as the main Kubeflow Pipelines SDK. All the Tekton-related implementations are embedded in the V2 backend API service.

For more details about the project, please follow this detailed blog post. For the latest KFP-Tekton V2 implementation and supported offerings, please follow our latest KubeCon talk and slides. For information on the KFP-Tekton V1 implementation, look at these slides as well as this deep dive presentation for demos.

Architecture

We are currently using Kubeflow Pipelines 1.8.4 and Tekton >= 0.53.2 in the master branch for this project.

For Kubeflow Pipelines 2.0.5 and Tekton >= 0.53.2 integration, please check out the kfp-tekton v2-integration branch and KFP-Tekton V2 deployment instead.

kfp-tekton

Kubeflow Pipelines is a platform for building and deploying portable, scalable machine learning (ML) workflows. More architectural details about Kubeflow Pipelines can be found on the Kubeflow website.

The Tekton Pipelines project provides Kubernetes-style resources for declaring CI/CD-style pipelines. Tekton introduces several Custom Resource Definitions (CRDs), including Task, Pipeline, TaskRun, and PipelineRun. A PipelineRun represents a single running instance of a Pipeline and is responsible for creating a Pod for each of its Tasks and as many containers within each Pod as it has Steps. Please see the Tekton repo for more details.
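For illustration, a minimal Task and a TaskRun that executes it might look like the following sketch (the resource names and image are illustrative, not taken from this repo):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:                      # each step becomes a container in the Pod
  - name: echo
    image: busybox
    script: |
      echo "hello from Tekton"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun                 # a single running instance of the Task above
metadata:
  name: echo-hello-run
spec:
  taskRef:
    name: echo-hello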

Get Started using Kubeflow Pipelines on Tekton

Install Kubeflow Pipelines with Tekton backend

KFP Tekton Pipelines User Guide

Use KFP Tekton SDK

Run Samples

Available KFP DSL Features

Tekton Specific Features

Development Guides

Backend Developer Guide

SDK Developer Guide

Compilation Tests Status Report

Design Guides

Design Doc

KFP, Argo and Tekton Features Comparison

Community

Kubeflow Slack

References

Kubeflow and TFX Pipelines

Kubeflow and TFX Pipelines talk at Tensorflow World

kfp-tekton's People

Contributors

animeshsingh, ckadner, dependabot[bot], drewbutlerbb4, evan-hataishi, fenglixa, huixa, humairak, jfigura, jiaxuanyang, jinchihe, jlewi, jritten, kevinyu98, kfp-tekton-bot, kunalpatel1793, maxdebayser, nikenano, pugangxa, rafalbigaj, rimolive, scrapcodes, shrivs3, tedhtchang, tomcli, udiknedormin, vincent-pli, wzhanw, xawangyd, yhwang


kfp-tekton's Issues

Error during running openshift/pipelines-tutorial

OpenShift Version: 4.3.1
Installed: openshift-pipelines-operator.v0.10.7 (on IBM Cloud)

When I follow this example:

pipeline-tutorial

After starting the pipeline run, when I try to get the logs:

`[build-ui : build] STEP 2: LABEL "io.openshift.s2i.build.image"="registry.access.redhat.com/rhscl/python-36-rhel7" "io.openshift.s2i.build.source-location"="."
[build-ui : build] error building at STEP "LABEL "io.openshift.s2i.build.image" "registry.access.redhat.com/rhscl/python-36-rhel7" "io.openshift.s2i.build.source-location" "."": error ensuring container path "/opt/app-root/src": lstat /var/lib/containers/storage/overlay/262cf5f36861100bf27c9eb5f9baa42f2471984ee129a5f1dd901d81cf3d4cbd/merged/opt: invalid argument

failed to get logs for task build-ui : container step-build has failed : [{"name":"","digest":"","key":"StartedAt","value":"2020-03-18T05:00:49Z","resourceRef":{}}]`

  • the detailed logs:

`tkn pipelinerun logs build-and-deploy-run-wcp2z -f -n pipelines-tutorial
(base) Qianyangs-MBP:kevin-openshift-v4 qianyangyu$ tkn pipeline list
NAME AGE LAST RUN STARTED DURATION STATUS
build-and-deploy 18 minutes ago build-and-deploy-run-wcp2z 1 minute ago --- Running
(base) Qianyangs-MBP:kevin-openshift-v4 qianyangyu$ tkn pipeline logs -f
....

[build-api : git-source-api-repo-zjvxx] {"level":"info","ts":1584507618.453761,"logger":"fallback-logger","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: "KO_DATA_PATH" does not exist or is empty"}
[build-api : git-source-api-repo-zjvxx] {"level":"info","ts":1584507646.4758844,"logger":"fallback-logger","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: "KO_DATA_PATH" does not exist or is empty"}
[build-api : git-source-api-repo-zjvxx] {"level":"info","ts":1584507648.5243325,"logger":"fallback-logger","caller":"git/git.go:102","msg":"Successfully cloned http://github.com/openshift-pipelines/vote-api.git @ master in path /workspace/source"}
[build-api : git-source-api-repo-zjvxx] {"level":"warn","ts":1584507648.5244727,"logger":"fallback-logger","caller":"git/git.go:149","msg":"Unexpected error: creating symlink: symlink /tekton/home/.ssh /root/.ssh: file exists"}
[build-api : git-source-api-repo-zjvxx] {"level":"info","ts":1584507648.6062376,"logger":"fallback-logger","caller":"git/git.go:130","msg":"Successfully initialized and updated submodules in path /workspace/source"}

[build-ui : generate] {"level":"info","ts":1584507622.4569638,"logger":"fallback-logger","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: "KO_DATA_PATH" does not exist or is empty"}
[build-ui : generate] Application dockerfile generated in /gen-source/Dockerfile.gen

[build-api : build] {"level":"info","ts":1584507632.1757002,"logger":"fallback-logger","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: "KO_DATA_PATH" does not exist or is empty"}
[build-api : build] STEP 1: FROM golang:alpine AS builder
[build-api : build] Getting image source signatures
[build-api : build] Copying blob sha256:d909eff282003e2d64af08633f4ae58f8cab4efc0a83b86579b4bbcb0ac90956
[build-api : build] Copying blob sha256:cbb0d8da1b304e1b4f86e0a2fb11185850170e41986ce261dc30ac043c6a4e55
[build-api : build] Copying blob sha256:a50ef8b76e536c1f848f61399fe1e8721531496a1a3501124e2b24f4677f0cd0
[build-api : build] Copying blob sha256:c9b1b535fdd91a9855fb7f82348177e5f019329a58c53c47272962dd60f71fc9
[build-api : build] Copying blob sha256:8b9d9d6824f5457e80af26521acf1c1e52493e7a72889d778eb9bcc5f7eb68c4
[build-api : build] Copying config sha256:51e47ee4db586c983e61a925bea3b7b08f2d7b95718e3bd3fac3da97c1c6325f
[build-api : build] Writing manifest to image destination
[build-api : build] Storing signatures
[build-api : build] STEP 2: WORKDIR /build
[build-api : build] 4b4d17ba12c7cfe122a9b3ed8a5a3687a280d6761547aee55586251584af5321
[build-api : build] STEP 3: ADD . /build/
[build-api : build] 2cb4b995b852e4f1aa3a2fa736d64ad88c8c6b668c53a9b5903f70cf9bf45ce2
[build-api : build] STEP 4: RUN GOOS=linux GARCH=amd64 CGO_ENABLED=0 go build -mod=vendor -o api-server .
[build-ui : build] {"level":"info","ts":1584507640.7401912,"logger":"fallback-logger","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: "KO_DATA_PATH" does not exist or is empty"}
[build-ui : build] STEP 1: FROM registry.access.redhat.com/rhscl/python-36-rhel7
[build-ui : build] Getting image source signatures
[build-ui : build] Copying blob sha256:6a4fa4bc2d06e942c0e92d69614b4ee30c2d409c95f29a3a22ece8087ce164be
[build-ui : build] Copying blob sha256:455ea8ab06218495bbbcb14b750a0d644897b24f8c5dcf9e8698e27882583412
[build-ui : build] Copying blob sha256:bb13d92caffa705f32b8a7f9f661e07ddede310c6ccfa78fb53a49539740e29b
[build-ui : build] Copying blob sha256:c8106f599d69375cbfc2ef44b11812ddc33938ab1e94860b02c262118f837611
[build-ui : build] Copying blob sha256:84e620d0abe585d05a7bed55144af0bc5efe083aed05eac1e88922034ddf1ed2
[build-ui : build] Copying config sha256:3c93c53ba3715f62aad12366410a1cd57957c39f573c0681807000d12f3cccdc
[build-ui : build] Writing manifest to image destination
[build-ui : build] Storing signatures
[build-ui : build] STEP 2: LABEL "io.openshift.s2i.build.image"="registry.access.redhat.com/rhscl/python-36-rhel7" "io.openshift.s2i.build.source-location"="."
[build-ui : build] error building at STEP "LABEL "io.openshift.s2i.build.image" "registry.access.redhat.com/rhscl/python-36-rhel7" "io.openshift.s2i.build.source-location" "."": error ensuring container path "/opt/app-root/src": lstat /var/lib/containers/storage/overlay/262cf5f36861100bf27c9eb5f9baa42f2471984ee129a5f1dd901d81cf3d4cbd/merged/opt: invalid argument

failed to get logs for task build-ui : container step-build has failed : [{"name":"","digest":"","key":"StartedAt","value":"2020-03-18T05:00:49Z","resourceRef":{}}]
[build-api : build] f72f609911547c6bf2364e63cf786987cd7189c74d787d585d659284b7feba77
[build-api : build] STEP 5: FROM scratch
[build-api : build] STEP 6: WORKDIR /app

`

a related issue

Make the CI tests for repo

/kind feature

Description:
[A clear and concise description of what your proposal. What problem does it solve?]

Set up CI tests for the repo, so that we can ensure quality through basic testing, such as functional user end-to-end cases, Pylint checks, etc. Once the CI tests are set up, contributors can develop test cases while implementing functions. Thanks.

Additional information:
[Miscellaneous information that will assist in solving the issue.]

Refactor compiler and op to template function.

/kind feature

Description:
[A clear and concise description of what your proposal. What problem does it solve?]
As we introduce more features, there are some task/step templates that we have to introduce in order to replicate some actions of the Argo sidecar. So we should refactor the code in a way that any additional template can be added as a plugin function.

Additional information:
[Miscellaneous information that will assist in solving the issue.]

support volumeSnapShotOp

/kind feature

Description:
[A clear and concise description of what your proposal. What problem does it solve?]
support volumeSnapShotOp

Additional information:
[Miscellaneous information that will assist in solving the issue.]
This issue requires resourceOp to be implemented in #49

Pipeline level timeout

timeout for whole pipeline: run tkn p start pipeline-includes-two-steps-which-fail-randomly --showlog --timeout 180s

Tekton applies the timeout at runtime, so we need to figure out how to transfer it from the DSL to runtime execution as part of the API/backend work.
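Since Tekton treats the timeout as a runtime setting, it would be specified on the PipelineRun rather than on the Pipeline itself; a minimal sketch (the resource name is illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pipeline-timeout-run
spec:
  pipelineRef:
    name: pipeline-includes-two-steps-which-fail-randomly
  timeout: "180s"   # fails the entire run once exceeded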

Compiler work items

This is an "umbrella issue" to capture planned work items related to the TektonCompiler:

  • Generate a Pipeline with Tasks instead of a Task with steps to allow parallel execution (PR #17)
  • Allow passing output parameters as inputs for subsequent Tasks (issue #19, PR #27)
  • Migrate to Tekton 0.11 to make use of API v1beta1 (PR #27)
  • Add Tekton integration tests (issue #28)
  • Add support for dsl.VolumeOp (#51)
  • Add support for set_image_pull_secrets (#54)
  • Add support for conditions (#33)
  • Add support for static LoopArguments (#67)
  • Add support for dynamic LoopArguments (#82)
  • Add support for ParallelFor (#67)
  • Option to generate Tekton PipelineRun spec (#62)
  • Add support for Tekton Workspaces to share resources between Tasks (#74)
  • Handler for checking Argo specific params, find equivalent for Tekton (#78 #80)
  • Fix big data passing (#63)
  • ...

Specific template conversions:

Existing code that was copied from the KFP SDK compiler but temporarily commented out (see _op_to_template.py):

  • Add support for dsl.ResourceOp (#49)
  • Add support for artifacts parameters (#64)
  • Add support for affinity (#65) (requires PipelineRun #62)
  • Add support for initContainers (#50)
  • Add support for metadata pod_annotations (#76)
  • Add support for metadata pod_labels (#76)
  • Add support for node selector (#65) (requires PipelineRun #62)
  • Add support for retries (#48)
  • Add support for sidecars (#42)
  • Add support for timeout (#46)
  • Add support for tolerations (#65) (requires PipelineRun #62)
  • Add support for volumes (#34)

In terms of prioritization, we could look at how often these features are used in any of the KFP pipeline examples (see KFP samples test report).

Of the 88 dsl.pipeline examples and samples found in the kubeflow/pipelines repository 54 fail:

  45 'volumes' are not (yet) implemented
  38 `dsl.ResourceOp` is not yet implemented
   4 `input artifacts` are not yet implemented
   3 'sidecars' are not (yet) implemented
   3 'retries' is not (yet) implemented
   1 'tolerations' is not (yet) implemented
   1 'timeout' is not (yet) implemented
   1 'nodeSelector' is not (yet) implemented
   1 'affinity' is not (yet) implemented

Update on KFP testdata compilation failure reasons as of Apr 3, 2020:

FAILURE: basic_no_decorator.py           - needs ExitHandlerOp, support pipeline with no decorator 
FAILURE: compose.py                      - needs function to flatten nested pipeline 
FAILURE: input_artifact_raw_value.py     - needs S3 compatible artifact passing
FAILURE: param_substitutions.py          - needs dsl.ResourceOp
FAILURE: resourceop_basic.py             - needs dsl.ResourceOp
FAILURE: volume_snapshotop_rokurl.py     - needs dsl.VolumeSnapshotOp
FAILURE: volume_snapshotop_sequential.py - needs dsl.VolumeSnapshotOp
FAILURE: volumeop_basic.py               - needs dsl.VolumeOp
FAILURE: volumeop_dag.py                 - needs dsl.VolumeOp
FAILURE: volumeop_parallel.py            - needs dsl.VolumeOp
FAILURE: volumeop_sequential.py          - needs dsl.VolumeOp
FAILURE: loop_over_lightweight_output.py - needs Tekton dynamic looping
FAILURE: withparam_global.py             - needs Tekton dynamic looping
FAILURE: withparam_global_dict.py        - needs Tekton dynamic looping
FAILURE: withparam_output.py             - needs Tekton dynamic looping
FAILURE: withparam_output_dict.py        - needs Tekton dynamic looping

Mapping of features from Argo to Tekton:

General guidance to map KFP/Argo features to Tekton features can be found in the KFP, Argo and Tekton Comparison Spreadsheet maintained by @afrittoli

Map Argo variables to Tekton variables.

KFP uses Argo variables to refer to artifact locations and exit op statuses. Therefore, we want to map as many Argo variables as possible, since issue #64 and other pipelines assume those variables can be interpreted during execution.

As of today, Tekton variables $() only cover input and output params at the task level. For other variables, we will try to map them to environment variables using the Kubernetes FieldRef parameter.

However, many Argo variables still cannot be expressed with Tekton variables and pure Kubernetes FieldRef parameters. For those, we will raise a warning so that users avoid using them in a pipeline.
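As a sketch of the FieldRef approach, pod-level information that Argo exposes as workflow variables can be surfaced to a step through the Kubernetes Downward API (the env variable name below is illustrative):

```yaml
steps:
- name: main
  image: busybox
  env:
  - name: ARGO_POD_NAME      # stand-in for an Argo pod-name variable
    valueFrom:
      fieldRef:
        fieldPath: metadata.name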

How to start for GSOC?

@animeshsingh @ckadner, I saw this project mentioned as a GSOC project and I am interested in working on it. I have experience in Python and Data Science. Please give me some insight on how and where to start as a newbie, and how to get in further contact with you mentors; it would be very helpful.

Generate PipelineRun with embedded pipelineSpec and taskSpec

/kind feature

Description:
Convert the YAML produced by the kfp-tekton compiler:

  • from multiple documents (Conditions, Tasks, Pipeline, PipelineRun)
  • into a single PipelineRun with embedded pipelineSpec and taskSpecs

This will simplify the required work for downstream integration with the KFP API and KFP UI.

Additional information:

spec:
  pipelineSpec:
    tasks:
    - name: task1
      taskSpec:
        steps:
          ...

Add Tekton integration tests

For a start, the integration tests would assume (and test for) a working Tekton cluster. The user/developer would make sure that the kubectl and tkn environments are configured, and the Python script would rely on shell commands to apply the Tekton YAML files, start Tekton pipelines, check pipeline run output, and remove the resources created in the course of the integration tests.

In the future this setup should be improved to use containers for Tekton and appropriate Python libraries to execute the tests.

Conceptually, each individual integration test would take one of the Golden YAML files from the sdk/python/tests/compiler/testdata folder and do:

  • kubectl apply -f some.yaml
  • tkn pipeline start some-pipeline
  • tkn pipeline logs parallel-pipeline --last > some-pipeline-run.log
  • some smart diff of some-pipeline-run.log and some-pipeline-run.expected.log which was previously captured

Refer to current Tekton sidecar output limitations somewhere in our docs

https://github.com/tektoncd/pipeline/blob/master/docs/developers/README.md#handling-of-injected-sidecars

There are known issues with the existing implementation of sidecars:

When the nop image does provide the sidecar's command, the sidecar will continue to run even after nop has been swapped into the sidecar container's image field. See the issue tracking this bug. Until it is resolved, the best way to avoid the problem is to avoid overriding the nop image when deploying the Tekton controller, or to ensure that the overridden nop image contains as few commands as possible.

kubectl get pods will show a Completed pod when a sidecar exits successfully but an Error when the sidecar exits with an error. This is only apparent when using kubectl to get the pods of a TaskRun, not when describing the Pod using kubectl describe pod ... nor when looking at the TaskRun, but can be quite confusing.

add support for dsl.volumeOps

Documentation for volumeOps: https://github.com/kubeflow/pipelines/blob/master/samples/core/volume_ops/README.md

volumeOps in kfp currently do the following things:

  1. Create the PVC.
  2. Mount PVC for every component that uses the volumeOps

In the very low level, it uses the same resource function in Argo for creating the PVC resource. Therefore, we can reuse the resourceOps task for volumeOps for the PVC creation.

For the PVC mount, since we already know the name of the PVC and have volume support, we only need to make the resourceOps task fail fast if something goes wrong.

Alternatively, instead of using the PVC mount, we could use workspaces. However, we would need to reimplement our own logic for workspaces and remove the existing volumeOps PVC mount logic. Furthermore, we would need to generate a pipelinerun YAML, because the tkn CLI generator does not yet support workspaces code-gen. The same workspaces discussion is also in #41, for using workspaces to copy files between steps.

The proposed auto workspaces could also work if implemented the way it promises. However, it is still going to take some time for someone to start a PR in Tekton and implement this feature.
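For reference, a Tekton workspace shared between two tasks could be declared roughly like this (task and workspace names are illustrative; the PipelineRun would bind shared-data to an actual PVC):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: shared-storage-pipeline
spec:
  workspaces:
  - name: shared-data          # declared once at the pipeline level
  tasks:
  - name: producer
    taskRef:
      name: write-task
    workspaces:
    - name: output             # workspace name declared by the Task
      workspace: shared-data
  - name: consumer
    runAfter: ["producer"]
    taskRef:
      name: read-task
    workspaces:
    - name: input
      workspace: shared-data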

Develop (not copy) E2E example(s)

We need to develop a few succinct examples which demonstrate some of the benefits of generating Tekton over Argo. We could start by picking one or two examples from the Kubeflow Pipelines core samples, but we should not simply duplicate files in this repo.

One of those examples should serve as the main reference in our README and future docs and our end-to-end testing.

Originally posted in #69 (comment)

Run Tekton pipelines in KFP compile report script

The KFP compile report script currently checks for which and how many KFP DSL scripts the KFP-Tekton compiler succeeds or fails.

For those DSL scripts that pass compilation, we should try to run them on a Tekton cluster and report if, why, and how many fail at that stage. This is not meant to preempt the actual integration tests described in issue #28, but rather serves as another bellwether to guide development efforts.

Important compiler feature items

/kind feature

Description:
This is an extended list of features from #18 that are the top priority for this project. Items that are depended on each other will be grouped.

    • ResourceOp #49
      • VolumeOp #51
      • VolumeSnapShotOp #86
    • Support Artifacts with Object Storage
      • Artifact outputs with S3 Location config #64
      • Artifact inputs
    • Support Exit Handler #85 (Require contributing to Tekton)
    • Support Recursion and parallelLoop with dynamic parameters #82 (Require contributing to Tekton)

Additional information:

Add unit tests for 'basic_no_decorator.py' and 'compose.py'

#91 (comment)

  1. create a PR that refactors your test_util.py so that the actual test cases move into our compiler unit test suite and test_util.py becomes only a thin shim that runs the test methods and makes sure we get the right exit code and the compiled output YAML, with similar parameters to dsl-compile
  • move/refactor test_util.test_workflow_without_decorator to compiler_tests.test_basic_workflow_without_decorator (same name as in KFP compiler tests)
  • move/refactor test_util.test_nested_workflow to compiler_tests.test_composing_workflow (same name as in KFP compiler tests)
  • change the code in test_util.py so that it calls on the new unit test cases and make sure to generate/copy the "golden" YAML and copy them to the specified output path

kfp 0.2.5 has requirement kubernetes<=10.0.0,>=8.0.0, but kubeflow-tfjob 0.1.1 has requirement kubernetes>=10.0.1, which is a conflict

# pip install sdk/python
Stored in directory: /private/var/folders/_s/4bdbvdrs655gnsyvky53tbc40000gn/T/pip-ephem-wheel-cache-hrmrp_w8/wheels/e6/32/11/93809934762b7fa6b60993126c1f4a7f571f8aaf3145b9f764
Successfully built kfp-tekton
ERROR: kubeflow-tfjob 0.1.1 has requirement kubernetes>=10.0.1, but you'll have kubernetes 10.0.0 which is incompatible.
... ...

# pip install "kubernetes>=10.0.1"
... ...
Requirement already satisfied: pyasn1>=0.1.3 in /Users/fengli/venv/lib/python3.6/site-packages (from rsa<4.1,>=3.1.4->google-auth>=1.0.1->kubernetes>=10.0.1) (0.4.8)
ERROR: kfp 0.2.5 has requirement kubernetes<=10.0.0,>=8.0.0, but you'll have kubernetes 11.0.0 which is incompatible.

Add feature to showcase Tekton's workspace feature.

Tekton has a concept called workspaces, where all the tasks within the same pipeline can share the same storage. We already have an example of implementing it in the compiler at #72. However, KFP does not have any feature that requires workspaces, so we want to propose workspaces as one of Tekton's unique functionalities in the KFP DSL.

Clarify building from Tekton master in sdk/README

https://github.com/kubeflow/kfp-tekton/blame/master/sdk/README.md#L23

In order to use parameter passing from task outputs into condition parameters Tekton must be built from master.

master will continue to change; we need to add the commit and date, or the PR that brought the feature into Tekton, and the expected release that will contain it

Also, how do we build Tekton from master? Can we point to instructions or quickly describe it here?

Maybe a better way would be to describe that certain features are "cutting edge" and add a new section that lists those features used in the compiler and what commit/PR/release in Tekton it depends on.

Docs for deploying tekton pipeline on kfp user namespaces.

/kind feature

Description:
[A clear and concise description of what your proposal. What problem does it solve?]

We need step-by-step docs for users who want to deploy Tekton in a KFP namespace (such as a workaround for istio injection without the KFP API).

/assign

Additional information:
[Miscellaneous information that will assist in solving the issue.]

Support artifact outputs

Support artifact outputs where we should store those artifacts to an object storage.

Required resources:

  1. s3/gcs secrets
  2. pipelineresource/resourcetemplate
  3. resource mapping in pipelinerun (can be generated with the tkn CLI)
  4. Task and pipeline resource mapping (WIP branch https://github.com/Tomcli/kfp-tekton/tree/artifacts)
  5. The KFP DSL also treats output parameters as output artifacts. Do we want to set up a flag to disable that?

Add instructions for assigning rbac/service account to Tekton pipeline.

/kind feature

Description:
[A clear and concise description of what your proposal. What problem does it solve?]
Some features such as resourceOps #89 and volumeOps might need more rbac permission to create those resources. Thus we need to add proper instructions/code changes for setting rbac and service account to pipeline run.

Additional information:
[Miscellaneous information that will assist in solving the issue.]

Verify generated Tekton YAML

/kind feature

Description:
After upgrading to KFP SDK version 0.5.0 we took over some code to:

  • verify the variables resolution in the parameters of the generated compiled workflow and
  • lint the generated Tekton YAML (syntax verification)

That code is commented out, just to serve as a guide for the appropriate Tekton-specific implementation.

We should implement the method kfp_tekton.compiler.compiler._validate_workflow when time permits to catch problems with the Tekton YAML before a pipeline gets deployed.

Additional information:
Originally posted by @ckadner in #135

Publish kfp-tekton SDK in Pypi

/kind feature

Description:
[A clear and concise description of what your proposal. What problem does it solve?]

Currently a user needs to install kfp-tekton with pip install -e ... after cloning the code from GitHub, but sometimes users would rather not download the source code and just run pip to install the SDK. We should publish the SDK to PyPI, before the name gets taken by someone else.

Additional information:
[Miscellaneous information that will assist in solving the issue.]

Extend resourceOp wrapper image to have multiple outputs

/kind feature

Description:
[A clear and concise description of what your proposal. What problem does it solve?]
As of today, the kube-client wrapper only takes the outputs as a list of strings and writes them all into a single output file. https://github.com/kubeflow/kfp-tekton/blob/master/sdk/python/tests/compiler/testdata/resourceop_basic.yaml#L84-L85

Ideally, however, we should pass in a dictionary of output names and output variables. The wrapper should then save each of those variables in an individual output file under /tekton/results.
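A minimal sketch of the proposed behavior (the function below is hypothetical, not the actual kube-client wrapper code): instead of concatenating all outputs into one file, each named output variable gets its own file under the Tekton results directory.

```python
import os

def write_outputs(outputs: dict, results_dir: str = "/tekton/results") -> None:
    """Write each named output variable to its own file under the
    Tekton results directory (one file per result name)."""
    os.makedirs(results_dir, exist_ok=True)
    for name, value in outputs.items():
        with open(os.path.join(results_dir, name), "w") as f:
            f.write(str(value))

# e.g. a resourceOp exposing the created PVC's name and size as two results:
# write_outputs({"name": "my-pvc", "size": "1Gi"})
```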

Right now, I add the volumeop information to the results using a temporary extra step. Ideally, though, we should be able to use the resourceOp-generated results without that extra step. https://github.com/kubeflow/kfp-tekton/pull/93/files#diff-cb633bb266cb10fcc88b8633ffdf50d8R199-R220

This feature could be somewhat challenging. @fenglixa @vincent-pli Let me know what you guys think and we could discuss how we can approach this.

Additional information:
[Miscellaneous information that will assist in solving the issue.]

Should the compiler generate Pipeline or PipelineRun

This is a general discussion, not a work item.

Conceptually a Pipeline is a static specification orchestrating how a set of Tasks should be executed. A PipelineRun is an instantiation of a pipeline. A pipeline is a re-usable artifact that can be "run" repeatedly, a pipeline-run is one-and-done.

Currently the KFP-Tekton compiler generates a YAML with (multiple) Tasks(s) and one Pipeline document. There are however certain features supported by the KFP DSL which are not easily transferred into an equivalent feature in Tekton Pipelines, but instead require the concept of a PipelineRun, like mounting PVC volume to a pipeline workspace or setting a timeout at the pipeline level.

Linked issues

  • Pipeline level timeout #56
  • Passing parameters between tasks #41
  • Volume mounts #51
  • Generate PipelineRun with embedded pipelineSpec and taskSpec #143

Identify initial set of KFP sample pipelines to test

Objective:

We need to identify an initial set of Kubeflow Pipelines samples in order to identify feature gaps in the KFP-Tekton compiler. Once a set of ~10 pipelines is identified, they should be added to the unit test and integration test suites to enable test-driven development. Each pipeline identified should expose a distinct feature like conditional execution, resource operations, recursion, ...

Background:

Since this project started as a proof-of-concept with code having been copied from the KFP SDK compiler, many code branches remain commented out.
As this project progresses, more and more features should be implemented/re-enabled. The identified sample pipelines will help drive this effort.

Kubeflow Pipeline samples and tests:

Samples:
https://github.com/kubeflow/pipelines/tree/0.2.2/samples/core

Tests:
https://github.com/kubeflow/pipelines/tree/0.2.2/sdk/python/tests/compiler/testdata

Create makefile for sdk/python

/kind feature

Description:
[A clear and concise description of what your proposal. What problem does it solve?]
It will be nice to have makefile so that it will make testing easier.

Additional information:
[Miscellaneous information that will assist in solving the issue.]

Enable passing output parameters as inputs for subsequent Tasks

How do you pass parameters between tasks? When trying to use KFP's parallel join example, we get the following errors:
https://github.com/kubeflow/pipelines/blob/master/samples/core/parallel_join/parallel_join.py

Generated pipeline.yaml:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: gcs-download
spec:
  inputs:
    params:
    - name: url1
  steps:
  - args:
    - gsutil cat $0 | tee $1
    - $(inputs.params.url1)
    - /tmp/results.txt
    command:
    - sh
    - -c
    image: google/cloud-sdk:279.0.0
    name: gcs-download
---
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: gcs-download-2
spec:
  inputs:
    params:
    - name: url2
  steps:
  - args:
    - gsutil cat $0 | tee $1
    - $(inputs.params.url2)
    - /tmp/results.txt
    command:
    - sh
    - -c
    image: google/cloud-sdk:279.0.0
    name: gcs-download-2
---
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: echo
spec:
  inputs:
    params:
    - name: gcs-download-2-data
    - name: gcs-download-data
  steps:
  - args:
    - 'echo "Text 1: $0"; echo "Text 2: $1"'
    - $(inputs.params.gcs-download-data)
    - $(inputs.params.gcs-download-2-data)
    command:
    - sh
    - -c
    image: library/bash:4.4.23
    name: echo
---
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  annotations:
    pipelines.kubeflow.org/pipeline_spec: '{"description": "Download two messages
      in parallel and prints the concatenated result.", "inputs": [{"default": "gs://ml-pipeline-playground/shakespeare1.txt",
      "name": "url1", "optional": true}, {"default": "gs://ml-pipeline-playground/shakespeare2.txt",
      "name": "url2", "optional": true}], "name": "Parallel pipeline"}'
  name: parallel-pipeline
spec:
  params:
  - default: gs://ml-pipeline-playground/shakespeare1.txt
    name: url1
  - default: gs://ml-pipeline-playground/shakespeare2.txt
    name: url2
  tasks:
  - name: gcs-download
    params:
    - name: url1
      value: $(params.url1)
    taskRef:
      name: gcs-download
  - name: gcs-download-2
    params:
    - name: url2
      value: $(params.url2)
    taskRef:
      name: gcs-download-2
  - name: echo
    params:
    - name: gcs-download-2-data
      value: $(params.gcs-download-2-data)
    - name: gcs-download-data
      value: $(params.gcs-download-data)
    taskRef:
      name: echo

Console output:

$ kubectl apply -f pipeline.yaml
task.tekton.dev/gcs-download configured
task.tekton.dev/gcs-download-2 configured
task.tekton.dev/echo configured
Error from server (BadRequest): error when creating "pipeline.yaml": admission webhook "webhook.tekton.dev" denied the request: mutation failed: non-existent variable in "$(params.gcs-download-2-data)" for task parameter param[gcs-download-2-data]: pipelinespec.params.param[gcs-download-2-data]

Originally posted by @Tomcli in #17 (comment)
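One way to make this pipeline valid Tekton is to declare the downloaded text as a Task result and have the consumer reference the upstream task's result instead of an undeclared pipeline parameter. A hedged sketch, assuming a Tekton version with Task results support (v1beta1-era syntax; exact behavior may differ across releases):

```yaml
# Sketch only -- assumes Task results support (Tekton v1beta1-era).
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: gcs-download
spec:
  params:
  - name: url1
  results:
  - name: data   # written to $(results.data.path), i.e. /tekton/results/data
    description: downloaded text
  steps:
  - name: gcs-download
    image: google/cloud-sdk:279.0.0
    command: [sh, -c]
    args:
    - gsutil cat $0 | tee $1
    - $(params.url1)
    - $(results.data.path)
---
# In the Pipeline, the echo task would then consume the result instead of
# an undeclared pipeline param:
#   params:
#   - name: gcs-download-data
#     value: $(tasks.gcs-download.results.data)
```

With this wiring, Tekton knows `gcs-download-data` is produced by the `gcs-download` task, so the webhook no longer rejects it as a non-existent variable.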

Fix big data passing

Re-implement the fix_big_data_passing function, which rewrites a workflow where some artifact data is passed as parameters into a workflow where that data is passed as artifacts.

This function is currently only a no-op pass-through in the KFP-Tekton compiler:

def fix_big_data_passing(workflow: Dict[Text, Any]) -> Dict[Text, Any]: 
    """
    No-op
    """
    return workflow

Motivation

Due to the convoluted nature of the DSL compiler, artifact consumption and passing have been implemented on top of parameter passing: the artifact data is passed as parameters, and the consumer template creates an artifact/file out of that data.

Due to the limitations of Kubernetes (and Argo, Tekton), this scheme cannot pass data larger than a few kilobytes, preventing any serious use of artifacts.

The fix_big_data_passing function rewrites the compiled workflow so that data consumed as an artifact is passed as an artifact. It also prunes the unused parameter outputs. This is important since if a big piece of data is ever returned through a file that is also output as a parameter, the execution will fail. This makes it possible to pass large amounts of data.

Implementation in KFP

  1. Index the DAGs to understand how data is being passed and which inputs/outputs are connected to each other.
  2. Search for direct data consumers in container/resource templates and some DAG task attributes (e.g. conditions and loops) to find out which inputs are directly consumed as parameters/artifacts.
  3. Propagate the consumption information upstream to all inputs/outputs all the way up to the data producers.
  4. Convert the inputs, outputs and arguments based on how they're consumed downstream.

KFP implementation

See kubeflow/pipelines: sdk/python/kfp/compiler/_data_passing_rewriter.py
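Step 1 above (indexing the DAG) can be sketched as follows. This is illustrative only: the dict shape mirrors the compiled Tekton Pipeline shown earlier in this document, not the compiler's actual data model, and the function name is hypothetical.

```python
from typing import Any, Dict

def index_task_inputs(workflow: Dict[str, Any]) -> Dict[str, str]:
    """Sketch of step 1: map '<task>.<param>' to the argument expression
    feeding it, so later passes can trace parameter consumption upstream
    to the producing task. The dict shape mirrors a compiled Tekton
    Pipeline ({'spec': {'tasks': [{'name': ..., 'params': [...]}]}})."""
    edges: Dict[str, str] = {}
    for task in workflow.get("spec", {}).get("tasks", []):
        for param in task.get("params", []):
            edges[f"{task['name']}.{param['name']}"] = param.get("value", "")
    return edges
```

Running this over the parallel pipeline above would map, e.g., `gcs-download.url1` to `$(params.url1)`, giving later passes the edges they need to propagate consumption information.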

Add support for dsl.ResourceOp

The KFP-Tekton compiler does not currently support dsl.ResourceOp since Tekton does not provide native support for manipulating Kubernetes resources.

Argo provides ResourceTemplate to support "resource operations", which the Kubeflow Pipelines SDK wraps in the Resource/ResourceOp classes.

@afrittoli suggested implementing a standard set of resource Tasks that could be maintained under the tektoncd/catalog project, which provides a library of frequently reused tasks.

Proposed implementation

In order to streamline the implementation from a compiler perspective, we could follow the precedent set by Argo's ResourceTemplate and create a Tekton Task with the following parameters:

  • action: ['get', 'create', 'apply', 'delete', 'replace', 'patch'] -- the action to perform on the resource
  • merge_strategy: ['strategic', 'merge', 'json'] -- the strategy used to merge a patch; defaults to "strategic"
  • success_condition: a label selector expression which describes the success condition
  • failure_condition: a label selector expression which describes the failure condition
  • manifest: the Kubernetes manifest to operate on
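A hedged sketch of what such a catalog Task's interface could look like. The parameter names follow the list above; the Task name, image, and script are illustrative, and the success/failure condition checks are omitted -- this is not an existing tektoncd/catalog Task:

```yaml
# Illustrative only -- not an existing tektoncd/catalog Task.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: kubernetes-resource
spec:
  params:
  - name: action
    description: "One of: get, create, apply, delete, replace, patch"
  - name: merge_strategy
    default: strategic
  - name: success_condition
    default: ""
  - name: failure_condition
    default: ""
  - name: manifest
    description: The Kubernetes manifest to operate on
  steps:
  - name: kubectl
    image: bitnami/kubectl   # any image with kubectl would do
    script: |
      # success_condition / failure_condition evaluation omitted in this sketch
      echo "$(params.manifest)" | kubectl $(params.action) -f -
```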

Link to issue in tektoncd/catalog

Task(s) to manage kubernetes resources in a cluster #233

Additional information

Excerpt from KFP/Argo-vs-Tekton capabilities spreadsheet

  • KFP DSL (kfp.dsl): ResourceOp
  • Argo Workflow (current implementation): ResourceTemplate
  • Tekton v0.10.0 (potential implementation): Tasks to create, patch, delete, etc. resources. The tasks have to be defined, and they could live in the catalog.
  • Notes: This is not natively supported by Tekton. Tekton could define a ResourceTask, similar to what Argo does with ResourceTemplate, but it is not required to implement ResourceOp.
  • Tekton backend: Workaround

Running kfp-tekton/flip-coin on openshift 4.3.1

OpenShift 4.3.1, with OpenShift Pipelines deployed; then run kfp-tekton/samples/kfp-tekton/flip-coin.

The flip-coin run failed at pod creation: the pod is not able to write to the storage.

10m     Warning  FailedMount       pod/flip-coin-condition-demo-run-wnm6c-initial-flip-p4vpc-pod-8vbxn  Error while attaching the device Error while updating the iscsi config files for host pvc-fbf78f11-2fef-4a0c-8fb2-cfa947fe9ece and access to pv 10.221.112.173 . Error: [sliscsi]: failed to restart iscsi: Redirecting to /bin/systemctl restart open-iscsi.service
Failed to restart open-iscsi.service: Unit not found.
 : exit status 5

detail logs:

(.venv) (base) Qianyangs-MBP:kevin-openshift-v4-tekton qianyangyu$ tkn pipelinerun logs flip-coin-condition-demo-run-wnm6c -f -n default
^C
(.venv) (base) Qianyangs-MBP:kevin-openshift-v4-tekton qianyangyu$ ./oc get pods -n default
NAME                               READY  STATUS   RESTARTS  AGE
flip-coin-condition-demo-run-wnm6c-initial-flip-p4vpc-pod-8vbxn  0/5   Init:0/2  0     12m
(.venv) (base) Qianyangs-MBP:kevin-openshift-v4-tekton qianyangyu$ ./oc get events -n default
LAST SEEN  TYPE   REASON         OBJECT                                MESSAGE
<unknown>  Warning  FailedScheduling    pod/flip-coin-condition-demo-run-wnm6c-initial-flip-p4vpc-pod-8vbxn  pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
<unknown>  Warning  FailedScheduling    pod/flip-coin-condition-demo-run-wnm6c-initial-flip-p4vpc-pod-8vbxn  pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
<unknown>  Normal  Scheduled        pod/flip-coin-condition-demo-run-wnm6c-initial-flip-p4vpc-pod-8vbxn  Successfully assigned default/flip-coin-condition-demo-run-wnm6c-initial-flip-p4vpc-pod-8vbxn to 10.221.112.173
56s     Warning  FailedMount       pod/flip-coin-condition-demo-run-wnm6c-initial-flip-p4vpc-pod-8vbxn  MountVolume.SetUp failed for volume "volume-coin-bucket-pipelines-cos-credentials" : secret "pipelines-cos-credentials" not found
9m9s    Warning  FailedMount       pod/flip-coin-condition-demo-run-wnm6c-initial-flip-p4vpc-pod-8vbxn  Unable to attach or mount volumes: unmounted volumes=[volume-coin-bucket-pipelines-cos-credentials], unattached volumes=[tekton-internal-workspace tekton-internal-home tekton-internal-secret-volume-pipeline-dockercfg-4jzm6 pipeline-token-fsdtd tekton-internal-tools tekton-internal-downward flip-coin-condition-demo-run-wnm6c-pvc volume-coin-bucket-pipelines-cos-credentials]: timed out waiting for the condition
6m51s    Warning  FailedMount       pod/flip-coin-condition-demo-run-wnm6c-initial-flip-p4vpc-pod-8vbxn  Unable to attach or mount volumes: unmounted volumes=[volume-coin-bucket-pipelines-cos-credentials], unattached volumes=[tekton-internal-secret-volume-pipeline-dockercfg-4jzm6 pipeline-token-fsdtd tekton-internal-tools tekton-internal-downward flip-coin-condition-demo-run-wnm6c-pvc volume-coin-bucket-pipelines-cos-credentials tekton-internal-workspace tekton-internal-home]: timed out waiting for the condition
4m35s    Warning  FailedMount       pod/flip-coin-condition-demo-run-wnm6c-initial-flip-p4vpc-pod-8vbxn  Unable to attach or mount volumes: unmounted volumes=[volume-coin-bucket-pipelines-cos-credentials], unattached volumes=[pipeline-token-fsdtd tekton-internal-tools tekton-internal-downward flip-coin-condition-demo-run-wnm6c-pvc volume-coin-bucket-pipelines-cos-credentials tekton-internal-workspace tekton-internal-home tekton-internal-secret-volume-pipeline-dockercfg-4jzm6]: timed out waiting for the condition
2m17s    Warning  FailedMount       pod/flip-coin-condition-demo-run-wnm6c-initial-flip-p4vpc-pod-8vbxn  Unable to attach or mount volumes: unmounted volumes=[volume-coin-bucket-pipelines-cos-credentials], unattached volumes=[tekton-internal-home tekton-internal-secret-volume-pipeline-dockercfg-4jzm6 pipeline-token-fsdtd tekton-internal-tools tekton-internal-downward flip-coin-condition-demo-run-wnm6c-pvc volume-coin-bucket-pipelines-cos-credentials tekton-internal-workspace]: timed out waiting for the condition
0s     Warning  FailedMount       pod/flip-coin-condition-demo-run-wnm6c-initial-flip-p4vpc-pod-8vbxn  Unable to attach or mount volumes: unmounted volumes=[volume-coin-bucket-pipelines-cos-credentials], unattached volumes=[tekton-internal-tools tekton-internal-downward flip-coin-condition-demo-run-wnm6c-pvc volume-coin-bucket-pipelines-cos-credentials tekton-internal-workspace tekton-internal-home tekton-internal-secret-volume-pipeline-dockercfg-4jzm6 pipeline-token-fsdtd]: timed out waiting for the condition
11m     Normal  ExternalProvisioning  persistentvolumeclaim/flip-coin-condition-demo-run-wnm6c-pvc     waiting for a volume to be created, either by external provisioner "ibm.io/ibmc-block" or manually created by system administrator
12m     Normal  Provisioning      persistentvolumeclaim/flip-coin-condition-demo-run-wnm6c-pvc     External provisioner is provisioning volume for claim "default/flip-coin-condition-demo-run-wnm6c-pvc"
11m     Normal  ProvisioningSucceeded  persistentvolumeclaim/flip-coin-condition-demo-run-wnm6c-pvc     Successfully provisioned volume pvc-fbf78f11-2fef-4a0c-8fb2-cfa947fe9ece
10m     Warning  FailedMount       pod/flip-coin-condition-demo-run-wnm6c-initial-flip-p4vpc-pod-8vbxn  Error while attaching the device Error while updating the iscsi config files for host pvc-fbf78f11-2fef-4a0c-8fb2-cfa947fe9ece and access to pv 10.221.112.173 . Error: [sliscsi]: failed to restart iscsi: Redirecting to /bin/systemctl restart open-iscsi.service
Failed to restart open-iscsi.service: Unit not found.
 : exit status 5
(.venv) (base) Qianyangs-MBP:kevin-openshift-v4-tekton qianyangyu$ ./oc get pvc -n default
NAME                   STATUS  VOLUME                   CAPACITY  ACCESS MODES  STORAGECLASS    AGE
flip-coin-condition-demo-run-wnm6c-pvc  Bound  pvc-fbf78f11-2fef-4a0c-8fb2-cfa947fe9ece  20Gi    RWO      ibmc-block-bronze  23m
(.venv) (base) Qianyangs-MBP:kevin-openshift-v4-tekton qianyangyu$

How to contribute?

I have contributed to Kubeflow and Argo before and would be happy to help out with this project. Super excited about Tekton Pipelines for Kubeflow. Where would be a good place to start contributing to kfp-tekton?

I read the contribution.md file, but is there a plan for the work or similar?

/Niklas

Investigate different ways to pass parameters between tasks

Currently, we are using task.results for passing parameter outputs. This has a limitation: the output parameter files have to be under /tekton/results. However, some Kubeflow pipelines have no configuration for the output file path (e.g. the current Watson ML example). Therefore, we need to figure out an alternative way to take output parameter files from any path in the container.
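To illustrate the constraint: a Tekton step can only surface a result by writing under /tekton/results, so an output file produced elsewhere needs an extra copy into the results directory. A sketch, with hypothetical paths and names:

```yaml
# Illustrative sketch of the /tekton/results constraint.
steps:
- name: train
  image: some/image   # hypothetical
  script: |
    # The component writes its output to an arbitrary path...
    echo "model-v1" > /tmp/outputs/model_id
    # ...so it must be copied into the results directory for Tekton
    # to pick it up as a task result:
    cp /tmp/outputs/model_id $(results.model-id.path)
```

An alternative, as the issue suggests, would be a mechanism that collects output files from any container path without requiring the component itself to know about /tekton/results.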

Running parallel_join on openshift 4.3.1

OpenShift 4.3.1 with OpenShift Pipelines; then run the parallel_join sample.

./oc apply -f https://raw.githubusercontent.com/kubeflow/kfp-tekton/ab5d43331a53600f3baecd1969ca6f3a101350fc/sdk/python/tests/compiler/testdata/parallel_join.yaml

tkn pipeline start parallel-pipeline

Here is the log

(.venv) (base) Qianyangs-MBP:kevin-openshift-v4-tekton qianyangyu$ ./oc apply -f https://raw.githubusercontent.com/kubeflow/kfp-tekton/ab5d43331a53600f3baecd1969ca6f3a101350fc/sdk/python/tests/compiler/testdata/parallel_join.yaml
task.tekton.dev/gcs-download created
task.tekton.dev/gcs-download-2 created
task.tekton.dev/echo created
pipeline.tekton.dev/parallel-pipeline created
(.venv) (base) Qianyangs-MBP:kevin-openshift-v4-tekton qianyangyu$ tkn pipeline start parallel-pipeline
? Value for param `url1` of type `string`? (Default is `gs://ml-pipeline-playground/shakespeare1.txt`) gs://ml-pipeline-playground/shakespeare1.txt
? Value for param `url2` of type `string`? (Default is `gs://ml-pipeline-playground/shakespeare2.txt`) gs://ml-pipeline-playground/shakespeare2.txt
Pipelinerun started: parallel-pipeline-run-k7chw

In order to track the pipelinerun progress run:
tkn pipelinerun logs parallel-pipeline-run-k7chw -f -n default
(.venv) (base) Qianyangs-MBP:kevin-openshift-v4-tekton qianyangyu$ tkn pipelinerun logs parallel-pipeline-run-k7chw -f -n default
[gcs-download : gcs-download] {"level":"info","ts":1584642628.819261,"logger":"fallback-logger","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: "KO_DATA_PATH" does not exist or is empty"}
[gcs-download : gcs-download] With which he yoketh your rebellious necks Razeth your cities and subverts your towns And in a moment makes them desolate

[echo : echo] {"level":"info","ts":1584642648.9410698,"logger":"fallback-logger","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: "KO_DATA_PATH" does not exist or is empty"}
[echo : echo] Text 1: gs://ml-pipeline-playground/shakespeare1.txt
[echo : echo] Text 2: gs://ml-pipeline-playground/shakespeare2.txt

[gcs-download-2 : gcs-download-2] {"level":"info","ts":1584642629.569524,"logger":"fallback-logger","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: "KO_DATA_PATH" does not exist or is empty"}
[gcs-download-2 : gcs-download-2] I find thou art no less than fame hath bruited And more than may be gatherd by thy shape Let my presumption not provoke thy wrath


Add list of sample kfp pipelines for Tekton

/kind feature

Description:

Tracking issue for the list of kfp-tekton samples.

Generated Tekton steps have no name field.

The new Tekton compiler does not add a name to the generated Tekton steps, so applying the pipeline returns the following error:

Error from server (InternalError): error when creating "pipeline.yaml": Internal error occurred: admission webhook "webhook.tekton.dev" denied the request: mutation failed: invalid value "": taskspec.steps.name
Task step name must be a valid DNS Label, For more info refer to https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
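For reference, each entry in a Task's steps list needs a name that is a valid DNS label; an illustrative fragment:

```yaml
steps:
- name: gcs-download   # must be a valid DNS label (lowercase alphanumerics and '-')
  image: google/cloud-sdk:279.0.0
  command: [sh, -c]
```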

Add developer guide

The sdk/README.md currently provides only very basic instructions for setting up the KFP-Tekton compiler.

The "Developer Guide" will contain more detailed instructions to set up a development environment and explain the structure, concepts and ideas of the compiler code.

Extend the ResourceOp wrapper image to operate on a list of resources

/kind feature

Description:
[A clear and concise description of what your proposal. What problem does it solve?]
As discussed in tickets #70 and #82, Tekton has explicitly avoided adding support for recursion and dynamic behavior, so we have to find another way.

ResourceOps and VolumeOps are supported in kfp-tekton right now, via PRs #89 and #93 and kubectl-wrapper from @vincent-pli.

Another solution: we can extend the ResourceOp wrapper image with an additional parameter (which could be named "parallel-loop-items") whose value is a list, supporting both the [a, b, c, d] and [{'a': 1, 'b': 2}, {'a': 10, 'b': 20}] forms, and apply each value of the list to the manifest to support this feature.

Support for dynamic LoopArguments (not currently supported in Tekton)

/kind feature

Description:

Since Tekton doesn't have loop/recursion support, PR #67 introduced a workaround for static loop parameters by flattening the loop into parallel TaskRuns.
For dynamic loop parameters, however, Tekton has no feature to support that.

Additional information:

  • PR #67 introduced static loop parameter support
  • Issue #70 discusses support and workarounds for dynamic looping in Tekton
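To illustrate why the workaround only covers the static case: flattening expands one templated task into N parallel copies at compile time, which requires the loop items to be known then. A minimal sketch (the dict shapes and function name are illustrative, not the compiler's actual data model):

```python
from typing import Any, Dict, List

def flatten_static_loop(task: Dict[str, Any], items: List[Any]) -> List[Dict[str, Any]]:
    """Expand a task looping over a *static* list into parallel copies,
    one per item. This only works because the list is known at compile
    time; a dynamic list (produced by an upstream task at runtime)
    cannot be expanded this way."""
    expanded = []
    for i, item in enumerate(items):
        copy = dict(task)
        copy["name"] = f"{task['name']}-loop-{i}"
        copy["params"] = list(task.get("params", [])) + [
            {"name": "loop-item", "value": str(item)}
        ]
        expanded.append(copy)
    return expanded
```

With a dynamic loop argument there is no `items` list at compile time, so supporting it would require runtime expansion inside Tekton itself, which is what the linked issues discuss.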
