charts's Issues

[gcloud-sqlproxy] Use instanceShortName (if specified) for service port name

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug report

Version of Helm and Kubernetes:

Which chart:
gcloud-sqlproxy

What happened:
I'm trying to deploy the chart with 2 SQL instances that live in different projects but share the same instance name. The deployment failed with the following message:

Service "x" is invalid: spec.ports[5].name: Duplicate value: "x-consultation"

Even with instanceShortName set, the original instance name is still used for the port name in the Service object.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

cloudsql:
  instances:
    - project: project-1
      region: europe-west1
      instance: abc-consultation
      instanceShortName: abc-consult-stg
      port: 3309
    - project: project-2
      region: europe-west1
      instance: abc-consultation
      instanceShortName: abc-consult-prd
      port: 3310

Anything else we need to know:

This might be fixable by also using the instanceShortName (when available) in svc.yaml when naming the port.
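A minimal sketch of that change, reusing the chart's existing instanceShortName helper (the surrounding svc.yaml structure is an assumption):

```yaml
# templates/svc.yaml (sketch) — name ports by the short name so two
# instances with the same name in different projects don't collide
ports:
{{- range .Values.cloudsql.instances }}
  - name: {{ include "gcloud-sqlproxy.instanceShortName" . }}
    port: {{ .port }}
    targetPort: {{ .port }}
{{- end }}
```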

Newest version (0.25.3) doesn't work

Service name no longer corresponds to the application name, rendering the Helm chart non-functional. It caused a few hours of downtime in 2 of my production setups.

Please don't do something like this again.

Caused by commit 867f85b

[stable/contour] Why use a value for namespace rather than `helm --namespace`?

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

I consider this a BUG as it is contrary to community practice.

Version of Helm and Kubernetes:

any supported versions

Which chart:

stable/contour

What happened:

you must override a default in the values.yaml file to change the namespace

What you expected to happen:

helm install --namespace my-namespace rimusz/contour <everything else>
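The community convention this refers to is for templates to take the namespace from the release rather than from a chart value; a sketch of what that looks like (assuming the chart currently reads a values key for the namespace):

```yaml
# templates/*.yaml (sketch) — use the release namespace so
# `helm install --namespace` works without overriding values
metadata:
  namespace: {{ .Release.Namespace }}
```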

Add ReclaimPolicy to StorageClass

Feature Request

Add a configuration for ReclaimPolicy in your StorageClass template, something like

storageClass.reclaimPolicy: Retain

Options are Retain and Delete
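A sketch of how that value could be wired through (the template path and the default are assumptions):

```yaml
# values.yaml
storageClass:
  reclaimPolicy: Retain   # Retain or Delete; Kubernetes defaults to Delete

# templates/storageclass.yaml (sketch)
reclaimPolicy: {{ .Values.storageClass.reclaimPolicy | default "Delete" }}
```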

Sqlproxy hasCredentials logic

For sqlproxy to default to no credentials.json, usingGCPController: "false" should instead be set to usingGCPController: "" in values.yaml: in Go templates the non-empty string "false" is still truthy, so the or below makes hasCredentials evaluate to true.

{{- define "gcloud-sqlproxy.hasCredentials" -}}
{{ or .Values.serviceAccountKey ( or .Values.existingSecret .Values.usingGCPController ) -}}
{{- end -}}
        {{ if $hasCredentials -}}
        - -credential_file=/secrets/cloudsql/{{ include "gcloud-sqlproxy.secretKey" . }}
        {{ end -}}

https://github.com/rimusz/charts/blob/master/stable/gcloud-sqlproxy/templates/_helpers.tpl#L58-L60

https://github.com/rimusz/charts/blob/master/stable/gcloud-sqlproxy/templates/deployment.yaml#L35-L37
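In other words, a one-line values change is enough, because the template's `or` only treats the empty string as falsy:

```yaml
# values.yaml — an empty string is falsy in the `or` above;
# the string "false" is not
usingGCPController: ""
```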

Allow setting reclaimPolicy in the storage class

Is this a request for help?:

No


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

feature request

Version of Helm and Kubernetes:

does not apply

Which chart:

hostpath-provisioner

What happened:
All PVs have the reclaim policy "Delete"

What you expected to happen:
It should be possible to set the reclaim policy

How to reproduce it (as minimally and precisely as possible):
just create a PVC and check the PV's reclaim policy

Anything else we need to know:
This can easily be achieved by setting the policy for the created storage class: https://medium.com/faun/kubernetes-how-to-set-reclaimpolicy-for-persistentvolumeclaim-7eb7d002bb2e
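For illustration, the field in question on a rendered StorageClass (the names below are placeholders, not the chart's actual ones):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath            # placeholder name
provisioner: hostpath       # placeholder for the chart's provisioner
reclaimPolicy: Retain       # defaults to Delete when omitted
```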

gcloud-sqlproxy - HPA kind

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG

If this is a BUG REPORT, please:

Version of Helm and Kubernetes:
helm: 3.13.0
kubernetes: 1.24

Which chart:
gcloud-sqlproxy:0.25.2

What happened:
The template sets my HPA kind as Deployment even though I have useStatefulset: true in my values.yaml file.

What you expected to happen:
It should set the HPA kind to StatefulSet instead of Deployment.

Possible fix:
Probably just needs a conditional in the HPA template based on whether useStatefulset is set in the values.yaml file.
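A sketch of such a conditional (the fullname helper name and surrounding hpa.yaml structure are assumptions):

```yaml
# templates/hpa.yaml (sketch)
scaleTargetRef:
  apiVersion: apps/v1
  kind: {{ if .Values.useStatefulset }}StatefulSet{{ else }}Deployment{{ end }}
  name: {{ include "gcloud-sqlproxy.fullname" . }}   # helper name assumed
```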

Thank you

API Access type

I've guessed that the Cloud SQL Client role is what's required to be tied to the GCP service account, but it would be nice to have that written out explicitly.

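For anyone landing here, granting that role looks roughly like this (PROJECT and SA_EMAIL are placeholders):

```shell
# bind the Cloud SQL Client role to the proxy's GCP service account
gcloud projects add-iam-policy-binding PROJECT \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/cloudsql.client"
```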

[gcloud-sqlproxy] add support for PRIVATE IPs

FEATURE REQUEST:
add support for PRIVATE IPs

Which chart:
gcloud-sqlproxy

What happened:
I needed to connect to CloudSQL via private IP, so I modified the command for the container to:

/cloud_sql_proxy \
    --dir=/cloudsql \
    -instances=XXXXXXX \
    -ip_address_types=PRIVATE

But it would be nice to get this from the values file
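A possible values shape for this (the ipAddressTypes key is hypothetical, not an existing chart value):

```yaml
cloudsql:
  instances:
    - project: my-project       # placeholders
      region: europe-west1
      instance: my-instance
      port: 3306
      ipAddressTypes: PRIVATE   # hypothetical: rendered into -ip_address_types
```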

[gcloud-sqlproxy] - FR - Add custom labels

FEATURE REQUEST

Version of Helm and Kubernetes: All

Which chart: gcloud-sqlproxy

Feature: Add custom labels to the deployments/stateful sets.

It would be great if we could have the possibility of adding additional labels on top of what is already there. In my case I would need to add custom labels for Datadog, Istio, etc. Technically I know it's possible to use Helm's post-rendering feature, but that introduces a lot of complexity for something so simple.

Let me know what you think.
Thanks and have a great Friday!
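A sketch of what this could look like (extraLabels is a hypothetical key; the with/toYaml pattern is the common Helm idiom for it):

```yaml
# values.yaml (hypothetical key)
extraLabels:
  tags.datadoghq.com/env: prod

# templates/deployment.yaml (sketch)
metadata:
  labels:
    {{- with .Values.extraLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
```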

gcloud-sqlproxy installation instructions do not work with some versions of `base64`

Is this a request for help?: NO


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report

Version of Helm and Kubernetes:
helm:

Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}

kubectl:

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T06:59:37Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

Using minikube version: v0.32.0

Which chart:
gcloud-sqlproxy

What happened:

$ helm upgrade pg-sqlproxy rimusz/gcloud-sqlproxy --namespace sqlproxy \
    --set serviceAccountKey="$(cat service-account.json | base64)" \
    --set cloudsql.instances[0].instance=INSTANCE \
    --set cloudsql.instances[0].project=PROJECT \
    --set cloudsql.instances[0].region=REGION \
    --set cloudsql.instances[0].port=5432 -i

Does not work
Error was:

Error: YAML parse error on gcloud-sqlproxy/templates/secrets.yaml: error converting YAML to JSON: yaml: line 14: could not find expected ':'

This was caused by the base64 command wrapping its output, appending a newline after a fixed number of characters (76 by default for GNU coreutils).

What you expected to happen:
The chart to deploy properly

How to reproduce it (as minimally and precisely as possible):
Attempt to deploy the chart using a version of base64 that appends newlines.

Anything else we need to know:

The fix is to use $(cat db-proxy_key.json | base64 | tr -d '\n'), which removes the newlines.

A standard set -x was really useful for figuring this one out.

Edit:
Command is tr -d '\n'. I accidentally pasted an early attempt.
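To see the wrapping behaviour in isolation (this applies to GNU coreutils base64; BSD/macOS base64 does not wrap by default):

```shell
# GNU base64 wraps output at 76 columns, so long inputs come back
# with embedded newlines that corrupt the value passed to --set.
wrapped=$(head -c 200 /dev/zero | base64)
single=$(head -c 200 /dev/zero | base64 | tr -d '\n')

# count the newline characters left in each form
printf '%s' "$wrapped" | wc -l   # > 0 on GNU coreutils
printf '%s' "$single" | wc -l    # 0
```

On GNU coreutils, `base64 -w0` disables wrapping directly; `tr -d '\n'` is the portable variant.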

sqlproxy not working

Is this a request for help?:

Yes

Version of Helm and Kubernetes:

Helm v3.9.0
Kubernetes: 1.21.10-gke.2000

Which chart:

sqlproxy

What happened:

None of my pods can connect to Cloud SQL (MySQL) anymore. It was working fine for well over a year, then it suddenly broke.

What you expected to happen:

All deployments are up and the sqlproxy service is not outputting any errors, but none of the pods are connecting. If I access an endpoint on these pods that does not trigger a DB connection, it works; only endpoints that hit the DB are broken. I also tried to log in to the DB through phpMyAdmin and it won't log in (because it is unable to connect to the DB).

How to reproduce it (as minimally and precisely as possible):

I am unsure, this was working perfectly fine for well over a year.

Anything else we need to know:

Yes, thinking I might have had a broken install or perhaps missing an update I tried upgrading by using:

helm upgrade sqlproxy rimusz/gcloud-sqlproxy --namespace sqlproxy --set serviceAccountKey="$(cat sqlproxy-key.json | base64 | tr -d '\n')" --set cloudsql.instances[0].instance=instancename --set cloudsql.instances[0].project=projectid --set cloudsql.instances[0].region=region --set cloudsql.instances[0].port=1234 -i

However I am receiving: zsh: no matches found: cloudsql.instances[0].instance=instancename

Yet the instance is certainly there and the name is correct.

So I created a completely new project from scratch, created a new Cloud SQL instance and a new cluster, and reran the setup from scratch, and I am getting the exact same error. So I am unclear whether something has changed somewhere between GKE, Cloud SQL, Helm or sqlproxy which has left things in a broken state.

I'm in a bit of panic as some services in production are offline as a consequence.

Appreciate the help.

UPDATE: I was actually able to run the update eventually, after realizing I just needed to escape the brackets (sorry, first time on macOS). After updating, sqlproxy appears to connect; my apps are still not connecting, but the sqlproxy log suggests it is connected, so I will re-run some more tests and report back soon.
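For reference, the zsh error comes from `[0]` being interpreted as a glob pattern; quoting each --set argument avoids having to escape the brackets:

```shell
# quote the whole --set value so zsh does not glob the brackets
helm upgrade sqlproxy rimusz/gcloud-sqlproxy --namespace sqlproxy \
  --set 'cloudsql.instances[0].instance=instancename' \
  --set 'cloudsql.instances[0].project=projectid'
```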

gcloud-sqlproxy not working with Kubernetes 1.16+

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes:
Helm version: `version.BuildInfo{Version:"v3.0.1", GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", GoVersion:"go1.13.4"}`

Kubernetes version: 1.16

Minikube version: `minikube version: v1.5.2, commit: 792dbf92a1de583fcee76f8791cff12e0c9440ad`

Which chart:
https://github.com/rimusz/charts/tree/master/stable/gcloud-sqlproxy

What happened:
When running:

helm upgrade pg-sqlproxy rimusz/gcloud-sqlproxy --namespace sqlproxy \
    --set serviceAccountKey="$(cat service-account.json | base64 | tr -d '\n')" \
    --set cloudsql.instances[0].instance=INSTANCE \
    --set cloudsql.instances[0].project=PROJECT \
    --set cloudsql.instances[0].region=REGION \
    --set cloudsql.instances[0].port=5432 -i

(of course, I replaced the placeholders)

I get:

Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"

What you expected to happen:

The chart should install itself.

How to reproduce it (as minimally and precisely as possible):

Create a Kubernetes 1.16 cluster with minikube start (it currently creates a 1.16 cluster by default), then run the above command (helm upgrade ...).

Anything else we need to know:

Based on https://stackoverflow.com/questions/58481850/no-matches-for-kind-deployment-in-version-extensions-v1beta1 it seems that the Deployment object is no longer working with the extensions/v1beta1 apiVersion.

It is now only working with apps/v1.

See: https://github.com/rimusz/charts/blob/master/stable/gcloud-sqlproxy/templates/deployment.yaml#L3

Note: The apps/v1 seems to work on 1.14.9 as well:

$minikube kubectl api-resources | grep deployment
deployments                       deploy       apps                           true         Deployment
deployments                       deploy       extensions                     true         Deployment

with a minikube created with 1.14.9 (minikube start --kubernetes-version v1.14.9)
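For completeness, the minimal shape of the fix in the chart; note that apps/v1 also makes spec.selector required, unlike extensions/v1beta1 (the label/helper names below are assumptions):

```yaml
# templates/deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
spec:
  selector:
    matchLabels:
      app: {{ template "gcloud-sqlproxy.name" . }}   # helper name assumed
```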

CloudSQL Memory/CPU limit causes spikes in query times

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? Bug report

Which chart: gcloud-sqlproxy

What happened: the CPU limit causes occasional queries to spike in response time

How to reproduce it (as minimally and precisely as possible):

\timing
SELECT "id" FROM "User" WHERE "id" = 1;

Will result in response times like:
2ms, 2ms, 2ms, 50ms, 2ms, 2ms, 90ms, 2ms, 2ms

After ripping my hair out wondering why some of my API calls are slow, I found this issue, which mentions that removing the CPU limit stops the spikes: GoogleCloudPlatform/cloud-sql-proxy#168 (comment)

Tried it and, as expected, my queries are now consistently around 2ms.

So I propose increasing the CPU limit or removing it entirely, as the provided default will affect anyone who uses it.
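A hedged sketch of the proposed default: keep requests and a memory limit, but drop the CPU limit to avoid CFS throttling (the numbers are illustrative, not the chart's current values):

```yaml
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    memory: 128Mi   # no cpu limit: avoids CFS-throttling latency spikes
```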

gcloud-sqlproxy: Support for alpine based container image for livenessProbe ?

Is this a request for help?:

Yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Feature Request

If this is a FEATURE REQUEST, please:

  • Describe in detail the feature/behavior/change you'd like to see.
    I'm using sqlproxy chart and wanted to setup a health check or maybe a serviceMonitor

Version of Helm and Kubernetes:
Kubernetes: 1.17
Helm: 3.2.4

Which chart:
gcloud-sqlproxy 0.19.13

What happened:
Is there a workaround to use a different image that has nc installed?
I enabled livenessProbe and defined the port as 3306

livenessProbe:
  exec:
    command: ["nc", "-z", "127.0.0.1", "3306"]

What you expected to happen:
I did read that as of v1.16 and above the cloudsql proxy container is based on distroless, and there are a couple of alternatives mentioned here: https://github.com/GoogleCloudPlatform/cloudsql-proxy#container-images

The version of the chart I'm using is 0.19.13. So if I were to set the image tag in my values to this:

image: gcr.io/cloudsql-docker/gce-proxy-alpine
imageTag: "1.16"

Would it have any breaking changes?

How to reproduce it (as minimally and precisely as possible):
define livenessProbe with exec

livenessProbe:
  exec:
    command: ["nc", "-z", "127.0.0.1", "3306"]
  enabled: true
  port: 3306
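If the chart passes the probe spec through verbatim, a tcpSocket probe avoids needing nc at all (and so works with the distroless image); a sketch:

```yaml
livenessProbe:
  tcpSocket:
    port: 3306
  initialDelaySeconds: 10
  periodSeconds: 10
```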

gcloud-sqlproxy - Helm Chart not compatible with `cloud-sql-proxy` >=2.0.0

Hello, and thank you for maintaining this helm chart. Unfortunately it is not compatible with the cloud-sql-proxy images starting at version 2.0.0.

The entrypoint of the container changed from cloud_sql_proxy to cloud-sql-proxy, and since it is not configurable via the values, we are stuck with containers <2.0.0.
This is the PR which introduced the change: GoogleCloudPlatform/cloud-sql-proxy#1326

Note that the container registry has changed at the same time: gcr.io/cloudsql-docker/gce-proxy -> gcr.io/cloud-sql-connectors/cloud-sql-proxy (cf GoogleCloudPlatform/cloud-sql-proxy#1607 (comment))
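A sketch of what a v2-compatible override might look like if the chart exposed the entrypoint (the command key is hypothetical, and the v2 CLI also changed its flags, so the templates would need more than this):

```yaml
image: gcr.io/cloud-sql-connectors/cloud-sql-proxy
imageTag: "2.8.0"               # illustrative tag
command: ["/cloud-sql-proxy"]   # hypothetical values key
```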

[gcloud-sqlproxy] livenessProbe failure threshold should be higher than readinessProbe

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
3 / 1.14

Which chart:
gcloud-sqlproxy

What happened:
Best practice is that the livenessProbe failure threshold should be higher than the readinessProbe failure threshold, to prevent unwanted restarts.

See: https://github.com/rimusz/charts/blob/master/stable/gcloud-sqlproxy/values.yaml#L108

But maybe a real readiness check via an SQL statement would be much nicer anyway. Would you consider adding another docker image which contains a mysql client too?
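A sketch of thresholds following that practice (the numbers are illustrative):

```yaml
readinessProbe:
  failureThreshold: 3    # mark the pod unready quickly
livenessProbe:
  failureThreshold: 6    # restart only after a longer sustained failure
```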

hostpath-provisioner: unexpected error getting claim reference: selfLink was empty, can't make reference

Is this a request for help?: yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BugReport

Version of Helm and Kubernetes:
k8s: 1.21.3
helm: 3.6.3

Which chart: hostpath-provisioner

What happened:
Provisioner doesn't provision PVs:

I0901 10:41:21.403079       1 controller.go:926] provision "local-ns/s3" class "SomeName": started
E0901 10:41:21.410537       1 controller.go:943] provision "local-ns/s3" class "SomeName": unexpected error getting claim reference: selfLink was empty, can't make reference

Googled and found similar issue in nfs provisioner (kubernetes-sigs/nfs-subdir-external-provisioner#25).
Kubernetes changed the default behaviour in v1.20.0 and started setting RemoveSelfLink=true.
The workaround was to pass --feature-gates=RemoveSelfLink=false to kube-apiserver (or load it via kubectl apply -f). That worked until k8s v1.21.0.
Now I can't make hostpath-provisioner work with kind cluster.

What you expected to happen:
Provisioner should provision PVs.

How to reproduce it (as minimally and precisely as possible):

  1. start kind cluster with default settings
  2. helm install hostpath-provisioner (any way of installation could be used, use --set storageClass.name="SomeName")
  3. add a PVC request for a storage class "SomeName"

Anything else we need to know:
Any help will be appreciated.
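For kind specifically, the same workaround can be expressed in the cluster config; a sketch, which only helps while the feature gate still exists in the apiserver:

```yaml
# kind-config.yaml (sketch)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  RemoveSelfLink: false
```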

[gcloud-sqlproxy] does this chart require me to create the serviceAccount or does it create it itself?

I'm using a pulumi provider to deploy the gcloud-sqlproxy helm chart, and something's preventing the workload from scaling up. I'm pretty sure it's just my bad configuration, so I'm going through the documentation again.

I'm a bit confused, because the installation instructions tell me that I need to manually create the service account and pass the proxy the credentials. Which is fine, but then I also notice in values serviceAccount.create, which defaults to true...

Are these separate? The same? Is the documentation out of date?

Help! :D

gcloud-sqlproxy - Duplicate Port Name if instanceShortName is not defined

Bug in gcloud-sqlproxy chart

Used Tools:
Google Kubernetes Engine Version: v1.27.3-gke.1700
Helm Version: v3.11.3
Helm Terraform Provider Version: 2.11.0

Description:
This bug affects gcloud-sqlproxy chart, because of this I could neither install this chart as a new release nor upgrade an existing one.
The values.yaml of the release, looked like this

cloudsql:
  instances:
    - instance: "long-instance-name-foo"
      project: "project"
      region: "region"
      port: 5432
    - instance: "long-instance-name-bar"
      project: "project"
      region: "region"
      port: 5433
# other values....

The helm release install/upgrade failed with a "Duplicate value" error for the port names in deployment.yaml.

snippet from error message:
* Service "sqlproxy" is invalid: [spec.ports[1].name: Duplicate value: "long-instance-n", spec.ports[2].name: Duplicate value: "long-instance-n"] * Deployment.apps "sqlproxy" is invalid: [spec.template.spec.containers[0].ports[1].name: Duplicate value: "long-instance-n", spec.template.spec.containers[0].ports[2].name: Duplicate value: "long-instance-n"]

My analysis of this issue:

If instanceShortName is not explicitly set, the chart creates it from the first 15 characters of instance, so the generated instanceShortName is no longer unique when two instances share an identical prefix of 15 or more characters. In our case we had multiple cloudsql instances with the same prefix, and the prefix was longer than 15 characters.

What you expected to happen:
When the sqlproxy is deployed, it should generate a random alphanumeric suffix of up to 9 characters and append it to the instance name, so as to create a unique instanceShortName.

instanceShortName in helpers.tpl (gcloud-sqlproxy Chart Version 0.25.2):

{{/*
Create the short instance name
*/}}
{{- define "gcloud-sqlproxy.instanceShortName" -}}
{{ .instanceShortName | default (.instance | trunc 15 | trimSuffix "-") }}
{{- end -}} 

Suggested Fix (gcloud-sqlproxy Chart Version 0.25.3):

{{- define "gcloud-sqlproxy.instanceShortName" -}}
{{- $randomString := randAlphaNum 9 | lower -}}
{{ .instanceShortName | default (printf "%s-%s" (.instance | trunc 5 | trimSuffix "-") $randomString) }}
{{- end -}}

as a result, each generated instanceShortName ends with the random suffix
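One caveat with randAlphaNum: it produces a different suffix on every template render, so the port names change on each upgrade. A deterministic sketch using sprig's sha256sum keeps the short name stable across renders while still disambiguating instances whose names differ past the truncation point:

```yaml
{{- define "gcloud-sqlproxy.instanceShortName" -}}
{{ .instanceShortName | default (printf "%s-%s" (.instance | trunc 9 | trimSuffix "-") (.instance | sha256sum | trunc 5)) }}
{{- end -}}
```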

How to reproduce it:

Deploy the chart using helm (the terraform helm provider is not required) with the following values; the only condition is that more than one instance shares a prefix, and that the prefix is more than 15 characters long:

cloudsql:
  instances:
    - instance: "long-instance-name-foo" # or any other name you prefer
      project: "project"
      region: "region"
      port: 5432
    - instance: "long-instance-name-bar" # or any other name you prefer
      project: "project"
      region: "region"
      port: 5433

Help with installation of rimusz/nfs-client-provisioner with helm v3

Is this a request for help?:
Yes

Version of Helm and Kubernetes: helm v3.5, kubectl: v1.21.2

Which chart: rimusz/nfs-client-provisioner

What happened:
Error on installation

Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta2"

What you expected to happen:
helm v3 install the chart

How to reproduce it (as minimally and precisely as possible):
helm install nfs-us-central1-c rimusz/nfs-client-provisioner --namespace nfs-storage --set nfs.server=${FSADDR} --create-namespace

Anything else we need to know:
No

gcloud-sqlproxy - Define instances by environment variables

Is this a request for help?:
Yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Feature Request

  • Describe in detail the feature/behavior/change you'd like to see.
    I need to pass the value "project:region:instance" by environment variables or secret

Version of Helm and Kubernetes:
helm: 3.9.1
kubernetes: 1.22.11

Which chart:
gcloud-sqlproxy:0.22.8

What happened:
Cannot pass the instance values from a secret, and can't define extra env vars for the deployment.

What you expected to happen:
Possibility to have an "extraEnv" value to set the instance values like project:region:instance or, preferably, pass the values from a secret
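A possible shape for this (extraEnv is a hypothetical key; the secret name and key are placeholders):

```yaml
extraEnv:
  - name: SQL_CONNECTION_NAME
    valueFrom:
      secretKeyRef:
        name: sqlproxy-instances
        key: connectionName
```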

gcloud-sqlproxy - Helm Chart hpa fails using autoscaling/v2

Hello, in version 0.24.0 the helm chart fails while applying the HPA. With version 0.23.0 everything works well.

I see that between the two versions the apiVersion changed from autoscaling/v2beta1 to autoscaling/v2, which is fine, because v2beta1 was deprecated in 1.19 and is no longer served as of 1.25.

autoscaling/v2 is available since 1.23, but has some breaking changes that are currently not reflected in the gcloud-sqlproxy chart.

I would like to suggest using autoscaling/v2beta2 for the HorizontalPodAutoscaler resource.

K8s: v1.23.14-gke.1800
Error while applying helm chart:

Helm upgrade failed: error validating "": error validating data: [ValidationError(HorizontalPodAutoscaler.spec.metrics[0].resource): unknown field "targetAverageUtilization" in io.k8s.api.autoscaling.v2.ResourceMetricSource, ValidationError(HorizontalPodAutoscaler.spec.metrics[0].resource): missing required field "target" in io.k8s.api.autoscaling.v2.ResourceMetricSource]
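For reference, this is the breaking change the error points at: autoscaling/v2 replaced targetAverageUtilization with a nested target block, so the chart's metric spec would need to become:

```yaml
metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```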
