Comments (19)

rimusz commented on July 30, 2024

@lukebond
It is better not to mix the production cluster with the rest. Production is production; it needs to be kept away from the development cluster.

e.g. by using different CoreOS release channels.

sublimino commented on July 30, 2024

@rimusz +1 for separating the production environment - total isolation, nothing shared if possible

rimusz commented on July 30, 2024

@sublimino @lukebond
only the private Docker registry can be used by both, so Docker images can be shared between the clusters

lukebond commented on July 30, 2024

Although I agree with this, there shouldn't be anything in Paz that cares whether you separate them or not. Paz just needs to be aware of environments (i.e. a parameter to most REST calls) and translate them down to Fleet machine metadata at deployment time.
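As a rough illustration of that translation (the environment key, unit name and image are placeholders, not anything Paz defines today), a deploy into production could end up as a fleet scheduling constraint, assuming the production machines were started with matching fleet metadata:

```ini
# myapp.service - hypothetical unit generated for an environment=production deploy
[Unit]
Description=myapp (production)

[Service]
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp registry.example.com/myapp:1.0.0
ExecStop=/usr/bin/docker stop myapp

[X-Fleet]
# Only schedule on machines whose fleet metadata includes environment=production
MachineMetadata=environment=production
```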

Although Paz avoids doing infra stuff, I'm beginning to think it would be good to have a cluster provisioning tool (separate from Paz) that allows you to choose etcd cluster topology, group machines, add metadata, etc.

rimusz commented on July 30, 2024

@lukebond
Yes, Paz should not care how your dev/production is set up. It is just good practice to keep them separate.

I think a separate cluster provisioning tool makes sense, which, as you said, allows you to choose etcd cluster topology, group machines, add metadata, etc.

sublimino commented on July 30, 2024

That cluster provisioning tool (/GUI?) sounds like it could be a cloud-config generator via https://terraform.io/ - in support of immutable infrastructure we should deploy a new host with the new config, health-check it, rebalance containers, and decommission the old host? Servers should automatically be distributed across AZs where applicable.
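Purely as a sketch of that idea (the resource and variable names are made up, and an AWS provider is assumed), the generator could emit one cloud-config per host group and Terraform would pass it to each instance as user data:

```hcl
# Hypothetical Terraform sketch: boot a small group of CoreOS workers,
# handing each a pre-rendered cloud-config as user data.
resource "aws_instance" "coreos_worker" {
  count         = 3
  ami           = "${var.coreos_ami}"   # CoreOS image for the chosen release channel
  instance_type = "m3.medium"

  # cloud-config rendered by the provisioning tool, one file per host group/environment
  user_data = "${file("cloud-config-worker.yml")}"
}
```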

etcd topology - for any sizeable deployment CoreOS recommends running a separate 5-node etcd cluster; otherwise etcd should run on each host.

rimusz commented on July 30, 2024

@sublimino https://terraform.io/ is a good choice for cloud setups. What about bare-metal?

Regarding etcd:
we should never run etcd on each host - it is a very bad idea, and CoreOS does not recommend it.
I got bitten by that setup very badly.
I would recommend a setup like this:

  1. Up to 9 workers: one etcd node.
  2. For 10 up to 50 worker machines: start with 3 etcd nodes, then increase to 5, and so on.

Also, the etcd machines do not have to be very powerful as they only run the etcd cluster; e.g. on GCE, g1-small instances work just fine.
I had a long chat about it with Kelsey when he was at the London Kubernetes meetup.
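A minimal cloud-config sketch of that split (the discovery token and metadata values are placeholders): dedicated machines run the etcd cluster itself, while workers run etcd only in proxy mode and carry the fleet metadata used for scheduling.

```yaml
#cloud-config
# Hypothetical config for a dedicated etcd machine
coreos:
  etcd2:
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  fleet:
    metadata: "role=etcd"
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
```

```yaml
#cloud-config
# Hypothetical config for a worker machine: etcd runs only as a local proxy here
coreos:
  etcd2:
    proxy: on
    discovery: https://discovery.etcd.io/<token>
    listen-client-urls: http://0.0.0.0:2379
  fleet:
    metadata: "role=worker,environment=production"
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
```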

lukebond commented on July 30, 2024

Agree with all of this, and I'm aware of the etcd-on-every-machine anti-pattern from previous experience (I was also at that meet-up). But since Paz doesn't do infra, that's down to whoever sets up the cluster.

rimusz commented on July 30, 2024

@lukebond yep, it is more a job for the cluster provisioning tool, which definitely makes sense to have, to prepare a cluster for Paz.

sublimino commented on July 30, 2024

@rimusz if those bare-metal machines are already accessible via SSH, we could conceivably rewrite the cloud-config file and reboot the server? We would have to ensure they're all on the same release channel.
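As a sketch of that approach (this assumes the hosts were installed to disk with coreos-install, which keeps the applied cloud-config at /var/lib/coreos-install/user_data; other install methods differ):

```bash
#!/bin/bash
# Hypothetical helper: push a regenerated cloud-config to an installed
# bare-metal CoreOS host and reboot so it is re-applied on the next boot.
set -euo pipefail

HOST="$1"        # e.g. core@10.0.0.12
NEW_CONFIG="$2"  # path to the regenerated cloud-config YAML

scp "$NEW_CONFIG" "${HOST}:/tmp/user_data"
ssh "$HOST" 'sudo mv /tmp/user_data /var/lib/coreos-install/user_data && sudo reboot'
```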

I've also been bitten by etcd 0.4 - hopefully that's fixed in v2, although I haven't stress-tested it myself yet.

Read "on each host" above as "on three or fewer node clusters" - my concern with running less than three nodes is loss of resilience and the smallest machine breaking the cluster (AWS micro/small is not sufficient for etcd nodes). How much hand-holding should a provisioning tool do, @lukebond? And possibly it's another issue as I've hijacked this one! :)

As a footnote, the upper bound of etcd nodes required for stability across any cluster size is 5 according to a chat with Alex Polvi via some Chubby engineers. Further nodes add no meaningful resilience.
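For context, the standard quorum arithmetic behind those numbers (not something spelled out in this thread): an etcd cluster of n members needs a majority of floor(n/2) + 1 to accept writes, so it tolerates floor((n-1)/2) member failures, which is why 3 and 5 are the usual sizes.

```
members   quorum   failures tolerated
   1         1             0
   3         2             1
   5         3             2
   7         4             3   (marginal gain; every write waits on more peers)
```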

rimusz commented on July 30, 2024

@sublimino We can offer a choice, e.g. if somebody wants a very small cluster of 3-5 nodes they can have just one etcd node if they want, then 3 or 5 etcd nodes depending on cluster size :-)
Yes, the AWS micro/small instances are very bad, but Google's g1-small (roughly the AWS small equivalent) runs my etcd clusters just fine. That is why I moved away from AWS to GCE.

@lukebond regarding this cluster provisioning tool, we need a separate repository under paz-sh.
I was looking forward to starting to mess with https://terraform.io/ for my small projects too, so we can put our heads together there to make a nice cluster provisioning tool for different clouds, multi-cloud and so on.

lukebond commented on July 30, 2024

@rimusz good idea. I took the liberty of choosing a name: https://github.com/paz-sh/clusterform

rimusz commented on July 30, 2024

👍

sublimino commented on July 30, 2024

Splendid!

rimusz commented on July 30, 2024

Will Paz support already provisioned clusters?
Maybe clusterform can be used there?

lukebond commented on July 30, 2024

@rimusz currently that's all it supports. There are some helper scripts for bringing up a cluster (really only for testing/playing), but the idea is that you've already got your cluster and then you put Paz on it.

rimusz commented on July 30, 2024

If Paz is going to use all that metadata stuff, some instructions need to be provided on which metadata settings need to be set on the current cluster to make Paz function properly.

lukebond commented on July 30, 2024

Yes, when we start using it. Currently there are no such requirements, but there soon will be, e.g. for tying the scheduler and service directory to a particular host (they're the ones that have a DB and therefore need a volume mount and must not move hosts). I've been doing that manually so far.

There will also be some metadata for environments, and as you say that needs to be defined and documented.
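As a sketch of what that documentation might eventually say (the metadata key paz-scheduler=true is invented here for illustration, not a settled requirement): the operator labels the chosen host via its cloud-config, and the corresponding unit's [X-Fleet] section matches on it so the service always lands on the machine that holds its data volume.

```yaml
# Hypothetical: fleet metadata in the cloud-config of the one host that should
# run the scheduler and service directory (the host with the data volume)
coreos:
  fleet:
    metadata: "paz-scheduler=true,environment=production"
```

```ini
# Hypothetical [X-Fleet] section in the corresponding Paz unit file
[X-Fleet]
MachineMetadata=paz-scheduler=true
```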

rimusz commented on July 30, 2024

Cool
