
Comments (31)

nckturner commented on June 28, 2024

This repository is the right location for the external cloud controller manager, and I'll be investing much more time in it this year. At some point, likely this year, we will migrate the source for the AWS cloud provider from upstream to this repo. At that point, development will shift from upstream to here. For now, we are importing the upstream cloud provider and relying on bug fixes upstream. That said, significant work needs to be done this year on testing and documentation in this repository to make it usable, and that's one of my highest-priority goals.

from cloud-provider-aws.

particledecay commented on June 28, 2024

How fitting that @selslack comments that this repo is kinda dead, and the next comment is the bot adding a stale label lol...

Is there any update on this? I've had the same trouble as @darwin67 getting the external cloud provider working on k8s 1.17, since the in-tree provider is deprecated now. The Azure and OpenStack cloud provider repos actually have documentation on getting those working, but there's nothing for this one.

Is there anyone out there who's gotten this project working as an external cloud provider in recent versions of Kubernetes?
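For anyone hitting the same wall: the general out-of-tree flow documented upstream (running-cloud-controller) is to run every component with the in-tree provider disabled and to deploy the AWS cloud-controller-manager as its own component. A rough sketch of the flags involved, not specific to this repo's (then-missing) docs; exact flags vary by Kubernetes version:

```shell
# Sketch of the generic out-of-tree cloud provider flow from the upstream
# running-cloud-controller docs; flag details vary by Kubernetes version.

# 1. Each kubelet opts out of the in-tree provider:
kubelet --cloud-provider=external ...

# 2. kube-controller-manager stops running the cloud-specific control loops:
kube-controller-manager --cloud-provider=external ...

# 3. The AWS cloud-controller-manager (built from this repo) runs separately,
#    typically as a DaemonSet or Deployment on control-plane nodes:
aws-cloud-controller-manager --cloud-provider=aws ...
```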


brookssw commented on June 28, 2024

Would love to see action/support, or at least a definitive response from those who manage this repo. I was able to get the cloud controller and EBS driver working after hammering my head against it for a while (and building the cloud controller image myself), but it was far from pleasant, and the lack of support/responsiveness makes me fear for the future of this configuration. Is Amazon abandoning Kubernetes, trying to force everyone onto EKS, or something else entirely?


sargun commented on June 28, 2024

Is there any plan to hoist the legacy code from https://github.com/kubernetes/legacy-cloud-providers into this repo, so that the code can be edited in a central place? Alternatively, would you be unhappy if someone else did that, @andrewsykim? I realize it's "ugly", but it seems like it would unblock some contributions.


selslack commented on June 28, 2024

This repo is kinda dead, yea.


fejta-bot commented on June 28, 2024

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale


selslack commented on June 28, 2024

/remove-lifecycle stale


fejta-bot commented on June 28, 2024

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale


ncdc commented on June 28, 2024

@nckturner 👋! I'm wondering if you have any more updates since your last comment a few months ago? Thanks!

/remove-lifecycle stale


StrongMonkey commented on June 28, 2024

Same here, looking for clear documentation. Has anyone figured out how to deploy this as a Kubernetes DaemonSet, as instructed in https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/#examples?
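Lacking official manifests at the time, something along these lines follows the pattern in the upstream running-cloud-controller example. The image reference is a placeholder (you had to build and push the image yourself), and the ServiceAccount and its RBAC bindings are assumed to exist already:

```yaml
# Sketch of a cloud-controller-manager DaemonSet, modeled on the upstream
# running-cloud-controller example. Image and ServiceAccount are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: aws-cloud-controller-manager
  namespace: kube-system
  labels:
    k8s-app: aws-cloud-controller-manager
spec:
  selector:
    matchLabels:
      k8s-app: aws-cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: aws-cloud-controller-manager
    spec:
      serviceAccountName: cloud-controller-manager  # assumed to exist, with RBAC
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
        # Allow scheduling before the cloud provider has initialized the node.
        - key: node.cloudprovider.kubernetes.io/uninitialized
          value: "true"
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: aws-cloud-controller-manager
          image: <your-registry>/aws-cloud-controller-manager:latest  # placeholder
          args:
            - --cloud-provider=aws
            - --leader-elect=true
```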


nckturner commented on June 28, 2024

Hey, thanks for your interest! We are investing in documentation and in publishing container images, but we're always looking for help! If you're interested in contributing, please let us (myself, @andrewsykim, and @justinsb) know as we build out the documentation!


andrewsykim commented on June 28, 2024

/assign


andrewsykim commented on June 28, 2024

You also don't have any sample manifests for deploying the cloud-controller-manager, and I haven't had any luck getting the AWS provider to work as an external cloud provider ever since I first attempted it at 1.13.

@darwin67 regarding sample manifests, we added some in #93. Until we have a public image repo you have to build the image yourself, though.


darwin67 commented on June 28, 2024

@andrewsykim thanks for the update. Great to see you joining as the owner; I hope this project will be getting more updates.
I'm no longer at the company where I filed this request, so I don't really have the k8s clusters to provide feedback anymore, but I'm happy to keep the issue open until the request is resolved.


andrewsykim commented on June 28, 2024

Looking for some feedback on what the documentation for this project should look like; please comment on #102 if you have thoughts/opinions.


andrewsykim commented on June 28, 2024

I may have missed some context here. Currently the "central place" is https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/legacy-cloud-providers/aws. That code is consumed by the in-tree provider, and eventually here via k8s.io/legacy-cloud-providers. This ensures we only have to maintain one provider at the moment. In the near future we will cut the tie to legacy-cloud-providers and port the provider into this repo to develop it here, but that can only happen once the in-tree providers are removed.

Are you proposing to fork the current provider into this repo and develop them separately?


sargun commented on June 28, 2024

@andrewsykim correct. The fact that any functionality change has to be made in that repo, and then this repo has to be updated, is messy. It also makes maintaining our own patch set more difficult, as that other repo contains a bunch of unrelated code.

IMHO, it would be easier to declare bankruptcy on the existing repo, mark it EOL, and have people move to binaries from this repo -- and hoist the relevant AWS code into this repo. There are still aspects of legacy-cloud-providers we might want to use, like configuration, but I don't see any reason to keep the AWS-specific functionality there.


andrewsykim commented on June 28, 2024

We need to be careful about breaking existing behavior. If we branch off, we could lose bug fixes or accidentally break compatibility for users migrating from in-tree to out-of-tree.

I would be in favor of starting a v2 provider on a clean slate and redesigning it from the ground up (i.e. enabled with --cloud-provider=aws/v2). It would only be supported for new clusters. We can take the good parts of the existing provider and replace the bad parts. Do folks have an appetite for this, as opposed to building on top of the existing provider?


sargun commented on June 28, 2024

I would much rather see an incremental approach to a v2. We have immediate interests in features such as being able to make the node name the i-, or adding EC2 health-check info into the node conditions; we're not wanting to scorch the earth over these.

I can put together a PR proposal, if you want. As far as I know, this project has no official releases yet. We could do this and release a 0.0.1-alpha, or similar.


andrewsykim commented on June 28, 2024

I would much rather see an incremental approach to a v2. We have immediate interests in features such as being able to make the node name the i-, or adding EC2 health-check info into the node conditions; we're not wanting to scorch the earth over these.

This is totally fair, but many of the common feature requests from users, like the node name change, are very difficult to implement without breaking existing clusters. The migration semantics get complicated very quickly. Starting from a clean slate here could well be less work overall.

I can put together a PR proposal, if you want. As far as I know, this project has no official releases yet. We could do this and release a 0.0.1-alpha, or similar.

Sure, I would be open to this, and we can continue discussions there. Worth noting that we will likely cut an alpha version soon; we were just blocked on getting our GCR registry set up for a while (kubernetes/k8s.io#859).


TBBle commented on June 28, 2024

Would it make sense to slurp over the existing code from https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/legacy-cloud-providers/aws and keep a v1 branch, from which changes made here are replicated back there until such time as "there" is removed? That would allow migrating discussion and feature development for the AWS cloud provider here, since https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/legacy-cloud-providers has a pretty clear "Do not add new features here" note, which we're gleefully ignoring for things like kubernetes/kubernetes#79523. Edit: Ignore this, I just saw #61, which moved the other way.

It's not terribly clear to me what the timeline is for removing in-tree provider support, but clearly this project needs to be up and running, and probably in wide use, before that can happen for the in-tree AWS support.

I guess it depends on how different you envisage v1 and v2 being, from either a UX or code-design approach: whether cloud-provider-aws v1 and v2 ever need to be co-developed, or whether that division can be "in-tree"/"out-of-tree" forever.


andrewsykim commented on June 28, 2024

I guess it depends on how different you envisage v1 and v2 being, from either a UX or code-design approach: whether cloud-provider-aws v1 and v2 ever need to be co-developed, or whether that division can be "in-tree"/"out-of-tree" forever.

My thinking here is: v1 (the current implementation) is both in-tree and out-of-tree with almost identical behavior. v2 can be a complete rewrite from scratch, where we take the good from v1 and redo the bad.


andrewsykim commented on June 28, 2024

FYI folks, we cut the first alpha release: https://github.com/kubernetes/cloud-provider-aws/releases/tag/v1.18.0-alpha.0

Please try it out and provide feedback; an example manifest is linked in the release notes.


TBBle commented on June 28, 2024

Another relevant question: where should AWS cloud provider issues be lodged? The code lives in https://github.com/kubernetes/kubernetes/ but the code ownership and future publishing vest here (I guess?). I'm noticing bug reports in both trackers, sometimes for the same issue.


sargun commented on June 28, 2024

@andrewsykim See here: #111

I still do not think "starting from scratch" is a great idea....


andrewsykim commented on June 28, 2024

I still do not think "starting from scratch" is a great idea....

Starting a v2 provider from scratch wouldn't mean we abandon the existing one. There are some feature requests for the legacy provider, like the node and ELB name changes, that are just too difficult to implement without breaking existing clusters. We can maintain both providers for the foreseeable future.


nckturner commented on June 28, 2024

@TBBle I think either works. Using this repo would probably make them easier to find and fit better with the future goals of the project, but I doubt we'll be able to prevent people from filing issues at k/k, so we'll have to be cognizant of both.

@sargun I appreciate your dilemma. I'm open to all options, but we really do have to be careful about breaking existing users. That being said, we need a way to allow contributions that doesn't cause excessive friction. I'm guessing you've submitted your patches upstream at some point and they stagnated; could you link any PRs you have open? If not, let's at least start by opening PRs against k/k so we can discuss them, and decide between v2, copying code into this repo, or merging into upstream.


fejta-bot commented on June 28, 2024

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale


fejta-bot commented on June 28, 2024

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten


fejta-bot commented on June 28, 2024

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close


k8s-ci-robot commented on June 28, 2024

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

