Comments (31)
This repository is the right location for the external cloud controller manager, and I'll be investing much more time in it this year. At some point, likely this year, we will migrate the source for the AWS cloud provider from upstream to this repo. At that point, development will shift from upstream to here. For now, we are importing the upstream cloud provider and relying on bug fixes upstream. That being said, significant work needs to be done this year on testing and documentation in this repository to make it usable, and that's one of my highest priority goals.
from cloud-provider-aws.
How fitting that @selslack comments that this repo is kinda dead, and the next comment is the bot adding a stale label lol...
Is there any status on this? I've had the same trouble as @darwin67 with getting the external cloud provider working on k8s 1.17, since in-tree is deprecated now. The Azure and OpenStack cloud provider repos actually have documentation on getting those working, but nothing for this one.
Is there anyone out there that's gotten this project working as an external cloud provider in recent versions of Kubernetes?
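For anyone else hitting this: the general out-of-tree pattern (from the upstream cloud-controller-manager docs, not anything specific to this repo) is to switch the core components to `--cloud-provider=external` and run the AWS manager as its own process. A rough sketch only; the `...` elides the rest of each command line, and the binary name may differ depending on how you build it:

```shell
# General out-of-tree cloud provider pattern (sketch, not complete commands):

# kubelet: hand node initialization off to the external provider
kubelet --cloud-provider=external ...

# kube-controller-manager: disable the in-tree cloud control loops
kube-controller-manager --cloud-provider=external ...

# then run the AWS cloud controller manager separately (e.g. as a pod on
# the control plane), selecting the AWS provider implementation
aws-cloud-controller-manager --cloud-provider=aws ...
```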
Would love to see action/support, or at least a definitive response from those who manage this repo. I was able to get the cloud controller and EBS driver working after hammering my head against it for a while, and building the cloud controller image myself, but it was far from pleasant, and the lack of support/responsiveness leads me to fear for future support of this config. Is Amazon abandoning Kubernetes, trying to force everyone to use EKS, or something else entirely?
Is there any plan to hoist the legacy code from https://github.com/kubernetes/legacy-cloud-providers, and into this repo, so that the code can be edited in a central place? Alternatively, would you be unhappy if someone else did that @andrewsykim? I realize it's "ugly", but it seems like it'd unblock some contributions?
This repo is kinda dead, yea.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
@nckturner 👋! I'm wondering if you have any more updates since your last comment a few months ago? Thanks!
/remove-lifecycle stale
Same here, looking for clear documentation. Has anyone figured out how to deploy this as a Kubernetes DaemonSet as instructed in https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/#examples?
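The examples in that linked page boil down to running the manager as a DaemonSet pinned to control-plane nodes, tolerating the `uninitialized` taint the kubelet sets under `--cloud-provider=external`. A minimal sketch following that pattern; the image reference is a placeholder (at the time of this thread there was no published image for this repo, so you must build and push your own), and the container name, flags, and RBAC setup are assumptions, not official values:

```yaml
# Hedged sketch of a cloud-controller-manager DaemonSet, following the
# pattern in the linked Kubernetes docs; not an official manifest.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: aws-cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: aws-cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: aws-cloud-controller-manager
    spec:
      serviceAccountName: cloud-controller-manager  # needs RBAC, not shown
      nodeSelector:
        node-role.kubernetes.io/master: ""          # run on control plane only
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        # must tolerate the taint kubelet applies before the cloud
        # provider has initialized the node
        - key: node.cloudprovider.kubernetes.io/uninitialized
          value: "true"
          effect: NoSchedule
      containers:
        - name: aws-cloud-controller-manager
          image: <your-registry>/aws-cloud-controller-manager:latest  # placeholder
          args:
            - --cloud-provider=aws
            - --leader-elect=true
```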
Hey, thanks for your interest! We are working on investing in documentation and publishing container images, but we're always looking for help! If you're interested in contributing, please let us (myself, @andrewsykim and @justinsb) know as we build out the documentation!
/assign
You also don't have any sample manifests for deploying the cloud-controller-manager, and I haven't had any luck getting the AWS provider to work as an external cloud provider ever since I first attempted it at 1.13.
@darwin67 regarding sample manifests, we added some in #93. Until we have a public image repo you have to build the image yourself though.
@andrewsykim thanks for the update. Great to see you joining as the owner, and I hope this project will be getting more updates.
I'm no longer at the company where I filed this request, so I don't have the k8s clusters to provide feedback anymore, but I'm happy to keep the issue open until the request is resolved.
Looking for some feedback on what the documentation for this project should look like; please comment on #102 if you have thoughts/opinions.
I may have missed some context here. So currently the "central place" is in https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/legacy-cloud-providers/aws. That is consumed by the in-tree provider and eventually here via k8s.io/legacy-cloud-providers. This ensures we only have to maintain 1 provider at the moment. In the near future we will cut the tie to legacy-cloud-providers and port the provider into this repo and develop it here. But that can only happen once the in-tree providers are removed.
Are you proposing to fork the current provider into this repo and develop them separately?
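The consumption chain described above shows up in module metadata: this repo depends on the staged k8s.io/legacy-cloud-providers module rather than carrying its own copy of the AWS code. Roughly, with placeholder versions rather than the actual pins:

```
// go.mod (fragment; versions are illustrative placeholders)
module k8s.io/cloud-provider-aws

require (
    // shared cloud-controller-manager framework
    k8s.io/cloud-provider v0.18.0
    // the actual AWS provider implementation, staged out of k8s.io/kubernetes
    k8s.io/legacy-cloud-providers v0.18.0
)
```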
@andrewsykim correct. The fact that any functionality change has to be made in that repo, and then this repo has to be updated is messy. It also makes maintaining our own patchset more difficult, as that other repo has a bunch of unrelated stuff.
IMHO, it would be easier to declare bankruptcy on that existing repo and say it's EOL, and have people move to binaries from this repo -- And hoist the relevant AWS code into this repo. There are still aspects of legacy-cloud-providers we might want to use -- like configuration, but I don't see any reason to keep the AWS-specific functionality there.
We need to be careful about breaking existing behavior. If we branch off we could lose bug fixes or accidentally break compatibility for users migrating from in-tree to out-of-tree.
I would be in favor of just starting a v2 provider on a clean slate and redesigning it from the ground up (i.e. enabled with --cloud-provider=aws/v2). It would only be supported for new clusters. We can take the good parts of the existing provider and replace the bad parts. Do folks have an appetite for this as opposed to building on top of the existing provider?
I would much rather see an incremental approach to a v2. We have immediate interests in features such as being able to make the node name the i-, or adding EC2 health check info into the node conditions, and we're not wanting to scorch the earth over them.
I can put together a PR proposal, if you want. As far as I know this project has no official releases as of yet. We could do this, and release a 0.0.1alpha, or similar.
I would much rather see an incremental approach to a v2. We have immediate interests in features such as being able to make the node name the i-, or adding EC2 health check info into the node conditions, and we're not wanting to scorch the earth over them.
This is totally fair, but many of the common feature requests from users, like the node name change, are very difficult to implement without breaking existing clusters. The migration semantics get complicated very quickly. Starting on a clean slate here could possibly be less work overall.
I can put together a PR proposal, if you want. As far as I know this project has no official releases as of yet. We could do this, and release a 0.0.1alpha, or similar.
Sure, I would be open to this and we can continue discussions there. Worth noting that we will likely cut an alpha version soon, we were just blocked on getting our GCR registry setup for a while (kubernetes/k8s.io#859).
Would it make sense to slurp-over the existing code from https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/legacy-cloud-providers/aws, and keep a v1 branch from which changes made here are then replicated over to there until such time as "there" is removed? That would allow migrating discussion and feature development for the AWS cloud-provider here, since https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/legacy-cloud-providers has a pretty clear "Do not add new features here" note, which we're gleefully ignoring for things like kubernetes/kubernetes#79523. Edit: Ignore this, I just saw #61 which moved the other way.
It's not terribly clear to me what the timeline is for removing in-tree provider support, but clearly this project needs to be up-and-running and probably in wide use before that can happen for the in-tree AWS support.
I guess it depends on how different you envisage v1 and v2 being, from either a UX or code-design approach: whether cloud-provider-aws v1 and v2 ever need to be codeveloped, or if that division can be "in-tree"/"out-of-tree" forever.
I guess it depends on how different you envisage v1 and v2 being, from either a UX or code-design approach: whether cloud-provider-aws v1 and v2 ever need to be codeveloped, or if that division can be "in-tree"/"out-of-tree" forever.
My thinking here is: v1 (current implementation) is both in-tree / out-of-tree with almost identical behavior. v2 can be a complete rewrite from scratch where we take the good from v1 and redo the bad.
FYI folks, we cut the first alpha release https://github.com/kubernetes/cloud-provider-aws/releases/tag/v1.18.0-alpha.0
Please try it out and provide feedback; an example manifest is linked in the release notes.
Another relevant question: Where should AWS Cloud Provider issues be lodged? The code lives in https://github.com/kubernetes/kubernetes/ but the code-ownership and future publishing vests here (I guess?). I'm noticing bug reports in both trackers, and sometimes for the same issue.
@andrewsykim See here: #111
I still do not think "starting from scratch" is a great idea....
I still do not think "starting from scratch" is a great idea....
Starting a v2 provider from scratch wouldn't mean we abandon the existing one. There are some feature requests for the legacy provider, like the node and ELB name change, that are just too difficult to implement without breaking existing clusters. We can maintain both providers for the foreseeable future.
@TBBle I think either works, maybe using this repo would make them easier to find and fit better with future goals for the project, but I doubt we will be able to prevent others from filing issues at k/k, so we'll have to be cognizant of both.
@sargun I appreciate your dilemma. I'm open to all options, but we really do have to be careful about breaking existing users. That being said, we need a way to allow contributions that doesn't cause excessive friction. I'm guessing you've submitted your patches upstream at some point and they stagnated; could you link any PRs you have open? If not, let's at least start by opening PRs against k/k so we can discuss them, and decide between v2, copying code over into this repo, or merging into upstream.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.