Comments (10)

p-bakker commented on July 20, 2024

Any update on this?

We're looking for a way to keep all OAuth client registrations in sync between multiple Hydra deployments, each connected to its own DB.

These are all completely separate Hydra instances, each backed by its own IdP; the only thing they share is the set of OAuth clients (a multi-tenant setup with a marketplace for integrations that should be available to all tenants).

Ideally, multiple Hydra instances that are all connected to the same DB (for scaling purposes, zero-downtime deployments, etc.) would be seen as one from Maester's perspective: when the Hydra deployment for a single tenant runs multiple instances against the same DB, Maester would make its API calls once, at the K8s Service level, and the Service would just forward each request to one of the Hydra instances.
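
To make that concrete, here is a minimal sketch (not hydra-maester's actual code) of what a single sync call through a Service could look like; the Service name, namespace, admin port (4445), and the /clients path are assumptions based on Hydra's defaults:

```go
// Hypothetical example: upsert an OAuth2 client through a K8s Service, which
// load-balances to one Hydra replica; since all replicas share the same DB,
// a single call covers the whole deployment.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Cluster-internal DNS name of an assumed Hydra admin Service.
	adminURL := "http://hydra-admin.tenant-a.svc.cluster.local:4445/clients"

	body := []byte(`{"client_id": "marketplace-integration", "grant_types": ["client_credentials"]}`)

	resp, err := http.Post(adminURL, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```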

aeneasr commented on July 20, 2024

I believe that Kyma has pushed this back a bit, as they are working on other issues right now. If you have some time to dedicate to this issue, I think it could be a good addition.

I think we could sync this quite easily by adding a label selector that works across namespaces and executing the necessary queries, as pointed out above. Maybe instead of a CronJob we could simply listen for updates (e.g. a new deployment, a pod restart) of the Hydra containers?
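
For the label-selector part, a rough client-go sketch; the label key and value are made up for illustration:

```go
// Sketch: discover every labeled Hydra Service across all namespaces.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// An empty namespace ("") lists Services across all namespaces; the
	// selector picks out every Hydra deployment that opted in via the label.
	svcs, err := clientset.CoreV1().Services("").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "app.kubernetes.io/name=hydra", // assumed label
	})
	if err != nil {
		panic(err)
	}
	for _, svc := range svcs.Items {
		fmt.Printf("would sync clients to %s/%s\n", svc.Namespace, svc.Name)
	}
}
```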

p-bakker commented on July 20, 2024

I haven't done any development work for K8s, only used it to deploy things, but I'll try to carve out some time and see what I can whip up :-)

Maybe instead of a CronJob we could simply listen for updates (e.g. a new deployment, a pod restart) of the Hydra containers?

Yeah, AFAIK K8s has ways to listen for such events, so a CronJob shouldn't be needed.
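
For reference, a sketch of that event-driven approach using a client-go shared informer; the resync interval and handler bodies are placeholders:

```go
// Sketch: react to Service add/update events instead of polling on a schedule.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	informer := factory.Core().V1().Services().Informer()

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			svc := obj.(*corev1.Service)
			// A new matching Service appeared: sync all existing clients to it.
			fmt.Printf("new Service %s/%s\n", svc.Namespace, svc.Name)
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			// A restarted in-memory Hydra behind the Service would need a re-sync.
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	cache.WaitForCacheSync(stop, informer.HasSynced)
	select {} // block forever; a real controller would wire up signal handling
}
```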

aeneasr commented on July 20, 2024

Awesome! And if that turns out to be unfruitful or takes too long, you can always resort to a simple Go program that does it for you.

piotrmsc commented on July 20, 2024

A CronJob was what we considered at the beginning; on second thought we came up with the K8s Endpoints approach, which returns the IPs of all pods behind a given K8s Service. So if you have, for example, autoscaling of Hydra, it will return the list of IPs of the Hydra instances. But... we did not plan to support synchronization between totally different Hydra instances (different K8s Services). K8s Endpoints could still be used here, but the controller would need configuration listing the Hydra Services deployed inside the cluster (in different namespaces / behind different K8s Services); for each registered Service it could then fetch the endpoints and make the synchronization calls.
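
A sketch of that Endpoints lookup with client-go; the namespace and Service name are placeholders:

```go
// Sketch: read the Endpoints object of a Hydra Service to get the pod IPs
// of every replica behind it.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The Endpoints object tracks the IPs of all ready pods behind the
	// Service and is updated automatically when the deployment scales.
	eps, err := clientset.CoreV1().Endpoints("tenant-a").Get(context.TODO(), "hydra-admin", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, subset := range eps.Subsets {
		for _, addr := range subset.Addresses {
			for _, port := range subset.Ports {
				fmt.Printf("hydra instance at %s:%d\n", addr.IP, port.Port)
			}
		}
	}
}
```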

Thoughts, @p-bakker @aeneasr?

p-bakker commented on July 20, 2024

K8s Endpoints could still be used here, but the controller would need configuration listing the Hydra Services deployed inside the cluster (in different namespaces / behind different K8s Services); for each registered Service it could then fetch the endpoints and make the synchronization calls.

Not 100% sure what exactly you're suggesting, and as I said I'm not yet familiar with developing K8s controllers (or the lingo), but I think it should be something like this:

  • the controller watches for new Services being created that match the configured labels, and when a new Service appears, it syncs all OAuth2 client CRs to it
  • the controller also watches for changes to the CRs, and when it notices a change, it finds all Services that match the configured labels and calls Hydra's /clients API endpoint through each Service (the actual path from the Service URL to Hydra's /clients endpoint being configurable)

Now, I don't know (yet) whether the watching for new Services can be made generic enough to watch either Services or Pods based on some config, but if that's possible, I think a single implementation could serve both the current setup and the multi-tenant one; see the sketch below.
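
A very rough sketch of that reconcile flow in controller-runtime style (which is what hydra-maester is built on); the label is assumed and the CR fetch is omitted:

```go
// A stand-in for hydra-maester's reconciler, showing the flow from the two
// bullet points above: on a CR change, find all matching Services and replay
// the client registration against each of them.
package controllers

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type OAuth2ClientReconciler struct {
	client.Client
}

func (r *OAuth2ClientReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// (Fetching the OAuth2Client CR for req.NamespacedName is omitted here.)

	// Find every Hydra admin Service that opted in via the assumed label.
	var svcs corev1.ServiceList
	if err := r.List(ctx, &svcs, client.MatchingLabels{"app.kubernetes.io/name": "hydra"}); err != nil {
		return ctrl.Result{}, err
	}

	// The path below the Service URL would be configurable, as noted above.
	for _, svc := range svcs.Items {
		url := fmt.Sprintf("http://%s.%s.svc.cluster.local:4445/clients", svc.Name, svc.Namespace)
		fmt.Println("would POST the client definition to", url)
	}
	return ctrl.Result{}, nil
}
```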

aeneasr commented on July 20, 2024

Ah @piotrmsc - I remember, your use case was to support multiple in-memory hydras that are logically the same Authorization Server, right?

piotrmsc commented on July 20, 2024

Sorry for the late response... TBH we are offering in-memory Hydra as the default (playground) config, with persistence coming soon in the production profile.

We are thinking about namespace isolation with Ory, i.e. one Hydra per namespace, but that is a thought for the future.

Using K8s Endpoints was meant to target multiple instances of Hydra cluster-wide. In K8s, if you create a Service for your app and scale the deployment, the Endpoints object for that Service is updated with the IP address of each new instance. In the multi-tenant case, the controller should check whether the OAuth2 client CR carries information about a particular Hydra deployment and then make a call to every Hydra instance behind that Service in the given namespace.
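
Combined with the Endpoints lookup sketched earlier, the fan-out to every instance could look roughly like this; the port, path, and syncOne helper are hypothetical:

```go
// Hypothetical fan-out: POST the client definition to every pod IP from the
// Endpoints object, since in-memory Hydra replicas don't share state.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func syncOne(ip string, clientJSON []byte) error {
	// Call each replica directly by pod IP rather than through the Service,
	// so that no instance misses the registration.
	url := fmt.Sprintf("http://%s:4445/clients", ip) // assumed admin port/path
	resp, err := http.Post(url, "application/json", bytes.NewReader(clientJSON))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	clientJSON := []byte(`{"client_id": "example"}`)
	// In the controller these IPs would come from the Endpoints lookup above.
	for _, ip := range []string{"10.0.0.12", "10.0.0.13"} {
		if err := syncOne(ip, clientJSON); err != nil {
			fmt.Println("sync failed for", ip, ":", err)
		}
	}
}
```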

aeneasr commented on July 20, 2024

We've done a couple of iterations on other projects and believe that we'll switch to gobuffalo/pop as the DBAL on all projects, which also allows us to easily adopt SQLite. I'm not sure how well that scales in terms of write access (the file is probably locked? not sure), but it will add some basic persistence in "playground" scenarios.
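
For the playground case, a minimal sketch of pointing pop at a SQLite file, assuming pop's NewConnection/Open API and its sqlite3 dialect (which may require building with the sqlite build tag):

```go
// Hypothetical sketch: basic file-backed persistence via gobuffalo/pop and
// SQLite. SQLite serializes writers by locking the database file, so this
// suits a playground better than heavy concurrent write loads.
package main

import (
	"fmt"

	"github.com/gobuffalo/pop/v6"
)

func main() {
	conn, err := pop.NewConnection(&pop.ConnectionDetails{
		Dialect:  "sqlite3",
		Database: "playground.db", // a single file on disk
	})
	if err != nil {
		panic(err)
	}
	if err := conn.Open(); err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("SQLite connection open")
}
```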

aeneasr commented on July 20, 2024

I am closing this issue as it has not received any engagement from the community or maintainers in a long time. That does not imply that the issue has no merit. If you feel strongly about this issue:

  • open a PR referencing and resolving the issue;
  • leave a comment on it and discuss ideas for how you could contribute towards resolving it;
  • open a new issue with updated details and a plan on resolving the issue.

We are cleaning up issues every now and then, primarily to keep the 4000+ issues in our backlog in check and to prevent maintainer burnout. Burnout in open source maintainership is a widespread and serious issue; it can lead to severe personal and health problems, and it can enable catastrophic attack vectors.

Thank you to anyone who participated in the issue! 🙏✌️
