
Comments (10)

boudewijn-tribler commented on June 1, 2024

Caches should be identified by ID and type (currently only the ID is used, which causes the problem you describe above). We may even want to give each type its own request cache instance. That would simplify the code.

There is no longer a reason for the request cache to be global (historically Dispersy handled all messages, we are slowly moving away from this, i.e. making Community responsible).
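
A minimal sketch of what identifying cache entries by both ID and type could look like (the class and method names here are illustrative only, not the actual Dispersy API):

# hypothetical sketch: key entries by (identifier, cache type) instead of
# identifier alone, so caches of different types can share the same number
class RequestCache(object):
    def __init__(self):
        self._identifiers = {}

    def set(self, identifier, cache):
        # the class of the stored cache becomes part of the key
        self._identifiers[(identifier, type(cache))] = cache

    def get(self, identifier, cls):
        return self._identifiers.get((identifier, cls))

    def has(self, identifier, cls):
        return (identifier, cls) in self._identifiers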


NielsZeilemaker commented on June 1, 2024

Seems like a nice solution. Currently I can't run my experiments in debug mode because of this, and in optimized mode it causes problems because the clashes are not checked.


boudewijn-tribler commented on June 1, 2024

Does everyone still agree with the proposed solution?

  • remove the type handling from the cache
  • have one cache for each type
  • have each cache as a Community member (a rough sketch follows below)
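
As a rough sketch of the proposal (the names here are purely illustrative, not the actual Dispersy classes), each Community would own one small, type-less cache per request type:

# hypothetical sketch: one small cache per request type, owned by the Community
class SingleTypeRequestCache(object):
    def __init__(self):
        self._caches = {}

    def set(self, identifier, cache):
        self._caches[identifier] = cache

    def get(self, identifier):
        return self._caches.get(identifier)

    def has(self, identifier):
        return identifier in self._caches


class Community(object):
    def __init__(self):
        # no type argument is needed anywhere, because each cache holds one type
        self.request_cache_A = SingleTypeRequestCache()
        self.request_cache_B = SingleTypeRequestCache()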


NielsZeilemaker commented on June 1, 2024

The only problem I can think of is inheritance. Currently both the has and get methods allow you to pass a parent class to find a child class. If we create separate caches for each type, this won't work anymore.
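
For reference, a hypothetical sketch of the kind of parent-class lookup being described (the cache classes here are made up for illustration):

# hypothetical sketch: get() accepts a (parent) class and matches any stored
# cache that is an instance of it, so a child class can be found via its parent
class BaseCache(object):
    pass

class IntroductionRequestCache(BaseCache):
    pass

class RequestCache(object):
    def __init__(self):
        self._identifiers = {}

    def set(self, identifier, cache):
        self._identifiers[identifier] = cache

    def get(self, identifier, cls):
        cache = self._identifiers.get(identifier)
        return cache if isinstance(cache, cls) else None

request_cache = RequestCache()
request_cache.set(42, IntroductionRequestCache())
assert request_cache.get(42, BaseCache) is not None  # parent class finds child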


boudewijn-tribler commented on June 1, 2024

Using either has or get to obtain the cache associated with some ID would return whatever cache is stored. Things should be fine as long as the stored cache is compatible with what the code expects, i.e. has the required properties and methods.

Your point is still valid, but do we have a better option?


NielsZeilemaker commented on June 1, 2024

Could we not have a single requestcache that holds multiple caches, one per type? Types would then not be defined by the class you pass, but by a unicode identifier, very much like all the other identifiers.

That seems like a cleaner fix to me, and it won't require much refactoring, since we would still have a single request_cache member in each community, as opposed to ten separate request caches, one for each type of request.


boudewijn-tribler commented on June 1, 2024

Very true. The only downside I can think of to having a single requestcache is more code and more parameters in its API. With 10 'single requestcaches' we wouldn't need to pass and store the string that indicates the type, since there is one type per cache.

Effectively, do we call

# many small caches
self.request_cache_A.get(random_number)
self.request_cache_B.get(random_number)

or

# single big cache
self.request_cache.get(random_number, "A")
self.request_cache.get(random_number, "B")

I like the simplicity of multiple smaller caches over one big one, but as Niels pointed out, keeping one big cache will require less refactoring... choices choices 😸

Anyway, I don't mind taking the 'less refactoring' approach.


NielsZeilemaker commented on June 1, 2024

Still, a big single cache does not necessarily mean a "big single cache".
It's only the API; internally the request_cache could create a dictionary for each "type".

Moreover, I don't see the simplicity of multiple smaller caches. Implementation-wise, it's probably only a couple of lines per method of the request_cache, and it has the added benefit of being a single property in the community.
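
A rough sketch of that idea, using made-up names rather than the real Dispersy API: the single request_cache keeps one internal dictionary per unicode type identifier, so the public API only grows by one string parameter.

# hypothetical sketch: one request_cache with an internal dictionary per type
class RequestCache(object):
    def __init__(self):
        self._caches = {}

    def set(self, identifier, cache_type, cache):
        self._caches.setdefault(cache_type, {})[identifier] = cache

    def get(self, identifier, cache_type):
        return self._caches.get(cache_type, {}).get(identifier)

    def has(self, identifier, cache_type):
        return identifier in self._caches.get(cache_type, {})

request_cache = RequestCache()
request_cache.set(42, u"A", object())
assert request_cache.has(42, u"A") and not request_cache.has(42, u"B")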


synctext commented on June 1, 2024

Trying to understand the discussion above.

  1. What would be the minimal required change to fix this issue?
  2. Can we avoid the radical change of removing the single requestcache?

The idea of one request cache per community sounds nice, but I guess it is a lot of work with little in return (communities become more atomic, but this sits too deep inside Dispersy to use as a selling point).


synctext commented on June 1, 2024

(my above 'atomic' comment is now obsolete)

Complete Community isolation is now a goal for V6.2.
Spoofing is trivial if answers to requests are not peer-sensitive, I understand.
Hopefully we can avoid any collision and collusion with the 16-bit random outstanding-request identifier in a future refactoring attempt. New issue created: #90.
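
On the collision point, a small sketch (illustrative only, not the actual Dispersy code) of how a cache could claim a fresh 16-bit identifier by retrying until it finds one that is not already outstanding:

import random

# hypothetical sketch: pick random 16-bit identifiers until one is free,
# so two outstanding requests can never share the same number
def claim_identifier(outstanding):
    while True:
        identifier = random.getrandbits(16)
        if identifier not in outstanding:
            outstanding.add(identifier)
            return identifier

outstanding_requests = set()
print(claim_identifier(outstanding_requests))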

If Boudewijn, as the author, prefers this style: self.request_cache_A.get(random_number)
then please do that. Fixing #3 is then next, and that seems much more 'involved'...

