This is a quirk of the internal implementation that becomes apparent when capacity is very small and items are accessed in a sequence that causes them to be evicted before they are requested a second time.
Internally, ConcurrentLru maintains 3 queues: hot, warm and cold. When initialized with capacity = 5, each queue is allocated capacity/3 slots; 5/3 rounds down to 1, so each queue internally has a size of 1.
When an item is fetched for the first time, it is added to the hot queue. When the hot queue is full and a new item is added, the last item in hot is moved to either warm or cold, depending on whether it has been accessed since it was added. The same process applies to the warm and cold queues. When an existing item is fetched, it is marked as touched. When the queues are cycled, touched items in the warm and cold queues are moved back to the head of warm; untouched items in warm are moved to cold, and untouched items in cold are evicted.
So, in your repro, the sequence of steps is laid out below (and can be confirmed by stepping through in the debugger and looking at the internal state of the LRU).
Initial state, all queues empty and have capacity 1:
H[], W[], C[]
First for loop (items are added):
i = 0. H[0], W[], C[]
i = 1. H[1], W[], C[0]
i = 2. H[2], W[], C[1]
i = 3. H[3], W[], C[2]
i = 4. H[4], W[], C[3]
Second for loop (items are fetched a second time):
i = 0. H[0], W[], C[4]
i = 1. H[1], W[], C[0]
i = 2. H[2], W[], C[1]
i = 3. H[3], W[], C[2]
i = 4. H[4], W[], C[3]
Since each queue has a capacity of 1, items are cycled from hot to cold and then evicted before the second for loop fetches them again.
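The cycling described above can be sketched with a toy Python model. This is an illustration built from the description in this comment, not the actual BitFaster.Caching code; the class name, the naive `capacity // 3` sizing and the eviction order are assumptions:

```python
from collections import deque

class ToyPseudoLru:
    """Toy hot/warm/cold pseudo-LRU (illustration only; not the real
    implementation). Uses the naive capacity // 3 queue sizing."""

    def __init__(self, capacity):
        self.size = max(capacity // 3, 1)
        self.hot, self.warm, self.cold = deque(), deque(), deque()
        self.touched = {}   # key -> accessed since it was last moved?

    def get_or_add(self, key):
        if key in self.touched:        # hit: mark as touched
            self.touched[key] = True
            return True
        self.hot.appendleft(key)       # miss: items enter via hot
        self.touched[key] = False
        self._cycle()
        return False

    def _cycle(self):
        while len(self.hot) > self.size:
            k = self.hot.pop()         # touched -> warm, else -> cold
            (self.warm if self.touched[k] else self.cold).appendleft(k)
            self.touched[k] = False
        while len(self.warm) > self.size:
            k = self.warm.pop()        # touched stays in warm, else -> cold
            (self.warm if self.touched[k] else self.cold).appendleft(k)
            self.touched[k] = False
        while len(self.cold) > self.size:
            k = self.cold.pop()
            if self.touched[k]:        # touched -> back to warm
                self.warm.appendleft(k)
                self.touched[k] = False
            else:
                del self.touched[k]    # untouched in cold -> evicted

lru = ToyPseudoLru(5)                  # capacity 5 => 1 slot per queue
first = [lru.get_or_add(i) for i in range(5)]
second = [lru.get_or_add(i) for i in range(5)]
print(first, second)                   # every fetch in both loops is a miss
```

Stepping the model through the two loops reproduces the trace above, ending the first loop in state H[4], W[], C[3], with every fetch in the second loop also missing.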
If your for loop fetches only 2 items, ConcurrentLru will work as you expect with capacity 5. If you loop over 5 items and set capacity to 9, no items will be evicted before the second loop runs (each queue will have size = 3, so you will end up with 3 items in hot and 2 in cold). Alternatively, you can use ClassicLru, which performs worse under heavy concurrent load but maintains exact LRU order. Note that ClassicLru with size 5 has the same problem if you fetch 6 items in sequence.
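The capacity = 9 case can be checked with quick arithmetic (a sketch; the queue-sizing rule is taken from the description above):

```python
capacity = 9
queue_size = capacity // 3       # hot, warm and cold each get 3 slots
items = 5                        # the loop inserts 5 distinct items

in_hot = min(items, queue_size)  # the newest 3 items stay in hot
in_cold = items - in_hot         # the 2 displaced items land in cold
overflow = max(in_cold - queue_size, 0)

print(in_hot, in_cold, overflow)  # 3 2 0 -> cold never overflows, no evictions
```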
The ConcurrentLru algorithm is a pseudo LRU and gives approximate LRU order - it trades exact count/ordering for the best possible performance. In most practical settings, purely sequential fetches are uncommon - I mostly tested using a Zipf distribution, which is fairly standard in academic studies.
I can make this slightly better by fixing the rounding that occurs when capacity is not an exact multiple of 3 - this is a bug. For example, if the capacity is set to 5, I can make the hot and cold queues size 2, so there is the correct number of internal slots: H:2 + W:1 + C:2 = 5. But for your repro, the sequence of fetches would still force all items to be evicted even with the fix. This is because you are fetching 5 items in sequence before fetching again in sequence, but the combined capacity of hot + cold is 4 (the warm queue is never used, because no item is fetched again before it is evicted). So capacity is exceeded before items are fetched again.
In general, to get stable results for repeated sequence fetches of length n, ConcurrentLru capacity must be set to at least (n+1) + (n+1)/3.
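As a sanity check, that rule can be written as a one-liner (assuming the division rounds down, and assuming the fixed rounding described above; the helper name is made up for illustration):

```python
def min_capacity(n):
    # Smallest ConcurrentLru capacity for which repeated sequential
    # fetches of n items stay cached, per the (n+1) + (n+1)/3 rule.
    return (n + 1) + (n + 1) // 3

print(min_capacity(5))   # 8 -> looping over 5 items needs capacity >= 8
print(min_capacity(2))   # 4 -> consistent with capacity 5 working for 2 items
```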
Will document how it works here: https://github.com/bitfaster/BitFaster.Caching/wiki/ConcurrentLru