Comments (2)
I think it is possible, but you would need to tweak the item policy to choose a different expiry per item. The default policy uses a fixed expiry for all items - it's the simplest approach that works, with the lowest cost.
Specifically, you would need to replace TLruLongTicksPolicy with your own implementation. The existing definition of ConcurrentTLru looks like this:
public sealed class ConcurrentTLru<K, V>
    : ConcurrentLruCore<K, V, LongTickCountLruItem<K, V>, TLruLongTicksPolicy<K, V>, TelemetryPolicy<K, V>>
{
}
You would switch to a PerValueTLruLongTicksPolicy, and define your own new cache class like this:
public sealed class NewConcurrentTLru<K, V>
    : ConcurrentLruCore<K, V, LongTickCountLruItem<K, V>, PerValueTLruLongTicksPolicy<K, V>, TelemetryPolicy<K, V>>
{
    public NewConcurrentTLru(int capacity, Func<V, TimeSpan> getTimeToLive)
        : base(Environment.ProcessorCount,
            new FavorWarmPartition(capacity),
            EqualityComparer<K>.Default,
            new PerValueTLruLongTicksPolicy<K, V>(getTimeToLive),
            default)
    {
    }
}
Where PerValueTLruLongTicksPolicy would be the same as TLruLongTicksPolicy, but with different CreateItem and Update methods:
public readonly struct PerValueTLruLongTicksPolicy<K, V> : IItemPolicy<K, V, LongTickCountLruItem<K, V>>
{
    private static readonly double stopwatchAdjustmentFactor = Stopwatch.Frequency / (double)TimeSpan.TicksPerSecond;

    private readonly long epoch;
    private readonly Func<V, long> getTimeToLive;

    public PerValueTLruLongTicksPolicy(Func<V, TimeSpan> getTimeToLive)
        : this(v => PerValueTLruLongTicksPolicy<K, V>.ToTicks(getTimeToLive(v)))
    {
    }

    public PerValueTLruLongTicksPolicy(Func<V, long> getTimeToLive)
    {
        this.epoch = Stopwatch.GetTimestamp();
        this.getTimeToLive = getTimeToLive;
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public LongTickCountLruItem<K, V> CreateItem(K key, V value)
    {
        // Store TickCount = ttl + (created - epoch), i.e. the expiry expressed
        // relative to the epoch, so ShouldDiscard reduces to (now - created) > ttl.
        long ttl = this.getTimeToLive(value);
        ttl += Stopwatch.GetTimestamp() - epoch;
        return new LongTickCountLruItem<K, V>(key, value, ttl);
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public void Touch(LongTickCountLruItem<K, V> item)
    {
        item.WasAccessed = true;
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public void Update(LongTickCountLruItem<K, V> item)
    {
        // Recompute the expiry from the new value, exactly as in CreateItem.
        long ttl = this.getTimeToLive(item.Value);
        ttl += Stopwatch.GetTimestamp() - epoch;
        item.TickCount = ttl;
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public bool ShouldDiscard(LongTickCountLruItem<K, V> item)
    {
        // TickCount holds ttl + (created - epoch), so this is (now - created) > ttl.
        if (Stopwatch.GetTimestamp() - item.TickCount > this.epoch)
        {
            return true;
        }

        return false;
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public bool CanDiscard()
    {
        return true;
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public ItemDestination RouteHot(LongTickCountLruItem<K, V> item)
    {
        if (this.ShouldDiscard(item))
        {
            return ItemDestination.Remove;
        }

        if (item.WasAccessed)
        {
            return ItemDestination.Warm;
        }

        return ItemDestination.Cold;
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public ItemDestination RouteWarm(LongTickCountLruItem<K, V> item)
    {
        if (this.ShouldDiscard(item))
        {
            return ItemDestination.Remove;
        }

        if (item.WasAccessed)
        {
            return ItemDestination.Warm;
        }

        return ItemDestination.Cold;
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public ItemDestination RouteCold(LongTickCountLruItem<K, V> item)
    {
        if (this.ShouldDiscard(item))
        {
            return ItemDestination.Remove;
        }

        if (item.WasAccessed)
        {
            return ItemDestination.Warm;
        }

        return ItemDestination.Remove;
    }

    // A single fixed TimeToLive no longer makes sense when expiry is per value.
    public TimeSpan TimeToLive => TimeSpan.Zero;

    public static long ToTicks(TimeSpan timespan)
    {
        return (long)(timespan.Ticks * stopwatchAdjustmentFactor);
    }

    public static TimeSpan FromTicks(long ticks)
    {
        return TimeSpan.FromTicks((long)(ticks / stopwatchAdjustmentFactor));
    }
}
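One subtlety worth calling out: for ShouldDiscard's comparison to reduce to "elapsed since creation > ttl", CreateItem must add the elapsed time since the epoch to the ttl (storing TickCount = ttl + (created - epoch)); subtracting it instead would make items expire relative to the epoch rather than their creation time. A standalone sketch with simulated timestamps, independent of the cache library:

```csharp
using System;

// Simulated Stopwatch timestamps (plain longs, no real clock needed).
long epoch = 1_000;  // timestamp captured when the policy was constructed
long t0 = 1_500;     // timestamp when the item was created
long ttl = 2_000;    // per-value time to live, in stopwatch ticks

// CreateItem: store the expiry expressed relative to the epoch.
long tickCount = ttl + (t0 - epoch);  // 2_500

// ShouldDiscard: (now - tickCount) > epoch  is equivalent to  (now - t0) > ttl.
bool ShouldDiscard(long now) => now - tickCount > epoch;

Console.WriteLine(ShouldDiscard(t0 + ttl));      // False: exactly at the ttl boundary
Console.WriteLine(ShouldDiscard(t0 + ttl + 1));  // True: one tick past expiry
```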
Obviously, this depends on you being able to tell that the value came from the database just by inspecting its properties - from your description it sounds like this is the case. I have never tested this, so there could be a hidden problem, but I think it would work.
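As a usage sketch, picking a longer expiry for values that came from the database might look like this. CacheEntry and its FromDatabase flag are invented for illustration, and this assumes the NewConcurrentTLru class above compiles against the library:

```csharp
using System;

// Hypothetical value type - the FromDatabase flag is an assumption about your model.
public record CacheEntry(string Payload, bool FromDatabase);

public static class Example
{
    public static void Main()
    {
        var cache = new NewConcurrentTLru<int, CacheEntry>(
            capacity: 1000,
            getTimeToLive: entry => entry.FromDatabase
                ? TimeSpan.FromMinutes(10)   // database values can live longer
                : TimeSpan.FromSeconds(30)); // other values expire quickly

        var entry = cache.GetOrAdd(42, key => new CacheEntry("payload", FromDatabase: true));
    }
}
```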
The purpose of using a struct and marking the methods for aggressive inlining is that it lets the JIT do extra tricks - for example, eliding unused code. The downside is that strange things can happen if you make the struct mutable. And obviously any code you inject will influence the performance of creating and updating items.
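The JIT behavior being relied on here is generic specialization: when a type parameter is constrained to a struct, the runtime compiles a separate method body per struct type, so the interface calls become direct, inlinable, and branches with constant results can be eliminated. A minimal standalone illustration of the pattern (names invented, not library types):

```csharp
using System;

public interface IDiscardPolicy
{
    bool ShouldDiscard(long tickCount);
}

// A struct policy: inside Count<NeverDiscard> the call is non-virtual,
// inlines to 'false', and the JIT can elide the branch entirely.
public readonly struct NeverDiscard : IDiscardPolicy
{
    public bool ShouldDiscard(long tickCount) => false;
}

public static class PolicyDemo
{
    // The struct constraint is what enables per-type specialization.
    public static int Count<TPolicy>(TPolicy policy, long[] items)
        where TPolicy : struct, IDiscardPolicy
    {
        int discarded = 0;
        foreach (var t in items)
        {
            if (policy.ShouldDiscard(t)) discarded++;
        }
        return discarded;
    }

    public static void Main()
    {
        Console.WriteLine(Count(new NeverDiscard(), new long[] { 1, 2, 3 }));  // 0
    }
}
```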
It could be worth generalizing this as part of the library, so that you could give your own delegate to compute expiry via the builder instead of passing a fixed value:
Func<string, TimeSpan> getTimeToLive = value => TimeSpan.FromSeconds(1);
ICache<int, string> lru = new ConcurrentLruBuilder<int, string>()
.WithExpireAfterWrite(getTimeToLive)
.Build();
Under the covers this would switch to the different policy/generic cache class and set up the delegate. I would also need to figure out if this should be based on TimeSpan (which is less error prone) or tick count (for ultimate speed), or both. I have tried to only provide the fastest possible options, but this can make things fiddly to use.
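For reference, the two tick units in that trade-off are related by the adjustment factor used in ToTicks above: Stopwatch ticks run at Stopwatch.Frequency per second, while TimeSpan ticks are fixed at 10,000,000 per second. A quick standalone round-trip check of the conversion:

```csharp
using System;
using System.Diagnostics;

double factor = Stopwatch.Frequency / (double)TimeSpan.TicksPerSecond;

long ToStopwatchTicks(TimeSpan ts) => (long)(ts.Ticks * factor);
TimeSpan FromStopwatchTicks(long ticks) => TimeSpan.FromTicks((long)(ticks / factor));

// Round-tripping one second should be accurate to well under a millisecond,
// regardless of the platform's stopwatch frequency.
TimeSpan roundTripped = FromStopwatchTicks(ToStopwatchTicks(TimeSpan.FromSeconds(1)));
Console.WriteLine(Math.Abs((roundTripped - TimeSpan.FromSeconds(1)).TotalMilliseconds) < 1);
```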
from bitfaster.caching.
I edited the above sample code to give a complete working implementation and tried it out - at least in my simple test case this works.
It's not a good candidate for adding at this point: the ITimePolicy interface would now have a broken TimeToLive property, and the scoped and atomic implementations of the cache would send things like AtomicFactory<K, V> to the policy before the value is actually created. Fixing this would require breaking changes, so I would defer it to v3.0.
Also, you can't combine the above approach with atomic creates via the GetOrAdd method, if you are using that feature. So for atomic creates this is not currently possible, unless you are only calling Update.
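To make the atomic-create limitation concrete: with atomic GetOrAdd the cache stores a wrapper around the value factory rather than the value itself, so a policy that computes the TTL from the value would run before the value exists. A simplified, standalone sketch (AtomicWrapper is invented here, loosely modeled on the AtomicFactory<K, V> mentioned above, not the real type):

```csharp
using System;

// Before first access, the wrapper holds only the factory - a per-value TTL
// delegate called at item-creation time would have no V to inspect.
var wrapper = new AtomicWrapper<int, string>(key => $"value-{key}");
Console.WriteLine(wrapper.IsValueCreated);  // False: this is when the policy would run
wrapper.GetValue(1);
Console.WriteLine(wrapper.IsValueCreated);  // True: the value exists only after first access

// Simplified stand-in for the library's atomic wrapper.
class AtomicWrapper<K, V>
{
    private Func<K, V> factory;
    private V value;

    public AtomicWrapper(Func<K, V> factory) => this.factory = factory;

    public bool IsValueCreated => factory == null;

    public V GetValue(K key)
    {
        if (factory != null)
        {
            value = factory(key);
            factory = null;
        }
        return value;
    }
}
```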
Related Issues (20)
- FastConcurrentTLru's Constructor timetolive is not precise on liunx mac platform HOT 7
- Any plans to evict expired items on the background? HOT 2
- NuGet package is missing intellisense xml file HOT 1
- Use NonBlocking instead of ConcurrentDictionary HOT 1
- [Feature request] individual items expiry HOT 1
- Current time provider HOT 6
- [Feature request] Allow to pass additional factory argument to the `GetOrAdd`/`GetOrAddAsync` cache methods HOT 2
- Entry left in cache configured with WithAtomicGetOrAdd when value factory throws HOT 4
- [Feature request] Atomic TryRemove HOT 3
- [Feature request] Expire after access LRU HOT 5
- [Feature request] Add TryRemove(K, out V) overload HOT 1
- [Feature request] Add MRU cache HOT 3
- [Bug] Cold queue increases infinitely for some partitions and cache sizes HOT 3
- [Feature Request] Add TryRemove(KeyValuePair<K, V>) overload HOT 2
- Clearing LFU cache doesn't actually clear it HOT 9
- Clearing ConcurrentLru leaves cache in broken state HOT 7
- `cache.Clear()` doesn't seem to be clearing entire cache HOT 6
- Is it possible to disable the eviction? HOT 2
- Doesn't look like capacity expiration/evicting is happening properly (might be specific to the initial population of the warm queue) HOT 3