perrich / Hangfire.MemoryStorage
A memory storage for Hangfire.
License: Apache License 2.0
Possible error in nuspec file
https://github.com/perrich/Hangfire.MemoryStorage/blob/master/Hangfire.MemoryStorage.nuspec#L22
target dir is netstandard1.4, but source dir is netstandard1.3
Because of this I cannot add Hangfire.MemoryStorage to a project that targets netstandard1.3; I get this error:
Package Hangfire.MemoryStorage 1.5.0 is not compatible with netstandard1.3 (.NETStandard,Version=v1.3). Package Hangfire.MemoryStorage 1.5.0 supports:
- net40 (.NETFramework,Version=v4.0)
- netstandard1.4 (.NETStandard,Version=v1.4)
Hi,
Are you planning to support ASP.NET Core 1.0/1.1?
Hello
I tried to use HangFire.Console to log items to the console. However, with HangFire.MemoryStorage, no items showed up in the Processing tab of the job. Does MemoryStorage work with the HangFire.Console extension?
Regards,
Simon
Hi, I get the following error when using MemoryStorage:
2019-09-03 12:02:23 [WARN] (Hangfire.Server.Worker) Slow log: Hangfire.DisableConcurrentExecutionAttribute performed "OnPerforming for e1e86681-70dc-484e-b7ba-015188696f79" in 60 sec
2019-09-03 12:02:23 [ERROR] (Hangfire.AutomaticRetryAttribute) Failed to process the job 'e1e86681-70dc-484e-b7ba-015188696f79': an exception occurred.
System.Threading.SynchronizationLockException
Object synchronization method was called from an unsynchronized block of code.
at Hangfire.MemoryStorage.Utilities.LocalLock..ctor(String resource, TimeSpan timeout)
at Hangfire.MemoryStorage.MemoryStorageConnection.AcquireDistributedLock(String resource, TimeSpan timeout)
at Hangfire.DisableConcurrentExecutionAttribute.OnPerforming(PerformingContext filterContext)
at Hangfire.Profiling.ProfilerExtensions.InvokeAction[TInstance](InstanceAction`1 tuple)
at Hangfire.Profiling.SlowLogProfiler.InvokeMeasured[TInstance,TResult](TInstance instance, Func`2 action, String message)
at Hangfire.Profiling.ProfilerExtensions.InvokeMeasured[TInstance](IProfiler profiler, TInstance instance, Action`1 action, String message)
at Hangfire.Server.BackgroundJobPerformer.InvokePerformFilter(IServerFilter filter, PerformingContext preContext, Func`1 continuation)
The job that is causing this error has the following attributes
[AutomaticRetry(Attempts = 0)][DisableConcurrentExecution(timeoutInSeconds: 60)]
What could be causing this issue?
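A SynchronizationLockException typically means Monitor.Exit ran on a monitor the current thread never entered, i.e. a failed Monitor.TryEnter whose result was ignored, which matches the observation elsewhere in these issues that the lock helper should throw when TryEnter returns false. Below is a hypothetical sketch (not the library's actual code) of a lock helper that fails loudly on timeout instead; LocalLockSketch is an illustrative name:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Hypothetical sketch of a per-resource lock that throws a clear timeout
// exception when the lock cannot be taken, instead of letting a later
// Monitor.Exit raise SynchronizationLockException on an un-entered monitor.
public sealed class LocalLockSketch : IDisposable
{
    private static readonly ConcurrentDictionary<string, object> Locks =
        new ConcurrentDictionary<string, object>();

    private readonly object _lockObject;

    public LocalLockSketch(string resource, TimeSpan timeout)
    {
        _lockObject = Locks.GetOrAdd(resource, _ => new object());

        // Fail loudly here if the lock is contended past the timeout;
        // Dispose is then never reached, so Monitor.Exit is always paired
        // with a successful Monitor.TryEnter.
        if (!Monitor.TryEnter(_lockObject, timeout))
            throw new TimeoutException(
                $"Could not acquire lock on '{resource}' within {timeout}.");
    }

    public void Dispose() => Monitor.Exit(_lockObject);
}
```

With a DisableConcurrentExecution timeout of 60 seconds and a job that holds the lock longer than that, this sketch would surface a TimeoutException at acquisition time rather than the confusing SynchronizationLockException above.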
When the expiration manager runs, it doesn't dispose of the expired job states and job parameters.
I've had a look at the SQL Server provider but a quick glance doesn't show any obvious way of cleaning this up.
I'll make the change for the memory storage and submit a pull request in the next couple of days.
Package Hangfire.MemoryStorage 1.2.0 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Hangfire.MemoryStorage 1.2.0 supports: net45 (.NETFramework,Version=v4.5)
One or more packages are incompatible with .NETCoreApp,Version=v1.0.
Hi,
Thanks for a great addition to the Hangfire project. Sorry if this question has come up earlier. I did not find any answer in previous issues.
We are having issues with long jobs restarting 30 minutes into the process when using MemoryStorage. I suppose the FetchNextJobTimeout variable controls the timespan after which seemingly "stale" jobs are restarted. Is there a way to increase this limit via a config option?
The project is running .net Core 2 on Ubuntu.
I believe this needs to throw if TryEnter returns false for it to match behavior.
What is the retention / expiration set to when using the memory storage?
And can you override it?
I only need to keep all my job information etc. for a few hours or maybe a day. That's why I'd love to know whether the memory storage fills memory until it reaches the server's capacity, or whether it's limited by a timespan / max memory usage.
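As a hedged sketch of how a retention window might be configured: this assumes MemoryStorageOptions exposes a JobExpirationCheckInterval analogous to the one on SqlServerStorageOptions; verify the property exists in your package version before relying on it.

```csharp
// Assumption: MemoryStorageOptions has a JobExpirationCheckInterval property
// mirroring SqlServerStorageOptions. Check your package version's options
// class before using this.
GlobalConfiguration.Configuration.UseMemoryStorage(new MemoryStorageOptions
{
    // How often expired jobs would be swept from memory.
    JobExpirationCheckInterval = TimeSpan.FromHours(1)
});
```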
Hi @perrich have you got any objections if I open a PR to upgrade HangFire.Core to a more recent version, as currently there is a dependency on a version from over 3 years ago?
[ArgumentException: An item with the same key has already been added.]
   System.ThrowHelper.ThrowArgumentException(ExceptionResource resource) +52
   System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add) +12998942
   System.Linq.Enumerable.ToDictionary(IEnumerable`1 source, Func`2 keySelector, Func`2 elementSelector, IEqualityComparer`1 comparer) +247
   Hangfire.MemoryStorage.MemoryStorageConnection.GetAllEntriesFromHash(String key) +260
   Hangfire.RecurringJobManager.AddOrUpdate(String recurringJobId, Job job, String cronExpression, RecurringJobOptions options) +244
   Hangfire.RecurringJobManagerExtensions.AddOrUpdate(IRecurringJobManager manager, String recurringJobId, Job job, String cronExpression, TimeZoneInfo timeZone, String queue) +107
   Hangfire.RecurringJob.AddOrUpdate(Expression`1 methodCall, String cronExpression, TimeZoneInfo timeZone, String queue) +165
I'm using this to run my Hangfire unit tests, but jobs etc. are retained between test runs. I can manually delete jobs, but it would be cleaner if I could start with a fresh instance, or if there were a quick way to remove all entries in the Data class.
My current thinking with the existing design is to cast the JobStorage to the MemoryStorage class, use GetEnumerable to get a list of keys, and then delete these manually with the Delete method.
Is this the preferred approach, or am I missing something obvious?
Thanks
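A minimal sketch of the fresh-instance approach, assuming storage state is per-instance (non-static) in your version; if your version still keeps static Data, this alone won't isolate tests and you'd need the manual-delete loop instead. The fixture name is hypothetical:

```csharp
using System;
using Hangfire;
using Hangfire.MemoryStorage;

// Hypothetical xUnit-style fixture: each test class gets a fresh storage
// instance so no jobs leak between runs (assumes per-instance state).
public abstract class FreshHangfireStorageFixture : IDisposable
{
    protected FreshHangfireStorageFixture()
    {
        JobStorage.Current = new MemoryStorage();
    }

    public void Dispose()
    {
        // Nothing to clean up if storage state is per-instance; with static
        // state you would delete remaining jobs manually here instead.
    }
}
```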
Please see: HangfireIO/Hangfire#1025 (comment)
It doesn't seem that memory storage is actually dequeuing the job or marking it as dequeued (I don't know what the correct behavior should be).
config.UseMemoryStorage(
new MemoryStorageOptions
{
FetchNextJobTimeout = TimeSpan.FromSeconds(10)
});
public class MyJobPerformer
{
private readonly string _performerId;
public MyJobPerformer()
{
_performerId = Guid.NewGuid().ToString("N");
}
public async Task Perform(RequestBase request)
{
Console.WriteLine($"{_performerId}: {DateTime.UtcNow}");
await Task.Delay(TimeSpan.FromSeconds(5));
Console.WriteLine($"{_performerId}: {DateTime.UtcNow}");
await Task.Delay(TimeSpan.FromSeconds(5));
Console.WriteLine($"{_performerId}: {DateTime.UtcNow}");
await Task.Delay(TimeSpan.FromSeconds(5));
Console.WriteLine($"{_performerId}: {DateTime.UtcNow}");
}
}
69b3501a7a284d0c88b030abde997810: 3/17/19 11:38:20 PM
69b3501a7a284d0c88b030abde997810: 3/17/19 11:38:25 PM
69b3501a7a284d0c88b030abde997810: 3/17/19 11:38:30 PM
6d0b3551c5354633b7d572da2cd5abc8: 3/17/19 11:38:32 PM <<< Duplicate job execution! 10 seconds lapsed.
69b3501a7a284d0c88b030abde997810: 3/17/19 11:38:35 PM
6d0b3551c5354633b7d572da2cd5abc8: 3/17/19 11:38:37 PM
6d0b3551c5354633b7d572da2cd5abc8: 3/17/19 11:38:42 PM
6d0b3551c5354633b7d572da2cd5abc8: 3/17/19 11:38:47 PM
One job enqueue results in multiple worker executions. It's even worse if FetchNextJobTimeout
is reduced further. I'm not sure whether this is a problem with Hangfire, MemoryStorage, or both.
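If the duplicate executions really are driven by FetchNextJobTimeout, one hedged workaround is to raise it well above the longest expected job duration, so an in-flight job is not treated as stale and re-fetched. A sketch, using the same UseMemoryStorage overload shown above:

```csharp
// Workaround sketch: keep FetchNextJobTimeout comfortably above the longest
// job runtime so a still-running job is not re-fetched as "stale" by a
// second worker.
config.UseMemoryStorage(new MemoryStorageOptions
{
    FetchNextJobTimeout = TimeSpan.FromHours(1)
});
```

This only papers over the symptom; it does not address whether the storage should mark the job as dequeued in the first place.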
Hello,
I have a problem with cleaning up completed jobs. I tried to use an attribute to define the expiration/lifetime:
public class HangfireJobRetentionAttribute : JobFilterAttribute, IApplyStateFilter
{
public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
{
context.JobExpirationTimeout = TimeSpan.FromMinutes(1);
}
public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
{
context.JobExpirationTimeout = TimeSpan.FromMinutes(2);
}
}
After applying the attribute, I do see Hangfire pick it up in my dashboard. It says the job should have been auto-deleted 28 minutes ago.
However, I don't see a reduction in the memory footprint of my Hangfire application; it keeps going up.
Does MemoryStorage have some built-in cleanup mechanism like the SQL storage does?
https://discuss.hangfire.io/t/does-hangfire-clear-down-old-job-data/2294/2
public SqlServerStorageOptions()
{
TransactionIsolationLevel = null;
QueuePollInterval = TimeSpan.FromSeconds(15);
SlidingInvisibilityTimeout = null;
#pragma warning disable 618
InvisibilityTimeout = TimeSpan.FromMinutes(30);
#pragma warning restore 618
JobExpirationCheckInterval = TimeSpan.FromMinutes(30);
CountersAggregateInterval = TimeSpan.FromMinutes(5);
PrepareSchemaIfNecessary = true;
DashboardJobListLimit = 10000;
_schemaName = Constants.DefaultSchema;
TransactionTimeout = TimeSpan.FromMinutes(1);
}
Can you describe how to use it in Markdown?
I am using MemoryStorage for my unit tests to check if background jobs are enqueued. I initialize it like this:
Hangfire.JobStorage.Current = new MemoryStorage();
During my tests, I often use the following to check that the number of enqueued jobs has increased:
Hangfire.JobStorage.Current.GetMonitoringApi().EnqueuedCount("default")
Since MemoryStorage uses a static dictionary, the storage is shared across tests, which can be confusing. This is an issue since I assume the most common use of MemoryStorage is during tests, so I think it would be useful to make this point clear in the README. Alternatively, a flag to create a new dictionary for storing jobs could be added to either the MemoryStorage constructor or the MemoryStorageOptions, to let developers make that choice.
I'm looking at using MemoryStorage in a production system. The readme suggests there are issues with thread safety and static data usage. With the PR #26 the static Data problem is resolved now. It looks like AutoIncrementIdGenerator may have a similar static issue. What changes do you think are still needed to make it safe for production usage?
I created a console app using .NET Framework 4.8 and installed Hangfire.Core v1.8.11.0, Hangfire.Autofac, Autofac, and this package.
Everything builds, but then, upon execution, my app crashes with a System.IO.FileNotFoundException:
Could not load file or assembly 'Hangfire.Core, Version=1.7.35.0, Culture=neutral, PublicKeyToken=e33b67d3bb5581e4' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference.
We have more hangfire jobs than our system can process, so they pile up in a queue that never empties completely.
We've noticed that the oldest jobs never get executed. It's only the newly created jobs that do. I started looking in the code and found the code referenced below. If I understand correctly, jobs are processed in descending order (last in first out). Is this normal? I know Hangfire is FIFO by default if we use SQL.
Thank you in advance,
Jean-Simon Collard
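If the fetch really is last-in-first-out, the starvation described above follows directly: under constant load the newest job always outranks the oldest. A toy illustration of the two orders, assuming job ids increase as jobs are created (hypothetical helper names, not the library's code):

```csharp
using System.Collections.Generic;
using System.Linq;

// Toy illustration of LIFO vs FIFO dequeue order over increasing job ids.
static class QueueOrderDemo
{
    // LIFO (what the issue describes): newest id wins, so old jobs
    // starve whenever the queue never fully drains.
    public static int NextJobLifo(IEnumerable<int> ids) =>
        ids.OrderByDescending(i => i).First();

    // FIFO (the SQL Server provider's default): oldest id wins,
    // so every job is eventually reached.
    public static int NextJobFifo(IEnumerable<int> ids) =>
        ids.OrderBy(i => i).First();
}
```

With ids { 1, 2, 3, 4 }, NextJobLifo picks 4 while NextJobFifo picks 1, which is the behavioral difference reported above.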
Hi, I get the following error when using MemoryStorage:
[ArgumentException: An item with the same key has already been added.]
   System.ThrowHelper.ThrowArgumentException(ExceptionResource resource) +60
   System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add) +5619817
   System.Linq.Enumerable.ToDictionary(IEnumerable`1 source, Func`2 keySelector, Func`2 elementSelector, IEqualityComparer`1 comparer) +302
   Hangfire.MemoryStorage.MemoryStorageMonitoringApi.GetTimelineStats(List`1 dates, Func`2 formatorAction) +556
   Hangfire.MemoryStorage.MemoryStorageMonitoringApi.GetHourlyTimelineStats(String type) +229
   Hangfire.Dashboard.Pages.HomePage.Execute() +368
   Hangfire.Dashboard.RazorPage.TransformText(String body) +31
   Hangfire.Dashboard.RazorPageDispatcher.Dispatch(DashboardContext context) +89
   Hangfire.Dashboard.<>c__DisplayClass1_1.<UseHangfireDashboard>b__1(IDictionary`2 env) +618
Microsoft.Owin.Mapping.d__0.MoveNext() +486
System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() +32
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +62
Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.d__5.MoveNext() +197
System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() +32
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) +62
Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.d__2.MoveNext() +184
System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() +32
Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.StageAsyncResult.End(IAsyncResult ar) +118
System.Web.AsyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +510
System.Web.HttpApplication.ExecuteStepImpl(IExecutionStep step) +220
System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +134
What could be causing this issue?
Saw this error in OpenTelemetry.Instrumentation.Hangfire.Tests.
System.InvalidOperationException
Collection was modified; enumeration operation may not execute.
   at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource)
   at System.Collections.Generic.List`1.Enumerator.MoveNextRare()
   at System.Linq.Enumerable.All[TSource](IEnumerable`1 source, Func`2 predicate)
   at OpenTelemetry.Instrumentation.Hangfire.Tests.HangfireInstrumentationJobFilterAttributeTests.<WaitJobProcessedAsync>d__8.MoveNext() in C:\github\opentelemetry-dotnet-contrib\test\OpenTelemetry.Instrumentation.Hangfire.Tests\HangfireInstrumentationJobFilterAttributeTests.cs:line 181
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
   at OpenTelemetry.Instrumentation.Hangfire.Tests.HangfireInstrumentationJobFilterAttributeTests.<Should_Create_Activity_With_Status_Error_When_Job_Failed>d__3.MoveNext() in C:\github\opentelemetry-dotnet-contrib\test\OpenTelemetry.Instrumentation.Hangfire.Tests\HangfireInstrumentationJobFilterAttributeTests.cs:line 58
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Xunit.Sdk.TestInvoker`1.<>c__DisplayClass48_0.<b__1>d.MoveNext() in /_/src/xunit.execution/Sdk/Frameworks/Runners/TestInvoker.cs:line 276
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Xunit.Sdk.ExecutionTimer.d__4.MoveNext() in /_/src/xunit.execution/Sdk/Frameworks/ExecutionTimer.cs:line 48
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Xunit.Sdk.ExceptionAggregator.d__9.MoveNext() in /_/src/xunit.core/Sdk/ExceptionAggregator.cs:line 90
It happens in the WaitJobProcessedAsync method:
private async Task WaitJobProcessedAsync(string jobId, int timeToWaitInSeconds)
{
var timeout = DateTime.Now.AddSeconds(timeToWaitInSeconds);
string[] states = new[] { "Enqueued", "Processing" };
JobDetailsDto jobDetails;
while (((jobDetails = this.hangfireFixture.MonitoringApi.JobDetails(jobId)) == null ||
jobDetails.History.All(h => states.Contains(h.StateName))) &&
DateTime.Now < timeout)
{
await Task.Delay(500);
}
}
From my understanding, MonitoringApi.JobDetails(jobId).History returns a reference to the list instance used as the in-memory storage. If this list changes during enumeration, the caller gets the error above.
Instead, a snapshot of the data should be returned, and no modification should be allowed while the snapshot is being created.
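The snapshot idea can be sketched in a few lines (hypothetical names; the real storage holds richer state DTOs than plain strings):

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch: copy the history list under a lock before returning it, so
// callers can never observe concurrent mutation mid-enumeration.
static class HistoryStore
{
    private static readonly object Sync = new object();
    private static readonly List<string> JobHistory =
        new List<string> { "Enqueued", "Processing" };

    public static IReadOnlyList<string> GetHistorySnapshot()
    {
        lock (Sync)
        {
            // ToList() materializes a copy; later writes to JobHistory
            // cannot invalidate enumeration of the returned list.
            return JobHistory.ToList();
        }
    }

    public static void Append(string state)
    {
        lock (Sync) { JobHistory.Add(state); }
    }
}
```

A caller enumerating the snapshot is unaffected by a concurrent Append, which is exactly the property the test's `History.All(...)` loop needs.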
I see the job executed, but nothing seems to be done. Is it a fake execution?
From a shallow reading of the sources, it seems that the only in-memory parts are the AutoIncrementIdGenerator and Data static classes in the Database directory. If so, replacing them with any other Dictionary-like implementation would allow adding new storage back-ends with a relatively minimal amount of work. For example, several key-value stores (e.g. ManagedEsent PersistentDictionary) provide such an interface.
Am I correct in my assumption? I don't want to start this process, just to abandon it after the allotted time quota runs out.
I'm using a 'test host' based approach in some of my integration tests and each test case/method creates its own discrete instance.
When registering Hangfire we're using the MemoryStorage provider and seeing some odd behaviors that we're suspecting may relate to use of this storage provider. On the main page @ https://github.com/perrich/Hangfire.MemoryStorage there is mention of the following:
What I'm observing happening is that we can run the tests 1-by-1 and they pass just fine. But when running them as a suite/set of tests they fail with 1 passing (first one ran) and the others then failing. We're using xUnit and we have measures in place to prevent concurrent/parallel execution of the tests that leverage the test host so I don't think that's the root cause here.
Previously we would see these failures with little to no real clues to go on, but I've recently leveraged the newer method for triggering recurring jobs that returns a JobId when successful. Our helper code for triggering jobs includes a guard/check for null values being returned (indicating that it couldn't trigger the job), which throws an exception. With that check in place, I can see that the 1st test host instance / test case executes fine: it builds the test host, registers the job, and triggers it, with the test passing. But the 2nd and subsequent test cases fail, saying they can't even trigger the referenced job. I know the job is registered by the test, as the tests pass when run individually, so that doesn't seem like the root cause.
When I started to analyze why the job couldn't even be triggered, I started looking into where anything stateful could be complicit and the use of the MemoryStorage came to mind. With the statement on the main repo page about using static members to manage state in this provider I thought I had my 'Ah Ha!' moment - but on reviewing the current code it doesn't appear to truly be using static state management.
I reviewed the following files to try and trace the behavior & state management/lifetime:
The impression I have is that we are using a unique instance of a test host for each test method/case, we use Hangfire's extension methods to register it and then we register each job using the RecurringJob.AddOrUpdate method. While registering Hangfire we call the 'UseMemoryStorage' method passing in the MemoryStorageOptions and that in turn would get a new instance of Data for each discrete use of the test host/Hangfire.
As an attempt to verify if the storage provider and static state management was complicit, I updated the tests to register all of the jobs in each test case so I'd know "yes they're registered" but that didn't resolve the issue.
1 - Is the state really 'static' as described on the main page?
If I am using multiple instances of a test host, each with their own discrete registration of Hangfire & use of MemoryStorage provider would you expect to see the overall state of registered jobs be impacted as described? I'm wondering if that comment is perhaps from an older version and just out of date (perhaps needing removed/clarified).
2 - Would Hangfire itself be holding onto the StorageConnection (which is MemoryProvider) and yield the behavior described?
Once I noted the statement on this storage provider about static state, I started off here but I'm now wondering if there's something on the Hangfire side that would 'hold onto' the state this provider is using.