
django-background-task's Introduction

Django Background Task

Django Background Task is a database-backed work queue for Django, loosely based around Ruby's DelayedJob library.

In Django Background Task, all tasks are implemented as functions (or any other callable).

There are two parts to using background tasks:

  • creating the task functions and registering them with the scheduler
  • set up a cron task (or long-running process) to execute the tasks

Creating and registering tasks

To register a task use the background decorator:

from background_task import background
from django.contrib.auth.models import User

@background(schedule=60)
def notify_user(user_id):
    # lookup user by id and send them a message
    user = User.objects.get(pk=user_id)
    user.email_user('Here is a notification', 'You have been notified')

This will convert notify_user into a background task function. When you call it from regular code it will actually create a Task object and store it in the database. The database then contains serialised information about which function needs to be run later on. This does place limits on the parameters that can be passed when calling the function - they must all be serializable as JSON. That is why, in the example above, a user_id is passed rather than a User object.

Calling notify_user as normal will schedule the original function to be run 60 seconds from now:

notify_user(user.id)

This is the default schedule time (as set in the decorator), but it can be overridden:

from datetime import datetime, timedelta

notify_user(user.id, schedule=90) # 90 seconds from now
notify_user(user.id, schedule=timedelta(minutes=20)) # 20 minutes from now
notify_user(user.id, schedule=datetime.now()) # at a specific time

Running tasks

There is a management command to run tasks that have been scheduled:

python manage.py process_tasks

This will simply poll the database queue every few seconds to see if there is a new task to run.

NB: to aid the management command in finding the registered tasks, it is best to put them in a file called 'tasks.py'. You can put them elsewhere, but you have to ensure that they will be imported so the decorator can register them with the scheduler. By putting them in tasks.py they will be auto-discovered and the file automatically imported by the management command.
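For example, with a hypothetical app called myapp (the name is just an illustration), the layout might look like this:

myapp/
    __init__.py
    models.py
    tasks.py     # background task functions defined here are auto-discovered
    views.py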

The process_tasks management command has the following options (an example invocation is shown after the list):

  • duration - Run task for this many seconds (0 or less to run forever) - default is 0
  • sleep - Sleep for this many seconds before checking for new tasks (if none were found) - default is 5
  • log-file - Log file destination
  • log-std - Redirect stdout and stderr to the logging system
  • log-level - Set logging level (CRITICAL, ERROR, WARNING, INFO, DEBUG)
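As an illustration, assuming the command-line flags mirror the option names listed above, a one-hour run that logs to its own file might be started like this:

python manage.py process_tasks --duration 3600 --sleep 5 --log-file /var/log/process_tasks.log --log-level INFO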

You can use the duration option for simple process control by running the management command via a cron job and setting the duration to the time until cron calls the command again. This way, if the command fails it will get restarted by the cron job later anyway. It also avoids having to worry too much about resource/memory leaks. The alternative is to use a grown-up program like supervisord to handle this for you.
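For instance, a crontab entry along these lines (paths and timings are placeholders, not taken from the project) restarts the command every 15 minutes and lets each run last slightly less than that:

*/15 * * * * cd /path/to/project && python manage.py process_tasks --duration 870 >> /var/log/process_tasks.log 2>&1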

Settings

There are two settings that can be set in your settings.py file.

  • MAX_ATTEMPTS - controls how many times a task will be attempted (default 25)
  • MAX_RUN_TIME - maximum possible task run time, after which tasks will be unlocked and tried again (default 3600 seconds)
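A hedged example of overriding both in settings.py (the values here are illustrative, not new defaults):

# settings.py
MAX_ATTEMPTS = 10    # stop retrying a task after 10 failed attempts
MAX_RUN_TIME = 600   # unlock a task (so it can be retried) if it runs longer than 10 minutes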

Task errors

Tasks are retried if they fail, and the error is recorded in last_error (and logged). A task is retried because the failure may be a temporary issue, such as a transient network problem. However, each time a task is retried it is pushed further into the future, using an exponential back-off based on the number of attempts:

(attempts ** 4) + 5

This means that initially the task will be tried again a few seconds later. After four attempts the task is tried again 261 seconds later (about four minutes). At twenty-five attempts the task will not be tried again for nearly four days! It is not unheard of for a transient error to last a long time, and this behavior is intended to stop tasks that are constantly triggering errors (e.g. due to a coding error) from dominating task processing. You should probably monitor the task queue to check for tasks that have errors. After MAX_ATTEMPTS the task will be marked as failed and will not be rescheduled again.
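As a quick sanity check of that formula, here is a standalone sketch (not code from the library itself):

from datetime import timedelta

def retry_backoff(attempts):
    # back-off applied when rescheduling a failed task, per the formula above
    return timedelta(seconds=(attempts ** 4) + 5)

print(retry_backoff(4))   # 0:04:21 -> about four minutes
print(retry_backoff(25))  # 4 days, 12:30:30 -> nearly four days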

django-background-task's People

Contributors

johnsensible, lilspikey, tdruez, zagrebelin


django-background-task's Issues

best way to relaunch background task after a new docker-compose build and launch of container

I tried to install supervisord but could never make it work; it always exits right after launch:
2020-07-06 15:57:56,453 INFO supervisord started with pid 27437
2020-07-06 15:57:57,457 INFO spawned: 'tasks' with pid 27438
2020-07-06 15:57:58,325 INFO success: tasks entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2020-07-06 15:57:58,405 INFO exited: tasks (exit status 1; not expected)

Here is my config:
[program:tasks]
directory=/home/apptitude/my_docker_folder/
command=sudo /usr/local/bin/docker-compose exec web python manage.py process_tasks
startsecs=0
autorestart=false
user=root

Any help is welcome... it is so annoying to relaunch this manually after each new docker-compose relaunch of my Django server.

thanks

db_exec_type

Hi, I have this error, I'm using Django 1.7
db_exc_type = getattr(self.wrapper.Database, dj_exc_type.__name__)
AttributeError: 'DatabaseWrapper' object has no attribute 'Database'

That happens when I create the Task.

Tasks not added to _tasks[] queue for some reason?

I set up a rather simple function to be queued with your lovely decorator, and after executing the function the task is in the queue but hits an error coming from your very own tasks.py. Note that I'm running manage.py process_tasks in a separate terminal on my Windows 7 dev box; I keep coming up with the following error:

Traceback (most recent call last):
  File "C:\Users\givity\Envs\givity\lib\site-packages\background_task\tasks.py", line 162, in run_task
    tasks.run_task(task.task_name, args, kwargs)
  File "C:\Users\givity\Envs\givity\lib\site-packages\background_task\tasks.py", line 46, in run_task
    task = self._tasks[task_name]
KeyError: u'projects.lib.files.files_chop'
WARNING:root:Rescheduling task Task(projects.lib.files.files_chop) for 0:00:21 later at 2011-12-30 04:28:00.596000

c:\users\givity\envs\givity\lib\site-packages\background_task\tasks.py(46)run_task()

I added a pdb.set_trace() right above line 44 (the site of the crash) to see if I could grasp what was going on:

-> task = self._tasks[task_name]
(Pdb) self._tasks
{}

I noted that self._tasks was consistently empty... this explains the error message, but not the underlying problem?

The code for my actual tasks is as follows:

@background(schedule=60)
def files_chop(path):
    """ chop away div elements with class givity-chop
        """
    # change the current working dir to the new project folder
    os.chdir(path)
    logger.info('changed folder to %s' % path)

    # remove all class="givity-chop" divs, etc.
    files = find_files(path, search_pattern='*.html')
    logger.info('files found: %s' % files)
    ...

Of course, I'm calling the function files_chop(path) elsewhere:

files_chop(project_path_src)

Thanks for the great app; it's a lot leaner than Celery and looks like it has a lot of promise. You are also #2 in the Google results for "django background task" (props on choosing a good package name).

Givity

ModuleNotFoundError: No module named 'models'

I'm using:

  • Django 2.2.2
  • Python 3.7.3
  • django-background-task 0.1.8

To circumvent #28, I applied this patch to models.py:

--- models.py	2019-06-27 13:20:34.467204000 +0000
+++ models.py_	2019-06-27 13:20:21.818710000 +0000
@@ -5,7 +5,11 @@
 from datetime import timedelta
 from hashlib import sha1
 import traceback
-from StringIO import StringIO
+import sys
+if sys.version_info[0] < 3:
+    from StringIO import StringIO
+else:
+    from io import StringIO
 import logging

 try:

When starting Django, startup now fails with ModuleNotFoundError: No module named 'models' instead.
The reason seems to be that in Python 3 the import behaviour for local modules has changed (see PEP 0328).

This patch for admin.py fixes the issue while keeping support for Python 2 >= 2.5:

--- admin.py	2019-06-27 13:22:49.715516000 +0000
+++ admin.py_	2019-06-27 13:29:52.350516000 +0000
@@ -1,6 +1,7 @@
+from __future__ import absolute_import
 from django.contrib import admin

-from models import Task
+from .models import Task

 class TaskAdmin(admin.ModelAdmin):
     display_filter = ['task_name']

The full error printed is this:

    Exception in thread django-main-thread:
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/threading.py", line 917, in _bootstrap_inner
        self.run()
      File "/usr/local/lib/python3.7/threading.py", line 865, in run
        self._target(*self._args, **self._kwargs)
      File "/usr/local/lib/python3.7/site-packages/django/utils/autoreload.py", line 54, in wrapper
        fn(*args, **kwargs)
      File "/usr/local/lib/python3.7/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
        autoreload.raise_last_exception()
      File "/usr/local/lib/python3.7/site-packages/django/utils/autoreload.py", line 77, in raise_last_exception
        raise _exception[1]
      File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 337, in execute
        autoreload.check_errors(django.setup)()
      File "/usr/local/lib/python3.7/site-packages/django/utils/autoreload.py", line 54, in wrapper
        fn(*args, **kwargs)
      File "/usr/local/lib/python3.7/site-packages/django/__init__.py", line 24, in setup
        apps.populate(settings.INSTALLED_APPS)
      File "/usr/local/lib/python3.7/site-packages/django/apps/registry.py", line 122, in populate
        app_config.ready()
      File "/usr/local/lib/python3.7/site-packages/django/contrib/admin/apps.py", line 24, in ready
        self.module.autodiscover()
      File "/usr/local/lib/python3.7/site-packages/django/contrib/admin/__init__.py", line 26, in autodiscover
        autodiscover_modules('admin', register_to=site)
      File "/usr/local/lib/python3.7/site-packages/django/utils/module_loading.py", line 47, in autodiscover_modules
        import_module('%s.%s' % (app_config.name, module_to_search))
      File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
      File "<frozen importlib._bootstrap>", line 983, in _find_and_load
      File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 728, in exec_module
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "/usr/local/lib/python3.7/site-packages/background_task/admin.py", line 3, in <module>
        from models import Task
    ModuleNotFoundError: No module named 'models'

use module json instead of simplejson

Hi,

Is there any possibility of having django.utils.simplejson replaced with the json module?
In future releases of Django (1.7+), simplejson is no longer available.

I would be happy to submit a pull request for this change.

Background task Runs multiple times per second even after setting up schedule and repeat

I have added this code to my models. I want the background task to run only once per minute, so I gave the value for schedule as 60 (seconds) and repeat = 60 (seconds).

from datetime import datetime

from background_task import background
@background(schedule=60)
def update_queue_status():
    print(datetime.now().time())
    ................
    ...............
            
update_queue_status(repeat=60)

But when I run python manage.py process_tasks, it runs the background task multiple times per second:

(qms_env) (base) C:\Users\shisharma\Documents\futuredev\QMS>python manage.py process_tasks
11:51:06.860161
11:51:06.911026
11:51:06.945932
11:51:07.022755
11:51:07.055640
11:51:07.071596
11:51:07.145399
11:51:07.165345
11:51:07.261117
11:51:07.276049
11:51:07.367802
11:51:07.461586
11:51:07.535353
11:51:07.630100
11:51:07.715872
11:51:07.784686
11:51:07.816629
11:51:07.825579
11:51:07.918329
11:51:07.977172
11:51:08.027037
11:51:08.112807
11:51:08.224510
11:51:08.280358
11:51:08.345213
11:51:08.442924
11:51:08.548641
11:51:08.587536
11:51:08.614464
11:51:08.632415
11:51:08.705221
11:51:08.751128
11:51:08.855819
11:51:08.901694
11:51:08.937628
11:51:08.978518
11:51:09.064289
11:51:09.155046
11:51:09.225828
11:51:08.941590
11:51:09.108143
11:51:08.927654
11:51:09.013395
11:51:09.197934
11:51:59.796843
11:51:59.824775
11:51:59.930155
11:52:00.026355
11:52:00.041316
11:52:00.113123
11:52:00.155011

Is there something wrong with what I am doing?

Thank you
Shivani

arteria#200

handling an exception during error

Hi, I am handling an exception in a background_task. Running the code alone works fine, but when I want to run it as a background_task there is always an error message. It seems that handling an exception is not possible inside a background_task. I want to set a status when the background task fails, so I tried it with an exception. In this case I try to load something into a Teradata table that does not exist, to force an error and handle the exception.

Here is my code:

@background(schedule=0)
def forced_load_schedule(id, user):
    #Identify user and send message
    #user = User.objects.get(pk=user_id)
    #user.email_user('Here is a notification', 'You have been notified')
    queryconfig = "SELECT * FROM ETL_config"
    queryloads = "SELECT * FROM ETL_load WHERE id ='" + str(id) + "'"
    teradatahost = '195.233.30.21'
    udaExec = teradata.UdaExec(appName="Toolbox2.0_ETL", version="1.0", logConsole=False)
    conn = sqlite3.connect('C:/inetpub/wwwroot/db.sqlite3')
    conn2 = sqlite3.connect('C:/inetpub/wwwroot/db.sqlite3')

    #Get teradata user and etl configuration from SQLite database
    df = pd.read_sql_query(queryconfig, conn)

    for row in df.iterrows():
        pos, d = row
        teradatauser = d["Teradata_User"]
        teradatapassword = d["Teradata_Password"]
        etltimer = d["ETL_Timer"]
        etlstatus = d["ETL_Status"]
        runningindicator = d["ETL_Running_Indicator"]

    df = pd.read_sql_query(queryloads, conn)

    for row in df.iterrows():
        pos, d = row
        loadsql = d["Load_SQL"]
        loadname = d["Name"]
        loadid = d["id"]
        etlgroup = d["ETL_Group"]

        #Connect to teradata
        with udaExec.connect(method="odbc", system=teradatahost, username=teradatauser,
                             password=teradatapassword, driver="Teradata") as connect:
            now = str(datetime.now())[0:19]
            #Execute load SQL
            try:
                curs = conn.cursor()
                curs.execute("UPDATE ETL_load SET Trigger_Status = '1' WHERE id ='" + str(loadid) + "'")
                conn.commit()
                curs = connect.cursor()
                curs.execute(loadsql)
                curs = conn.cursor()
                curs.execute("UPDATE ETL_load SET Load_Status = '1' WHERE id ='" + str(id) + "'")
                curs.execute("UPDATE ETL_load SET Last_Load = '" + str(datetime.now()) + "' WHERE id ='" + str(id) + "'")
                curs.execute("INSERT INTO log_log (Appname, Log_Title, Log_Message, Timestamp, Username) SELECT 'ETL_process' AS Appname, 'Info' AS Log_Title, 'Loadprocess " + loadname + " from ETL-Group " + etlgroup + " has been forced' AS Log_Message, '" + now + "' AS Timestamp, '" + user + "' AS Username")
                conn.commit()
            except Exception as e:
                curs2 = conn2.cursor()
                curs2.execute("UPDATE ETL_load SET Load_Status = '2' WHERE id ='" + str(id) + "'")
                conn2.commit()

    #Set ETL config Status back to 'active'
    curs = conn.cursor()
    curs.execute("UPDATE ETL_config SET ETL_Status = 'active'")
    conn.commit()
    conn.close()

The error message in the tasks table is this:
Traceback (most recent call last):
  File "C:\inetpub\wwwroot\ETL\tasks.py", line 49, in forced_load_schedule
    curs = connect.cursor()
  File "C:\Python38-32\lib\site-packages\teradata\udaexec.py", line 745, in execute
    self._execute(self.cursor.execute, query, params, **kwargs)
  File "C:\Python38-32\lib\site-packages\teradata\udaexec.py", line 790, in _execute
    func(query, params, **kwargs)
  File "C:\Python38-32\lib\site-packages\teradata\tdodbc.py", line 614, in execute
    checkStatus(rc, hStmt=self.hStmt, method="SQLExecDirectW")
  File "C:\Python38-32\lib\site-packages\teradata\tdodbc.py", line 231, in checkStatus
    raise DatabaseError(i[2], u"[{}] {}".format(i[0], msg), i[0])
teradata.api.DatabaseError: (3807, "[42S02] [Teradata][ODBC Teradata Driver][Teradata Database] Object 'AVU_NL.TASHKKXAIADAD' does not exist. ")

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python38-32\lib\site-packages\background_task\tasks.py", line 43, in bg_runner
    func(*args, **kwargs)
  File "C:\inetpub\wwwroot\ETL\tasks.py", line 60, in forced_load_schedule
sqlite3.OperationalError: near ".": syntax error

unable to open database file

Hi,
I was using background tasks in production mode and everything was going fine. Now I just deployed my Django app on a server with Apache2, sqlite3 and Django 2.2.1, but when I try to access the app via the browser I get "unable to open database file", which happens at the moment I call my task in urls.py. By commenting out that line the app works just fine, though. Of course, no background task is run any more then.

Any idea?

IntegrityError and migrate fail

I'm running migrate after installing background_task with Django==1.11.13 and django-background-tasks==1.1.13, and it won't finish successfully. [EDIT]: previously installed django-background-task==0.1.8 with Django==1.5.1.

In fact, the first migrate might have been run with an older version of Django, and the MySQL database set-up is old and is a production copy.

I get the following error on Django Admin:
IntegrityError at /admin/background_task/task/add/

and after migrate I get the following error:

    _mysql.connection.query(self, query)
django.db.utils.IntegrityError: (1364, "Field 'name' doesn't have a default value")

What kind of information can I give that would help find a fix?
Thanks!

excessive retries and backoff times?

The exponential backoff time seems rather excessive with the default MAX_ATTEMPTS of 25.
This line:

backoff = timedelta(seconds=(self.attempts ** 4) + 5)

results in multi-minute rescheduling past the third attempt, and hours past the eighth attempt, ending at several days.
For example:

for i in range(25): print i, "attempts; minutes =", (i ** 4 + 5) / 60.0
...
0 attempts; minutes = 0.0833333333333
1 attempts; minutes = 0.1
2 attempts; minutes = 0.35
3 attempts; minutes = 1.43333333333
4 attempts; minutes = 4.35
5 attempts; minutes = 10.5
6 attempts; minutes = 21.6833333333
7 attempts; minutes = 40.1
8 attempts; minutes = 68.35 # An hour
9 attempts; minutes = 109.433333333
10 attempts; minutes = 166.75
11 attempts; minutes = 244.1
12 attempts; minutes = 345.683333333
13 attempts; minutes = 476.1
14 attempts; minutes = 640.35
15 attempts; minutes = 843.833333333
16 attempts; minutes = 1092.35
17 attempts; minutes = 1392.1 # Almost a day
18 attempts; minutes = 1749.68333333
19 attempts; minutes = 2172.1
20 attempts; minutes = 2666.75
21 attempts; minutes = 3241.43333333
22 attempts; minutes = 3904.35
23 attempts; minutes = 4664.1
24 attempts; minutes = 5529.68333333 # Almost 4 days!

The reschedule deltas aren't logged either, and neither are exceptions, so it's easy for a trivial mistake such as a NameError in your background function to get it rescheduled for days. Unless you read the django-background-task source and do the math, you won't realize it's doing that; and unless you do your own logging, you won't ever learn why it fails.

no such table: background_task

Hi, I'm very new to Django and the whole third-party app world. I found yours and it seems very easy to use. I installed it, but when I ran an example I got the error "no such table: background_task". I notice that it has its own models.py file, but after doing "$ sudo python setup.py install" there is no "background_task" table created in my database, not even after running "python manage.py makemigrations" or "syncdb". I'm afraid I'm missing something. Hope you can help me.

ModuleNotFoundError: No module named 'StringIO'

I'm using:

  • Django 2.2.2
  • Python 3.7.3
  • django-background-task 0.1.8

When starting Django, startup fails with ModuleNotFoundError: No module named 'StringIO'.
The reason seems to be that the StringIO module was removed in Python 3.0 (see announcement).

The full error printed is this:

Exception in thread django-main-thread:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/threading.py", line 917, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.7/threading.py", line 865, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.7/site-packages/django/utils/autoreload.py", line 54, in wrapper
    fn(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
    autoreload.raise_last_exception()
  File "/usr/local/lib/python3.7/site-packages/django/utils/autoreload.py", line 77, in raise_last_exception
    raise _exception[1]
  File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 337, in execute
    autoreload.check_errors(django.setup)()
  File "/usr/local/lib/python3.7/site-packages/django/utils/autoreload.py", line 54, in wrapper
    fn(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/django/__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/usr/local/lib/python3.7/site-packages/django/apps/registry.py", line 114, in populate
    app_config.import_models()
  File "/usr/local/lib/python3.7/site-packages/django/apps/config.py", line 211, in import_models
    self.models_module = import_module(models_module_name)
  File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/usr/local/lib/python3.7/site-packages/background_task/models.py", line 8, in <module>
    from StringIO import StringIO
ModuleNotFoundError: No module named 'StringIO'

Option to disable atomic transaction for specific task

Can you add a feature that allows disabling the atomic transaction for a specific task? I have a case where a task changes the status of an object multiple times and I want to follow these changes via async requests, but I can't do that right now because the changes are only saved to the database when the task ends.

run the tasks asynchronously

I want to run background tasks at the same time, with the option BACKGROUND_TASK_RUN_ASYNC = True, but I am having a problem. Can you help me?

Logging options do nothing if the root logger already has any handlers

You do your logging setup via logging.basicConfig(). As per the python docs:
"This function does nothing if the root logger already has handlers configured for it."

So if I have logging configured for django via settings.LOGGING, I can't use the process_tasks logging options to override that.
And confusingly, --log-file silently has no effect.

This is really problematic when running under cron if settings.LOGGING includes a stream handler, as you need to silence everything but errors on stdout/stderr to avoid having cron email you non-error output all the time.

Since process_tasks runs in its own process, maybe we should instead remove all handlers and add a file handler if and only if --log-file is provided?
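A minimal sketch of that suggestion (not the library's current code; the function name is made up for illustration):

import logging

def configure_task_logging(log_file=None, log_level=logging.WARNING):
    # process_tasks runs in its own process, so replacing the root
    # logger's handlers does not affect the web workers
    root = logging.getLogger()
    for handler in list(root.handlers):
        root.removeHandler(handler)
    handler = logging.FileHandler(log_file) if log_file else logging.NullHandler()
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    root.addHandler(handler)
    root.setLevel(log_level)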

Tasks are not autodiscovered when using explicit AppConfig in INSTALLED_APPS

In my INSTALLED_APPS, my_app is not specified simply as my_app but rather as my_app.apps.MyAppConfig. My tasks are located at my_app.tasks but autodiscover() looks for them at my_app.apps.MyAppConfig.tasks because of how it iterates over INSTALLED_APPS.

autodiscover tries to import_module('my_app.apps.MyAppConfig') and fails because MyAppConfig is not a module. I don't know enough about AppConfigs or INSTALLED_APPS to know if it's a reasonable fix to assume the first part of the path is always an app e.g. import_module(app.split('.')[0])
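One way to make the discovery robust against AppConfig paths, sketched here as an assumption rather than the library's actual code, is to iterate over the loaded app configs instead of the raw INSTALLED_APPS strings:

from importlib import import_module

from django.apps import apps

def autodiscover_tasks():
    # app_config.name is always the app's module path, even when
    # INSTALLED_APPS lists a dotted AppConfig class instead
    for app_config in apps.get_app_configs():
        try:
            import_module('%s.tasks' % app_config.name)
        except ImportError:
            # assume the app simply has no tasks module; a stricter check
            # would be needed to avoid hiding real import errors
            pass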

Please remove your version of django-background-task to avoid further confusion.

So, this is what happens... It confuses everyone, and not for the first time.

I want to encourage you to remove your version of django-background-task to avoid further confusion.

Of course we respect the copyright, your invested effort and will keep you mentioned in the README.

Django Background Task is a database-backed work queue for Django, loosely based around Ruby's DelayedJob library. This project was adopted and adapted from lilspikey's django-background-task.

AttributeError: 'module' object has no attribute 'autocommit'

When running 'python manage.py' after adding background_task to INSTALLED_APPS, I am getting the following error: AttributeError: 'module' object has no attribute 'autocommit'.
The full output is as follows.


Traceback (most recent call last):
  File "manage.py", line 9, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 312, in execute
    django.setup()
  File "/usr/local/lib/python2.7/dist-packages/django/__init__.py", line 18, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/usr/local/lib/python2.7/dist-packages/django/apps/registry.py", line 108, in populate
    app_config.import_models(all_models)
  File "/usr/local/lib/python2.7/dist-packages/django/apps/config.py", line 198, in import_models
    self.models_module = import_module(models_module_name)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/home/mhassan/second-vagrant-env/dev-platform/code/worksteps/testing/models.py", line 1148, in <module>
    @background(schedule=5)
  File "/usr/local/lib/python2.7/dist-packages/background_task/__init__.py", line 5, in background
    from tasks import tasks
  File "/usr/local/lib/python2.7/dist-packages/background_task/tasks.py", line 115, in <module>
    class DBTaskRunner(object):
  File "/usr/local/lib/python2.7/dist-packages/background_task/tasks.py", line 145, in DBTaskRunner
    @transaction.autocommit

AttributeError: 'module' object has no attribute 'autocommit'

Why not pickle?

Hi, why serialize with JSON and not pickle?

P.S.: The solution is very interesting; it's much simpler than Celery.

Object of type 'WSGIRequest' is not JSON serializable

my code:


@background(schedule=60)
def g_trends(request):
	get_trends()
	return HttpResponse('hello')

When I run the code in Django and go to my URL for that function, I get this error:
Object of type 'WSGIRequest' is not JSON serializable
How can I fix it?
