oca / connector
Odoo generic connector framework (jobs queue, asynchronous tasks, channels)
License: GNU Affero General Public License v3.0
Hello,
I need to import sales orders and I get this error:
Fault 1: "SQLSTATE[42S22]: Column not found: 1054 Unknown column 'imported' in 'where clause', query was: SELECT main_table.increment_id FROM sales_flat_order AS main_table WHERE (imported = '') AND (created_at >= '2015/10/26 09:51:39' AND created_at <= '2015/10/26 14:25:31') AND (store_id IN('1')) AND (state != 'canceled')">
Could anyone please give me a solution?
When a job has failed and is retried, the result of the job is
<openerp.addons.connector.controllers.main.RunJobController object at 0x4731250>
Hi,
I'm creating some unit tests for mail invitations that use the connector module. In my current code, when a user sends an invitation a job is created, and later the worker runs the job, sending the email.
My problem is that I need the tests to also work with the connector worker. In the unit test I can create a worker, and the worker finds the new job, but it can't run it. I guess the problem is with sessions; maybe in the test environment something is locked and new sessions don't work.
I noticed that the watcher starts but can't finish self._update_workers(). I also checked the unit tests of the connector module, but there are no worker tests.
Some ideas on how to write unit tests that use the worker would help a lot.
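One way around the worker in tests is to not use the worker thread at all and perform the pending jobs synchronously inside the test, in the test's own transaction. The sketch below shows the idea with a plain-Python stand-in; all names here are hypothetical, not connector APIs.

```python
# Minimal stand-in for a job queue: in a unit test, we perform the
# pending jobs inline instead of relying on a separate worker thread,
# so there is no second session/transaction involved.
class FakeQueue:
    def __init__(self):
        self.pending = []

    def delay(self, func, *args):
        # What .delay() conceptually does: record the work for later.
        self.pending.append((func, args))

    def run_all(self):
        # What a test helper would do: pop and perform each job
        # synchronously, in the caller's context.
        results = []
        while self.pending:
            func, args = self.pending.pop(0)
            results.append(func(*args))
        return results

queue = FakeQueue()
queue.delay(lambda who: "invitation sent to %s" % who, "alice")
print(queue.run_all())  # ['invitation sent to alice']
```

With a helper like this, the assertion on "the email was sent" can run right after `run_all()`, without waiting on a worker.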
Hi everyone,
I know that 2.2.0 is an old tag and the API has been refactored.
Yet this tag seems to be the one used for OCA/connector-prestashop#11,
so I used it and tried to create a new database, and it failed.
I have two issues:
2015-08-22 18:28:26,929 2061 DEBUG ? openerp.http: Loading sale_payment_method_automatic_workflow
2015-08-22 18:28:26,933 2061 DEBUG ? openerp.http: Loading prestashoperpconnect
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/home/florent/DEV/Lib_Odoo/gitMag-OCA-Connector/connector/queue/worker.py", line 313, in run
self._update_workers()
File "/home/florent/DEV/Lib_Odoo/gitMag-OCA-Connector/connector/queue/worker.py", line 301, in _update_workers
db_names = self.available_db_names()
File "/home/florent/DEV/Lib_Odoo/gitMag-OCA-Connector/connector/queue/worker.py", line 266, in available_db_names
services = openerp.netsvc.ExportService._services
AttributeError: 'module' object has no attribute 'ExportService'
And later it causes crashes when installing XML files:
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 363, in old_api
result = method(recs, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/addons/base/res/res_partner.py", line 564, in write
result = super(res_partner, self).write(vals)
ParseError: "write() takes at least 5 arguments (2 given)" while parsing /home/florent/DEV/Serveurs/odoo/odoo/openerp/addons/base/base_data.xml:23, near
<record id="main_partner" model="res.partner" context="{'default_is_company': True}">
<field name="name">Your Company</field>
<field name="company_id" eval="None"/>
<field name="image" eval="False"/>
<field name="customer" eval="False"/>
<field name="is_company" eval="True"/>
<field name="street"/>
<field name="city"/>
<field name="zip"/>
<field name="phone"/>
<field name="email">[email protected]</field>
<field name="website">www.yourcompany.com</field>
</record>
Not referencing the OCA connector repo doesn't cause any problem.
Regards
I created 60k jobs (in chunks of 1000) in a first job, and at the same time the workers started to process them; all was fine.
But once the 60k jobs were created, and when I had about 33k pending jobs remaining, if I stopped my server (properly: the current workers finish their tasks and terminate) and restarted it, the workers did not run anymore.
I thought it was due to some huge transactions taking time, so I left the server like that the whole night, hoping the workers would recover and process these 33k pending jobs, but nothing happened.
To re-run the jobs, I just removed them directly in the db (DELETE FROM queue_job WHERE job_function_id=X), recreated the 60k jobs, and this time all of them were processed (without stopping Odoo).
I really don't know what's happening; has anyone experienced this?
Version used: 3.2
We had this case today with the jobrunner (version 7.0).
A job was stuck in enqueued, blocking the queue because it took a slot, being counted as a running job.
Our analysis was that the jobrunner updated the state of the job to enqueued, but the GET call to run the job on the Odoo workers never reached Odoo (no trace of a GET for this job in the logs). The reason why the GET call was not made is unknown (other jobs were correctly processed just before).
In the runner's logs, we have:
2015-11-17 18:25:27,873 9168 INFO ? openerp.addons.connector.jobrunner.runner: asking Odoo to run job a9e5d41a-433f-43e5-b584-e6991cf2a64e on db openerp_prod_xxx
2015-11-17 18:25:27,876 9168 INFO ? requests.packages.urllib3.connectionpool: Starting new HTTP connection (1): localhost
So we expect to have this in the Odoo logs:
2015-11-17 18:25:27,876 5327 INFO openerp_prod_xxx werkzeug: 127.0.0.1 - - [17/Nov/2015 18:25:27] "GET /connector/runjob?db=openerp_prod_xxx&job_uuid=a9e5d41a-433f-43e5-b584-e6991cf2a64e HTTP/1.1" 200 -
But there is nothing.
In a general manner, if anything goes wrong when we call _async_http_get (https://github.com/OCA/connector/blob/9.0/connector/jobrunner/runner.py#L134-L149), which is very rare but could happen, a job will stay stuck in the enqueued state.
A possible workaround would be to add a cron that checks whether a job has been enqueued for too long and requeues it. But I wonder if we couldn't find a nicer way to fix it (what about removing the enqueued state? side effects?)
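The cron workaround could look like the sketch below, with a plain dict standing in for a queue.job record. The 30-minute threshold and the field names are assumptions for illustration, not connector code.

```python
from datetime import datetime, timedelta

REQUEUE_TIMEOUT = timedelta(minutes=30)  # assumed threshold, tune per site

def requeue_stale_jobs(jobs, now=None):
    """Reset to 'pending' any job stuck in 'enqueued' for longer than
    REQUEUE_TIMEOUT, so the runner stops counting it as running and
    frees the channel slot it occupies."""
    now = now or datetime.now()
    for job in jobs:
        if (job['state'] == 'enqueued'
                and now - job['date_enqueued'] > REQUEUE_TIMEOUT):
            job['state'] = 'pending'
    return jobs
```

In an Odoo cron this would be a search on queue.job filtered by state and date_enqueued, followed by a requeue; the logic is the same.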
Hello,
I developed a module to validate invoices using a web service. This module has some troubles that I think your module can help manage.
One requirement for my module is to validate invoices in batches. Sometimes the process takes more time than the web client timeout, and other times one invoice is not valid, which raises an exception with a database rollback.
I would like to know how to create a job for each invoice and, when the job ends, send a message to the invoice owner, and how to start these jobs from a client action.
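The usual answer is one job per invoice, so that a single bad invoice only rolls back its own transaction, plus a notification step at the end of each job. The sketch below shows that shape in plain Python with hypothetical names; in the real connector this would be a @job-decorated function delayed once per invoice.

```python
def validate_invoice(invoice):
    """One unit of work per job; raising here fails only this job,
    not the whole batch."""
    if not invoice['valid']:
        raise ValueError('invoice %s rejected' % invoice['id'])
    return 'validated %s' % invoice['id']

def process_batch(invoices, notify):
    """Stand-in for enqueue-and-run: each invoice is an independent
    job, and the invoice owner is notified of the per-invoice outcome."""
    for inv in invoices:
        try:
            result = validate_invoice(inv)
        except ValueError as exc:
            result = str(exc)
        notify(inv['owner'], result)
```

The client action would only create the jobs and return immediately, which also sidesteps the web client timeout.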
Thanks,
Cristian.
It floods the log files
I had some checkpoints that are not very explicit.
Even the notes below them are not helpful.
It would be great if the reason for the check could be available in the checkpoint.
The category of the Connector should be "Connector" so that the related modules and the Connector itself can be found with the same category search. Without this, the main module does not seem to follow convention, and all users need to search at least two different categories to find the connector modules.
Notice the documentation: openerp_connector/connector/doc/guides/bootstrap_connector.rst
Connector
Since I have trouble with the worker system, I decided to activate the jobrunner. I followed all the steps described here: http://odoo-connector.com/guides/jobrunner.html.
I restarted the odoo server and the log first shows these lines:
openerp.service.server: Worker WorkerHTTP (23571) alive
2015-12-03 11:18:43,856 23572 INFO ? openerp.service.server: Worker WorkerHTTP (23572) alive
2015-12-03 11:18:43,872 23573 INFO ? openerp.service.server: Worker WorkerHTTP (23573) alive
2015-12-03 11:18:43,892 23574 INFO ? openerp.service.server: Worker WorkerHTTP (23574) alive
2015-12-03 11:18:43,910 23575 INFO ? openerp.service.server: Worker WorkerHTTP (23575) alive
2015-12-03 11:18:43,922 23576 INFO ? openerp.service.server: Worker WorkerHTTP (23576) alive
2015-12-03 11:18:43,932 23577 INFO ? openerp.service.server: Worker WorkerHTTP (23577) alive
2015-12-03 11:18:43,944 23578 INFO ? openerp.service.server: Worker WorkerHTTP (23578) alive
2015-12-03 11:18:43,957 23579 INFO ? openerp.service.server: Worker WorkerHTTP (23579) alive
2015-12-03 11:18:43,970 23580 INFO ? openerp.service.server: Worker WorkerHTTP (23580) alive
2015-12-03 11:18:43,982 23581 INFO ? openerp.service.server: Worker WorkerCron (23581) alive
2015-12-03 11:18:43,994 23582 INFO ? openerp.service.server: Worker WorkerCron (23582) alive
2015-12-03 11:19:29,297 23578 INFO ? openerp.addons.report.models.report: Will use the Wkhtmltopdf binary at /usr/bin/wkhtmltopdf
2015-12-03 11:19:29,542 23578 WARNING ? openerp.addons.magentoerpconnect.connector: Deprecated: at line 27: This call to 'install_in_connector()' has no effect and is not required.
which seems good, but I expected those lines to be followed by:
...INFO...connector.jobrunner.runner: starting
...INFO...connector.jobrunner.runner: initializing database connections
...INFO...connector.jobrunner.runner: connector runner ready for db
...INFO...connector.jobrunner.runner: database connections ready
as the tutorial says, but this never happens, and Odoo begins to create jobs which never start/run (I now have 1500+ pending jobs).
What may be the cause of that?
PS: I updated my connector module to the 3.2.0 release but didn't update the other dependencies of the magento-odoo-connector. Does that matter?
Thanks.
I regularly have errors such as:
Traceback (most recent call last):
File "/home/gbaconnier/customers/xxx/dev/parts/connector/connector/controllers/main.py", line 99, in runjob
retry_postpone(job, unicode(err), seconds=PG_RETRY)
File "/home/gbaconnier/customers/xxx/dev/parts/connector/connector/controllers/main.py", line 83, in retry_postpone
self.job_storage_class(session).store(job)
File "/home/gbaconnier/customers/xxx/dev/parts/connector/connector/queue/job.py", line 200, in store
db_record.write(vals)
File "/home/gbaconnier/customers/xxx/dev/parts/odoo/openerp/api.py", line 248, in wrapper
return new_api(self, *args, **kwargs)
File "/home/gbaconnier/customers/xxx/dev/parts/connector/connector/queue/model.py", line 152, in write
res = super(QueueJob, self).write(vals)
File "/home/gbaconnier/customers/xxx/dev/parts/odoo/openerp/api.py", line 248, in wrapper
return new_api(self, *args, **kwargs)
File "/home/gbaconnier/customers/xxx/dev/parts/odoo/openerp/api.py", line 565, in new_api
result = method(self._model, cr, uid, self.ids, *args, **kwargs)
File "/home/gbaconnier/customers/xxx/dev/parts/odoo/addons/mail/mail_thread.py", line 432, in write
result = super(mail_thread, self).write(cr, uid, ids, values, context=context)
File "/home/gbaconnier/customers/xxx/dev/parts/odoo/openerp/api.py", line 250, in wrapper
return old_api(self, *args, **kwargs)
File "/home/gbaconnier/customers/xxx/dev/parts/odoo/openerp/api.py", line 372, in old_api
result = method(recs, *args, **kwargs)
File "/home/gbaconnier/customers/xxx/dev/parts/connector/connector/producer.py", line 62, in write
result = write_original(self, vals)
File "/home/gbaconnier/customers/xxx/dev/parts/odoo/openerp/api.py", line 248, in wrapper
return new_api(self, *args, **kwargs)
File "/home/gbaconnier/customers/xxx/dev/parts/odoo/openerp/models.py", line 3787, in write
self._write(old_vals)
File "/home/gbaconnier/customers/xxx/dev/parts/odoo/openerp/api.py", line 248, in wrapper
return new_api(self, *args, **kwargs)
File "/home/gbaconnier/customers/xxx/dev/parts/odoo/openerp/api.py", line 565, in new_api
result = method(self._model, cr, uid, self.ids, *args, **kwargs)
File "/home/gbaconnier/customers/xxx/dev/parts/odoo/openerp/models.py", line 3898, in _write
cr.execute(query, params + (sub_ids,))
File "/home/gbaconnier/customers/xxx/dev/parts/odoo/openerp/sql_db.py", line 158, in wrapper
return f(self, *args, **kwargs)
File "/home/gbaconnier/customers/xxx/dev/parts/odoo/openerp/sql_db.py", line 234, in execute
res = self._obj.execute(query, params)
TransactionRollbackError: could not serialize access due to concurrent update
And I don't think I had such errors previously. I wonder if they could come from the recent changes in the jobrunner (https://github.com/OCA/connector/blob/8.0/connector/jobrunner/runner.py#L137-L145), but I haven't investigated.
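TransactionRollbackError ("could not serialize access due to concurrent update") is PostgreSQL's serialization failure, and the standard answer is to retry the whole transaction. A minimal sketch of that pattern, independent of Odoo (the exception class here is a stand-in for psycopg2's):

```python
import time

class TransactionRollbackError(Exception):
    """Stand-in for psycopg2's serialization-failure error."""

def run_with_retry(tx, attempts=5):
    """Retry a transactional callable a few times when it is rolled
    back by a concurrent update; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return tx()
        except TransactionRollbackError:
            if attempt == attempts - 1:
                raise
            time.sleep(0)  # real code would back off between attempts
```

This is roughly what retry_postpone is for at the job level: the job itself is the retried unit.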
The connector worker does not support a dbfilter containing %d or %h.
I.e. with dbfilter = '^%d.*' it does not work, because of the following lines of code:

    dbfilter = config['dbfilter']
    if dbfilter and db_names:
        db_names = [d for d in db_names if re.match(dbfilter, d)]

Here the dbfilter variable contains the literal '^%d.*', so re.match finds no match and an empty list is returned.
Likewise, the openerp-connector-worker script does not work with such a dbfilter.
The job runner is also not activated when %d or %h is used in db_filter:
def get_db_names(self):
    if openerp.tools.config['db_name']:
        db_names = openerp.tools.config['db_name'].split(',')
    else:
        db_names = openerp.service.db.exp_list()
    dbfilter = openerp.tools.config['dbfilter']
    if dbfilter:
        db_names = [d for d in db_names if re.match(dbfilter, d)]
    _logger.info('************************ get_db_names %s',
                 [openerp.tools.config['db_name'], openerp.service.db.exp_list(), dbfilter])
    return db_names
log output:
2015-10-13 12:57:54,990 25521 INFO ? openerp.addons.connector.jobrunner.runner: ************************ get_db_names [False, [u'erp'], '^%d$']
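The %h/%d placeholders are normally expanded by Odoo's HTTP layer from the request host before the dbfilter is used as a regex; the runner uses the raw string, so '^%d$' matches nothing. A fix would have to substitute them first, roughly like this sketch (a hypothetical helper, not connector code; the %h = full host, %d = first host label convention is taken from Odoo's HTTP handling):

```python
import socket

def resolve_dbfilter(dbfilter, hostname=None):
    """Expand the %h/%d placeholders before using dbfilter as a regex:
    %h is the full host name, %d its first dot-separated label.
    Without this, a filter like '^%d$' matches no database at all."""
    host = hostname or socket.gethostname()
    domain = host.partition('.')[0]
    return dbfilter.replace('%h', host).replace('%d', domain)
```

Since the runner has no HTTP request to take the host from, the machine's own hostname is the only reasonable substitute, which is itself a limitation of this approach.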
Read OCA/maintainer-tools#29 to know more about it
Hi,
One of the bottlenecks of the connector is the way it stores the jobs (in the database). I was thinking about another storage backend like Redis or RabbitMQ to speed up job creation/consumption, and I just saw that there is already some abstraction for this with the JobStorage class and its implementation OpenERPJobStorage.
But the latter seems to be used as a direct dependency by the web controller, the queue.job model, the Worker class, and the Job.delay method.
Are there any thoughts about this? If we want to provide different storage backends for the jobs/tasks in the future, does this have to be implemented in another module (e.g. connector_redis, with some monkey-patches/overloads of connector elements) or integrated directly in connector (choosing the storage backend through some system parameters)?
In fact, I need some performance on a project (just a test) and started to think about integrating Celery for the tasks, but it is a bit redundant with the connector framework; that's why I am asking for info/feedback on this topic!
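To make the question concrete, an alternative backend would only need to honour the same storage contract. A minimal sketch of such a backend, with an in-memory dict standing in for Redis; the store/load surface here is assumed for illustration, the real JobStorage/OpenERPJobStorage API is richer:

```python
class InMemoryJobStorage:
    """Sketch of an interchangeable job storage backend: anything that
    can store a job and load it back by uuid could sit behind the
    JobStorage abstraction (a Redis backend would use HSET/HGETALL
    instead of a dict)."""
    def __init__(self):
        self._jobs = {}

    def store(self, job):
        self._jobs[job['uuid']] = job

    def load(self, uuid):
        return self._jobs[uuid]
```

The hard part is not this class but decoupling the web controller, queue.job model, Worker and Job.delay from OpenERPJobStorage so the backend becomes injectable.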
Steps to reproduce:
Jobs are left in the started state.
Expected:
The jobs are reset to pending.
We use the connector together with the magento-connector framework. We have various problems because we end up with different products sharing the same default_code after import.
So we want a check, during or after import, that the default_code does not already exist. For sale orders we can define a "sale.exception"; we have nothing similar yet, like a "checkpoint.exception", executed after the import job (instead of raising an exception inside it), to check in general, after an import job has run, whether everything is fine.
So our workaround is to patch the "reviewed" button to include the check there.
class connector_checkpoint(Model):
    _inherit = 'connector.checkpoint'

    def reviewed(self, cr, uid, ids, context=None):
        for checkpoint in self.browse(cr, uid, ids, context=context):
            if checkpoint.model_id.model == 'product.product':
                product_pool = self.pool.get('product.product')
                product = product_pool.browse(cr, uid, checkpoint.record_id,
                                              context=context)
                product_ids_with_this_sku = product_pool.search(
                    cr, uid,
                    [('default_code', '=', product.default_code)],
                    context=context)
                if len(product_ids_with_this_sku) > 1:
                    raise except_orm(
                        _('Error'),
                        _('There is more than one product with the reference %s')
                        % product.default_code)
        return super(connector_checkpoint, self).reviewed(cr, uid, ids,
                                                          context=context)
Maybe it would be better to extend the checkpoint concept with defined rules to display to the user?
Today I have encountered a strange timing issue with the connector (Odoo 7, Connector 2.2.0):
2014-09-29 14:24:46,367 4864 ERROR test openerp.service.workers: Worker (4864) Exception occured, exiting...
Traceback (most recent call last):
File "/srv/openerp/odoo/openerp/service/workers.py", line 307, in run
self.process_work()
File "/srv/openerp/addons/addons-external/connector/openerp-connector-worker", line 92, in process_work
self._work_database(cr)
File "/srv/openerp/addons/addons-external/connector/openerp-connector-worker", line 73, in _work_database
max_jobs=MAX_JOBS)
File "/srv/openerp/addons/addons-external/connector/queue/model.py", line 272, in assign_then_enqueue
self.enqueue_jobs(cr, uid, context=context)
File "/srv/openerp/addons/addons-external/connector/queue/model.py", line 297, in enqueue_jobs
self._enqueue_jobs(cr, uid, context=context)
File "/srv/openerp/addons/addons-external/connector/queue/model.py", line 350, in _enqueue_jobs
db_worker_id = self._worker_id(cr, uid, context=context)
File "/srv/openerp/addons/addons-external/connector/queue/model.py", line 253, in _worker_id
"of 1" % len(worker_ids))
AssertionError: 0 worker found in database instead of 1
The only thing that I found is the old bug on Launchpad https://bugs.launchpad.net/openerp-connector/+bug/1281073, but this seems to be a different issue because I am running a multiprocessing setup with "--workers=2" on both openerp-server and openerp-connector-worker (and a -d $DBNAME). The problem occurs because _enqueue_jobs is called before _notify_alive, but I could not figure out why this is the case. The only strange thing that I noticed is that the connector workers are started in a sleep cycle before going into their first run cycle. This seems to be different from other connector installations that I am running successfully.
My solution for now is to surround line 350 of connector/queue/model.py with a try/except:
try:
    db_worker_id = self._worker_id(cr, uid, context=context)
except Exception:
    return
When you install the connector (branch 8.0) on Odoo 8, after a few seconds the following error appears and aborts the installation.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/openerp/http.py", line 499, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "/usr/local/lib/python2.7/site-packages/openerp/http.py", line 516, in dispatch
result = self._call_function(**self.params)
File "/usr/local/lib/python2.7/site-packages/openerp/http.py", line 282, in _call_function
return checked_call(self.db, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/openerp/service/model.py", line 113, in wrapper
return f(dbname, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/openerp/http.py", line 279, in checked_call
return self.endpoint(*a, **kw)
File "/usr/local/lib/python2.7/site-packages/openerp/http.py", line 732, in __call__
return self.method(*args, **kw)
File "/usr/local/lib/python2.7/site-packages/openerp/http.py", line 375, in response_wrap
response = f(*args, **kw)
File "/var/packages/Odoo8/target/addons/web/controllers/main.py", line 948, in call_button
action = self._call_kw(model, method, args, {})
File "/var/packages/Odoo8/target/addons/web/controllers/main.py", line 936, in _call_kw
return getattr(request.registry.get(model), method)(request.cr, request.uid, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/openerp/api.py", line 237, in wrapper
return old_api(self, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/openerp/addons/base/module/wizard/base_module_upgrade.py", line 104, in upgrade_module
openerp.modules.registry.RegistryManager.new(cr.dbname, update_module=True)
File "/usr/local/lib/python2.7/site-packages/openerp/modules/registry.py", line 324, in new
openerp.modules.load_modules(registry._db, force_demo, status, update_module)
File "/usr/local/lib/python2.7/site-packages/openerp/modules/loading.py", line 358, in load_modules
loaded_modules, update_module)
File "/usr/local/lib/python2.7/site-packages/openerp/modules/loading.py", line 263, in load_marked_modules
loaded, processed = load_module_graph(cr, graph, progressdict, report=report, skip_modules=loaded_modules, perform_checks=perform_checks)
File "/usr/local/lib/python2.7/site-packages/openerp/modules/loading.py", line 182, in load_module_graph
_load_data(cr, module_name, idref, mode, kind='data')
File "/usr/local/lib/python2.7/site-packages/openerp/modules/loading.py", line 118, in _load_data
tools.convert_file(cr, module_name, filename, idref, mode, noupdate, kind, report)
File "/usr/local/lib/python2.7/site-packages/openerp/tools/convert.py", line 899, in convert_file
convert_xml_import(cr, module, fp, idref, mode, noupdate, report)
File "/usr/local/lib/python2.7/site-packages/openerp/tools/convert.py", line 985, in convert_xml_import
obj.parse(doc.getroot(), mode=mode)
File "/usr/local/lib/python2.7/site-packages/openerp/tools/convert.py", line 851, in parse
self._tags[rec.tag](self.cr, rec, n, mode=mode)
File "/usr/local/lib/python2.7/site-packages/openerp/tools/convert.py", line 765, in _tag_record
id = self.pool['ir.model.data']._update(cr, self.uid, rec_model, self.module, res, rec_id or False, not self.isnoupdate(data_node), noupdate=self.isnoupdate(data_node), mode=self.mode, context=rec_context )
File "/usr/local/lib/python2.7/site-packages/openerp/api.py", line 237, in wrapper
return old_api(self, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/openerp/addons/base/ir/ir_model.py", line 1047, in _update
res_id = model_obj.create(cr, uid, values, context=context)
File "/usr/local/lib/python2.7/site-packages/openerp/api.py", line 237, in wrapper
return old_api(self, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/openerp/api.py", line 332, in old_api
result = method(recs, *args, **kwargs)
File "/var/packages/Odoo8/target/addons/connector/producer.py", line 48, in create
record_id = create_original(self, vals)
File "/usr/local/lib/python2.7/site-packages/openerp/api.py", line 235, in wrapper
return new_api(self, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/openerp/models.py", line 3952, in create
vals = self._add_missing_default_values(vals)
File "/usr/local/lib/python2.7/site-packages/openerp/api.py", line 235, in wrapper
return new_api(self, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/openerp/api.py", line 464, in new_api
result = method(self._model, cr, uid, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/openerp/models.py", line 1836, in _add_missing_default_values
defaults = self.default_get(cr, uid, list(missing_defaults), context)
File "/usr/local/lib/python2.7/site-packages/openerp/api.py", line 237, in wrapper
return old_api(self, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/openerp/models.py", line 1331, in default_get
value = record[name]
File "/usr/local/lib/python2.7/site-packages/openerp/models.py", line 5441, in __getitem__
return self._fields[key].get(self, type(self))
File "/usr/local/lib/python2.7/site-packages/openerp/fields.py", line 717, in __get__
return record._cache[self]
File "/usr/local/lib/python2.7/site-packages/openerp/models.py", line 5775, in __getitem__
return value.get() if isinstance(value, SpecialValue) else value
File "/usr/local/lib/python2.7/site-packages/openerp/fields.py", line 55, in get
raise self.exception
ParseError: "Field queue.job.channel.complete_name is accessed before being computed." while parsing /var/packages/Odoo8/target/addons/connector/queue/queue_data.xml:47, near
<record model="queue.job.channel" id="channel_root"><field name="name">root</field></record>
If I delete the block
<record model="queue.job.channel" id="channel_root"> <field name="name">root</field> </record>
from the XML and re-install, then the installation is successful. Of course, no root channel is created then. A manual edit of the channels leads to the same exception.
I'm not sure if this is a bug in the connector or a problem with the Odoo installation.
File "/opt/odoo/additional_addons/community/connector/queue/worker.py", line 124, in run_job
job.perform(session)
File "/opt/odoo/additional_addons/community/connector/queue/job.py", line 492, in perform
self.result = self.func(session, *self.args, **self.kwargs)
File "/opt/odoo/additional_addons/community/magentoerpconnect/unit/export_synchronizer.py", line 392, in export_record
record = session.browse(model_name, binding_id)
File "/opt/odoo/additional_addons/community/connector/session.py", line 167, in browse
model_obj = self.pool[model]
File "/opt/odoo/sources/odoo/openerp/modules/registry.py", line 102, in __getitem__
return self.models[model_name]
Scenario
When using the connector framework, you have to use some API wrapper for your external system. This API wrapper may be stateless or not. If your API wrapper is stateful, how do you pass the API around without creating the API instance and connecting each time again (which is costly)?
E.g. for reading the record, for reading children, for reading translations, etc.
Workaround
In order to reuse the API instance, I wrote a custom ConnectorEnvironment which gets the API instance at instantiation:
class APIConnectorEnvironment(ConnectorEnvironment):
    def __init__(self, backend_record, session, model_name, api=None):
        super(APIConnectorEnvironment, self).__init__(
            backend_record, session, model_name)
        self.api = api

def get_environment(...):
    ...
    env = APIConnectorEnvironment(...., api=api)
    ....
    return env
Unfortunately this does not work all the time. In some situations the framework creates a new ConnectorEnvironment, and it naturally does not use my custom ConnectorEnvironment class, because of the unit_for method in ConnectorUnit:
def unit_for(self, connector_unit_class, model=None):
    ...
    if model is None or model == self.model._name:
        env = self.connector_env
    else:
        env = ConnectorEnvironment(self.backend_record,
                                   self.session,
                                   model)
    return env.get_connector_unit(connector_unit_class)
So, in order to get my custom ConnectorEnvironment working, I have to either:
1. subclass every ConnectorUnit (Exporter, Importer, Deleter, Synchronizer, Mapper, ...), or
2. override the unit_for method and replace it in the mixin.
Feature Request
For case (2) it would be nicer to have a hook method in ConnectorUnit like:
def unit_for(...):
    if model is None or model == self.model._name:
        env = self.connector_env
    else:
        env = self._create_connector_environment(model)
    return env.get_connector_unit(connector_unit_class)

def _create_connector_environment(self, model):
    return ConnectorEnvironment(self.backend_record, self.session, model)
What solutions exist for (1) except monkey patching? Did I overlook something, or should this be a supported feature: interchangeable ConnectorEnvironment classes?
I get the following error on a 7.0 instance with the job runner running in production.
Traceback (most recent call last):
File "/usr/lib/python2.7/logging/__init__.py", line 850, in emit
msg = self.format(record)
File "/usr/lib/python2.7/logging/__init__.py", line 723, in format
return fmt.format(record)
File "/srv/openerp/parts/server/openerp/netsvc.py", line 154, in format
return logging.Formatter.format(self, record)
File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
record.message = record.getMessage()
File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
msg = msg % self.args
File "/srv/openerp/parts/connector/connector/jobrunner/channels.py", line 369, in __str__
len(self._failed))
TypeError: %d format: a number is required, not NoneType
Logged from file channels.py, line 458
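The crash is easy to reproduce outside the connector: logging interpolates its arguments lazily, and a %d placeholder that ends up receiving None fails at exactly that point. The guard below sketches the obvious fix (treat a missing counter as 0); the function name is illustrative, not connector code.

```python
def format_status(failed_count):
    # Guard suggested as a fix: treat a None counter as 0 before
    # interpolating, so __str__ can never blow up inside logging.
    return 'failed: %d' % (failed_count if failed_count is not None else 0)

# What channels.__str__ effectively does when _failed's length is None:
try:
    'failed: %d' % (None,)
except TypeError as exc:
    assert 'NoneType' in str(exc)

print(format_status(None))  # 'failed: 0'
```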
The connector should provide a helper to be used when validating data.
This helper must ensure that all keys in the dictionary passed to the create/write methods of an Odoo model are part of the model's columns.
This helper should ease development and migration.
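Such a helper could be as small as the sketch below (the name and the ValueError are hypothetical; nothing like this is confirmed to exist in the connector):

```python
def check_model_keys(vals, columns):
    """Check that every key of vals is a column of the model, so a
    mapper fails fast with a clear message instead of crashing deep
    inside create/write after a migration renames a field."""
    unknown = [key for key in vals if key not in columns]
    if unknown:
        raise ValueError('unknown columns: %s' % ', '.join(sorted(unknown)))
    return vals
```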
The jobrunner does an HTTP request per job, and the werkzeug server creates one new session file each time. With thousands of jobs, this leads to a huge number of session files on the server (we came close to an inode shortage).
I guess using requests sessions could be a solution.
Hello,
I encountered a bug when a RetryableJobError is raised.
The retry_postpone method is then called without the keyword arg seconds.
This raises an error in job.py, method postpone:
TypeError: unsupported type for timedelta seconds component: NoneType
Steps to reproduce
Current Behaviour
Traceback because of timedelta not allowing NoneType
Expected Behaviour
No traceback, job postponed.
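A defensive fix can be sketched as: default the delay before building the timedelta, since timedelta(seconds=None) is what raises the TypeError. The 10-second default below is an assumption for illustration, not the connector's actual constant.

```python
from datetime import timedelta

DEFAULT_RETRY_SECONDS = 10  # assumed default, not the connector's constant

def eta_for_postpone(now, seconds=None):
    """Compute the new execution time for a postponed job.
    timedelta(seconds=None) raises TypeError, so fall back to a
    default delay when the caller gave none."""
    if seconds is None:
        seconds = DEFAULT_RETRY_SECONDS
    return now + timedelta(seconds=seconds)
```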
If we start the openerp-connector-runner script before Odoo itself, job processing is blocked.
It happens because this line https://github.com/OCA/connector/blob/7.0/connector/jobrunner/runner.py#L251 sets the state to enqueued before sending the job to the workers. If the workers can't take the jobs (because they are not running, for instance), the jobs stay in enqueued and are never reset to pending. The runner considers them as running, so it won't start new jobs.
For the record, when it happens, we can unblock them by manually requeuing the jobs.
I've set up the jobrunner on a multi-database installation, but when I omit --database <DATABASE_NAME> from the odoo start script in /etc/init.d/, I get Access denied error messages in my odoo log:
2015-12-27 09:43:04,780 1205 INFO ? openerp.addons.connector.jobrunner.runner: initializing database connections
2015-12-27 09:43:04,780 1205 ERROR ? openerp.addons.connector.jobrunner.runner: exception: sleeping 5s and retrying
Traceback (most recent call last):
File "/var/lib/odoo/.local/share/Odoo/addons/8.0/connector/jobrunner/runner.py", line 344, in run
self.initialize_databases()
File "/var/lib/odoo/.local/share/Odoo/addons/8.0/connector/jobrunner/runner.py", line 285, in initialize_databases
for db_name in self.get_db_names():
File "/var/lib/odoo/.local/share/Odoo/addons/8.0/connector/jobrunner/runner.py", line 267, in get_db_names
db_names = openerp.service.db.exp_list()
File "/usr/lib/python2.7/dist-packages/openerp/service/db.py", line 309, in exp_list
raise openerp.exceptions.AccessDenied()
AccessDenied: Access denied.
My line for starting odoo looks like that:
start-stop-daemon --start --quiet --pidfile $PIDFILE --chuid $USER:$USER --background --make-pidfile --exec $DAEMON -- --config $CONFIG --logfile $LOGFILE --load=web,connector --workers=4 --db-filter='^%d$' --log-level=debug --longpolling-port=8072
(All params are correctly configured, and when I add --database <DATABASE_NAME> the errors go away, but odoo executes only the jobs from the specified db name.)
Is there a way to pass --database='^%d$' or something like that? Please advise.
Hi everyone,
This looks very strange: I ran the scheduler so that procurement orders could be planned, and I got this error:
2015-11-13 15:10:11,172 17899 DEBUG yse-manager openerp.osv.expression: File "/usr/lib64/python2.7/threading.py", line 783, in __bootstrap
self.__bootstrap_inner()
File "/usr/lib64/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 250, in wrapper
return old_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/addons/procurement/wizard/schedulers_all.py", line 62, in _procure_calculation_all
proc_obj.run_scheduler(new_cr, uid, use_new_cursor=new_cr.dbname, company_id = comp, context=context)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 250, in wrapper
return old_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/addons/stock/procurement.py", line 286, in run_scheduler
self._procure_orderpoint_confirm(cr, SUPERUSER_ID, use_new_cursor=use_new_cursor, company_id=company_id, context=context)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 250, in wrapper
return old_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/addons/stock/procurement.py", line 372, in _procure_orderpoint_confirm
self.run(cr, uid, [proc_id])
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 250, in wrapper
return old_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/addons/stock/procurement.py", line 219, in run
self.pool.get('stock.move').action_confirm(cr, uid, move_to_confirm_ids, context=context)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 250, in wrapper
return old_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/addons/delivery/stock.py", line 195, in action_confirm
res = super(stock_move, self).action_confirm(cr, uid, ids, context=context)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 250, in wrapper
return old_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/addons/stock/stock.py", line 2225, in action_confirm
self._picking_assign(cr, uid, move_ids, procurement_group, location_from, location_to, context=context)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 250, in wrapper
return old_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 372, in old_api
result = method(recs, *args, **kwargs)
File "/home/florent/DEV/Lib_Odoo/OCA-ecommerce/sale_automatic_workflow/stock_move.py", line 32, in _picking_assign
location_to)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 248, in wrapper
return new_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 565, in new_api
result = method(self._model, cr, uid, self.ids, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/addons/stock/stock.py", line 2159, in _picking_assign
pick = pick_obj.create(cr, uid, values, context=context)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 250, in wrapper
return old_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/addons/stock/stock.py", line 711, in create
return super(stock_picking, self).create(cr, user, vals, context)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 250, in wrapper
return old_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/addons/mail/mail_thread.py", line 381, in create
thread_id = super(mail_thread, self).create(cr, uid, values, context=context)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 250, in wrapper
return old_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 345, in old_api
result = method(recs, *args, **kwargs)
File "/home/florent/DEV/Lib_Odoo/gitMag-OCA-Connector/connector/producer.py", line 48, in create
record_id = create_original(self, vals)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 248, in wrapper
return new_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/models.py", line 4084, in create
record = self.browse(self._create(old_vals))
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 248, in wrapper
return new_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 481, in new_api
result = method(self._model, cr, uid, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/models.py", line 4256, in _create
recs.modified(self._fields)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 248, in wrapper
return new_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/models.py", line 5678, in modified
spec += self._fields[fname].modified(self)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/fields.py", line 962, in modified
target = env[field.model_name].search([(path, 'in', records.ids)])
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 248, in wrapper
return new_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 481, in new_api
result = method(self._model, cr, uid, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/models.py", line 1645, in search
return self._search(cr, user, args, offset=offset, limit=limit, order=order, context=context, count=count)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 250, in wrapper
return old_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/models.py", line 4664, in _search
query = self._where_calc(cr, user, args, context=context)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/api.py", line 250, in wrapper
return old_api(self, *args, **kwargs)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/models.py", line 4475, in _where_calc
e = expression.expression(cr, user, domain, self, context)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/osv/expression.py", line 657, in __init__
self.parse(cr, uid, context=context)
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/osv/expression.py", line 905, in parse
_logger.debug(''.join(traceback.format_stack()))
As you can see, it looks like the connector is part of the issue.
Any advice?
Using the installation instructions, I cloned the git repos. All modules showed up correctly except "connector" itself. I worked around the issue for now by copying connector/connector to connector and deleting the line "application: True". That is OK for me in a testing environment, but not good for production. This is using v7.
Improvement: replace the FOR UPDATE locks in the worker with advisory locks.
Here is the part that could be improved with that: https://github.com/OCA/connector/blob/8.0/connector/queue/model.py#L309-L356
It is more efficient; in particular, advisory locks are held in memory, so taking a lock does not incur a write to disk.
Beware: advisory locks are application-level locks, so they won't prevent users from writing to the job through the ORM; consider whether that should be prevented in the CRUD methods.
See also http://hashrocket.com/blog/posts/advisory-locks-in-postgres, http://stackoverflow.com/a/25514889
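As a rough illustration of the proposal, a session-level advisory lock could replace the SELECT ... FOR UPDATE when a worker claims a job. A minimal sketch using psycopg2-style cursors; the claiming logic and key derivation here are my assumptions, not the connector's actual implementation:

```python
import zlib

def job_lock_key(job_uuid):
    """Derive a stable integer key for pg_try_advisory_lock from a
    job UUID (PostgreSQL advisory lock functions take integer keys)."""
    # crc32 gives a 32-bit value; fine for an illustration, but a real
    # implementation should consider collisions between UUIDs.
    return zlib.crc32(job_uuid.encode('utf-8'))

def try_claim_job(cr, job_uuid):
    """Attempt to claim a job with a session-level advisory lock.

    Returns True if this session now holds the lock, False if another
    worker already claimed it. `cr` is assumed to be a psycopg2 cursor.
    """
    cr.execute("SELECT pg_try_advisory_lock(%s)",
               (job_lock_key(job_uuid),))
    return cr.fetchone()[0]
```

Since the lock lives in PostgreSQL's shared memory, a crashed worker's session disappearing releases its locks automatically, which is part of the appeal over row locks held by a long transaction.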
Hi, @guewen
Since install_in_connector() was deprecated, I've been trying to find a replacement, like here.
I hadn't seen your new method is_module_installed, oops, but
if module_name in list(self.env.registry._init_modules):
works like a charm.
It seems to rely on a low-level attribute defined here:
https://github.com/odoo/odoo/blob/8.0/openerp/modules/registry.py#L56
Don't you think it could safely be used as an alternative to is_module_installed(), at least for future connector versions?
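For illustration, the membership test above can be wrapped in a small helper. The registry object here is faked so the snippet is self-contained; on Odoo's real registry, `_init_modules` is an internal attribute (an assumption carried over from the link above, not a stable API):

```python
class FakeRegistry:
    """Stand-in for openerp.modules.registry.Registry: in Odoo 8,
    `_init_modules` holds the names of the modules loaded at init."""
    def __init__(self, modules):
        self._init_modules = set(modules)

def is_module_loaded(registry, module_name):
    """Return True if `module_name` was loaded into the registry,
    mimicking the `module_name in registry._init_modules` check."""
    return module_name in registry._init_modules

registry = FakeRegistry(['base', 'connector'])
is_module_loaded(registry, 'connector')          # True
is_module_loaded(registry, 'magentoerpconnect')  # False
```

Relying on an underscore-prefixed attribute is of course fragile across Odoo versions, which is presumably why a public helper like is_module_installed exists.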
Hi everyone,
I'm working with the Prestashop connector and I see a strange phenomenon. From time to time, some jobs get stuck in the queue and block everything (the server is not restarted; I'm already aware of the jobrunner caveat). It looks like some long-running jobs are locking the channel.
I'm trying to adapt the cron so that the jobs don't use the resources simultaneously.
Any chance of a tip for preventing this?
regards
This is a document explaining how the channels in the jobs queue would be implemented. It is a draft and will be modified. At the time of writing, the what and the how of the implementation have not been decided yet, so read it with caution.
In the jobs queue, all jobs are pushed into a single queue. Then one or many processes take them from the queue and run them one after the other.
This has serious limitations in real use cases:
This is where the channels come in handy.
All jobs that have no channel specified go into a default channel.
A channel can be specified using the keyword argument channel of the job decorator:
@job(channel='channel')
def import_record(session, model_name, backend_id, external_id):
pass
Sub-channels allow more fine-grain control on the execution of the jobs by the workers.
@job(channel='channel.subchannel.subsubchannel')
def import_record(session, model_name, backend_id, external_id):
pass
If nothing is specified when the workers are started, the workers watch all the channels.
The usual command starts 2 workers that will execute all the jobs, whatever their channel. So it works in a backward-compatible manner:
$ PYTHONPATH=/path/to/server connector/openerp-connector-worker --config
configfile --workers=2
The same thing could be done with:
$ PYTHONPATH=/path/to/server connector/openerp-connector-worker --config
configfile --workers=all:2
all is a reserved channel which shouldn't be used in the job decorator.
More control is possible using the --workers option.
1 worker on default, 1 on system_a, 2 on system_b:
--workers=default:1,system_a:1,system_b:2
1 worker on default, 1 on system_a.import, 1 on system_a.export, 2 on system_b:
--workers=default:1,system_a.import:1,system_a.export:1,system_b:2
Several channels can be watched by a worker:
So we want 2 workers sharing all the imports of systems A and B, and 1 worker for the exports of both systems:
--workers=system_a.import:system_b.import:2,system_a.export:system_b.export:1
Finally, we can exclude a subchannel with ~, for instance a heavy task that we want to handle separately:
So we want 2 workers for system A but another worker for one specific job of this system:
--workers=system_a:~system_a.export.heavyjob:2,system_a.export.heavyjob:1
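The --workers syntax sketched above could be parsed into groups of (channels, worker count); a minimal sketch of such a parser (this is my own illustration of the proposed syntax, not the implementation):

```python
def parse_workers_option(spec):
    """Parse a --workers value like
    'system_a.import:system_b.import:2,system_a.export:1'
    into a list of (channels, count) tuples. Channels prefixed with
    '~' are exclusions and are kept with their prefix.
    """
    groups = []
    for group in spec.split(','):
        parts = group.split(':')
        # the last segment is the worker count, the rest are channels
        channels, count = parts[:-1], int(parts[-1])
        groups.append((channels, count))
    return groups

parse_workers_option('default:1,system_a:~system_a.export.heavyjob:2')
# [(['default'], 1), (['system_a', '~system_a.export.heavyjob'], 2)]
```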
Hello all,
I have a production database connected with Magento. The connector is installed and configured successfully, following the documentation shared by the OCA: http://odoo-magento-connector.com/guides/installation_guide.html#installation.
Products, partners, and import/export all work fine. But when I restart the server with ./openerp-server --addons-path=addons,magento_addons/connector,magento_addons/connector-ecommerce/,magento_addons/connector-magento/,magento_addons/e-commerce/,magento_addons/product-attribute/,magento_addons/sale-workflow/ -u all -d my_database
the traceback below is raised.
All my products behave like this: first they are deleted, and then the traceback is raised.
2015-11-18 11:37:29,906 31356 INFO my_database openerp.addons.base.ir.ir_model: Deleting [email protected] (product.product_835)
2015-11-18 11:37:32,639 31356 INFO my_database openerp.addons.base.ir.ir_model: Deleting [email protected] (product.product_833)
2015-11-18 11:37:32,712 31356 INFO my_database openerp.addons.base.ir.ir_model: Deleting [email protected] (product.product_832)
2015-11-18 11:37:32,836 31356 CRITICAL my_database openerp.service.server: Failed to initialize database my_database
Traceback (most recent call last):
File "/home/feed/odoo8/openerp/service/server.py", line 924, in preload_registries
registry = RegistryManager.new(dbname, update_module=update_module)
File "/home/feed/odoo8/openerp/modules/registry.py", line 370, in new
openerp.modules.load_modules(registry._db, force_demo, status, update_module)
File "/home/feed/odoo8/openerp/modules/loading.py", line 382, in load_modules
registry['ir.model.data']._process_end(cr, SUPERUSER_ID, processed_modules)
File "/home/feed/odoo8/openerp/api.py", line 241, in wrapper
return old_api(self, *args, **kwargs)
File "/home/feed/odoo8/openerp/addons/base/ir/ir_model.py", line 1254, in _process_end
self.pool[model].unlink(cr, uid, [res_id], context=context)
File "/home/feed/odoo8/openerp/api.py", line 241, in wrapper
return old_api(self, *args, **kwargs)
File "/home/feed/odoo8/addons/mail/mail_thread.py", line 449, in unlink
res = super(mail_thread, self).unlink(cr, uid, ids, context=context)
File "/home/feed/odoo8/openerp/api.py", line 241, in wrapper
return old_api(self, *args, **kwargs)
File "/home/feed/odoo8/openerp/api.py", line 363, in old_api
result = method(recs, *args, **kwargs)
File "/home/feed/odoo8/magento_addons/connector/connector/producer.py", line 86, in unlink
return unlink_original(self)
File "/home/feed/odoo8/openerp/api.py", line 239, in wrapper
return new_api(self, *args, **kwargs)
File "/home/feed/odoo8/openerp/api.py", line 546, in new_api
result = method(self._model, cr, uid, self.ids, *args, **kwargs)
File "/home/feed/odoo8/openerp/models.py", line 3617, in unlink
'where id IN %s', (sub_ids,))
File "/home/feed/odoo8/openerp/sql_db.py", line 158, in wrapper
return f(self, *args, **kwargs)
File "/home/feed/odoo8/openerp/sql_db.py", line 234, in execute
res = self._obj.execute(query, params)
IntegrityError: update or delete on table "product_product" violates foreign key constraint "account_invoice_line_product_id_fkey" on table "account_invoice_line"
DETAIL: Key (id)=(1577) is still referenced from table "account_invoice_line".
2015-11-18 11:37:32,853 31356 INFO my_database openerp.modules.loading: loading 1 modules...
I found the code: the unlink method is fired in connector/connector/producer.py.
Please give me a solution. Is this a configuration issue or something else?
Hi,
I'm trying to run the connector in the multiprocessing mode following this doc : http://odoo-connector.com/guides/multiprocessing.html
I run on the 3.0 tag
I launch this command from the connector repo folder :
PYTHONPATH=/home/florent/DEV/Serveurs/odoo/odoo/openerp connector/openerp-connector-worker --config=/path/to/my/conf/flotho.conf --logfile=/home/florent/myTmp/connector.log --workers=2
Here is the log of the command; of course, it failed:
Traceback (most recent call last):
File "connector/openerp-connector-worker", line 12, in <module>
import openerp
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/__init__.py", line 98, in <module>
import cli
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/cli/__init__.py", line 38, in <module>
import deploy
File "/home/florent/DEV/Serveurs/odoo/odoo/openerp/cli/deploy.py", line 5, in <module>
import requests
File "/usr/lib/python2.7/site-packages/requests/__init__.py", line 52, in <module>
from . import utils
File "/usr/lib/python2.7/site-packages/requests/utils.py", line 22, in <module>
from .compat import parse_http_list as _parse_list_header
File "/usr/lib/python2.7/site-packages/requests/compat.py", line 95, in <module>
from .packages import chardet
File "/usr/lib/python2.7/site-packages/requests/packages/__init__.py", line 3, in <module>
from . import urllib3
File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/__init__.py", line 16, in <module>
from .connectionpool import (
File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 20, in <module>
from queue import LifoQueue, Empty, Full
File "/home/florent/DEV/Lib_Odoo/gitMag-OCA-Connector/connector/queue/__init__.py", line 22, in <module>
from . import model
File "/home/florent/DEV/Lib_Odoo/gitMag-OCA-Connector/connector/queue/model.py", line 30, in <module>
from .job import STATES, DONE, PENDING, OpenERPJobStorage
File "/home/florent/DEV/Lib_Odoo/gitMag-OCA-Connector/connector/queue/job.py", line 34, in <module>
from ..exception import (NotReadableJobError,
ValueError: Attempted relative import beyond toplevel package
I'm certainly missing something subtle here, but I don't see what.
Could it be a bug?
I'm working on a new connector module to integrate OpenERP with Drupal Commerce. I will try to publish it under the OCA later, but for now I wonder if there is a tool we could leverage for the following use case:
OpenERP holds the master data for product categories and products, but not all the products will be available on the Drupal store, so we need a way to filter which products are uploaded to the Drupal website.
At the moment I'm hardcoding the domain in the export functions, but that doesn't seem like a very elegant or extensible approach. Is there any other way available in the connector?
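One common pattern (my assumption, not a documented connector API) is to store the export domain on the backend record and evaluate it at export time instead of hardcoding it. A self-contained illustration, with a deliberately tiny domain matcher standing in for the ORM's search:

```python
def matches_domain(record, domain):
    """Evaluate a (very) reduced Odoo-style domain - a list of
    ('field', operator, value) triples, implicitly AND-ed - against
    a record represented as a dict."""
    ops = {
        '=': lambda a, b: a == b,
        '!=': lambda a, b: a != b,
        'in': lambda a, b: a in b,
    }
    return all(ops[op](record.get(field), value)
               for field, op, value in domain)

def select_exportable(records, export_domain):
    """Return the records matching the backend's configurable domain."""
    return [r for r in records if matches_domain(r, export_domain)]

products = [
    {'name': 'Chair', 'website_published': True},
    {'name': 'Internal part', 'website_published': False},
]
# In practice, the domain would be a field on the backend record
# rather than being hardcoded in the export function.
select_exportable(products, [('website_published', '=', True)])
```

With the domain stored on the backend, each deployment can decide which products to push without patching the export code.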
We have seen that when a large number of jobs has built up, the autovacuum job stops running correctly and the backlog of jobs to delete keeps increasing. By adding an optional limit parameter and increasing the frequency of the autovacuum job, we were able to get this backlog cleared eventually.
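For illustration, a batched delete like the one described can be expressed with a subquery, since PostgreSQL's DELETE has no native LIMIT. The table and column names below follow the connector's queue_job model, but the helper itself is a sketch of the idea, not the actual patch:

```python
def build_autovacuum_query(limit=None):
    """Build a DELETE statement for old done jobs; with a limit,
    delete in batches via a subquery (DELETE has no LIMIT clause
    in PostgreSQL)."""
    base = ("DELETE FROM queue_job "
            "WHERE state = 'done' AND date_done <= %s")
    if limit is None:
        return base
    return ("DELETE FROM queue_job WHERE id IN ("
            "SELECT id FROM queue_job "
            "WHERE state = 'done' AND date_done <= %s "
            "ORDER BY date_done LIMIT {})".format(int(limit)))

build_autovacuum_query(limit=1000)
```

Running the limited form from a more frequent cron keeps each transaction short, which is what lets the backlog drain instead of one huge delete timing out.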
Hello,
as discussed in PR #103, I encountered a bug with the job decorator.
The retry_pattern does not work, because it is not passed on in the functools.partial.
Steps to reproduce
@job(retry_pattern={1: 5})
def dummy_task(session):
raise RetryableJobError
Current Behaviour
The job is postponed with the default RETRY_INTERVAL
Expected Behaviour
The job should be postponed with the interval specified in the retry_pattern.
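For context, the retry_pattern maps a retry count to a postponement interval. The lookup that should apply when the pattern is honoured could look like this sketch (the function and the default value are my illustration of the expected behaviour, not the connector's code):

```python
RETRY_INTERVAL = 600  # assumed default postponement, in seconds

def postpone_interval(retry_count, retry_pattern=None):
    """Return the postponement interval for the current retry:
    pick the value for the largest pattern key that is <= the
    retry count, falling back to the default interval."""
    if not retry_pattern:
        return RETRY_INTERVAL
    applicable = [count for count in retry_pattern
                  if count <= retry_count]
    if not applicable:
        return RETRY_INTERVAL
    return retry_pattern[max(applicable)]

postpone_interval(1, {1: 5})  # 5, what the reporter expects
postpone_interval(1)          # 600, the observed buggy default
```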
The docs do not build due to this bug: sphinx-doc/sphinx#2142
Waiting for a new release before new documentation builds.
Hi,
Do you have any example of redmine/openERP connector with your framework?
Thanks
Looking at the .travis.yml,
I notice that the Odoo tests run twice.
Looking at the Travis build log confirms that idea.
Could the first test call be redundant?
The openerp-connector-worker script doesn't filter out databases when started with the --db-filter parameter.
It seems that on module update, permission groups are not correctly updated, as they are created dynamically (at installation). This breaks the User view. Is there any solution for this in Odoo SA modules?