
connector-interfaces's People

Contributors

damdam-s, guewen, gurneyalex, kittiu, mmequignon, mpanarin, oca-git-bot, oca-travis, sebalix, simahawk


connector-interfaces's Issues

Import ASYNC does not keep the user language in the context

Tested with v10. When the jobs are created, the user's language is not kept in the context used for the subsequent record creation.
This is problematic when importing translatable data, since it is the user's language stored in the context that allows the translation process to be triggered.
I fixed this locally by also passing the context into the various "delay" calls, but I don't know whether that is the best approach.
Could you please tell me if I am missing something (maybe in queue_job itself)?
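For what it's worth, here is a minimal sketch of the kind of workaround I mean, assuming the v10 queue_job with_delay API (the job method name is made up):

    # Forward the user's language explicitly so the delayed job runs with the
    # same lang in its context as the user who triggered the import.
    records = self.with_context(lang=self.env.user.lang)
    records.with_delay().import_one_chunk(attachment_id)  # hypothetical job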

Many thanks in advance

-jne

context not respected on components

The importer and the record handler load specific contexts for create and write, but these are attached only to the current record, not to the ctx of the component's env.
This means that post/pre_create/write won't have the same ctx.
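Purely as an illustration of the idea (record_specific_ctx and the call site are assumptions, not the module's actual API):

    # Merge the record-specific create/write context into the environment the
    # component works with, so pre/post create/write hooks see the same ctx.
    ctx = dict(self.env.context, **record_specific_ctx)
    model = self.model.with_context(**ctx)
    record = model.create(values)  # hooks triggered here now share the ctx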

Requires OCA/connector#465

Chinese environment error

The error message is: "delimiter" must be string, not unicode (raised by Python 2's csv module).

In _create_csv_attachment:

    writer = csv.writer(f,
                        delimiter=options.get(OPT_SEPARATOR),
                        quotechar=options.get(OPT_QUOTING))

Changing it as below solves the problem, since on Python 2 the csv module requires the delimiter and quotechar to be byte strings (str), not unicode:

    writer = csv.writer(f,
                        delimiter=str(options.get(OPT_SEPARATOR)),
                        quotechar=str(options.get(OPT_QUOTING)))

v8 base_import_async: Concurrent import of models with parent-columns leads to broken data

I observed the following problem in a v8 installation with base_import_async.

When I uploaded a CSV file for the model stock.quant.package with a few thousand packages (see stock.quant.package.csv.txt) and checked the "Import in the background" box, the parent_left/parent_right data was garbage. In a normal situation the following SQL statement should return 0.

select count(*) 
from stock_quant_package p1, 
stock_quant_package p2
where p1.id < p2.id 
and p1.parent_left = p2.parent_left
and p1.parent_right = p2.parent_right

However, after the import I had a few thousand distinct packages with the same parent_left/parent_right information, which clearly is wrong.

I don't think that this is a problem of the module itself, I suppose this is a bug in Odoo that is exposed by concurrency. I am not sure what the best way is to deal with this. The first idea that I had was to call _parent_store_compute at the end of every import chunk, but this might lead to concurrency issues (DB serialization failures) among the jobs which are tedious to resolve.

I think the best solution would be to show an error message when the asynchronous import is used for a model with 'parent' columns, since this causes problems regardless of whether the data to be imported contains parent information: if it does, there will probably be database serialization failures; if it does not, the parent_left/parent_right data structure will be garbage.
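A hypothetical sketch of such a guard, using the v8 old API (the hook point and the message are assumptions; _parent_store is the attribute Odoo sets on models using parent_left/parent_right):

    from openerp.exceptions import Warning as UserError
    from openerp.tools.translate import _

    # Refuse background import for models whose hierarchy is maintained via
    # parent_left/parent_right, since concurrent chunks corrupt that structure.
    model = self.pool[res_model]
    if getattr(model, '_parent_store', False):
        raise UserError(_(
            "Background import is not supported for models with parent "
            "columns (%s); please import synchronously.") % res_model)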

connector flow module with workflow implementation

module: connector_flow
version: 8.0

Steps to reproduce

Hi All,

We are using the connector_flow module to import sales order data, but it seems we cannot validate the sales orders: the read_chunk method is called on a per-SO-line basis, whereas what we want is to validate the SOs and create invoices for them once the complete CSV has been imported.

Is it possible using the connector_flow module? If yes, how?

Please help. Thanks in Advance.

Mustufa Rangwala


Current behavior

Expected behavior

Validation using workflow.
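As a sketch only of the desired post-import step, using the v8 API (the hook point and imported_order_ids are assumptions):

    # Once every chunk of the CSV has been imported, confirm the new sale
    # orders through the workflow and create their invoices.
    orders = self.env['sale.order'].browse(imported_order_ids)
    for order in orders:
        order.signal_workflow('order_confirm')  # validate the SO
        order.action_invoice_create()           # create the invoice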

As a user I can't create a report

Hi @simahawk, pinging you directly because, as far as I can see, you are the maintainer of this module: connector_importer_product.

I'm testing on runboat but there is something weird; please check this video: Link

V9 is leaving the jobs in pending state

Hi,
I tried importing two different files, one with 96 records and a second with 1200 records, but both jobs have been in the pending state for 3-4 hours. The records are validated. Can you help?

base_import_async.py: put job_uuid into import instance for callers to find the job

It would be convenient if the base_import_async instance stored the job UUID so that callers can access the job after the do() method finishes. My use case: I'd like to attach additional messages, collected while preparing the job, to the job itself.

--- base_import_async.py.orig   2016-07-15 15:48:27.000000000 +0200
+++ base_import_async.py    2016-07-15 15:49:06.000000000 +0200
@@ -26,7 +26,8 @@
 import os
 from cStringIO import StringIO

 from openerp.models import TransientModel
+from openerp.osv import fields
 from openerp.models import fix_import_export_id_paths
 from openerp.tools.translate import _

@@ -189,6 +190,10 @@
 class BaseImportConnector(TransientModel):
     _inherit = 'base_import.import'

+    _columns = {
+        'job_uuid': fields.char(string="Job UUID"),
+    }
+
     def do(self, cr, uid, res_id, fields, options, dryrun=False, context=None):
         if dryrun or not options.get(OPT_USE_CONNECTOR):
             # normal import
@@ -234,5 +239,8 @@
                                     file_name=record.file_name,
                                     description=description)
         _link_attachment_to_job(session, job_uuid, att_id)
+        self.write(cr, uid, [res_id], {
+                'job_uuid': job_uuid,
+            }, context=context)

         return []
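For context, here is roughly how a caller could use this once the patch is applied (old v8 API; variable names are illustrative):

    # After do() returns, read the job UUID stored on the wizard record and
    # look up the corresponding queue job to post extra messages on it.
    imp = self.pool['base_import.import']
    imp.do(cr, uid, import_id, import_fields, options, context=context)
    job_uuid = imp.browse(cr, uid, import_id, context=context).job_uuid
    job_ids = self.pool['queue.job'].search(
        cr, uid, [('uuid', '=', job_uuid)], context=context)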

Modules to handle files, decisions on their inclusion

Several modules with the same purpose have been proposed for inclusion in the OCA.

The goal of these modules is to provide facilities to handle files, their transfer (ftp, email, ...) and their import / export.

Only one of them should be selected; the remaining features of the others could then be proposed as additions to the selected module.

All of them work with "chunks" of documents: they split the documents into chunks that can be imported using the connector's job queue.
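As a rough illustration of this shared pattern (the chunk size, task model and job function are made up; the actual API differs between the three modules):

    # Split the parsed rows of a document into fixed-size chunks and enqueue
    # one connector job per chunk (old connector 8.0 job API).
    CHUNK_SIZE = 100

    def split_into_chunks(rows, size=CHUNK_SIZE):
        for start in range(0, len(rows), size):
            yield rows[start:start + size]

    for chunk in split_into_chunks(rows):
        import_one_chunk.delay(session, 'impexp.task', chunk)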

Their main features are:

  • Connector Flow
    • Operations are split into tasks (one task could be "import from ftp" and another "import from csv"); tasks can be chained with one or many destinations (a graph), so the "flow" is flexible
  • File Exchange
    • Integration of tasks with crons
    • More APIs implemented (email, sftp, ...) at the moment
  • Connector File
    • Robust handling of documents with hashes

Please, let's all be honest and recognize the best-of-breed features in each of these modules.

In my opinion, the most important thing is how the foundation of the module is implemented, and from that angle connector_flow seems to have the best one with its task flow (but I surely miss some points).

I am probably missing a lot of things, so may I ask the different authors to complete the information (@tremlin, @OSguard, @sebastienbeau, @lepistone, @nbessi)?
