Comments (11)
I'm not sure that can be done in a reasonable way, because pylint has no option to ignore comments with local disable commands. We would have to disable a lot of warnings globally in the automatically checked configuration, which goes against the goal of this proposal. Even with too-many-branches, you should review your code before pushing it to master and think really hard about whether there is a more elegant way to do it. If there is not, then you should write the comment that locally disables the warning, meaning "yes, I really thought this through and I want 50 branches in this function". And you have to do that right away, because I can guarantee that you will not come back to it later and fix something just because it lowers the pylint score. And even if you did, it would be much more unnecessary work.
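For illustration, a local disable with a justification comment might look like this (the function here is hypothetical and trivially short; imagine it with the 50 branches the warning is actually about):

```python
def dispatch(code):
    """Map a status code to a label."""
    # pylint: disable=too-many-branches
    # Reviewed: the flat branching mirrors the external protocol and is
    # clearer than any table-driven alternative we considered.
    if code == 1:
        return "one"
    if code == 2:
        return "two"
    return "other"
```

The comment right next to the disable records the reasoning, so a later reader knows the warning was considered, not silenced blindly.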
You will still see a pylint info statement about the local ignore in the pylint output, so you can easily judge which file is the worst (I'm thinking train.py at the moment) even without a numerical score.
Python is a terrible language and any way to consistently limit its dreadful anarchy must be used to the fullest extent!
from neuralmonkey.
I don't see much point in trying to get the 10/10 pylint score if it means prepending each TODO comment with pylint: disable=fixme (as in vocabulary.py). It only clutters the code.
We might consider disabling fixme in .pylintrc. TODO comments have nothing to do with code quality anyway, so I don't see why pylint should care.
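Disabling it would be a one-line change. A sketch of the relevant .pylintrc fragment (section and option names follow pylint's documented config format):

```ini
[MESSAGES CONTROL]
# fixme flags TODO/FIXME comments; we track those elsewhere.
disable=fixme
```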
What I am saying is that there is no evil in not having the full score if it can point you to unresolved TODOs or other issues for that matter. I see much more evil in pylint comments scattered all over the place.
That would be fine if everyone were responsible and ran pylint on their code before committing. Which, as you can clearly see from the state of the codebase, is not the case.
The point is to get to a state where every file has been checked, and then to test automatically for regressions. That means we will not rely on individual responsibility (which inevitably fails), but on automatic testing. If you introduce new errors into a file that has already been checked and push those changes, Travis will run pylint, the build will fail, and you will have to correct the errors.
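A minimal sketch of what such a gate might look like in .travis.yml (the package path neuralmonkey is assumed; the repo's actual Travis configuration may differ):

```yaml
language: python
script:
  # pylint exits non-zero on any reported message, which fails the build
  - pylint neuralmonkey
```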
A 10/10 score is necessary to enable automatic checking. Besides, I think that # pylint: disable=... comments are a useful form of documentation. Such a comment means that you should either refactor that part of the code, or something really special is happening and you should pay attention to it (e.g. in some places, catching a general exception might be the right thing to do, but usually it is not, so a comment explicitly stating that this is in fact what we want is a useful one).
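For example, a justified broad catch where the disable comment doubles as documentation (a hypothetical snippet, not from the codebase):

```python
import logging

def safe_call(func):
    """Run func, logging instead of propagating any error."""
    try:
        func()
        return True
    # pylint: disable=broad-except
    # Intentional: this wrapper must survive any error raised by func,
    # e.g. so that one failing step cannot kill a long-running loop.
    except Exception as exc:
        logging.error("call failed: %s", exc)
        return False
```

Without the disable comment, a reviewer could not tell whether the broad catch was deliberate or sloppy; with it, the intent is explicit.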
@jlibovicky, what do you think?
I would consider having two pylint configurations. If pylint were intended to be used only as a pass/fail test, it would not have the scoring. There is certainly stuff that needs to be checked on every commit to the repo (missing imports, using variables before assignment). On the other hand, there are other useful metrics that tell us about the quality of the code (too many branches, too many local variables) which also influence the score, and we would lose this useful feedback by locally disabling them. It is the same with the TODOs: the number of unresolved TODOs tells you something about code quality, so it makes sense that it influences the score, but at the same time we don't need to check it on every commit.
Btw. pylint seems to raise no-member errors way too often, even when the members clearly exist. Should we disable it globally? It might still be useful, but writing a local comment every time pylint decides it doesn't see something might be annoying.
I would disable the checks on 3rd-party modules as described here: http://stackoverflow.com/questions/20553551/how-do-i-get-pylint-to-recognize-numpy-members
Or, I would NOT use this pylint piece of junk at all.
I fixed the .pylintrc. If you encounter this error in the future, just add the offending module to the list of ignored modules.
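For reference, the ignored-modules option (documented in pylint) lives in the typecheck section; numpy and tensorflow are assumed examples, not necessarily what the repo's .pylintrc lists:

```ini
[TYPECHECK]
# pylint cannot introspect C-extension modules, so skip no-member
# checks for members of these modules
ignored-modules=numpy,tensorflow
```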
I'm going to disable fixme in .pylintrc, because I am now convinced that the check should not be in pylint at all. Code with TODOs is actually higher quality than the same code without them: giving code that documents its own shortcomings a lower score makes no sense.
This is far from done yet, why did you close it?