sureal's Introduction

SUREAL - Subjective Recovery Analysis

SUREAL is a toolbox developed by Netflix that includes a number of models for the recovery of mean opinion scores (MOS) from noisy measurements obtained in psychovisual subjective experiments. Read this paper and this latest paper for some background.

SUREAL also includes models to recover MOS from paired comparison (PC) subjective data, such as Thurstone (Case V) and Bradley-Terry.

Installation

SUREAL can be installed either through pip (it is available on PyPI) or locally.

Installation through pip

To install SUREAL via pip, run:

pip install sureal

Local installation

To install locally, first download the source. Under the root directory (preferably in a virtualenv), install the requirements:

pip install -r requirements.txt

Under Ubuntu, you may also need to install the python-tk (Python 2) or python3-tk (Python 3) packages via apt.

To test the source code before installing, run:

python -m unittest discover --start test --pattern '*_test.py' --verbose --buffer

Lastly, install SUREAL by:

pip install .

If you want to edit the source, use pip install --editable . or pip install -e . instead. With --editable, changes made to the source are picked up immediately, without re-running pip install .

Usage in command line

Run:

sureal --help

This will print usage information:

usage: sureal [-h] --dataset DATASET --models MODELS [MODELS ...] [--output-dir OUTPUT_DIR]
[--plot-raw-data] [--plot-dis-videos] [--plot-observers]

optional arguments:
  -h, --help            show this help message and exit
  --dataset DATASET     Path to the dataset file.
  --models MODELS [MODELS ...]
                        Subjective models to use (can specify more than one),
                        choosing from: MOS, P910, P913, BT500.
  --output-dir OUTPUT_DIR
                        Path to the output directory (will be force-created if it does not exist).
                        If not specified, plots will be displayed and output will be printed.
  --plot-raw-data       Plot the raw data. This includes the raw opinion scores presented
                        in a video-subject matrix, counts per video and counts per subject.
  --plot-dis-videos     Plot the subjective scores of the distorted videos.
  --plot-observers      Plot the scores of the observers.

Below are two example usages:

sureal --dataset resource/dataset/NFLX_dataset_public_raw_last4outliers.py --models MOS P910 \
    --plot-raw-data --plot-dis-videos --plot-observers --output-dir ./output/NFLX_dataset_public_raw_last4outliers
sureal --dataset resource/dataset/VQEGHD3_dataset_raw.py --models MOS P910 \
    --plot-raw-data --plot-dis-videos --plot-observers --output-dir ./output/VQEGHD3_dataset_raw

Here --models specifies the subjective models to run, chosen from those offered in the package: MOS, P910, P913 and BT500 (as listed in the help text above).

The sureal command can also invoke subjective models for paired comparison (PC) subjective data. Below is one example:

sureal --dataset resource/dataset/lukas_pc_dataset.py --models THURSTONE_MLE BT_MLE \
--plot-raw-data --plot-dis-videos --output-dir ./output/lukas_pc_dataset

Here --models selects from the PC subjective models offered in the package: THURSTONE_MLE (Thurstone Case V) and BT_MLE (Bradley-Terry).

Both models leverage MLE-based solvers. For the mathematics behind the implementation, refer to this document.
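For intuition, Thurstone's Case V maps an empirical preference probability to a quality-score difference through the inverse normal CDF. Below is a minimal sketch of that principle in plain Python (illustrative only, with made-up vote counts; SUREAL's actual solvers are MLE-based, per the linked document):

```python
from statistics import NormalDist

# Thurstone Case V principle (sketch): if stimulus A is preferred over
# stimulus B in a fraction p of paired comparisons, the estimated
# quality difference is the inverse normal CDF of p (in z-score units).
wins_a_over_b, total_comparisons = 14, 20   # hypothetical vote counts
p = wins_a_over_b / total_comparisons
delta = NormalDist().inv_cdf(p)             # quality(A) - quality(B)
print(round(delta, 3))                      # 0.524
```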

Dataset files

--dataset is the path to a dataset file. Dataset files may be .py or .json files. The following examples use .py files, but JSON-formatted files can be constructed in a similar fashion.

There are two ways to construct a dataset file. The first way is only useful when the subjective test is full sampling, i.e. every subject views every distorted video. For example:

ref_videos = [
    {
      'content_id': 0, 'content_name': 'checkerboard',
      'path': 'checkerboard_1920_1080_10_3_0_0.yuv'
    },
    {
      'content_id': 1, 'content_name': 'flat',
      'path': 'flat_1920_1080_0.yuv'
    },
]
dis_videos = [
    {
      'content_id': 0, 'asset_id': 0,
      'os': [100, 100, 100, 100, 100],
      'path': 'checkerboard_1920_1080_10_3_0_0.yuv'
    },
    {
      'content_id': 0, 'asset_id': 1,
      'os': [40, 45, 50, 55, 60],
      'path': 'checkerboard_1920_1080_10_3_1_0.yuv'
    },
    {
      'content_id': 1, 'asset_id': 2,
      'os': [90, 90, 90, 90, 90],
      'path': 'flat_1920_1080_0.yuv'
    },
    {
      'content_id': 1, 'asset_id': 3,
      'os': [70, 75, 80, 85, 90],
      'path': 'flat_1920_1080_10.yuv'
    },
]
ref_score = 100

In this example, ref_videos is a list of reference videos. Each entry is a dictionary, and must have keys content_id, content_name and path (the path to the reference video file). dis_videos is a list of distorted videos. Each entry is a dictionary, and must have keys content_id (the same content ID as the distorted video's corresponding reference video), asset_id, os (which stands for "opinion score"), and path (the path to the distorted video file). The value of os is a list of scores, each voted by a subject, and must have the same length for all distorted videos (since the test is full sampling). ref_score is the score assigned to a reference video, and is required when a differential score is calculated, for example, for DMOS.
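For a full-sampling dataset like the one above, the plain MOS of each distorted video is simply the mean of its os list. The following is a minimal illustration in plain Python (not SUREAL's own API; the literals mirror the example dataset file):

```python
from statistics import mean

# Opinion scores from the example dataset above (full sampling:
# every subject scores every distorted video).
dis_videos = [
    {'content_id': 0, 'asset_id': 0, 'os': [100, 100, 100, 100, 100]},
    {'content_id': 0, 'asset_id': 1, 'os': [40, 45, 50, 55, 60]},
    {'content_id': 1, 'asset_id': 2, 'os': [90, 90, 90, 90, 90]},
    {'content_id': 1, 'asset_id': 3, 'os': [70, 75, 80, 85, 90]},
]

# MOS is the per-video mean opinion score.
mos = {v['asset_id']: mean(v['os']) for v in dis_videos}
print(mos)  # {0: 100, 1: 50, 2: 90, 3: 80}
```

The models offered by SUREAL go beyond this simple average by also estimating subject bias and inconsistency.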

The second way is more general, and can be used whether the test is full sampling or partial sampling (i.e. not every subject views every distorted video). The only difference from the first way is that the value of os is now a dictionary, with each key being a subject ID and the value being that subject's voted score for the particular distorted video. For example:

'os': {'Alice': 40, 'Bob': 45, 'Charlie': 50, 'David': 55, 'Elvis': 60}

Since partial sampling is allowed, it is not required that every subject ID is present in every os dictionary.

In case a subject has voted on a distorted video twice or more (repetitions), the votes can be logged as a list in lieu of a single vote. For example:

'os': {'Alice': 40, 'Bob': [45, 45], 'Charlie': [50, 60], 'David': 55, 'Elvis': 60}
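Scores in this dictionary form, with optional per-subject repetition lists, can be flattened back into (subject, score) pairs, e.g. to compute a per-video mean. A small illustrative helper (not part of SUREAL's API):

```python
from statistics import mean

def flatten_os(os_dict):
    """Yield (subject, score) pairs, expanding repetition lists."""
    for subject, value in os_dict.items():
        scores = value if isinstance(value, list) else [value]
        for score in scores:
            yield subject, score

os = {'Alice': 40, 'Bob': [45, 45], 'Charlie': [50, 60], 'David': 55, 'Elvis': 60}
pairs = list(flatten_os(os))
print(len(pairs))  # 7 votes in total: Bob and Charlie each voted twice
print(mean(score for _, score in pairs))
```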

In case of a PC dataset, a distorted video is compared against another distorted video, and a vote is recorded. In this case, the key is a tuple of the subject name and the asset_id of the distorted video compared against. For example:

'os': {('Alice', 1): 40, ('Bob', 3): 45}

where 1 and 3 are the asset_id of the distorted videos compared against. For an example PC dataset, refer to lukas_pc_dataset.py.

Note that the PC models do not yet support repetitions.
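As an illustration of how such PC os entries can be aggregated before model fitting, the tuple keys can be turned into a pairwise preference-count matrix. This is a hypothetical sketch (not SUREAL's internal representation), assuming the binary convention of lukas_pc_dataset.py where a vote of 1 means the video owning the os entry was preferred over the referenced asset_id:

```python
from collections import defaultdict

# Hypothetical PC dataset fragment: each distorted video's 'os' maps
# (subject, compared_asset_id) -> vote (1 = this video preferred).
dis_videos = [
    {'asset_id': 0, 'os': {('Alice', 1): 1, ('Bob', 1): 1}},
    {'asset_id': 1, 'os': {('Alice', 0): 1}},
]

# counts[(a, b)] = number of times asset a was preferred over asset b.
counts = defaultdict(int)
for video in dis_videos:
    for (subject, other_id), vote in video['os'].items():
        if vote == 1:
            counts[(video['asset_id'], other_id)] += 1

print(dict(counts))  # {(0, 1): 2, (1, 0): 1}
```

Such a count matrix is the typical input to Thurstone and Bradley-Terry style solvers.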

Deprecated command line

The deprecated version of the command line can still be invoked by:

PYTHONPATH=. python ./sureal/cmd_deprecated.py

Usage in Python code

See here for an example script using SUREAL in a Google Colab notebook.

For developers

SUREAL uses tox to manage automatic testing and continuous integration with Travis CI on GitHub, and setupmeta for version releases, packaging and publishing. Refer to DEVELOPER.md for more details.

sureal's People

Contributors

aspyker, christosbampis, krasuluk, li-zhi, perrin-af, pi-rate14, qub3k, sghill, silviobe, slhck


sureal's Issues

Python2.7, --print / --output-dir causes "float64 not serializable" error.

Using Python 2.7 in a clean virtual environment, the --print and/or --output-dir arguments cause a serialization error.

Environment setup:

git clone sureal
virtualenv srEnv
source srEnv/bin/activate
cd sureal
cp sureal/test/resource/NFLX_dataset_public_raw_PARTIAL.py ../NFLX_dataset_public_PARTIAL.py
pip install -r requirements.txt
pip install .

python -m sureal MOS ../NFLX_dataset_public_raw_PARTIAL.py --print

produces:

[crop]$ python -m sureal MOS ../NFLX_dataset_public_raw_PARTIAL.py --print
Run model MosModel on dataset NFLX_dataset_public_raw_PARTIAL.py
Dataset: ../NFLX_dataset_public_raw_PARTIAL.py
Subjective Model: MOS 1.0
Result:
Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/elijs/QoEAT_Sureal/sureal/sureal/__main__.py", line 77, in <module>
    ret = main()
  File "/home/elijs/QoEAT_Sureal/sureal/sureal/__main__.py", line 65, in main
    json.dumps(results[0], indent=4, sort_keys=True)
  File "/usr/lib/python2.7/json/__init__.py", line 251, in dumps
    sort_keys=sort_keys, **kw).encode(obj)
  File "/usr/lib/python2.7/json/encoder.py", line 209, in encode
    chunks = list(chunks)
  File "/usr/lib/python2.7/json/encoder.py", line 434, in _iterencode
    for chunk in _iterencode_dict(o, _current_indent_level):
  File "/usr/lib/python2.7/json/encoder.py", line 408, in _iterencode_dict
    for chunk in chunks:
  File "/usr/lib/python2.7/json/encoder.py", line 442, in _iterencode
    o = _default(o)
  File "/usr/lib/python2.7/json/encoder.py", line 184, in default
    raise TypeError(repr(o) + " is not JSON serializable")
TypeError: 0     4.884615
1     4.692308
2     4.730769
3     4.884615
4     4.461538
5     4.730769
6     4.730769
7     2.115385
8     1.961538
9     2.653846
10    2.807692
11    3.576923
12    4.423077
13    4.884615
14    4.846154
15    1.000000
16    1.923077
17    2.769231
18    3.269231
19    3.961538
20    4.346154
21    4.500000
22    1.192308
23    1.576923
24    3.153846
25    3.461538
26    4.153846
27    4.615385
28    4.692308
29    1.307692
30    2.615385
31    2.923077
32    3.192308
33    3.653846
34    4.000000
35    4.384615
36    4.653846
37    4.769231
38    1.115385
39    1.846154
40    2.461538
41    3.076923
42    4.230769
43    4.384615
44    4.769231
45    1.576923
46    2.576923
47    3.230769
48    3.307692
49    4.230769
50    4.538462
dtype: float64 is not JSON serializable

This seems to occur regardless of model.

Missing Tkinter dependency note

I just installed vanilla Ubuntu 18.04 LTS. Following the wiki I tried to issue "./run_subj". When doing so I got the following output:

Traceback (most recent call last):
  File "./run_subj", line 9, in <module>
    from sureal.routine import run_subjective_models
  File "/home/jakub/Workspace/sureal/python/src/sureal/routine.py", line 2, in <module>
    from matplotlib import pyplot as plt
  File "/home/jakub/.local/lib/python2.7/site-packages/matplotlib/pyplot.py", line 115, in <module>
    _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
  File "/home/jakub/.local/lib/python2.7/site-packages/matplotlib/backends/__init__.py", line 62, in pylab_setup
    [backend_name], 0)
  File "/home/jakub/.local/lib/python2.7/site-packages/matplotlib/backends/backend_tkagg.py", line 4, in <module>
    from . import tkagg  # Paint image to Tk photo blitter extension.
  File "/home/jakub/.local/lib/python2.7/site-packages/matplotlib/backends/tkagg.py", line 5, in <module>
    from six.moves import tkinter as Tk
  File "/home/jakub/.local/lib/python2.7/site-packages/six.py", line 203, in load_module
    mod = mod._resolve()
  File "/home/jakub/.local/lib/python2.7/site-packages/six.py", line 115, in _resolve
    return _import_module(self.mod)
  File "/home/jakub/.local/lib/python2.7/site-packages/six.py", line 82, in _import_module
    __import__(name)
  File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 42, in <module>
    raise ImportError, str(msg) + ', please install the python-tk package'
ImportError: No module named _tkinter, please install the python-tk package

I believe it would be beneficial to mention in the wiki that the Tkinter module is necessary to run this code.

Calculate DMOS

Hello!

I'm afraid that the result of DMOS calculation is wrong. I used the dataset/VQEGHD3_dataset_raw.py to calculate the DMOS as follows.

sureal --dataset dataset/VQEGHD3_dataset_raw.py --models DMOS MOS --plot-raw-data --plot-dis-videos --plot-observers --output-dir output/VQEGHD3_dataset

In the output.json, the DMOS got higher along with the MOS, which is wrong.

Please show how to calculate the DMOS correctly with the sureal tool.

Thank you very much!

Examples and unit tests not working anymore

Hi,
After the last few updates it seems like the examples and unit tests are failing. I installed the latest version with "pip install sureal" and ran the unit tests and they failed. I also tried running the example script
sureal --dataset resource/dataset/lukas_pc_dataset.py --models THURSTONE_MLE BT_MLE --plot-dis-videos --output-dir ./output/lukas_pc_dataset
but got the error message
Error: Must have one and only one subclass of SubjectiveModel with type --dataset, but got 0. I have used sureal in the past without issues so it looks like a recent update has caused a failure.

Subject Rejection

How could I make sure that the subject rejection is on when I use P.910?
Super thanks!

PC dataset format

In the readme file for PC datasets, the example given for OS scores is: 'os': {('Alice', 1): 40, ('Bob', 3): 45}

But the example file for a PC dataset that is linked to from the readme has all the OS values set to 1, e.g. 'os': {('Alice', 1): 1, ('Bob', 3): 1}

Is PC computation allowed for both binary scoring (is dis1 better than dis2) and non-binary (how would you rate dis1 compared to dis2 from 1 to 5)? Are the values in the linked dataset file all 1, because it is a binary comparison? If so, are 0s not needed and by default added?

Thanks!

MaximumLikelihoodEstimationModel and NaNs

It seems like the MaximumLikelihoodEstimationModel is introducing NaNs somewhere. When trying it out with my data that has no NaN values anywhere I get an error:

ValueErrorTraceback (most recent call last)
<ipython-input-1-cf2cd2ac15d7> in <module>()
     22     sub_mod_denoise_mle = MaximumLikelihoodEstimationModel(dataset_reader=dataset_reader)
     23     sub_mod_denoise_mle_rest = MaximumLikelihoodEstimationModel(dataset_reader=dataset_reader_rest)
---> 24     result_denoise_mle = sub_mod_denoise_mle.run_modeling()
     25     result_denoise_mle_rest = sub_mod_denoise_mle_rest.run_modeling()
     26     corrs_denoised_mle.append(np.corrcoef(result_denoise_mle['quality_scores'],result_denoise_mle_rest['quality_scores']))

/home/fh/miniconda2/envs/keras_py27up/lib/python2.7/site-packages/sureal/subjective_model.py in run_modeling(self, **kwargs)
     61 
     62     def run_modeling(self, **kwargs):
---> 63         model_result = self._run_modeling(self.dataset_reader, **kwargs)
     64         self._postprocess_model_result(model_result, **kwargs)
     65         self.model_result = model_result

/home/fh/miniconda2/envs/keras_py27up/lib/python2.7/site-packages/sureal/subjective_model.py in _run_modeling(cls, dataset_reader, **kwargs)
    816             print(np.isnan(x_e).any())
    817 
--> 818             delta_x_e = linalg.norm(x_e_prev - x_e)
    819 
    820             likelihood = np.sum(cls.loglikelihood_fcn(x_es, x_e, b_s, v_s, a_c, dataset_reader.content_id_of_dis_videos, 1, numerical_pdf))

/home/fh/miniconda2/envs/keras_py27up/lib/python2.7/site-packages/scipy/linalg/misc.pyc in norm(a, ord, axis, keepdims)
    135     """
    136     # Differs from numpy only in non-finite handling and the use of blas.
--> 137     a = np.asarray_chkfinite(a)
    138 
    139     # Only use optimized norms if axis and keepdims are not specified.

/home/fh/miniconda2/envs/keras_py27up/lib/python2.7/site-packages/numpy/lib/function_base.pyc in asarray_chkfinite(a, dtype, order)
   1231     if a.dtype.char in typecodes['AllFloat'] and not np.isfinite(a).all():
   1232         raise ValueError(
-> 1233             "array must not contain infs or NaNs")
   1234     return a
   1235 

ValueError: array must not contain infs or NaNs

Note that I did change the model to check for NaN values in x_e_prev and x_e, and only x_e contains them. I am not sure what causes this bug, but I'd love to hear if anyone knows the cause.

Edit: Also, I get these additional warning messages:

/home/fh/miniconda2/envs/keras_py27up/lib/python2.7/site-packages/sureal/subjective_model.py:711: RuntimeWarning: divide by zero encountered in divide
  num = - np.tile(a_c_e, (S, 1)).T / vs2_add_ace2 + np.tile(a_c_e, (S, 1)).T * a_es**2 / vs2_add_ace2**2
/home/fh/miniconda2/envs/keras_py27up/lib/python2.7/site-packages/sureal/subjective_model.py:711: RuntimeWarning: invalid value encountered in divide
  num = - np.tile(a_c_e, (S, 1)).T / vs2_add_ace2 + np.tile(a_c_e, (S, 1)).T * a_es**2 / vs2_add_ace2**2
/home/fh/miniconda2/envs/keras_py27up/lib/python2.7/site-packages/sureal/subjective_model.py:717: RuntimeWarning: divide by zero encountered in divide
  den = - vs2_minus_ace2 / vs2_add_ace2**2 + a_es**2 * poly_term / vs2_add_ace2**4
/home/fh/miniconda2/envs/keras_py27up/lib/python2.7/site-packages/sureal/subjective_model.py:717: RuntimeWarning: invalid value encountered in divide
  den = - vs2_minus_ace2 / vs2_add_ace2**2 + a_es**2 * poly_term / vs2_add_ace2**4
/home/fh/miniconda2/envs/keras_py27up/lib/python2.7/site-packages/sureal/subjective_model.py:725: RuntimeWarning: divide by zero encountered in divide
  -vs2_minus_ace2 / vs2_add_ace2**2 + a_es**2 * poly_term / vs2_add_ace2**4
/home/fh/miniconda2/envs/keras_py27up/lib/python2.7/site-packages/sureal/subjective_model.py:725: RuntimeWarning: invalid value encountered in divide
  -vs2_minus_ace2 / vs2_add_ace2**2 + a_es**2 * poly_term / vs2_add_ace2**4
/home/fh/miniconda2/envs/keras_py27up/lib/python2.7/site-packages/sureal/subjective_model.py:730: RuntimeWarning: divide by zero encountered in divide
  a_c_std = 1.0 /np.sqrt(np.maximum(0., -lpp))
/home/fh/miniconda2/envs/keras_py27up/lib/python2.7/site-packages/sureal/subjective_model.py:779: RuntimeWarning: divide by zero encountered in divide
  num = pd.DataFrame(num_num / num_den).sum(axis=1) # sum over s
/home/fh/miniconda2/envs/keras_py27up/lib/python2.7/site-packages/sureal/subjective_model.py:782: RuntimeWarning: divide by zero encountered in divide
  den = pd.DataFrame(den_num / den_den).sum(axis=1) # sum over s

jupyter notebook

Hi,

I tried to run the code in jupyter notebook but I get such error:

import matplotlib.pyplot as plt
import numpy.random as rn
import numpy as np
import scipy.stats as stat

import sys 
# please update
sys.path.append('your path')

import sureal
from sureal.config import SurealConfig, DisplayConfig
from sureal.tools.misc import import_python_file
from sureal.dataset_reader import RawDatasetReader, SyntheticRawDatasetReader, MissingDataRawDatasetReader, SelectSubjectRawDatasetReader, CorruptSubjectRawDatasetReader, CorruptDataRawDatasetReader
from sureal.subjective_model import MosModel, MaximumLikelihoodEstimationModel, SubjrejMosModel, ZscoringSubjrejMosModel
from sureal.routine import run_subjective_models
from sureal.tools.decorator import persist_to_file

Error is:

Traceback (most recent call last):

  File "/home/lucjan/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2910, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)

  File "<ipython-input-1-494c8aa0b319>", line 13, in <module>
    from sureal.subjective_model import MosModel, MaximumLikelihoodEstimationModel, SubjrejMosModel, ZscoringSubjrejMosModel

  File "/home/lucjan/Dropbox/Projekty/SAM/sureal/python/src/sureal/subjective_model.py", line 96
    lambda (content_id, is_refvideo):
           ^
SyntaxError: invalid syntax

What is the ordering scheme of resultant subject biases?

Let's say I have an input file using a dictionary to define opinion scores for each stimulus. As an example, let's use the dictionary given in the README.md file:

'os': {'Alice': 40, 'Bob': 45, 'Charlie': 50, 'David': 55, 'Elvis': 60}

Now, I apply the MLE_CO_AP2 model to my input file. As a result, I get a .json file. Among other things, it contains subject biases and subject inconsistency scores. Let's say this is the beginning of my output .json file (the three dots at the end mean that the file is longer than the snippet presented):

{
    "aic": 2.0689578770044244,
    "bic": 2.311378499225354,
    "dof": 0.030222222222222223,
    "loglikelihood": -1.00425671627999,
    "num_iter": 60,
    "observer_bias": [
        0.24779200387111744,
        -0.705541329462216,
        0.3477920038711175,
        -0.29887466279554914,
        -0.31220799612888256
    ],
    ...
}

My question is as follows: what ordering of subjects is used when sureal generates the output file? Does the first subject bias correspond to Alice and the last one to Elvis?

I searched the source code and noticed the following function sureal.tools.misc.get_unique_sorted_list. Am I right to assume that sureal always uses a sorted list of unique subject IDs, when generating its output?

Please note that sureal's documentation does not mention this. Thus, if your input file uses dictionaries to identify which score comes from which subject, it is non-obvious what ordering sureal uses when generating its output. 😌

I would be really glad if any of the maintainers could answer this. I could then generate a pull request with a relevant README.md file update.

Python3, map object is not subscriptable + empty figs 2 [,3,4].

Current master does not seem to work with Python 3.6.7 (clean virtual environment with requirements.txt installed). Checked with the reduced NFLX dataset pyfile taken from the test directory.

All models fail to work (first figure normal, second etc. figures are generated but empty).

reproduction steps:

git clone [sureal's dl link]

python3 -m venv srEnv
cp sureal/test/resource/NFLX_dataset_public_raw_PARTIAL.py data.py
source srEnv/bin/activate
cd sureal
pip install -r requirements.txt
pip install .

python3 -m sureal MLE ../data.py --print

produces:

Run model MaximumLikelihoodEstimationModel on dataset data.py
No handles with labels found to put in legend.
No handles with labels found to put in legend.
Dataset: ../data.py
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/elijs/QoEAT_Sureal/sureal/sureal/__main__.py", line 77, in <module>
    ret = main()
  File "/home/elijs/QoEAT_Sureal/sureal/sureal/__main__.py", line 63, in main
    print(("Subjective Model: {} {}".format(subjective_models[0].TYPE, subjective_models[0].VERSION)))
TypeError: 'map' object is not subscriptable

Seems to happen with all models. Without the --print command, same effective outcome (empty figures) without the error trace in terminal.

Notes:
in-environment python3:

Python 3.6.7 (default, Oct 22 2018, 11:32:17) 
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

pip-freeze:

cycler==0.10.0
kiwisolver==1.0.1
matplotlib==3.0.2
numpy==1.15.4
pandas==0.23.4
pkg-resources==0.0.0
pyparsing==2.3.0
python-dateutil==2.7.5
pytz==2018.9
scipy==1.2.0
six==1.12.0
sureal==0.2

Package structure is broken

I think that the package structure is not ideal and not according to Python guidelines, which will eventually lead to problems when installing via pip.

For example, run_subj is a script in the main directory, outside of any Python package. It then always calls the pip-installed sureal module:

from sureal.subjective_model import SubjectiveModel

So the code cannot be run if sureal was not properly installed as a package before.

I would suggest to:

  • Create a folder called sureal which is a Python package that should get installed
  • Have this module contain a main entry point for the CLI
  • Move the actual CLI-code from run_subj to sureal.__main__

This would allow the user to run:

python -m sureal

to call the CLI interface, and to use from sureal import … whenever they need SUREAL within another Python package.

You can see a simple example of such a module structure here, or a general recommendation in this post (section Installable Single Package).

pip install, ./run_subj -> From sureal.config no module named config

Installed via pip install sureal on an Ubuntu 18.04 installation. Downloaded the git repository and tried to run ./run_subj, with or without arguments, as my own user or sudo-ed; either way it fails with this printback:

Traceback (most recent call last):
  File "./run_subj", line 11, in <module>
    from sureal.config import DisplayConfig
ImportError: No module named config

./unittest works...

Output from (rerunning) pip install sureal:

Requirement already satisfied: sureal in /home/elijs/.local/lib/python2.7/site-packages (0.1.1)
Requirement already satisfied: pandas>=0.19.2 in /home/elijs/.local/lib/python2.7/site-packages (from sureal) (0.23.4)
Requirement already satisfied: scipy>=0.17.1 in /home/elijs/.local/lib/python2.7/site-packages (from sureal) (1.1.0)
Requirement already satisfied: matplotlib>=2.0.0 in /home/elijs/.local/lib/python2.7/site-packages (from sureal) (2.0.2)
Requirement already satisfied: numpy>=1.12.0 in /home/elijs/.local/lib/python2.7/site-packages (from sureal) (1.15.4)
Requirement already satisfied: python-dateutil>=2.5.0 in /home/elijs/.local/lib/python2.7/site-packages (from pandas>=0.19.2->sureal) (2.7.5)
Requirement already satisfied: pytz>=2011k in /home/elijs/.local/lib/python2.7/site-packages (from pandas>=0.19.2->sureal) (2018.7)
Requirement already satisfied: subprocess32 in /home/elijs/.local/lib/python2.7/site-packages (from matplotlib>=2.0.0->sureal) (3.5.3)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=1.5.6 in /home/elijs/.local/lib/python2.7/site-packages (from matplotlib>=2.0.0->sureal) (2.3.0)
Requirement already satisfied: six>=1.10 in /home/elijs/.local/lib/python2.7/site-packages (from matplotlib>=2.0.0->sureal) (1.10.0)
Requirement already satisfied: functools32 in /home/elijs/.local/lib/python2.7/site-packages (from matplotlib>=2.0.0->sureal) (3.2.3.post2)
Requirement already satisfied: cycler>=0.10 in /home/elijs/.local/lib/python2.7/site-packages (from matplotlib>=2.0.0->sureal) (0.10.0)
