
recommenders's Issues

BUG: python tests don't pass when using only the python environment

When using the reco_bare environment and running pytest -m "not notebooks and not spark" tests/unit/, I get the following on a Mac:

(reco_bare) MININT-JFKQCE5:Recommenders miguel$ pytest -m "not notebooks and not spark" tests/unit/
================== test session starts ===================
platform darwin -- Python 3.6.0, pytest-3.6.4, py-1.6.0, pluggy-0.7.1
rootdir: /Users/miguel/MS/code/Recommenders, inifile:
plugins: pylint-0.11.0, datafiles-2.0, cov-2.6.0
collected 39 items / 3 errors / 3 deselected             

========================= ERRORS =========================
____ ERROR collecting tests/unit/test_sar_pyspark.py _____
ImportError while importing test module '/Users/miguel/MS/code/Recommenders/tests/unit/test_sar_pyspark.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/unit/test_sar_pyspark.py:7: in <module>
    from reco_utils.recommender.sar.sar_pyspark import SARpySparkReference
reco_utils/recommender/sar/sar_pyspark.py:13: in <module>
    import pyspark.sql.functions as F
E   ModuleNotFoundError: No module named 'pyspark'
__ ERROR collecting tests/unit/test_spark_evaluation.py __
ImportError while importing test module '/Users/miguel/MS/code/Recommenders/tests/unit/test_spark_evaluation.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/unit/test_spark_evaluation.py:7: in <module>
    from reco_utils.evaluation.spark_evaluation import (
reco_utils/evaluation/spark_evaluation.py:5: in <module>
    from pyspark.mllib.evaluation import RegressionMetrics, RankingMetrics
E   ModuleNotFoundError: No module named 'pyspark'
___ ERROR collecting tests/unit/test_spark_splitter.py ___
ImportError while importing test module '/Users/miguel/MS/code/Recommenders/tests/unit/test_spark_splitter.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/unit/test_spark_splitter.py:10: in <module>
    from reco_utils.dataset.spark_splitters import spark_chrono_split, spark_random_split
reco_utils/dataset/spark_splitters.py:6: in <module>
    from pyspark.sql import Window
E   ModuleNotFoundError: No module named 'pyspark'
!!!!!!!! Interrupted: 3 errors during collection !!!!!!!!!
========= 3 deselected, 3 error in 2.02 seconds ==========
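
A possible fix (a sketch, not necessarily how the repo should handle it) is to skip collecting the Spark test modules when pyspark is not installed, using pytest's collect_ignore hook in tests/unit/conftest.py:

# tests/unit/conftest.py (sketch): skip the Spark test modules entirely when
# pyspark is missing, so collection succeeds in a python-only environment.
try:
    import pyspark  # noqa: F401
    PYSPARK_AVAILABLE = True
except ImportError:
    PYSPARK_AVAILABLE = False

collect_ignore = []
if not PYSPARK_AVAILABLE:
    collect_ignore += [
        "test_sar_pyspark.py",
        "test_spark_evaluation.py",
        "test_spark_splitter.py",
    ]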

Add test data sets

Tests on certain modules, such as evaluation and split, should be performed on a sufficiently large dataset (e.g., Netflix or MovieLens-1M).
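
As a sketch, assuming the maybe_download helper from the dataset utilities (mentioned in the rename issue below) takes a URL, a filename, and a working directory, a session-scoped fixture could fetch MovieLens-1M once per test run:

import pytest
from reco_utils.dataset.url_utils import maybe_download  # path per the rename issue below

ML_1M_URL = "http://files.grouplens.org/datasets/movielens/ml-1m.zip"

@pytest.fixture(scope="session")
def movielens_1m(tmpdir_factory):
    # Download once per test session; assumes maybe_download(url, filename,
    # work_directory) caches the file and returns its local path.
    work_dir = str(tmpdir_factory.mktemp("data"))
    return maybe_download(ML_1M_URL, "ml-1m.zip", work_directory=work_dir)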

SAR unit test configs

The SAR unit test configs for the single-node and pySpark unit tests are identical, yet they are currently duplicated in each test file. They should be defined once and imported from conftest.py.
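
For instance (a sketch; the values are the ones visible in the failure logs further down), a shared tests/conftest.py could expose the config as fixtures that both test modules pick up automatically:

# tests/conftest.py (sketch): SAR test configuration defined once for both
# the single-node and pySpark unit tests.
import pytest

@pytest.fixture(scope="module")
def header():
    return {
        "col_user": "UserId",
        "col_item": "MovieId",
        "col_rating": "Rating",
        "col_timestamp": "Timestamp",
    }

@pytest.fixture(scope="module")
def sar_settings():
    return {
        "ATOL": 1e-8,
        "FILE_DIR": "http://recodatasets.blob.core.windows.net/sarunittest/",
        "TEST_USER_ID": "0003000098E85347",
    }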

DS VM pySpark version and library mismatch

As the DS VM upgraded to Spark 2.3, the supporting libraries and environment no longer work with Spark 2.2, which is required by Airship (now Recommenders). The quick fix is to upgrade the virtual env to Spark 2.3. Going forward, we should decide whether to keep upgrading Spark versions as the DS VM upgrades them, or to anchor to a specific Spark version with standalone DS VM libs.

SAR has no predict method

The predict method should fill in the SAR score for a given (user, item) pair in a column called prediction. This is needed in order to utilize existing MLlib libraries.
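
A minimal sketch of what the method could look like (the column attribute names and the per-pair score lookup are assumptions, not the actual SAR internals):

import pandas as pd

class SARSingleNodeReference:  # sketch: only the proposed method is shown
    def predict(self, test: pd.DataFrame) -> pd.DataFrame:
        # Return the (user, item) pairs from `test` with a `prediction`
        # column, matching the column contract used by MLlib-style evaluators.
        out = test[[self.col_user, self.col_item]].copy()
        out["prediction"] = [
            self.score(u, i)  # hypothetical per-pair SAR score lookup
            for u, i in zip(out[self.col_user], out[self.col_item])
        ]
        return out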

Add environment installation to smoke and integration tests

As of now, the smoke and integration tests run on a pre-created environment.

We have to test the full pipeline: for each environment (python, spark, and GPU), we create the conda file, install the environment, execute the tests, and then remove the environment.

TODO:

  • Rename the builds in VSTS
  • Add separate environments for the unit tests
  • Create the environment before, and delete it after, the smoke and integration tests
  • Update the README and link the build statuses

Review problem with Jupyter kernel in the unit tests

When setting up the new CI/CD system, we had to register a Jupyter kernel:

python -m ipykernel install --user --name py36 --display-name "Python (py36)"

python -m ipykernel install --user --name recommender --display-name "Python (recommender)"

We need to review this issue.
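
The kernel name matters because the notebook tests must execute against a registered kernelspec. A sketch of how a notebook run could pin the kernel, using nbconvert's ExecutePreprocessor (the kernel name is the one registered above):

import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

def run_notebook(path, kernel_name="recommender"):
    # Executes the notebook against the named kernelspec; this fails if the
    # kernel was never registered, which is the problem observed here.
    nb = nbformat.read(path, as_version=4)
    ep = ExecutePreprocessor(timeout=600, kernel_name=kernel_name)
    ep.preprocess(nb, {"metadata": {"path": "."}})
    return nb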

Recalculate / update SAR user-item affinity matrix or item-item similarity matrix

One suggestion for the SAR implementation: SAR currently does both the user-item affinity matrix calculation and the item-item similarity matrix calculation in the same fit() function. It would be good to have them separate in case we want to re-calculate (update) only one of the matrices. Even better would be an update function for individual user or item records that re-calculates only the cells of the matrices related to that user or item.
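
A sketch of the suggested split (the method and helper names are illustrative, not taken from the codebase):

class SARSingleNode:
    def compute_affinity(self, df):
        # (Re)compute only the user-item affinity matrix.
        self.user_affinity = self._affinity_from_interactions(df)  # hypothetical helper

    def compute_similarity(self, df):
        # (Re)compute only the item-item similarity matrix.
        self.item_similarity = self._similarity_from_cooccurrence(df)  # hypothetical helper

    def fit(self, df):
        # fit() becomes a thin wrapper, so either matrix can be refreshed
        # independently without paying for the other.
        self.compute_affinity(df)
        self.compute_similarity(df)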

Clean up notebooks in git

  • Find a way to reduce the git diffs created by changes in kernel metadata and cell run history (see the sketch after this list).
  • Make sure notebooks follow the correct templates; review the templates.

Consider using nbdiff (part of the nbdime project).
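
One way to strip the noisy fields before committing (a sketch using the nbformat library; nbdiff would then only surface real content changes):

import nbformat

def strip_noise(path):
    # Remove the two main sources of spurious git diffs: cell outputs and
    # execution counts. Kernel metadata could be normalized the same way.
    nb = nbformat.read(path, as_version=4)
    for cell in nb.cells:
        if cell.cell_type == "code":
            cell.outputs = []
            cell.execution_count = None
    nbformat.write(nb, path)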

Rename "utilities" folder to "reco_utils"

As per Tao, it would be less confusing to use a name like reco_utils instead of utilities. The current imports look like this:

from utilities.recommender.sar.sar_singlenode import SARSingleNodeReference
from utilities.dataset.url_utils import maybe_download
from utilities.dataset.python_splitters import python_random_split
from utilities.evaluation.python_evaluation import PythonRatingEvaluation, PythonRankingEvaluation
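
After the rename, the same imports would read (the reco_utils.* prefixes match the module paths visible in the tracebacks above):

from reco_utils.recommender.sar.sar_singlenode import SARSingleNodeReference
from reco_utils.dataset.url_utils import maybe_download
from reco_utils.dataset.python_splitters import python_random_split
from reco_utils.evaluation.python_evaluation import PythonRatingEvaluation, PythonRankingEvaluation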

Define and create master PR strategy

  • Consider what additional smoke tests / other tests we need in place
  • Add timers to the integration tests
  • Consider tests on larger datasets
  • Set up a test bench on Cosmos DB and consider the scaling of each algorithm

Review docstrings in reco_utils

Some files don't have complete docstrings. For example, this initializer documents only two of its six arguments:

class SparkRatingEvaluation:
    """Spark Rating Evaluator"""

    def __init__(
        self,
        rating_true,
        rating_pred,
        col_user=DEFAULT_USER_COL,
        col_item=DEFAULT_ITEM_COL,
        col_rating=DEFAULT_RATING_COL,
        col_prediction=PREDICTION_COL,
    ):
        """Initializer.
        Args:
            rating_true (spark.DataFrame): True labels.
            rating_pred (spark.DataFrame): Predicted labels.
        """

Debug issue with pySAR test and tolerance

On some machines the tests pass with a tolerance of 1e-8, but on others they don't.

We got this error on Prometheus when testing test_sar_singlenode.py:

(py36) miguel@prometheus:~/repos/Recommenders$ pytest tests/unit/test_sar_singlenode.py 
=================================================================================== test session starts ====================================================================================
platform linux -- Python 3.6.5, pytest-3.6.4, py-1.7.0, pluggy-0.7.1
rootdir: /home/miguel/repos/Recommenders, inifile:
collected 15 items                                                                                                                                                                         

tests/unit/test_sar_singlenode.py ...........FFFF                                                                                                                                    [100%]

========================================================================================= FAILURES =========================================================================================
____________________________________________________________________________________ test_user_affinity ____________________________________________________________________________________

demo_usage_data =                  UserId    MovieId     Timestamp  Rating  exponential  rating_exponential
0      0003000098E85347  DQF...076
11837  00030000822E3BAE  DAF-00448  1.416292e+09       1     0.009076            0.009076

[11838 rows x 6 columns]
sar_settings = {'ATOL': 1e-08, 'FILE_DIR': 'http://recodatasets.blob.core.windows.net/sarunittest/', 'TEST_USER_ID': '0003000098E85347'}
header = {'col_item': 'MovieId', 'col_rating': 'Rating', 'col_timestamp': 'Timestamp', 'col_user': 'UserId'}

    def test_user_affinity(demo_usage_data, sar_settings, header):
        time_now = demo_usage_data[header["col_timestamp"]].max()
        model = SARSingleNodeReference(
            remove_seen=True,
            similarity_type="cooccurrence",
            timedecay_formula=True,
            time_decay_coefficient=30,
            time_now=time_now,
            **header
        )
        _apply_sar_hash_index(model, demo_usage_data, None, header)
        model.fit(demo_usage_data)
    
        true_user_affinity, items = load_affinity(sar_settings["FILE_DIR"] + "user_aff.csv")
        user_index = model.user_map_dict[sar_settings["TEST_USER_ID"]]
        test_user_affinity = np.reshape(
            np.array(
                _rearrange_to_test(
                    model.user_affinity, None, items, None, model.item_map_dict
                )[user_index,].todense()
            ),
            -1,
        )
>       assert np.allclose(
            true_user_affinity.astype(test_user_affinity.dtype),
            test_user_affinity,
            atol=sar_settings["ATOL"],
        )
E       AssertionError: assert False
E        +  where False = <function allclose at 0x7f6110e1d730>(array([0.        , 0.        , 0.        , 0.        , 0.        ,\n       0.        , 0.        , 0.        , 0.      ...       , 0.        , 0.        ,\n       0.        , 0.        , 0.15181286, 1.        , 0.        ,\n       0.        ]), array([0.        , 0.        , 0.        , 0.        , 0.        ,\n       0.        , 0.        , 0.        , 0.      ...       , 0.        , 0.        ,\n       0.        , 0.        , 0.15195908, 1.        , 0.        ,\n       0.        ]), atol=1e-08)
E        +    where <function allclose at 0x7f6110e1d730> = np.allclose
E        +    and   array([0.        , 0.        , 0.        , 0.        , 0.        ,\n       0.        , 0.        , 0.        , 0.      ...       , 0.        , 0.        ,\n       0.        , 0.        , 0.15181286, 1.        , 0.        ,\n       0.        ]) = <built-in method astype of numpy.ndarray object at 0x7f60fc6adee0>(dtype('float64'))
E        +      where <built-in method astype of numpy.ndarray object at 0x7f60fc6adee0> = array(['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0',\n       '0', '0.0221122254449968', '0', '0', '0..., '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0',\n       '0', '0.151812861826336', '1', '0', '0'], dtype='<U18').astype
E        +      and   dtype('float64') = array([0.        , 0.        , 0.        , 0.        , 0.        ,\n       0.        , 0.        , 0.        , 0.      ...       , 0.        , 0.        ,\n       0.        , 0.        , 0.15195908, 1.        , 0.        ,\n       0.        ]).dtype

tests/unit/test_sar_singlenode.py:201: AssertionError
___________________________________________________________________________ test_userpred[3-cooccurrence-count] ____________________________________________________________________________

threshold = 3, similarity_type = 'cooccurrence', file = 'count', header = {'col_item': 'MovieId', 'col_rating': 'Rating', 'col_timestamp': 'Timestamp', 'col_user': 'UserId'}
sar_settings = {'ATOL': 1e-08, 'FILE_DIR': 'http://recodatasets.blob.core.windows.net/sarunittest/', 'TEST_USER_ID': '0003000098E85347'}
demo_usage_data =                  UserId    MovieId     Timestamp  Rating  exponential  rating_exponential
0      0003000098E85347  DQF...076
11837  00030000822E3BAE  DAF-00448  1.416292e+09       1     0.009076            0.009076

[11838 rows x 6 columns]

    @pytest.mark.parametrize(
        "threshold,similarity_type,file",
        [(3, "cooccurrence", "count"), (3, "jaccard", "jac"), (3, "lift", "lift")],
    )
    def test_userpred(
        threshold, similarity_type, file, header, sar_settings, demo_usage_data
    ):
        time_now = demo_usage_data[header["col_timestamp"]].max()
        model = SARSingleNodeReference(
            remove_seen=True,
            similarity_type=similarity_type,
            timedecay_formula=True,
            time_decay_coefficient=30,
            time_now=time_now,
            threshold=threshold,
            **header
        )
        _apply_sar_hash_index(model, demo_usage_data, None, header)
        model.fit(demo_usage_data)
    
        true_items, true_scores = load_userpred(
            sar_settings["FILE_DIR"]
            + "userpred_"
            + file
            + str(threshold)
            + "_userid_only.csv"
        )
        test_results = model.recommend_k_items(
            demo_usage_data[
                demo_usage_data[header["col_user"]] == sar_settings["TEST_USER_ID"]
            ],
            top_k=10,
        )
        test_items = list(test_results[header["col_item"]])
        test_scores = np.array(test_results["prediction"])
        assert true_items == test_items
>       assert np.allclose(true_scores, test_scores, atol=sar_settings["ATOL"])
E       assert False
E        +  where False = <function allclose at 0x7f6110e1d730>(array([40.96870941, 40.37760085, 19.55002941, 18.10756063, 13.24775154,\n       12.67358812, 12.49898911, 12.0359004 , 10.91842008, 10.91185623]), array([41.00239015, 40.41649126, 19.5650067 , 18.12114858, 13.26051135,\n       12.6742369 , 12.50043289, 12.047493  , 10.92893636, 10.92236618]), atol=1e-08)
E        +    where <function allclose at 0x7f6110e1d730> = np.allclose

tests/unit/test_sar_singlenode.py:245: AssertionError
_______________________________________________________________________________ test_userpred[3-jaccard-jac] _______________________________________________________________________________

threshold = 3, similarity_type = 'jaccard', file = 'jac', header = {'col_item': 'MovieId', 'col_rating': 'Rating', 'col_timestamp': 'Timestamp', 'col_user': 'UserId'}
sar_settings = {'ATOL': 1e-08, 'FILE_DIR': 'http://recodatasets.blob.core.windows.net/sarunittest/', 'TEST_USER_ID': '0003000098E85347'}
demo_usage_data =                  UserId    MovieId     Timestamp  Rating  exponential  rating_exponential
0      0003000098E85347  DQF...076
11837  00030000822E3BAE  DAF-00448  1.416292e+09       1     0.009076            0.009076

[11838 rows x 6 columns]

    @pytest.mark.parametrize(
        "threshold,similarity_type,file",
        [(3, "cooccurrence", "count"), (3, "jaccard", "jac"), (3, "lift", "lift")],
    )
    def test_userpred(
        threshold, similarity_type, file, header, sar_settings, demo_usage_data
    ):
        time_now = demo_usage_data[header["col_timestamp"]].max()
        model = SARSingleNodeReference(
            remove_seen=True,
            similarity_type=similarity_type,
            timedecay_formula=True,
            time_decay_coefficient=30,
            time_now=time_now,
            threshold=threshold,
            **header
        )
        _apply_sar_hash_index(model, demo_usage_data, None, header)
        model.fit(demo_usage_data)
    
        true_items, true_scores = load_userpred(
            sar_settings["FILE_DIR"]
            + "userpred_"
            + file
            + str(threshold)
            + "_userid_only.csv"
        )
        test_results = model.recommend_k_items(
            demo_usage_data[
                demo_usage_data[header["col_user"]] == sar_settings["TEST_USER_ID"]
            ],
            top_k=10,
        )
        test_items = list(test_results[header["col_item"]])
        test_scores = np.array(test_results["prediction"])
        assert true_items == test_items
>       assert np.allclose(true_scores, test_scores, atol=sar_settings["ATOL"])
E       assert False
E        +  where False = <function allclose at 0x7f6110e1d730>(array([0.0616357 , 0.04918001, 0.04247487, 0.04009872, 0.03847229,\n       0.03839772, 0.03251167, 0.02474822, 0.02432458, 0.0224889 ]), array([0.06163639, 0.04921205, 0.04247624, 0.04011545, 0.03848885,\n       0.03843471, 0.0325135 , 0.02477206, 0.02432508, 0.02249099]), atol=1e-08)
E        +    where <function allclose at 0x7f6110e1d730> = np.allclose

tests/unit/test_sar_singlenode.py:245: AssertionError
________________________________________________________________________________ test_userpred[3-lift-lift] ________________________________________________________________________________

threshold = 3, similarity_type = 'lift', file = 'lift', header = {'col_item': 'MovieId', 'col_rating': 'Rating', 'col_timestamp': 'Timestamp', 'col_user': 'UserId'}
sar_settings = {'ATOL': 1e-08, 'FILE_DIR': 'http://recodatasets.blob.core.windows.net/sarunittest/', 'TEST_USER_ID': '0003000098E85347'}
demo_usage_data =                  UserId    MovieId     Timestamp  Rating  exponential  rating_exponential
0      0003000098E85347  DQF...076
11837  00030000822E3BAE  DAF-00448  1.416292e+09       1     0.009076            0.009076

[11838 rows x 6 columns]

    @pytest.mark.parametrize(
        "threshold,similarity_type,file",
        [(3, "cooccurrence", "count"), (3, "jaccard", "jac"), (3, "lift", "lift")],
    )
    def test_userpred(
        threshold, similarity_type, file, header, sar_settings, demo_usage_data
    ):
        time_now = demo_usage_data[header["col_timestamp"]].max()
        model = SARSingleNodeReference(
            remove_seen=True,
            similarity_type=similarity_type,
            timedecay_formula=True,
            time_decay_coefficient=30,
            time_now=time_now,
            threshold=threshold,
            **header
        )
        _apply_sar_hash_index(model, demo_usage_data, None, header)
        model.fit(demo_usage_data)
    
        true_items, true_scores = load_userpred(
            sar_settings["FILE_DIR"]
            + "userpred_"
            + file
            + str(threshold)
            + "_userid_only.csv"
        )
        test_results = model.recommend_k_items(
            demo_usage_data[
                demo_usage_data[header["col_user"]] == sar_settings["TEST_USER_ID"]
            ],
            top_k=10,
        )
        test_items = list(test_results[header["col_item"]])
        test_scores = np.array(test_results["prediction"])
        assert true_items == test_items
>       assert np.allclose(true_scores, test_scores, atol=sar_settings["ATOL"])
E       assert False
E        +  where False = <function allclose at 0x7f6110e1d730>(array([0.00134902, 0.00084695, 0.00072497, 0.00072133, 0.00066855,\n       0.0006003 , 0.00045299, 0.00045202, 0.00041803, 0.00034772]), array([0.00134902, 0.00084696, 0.00072513, 0.00072134, 0.00066871,\n       0.00060031, 0.00045312, 0.00045204, 0.00041804, 0.00034806]), atol=1e-08)
E        +    where <function allclose at 0x7f6110e1d730> = np.allclose

tests/unit/test_sar_singlenode.py:245: AssertionError
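
The differences in the log are relative (on the order of 1e-3 to 1e-4 of the score magnitude), so one plausible fix is to compare with a relative rather than a tight absolute tolerance; a minimal sketch using a few of the failing co-occurrence scores above:

import numpy as np

# Reference scores vs. scores computed on another machine (from the log above).
true_scores = np.array([40.96870941, 40.37760085, 19.55002941])
test_scores = np.array([41.00239015, 40.41649126, 19.56500670])

# Fails: atol=1e-8 is far tighter than the cross-machine variation.
assert not np.allclose(true_scores, test_scores, atol=1e-8)

# Passes: a relative tolerance absorbs platform/BLAS rounding differences.
assert np.allclose(true_scores, test_scores, rtol=1e-2)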
