robertmartin8 / pyportfolioopt

Financial portfolio optimisation in python, including classical efficient frontier, Black-Litterman, Hierarchical Risk Parity

Home Page: https://pyportfolioopt.readthedocs.io/

License: MIT License

Python 27.73% Dockerfile 0.05% Jupyter Notebook 72.22%
finance portfolio-optimization portfolio-management quantitative-finance algorithmic-trading investing efficient-frontier covariance python investment

pyportfolioopt's Introduction


PyPortfolioOpt is a library that implements portfolio optimization methods, including classical mean-variance optimization techniques and Black-Litterman allocation, as well as more recent developments in the field like shrinkage and Hierarchical Risk Parity.

It is extensive yet easily extensible, and can be useful for either a casual investor or a professional looking for an easy prototyping tool. Whether you are a fundamentals-oriented investor who has identified a handful of undervalued picks, or an algorithmic trader who has a basket of strategies, PyPortfolioOpt can help you combine your alpha sources in a risk-efficient way.

PyPortfolioOpt has been published in the Journal of Open Source Software 🎉

PyPortfolioOpt is now being maintained by Tuan Tran.

Head over to the documentation on ReadTheDocs to get an in-depth look at the project, or check out the cookbook to see some examples showing the full process from downloading data to building a portfolio.


Getting started

If you would like to play with PyPortfolioOpt interactively in your browser, you may launch Binder here. It takes a while to set up, but it lets you try out the cookbook recipes without having to deal with all of the requirements.

Note: macOS users will need to install Command Line Tools.

Note: if you are on Windows, you first need to install C++ (download, install instructions).

This project is available on PyPI, meaning that you can just:

pip install PyPortfolioOpt

(you may need to follow separate installation instructions for cvxopt and cvxpy).

However, it is best practice to use a dependency manager within a virtual environment. My current recommendation is to get yourself set up with poetry then just run

poetry add PyPortfolioOpt

Otherwise, clone/download the project and in the project directory run:

python setup.py install

PyPortfolioOpt supports Docker. Build your first container with docker build -f docker/Dockerfile . -t pypfopt. You can use the image to run tests or even launch a Jupyter server.

# iPython interpreter:
docker run -it pypfopt poetry run ipython

# Jupyter notebook server:
docker run -it -p 8888:8888 pypfopt poetry run jupyter notebook --allow-root --no-browser --ip 0.0.0.0
# click on http://127.0.0.1:8888/?token=xxx

# Pytest
docker run -t pypfopt poetry run pytest

# Bash
docker run -it pypfopt bash

For more information, please read this guide.

For development

If you would like to make major changes to integrate this with your proprietary system, it probably makes sense to clone this repository and to just use the source code.

git clone https://github.com/robertmartin8/PyPortfolioOpt

Alternatively, you could try:

pip install -e git+https://github.com/robertmartin8/PyPortfolioOpt.git

A quick example

Here is an example using real-life stock data, demonstrating how easy it is to find the long-only portfolio that maximises the Sharpe ratio (a measure of risk-adjusted returns).

import pandas as pd
from pypfopt import EfficientFrontier
from pypfopt import risk_models
from pypfopt import expected_returns

# Read in price data
df = pd.read_csv("tests/resources/stock_prices.csv", parse_dates=True, index_col="date")

# Calculate expected returns and sample covariance
mu = expected_returns.mean_historical_return(df)
S = risk_models.sample_cov(df)

# Optimize for maximal Sharpe ratio
ef = EfficientFrontier(mu, S)
raw_weights = ef.max_sharpe()
cleaned_weights = ef.clean_weights()
ef.save_weights_to_file("weights.csv")  # saves to file
print(cleaned_weights)
ef.portfolio_performance(verbose=True)

This outputs the following weights:

{'GOOG': 0.03835,
 'AAPL': 0.0689,
 'FB': 0.20603,
 'BABA': 0.07315,
 'AMZN': 0.04033,
 'GE': 0.0,
 'AMD': 0.0,
 'WMT': 0.0,
 'BAC': 0.0,
 'GM': 0.0,
 'T': 0.0,
 'UAA': 0.0,
 'SHLD': 0.0,
 'XOM': 0.0,
 'RRC': 0.0,
 'BBY': 0.01324,
 'MA': 0.35349,
 'PFE': 0.1957,
 'JPM': 0.0,
 'SBUX': 0.01082}

Expected annual return: 30.5%
Annual volatility: 22.2%
Sharpe Ratio: 1.28

This is interesting but not useful in itself. However, PyPortfolioOpt provides a method which allows you to convert the above continuous weights to an actual allocation that you could buy. Just enter the most recent prices, and the desired portfolio size ($10,000 in this example):

from pypfopt.discrete_allocation import DiscreteAllocation, get_latest_prices


latest_prices = get_latest_prices(df)

da = DiscreteAllocation(cleaned_weights, latest_prices, total_portfolio_value=10000)
allocation, leftover = da.greedy_portfolio()
print("Discrete allocation:", allocation)
print("Funds remaining: ${:.2f}".format(leftover))
This prints:

12 out of 20 tickers were removed
Discrete allocation: {'GOOG': 1, 'AAPL': 4, 'FB': 12, 'BABA': 4, 'BBY': 2,
                      'MA': 20, 'PFE': 54, 'SBUX': 1}
Funds remaining: $11.89

Disclaimer: nothing about this project constitutes investment advice, and the author bears no responsibility for your subsequent investment decisions. Please refer to the license for more information.

An overview of classical portfolio optimization methods

Harry Markowitz's 1952 paper is the undeniable classic, which turned portfolio optimization from an art into a science. The key insight is that by combining assets with different expected returns and volatilities, one can decide on a mathematically optimal allocation which minimises the risk for a target return – the set of all such optimal portfolios is referred to as the efficient frontier.

Although there has been much development in the subject, more than half a century later Markowitz's core ideas are still fundamentally important and see daily use in many portfolio management firms. The main drawback of mean-variance optimization is that the theoretical treatment requires knowledge of the expected returns and the future risk characteristics (covariance) of the assets. Obviously, if we knew the expected returns of a stock, life would be much easier, but the whole game is that stock returns are notoriously hard to forecast. As a substitute, we can derive estimates of the expected return and covariance based on historical data – though we do lose the theoretical guarantees provided by Markowitz, the closer our estimates are to the real values, the better our portfolio will be.

Thus, this project provides four major sets of functionality (though of course they are intimately related):

  • Estimates of expected returns
  • Estimates of risk (i.e. the covariance of asset returns)
  • Objective functions to be optimized
  • Optimizers

A key design goal of PyPortfolioOpt is modularity – the user should be able to swap in their own components while still making use of the framework that PyPortfolioOpt provides.

Features

In this section, we detail some of PyPortfolioOpt's available functionality. More examples are offered in the Jupyter notebooks here. Another good resource is the tests.

A far more comprehensive version of this can be found on ReadTheDocs, as well as possible extensions for more advanced users.

Expected returns

  • Mean historical returns:
    • the simplest and most common approach, which states that the expected return of each asset is equal to the mean of its historical returns.
    • easily interpretable and very intuitive
  • Exponentially weighted mean historical returns:
    • similar to mean historical returns, except it gives exponentially more weight to recent prices
    • it is likely the case that an asset's most recent returns hold more weight than returns from 10 years ago when it comes to estimating future returns.
  • Capital Asset Pricing Model (CAPM):
    • a simple model to predict returns based on the beta to the market
    • this is used all over finance! (see the sketch after this list)
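
As a rough usage sketch (assuming your version exposes mean_historical_return, ema_historical_return, and capm_return in the expected_returns module, as recent releases do), the three estimators are drop-in replacements for one another; df is the price DataFrame from the quick example above:

from pypfopt import expected_returns

# Simple (annualised) mean of historical returns
mu_mean = expected_returns.mean_historical_return(df)

# Exponentially weighted mean, giving more weight to recent prices
mu_ema = expected_returns.ema_historical_return(df, span=500)

# CAPM estimate based on each asset's beta to the market
mu_capm = expected_returns.capm_return(df)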

Risk models (covariance)

The covariance matrix encodes not just the volatility of an asset, but also how it is correlated with other assets. This is important because, in order to reap the benefits of diversification (and thus increase return per unit risk), the assets in the portfolio should be as uncorrelated as possible.

  • Sample covariance matrix:
    • an unbiased estimate of the covariance matrix
    • relatively easy to compute
    • the de facto standard for many years
    • however, it has a high estimation error, which is particularly dangerous in mean-variance optimization because the optimizer is likely to give excess weight to these erroneous estimates.
  • Semicovariance: a measure of risk that focuses on downside variation.
  • Exponential covariance: an improvement over sample covariance that gives more weight to recent data
  • Covariance shrinkage: techniques that involve combining the sample covariance matrix with a structured estimator, to reduce the effect of erroneous weights. PyPortfolioOpt provides wrappers around the efficient vectorised implementations provided by sklearn.covariance.
    • manual shrinkage
    • Ledoit Wolf shrinkage, which chooses an optimal shrinkage parameter. We offer three shrinkage targets: constant_variance, single_factor, and constant_correlation.
    • Oracle Approximating Shrinkage
  • Minimum Covariance Determinant:
    • a robust estimate of the covariance
    • implemented in sklearn.covariance

(The covariance heatmap shown in the README is generated using plotting.plot_covariance.)
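
A minimal sketch of how these risk models are called (assuming the risk_models module exposes sample_cov, semicovariance, exp_cov, and CovarianceShrinkage, and that ledoit_wolf accepts a shrinkage_target argument, as in recent versions):

from pypfopt import risk_models

S_sample = risk_models.sample_cov(df)       # sample covariance
S_semi = risk_models.semicovariance(df)     # penalises downside variation only
S_exp = risk_models.exp_cov(df, span=180)   # exponentially weighted covariance
S_shrunk = risk_models.CovarianceShrinkage(df).ledoit_wolf(
    shrinkage_target="single_factor"
)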

Objective functions

  • Maximum Sharpe ratio: this results in a tangency portfolio because on a graph of returns vs risk, this portfolio corresponds to the tangent of the efficient frontier that has a y-intercept equal to the risk-free rate. This is the default option because it finds the optimal return per unit risk.
  • Minimum volatility. This may be useful if you're trying to get an idea of how low the volatility could be, but in practice it makes a lot more sense to me to use the portfolio that maximises the Sharpe ratio.
  • Efficient return, a.k.a. the Markowitz portfolio, which minimises risk for a given target return – this was the main focus of Markowitz 1952
  • Efficient risk: the Sharpe-maximising portfolio for a given target risk.
  • Maximum quadratic utility. You can provide your own risk-aversion level and compute the appropriate portfolio (see the sketch after this list).
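
Each of these objectives is a method on EfficientFrontier. A hedged sketch, reusing mu and S from the quick example (argument names such as target_volatility/target_return may differ slightly between versions, and a solved EfficientFrontier instance should not be reused):

ef = EfficientFrontier(mu, S)
w_sharpe = ef.max_sharpe()                               # tangency portfolio

ef = EfficientFrontier(mu, S)
w_minvol = ef.min_volatility()                           # minimum volatility

ef = EfficientFrontier(mu, S)
w_markowitz = ef.efficient_return(target_return=0.15)    # min risk for a target return

ef = EfficientFrontier(mu, S)
w_eff_risk = ef.efficient_risk(target_volatility=0.20)   # max Sharpe for a target risk

ef = EfficientFrontier(mu, S)
w_utility = ef.max_quadratic_utility(risk_aversion=2)    # your own risk-aversion level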

Adding constraints or different objectives

  • Long/short: by default all of the mean-variance optimization methods in PyPortfolioOpt are long-only, but they can be initialised to allow for short positions by changing the weight bounds:
ef = EfficientFrontier(mu, S, weight_bounds=(-1, 1))
  • Market neutrality: for the efficient_risk and efficient_return methods, PyPortfolioOpt provides an option to form a market-neutral portfolio (i.e. weights sum to zero). This is not possible for the max Sharpe portfolio or the min volatility portfolio because in those cases they are not invariant with respect to leverage. Market neutrality requires negative weights:
ef = EfficientFrontier(mu, S, weight_bounds=(-1, 1))
ef.efficient_return(target_return=0.2, market_neutral=True)
  • Minimum/maximum position size: it may be the case that you want no security to form more than 10% of your portfolio. This is easy to encode:
ef = EfficientFrontier(mu, S, weight_bounds=(0, 0.1))

One issue with mean-variance optimization is that it leads to many zero weights. While these are "optimal" in-sample, there is a large body of research showing that this characteristic leads mean-variance portfolios to underperform out-of-sample. To that end, I have introduced an objective function that can reduce the number of negligible weights for any of the objective functions. Essentially, it adds a penalty (parameterised by gamma) on small weights, with a term that looks just like L2 regularisation in machine learning. It may be necessary to try several gamma values to achieve the desired number of non-negligible weights. For the test portfolio of 20 securities, gamma ~ 1 is sufficient.

ef = EfficientFrontier(mu, S)
ef.add_objective(objective_functions.L2_reg, gamma=1)
ef.max_sharpe()
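
Beyond weight bounds and L2 regularisation, the cvxpy-based versions of PyPortfolioOpt also let you attach arbitrary convex constraints via add_constraint. A sketch, assuming that method is available in your version (the specific constraints are purely illustrative):

ef = EfficientFrontier(mu, S)
ef.add_constraint(lambda w: w[0] <= 0.05)          # cap the first asset at 5%
ef.add_constraint(lambda w: w[1] + w[2] >= 0.10)   # at least 10% in assets 2 and 3 combined
ef.max_sharpe()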

Black-Litterman allocation

As of v0.5.0, we now support Black-Litterman asset allocation, which allows you to combine a prior estimate of returns (e.g. the market-implied returns) with your own views to form a posterior estimate. This results in much better estimates of expected returns than just using the mean historical return. Check out the docs for a discussion of the theory, as well as advice on formatting inputs.

from pypfopt import BlackLittermanModel

S = risk_models.sample_cov(df)
viewdict = {"AAPL": 0.20, "BBY": -0.30, "BAC": 0, "SBUX": -0.2, "T": 0.131321}
bl = BlackLittermanModel(S, pi="equal", absolute_views=viewdict, omega="default")
rets = bl.bl_returns()

ef = EfficientFrontier(rets, S)
ef.max_sharpe()
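
Instead of pi="equal", the prior can be built from market capitalisations using the market-implied helpers (the same functions exercised in the Black-Litterman tests quoted further down this page). A sketch, assuming market_prices is a pd.Series of benchmark prices and mcaps is a {ticker: market cap} dict:

from pypfopt import black_litterman, BlackLittermanModel

delta = black_litterman.market_implied_risk_aversion(market_prices)
prior = black_litterman.market_implied_prior_returns(mcaps, delta, S)

bl = BlackLittermanModel(S, pi=prior, absolute_views=viewdict, omega="default")
rets = bl.bl_returns()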

Other optimizers

The features above mostly pertain to solving mean-variance optimization problems via quadratic programming (though this is taken care of by cvxpy). However, we offer different optimizers as well:

  • Mean-semivariance optimization
  • Mean-CVaR optimization
  • Hierarchical Risk Parity, using clustering algorithms to choose uncorrelated assets
  • Markowitz's critical line algorithm (CLA)

Please refer to the documentation for more.
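
A hedged sketch of two of these, assuming HRPOpt and EfficientCVaR are exported by your version of the package:

from pypfopt import HRPOpt, EfficientCVaR, expected_returns

returns = expected_returns.returns_from_prices(df)

# Hierarchical Risk Parity works directly on the returns
hrp = HRPOpt(returns)
hrp_weights = hrp.optimize()

# Mean-CVaR optimisation needs expected returns plus the historical returns
mu = expected_returns.mean_historical_return(df)
ec = EfficientCVaR(mu, returns)
cvar_weights = ec.min_cvar()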

Advantages over existing implementations

  • Includes classical methods (Markowitz 1952 and Black-Litterman), suggested best practices (e.g. covariance shrinkage), and many recent developments and novel features, such as L2 regularisation, shrunk covariance, and hierarchical risk parity.
  • Native support for pandas dataframes: easily input your daily prices data.
  • Extensive practical tests, which use real-life data.
  • Easy to combine with your proprietary strategies and models.
  • Robust to missing data, and price series of different lengths (e.g. FB data only goes back to 2012 whereas AAPL data goes back to 1980).

Project principles and design decisions

  • It should be easy to swap out individual components of the optimization process with the user's proprietary improvements.
  • Usability is everything: it is better to be self-explanatory than consistent.
  • There is no point in portfolio optimization unless it can be practically applied to real asset prices.
  • Everything that has been implemented should be tested.
  • Inline documentation is good: dedicated (separate) documentation is better. The two are not mutually exclusive.
  • Formatting should never get in the way of coding: because of this, I have deferred all formatting decisions to Black.

Testing

Tests are written in pytest (much more intuitive than unittest and the variants in my opinion), and I have tried to ensure close to 100% coverage. Run the tests by navigating to the package directory and simply running pytest on the command line.

PyPortfolioOpt provides a test dataset of daily returns for 20 tickers:

['GOOG', 'AAPL', 'FB', 'BABA', 'AMZN', 'GE', 'AMD', 'WMT', 'BAC', 'GM',
'T', 'UAA', 'SHLD', 'XOM', 'RRC', 'BBY', 'MA', 'PFE', 'JPM', 'SBUX']

These tickers have been informally selected to meet several criteria:

  • reasonably liquid
  • different performances and volatilities
  • different amounts of data to test robustness

Currently, the tests have not explored all of the edge cases and combinations of objective functions and parameters. However, each method and parameter has been tested to work as intended.

Citing PyPortfolioOpt

If you use PyPortfolioOpt for published work, please cite the JOSS paper.

Citation string:

Martin, R. A., (2021). PyPortfolioOpt: portfolio optimization in Python. Journal of Open Source Software, 6(61), 3066, https://doi.org/10.21105/joss.03066

BibTeX:

@article{Martin2021,
  doi = {10.21105/joss.03066},
  url = {https://doi.org/10.21105/joss.03066},
  year = {2021},
  publisher = {The Open Journal},
  volume = {6},
  number = {61},
  pages = {3066},
  author = {Robert Andrew Martin},
  title = {PyPortfolioOpt: portfolio optimization in Python},
  journal = {Journal of Open Source Software}
}

Contributing

Contributions are most welcome. Have a look at the Contribution Guide for more.

I'd like to thank all of the people who have contributed to PyPortfolioOpt since its release in 2018. Special shout-outs to:

  • Tuan Tran (who is now the primary maintainer!)
  • Philipp Schiele
  • Carl Peasnell
  • Felipe Schneider
  • Dingyuan Wang
  • Pat Newell
  • Aditya Bhutra
  • Thomas Schmelzer
  • Rich Caputo
  • Nicolas Knudde

Getting in touch

If you are having a problem with PyPortfolioOpt, please raise a GitHub issue. For anything else, you can reach me directly.


pyportfolioopt's Issues

pandas version for "@" operator between DataFrame and Series

I just tested the latest repo version of PyPortfolioOpt and got some errors related to the usage of @: TypeError: unsupported operand type(s) for @: 'DataFrame' and 'Series'.

I installed PyPortfolioOpt with pip3 install -e ., which means all dependencies were satisfied.

Pandas stable (0.25.3) says @ can be used for dot product. But since I'm using Pandas 0.25.1, it might be a version issue.

Since PyPortfolioOpt currently depends on Pandas 0.21.0 or higher, should we change requirements.txt?


Full test output:

➜  PyPortfolioOpt git:(master) ✗ pytest-3
=============================================================================================== test session starts ===============================================================================================
platform linux -- Python 3.6.9, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: /home/schneider/Dropbox/PyPortfolioOpt, inifile:
collected 140 items                                                                                                                                                                                               

tests/test_base_optimizer.py ............                                                                                                                                                                   [  8%]
tests/test_black_litterman.py ..........FF                                                                                                                                                                  [ 17%]
tests/test_cla.py ..........                                                                                                                                                                                [ 24%]
tests/test_custom_objectives.py ......                                                                                                                                                                      [ 28%]
tests/test_discrete_allocation.py ..............                                                                                                                                                            [ 38%]
tests/test_efficient_frontier.py .........................................                                                                                                                                  [ 67%]
tests/test_expected_returns.py .........                                                                                                                                                                    [ 74%]
tests/test_hrp.py .....                                                                                                                                                                                     [ 77%]
tests/test_objective_functions.py .......                                                                                                                                                                   [ 82%]
tests/test_risk_models.py ...................                                                                                                                                                               [ 96%]
tests/test_value_at_risk.py .....                                                                                                                                                                           [100%]

==================================================================================================== FAILURES =====================================================================================================
____________________________________________________________________________________________ test_market_implied_prior ____________________________________________________________________________________________

    def test_market_implied_prior():
        df = get_data()
        S = risk_models.sample_cov(df)
    
        prices = pd.read_csv(
            "tests/spy_prices.csv", parse_dates=True, index_col=0, squeeze=True
        )
        delta = black_litterman.market_implied_risk_aversion(prices)
    
        mcaps = {
            "GOOG": 927e9,
            "AAPL": 1.19e12,
            "FB": 574e9,
            "BABA": 533e9,
            "AMZN": 867e9,
            "GE": 96e9,
            "AMD": 43e9,
            "WMT": 339e9,
            "BAC": 301e9,
            "GM": 51e9,
            "T": 61e9,
            "UAA": 78e9,
            "SHLD": 0,
            "XOM": 295e9,
            "RRC": 1e9,
            "BBY": 22e9,
            "MA": 288e9,
            "PFE": 212e9,
            "JPM": 422e9,
            "SBUX": 102e9,
        }
>       pi = black_litterman.market_implied_prior_returns(mcaps, delta, S)

tests/test_black_litterman.py:281: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

market_caps = {'AAPL': 1190000000000.0, 'AMD': 43000000000.0, 'AMZN': 867000000000.0, 'BABA': 533000000000.0, ...}, risk_aversion = 2.6854910662283147
cov_matrix =           GOOG      AAPL        FB      BABA      AMZN        GE       AMD  \
GOOG  0.093211  0.046202  0.030801  0.02...  0.056298  0.070269  0.034757  0.146893  0.049530  
SBUX  0.028284  0.050217  0.046886  0.024195  0.049530  0.152589  
risk_free_rate = 0.02

    def market_implied_prior_returns(
        market_caps, risk_aversion, cov_matrix, risk_free_rate=0.02
    ):
        r"""
        Compute the prior estimate of returns implied by the market weights.
        In other words, given each asset's contribution to the risk of the market
        portfolio, how much are we expecting to be compensated?
    
        .. math::
    
            \Pi = \delta \Sigma w_{mkt}
    
        :param market_caps: market capitalisations of all assets
        :type market_caps: {ticker: cap} dict or pd.Series
        :param risk_aversion: risk aversion parameter
        :type risk_aversion: positive float
        :param cov_matrix: covariance matrix of asset returns
        :type cov_matrix: pd.DataFrame or np.ndarray
        :param risk_free_rate: risk-free rate of borrowing/lending, defaults to 0.02.
                               You should use the appropriate time period, corresponding
                               to the covariance matrix.
        :type risk_free_rate: float, optional
        :return: prior estimate of returns as implied by the market caps
        :rtype: pd.Series
        """
        mcaps = pd.Series(market_caps)
        mkt_weights = mcaps / mcaps.sum()
        # Pi is excess returns so must add risk_free_rate to get return.
>       return risk_aversion * cov_matrix @ mkt_weights + risk_free_rate
E       TypeError: unsupported operand type(s) for @: 'DataFrame' and 'Series'

pypfopt/black_litterman.py:44: TypeError
________________________________________________________________________________________ test_black_litterman_market_prior ________________________________________________________________________________________

    def test_black_litterman_market_prior():
        df = get_data()
        S = risk_models.sample_cov(df)
    
        prices = pd.read_csv(
            "tests/spy_prices.csv", parse_dates=True, index_col=0, squeeze=True
        )
        delta = black_litterman.market_implied_risk_aversion(prices)
    
        mcaps = {
            "GOOG": 927e9,
            "AAPL": 1.19e12,
            "FB": 574e9,
            "BABA": 533e9,
            "AMZN": 867e9,
            "GE": 96e9,
            "AMD": 43e9,
            "WMT": 339e9,
            "BAC": 301e9,
            "GM": 51e9,
            "T": 61e9,
            "UAA": 78e9,
            "SHLD": 0,
            "XOM": 295e9,
            "RRC": 1e9,
            "BBY": 22e9,
            "MA": 288e9,
            "PFE": 212e9,
            "JPM": 422e9,
            "SBUX": 102e9,
        }
>       prior = black_litterman.market_implied_prior_returns(mcaps, delta, S)

tests/test_black_litterman.py:351: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

market_caps = {'AAPL': 1190000000000.0, 'AMD': 43000000000.0, 'AMZN': 867000000000.0, 'BABA': 533000000000.0, ...}, risk_aversion = 2.6854910662283147
cov_matrix =           GOOG      AAPL        FB      BABA      AMZN        GE       AMD  \
GOOG  0.093211  0.046202  0.030801  0.02...  0.056298  0.070269  0.034757  0.146893  0.049530  
SBUX  0.028284  0.050217  0.046886  0.024195  0.049530  0.152589  
risk_free_rate = 0.02

    def market_implied_prior_returns(
        market_caps, risk_aversion, cov_matrix, risk_free_rate=0.02
    ):
        r"""
        Compute the prior estimate of returns implied by the market weights.
        In other words, given each asset's contribution to the risk of the market
        portfolio, how much are we expecting to be compensated?
    
        .. math::
    
            \Pi = \delta \Sigma w_{mkt}
    
        :param market_caps: market capitalisations of all assets
        :type market_caps: {ticker: cap} dict or pd.Series
        :param risk_aversion: risk aversion parameter
        :type risk_aversion: positive float
        :param cov_matrix: covariance matrix of asset returns
        :type cov_matrix: pd.DataFrame or np.ndarray
        :param risk_free_rate: risk-free rate of borrowing/lending, defaults to 0.02.
                               You should use the appropriate time period, corresponding
                               to the covariance matrix.
        :type risk_free_rate: float, optional
        :return: prior estimate of returns as implied by the market caps
        :rtype: pd.Series
        """
        mcaps = pd.Series(market_caps)
        mkt_weights = mcaps / mcaps.sum()
        # Pi is excess returns so must add risk_free_rate to get return.
>       return risk_aversion * cov_matrix @ mkt_weights + risk_free_rate
E       TypeError: unsupported operand type(s) for @: 'DataFrame' and 'Series'

pypfopt/black_litterman.py:44: TypeError
================================================================================================ warnings summary =================================================================================================
tests/test_black_litterman.py::test_input_errors
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

tests/test_black_litterman.py::test_parse_views
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

tests/test_black_litterman.py::test_dataframe_input
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

tests/test_black_litterman.py::test_default_omega
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

tests/test_black_litterman.py::test_bl_returns_no_prior
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

tests/test_black_litterman.py::test_bl_relative_views
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

tests/test_black_litterman.py::test_bl_cov_default
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

tests/test_black_litterman.py::test_bl_weights
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

-- Docs: http://doc.pytest.org/en/latest/warnings.html
================================================================================ 2 failed, 138 passed, 8 warnings in 19.60 seconds ================================================================================

Update examples.py

examples.py hasn't been updated since v0.2.0, and as such may not be working since the refactor. I should probably also add examples of the new functionality.

backtest integration

You put it on your roadmap already, but I would like to integrate this into a backtrader backtest. Where do you think I should start?

Is there a limit on the number of assets?

I tried to apply the Markowitz model with 61 assets from CSV files.

These functions execute and show the data that is expected:
mu = expected_returns.mean_historical_return(df)
S = CovarianceShrinkage(df).ledoit_wolf()
ef = EfficientFrontier(mu, S)

but when it comes to this
raw_weights = ef.max_sharpe()

it just shows NaN in all the positions.

I have run the code with fewer assets and it works just fine.

Is there a limit on the number of assets?

Market neutral weights should be normalised

The result of market neutral optimisation is essentially a long and short portfolio. The sum of weights for the longs and the sum of weights for the shorts should probably both add up to one so that it is easier for the user.
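
A minimal post-processing sketch of one way to do this, assuming weights is the ticker-to-weight dict returned by the optimiser and both legs are non-empty:

def normalise_long_short(weights):
    """Scale the long leg and the short leg so that each sums to 1 in absolute value."""
    long_total = sum(w for w in weights.values() if w > 0)
    short_total = -sum(w for w in weights.values() if w < 0)
    return {
        ticker: (w / long_total if w > 0 else (w / short_total if w < 0 else 0.0))
        for ticker, w in weights.items()
    }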

cannot import name 'hrp_portfolio' from 'pypfopt.hierarchical_risk_parity'

Dear All

Within the directory containing the Jupyter notebook script I want to work with (which uses this library), I cloned PyPortfolioOpt and ran the setup for installation. When executing the script I get the error: ImportError: cannot import name 'hrp_portfolio' from 'pypfopt.hierarchical_risk_parity' (C:\Anaconda3\lib\site-packages\pyportfolioopt-0.5.1-py3.7.egg\pypfopt\hierarchical_risk_parity.py).
I will appreciate help to overcome this problem.

Best regards

Add new shrinkage estimators

Hi @robertmartin8,

I'm trying to implement two Ledoit-Wolf estimators in Python: Sharpe's single-factor (single-index) model and the constant-correlation model. I think they would be a good fit for your repo; is that okay, or do you have any suggestions for me?

Undocumented shrinkage estimators

Hi, #20 has offered two variants on the Ledoit-Wolf shrinkage estimator that are undocumented. I did some backtests and I believe that the single factor target may be better than the default constant variance target.

I have two questions regarding this:

  1. Is there a reason why the single factor target isn't documented?
  2. Is there a theoretical reason why the single factor might be better?

Weird behaviour with max_sharpe and efficient_risk methods

I have been loading two datasets :

tickers = ['VBMFX', 'VTSMX']
select_val = 'Adj Close'
#tickers = []'AGG', 'VTSMX']

df_complete = web.DataReader(tickers ,data_source='yahoo', start='2000-01-01', end="2020-01-16")[select_val]

I then calculate the variables required by efficient frontier:

mu = expected_returns.ema_historical_return(df_complete)
S = CovarianceShrinkage(df_complete).ledoit_wolf()
ef = EfficientFrontier(mu, S,
                       #gamma=1,
                       )

# weights = ef.max_sharpe(risk_free_rate=0.005)
eff_risk = ef.efficient_risk(target_risk=0.1, risk_free_rate=0.005)


ef.portfolio_performance(verbose=True)

As a result I get:

Expected annual return: 7.6%
Annual volatility: 3.8%
Sharpe Ratio: 1.46
(0.0759353668927616, 0.03834329016364384, 1.4588045693011014)

It does not matter what risk I indicate; it does not change the given values. I have no such issue with efficient_target, though. Could you take a look at why it seems to get stuck without adjusting to the target volatility? Manually I can tune it to 10%, but the algorithm should be able to do so as well.

Question: Is there a way to set a minimum allocation?

Hi Robert... This is awesome, thanks so much for your work on this. I have two related questions:

  1. I've found that the library is telling me to allocate 0.1% to something, and that's just not realistic for me, partly because my investment platform has a minimum purchase of $250. That's a cool quarter mil... :) I tried weight_bounds=(0.1, 1), but that didn't do what I thought it would, which makes sense now that I think about it. Is there a way to set minimum allocations?

  2. I also tried setting gamma, but that basically told me to buy a roughly equal weight of every possible asset. That may well be optimal, but it's a pain for similar reasons. Is there a way to limit the number of assets chosen?

Thanks!

ImportError: No module named setuptools

I am trying, after cloning it, to install this module on a Linux Mint computer. When executing the command python setup.py install inside the cloned directory, I get the error:

enri@enri-Presario-CQ57-Notebook-PC:/media/enri/TRABAJO/PyPortfolioOpt-master$ python setup.py install
Traceback (most recent call last):
  File "setup.py", line 1, in <module>
    from setuptools import setup
ImportError: No module named setuptools

What can be the cause?


Question: How to set the total number of stocks

Thanks for this program! It is awesome!
Could you help me figure out how to set a limit on the total number of stocks?
For example, the data contains about 2000 stocks, and I want to get a portfolio which holds only 100.

The covariance matrix calculation problem

I assume you are very much aware of the famous paper Optimal Versus Naive Diversification: How Inefficient is the 1/N Portfolio Strategy? that pointed out the issues with trying to calculate the covariance matrix for portfolio optimization. The documentation states that:

Includes both classical methods (Markowitz 1952), suggested best practices (e.g covariance shrinkage), along with many recent developments and novel features, like L2 regularisation, shrunk covariance, hierarchical risk parity.

From these recent advances in the field you mentioned, would you consider the problem to be reasonably solved now? Would you kindly point out some papers with those advances in portfolio optimization? Thank you.

Initial weights Guess

Hi,

How can I add initial weights x0 for the optimization algorithm, for example in max_sharpe?

Is there any stock limit for Efficient frontier

Can I use 500 stocks for portfolio optimization to calculate weights? For example, what gamma value would you recommend? The docs say gamma ~ 1 for 20 stocks, so what if we use 500 stocks?

We tried a gamma value of 100 for 500 stocks, but 80% of the tickers were assigned zero weight.

Can you please help us with this ?

Refactor risk and return models

Currently, there is a lot of repeated code within risk_models.py and expected_returns.py.

Almost all of the functions therein take prices as inputs, before processing them into returns, with the following couple of lines repeated a lot.

    if not isinstance(prices, pd.DataFrame):
        warnings.warn("prices are not in a dataframe", RuntimeWarning)
        prices = pd.DataFrame(prices)
    daily_returns = prices.pct_change().dropna(how="all")

In the spirit of DRY, I'd like to refactor this without complicating the API. Haven't decided the best way of proceeding. I suppose I could put these lines into a function, but that would probably need to go in a separate file (not very elegant IMO).
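
One possible shape for the refactor is a small shared helper; a sketch (the name returns_from_prices is only illustrative here, although later versions of the package did gain a similar utility in expected_returns):

import warnings
import pandas as pd

def returns_from_prices(prices):
    """Convert a prices DataFrame into daily returns, warning if the input is not a DataFrame."""
    if not isinstance(prices, pd.DataFrame):
        warnings.warn("prices are not in a dataframe", RuntimeWarning)
        prices = pd.DataFrame(prices)
    return prices.pct_change().dropna(how="all")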

EfficientFrontier - error

Is this some python-version issue?
It worked fine until this point.

ef = EfficientFrontier(mu, S)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "pypfopt/efficient_frontier.py", line 84, in __init__
    super().__init__(len(tickers), tickers, weight_bounds)
TypeError: super() takes at least 1 argument (0 given)

Generalize bounds to be specific for each stock

Hi,

I have a portfolio for which I want to keep certain stocks at a given level and optimize the rest, but that's not possible with the current implementation.

Can we have the bounds input be either a tuple or an array of tuples? The change is simple enough and I have something working already. I'd be happy to push it up
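
For reference, later versions of PyPortfolioOpt accept exactly this: a sequence of (lower, upper) pairs, one per asset. The snippet below is only a sketch assuming that behaviour, with mu and S computed as usual:

n_assets = len(mu)
# Pin the first two assets at fixed levels and let the rest float between 0 and 10%
bounds = [(0.05, 0.05), (0.10, 0.10)] + [(0, 0.1)] * (n_assets - 2)
ef = EfficientFrontier(mu, S, weight_bounds=bounds)
ef.min_volatility()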

pipenv support

Currently, using pipenv install PyPortfolioOpt results in a 'Not Found' error.

CVar Bugs?

Just a q.

Really like the lib, thank you.

I notice this comment in the examples "# CVaR optimisation - very buggy"

What are the known issues? I'm keen to use it, so maybe I can try to fix some of them along the way.

Cheers

Conditional value-at-risk bug

Currently, the CVaR optimisation using NoisyOpt is a little buggy. Because the weights aren't normalised by default, we must post-process them. However, this post-processing also means that the final weights don't respect the initial bounds. I'd appreciate any suggestions for a fix.

ModuleNotFoundError: No module named 'pulp'

Dear Robert
As you can see in the attached image, importing the modules returns the error mentioned in the issue title.
Perhaps modifications to the API have affected some package names. What should be the new names in the API for DiscreteAllocation and get_latest_prices?
I will appreciate your help. Best regards

scipy.stats.kde: LinAlgError: singular matrix

Thanks a lot for coding and sharing this awesome library!
When I use min_cvar() in value_at_risk.py, a LinAlgError raised:

> LinAlgError                               Traceback (most recent call last)
<ipython-input-36-b8e9d23c399e> in <module>()
----> 1 a.opt_min_cvar()

<ipython-input-29-cc78a27bceb6> in opt_min_cvar(self, s, beta, random_state)
    159             x0=self.initial_weights,
    160             niter=1000,
--> 161             paired=False,
    162         )
    163         return result

C:\ProgramData\Anaconda3\lib\site-packages\noisyopt\main.py in minimizeSPSA(func, x0, args, bounds, niter, paired, a, c, disp, callback)
    323             xplus = project(x + ck*delta)
    324             xminus = project(x - ck*delta)
--> 325             grad = (funcf(xplus, **fkwargs) - funcf(xminus, **fkwargs)) / (xplus-xminus)
    326         x = project(x - ak*grad)
    327         # print 100 status updates if disp=True

C:\ProgramData\Anaconda3\lib\site-packages\noisyopt\main.py in funcf(x, **kwargs)
    306         # freeze function arguments
    307         def funcf(x, **kwargs):
--> 308             return func(x, *args, **kwargs)
    309 
    310     N = len(x0)

<ipython-input-29-cc78a27bceb6> in _obj_cvar(self, wgts, ret_mat, s, beta, random_state)
    129         # Sample from the historical distribution
    130         print(pf_rets)
--> 131         dist = scipy.stats.gaussian_kde(pf_rets)
    132         sample = dist.resample(s)
    133         # Calculate the value at risk

C:\ProgramData\Anaconda3\lib\site-packages\scipy\stats\kde.py in __init__(self, dataset, bw_method)
    170 
    171         self.d, self.n = self.dataset.shape
--> 172         self.set_bandwidth(bw_method=bw_method)
    173 
    174     def evaluate(self, points):

C:\ProgramData\Anaconda3\lib\site-packages\scipy\stats\kde.py in set_bandwidth(self, bw_method)
    497             raise ValueError(msg)
    498 
--> 499         self._compute_covariance()
    500 
    501     def _compute_covariance(self):

C:\ProgramData\Anaconda3\lib\site-packages\scipy\stats\kde.py in _compute_covariance(self)
    508             self._data_covariance = atleast_2d(np.cov(self.dataset, rowvar=1,
    509                                                bias=False))
--> 510             self._data_inv_cov = linalg.inv(self._data_covariance)
    511 
    512         self.covariance = self._data_covariance * self.factor**2

C:\ProgramData\Anaconda3\lib\site-packages\scipy\linalg\basic.py in inv(a, overwrite_a, check_finite)
    973         inv_a, info = getri(lu, piv, lwork=lwork, overwrite_lu=1)
    974     if info > 0:
--> 975         raise LinAlgError("singular matrix")
    976     if info < 0:
    977         raise ValueError('illegal value in %d-th argument of internal '

LinAlgError: singular matrix

The input data is the monthly simple returns of 3 stocks (Apple, Microsoft, and Google) from Jan 2005 to Dec 2018:

|AAPL.O|MSFT.O|GOOGL.O
2005-01-31|0.194099|-0.016467|0.014679
2005-02-28|-0.416645|-0.042618|-0.039004
2005-03-31|-0.071110|-0.039348|-0.039789
2005-04-29|-0.134629|0.046752|0.218769
2005-05-31|0.102579|0.019763|0.260318
2005-06-30|-0.074172|-0.037209|0.060879
2005-07-29|0.158653|0.030998|-0.021724
...
2018-06-29|-0.009418|-0.002327|0.026536
2018-07-31|0.027983|0.075753|0.086814
2018-08-31|0.196227|0.058918|0.003732
2018-09-28|-0.008303|0.018161|-0.020068
2018-10-31|-0.030478|-0.066101|-0.096514
2018-11-30|-0.184045|0.038199|0.017486
2018-12-31|-0.116698|-0.084047|-0.058298

It seems that one of the iterates produced by noisyopt.minimizeSPSA is an all-zero vector, and scipy.stats.kde then raises LinAlgError: singular matrix.

I would appreciate help in solving this problem.
Thanks!

Another optimizer on CVAR function

Hi @robertmartin8

I was thinking of trying the below optimiser package with the cvar problem.

https://github.com/uqfoundation/mystic/blob/master/examples/example08.py

Would I just need to create another function in this class -
https://github.com/robertmartin8/PyPortfolioOpt/blob/master/pypfopt/value_at_risk.py

and replace the line -

    result = noisyopt.minimizeSPSA(
        objective_functions.negative_cvar,
        args=args,
        bounds=self.bounds,
        x0=self.initial_guess,
        niter=1000,
        paired=False,
    )

with -

# use DE to solve 8th-order Chebyshev coefficients
npop = 10*ndim
solver = DifferentialEvolutionSolver2(ndim,npop)
solver.SetRandomInitialPoints(min=[-100]*ndim, max=[100]*ndim)
solver.SetGenerationMonitor(stepmon)
solver.enable_signal_handler()
solver.Solve(chebyshev8cost, termination=VTR(0.01), strategy=Best1Exp, \
             CrossProbability=1.0, ScalingFactor=0.9, \
             sigint_callback=plot_solution)
solution = solver.Solution()

I know it's not exact, but am I on the right track?
Best,
Andrew

The error persists: No module named 'pulp'

I have re-created the environment and installed pulp with:
pip install pulp==1.6.10
When executing
from pypfopt import discrete_allocation
it keeps returning the error:
`ModuleNotFoundError Traceback (most recent call last)
in
----> 1 from pypfopt import discrete_allocation

C:\Anaconda3\lib\site-packages\pyportfolioopt-0.5.1-py3.7.egg\pypfopt\discrete_allocation.py in
6 import numpy as np
7 import pandas as pd
----> 8 import pulp
9
10

ModuleNotFoundError: No module named 'pulp'
`

Does it make sense to set benchmark=risk_free_rate in semicovariance?

It seems that semicovariance, as implemented in pyportfolioopt, sets a penalty for assets that go below a certain threshold. Even though it's natural to demand this threshold to be non-negative, wouldn't it be more reasonable to give it a more meaningful value as default, e.g., the risk free rate, instead of zero?

Sorry if I'm missing something here, is there a reason for this being zero by default?

Does your library have a 'long only' constraint?

I am curious to know if your library has a 'long only' constraint while optimizing a portfolio. It seems like stocks' weights are always positive, but the documents do not specify whether this is the case or not.

Thank you for creating such a nice portfolio optimization library.

Calculate correlation matrix by using sample covariance.

I want to draw a correlation graph from the correlation matrix, so I need to calculate the correlation matrix from the sample covariance. Based on the standard formula, the correlation matrix is computed from the covariance matrix.

def cov2cor(cov_in):
    if not isinstance(cov_in, pd.DataFrame):
        warnings.warn("cov are not in a dataframe", RuntimeWarning)
        cov_in = pd.DataFrame(cov_in)
        cov = cov_in.values
    else:
        cov = cov_in.values
    
    p = len(cov)
    nd = cov.diagonal()
    nsd = np.sqrt(nd)
    e = np.eye(p)
    for i in range(p):
        for j in range(p):
            e[i,j] = nsd[i]*nsd[j]
    cor = cov/e
    cor_df = pd.DataFrame(cor ,index = cov_in.index, columns = cov_in.columns)
    return cor_df

Hopefully this will work for others as well.
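
For what it's worth, the same conversion can be vectorised with np.outer (and recent versions of the library appear to ship a similar helper, risk_models.cov_to_corr, which is worth checking before rolling your own):

import numpy as np
import pandas as pd

def cov_to_corr(cov_df):
    """Convert a covariance DataFrame into a correlation DataFrame."""
    sd = np.sqrt(np.diag(cov_df.values))
    corr = cov_df.values / np.outer(sd, sd)
    return pd.DataFrame(corr, index=cov_df.index, columns=cov_df.columns)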

Allow the use of center of mass, half-life or alpha as alternatives to span

Right now, expected_returns.ema_historical_return and risk_models.exp_cov only support using span, but they could be changed to allow direct usage of the other equivalent optional inputs of pandas.DataFrame.ewm (namely com, span, halflife, alpha). (risk_models._pair_exp_cov should be modified as well.)

This is simple to do. If there's interest, I could prepare a PR myself.

Functions only accept prices data not returns data

This was originally what I intended, but a user has notified me that some data sources (e.g. the Fama-French 30) only provide returns. If you are having a problem with this, a quick workaround is to construct a new series of "prices" by adding a row of 1s to your dataset and then taking the cumulative product (pd.DataFrame.cumprod()). This works because most methods first take the percentage change to make a returns series, and this percentage change does not care about the starting value.

Would love to hear any opinions on whether this is an issue, and if so, what would be the cleanest way to implement optionally providing returns instead of prices. I guess I could do it by having a boolean parameter price_data=True; if False then the data passed in will be interpreted as returns. However, I don't want to clutter the API without good reason.
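
A minimal sketch of that workaround, assuming returns_df is a DataFrame of simple (not log) returns with a DatetimeIndex:

import pandas as pd

# Prepend a row of zero returns, then compound (1 + r) to build a pseudo
# price series that starts at an arbitrary value of 1.
first_row = pd.DataFrame(
    0.0,
    index=[returns_df.index[0] - pd.Timedelta(days=1)],
    columns=returns_df.columns,
)
pseudo_prices = (1 + pd.concat([first_row, returns_df])).cumprod()

mu = expected_returns.mean_historical_return(pseudo_prices)
S = risk_models.sample_cov(pseudo_prices)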

Rolling window

How would I implement a rolling window on the risk models and efficientfrontier?
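
As far as I know there is no built-in rolling mode, but a simple loop over rolling slices of the price DataFrame works; a rough sketch, assuming df is a daily price DataFrame and the usual pypfopt imports:

window = 252   # lookback length in trading days
step = 21      # rebalance roughly monthly

weights_over_time = {}
for end in range(window, len(df), step):
    window_prices = df.iloc[end - window:end]
    mu = expected_returns.mean_historical_return(window_prices)
    S = risk_models.sample_cov(window_prices)
    ef = EfficientFrontier(mu, S)
    ef.max_sharpe()
    weights_over_time[df.index[end - 1]] = ef.clean_weights()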

Include grouped industry constraints

Hi,
Is there a way to include a grouped-by-industry constraint in the portfolio optimisation?
for example:

  • Between 0 and 10% of my portfolio weights have to be in industry "Technology"
  • Between 15% and 30% of my portfolio weights have to be in industry "Energy"

A very good example is shown here, which uses Scipy as well.
Thanks
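
Newer versions of PyPortfolioOpt expose an add_sector_constraints method for exactly this use case; a hedged sketch, assuming that method is available in your version and using a purely illustrative industry mapping:

sector_mapper = {"GOOG": "Technology", "AAPL": "Technology", "XOM": "Energy", "RRC": "Energy"}
sector_lower = {"Technology": 0.00, "Energy": 0.15}
sector_upper = {"Technology": 0.10, "Energy": 0.30}

ef = EfficientFrontier(mu, S)
ef.add_sector_constraints(sector_mapper, sector_lower, sector_upper)
ef.max_sharpe()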

Additional Optimisers

Hi,
Thanks for putting out this amazing library. Have you (would you) consider adding these two additional optimization methods in the near future?

  1. Risk Parity
  2. Diversification Ratio

In case you are not planning on adding these, how difficult do you think they would be for a novice Python person to implement using your custom objective function?

Thanks a lot.

Only for shorts

Hi there!!

Can we optimize only for shorts? I tried setting the weight_bounds to (-1, 0) but the results are NaNs.

tks!

Divide by zero when passing in omega with 0 on the diagonal

I'm slowly but surely trying to recreate the results in the Idzorek paper using PyPortfolioOpt.

On p. 21 he mentions "Setting all of the diagonal elements of omega equal to zero is equivalent to specifying 100% confidence in all of the K views".

But when I pass in a zero diagonal matrix as omega I get a divide by zero error when I call bl_returns(). This is also a problem with bl_cov()

During the Coursera course https://www.coursera.org/learn/advanced-portfolio-construction-python the instructor notes that inverting omega is not always possible so he references two alternative ways to calculate the expected returns and covariance matrix:

[screenshot of the two alternative formulas from the course]

Maybe that can inspire?

The instructor specifically refers to https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1314585, where Walters has rewritten the two expressions as versions without inverting omega. When I compare the formulas on the screen dump with the Walters paper, it seems that the first plus should be a minus, but given that it has been quite a long time since I dived deep into this kind of math, I might be wrong.

BTW, I can approximate a solution by filling the diagonal with np.fill_diagonal(omega_100, 1E-8) but that seems pretty hack-ish.

omega_inv = np.diag(1 / np.diag(self.omega))
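
For reference, the Walters formulation sidesteps inverting omega by solving a linear system instead; a numpy sketch, where pi, Sigma, P, Q, omega and tau are the usual Black-Litterman quantities as plain arrays:

import numpy as np

# Posterior returns without forming omega^{-1}:
#   E[R] = pi + tau*Sigma*P' (P tau*Sigma P' + omega)^{-1} (Q - P pi)
A = P @ (tau * Sigma) @ P.T + omega   # invertible if P has full row rank and Sigma is
                                      # positive definite, even when omega is singular
posterior_returns = pi + (tau * Sigma) @ P.T @ np.linalg.solve(A, Q - P @ pi)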

Updated function - Portfolio opt

Hi @robertmartin8

I am getting an error with the code below, and I am not sure what the updated function is.

# imports a tool to convert capital into shares

from pypfopt import discrete_allocation

# returns the number of shares to buy given the asset weights, prices, and capital to invest
alloc = discrete_allocation.portfolio(
    weights, 
    df_buy_in['Buy In: 2014-12-31'], 
    total_portfolio_value=capital
)

# returns same as above but for the MIS
mis_alloc = discrete_allocation.portfolio(
    mis_weights, 
    df_mis_buy_in['Buy In: 2014-12-31'],
    total_portfolio_value=capital
)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-63-7ff1e26da9af> in <module>
      3 
      4 # returns the number of shares to buy given the asset weights, prices, and capital to invest
----> 5 alloc = discrete_allocation.portfolio(
      6     weights,
      7     df_buy_in['Buy In: 2014-12-31'],

AttributeError: module 'pypfopt.discrete_allocation' has no attribute 

Tests for Hierarchical Risk Parity

Right now, I've only got one test for HRP, and it doesn't meaningfully target the inner workings:

def test_hrp_portfolio():
    df = get_data()
    returns = df.pct_change().dropna(how="all")
    w = hrp_portfolio(returns)
    assert isinstance(w, dict)
    assert set(w.keys()) == set(df.columns)
    np.testing.assert_almost_equal(sum(w.values()), 1)

I would appreciate help in testing some of the components, like the clustering and linkages

Output portfolio weights to text

One user has raised a point that we should be able to output weights to a text file. I think this is a good point.

Something simple like

ef.save_to_text()
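
For what it's worth, the quick example near the top of this page already uses a method along these lines; in current versions it is called save_weights_to_file and appears to infer the output format from the file extension:

ef.save_weights_to_file("weights.txt")   # .txt, .csv and .json appear to be supported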
