
pyfair's Introduction

pyfair


Factor Analysis of Information Risk (FAIR) model written in Python.

This package endeavors to create a simple API for automating the creation of FAIR Monte Carlo risk simulations.

This is based on the terms found in:

  1. Open FAIR™ RISK TAXONOMY (O-RT); and,
  2. Open FAIR™ RISK ANALYSIS (O-RA)

"Open FAIR" is a trademark of the Open Group.

Installation

pyfair is available on PyPI. To use pyfair with your Python installation, you can run:

pip install pyfair

Documentation

Documentation can be found at the Read the Docs site.

Code

import pyfair

# Create using LEF (PERT), PL (PERT), and SL (constant)
model1 = pyfair.FairModel(name="Regular Model 1", n_simulations=10_000)
model1.input_data('Loss Event Frequency', low=20, mode=100, high=900)
model1.input_data('Primary Loss', low=3_000_000, mode=3_500_000, high=5_000_000)
model1.input_data('Secondary Loss', constant=3_500_000)
model1.calculate_all()

# Create another model using LEF (Normal) and LM (PERT)
model2 = pyfair.FairModel(name="Regular Model 2", n_simulations=10_000)
model2.input_data('Loss Event Frequency', mean=.3, stdev=.1)
model2.input_data('Loss Magnitude', low=2_000_000_000, mode=3_000_000_000, high=5_000_000_000)
model2.calculate_all()

# Create metamodel by combining 1 and 2
mm = pyfair.FairMetaModel(name='My Meta Model!', models=[model1, model2])
mm.calculate_all()

# Create report comparing Model 1 with the metamodel
fsr = pyfair.FairSimpleReport([model1, mm])
fsr.to_html('output.html')

Report Output

The generated HTML report includes an Overview section, a Tree graph, and a Violin plot (screenshots not reproduced here).

Serialized Model

{
    "Loss Magnitude": {
        "mean": 100000,
        "stdev": 20000
    },
    "Loss Event Frequency": {
        "low": 20,
        "mode": 90,
        "high": 95,
        "gamma": 4
    },
    "name": "Sample Model",
    "n_simulations": 10000,
    "random_seed": 42,
    "model_uuid": "2e55fba4-c897-11ea-881b-f26e0bbd6dbc",
    "type": "FairModel",
    "creation_date": "2020-07-17 20:37:03.122525"
}

pyfair's People

Contributors

cneskey, coronabeachidme, darvid, gymzombie, theonaunheim

pyfair's Issues

Some input_data abbreviations are failing

These inputs throw a KeyError:

model.input_data('C', low=10, mode=30, high=40)
model.input_data('CF', low=10, mode=30, high=40)
model.input_data('Contact Frequency', low=10, mode=30, high=40)

This works fine:

model.input_data('Contact', low=10, mode=30, high=40)

Similarly, 'A' and 'Probability of Action' fail, but 'Action' works.

I've tested all other abbreviations from target_map and they seem to work fine. Please do check.

Currency prefix on charts

Good day,

Adjusting the currency prefix on the FairSimpleReport doesn't change the violin chart under "Components and Aggregate Risk."

Regards

Post-Node Bayesian Processing

Possibly add an additional step immediately after processing that allows:

  1. Bayesian processing to filter or modify the output vector (see the sketch below)
  2. Multi-level
  3. Serializable to JSON so it can be stored with the model
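
A minimal sketch of what item 1 could look like today, done outside the library with export_results() rather than as a built-in hook; the filtering rule below is purely illustrative.

import pyfair

# Hypothetical post-calculation step applied externally via export_results();
# the risk cap used as a "filter" here stands in for a real Bayesian update.
model = pyfair.FairModel(name="Post-Processed Model", n_simulations=10_000)
model.input_data('Loss Event Frequency', low=20, mode=100, high=900)
model.input_data('Loss Magnitude', low=1_000_000, mode=2_000_000, high=5_000_000)
model.calculate_all()

results = model.export_results()

# Keep only simulations whose Risk falls under a cap, then inspect the
# modified output vector.
filtered_risk = results.loc[results['Risk'] < 500_000_000, 'Risk']
print(filtered_risk.describe())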

Modernize 🧑‍🔬 (build system, PyPI classifiers, dependencies, unit testing, CONTRIBUTING.md, pre-commit hooks, lint, semantic releases)

  • move to modern build system and packaging standards (poetry, pdm; pyproject.toml)
  • update pypi classifier list to declare supported python versions (3.8+)
  • upgrade dated dependencies and ensure compatibility
  • move to modern unit testing framework (pytest)
  • add CONTRIBUTING.md
  • add pre-commit hooks
  • enforce black/ruff formatting
  • enforce conventional commits and use semantic release

Create raw_input function

Allow people to input their own raw distributions in the event they want to add pooled or otherwise unsupported distributions. Figure out how to feed this into the supplied parameters without bloating things.

Secondary Loss Event Frequency PERT input values should be between 0 and 1

Similar to the validation checks in place for 'Probability of Action', 'Vulnerability', 'Control Strength', and 'Threat Capability', there should probably be a check in place for Secondary Loss Event Frequency as well. The value will eventually be multiplied by Loss Event Frequency, so it should be less than or equal to one; more specifically, between zero and one.

I looked through some other issues to see whether others are using it as I am, and they are (for example, issue 17).
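
A minimal sketch of the requested bounds check, written as a standalone helper rather than pyfair's actual validation code; the keyword names simply mirror the PERT keywords used elsewhere on this page.

def check_slef_bounds(low, mode, high):
    """Reject Secondary Loss Event Frequency PERT parameters outside [0, 1]."""
    for name, value in (('low', low), ('mode', mode), ('high', high)):
        if not 0 <= value <= 1:
            raise ValueError(
                f"Secondary Loss Event Frequency '{name}' must be between 0 and 1, got {value}"
            )

check_slef_bounds(low=0.1, mode=0.5, high=0.9)   # passes
# check_slef_bounds(low=10, mode=30, high=40)    # raises ValueError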

futurewarning in model.py

pyfair\pyfair\model\model.py:465: FutureWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
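
The usual fix for this pandas warning is to construct the empty Series with an explicit dtype; a sketch of the change (the surrounding model.py code is not reproduced here):

import pandas as pd

# Before (triggers the FutureWarning): s = pd.Series()
# After: declare the dtype explicitly.
s = pd.Series(dtype='float64')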

Matplotlib UserWarning in violin.py

Receiving this warning when running testrunner.py:

pyfair\pyfair\report\violin.py:46: UserWarning: FixedFormatter should only be used together with FixedLocator
  ax.axes.xaxis.set_ticklabels(columns)
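
The warning comes from attaching labels without first pinning the tick positions. A likely fix, sketched against a generic Axes rather than pyfair's violin.py:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
columns = ['Model A', 'Model B', 'Model C']

# Fix the tick locations first, then set the labels; this pairs the
# FixedFormatter with a FixedLocator and silences the warning.
ax.set_xticks(list(range(1, len(columns) + 1)))
ax.set_xticklabels(columns)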

DB Write and read give different results

Not sure if this is a bug or intended functionality.

I create a model and write it to the database. (v1)
I decide I want to change the parameters of that model and adjust the values, then write it to the database. (v2)
I load the model from the database, and the output shows the model for v1.

Sample code to reproduce:

# Create an initial model
model = FairModel('System 1, Risk A')
model.bulk_import_data({
    'Loss Event Frequency': {'mean':.3, 'stdev':.1},
    'Loss Magnitude': {'constant': 5_000_000}
})
model.calculate_all()

# Create a database file and store that model
db = FairDatabase('pyfair.sqlite3')
db.store(model)

# Create a new version of that model
model = FairModel('System 1, Risk 1')
model.bulk_import_data({
    'Threat Event Frequency': {'low':.3, 'mode':.6, 'high':.9},
    'Vulnerability':{'constant':.5},
    'Loss Magnitude': {'constant': 5_000_000}
})
model.calculate_all()

# Write the updated version of that model to the database
db.store(model)

# Load the model
reconstituted_model = db.load('System 1, Risk A')
reconstituted_model.calculate_all()

fsr = FairSimpleReport([reconstituted_model])
fsr.to_html('output.html')

This might also just be a style/documentation issue: since we read and write based on the model name, but the database reads and writes based on the UUID, it's unexpected behavior for a new user.

Report relying on OS specific variable

Base report relies on an OS-specific variable, USERNAME, that may not be present on other operating systems:

metadata = pd.Series({
    'Author': os.environ['USERNAME'],

Suggest validating the environment variable first.
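
A sketch of the suggested validation, falling back to getpass.getuser() when USERNAME is absent (for example on Linux or macOS, where USER is typically set instead):

import getpass
import os

import pandas as pd

# Look up the username defensively instead of assuming the Windows-style
# USERNAME variable exists; getpass.getuser() checks several environment
# variables and falls back to the OS account database.
author = os.environ.get('USERNAME') or getpass.getuser()

metadata = pd.Series({
    'Author': author,
})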

User Surfaced Issues

  • Non-Calculated Models going into FairSimpleReport need better error messages.
  • SLEM is improperly formatted and causes an error

Risk Tolerance Curve suggestion

This is not strictly required by the FAIR methodology but would be a nice addition.
Typically a CISO will draw the LOE for the inherent and residual risk; they will then ask the CTO/CEO/CFO to provide a few data points to build a risk tolerance curve.
An example from a PAN talk is attached in the original issue (image not reproduced here).

The tolerance curve would be interpolated from the few data points provided (the user should be able to choose linear, exponential, or polynomial interpolation).

The tolerance curve should then be intersected with the other two to find the break-out points.
It would be wonderful to have a class to inject such an LOE from input data.

Cheers!
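
A rough sketch of the idea, done outside pyfair: interpolate a tolerance curve from a few user-supplied points and compare it against a loss exceedance curve built from simulated Risk values (for example model.export_results()['Risk']). The point values and the linear interpolation below are illustrative only.

import numpy as np

# A few board-supplied tolerance points: (loss amount, acceptable annual probability)
tolerance_points = np.array([
    [1_000_000, 0.50],
    [10_000_000, 0.10],
    [100_000_000, 0.01],
])

# Stand-in for a simulated Risk column
risk = np.random.default_rng(42).lognormal(mean=16, sigma=1.0, size=10_000)

# Empirical loss exceedance curve
losses = np.sort(risk)
exceedance = 1.0 - np.arange(1, losses.size + 1) / losses.size

# Linearly interpolated tolerance at the same loss levels
tolerance = np.interp(losses, tolerance_points[:, 0], tolerance_points[:, 1])

# Break-out points: losses where simulated exceedance rises above tolerance
breakouts = losses[exceedance > tolerance]
print(breakouts.min() if breakouts.size else "within tolerance")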

Multiple Errors Thrown: Mac OS Ventura - (Matplotlib + Pandas deprecations)

Getting the following, any suggestions?

/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyfair/report/base_report.py:257: FutureWarning: The provided callable <function mean at 0x104970b80> is currently using Series.mean. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "mean" instead.
risk_results = risk_results.agg([np.mean, np.std, np.min, np.max])
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyfair/report/base_report.py:257: FutureWarning: The provided callable <function std at 0x104970cc0> is currently using Series.std. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "std" instead.
risk_results = risk_results.agg([np.mean, np.std, np.min, np.max])
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyfair/report/base_report.py:257: FutureWarning: The provided callable <function min at 0x1049702c0> is currently using Series.min. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "min" instead.
risk_results = risk_results.agg([np.mean, np.std, np.min, np.max])
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyfair/report/base_report.py:257: FutureWarning: The provided callable <function max at 0x104970180> is currently using Series.max. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "max" instead.
risk_results = risk_results.agg([np.mean, np.std, np.min, np.max])
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyfair/report/base_report.py:260: FutureWarning: DataFrame.applymap has been deprecated. Use DataFrame.map instead.
overview_df = risk_results.applymap(lambda x: self._format_strings['Risk'].format(x))
Traceback (most recent call last):
  File "/Users/seq/test.py", line 22, in <module>
    fsr.to_html('output.html')
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyfair/report/base_report.py", line 164, in to_html
    output = self._construct_output()
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyfair/report/simple_report.py", line 57, in _construct_output
    hist = self._get_distribution(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyfair/report/base_report.py", line 220, in _get_distribution
    fig, ax = fdc.generate_image()
              ^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyfair/report/distribution.py", line 107, in generate_image
    tick.label.set_horizontalalignment('left')
    ^^^^^^^^^^
AttributeError: 'XTick' object has no attribute 'label'. Did you mean: '_label'?
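
The final AttributeError is a Matplotlib API change: the private Tick.label attribute was removed in favour of the public label1. A likely patch for distribution.py, sketched against a generic Axes:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()

# Before (fails on current Matplotlib): tick.label.set_horizontalalignment('left')
# After: use the public label1 attribute.
for tick in ax.xaxis.get_major_ticks():
    tick.label1.set_horizontalalignment('left')

# Equivalent one-liner:
# plt.setp(ax.get_xticklabels(), horizontalalignment='left')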

Set another Currency

Hi,

Thanks for your work, the result for FAIR analyses is really impressive!
However, it would be great to be able to change the currency in the reports (for example € instead of $).
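
The 'Currency prefix on charts' issue above implies the report classes already take a currency prefix option; assuming it is a currency_prefix keyword (treat the keyword name as an assumption and check the docs), usage might look like:

import pyfair

model = pyfair.FairModel(name="Euro Model", n_simulations=10_000)
model.input_data('Loss Event Frequency', low=20, mode=100, high=900)
model.input_data('Loss Magnitude', low=2_000_000, mode=3_000_000, high=5_000_000)
model.calculate_all()

# currency_prefix is assumed from the earlier issue; verify against the documentation.
fsr = pyfair.FairSimpleReport([model], currency_prefix='€')
fsr.to_html('output_eur.html')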

Store finished simulation calculations (Min, Max, Mean) with Serialized JSON model

More bad ideas and annoying requests from me :)

TL;DR = As a user of pyfair I'd like to have the results of calculated nodes from a simulation stored in the Serialized JSON model so that I can yeet it into DocumentDB and marvel at my pseudoscientific riskiness

On a serious note, this may be a bit hard, but I would like a way to store the calculated values from various DataFrame math operations (min, max, mean, mode, etc.) in the JSON model output.

For my purposes, I am able to supply data to all of the "child nodes" and let pyfair calculate VULN, TEF, LEF, LM and, ultimately, Risk for me. This also means I know exactly what I need to pull out of the DataFrame returned by .export_results(); it's hacky, but it's really fast because Pandas is awesome.

def simulations():
    '''
    HEAVILY TRUNCATED...
    '''
    # Run the simulation for the specific Threat Community
    tmodel = malwareModel.calculate_all()

    # In this section we will write the Model and the Simulation Results to DocDB
    tcomModelJson = json.loads(malwareModel.to_json())
    
    # Write out a DataFrame and perform mean, min, and max calculations on the calculated columns,
    # then add these to the JSON payload to keep the model inputs and outputs together
    tcomModelDf = tmodel.export_results()
    
    # MAX
    tcomModelJson['MaxRisk'] = int(tcomModelDf['Risk'].max())
    tcomModelJson['MaxLEF'] = float(tcomModelDf['Loss Event Frequency'].max())
    tcomModelJson['MaxTEF'] = float(tcomModelDf['Threat Event Frequency'].max())
    tcomModelJson['MaxVuln'] = float(tcomModelDf['Vulnerability'].max())
    tcomModelJson['MaxLM'] = int(tcomModelDf['Loss Magnitude'].max())
    
    # MIN
    tcomModelJson['MinRisk'] = int(tcomModelDf['Risk'].min())
    tcomModelJson['MinLEF'] = float(tcomModelDf['Loss Event Frequency'].min())
    tcomModelJson['MinTEF'] = float(tcomModelDf['Threat Event Frequency'].min())
    tcomModelJson['MinVuln'] = float(tcomModelDf['Vulnerability'].min())
    tcomModelJson['MinLM'] = int(tcomModelDf['Loss Magnitude'].min())
    
    # MEAN
    tcomModelJson['MeanRisk'] = int(tcomModelDf['Risk'].mean())
    tcomModelJson['MeanLEF'] = float(tcomModelDf['Loss Event Frequency'].mean())
    tcomModelJson['MeanTEF'] = float(tcomModelDf['Threat Event Frequency'].mean())
    tcomModelJson['MeanVuln'] = float(tcomModelDf['Vulnerability'].mean())
    tcomModelJson['MeanLM'] = int(tcomModelDf['Loss Magnitude'].mean())

Looking at the code in model.py, it looks like you can probably do this, but it would need a check to make sure calculate_all() was called, and a more dynamic way to bring in only calculated nodes, since everything supplied would already be in there. Though there is a case to be made for populating everything, as your inputs will obviously always have a different output.

My use case: I can now store all of the simulation data (UUID, seed, data, supplied fields) if I ever want to rerun simulations and measure them over time. For example, I can measure macro-trends of our Resistance Strength or Contact Frequency over time for specific apps / businesses and put them into a heat map, because that's what risk is, right? On the flip side, having the mean/max/min/mode/etc. of the simulation stored within the JSON payload allows for similar analysis, data warehousing / data lake / BI, and some post-simulation use cases such as comparing the risk to revenue contributions and combining with radically different models (i.e. run simulations where SLEM/SLEF is modeled on ransom payouts and another modeled on punitive fines / lawsuits).

I took a stab at mocking this up in your to_json(self) method. What is the best way to compile from source, @theonaunheim? I can take a stab at a PR if you 1) think this is cool and 2) tell me how to implement an if/else to check whether the model was calculated at all.

def to_json(self):
    """Dump the model as JSON string
    TRUNCATED AND EXISTING COMMENTS REMOVED
    """
    data = {**self._data_input.get_supplied_values()}
    # Add a check here to see if the model was calculated???
    df = self._model_table

    data['name'] = str(self._name)
    data['n_simulations'] = self._n_simulations
    data['random_seed'] = self._random_seed
    data['model_uuid'] = self._model_uuid
    data['type'] = str(self.__class__.__name__)
    data['creation_date'] = self._creation_date
    # More new stuff!
    data['max_risk'] = int(df['Risk'].max())
    data['max_loss_magnitude'] = int(df['Loss Magnitude'].max())
    ##----continue to do math!##

    json_data = json.dumps(
        data,
        indent=4,
    )
    return json_data
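
One way to implement the 'was calculate_all() run?' check, kept as a standalone helper and relying only on the _model_table attribute already referenced in the mockup above; whether that is the right internal flag to inspect is an assumption.

# Sketch: treat a model as "calculated" only when its results table is
# populated and free of NaNs. The real internal flag may differ.
def results_ready(model) -> bool:
    table = getattr(model, '_model_table', None)
    return table is not None and not table.empty and not table.isnull().values.any()

# Usage inside the to_json() mockup, before the "More new stuff!" lines:
#     if not results_ready(self):
#         raise ValueError('Run calculate_all() before serializing results.')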

Version 0.2.0

Add version information to JSON

Truncate floats to 2 decimal places in JSON (will cause rounding discrepancies between versions).

Note: this will not be implemented. People may need more than 2 decimal places ... and they can just export_results().round(2) if need be.

pyfair problems on Windows 10 - FairSimpleReport gives [WinError 123] '<stdin>'

When I try to run the sample from readme.md, it works (once I add the dependencies: scipy, pandas, matplotlib) until it tries to create a report. I have been able to do the exact same thing successfully on Ubuntu.
On Windows 10 I get the following error (Python 3.8):

fsr = pyfair.FairSimpleReport([model1, mm])

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python38\lib\site-packages\pyfair\report\simple_report.py", line 24, in __init__
    super().__init__()
  File "C:\Python38\lib\site-packages\pyfair\report\base_report.py", line 60, in __init__
    self._caller_source = self._set_caller_source()
  File "C:\Python38\lib\site-packages\pyfair\report\base_report.py", line 69, in _set_caller_source
    elif name.exists():
  File "C:\Python38\lib\pathlib.py", line 1388, in exists
    self.stat()
  File "C:\Python38\lib\pathlib.py", line 1194, in stat
    return self._accessor.stat(self)
OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '<stdin>'

Examples for customizing HTML Output and specific Charts

Not sure if this makes sense as one larger issue, but it would be good to have examples of outputting to PDF with the specific outputs such as FairDistributionCurve or FairViolinPlot, or at least a way to configure the HTML report; you know how much executives love their branding :)

As an aside, is there any way to change the output of the violin chart specifically? The x-axis labels are horizontal but should ideally have a 45 or 60 degree offset. While that sounds super whiny, we are building metamodels from multiple scenarios per Threat Community and it usually ends up super busy and ugly. And another reason for my business leaders to sob uncontrollably in their Lambos.
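
A generic Matplotlib sketch of the rotation described above; the same call would apply to whatever axes object the violin plot exposes, if its figure can be obtained:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xticks([1, 2, 3])
ax.set_xticklabels(['Scenario A', 'Scenario B', 'Scenario C'])

# Rotate the x-axis labels 45 degrees and right-align them so long
# scenario names do not collide.
plt.setp(ax.get_xticklabels(), rotation=45, horizontalalignment='right')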

Thanks for any assistance in advance!

Use pyfair inside Jupyter notebooks

Hi there,
Fantastic project, but I would like to see how one could embed the single chart components inside a Jupyter notebook instead of producing the HTML page.
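
The tracebacks elsewhere on this page show the chart classes exposing generate_image(), which returns a Matplotlib figure and axes, so something like the sketch below may already work in a notebook cell; the import path and constructor signature are inferred from those tracebacks and should be verified against the docs.

import pyfair
from pyfair.report.distribution import FairDistributionCurve  # path inferred, verify

model = pyfair.FairModel(name="Notebook Model", n_simulations=10_000)
model.input_data('Loss Event Frequency', low=20, mode=100, high=900)
model.input_data('Loss Magnitude', low=1_000_000, mode=2_000_000, high=5_000_000)
model.calculate_all()

# generate_image() returns (fig, ax), as seen in base_report.py's _get_distribution;
# in a Jupyter cell the returned figure renders inline.
fdc = FairDistributionCurve(model)
fig, ax = fdc.generate_image()
fig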

Meta Model Average vs. Sum Operators

For the Meta Model, it appears that the ALE calculations are a Sum of all downstream models instead of an Average. I am unsure whether this is FAIR-esque, but it feels like it should be an Average across all potential loss scenarios / threat communities evaluated. I only wonder this because it is quite easy to push average ALEs up into the several-billion-dollar range, and the Sum feels like it assumes every scenario will happen at once.

The preferred behavior would be a flag to choose aggregation by Sum or Average (or other operators, I guess); some scenarios may make sense to run serially (e.g. a data exfil event along with a ransomware event).

For instance, here is the output of a Meta Model from a handful of TCOMs with dummy data:

| Model | mean | stdev | min | max |
| --- | --- | --- | --- | --- |
| State Actors Model | $1,030,199,841 | $888,260,670 | $4,377,753 | $5,551,151,462 |
| State-sponsored Actors Model | $11,728,227 | $9,979,715 | $63,759 | $58,396,462 |
| Organized Crime Model | $0 | $0 | $0 | $0 |
| Hacktivists Model | $0 | $0 | $0 | $0 |
| Cyber Espionage Model | $0 | $0 | $0 | $0 |
| Accidental Insiders Model | $0 | $0 | $0 | $0 |
| Privileged Insider Threats Model | $435,876,396 | $374,137,344 | $1,906,190 | $2,457,416,921 |
| Unprivileged Insider Threats Model | $2,111,754 | $1,812,884 | $10,301 | $10,794,341 |
| Opportunistic / Unskilled Attackers Model | $0 | $0 | $0 | $0 |
| Risk | $1,479,916,218 | $966,145,076 | $17,907,282 | $6,288,729,598 |

To combat this I can take averages of the PoA and TCs across all TCOMs, but that doesn't feel like the right oomph; I like to show where we have strong resistance against a specific threat community, as this also informs our Red Team operations.

I can also provide this mocked-up data (well, some of it); my print statements were errant as a to_json() model.
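
As an interim workaround, an average across scenarios can be computed outside the library, under the assumption that the metamodel's export_results() returns one column per sub-model plus the summed 'Risk' column (matching the table above):

import pyfair

model1 = pyfair.FairModel(name="Scenario A", n_simulations=10_000)
model1.input_data('Loss Event Frequency', mean=0.3, stdev=0.1)
model1.input_data('Loss Magnitude', low=1_000_000, mode=2_000_000, high=5_000_000)
model1.calculate_all()

model2 = pyfair.FairModel(name="Scenario B", n_simulations=10_000)
model2.input_data('Loss Event Frequency', low=5, mode=10, high=20)
model2.input_data('Loss Magnitude', constant=500_000)
model2.calculate_all()

mm = pyfair.FairMetaModel(name="TCOM Meta", models=[model1, model2])
mm.calculate_all()

# Per-simulation average across sub-models, instead of the summed 'Risk' column.
results = mm.export_results()
scenario_columns = [col for col in results.columns if col != 'Risk']
average_risk = results[scenario_columns].mean(axis=1)
print(average_risk.describe())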

Add short names to input

Add short names to input functions.

E.g. "V", "v", or "vulnerability" auto-expands to "Vulnerability" when input by the user.
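
A minimal sketch of the requested expansion as a standalone helper; the alias map below is illustrative and is not pyfair's internal target_map.

_ALIASES = {
    'v': 'Vulnerability',
    'tc': 'Threat Capability',
    'cs': 'Control Strength',
    'lef': 'Loss Event Frequency',
    'lm': 'Loss Magnitude',
}

def expand_target(name: str) -> str:
    """Expand 'V', 'v', or 'vulnerability' (etc.) to the canonical node name."""
    key = name.strip().lower()
    for alias, canonical in _ALIASES.items():
        if key in (alias, canonical.lower()):
            return canonical
    return name

print(expand_target('v'))              # Vulnerability
print(expand_target('vulnerability'))  # Vulnerability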

Futurewarning in tree_graph.py

Getting these warnings when generating a to_html report:

pyfair\report\tree_graph.py:169: FutureWarning: iteritems is deprecated and will be removed in a future version. Use .items instead.
  in data.iteritems()
pyfair\report\tree_graph.py:145: FutureWarning: iteritems is deprecated and will be removed in a future version. Use .items instead.
  in data.iteritems()
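
The fix is mechanical: Series.iteritems() was deprecated and later removed in favour of Series.items(); a sketch of the change in tree_graph.py:

import pandas as pd

data = pd.Series({'Risk': 1_000_000, 'Loss Event Frequency': 0.3})

# Before (deprecated): for name, value in data.iteritems(): ...
# After:
for name, value in data.items():
    print(name, value)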

Vulnerability calculation

Hello, I'm running pyfair 0.1a8.
I think there could be an issue with the Vulnerability calculation when it is derived from PoA and TCAP.
Here are two models and results:

Model 1 - CS is lower than TC
model3 = pyfair.FairModel(name="Example Model 2", n_simulations=30000)
model3.input_data('Contact', low=200, mode=1000, high=3000)
model3.input_data('Action', low=0.85, mode=0.95, high=1)
model3.input_data('Threat Capability', low=0.6, mode=0.85, high=0.98)
model3.input_data('Control Strength', low=0.59, mode=0.84, high=0.97)
model3.input_data('Secondary Loss Event Frequency', low=0.5, mode=0.85, high=1)
model3.input_data('Secondary Loss Event Magnitude', low=5000, mode=10000, high=20000)
model3.input_data('Primary Loss', low=15000, mode=25000, high=50000)
model3.calculate_all()

Vulnerability 1 Result
Vulnerability value is 0.45

Model 2 - CS is higher than TC
model3 = pyfair.FairModel(name="Example Model 2", n_simulations=30000)
model3.input_data('Contact', low=200, mode=1000, high=3000)
model3.input_data('Action', low=0.85, mode=0.95, high=1)
model3.input_data('Threat Capability', low=0.6, mode=0.85, high=0.98)
model3.input_data('Control Strength', low=0.63, mode=0.87, high=0.99)
model3.input_data('Secondary Loss Event Frequency', low=0.5, mode=0.85, high=1)
model3.input_data('Secondary Loss Event Magnitude', low=5000, mode=10000, high=20000)
model3.input_data('Primary Loss', low=15000, mode=25000, high=50000)
model3.calculate_all()

Vulnerability 2 Result
Vulnerability value is 0.58.

It seems like a mistake to me, because as we increase Control Strength, the Vulnerability value should decrease.

P.S. Great application. And the documentation is better than the official one (to me).
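
For reference, the FAIR derivation treats Vulnerability as the probability that a sampled Threat Capability exceeds the sampled Control Strength, which is why raising CS should push Vulnerability down. An illustrative numpy check using triangular draws as a stand-in for PERT (this is not pyfair's internal code):

import numpy as np

rng = np.random.default_rng(7)
n = 30_000

tcap = rng.triangular(0.60, 0.85, 0.98, size=n)        # Threat Capability
cs_model1 = rng.triangular(0.59, 0.84, 0.97, size=n)   # Control Strength, Model 1
cs_model2 = rng.triangular(0.63, 0.87, 0.99, size=n)   # Control Strength, Model 2

# Vulnerability as P(Threat Capability > Control Strength)
print(np.mean(tcap > cs_model1))  # higher vulnerability (weaker controls)
print(np.mean(tcap > cs_model2))  # should be lower (stronger controls)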

Contact Details

Hi Theo,

We are a cybersecurity consultancy based in Johannesburg, South Africa.

We wanted to chat to you about potentially using your tool as part of our consultancy services.

Is there an email address that we can contact you on?

Thanks so much,

Andrew
[email protected]

Secondary losses computed through input_multi_data are wrong

The secondary losses computed through input_multi_data are wrong. Here are some examples:

model1 = pyfair.FairModel(name="Insider Threat", n_simulations=10)
model1.input_multi_data('Secondary Loss', {
    'Reputational': {
        'Secondary Loss Event Frequency': {'constant': 1},
        'Secondary Loss Event Magnitude': {'constant': 10},
    },
    'Legal': {
        'Secondary Loss Event Frequency': {'constant': 1},
        'Secondary Loss Event Magnitude': {'constant': 10},
    }
})

In this case, the secondary loss should be 20 (10 x 1 + 10 x 1) for all simulations. However, all elements of model1._model_table["Secondary Loss"] are equal to 101. If one sets all the frequencies to 0:

model2 = pyfair.FairModel(name="Insider Threat", n_simulations=10)
model2.input_multi_data('Secondary Loss', {
    'Reputational': {
        'Secondary Loss Event Frequency': {'constant': 0},
        'Secondary Loss Event Magnitude': {'constant': 10},
    },
    'Legal': {
        'Secondary Loss Event Frequency': {'constant': 0},
        'Secondary Loss Event Magnitude': {'constant': 10},
    }
})

All elements of model2._model_table["Secondary Loss"] are equal to 100 instead of 0. Furthermore, an error is returned if one uses more than two loss types:

model3 = pyfair.FairModel(name="Insider Threat", n_simulations=10)
model3.input_multi_data('Secondary Loss', {
    'Reputational': {
        'Secondary Loss Event Frequency': {'constant': 1},
        'Secondary Loss Event Magnitude': {'constant': 10},
    },
    'Legal': {
        'Secondary Loss Event Frequency': {'constant': 1},
        'Secondary Loss Event Magnitude': {'constant': 10},
    },
    'Response': {
        'Secondary Loss Event Frequency': {'constant': 1},
        'Secondary Loss Event Magnitude': {'constant': 10},
    }
})

The result is:

in FairModel.input_multi_data(self, target, kwargs_dict)
    286 """Input data for multiple items that roll up into an aggregate
    287 
    288 As of now, this is only used for Secondary Loss when calculating
   (...)
...
--> 258 df1, df2 = df_dict.values()
    259 combined_df = df1 * df2
    260 # Sum

ValueError: too many values to unpack (expected 2)
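
The unpack error comes from assuming exactly two loss types; a generalized aggregation would multiply each type's SLEF and SLEM and sum across however many types are supplied. A standalone sketch of that arithmetic (not a patch to pyfair):

import pandas as pd

n_simulations = 10
loss_types = {
    'Reputational': {'Secondary Loss Event Frequency': 1, 'Secondary Loss Event Magnitude': 10},
    'Legal': {'Secondary Loss Event Frequency': 1, 'Secondary Loss Event Magnitude': 10},
    'Response': {'Secondary Loss Event Frequency': 1, 'Secondary Loss Event Magnitude': 10},
}

# Sum(frequency * magnitude) over an arbitrary number of loss types.
secondary_loss = sum(
    pd.Series(params['Secondary Loss Event Frequency'], index=range(n_simulations))
    * pd.Series(params['Secondary Loss Event Magnitude'], index=range(n_simulations))
    for params in loss_types.values()
)
print(secondary_loss)  # 30 for every simulation: 3 types x (1 x 10)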

Error upon instantiation from JSON

An error arises saying that gamma does not fall within established ranges. The range appears to be set to 0-1, which is wrong.

Add to the unit tests for fairdb and fair model.
