
driverlessai-recipes's Introduction

Recipes for H2O Driverless AI

About Driverless AI

H2O Driverless AI is Automatic Machine Learning for the Enterprise. Driverless AI automates feature engineering, model building, visualization and interpretability.

About BYOR

BYOR stands for Bring Your Own Recipe and is a key feature of Driverless AI. It allows domain scientists to solve their problems faster and with more precision.

What are Custom Recipes?

Custom recipes are Python code snippets that can be uploaded into Driverless AI at runtime, like plugins. No need to restart Driverless AI. Custom recipes can be provided for transformers, models and scorers. During training of a supervised machine learning modeling pipeline (aka experiment), Driverless AI can then use these code snippets as building blocks, in combination with all built-in code pieces (or instead of). By providing your own custom recipes, you can gain control over the optimization choices that Driverless AI makes to best solve your machine learning problems.
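To illustrate the shape of such a building block, here is a minimal stand-in sketch of the transformer interface. It is plain Python for readability; in a real recipe the class would subclass CustomTransformer from h2oaicore.transformer_utils, following the templates in this repository. The class name and transform chosen here are illustrative, not from the repository:

```python
import numpy as np

# Hypothetical stand-in illustrating the recipe interface: Driverless AI
# calls fit_transform() on training data and transform() on new data.
class Log1pTransformer:
    def fit_transform(self, X, y=None):
        # This transform is stateless, so fitting learns nothing.
        return self.transform(X)

    def transform(self, X):
        # Compress large numeric values; log1p(|x|) keeps zeros at zero.
        return np.log1p(np.abs(np.asarray(X, dtype=float)))

t = Log1pTransformer()
out = t.transform([0.0, np.e - 1.0])
```

A real recipe file is uploaded as-is into Driverless AI, which then treats the class as one more candidate building block during the experiment's optimization.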

Best Practices for Recipes

Security

  • Recipes are meant to be built by people you trust and each recipe should be code-reviewed before going to production.

  • Assume that a user with access to Driverless AI has access to the data inside that instance.

    • Apart from securing access to the instance via private networks, various methods of authentication are possible. Local authentication provides the most control over which users have access to Driverless AI.
    • Unless the config.toml setting enable_dataset_downloading=false is set, an authenticated user can download all imported datasets as .csv via direct APIs.
  • When recipes are enabled (enable_custom_recipes=true, the default), be aware that:

    • The code for the recipes runs as the same native Linux user that runs the Driverless AI application.
      • Recipes have explicit access to all data passing through the transformer/model/scorer API
      • Recipes have implicit access to system resources such as disk, memory, CPUs, GPUs, network, etc.
    • An H2O-3 Java process is started in the background, for use by all recipes using H2O-3. Anyone with access to the Driverless AI instance can browse the file system, see models and data through the H2O-3 interface.
  • Enable automatic detection of forbidden or dangerous code constructs in a custom recipe with custom_recipe_security_analysis_enabled = true. Note the following:

    • When custom_recipe_security_analysis_enabled is enabled, do not use modules specified in the banlist. Specify the banlist with the custom_recipe_import_banlist config option.
      • For example: custom_recipe_import_banlist = ["shlex", "plumbum", "pexpect", "envoy", "commands", "fabric", "subprocess", "os.system", "system"] (default)
    • When custom_recipe_security_analysis_enabled is enabled, code is also checked for dangerous calls like eval(), exec() and other insecure calls (regex patterns) defined in custom_recipe_method_call_banlist, as well as for other dangerous constructs defined as regex patterns in the custom_recipe_dangerous_patterns config setting.
    • Security analysis is only performed on recipes that are uploaded after the custom_recipe_security_analysis_enabled config option is enabled.
    • To specify a list of modules that can be imported in custom recipes, use the custom_recipe_import_allowlist config option.
    • The custom_recipe_security_analysis_enabled config option is disabled by default.
  • Best ways to control access to Driverless AI and custom recipes:

    • Control access to the Driverless AI instance
    • Use local authentication to specify exactly which users are allowed to access Driverless AI
    • Run Driverless AI in a Docker container, as a certain user, with only certain ports exposed, and only certain mount points mapped
    • To disable all recipes: Set enable_custom_recipes=false in the config.toml, or add the environment variable DRIVERLESS_AI_ENABLE_CUSTOM_RECIPES=0 at startup of Driverless AI. This will disable all custom transformers, models and scorers.
    • To disable new recipes: To keep all previously uploaded recipes enabled and disable the upload of any new recipes, set enable_custom_recipes_upload=false or DRIVERLESS_AI_ENABLE_CUSTOM_RECIPES_UPLOAD=0 at startup of Driverless AI.
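The access-control settings listed above live in config.toml. A sketch of how they fit together (the values shown are examples for a locked-down instance, not recommendations; only the banlist value is the documented default):

```toml
# config.toml -- example values only
enable_custom_recipes = true           # set false to disable all custom recipes
enable_custom_recipes_upload = true    # set false to block new uploads only
enable_dataset_downloading = false     # block .csv download of imported datasets
custom_recipe_security_analysis_enabled = true   # disabled by default
custom_recipe_import_banlist = ["shlex", "plumbum", "pexpect", "envoy", "commands", "fabric", "subprocess", "os.system", "system"]
```

Equivalent environment variables (e.g. DRIVERLESS_AI_ENABLE_CUSTOM_RECIPES=0) can be set at startup instead of editing config.toml.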

Safety

  • Driverless AI automatically performs basic acceptance tests for all custom recipes unless disabled
  • More information in the FAQ

Performance

  • Use fast and efficient data manipulation tools like datatable, sklearn, numpy or pandas instead of Python lists, for-loops etc.
  • Use disk sparingly, delete temporary files as soon as possible
  • Use memory sparingly, delete objects when no longer needed
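The first and last tips above can be shown in a toy example: one vectorized numpy expression replaces a Python-level loop, and the large intermediate object is deleted as soon as it is no longer needed. (datatable would follow the same pattern; numpy is used here for a self-contained sketch.)

```python
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# Slow: Python for-loop over elements (only a slice, for comparison)
slow = [v * 2.0 + 1.0 for v in x[:5]]

# Fast: one vectorized expression over the whole column
fast = x * 2.0 + 1.0

# Use memory sparingly: drop the large input once it is no longer needed
del x
```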

Reference Guide

Sample Recipes

Go to Recipes for Driverless AI 1.7.0, 1.7.1, 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6, 1.8.7, 1.8.8, 1.8.9, 1.8.10, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.4.1, 1.10.4.2, 1.10.4.3, 1.10.5

Count: 277

driverlessai-recipes's People

Contributors

abigail-cha, achraf-mer, arnocandel, ashrith, azulgarza, badrc, dhiyaneshwar, goldentom42, grigory93, izuit, kaz-anova, kmontgom2400, luizlf, mmalohlava, mtanco, navdeep-g, nkpng2k, pseudotensor, rsujeevan, shivam5992, stexyz, sudalairajkumar, thirdwing, tkmaker, tomott12345, tunguz, us8945, vopani, ybabakhin, zainhaq-h2o


driverlessai-recipes's Issues

Can Driverless AI be run on a 16 GB MacBook with an M1 chip?

Can Driverless AI be run on a 16 GB MacBook with an M1 chip? My old 2016 Intel MacBook with 16 GB of RAM isn't able to run it. I'm considering whether I should upgrade to the new MacBook with unified memory. Is 16 GB of unified RAM enough for DAI? Thanks!

Modify ENV variables in custom recipes

During our exploration of BYOR, one of our Models is dependent on numba. We included this dependency under the respective class as follows,

_modules_needed_by_name = ["numba"]

While uploading this recipe, pip installation of numba fails as it requires LLVM compiler installation. Since we had the LLVM compiler installation in a user-defined location, pip fails to pick up relevant binaries like llvm-config from these custom locations.

So, is there a way to provide environment variables, such as export PATH=$PATH:/opt/llvm/bin, along with the recipes so that these variables take effect during recipe deployment?
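One possible workaround (not documented Driverless AI behaviour; verify against your version) is to set the environment variables at the top of the recipe module itself, so they are in place when the Python process later shells out to pip. Whether module import happens early enough relative to dependency installation depends on how Driverless AI loads recipes, so treat this as an experiment:

```python
import os

# Hypothetical workaround: prepend the custom LLVM location to PATH at
# recipe-module import time. /opt/llvm/bin is the example path from the
# question; LLVM_CONFIG is the variable llvmlite's build typically reads.
LLVM_BIN = "/opt/llvm/bin"
os.environ["PATH"] = LLVM_BIN + os.pathsep + os.environ.get("PATH", "")
os.environ.setdefault("LLVM_CONFIG", os.path.join(LLVM_BIN, "llvm-config"))
```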

Queries with custom recipes via BYOR

After trying out custom recipes with H2O Driverless AI, I have the following questions:

  1. After uploading a recipe, is there a way for customers to modify its parameters, especially the ones set in set_default_params()?

  2. Regarding mutate_params, when does it get triggered?

  3. After uploading a recipe, is there a way to delete a user-uploaded recipe?

  4. Can we control the number of GPUs to be utilized for an experiment?

  5. There are some keywords like genes all over the logs. Where can we get details about these terms?

Object detection

I just want to perform object detection on my custom dataset. Does Driverless AI support object detection for custom datasets? If yes, please point me to the documentation where I can find help on this.

Details on Data Recipes

Add details on the Upload & Modify Data recipes to the FAQ, including a description of when they should be used and how they impact the scoring pipeline (they don't). Also include a template for each.

Arima : Make recipe reproducible and support gaps

Make Arima Reproducible

Arima does not use dates, so asking a prediction for the 10th of October or the 1st of October without pre-processing would lead Arima to provide the exact same prediction.

Arima transformer would need to keep track of the latest date seen by the model for each Time Group and pad the input query accordingly.

Make Arima support gaps

Same issue really. Need to pad for gaps before asking Arima to output predictions.
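The padding idea can be sketched in isolation with pandas: reindex the series onto a complete date range so the gaps become explicit rows before handing the data to ARIMA (the fill strategy shown here, forward-fill, is just one choice):

```python
import pandas as pd

# A daily series with a gap: Oct 3 and Oct 4 are missing.
s = pd.Series([1.0, 2.0, 4.0],
              index=pd.to_datetime(["2020-10-01", "2020-10-02", "2020-10-05"]))

# Reindex onto the full daily range, then fill the new gap rows.
full_index = pd.date_range(s.index.min(), s.index.max(), freq="D")
padded = s.reindex(full_index).ffill()
```

A transformer keeping track of the latest date seen per Time Group could apply the same reindexing between that date and the requested prediction dates.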

driverless h2oai python pipeline run_example.sh looks for java?

Hi

After I download the scoring pipeline and try to execute run_example.sh, I get the messages below:

277, in init
    bind_to_localhost=bind_to_localhost)
  File "/h2o/scoring-pipeline/env/lib/python3.6/site-packages/h2o/backend/server.py", line 138, in start
    bind_to_localhost=bind_to_localhost, log_dir=log_dir, log_level=log_level)
  File "/h2o/scoring-pipeline/env/lib/python3.6/site-packages/h2o/backend/server.py", line 271, in _launch_server
    java = self._find_java()
  File "/h2o/scoring-pipeline/env/lib/python3.6/site-packages/h2o/backend/server.py", line 410, in _find_java
    raise H2OStartupError("Cannot find Java. Please install the latest JRE from\n"
h2o.exceptions.H2OStartupError: Cannot find Java. Please install the latest JRE
from
http://www.oracle.com/technetwork/java/javase/downloads/index.html

Can you please advise whether a JDK is required to be installed on the client machine?
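The traceback shows the pipeline's embedded h2o package trying to launch a local H2O-3 server, which needs a Java runtime on the machine running the pipeline; the error message itself asks for a JRE, so a full JDK should not be needed. A quick check on a POSIX shell (the install hint is generic, not from the H2O docs):

```shell
# Check whether a Java runtime is on PATH; print its location or a hint.
if command -v java >/dev/null 2>&1; then
    echo "java found: $(command -v java)"
else
    echo "java missing: install a JRE and add its bin/ directory to PATH"
fi
```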

EmbeddingSimilarityTransformer using too much GPU memory

Can lead to:

self.doc_embedding = DocumentPoolEmbeddings([self.embedding])
  File "/home/jon/h2oai/tmp/contrib5/env/lib/python3.6/site-packages/flair/embeddings.py", line 1532, in __init__
    self.to(flair.device)
  File "/home/jon/h2oai/tmp/contrib5/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 386, in to
    return self._apply(convert)
  File "/home/jon/h2oai/tmp/contrib5/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply
    module._apply(fn)
  File "/home/jon/h2oai/tmp/contrib5/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply
    module._apply(fn)
  File "/home/jon/h2oai/tmp/contrib5/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply
    module._apply(fn)
  [Previous line repeated 1 more times]
  File "/home/jon/h2oai/tmp/contrib5/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 199, in _apply
    param.data = fn(param.data)
  File "/home/jon/h2oai/tmp/contrib5/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 384, in convert
    return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 90.00 MiB (GPU 0; 7.76 GiB total capacity; 837.44 MiB already allocated; 65.50 MiB free; 84.56 MiB cached)
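A framework-agnostic mitigation (not the repository's actual fix) is to embed documents in small batches rather than holding everything in GPU memory at once, so peak memory is bounded by the batch size. A minimal sketch, where embed_batch is a hypothetical callable wrapping the real (e.g. flair/torch) embedding step:

```python
def batched(items, batch_size):
    """Yield successive fixed-size chunks of a list."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def embed_all(texts, embed_batch, batch_size=32):
    # embed_batch embeds one small batch at a time; results are collected
    # on the host so only one batch's working set is live on the device.
    out = []
    for chunk in batched(texts, batch_size):
        out.extend(embed_batch(chunk))
    return out

# Toy usage with an identity-style "embedder":
vecs = embed_all(list(range(10)), embed_batch=lambda b: [x * 2 for x in b],
                 batch_size=3)
```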

driverless h2oai python pipeline tries to connect to H2O server at port 12345

Hi

I am currently using the h2oai trial to scope out its features.

I am finding that after completing the training process and downloading the scoring pipeline, when I execute run_example.py it fails while connecting to localhost:12345.

Question: does the h2oai server need to be up and running while run_example.py is executed?

Is there a way to work around this and have run_example.py execute in isolation, without connecting to the H2O server?

Why is run_example.py trying to connect to the H2O server?

text_binary_count_transformer.py causes mismatched feature names

Tests fail http://mr-0xc1:8080/job/dai-native-pipeline-nightly/job/dev/45/testReport/junit/tests.test_integration/test_2sigmarentals_text/Testing_on_x86_64___CUDA_10_0_tests_integration_custome___test_text_twoSigmaRentals/ due to mismatched feature names.

Investigation showed three extra columns in mismatch:

"'BinaryCount:features.live_in_super.1',",
 "'BinaryCount:features.outdoor_space.1',",
 "'BinaryCount:features.roof_deck.1',"

The 'features' column in the dataset from the twosigmarentals test contains both:

  • roof deck and roof_deck
  • live_in_super and live in_super
  • outdoor space and outdoor_space

Recommendation: remove the .replace(' ', '_'), which causes these feature names to clash.
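The clash can be reproduced in isolation: two distinct values from the column normalize to the same feature name once spaces are replaced with underscores, which is why removing the normalization (or otherwise disambiguating the names) is suggested:

```python
# Two distinct values from the 'features' column...
a, b = "roof deck", "roof_deck"
assert a != b

# ...collide after the space-to-underscore normalization:
na, nb = a.replace(" ", "_"), b.replace(" ", "_")
assert na == nb == "roof_deck"
```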
