h2oai / h2o-3

H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.

Home Page: http://h2o.ai

License: Apache License 2.0

h2o machine-learning data-science deep-learning big-data ensemble-learning gbm random-forest naive-bayes pca

h2o-3's Introduction

H2O

For any question not answered in this file or in H2O-3 Documentation, please use:

  • Ask on GitHub
  • Ask on Stack Overflow
  • Ask on Gitter

H2O is an in-memory platform for distributed, scalable machine learning. H2O uses familiar interfaces like R, Python, Scala, Java, JSON and the Flow notebook/web interface, and works seamlessly with big data technologies like Hadoop and Spark. H2O provides implementations of many popular algorithms such as Generalized Linear Models (GLM), Gradient Boosting Machines (including XGBoost), Random Forests, Deep Neural Networks, Stacked Ensembles, Naive Bayes, Generalized Additive Models (GAM), Cox Proportional Hazards, K-Means, PCA, Word2Vec, as well as a fully automatic machine learning algorithm (H2O AutoML).

H2O is extensible so that developers can add data transformations and custom algorithms of their choice and access them through all of those clients. H2O models can be downloaded and loaded into H2O memory for scoring, or exported into POJO or MOJO format for extremely fast scoring in production. More information can be found in the H2O User Guide.
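For example, a minimal Python sketch of that export/import round trip might look like the following (the toy frame and column names are made up for illustration; it assumes the h2o Python package is installed):

import random
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()  # start or connect to a local H2O cluster

# A small, randomly generated frame just for illustration.
frame = h2o.H2OFrame({"x1": [random.random() for _ in range(50)],
                      "x2": [random.random() for _ in range(50)],
                      "y":  [random.randint(0, 1) for _ in range(50)]})
frame["y"] = frame["y"].asfactor()

model = H2OGradientBoostingEstimator(ntrees=5)
model.train(x=["x1", "x2"], y="y", training_frame=frame)

# Export the trained model as a MOJO for production scoring ...
mojo_path = model.download_mojo(path=".")

# ... and load it back into H2O as a generic model for scoring.
imported = h2o.import_mojo(mojo_path)
predictions = imported.predict(frame)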

H2O-3 (this repository) is the third incarnation of H2O, and the successor to H2O-2.


1. Downloading H2O-3

While most of this README is written for developers who do their own builds, most H2O users just download and use a pre-built version. If you are a Python or R user, the easiest way to install H2O is via PyPI or Anaconda (for Python) or CRAN (for R):

Python

pip install h2o

R

install.packages("h2o")

For the latest stable, nightly, Hadoop (or Spark / Sparkling Water) releases, or the stand-alone H2O jar, please visit: https://h2o.ai/download

More info on downloading & installing H2O is available in the H2O User Guide.

2. Open Source Resources

Most people interact with three or four primary open source resources: GitHub (which you've already found), GitHub issues (for bug reports and issue tracking), Stack Overflow for H2O code/software-specific questions, and h2ostream (a Google Group / email discussion forum) for questions not suitable for Stack Overflow. There is also a Gitter H2O developer chat group; however, for archival purposes and to maximize accessibility, we'd prefer that standard H2O Q&A be conducted on Stack Overflow.

2.1 Issue Tracking and Feature Requests

You can browse and create new issues in our GitHub repository: https://github.com/h2oai/h2o-3

  • You can browse and search for issues without logging in to GitHub:
    1. Click the Issues tab at the top of the page
    2. Apply filters to search for particular issues
  • To create an issue (either a bug or a feature request):

2.2 List of H2O Resources

3. Using H2O-3 Artifacts

Every nightly build publishes R, Python, Java, and Scala artifacts to a build-specific repository. In particular, you can find Java artifacts in the maven/repo directory.

Here is an example snippet of a gradle build file using h2o-3 as a dependency. Replace x, y, z, and nnnn with valid numbers.

// h2o-3 dependency information
def h2oBranch = 'master'
def h2oBuildNumber = 'nnnn'
def h2oProjectVersion = "x.y.z.${h2oBuildNumber}"

repositories {
  // h2o-3 dependencies
  maven {
    url "https://s3.amazonaws.com/h2o-release/h2o-3/${h2oBranch}/${h2oBuildNumber}/maven/repo/"
  }
}

dependencies {
  compile "ai.h2o:h2o-core:${h2oProjectVersion}"
  compile "ai.h2o:h2o-algos:${h2oProjectVersion}"
  compile "ai.h2o:h2o-web:${h2oProjectVersion}"
  compile "ai.h2o:h2o-app:${h2oProjectVersion}"
}

Refer to the latest H2O-3 bleeding edge nightly build page for information about installing nightly build artifacts.

Refer to the h2o-droplets GitHub repository for a working example of how to use Java artifacts with gradle.

Note: Stable H2O-3 artifacts are periodically published to Maven Central but may substantially lag behind H2O-3 Bleeding Edge nightly builds.

4. Building H2O-3

Getting started with H2O development requires JDK 1.8+, Node.js, Gradle, Python and R. We use the Gradle wrapper (called gradlew) to ensure up-to-date local versions of Gradle and other dependencies are installed in your development directory.

4.1. Before building

Building H2O requires a properly set up R environment with the required packages, and a Python environment with the following packages:

grip
tabulate
requests
wheel

To install these packages you can use pip or conda. If you have trouble installing them on Windows, please follow the Setup on Windows section of this guide.

(Note: We recommend installing all packages in a virtual environment, such as virtualenv.)

4.2. Building from the command line (Quick Start)

To build H2O from the repository, perform the following steps.

Recipe 1: Clone fresh, build, skip tests, and run H2O

# Build H2O
git clone https://github.com/h2oai/h2o-3.git
cd h2o-3
./gradlew build -x test

You may encounter problems, e.g. a missing npm. Install it:
brew install npm

# Start H2O
java -jar build/h2o.jar

# Point browser to http://localhost:54321

Recipe 2: Clone fresh, build, and run tests (requires a working install of R)

git clone https://github.com/h2oai/h2o-3.git
cd h2o-3
./gradlew syncSmalldata
./gradlew syncRPackages
./gradlew build

Notes:

  • Running tests starts five test JVMs that form an H2O cluster and requires at least 8GB of RAM (preferably 16GB of RAM).
  • Running ./gradlew syncRPackages is supported on Windows, OS X, and Linux, and is strongly recommended but not required. ./gradlew syncRPackages ensures a complete and consistent environment with pre-approved versions of the packages required for tests and builds. The packages can be installed manually, but we recommend setting an ENV variable and using ./gradlew syncRPackages. To set the ENV variable, use the following format (where ${WORKSPACE} can be any path):
mkdir -p ${WORKSPACE}/Rlibrary
export R_LIBS_USER=${WORKSPACE}/Rlibrary

Recipe 3: Pull, clean, build, and run tests

git pull
./gradlew syncSmalldata
./gradlew syncRPackages
./gradlew clean
./gradlew build

Notes

  • We recommend using ./gradlew clean after each git pull.

  • Skip tests by adding -x test at the end of the gradle build command line. Tests typically run for 7-10 minutes on a MacBook Pro laptop with 4 CPUs (8 hyperthreads) and 16 GB of RAM.

  • Syncing smalldata is not required after each pull, but if tests fail due to missing data files, then try ./gradlew syncSmalldata as the first troubleshooting step. Syncing smalldata downloads data files from AWS S3 to the smalldata directory in your workspace. The sync is incremental. Do not check in these files. The smalldata directory is in .gitignore. If you do not run any tests, you do not need the smalldata directory.

  • Running ./gradlew syncRPackages is supported on Windows, OS X, and Linux, and is strongly recommended but not required. ./gradlew syncRPackages ensures a complete and consistent environment with pre-approved versions of the packages required for tests and builds. The packages can be installed manually, but we recommend setting an ENV variable and using ./gradlew syncRPackages. To set the ENV variable, use the following format (where ${WORKSPACE} can be any path):

    mkdir -p ${WORKSPACE}/Rlibrary
    export R_LIBS_USER=${WORKSPACE}/Rlibrary
    

Recipe 4: Just building the docs

./gradlew clean && ./gradlew build -x test && (export DO_FAST=1; ./gradlew dist)
open target/docs-website/h2o-docs/index.html

Recipe 5: Building using a Makefile

The root of the git repository contains a Makefile with convenient shortcuts for frequent build targets used in development. To build h2o.jar while skipping tests and the build of alternative assemblies, execute

make

To build h2o.jar using the minimal assembly, run

make minimal

The minimal assembly is well suited for development of H2O machine learning algorithms. It doesn't bundle some heavyweight dependencies (like Hadoop), and using it saves build time as well as the need to download large libraries from Maven repositories.

4.3. Setup on Windows

Step 1: Download and install WinPython.

From the command line, validate that python points to the newly installed package by running which python (or sudo which python). Update the PATH environment variable with the WinPython path.

Step 2: Install required Python packages:
pip install grip tabulate wheel
Step 3: Install JDK

Install Java 1.8+ and add the appropriate directory (e.g. C:\Program Files\Java\jdk1.8.0_65\bin) containing java.exe to PATH in Environment Variables. To make sure the command prompt is detecting the correct Java version, run:

javac -version

The CLASSPATH variable also needs to be set to the lib subfolder of the JDK:

CLASSPATH=/<path>/<to>/<jdk>/lib
Step 4. Install Node.js

Install Node.js and add the installed directory C:\Program Files\nodejs (which must include node.exe and npm.cmd) to PATH if not already present.

Step 5. Install R, the required packages, and Rtools:

Install R and add the bin directory to your PATH if not already included.

Install the required R packages. To install them from within an R session:

pkgs <- c("RCurl", "jsonlite", "statmod", "devtools", "roxygen2", "testthat")
for (pkg in pkgs) {
  if (! (pkg %in% rownames(installed.packages()))) install.packages(pkg)
}

Note that libcurl is required for installation of the RCurl R package.

Note that these packages don't cover running tests; they are for building H2O only.

Finally, install Rtools, which is a collection of command line tools to facilitate R development on Windows.

NOTE: During Rtools installation, do not install Cygwin.dll.

Step 6. Install Cygwin

NOTE: During installation of Cygwin, deselect the Python packages to avoid a conflict with the Python.org package.

Step 6b. Validate Cygwin

If Cygwin is already installed, remove the Python packages or ensure that Native Python is before Cygwin in the PATH variable.

Step 7. Update or validate the Windows PATH variable to include R, Java JDK, Cygwin.
Step 8. Git Clone h2o-3

If you don't already have a Git client, please install one. The default one can be found here: http://git-scm.com/downloads. Make sure that command prompt support is enabled during the installation.

Download and update the h2o-3 source code:

git clone https://github.com/h2oai/h2o-3
Step 9. Run the top-level gradle build:
cd h2o-3
./gradlew.bat build

If you encounter errors, run again with --stacktrace for more instructions on missing dependencies.

4.4. Setup on OS X

If you don't have Homebrew, we recommend installing it. It makes package management for OS X easy.

Step 1. Install JDK

Install Java 1.8+. To make sure the command prompt is detecting the correct Java version, run:

javac -version
Step 2. Install Node.js:

Using Homebrew:

brew install node

Otherwise, install from the NodeJS website.

Step 3. Install R and the required packages:

Install R and add the bin directory to your PATH if not already included.

Install the required R packages. To install them from within an R session:

pkgs <- c("RCurl", "jsonlite", "statmod", "devtools", "roxygen2", "testthat")
for (pkg in pkgs) {
  if (! (pkg %in% rownames(installed.packages()))) install.packages(pkg)
}

Note that libcurl is required for installation of the RCurl R package.

Note that these packages don't cover running tests; they are for building H2O only.

Step 4. Install python and the required packages:

Install python:

brew install python

Install pip package manager:

sudo easy_install pip

Next install required packages:

sudo pip install wheel requests tabulate  
Step 5. Git Clone h2o-3

OS X should already have Git installed. To download and update the h2o-3 source code:

git clone https://github.com/h2oai/h2o-3
Step 6. Run the top-level gradle build:
cd h2o-3
./gradlew build

Note: on a regular machine it may take a very long time (about an hour) to run all the tests.

If you encounter errors, run again with --stacktrace for more instructions on missing dependencies.

4.5. Setup on Ubuntu 14.04

Step 1. Install Node.js
curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
sudo apt-get install -y nodejs
Step 2. Install JDK:

Install Java 8. Installation instructions can be found here: JDK installation. To make sure the command prompt is detecting the correct Java version, run:

javac -version
Step 3. Install R and the required packages:

Installation instructions can be found here: R installation. Click “Download R for Linux”, then click “ubuntu”, and follow the given instructions.

To install the required packages, follow the same instructions as for OS X above.

Note: If the process fails to install RStudio Server on Linux, run one of the following:

sudo apt-get install libcurl4-openssl-dev

or

sudo apt-get install libcurl4-gnutls-dev

Step 4. Git Clone h2o-3

If you don't already have a Git client:

sudo apt-get install git

Download and update the h2o-3 source code:

git clone https://github.com/h2oai/h2o-3
Step 5. Run the top-level gradle build:
cd h2o-3
./gradlew build

If you encounter errors, run again using --stacktrace for more instructions on missing dependencies.

Make sure that you are not running as root, since bower will reject such a run.

4.6. Setup on Ubuntu 13.10

Step 1. Install Node.js
curl -sL https://deb.nodesource.com/setup_16.x | sudo bash -
sudo apt-get install -y nodejs
Steps 2-4. Follow steps 2-4 for Ubuntu 14.04 (above)

4.7. Setup on CentOS 7

cd /opt
sudo wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.tar.gz"

sudo tar xzf jdk-7u79-linux-x64.tar.gz
cd jdk1.7.0_79

sudo alternatives --install /usr/bin/java java /opt/jdk1.7.0_79/bin/java 2

sudo alternatives --install /usr/bin/jar jar /opt/jdk1.7.0_79/bin/jar 2
sudo alternatives --install /usr/bin/javac javac /opt/jdk1.7.0_79/bin/javac 2
sudo alternatives --set jar /opt/jdk1.7.0_79/bin/jar
sudo alternatives --set javac /opt/jdk1.7.0_79/bin/javac

cd /opt

sudo wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
sudo rpm -ivh epel-release-7-5.noarch.rpm

sudo echo "multilib_policy=best" >> /etc/yum.conf
sudo yum -y update

sudo yum -y install R R-devel git python-pip openssl-devel libxml2-devel libcurl-devel gcc gcc-c++ make openssl-devel kernel-devel texlive texinfo texlive-latex-fonts libX11-devel mesa-libGL-devel mesa-libGL nodejs npm python-devel numpy scipy python-pandas

sudo pip install scikit-learn grip tabulate statsmodels wheel

mkdir ~/Rlibrary
export JAVA_HOME=/opt/jdk1.7.0_79
export JRE_HOME=/opt/jdk1.7.0_79/jre
export PATH=$PATH:/opt/jdk1.7.0_79/bin:/opt/jdk1.7.0_79/jre/bin
export R_LIBS_USER=~/Rlibrary

# install local R packages
R -e 'install.packages(c("RCurl","jsonlite","statmod","devtools","roxygen2","testthat"), dependencies=TRUE, repos="http://cran.rstudio.com/")'

cd
git clone https://github.com/h2oai/h2o-3.git
cd h2o-3

# Build H2O
./gradlew syncSmalldata
./gradlew syncRPackages
./gradlew build -x test

5. Launching H2O after Building

To start the H2O cluster locally, execute the following on the command line:

java -jar build/h2o.jar

A list of available start-up JVM and H2O options (e.g. -Xmx, -nthreads, -ip), is available in the H2O User Guide.
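If you prefer to launch H2O from the Python client instead of running the jar directly, h2o.init accepts rough counterparts of some of these options when it has to start a local instance (the exact mapping is described in the h2o.init documentation; treat this as a sketch):

import h2o

# max_mem_size corresponds roughly to the JVM -Xmx setting and nthreads to the
# H2O -nthreads option for the locally launched instance.
h2o.init(max_mem_size="4G", nthreads=4, port=54321)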

6. Building H2O on Hadoop

Pre-built H2O-on-Hadoop zip files are available on the download page. Each Hadoop distribution version has a separate zip file in h2o-3.

To build H2O with Hadoop support yourself, first install Sphinx for Python:

pip install sphinx

Then start the build by entering the following from the top-level h2o-3 directory:

export BUILD_HADOOP=1;
./gradlew build -x test;
./gradlew dist;

This will create a directory called 'target' and generate zip files there. Note that BUILD_HADOOP is the default behavior when the username is jenkins (refer to settings.gradle); otherwise you have to request it, as shown above.

To build the zip files only for selected distributions use the H2O_TARGET env variable together with BUILD_HADOOP, for example:

export BUILD_HADOOP=1;
export H2O_TARGET=hdp2.5,hdp2.6
./gradlew build -x test;
./gradlew dist;

Adding support for a new version of Hadoop

In the h2o-hadoop directory, each Hadoop version has a build directory for the driver and an assembly directory for the fatjar.

You need to:

  1. Add a new driver directory and assembly directory (each with a build.gradle file) in h2o-hadoop
  2. Add these new projects to h2o-3/settings.gradle
  3. Add the new Hadoop version to HADOOP_VERSIONS in make-dist.sh
  4. Add the new Hadoop version to the list in h2o-dist/buildinfo.json

Secure user impersonation

Hadoop supports secure user impersonation through its Java API. A kerberos-authenticated user can be allowed to proxy any username that meets specified criteria entered in the NameNode's core-site.xml file. This impersonation only applies to interactions with the Hadoop API or the APIs of Hadoop-related services that support it (this is not the same as switching to that user on the machine of origin).

Setting up secure user impersonation (for h2o):

  1. Create or find an id to use as proxy which has limited-to-no access to HDFS or related services; the proxy user need only be used to impersonate a user
  2. (Required if not using h2odriver) If you are not using the driver (e.g. you wrote your own code against h2o's API using Hadoop), make the necessary code changes to impersonate users (see org.apache.hadoop.security.UserGroupInformation)
  3. In either Ambari/Cloudera Manager or directly in the NameNode's core-site.xml file, add two or three of the following properties for the user we wish to use as a proxy (replace <proxyusername> with the simple user name, not the fully-qualified principal name).
    • hadoop.proxyuser.<proxyusername>.hosts: the hosts the proxy user is allowed to perform impersonated actions on behalf of a valid user from
    • hadoop.proxyuser.<proxyusername>.groups: the groups an impersonated user must belong to for impersonation to work with that proxy user
    • hadoop.proxyuser.<proxyusername>.users: the users a proxy user is allowed to impersonate
    • Example:
      <property>
        <name>hadoop.proxyuser.myproxyuser.hosts</name>
        <value>host1,host2</value>
      </property>
      <property>
        <name>hadoop.proxyuser.myproxyuser.groups</name>
        <value>group1,group2</value>
      </property>
      <property>
        <name>hadoop.proxyuser.myproxyuser.users</name>
        <value>user1,user2</value>
      </property>
  4. Restart core services such as HDFS & YARN for the changes to take effect

Impersonated HDFS actions can be viewed in the hdfs audit log ('auth:PROXY' should appear in the ugi= field in entries where this is applicable). YARN similarly should show 'auth:PROXY' somewhere in the Resource Manager UI.

To use secure impersonation with h2o's Hadoop driver:

Before this is attempted, see Risks with impersonation, below

When using the h2odriver (e.g. when running with hadoop jar ...), specify -principal <proxy user kerberos principal>, -keytab <proxy user keytab path>, and -run_as_user <hadoop username to impersonate>, in addition to any other arguments needed. If the configuration was successful, the proxy user will log in and impersonate the -run_as_user as long as that user is allowed by either the users or groups configuration property (configured above); this is enforced by HDFS & YARN, not h2o's code. The driver effectively sets its security context as the impersonated user so all supported Hadoop actions will be performed as that user (e.g. YARN, HDFS APIs support securely impersonated users, but others may not).

Precautions to take when leveraging secure impersonation

  • The target use case for secure impersonation is applications or services that pre-authenticate a user and then use (in this case) the h2odriver on behalf of that user. H2O's Steam is a perfect example: auth user in web app over SSL, impersonate that user when creating the h2o YARN container.
  • The proxy user should have limited permissions in the Hadoop cluster; this means no permissions to access data or make API calls. In this way, if it's compromised it would only have the power to impersonate a specific subset of the users in the cluster and only from specific machines.
  • Use the hadoop.proxyuser.<proxyusername>.hosts property whenever possible or practical.
  • Don't give the proxy user's password or keytab to any user you don't want to be able to impersonate other users (which is generally every user). The point of impersonation is not to allow users to impersonate each other. See the first bullet for the typical use case.
  • Limit user logon to the machine the proxying is occurring from whenever practical.
  • Make sure the keytab used to login the proxy user is properly secured and that users can't login as that id (via su, for instance)
  • Never set hadoop.proxyuser.<proxyusername>.{users,groups} to '*' or 'hdfs', 'yarn', etc. Allowing any user to impersonate hdfs, yarn, or any other important user/group should be done with extreme caution and strongly analyzed before it's allowed.

Risks with secure impersonation

  • The id performing the impersonation can be compromised like any other user id.
  • Setting any hadoop.proxyuser.<proxyusername>.{hosts,groups,users} property to '*' can greatly increase exposure to security risk.
  • When users aren't authenticated before being used with the driver (e.g. like Steam does via a secure web app/API), auditability of the process/system is difficult.
The following example git diff shows how the build can be pointed at a different hadoop-client artifact (here a MapR build) by editing the relevant build.gradle files:

$ git diff
diff --git a/h2o-app/build.gradle b/h2o-app/build.gradle
index af3b929..097af85 100644
--- a/h2o-app/build.gradle
+++ b/h2o-app/build.gradle
@@ -8,5 +8,6 @@ dependencies {
   compile project(":h2o-algos")
   compile project(":h2o-core")
   compile project(":h2o-genmodel")
+  compile project(":h2o-persist-hdfs")
 }

diff --git a/h2o-persist-hdfs/build.gradle b/h2o-persist-hdfs/build.gradle
index 41b96b2..6368ea9 100644
--- a/h2o-persist-hdfs/build.gradle
+++ b/h2o-persist-hdfs/build.gradle
@@ -2,5 +2,6 @@ description = "H2O Persist HDFS"

 dependencies {
   compile project(":h2o-core")
-  compile("org.apache.hadoop:hadoop-client:2.0.0-cdh4.3.0")
+  compile("org.apache.hadoop:hadoop-client:2.4.1-mapr-1408")
+  compile("org.json:org.json:chargebee-1.0")
 }

7. Sparkling Water

Sparkling Water combines two open-source technologies: Apache Spark and the H2O Machine Learning platform. It makes H2O’s library of advanced algorithms, including Deep Learning, GLM, GBM, K-Means, and Distributed Random Forest, accessible from Spark workflows. Spark users can select the best features from either platform to meet their Machine Learning needs. Users can combine Spark's RDD API and Spark MLLib with H2O’s machine learning algorithms, or use H2O independently of Spark for the model building process and post-process the results in Spark.
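As a rough illustration of the Python side of that workflow, a Sparkling Water session typically starts an H2OContext on top of an existing Spark session and converts Spark DataFrames to H2OFrames. The method names below (H2OContext.getOrCreate, asH2OFrame) follow recent Sparkling Water releases and may differ in older versions; treat this as a sketch:

from pyspark.sql import SparkSession
from pysparkling import H2OContext

spark = SparkSession.builder.appName("sparkling-water-example").getOrCreate()
hc = H2OContext.getOrCreate()          # starts H2O inside the Spark cluster

spark_df = spark.range(0, 100)         # any Spark DataFrame
h2o_frame = hc.asH2OFrame(spark_df)    # hand the data to H2O algorithms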

Sparkling Water Resources:

8. Documentation

Documentation Homepage

The main H2O documentation is the H2O User Guide. Visit http://docs.h2o.ai for the top-level introduction to documentation on H2O projects.

Generate REST API documentation

To generate the REST API documentation, use the following commands:

cd ~/h2o-3
cd py
python ./generate_rest_api_docs.py  # to generate Markdown only
python ./generate_rest_api_docs.py --generate_html  --github_user GITHUB_USER --github_password GITHUB_PASSWORD # to generate Markdown and HTML

The default location for the generated documentation is build/docs/REST.

If the build fails, try gradlew clean, then git clean -f.

Bleeding edge build documentation

Documentation for each bleeding edge nightly build is available on the nightly build page.

9. Citing H2O

If you use H2O as part of your workflow in a publication, please cite your H2O resource(s) using the following BibTeX entry:

H2O Software

@Manual{h2o_package_or_module,
    title = {package_or_module_title},
    author = {H2O.ai},
    year = {year},
    month = {month},
    note = {version_information},
    url = {resource_url},
}

Formatted H2O Software citation examples:

H2O Booklets

H2O algorithm booklets are available at the Documentation Homepage.

@Manual{h2o_booklet_name,
    title = {booklet_title},
    author = {list_of_authors},
    year = {year},
    month = {month},
    url = {link_url},
}

Formatted booklet citation examples:

Arora, A., Candel, A., Lanford, J., LeDell, E., and Parmar, V. (Oct. 2016). Deep Learning with H2O. http://docs.h2o.ai/h2o/latest-stable/h2o-docs/booklets/DeepLearningBooklet.pdf.

Click, C., Lanford, J., Malohlava, M., Parmar, V., and Roark, H. (Oct. 2016). Gradient Boosted Models with H2O. http://docs.h2o.ai/h2o/latest-stable/h2o-docs/booklets/GBMBooklet.pdf.

10. Community

H2O has been built by a great many contributors over the years, both within H2O.ai (the company) and the greater open source community. You can begin to contribute to H2O by answering Stack Overflow questions or filing bug reports. Please join us!

Team & Committers

SriSatish Ambati
Cliff Click
Tom Kraljevic
Tomas Nykodym
Michal Malohlava
Kevin Normoyle
Spencer Aiello
Anqi Fu
Nidhi Mehta
Arno Candel
Josephine Wang
Amy Wang
Max Schloemer
Ray Peck
Prithvi Prabhu
Brandon Hill
Jeff Gambera
Ariel Rao
Viraj Parmar
Kendall Harris
Anand Avati
Jessica Lanford
Alex Tellez
Allison Washburn
Amy Wang
Erik Eckstrand
Neeraja Madabhushi
Sebastian Vidrio
Ben Sabrin
Matt Dowle
Mark Landry
Erin LeDell
Andrey Spiridonov
Oleg Rogynskyy
Nick Martin
Nancy Jordan
Nishant Kalonia
Nadine Hussami
Jeff Cramer
Stacie Spreitzer
Vinod Iyengar
Charlene Windom
Parag Sanghavi
Navdeep Gill
Lauren DiPerna
Anmol Bal
Mark Chan
Nick Karpov
Avni Wadhwa
Ashrith Barthur
Karen Hayrapetyan
Jo-fai Chow
Dmitry Larko
Branden Murray
Jakub Hava
Wen Phan
Magnus Stensmo
Pasha Stetsenko
Angela Bartz
Mateusz Dymczyk
Micah Stubbs
Ivy Wang
Terone Ward
Leland Wilkinson
Wendy Wong
Nikhil Shekhar
Pavel Pscheidl
Michal Kurka
Veronika Maurerova
Jan Sterba
Jan Jendrusak
Sebastien Poirier
Tomáš Frýda
Ard Kelmendi
Yuliia Syzon
Adam Valenta
Marek Novotny

Advisors

Scientific Advisory Council

Stephen Boyd
Rob Tibshirani
Trevor Hastie

Systems, Data, FileSystems and Hadoop

Doug Lea
Chris Pouliot
Dhruba Borthakur

Investors

Jishnu Bhattacharjee, Nexus Venture Partners
Anand Babu Periasamy
Anand Rajaraman
Ash Bhardwaj
Rakesh Mathur
Michael Marks
Egbert Bierman
Rajesh Ambati

h2o-3's People

Contributors

aboyoun, arnocandel, bghill, cliffclick, deil87, ericeckstrand, h2o-ops, hannah-tillman, jakubhava, jessica0xdata, koniecsveta, laurendiperna, ledell, lo5, mattdowle, maurever, michal-raska, mklechan, mmalohlava, mn-mikke, navdeep-g, nmadabhushi, rpeck, spennihana, st-pasha, tomasfryda, tomasnykodym, tomkraljevic, valenad1, wendycwong


h2o-3's Issues

Generated MOJO with BOM character in first column

I'm running H2O on Docker, using CentOS 7.

When I upload a CSV file, parse it, and run AutoML, the first column of my MOJO has a BOM character: Zero Width No-Break Space (BOM, ZWNBSP).

This only happens when I'm running on Docker. If I run directly on my machine, this character does not show up.

I've tried passing the parameter -Dfile.encoding=utf-8 on startup, but it had no effect. I also tried changing the container locale to en_US.utf8, with no effect.

Utility function to retrain models on new data from a set of parameters

We need a utility like this so that we can retrain “optimized” models (found via grid search, AutoML, etc.) on new data, for example the full dataset.

In R:

h2o.retrain(model, params, training_frame, ...)

In Python it's a little different because you can use the existing model object directly (which stores the params), and you'd just change the data args in the new .retrain() method.

model = H2OGradientBoostingEstimator(..)
model.train(...)

model.retrain(training_frame, ...)

GBM Interaction Gain and Variable Importance Inconsistency

It seems that there is an inconsistency and possible inaccuracy in the calculation of variable importance for regression models.

Based on the documentation (https://docs.h2o.ai/h2o/latest-stable/h2o-docs/variable-importance.html#variable-importance-calculation-gbm-drf), the variable importance should be the decrease in squared error between a node and its child nodes. When I try to recreate that calculation using a simple decision stump, I cannot reconcile the values.

Additionally, there seems to be an inconsistency compared to the output of h2o.feature_interaction. For a single-split tree I would have expected the metrics to be the same; however, they are different, and oddly, the gain value is negative, which seems incorrect. Please find the code with a reprex below.

Please let me know if anything about my approach is wrong. Thank you!

# set up
library(data.table)
library(h2o) # 3.36.1.4
h2o.init()
set.seed(12345)

# create dummy data
train_data <- data.frame(x1 = runif(n = 100, min = 1, max = 100), x2 = runif(n = 100, min = 1, max = 100))
train_data$y <- runif(n = 100)*10 + train_data$x1 * 1.5 +  train_data$x2 * -2 
train_data_h2o <- as.h2o(train_data)

# build dummy GBM model (decision tree)
gbm_model <- h2o.gbm(training_frame = train_data_h2o, x = c("x1","x2"), y = "y", ntrees = 1, max_depth = 1, min_rows = 1, seed = 12345)
# look at variable importance table
gbm_model@model$variable_importances

# generate predictions 
train_data_h2o <- h2o.cbind(train_data_h2o, h2o.predict(gbm_model, train_data_h2o))

# get single tree in GBM
tree <- h2o.getModelTree(model = gbm_model, tree_number = 1)

# calculate predictions after first split on x2
train_data_h2o$first_split_pred <- h2o.ifelse(train_data_h2o$x2 >= tree@thresholds[1], gbm_model@model$init_f + tree@predictions[[3]], gbm_model@model$init_f + tree@predictions[[2]])


########### Attempt to calculate relative_importance values from gbm_model@model$variable_importances
# first calculate each node's SSE 
# get SSE of root node
init_f_sse <- sum( (train_data_h2o$y - gbm_model@model$init_f)^2 ) # 518029.3

# calculate SSE from x2's child nodes
x2_right_sse <- sum((train_data_h2o[train_data_h2o$x2 >= tree@thresholds[1],]$y - train_data_h2o[train_data_h2o$x2 >= tree@thresholds[1],]$first_split_pred)^2) # 225849
x2_left_sse <- sum((train_data_h2o[train_data_h2o$x2 < tree@thresholds[1],]$y - train_data_h2o[train_data_h2o$x2 < tree@thresholds[1],]$first_split_pred)^2) # 244693.2

# x2's relative_importance manual calculation
init_f_sse - x2_right_sse - x2_left_sse # 47487.08

# compare to variable_importances table
gbm_model@model$variable_importances[1,2] # 249932 -- doesn't match above

feat_int <- h2o.feature_interaction(gbm_model)

feat_int[[1]]$gain # -0.0078125 

Parser fails if uploading more than 331186 columns

When writing tests for https://h2oai.atlassian.net/browse/PUBDEV-8876, I encountered this issue:

~/repos/h2o/h2o-3/h2o-py/h2o/h2o.py in parse_setup(raw_frames, destination_frame, header, separator, column_names, column_types, na_strings, skipped_columns, custom_non_data_line_markers, partition_by, quotechar, escapechar)
874 if len(column_names) != len(j["column_types"]): raise ValueError(
875 "length of col_names should be equal to the number of columns: %d vs %d"
--> 876 % (len(column_names), len(j["column_types"])))
877 j["column_names"] = column_names
878 counter = 0

ValueError: length of col_names should be equal to the number of columns: 1000000 vs 331186

Maximum Recursion Depth, when creating rapids string

Hi!

This ticket is the follow-up ticket to: https://h2oai.atlassian.net/browse/PUBDEV-8960
(The old bug-ticket can be deleted).

The Dataset I am using has the following structure:
time,(CH20 MIN) Flow Speed (m/s),...,(CH24 SAMPLE COUNT) Conductivity,id,classes
(36 columns, time series, name=paradox dataset)


This is my code, which causes the error (shortened):

def jifa():
    # x_test is a pandas dataframe
    features = create_features(x_test, column_id=automl_model.params.get('column_id'),
                               column_value=automl_model.params.get('column_value'),
                               column_kind=automl_model.params.get('column_kind'),
                               column_sort=automl_model.params.get('column_sort'),
                               settings=automl_model.feature_settings)

    features = convert_h2oframe_to_numeric(features, features.columns)
    y_pre = automl_model.model.predict(features)['predict'].as_data_frame()
    # ^ this line triggers the error in H2O


def create_features(data, column_id=None, column_value=None, column_kind=None,
                    column_sort=None, settings=None):
    """Load features."""
    # extract_features is a method from tsfresh to extract important features.
    features = extract_features(data, column_id=column_id,
                                column_value=column_value,
                                column_kind=column_kind,
                                kind_to_fc_parameters=settings,
                                impute_function=impute)
    features = h2o.H2OFrame(features).drop([0], axis=0)
    return features


The traceback in H2O is the following:

Traceback in H2O:

File "/src/model_selection/model_selection.py", line 79, in compute_metrics y_pre = automl_model.model.predict(features)['predict'].as_data_frame()  <-- still my code

File "/usr/local/lib/python3.8/site-packages/h2o/model/model_base.py", line 280, in predict j = H2OJob(h2o.api("POST /4/Predictions/models/%s/frames/%s" % (self.model_id, test_data.frame_id), data = {'custom_metric_func': custom_metric_func}),

File "/usr/local/lib/python3.8/site-packages/h2o/frame.py", line 415, in frame_id return self._frame()._ex._cache._id

File "/usr/local/lib/python3.8/site-packages/h2o/frame.py", line 735, in _frame self._ex._eager_frame()

File "/usr/local/lib/python3.8/site-packages/h2o/expr.py", line 90, in _eager_frame self._eval_driver('frame')

File "/usr/local/lib/python3.8/site-packages/h2o/expr.py", line 113, in _eval_driver exec_str = self._get_ast_str(top)

File "/usr/local/lib/python3.8/site-packages/h2o/expr.py", line 151, in _get_ast_str exec_str = "({} {})".format(self._op, " ".join([ExprNode._arg_to_expr(ast) for ast in self._children]))

File "/usr/local/lib/python3.8/site-packages/h2o/expr.py", line 151, in exec_str = "({} {})".format(self._op, " ".join([ExprNode._arg_to_expr(ast) for ast in self._children]))

Afterwards it reaches the maximum recursion depth; it keeps jumping back and forth between these methods until that limit is hit.


If the _children of an ExprNode consist only of other ExprNodes (and their children as well), it keeps jumping between the methods _arg_to_expr() and _get_ast_str() until the maximum recursion depth is reached.


It is related to this ticket, which tries to replace the recursive build:
https://h2oai.atlassian.net/browse/PUBDEV-8252

(This would solve the problem.)



I cannot upload the dataset to this bug ticket; please send a mail to [email protected] and I'll send the dataset to you.

GLM lambda search in combination with offset column results in wrong lambda_max value

Using an offset column in a GLM model in combination with lambda search sometimes delivers incorrect results. The maximum starting value of lambda is too high when an offset is used (the calculation of lambda_max is incorrect in the offset case), which causes the algorithm to stop too early.

An example has been added in which the GLM algorithm was run with and without an offset. In this case the offset column contains only zeros, so the model results should be exactly the same for both models.

  • The GLM with offset starts with a lambda_max of 1.9928.
  • The GLM without offset starts with a lambda_max of 0.05458.

Both models should start with a lambda_max of 0.05458

This difference in lambda_max leads to different model outcomes where the model with offset contains no coefficients and the model without offset contains one coefficient.
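For reference, the kind of with/without-offset comparison described above can be set up in Python roughly as follows (the frame and column names here are illustrative, not the attached example):

import h2o
from h2o.estimators import H2OGeneralizedLinearEstimator

h2o.init()

# Illustrative data with an all-zero offset column.
df = h2o.H2OFrame({"x1": list(range(100)),
                   "y": [i % 2 for i in range(100)],
                   "offset": [0.0] * 100})
df["y"] = df["y"].asfactor()

glm_with_offset = H2OGeneralizedLinearEstimator(family="binomial", lambda_search=True,
                                                offset_column="offset")
glm_with_offset.train(x=["x1"], y="y", training_frame=df)

glm_without_offset = H2OGeneralizedLinearEstimator(family="binomial", lambda_search=True)
glm_without_offset.train(x=["x1"], y="y", training_frame=df)

# With an all-zero offset, both runs should pick the same lambda_max and
# produce identical coefficients.
print(glm_with_offset.coef())
print(glm_without_offset.coef())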

GLM fails with ArrayIndexOutOfBoundsException

During development of negative binomial dispersion estimation using maximum likelihood, I noticed that sometimes I get the following error.

java.lang.ArrayIndexOutOfBoundsException: -1
at hex.gram.Gram.dropCols(Gram.java:364)
at hex.glm.ComputationState.computeNewGram(ComputationState.java:1030)
at hex.glm.ComputationState.computeGram(ComputationState.java:1064)
at hex.glm.GLM$GLMDriver.fitIRLSMML(GLM.java:2156)
at hex.glm.GLM$GLMDriver.fitModel(GLM.java:2539)
at hex.glm.GLM$GLMDriver.computeSubmodel(GLM.java:3022)
at hex.glm.GLM$GLMDriver.doCompute(GLM.java:3163)
at hex.glm.GLM$GLMDriver.computeImpl(GLM.java:3057)
at hex.ModelBuilder$Driver.compute2(ModelBuilder.java:252)
at hex.glm.GLM$GLMDriver.compute2(GLM.java:1508)
at water.H2O$H2OCountedCompleter.compute(H2O.java:1677)
at jsr166y.CountedCompleter.exec(CountedCompleter.java:468)
at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:976)
at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1479)
at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)

In the negative binomial case, this could be partially solved by not removing columns containing zeros when there would be none left (https://github.com/h2oai/h2o-3/blob/fb295a39f9731cd69ff32b40fd1f224c2b4d1913/h2o-algos/src/main/java/hex/glm/ComputationState.java#L1029), but this often leads to a NonSPDMatrix exception. It can also be partially solved by limiting/truncating d here: https://github.com/h2oai/h2o-3/blob/fb295a39f9731cd69ff32b40fd1f224c2b4d1913/h2o-algos/src/main/java/hex/glm/GLMModel.java#L1163-L1167.

To reproduce you can use the attached file and run the following code:

df <- read.csv("95-1477-8.csv")

summary(MASS::glm.nb(result~., df))

hdf <- as.h2o(df)
m <- h2o.glm(y = "result", training_frame = hdf,
family = "negativebinomial", link = "log",
dispersion_parameter_method = "ml", standardize = FALSE,
seed = 95)

Creating H2OFrame from Pandas DataFrame ignores the data types - int gets converted to a float

When using an integer column in pandas, it will get converted to a float column in H2O (H2OFrame(df)). This is due to using df.values.tolist(), which creates a numpy ndarray, and from that a list that contains floats instead of ints (if any other column is a float column); this then gets uploaded to the backend as CSV. It could be solved by using pandas' to_csv instead.
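A minimal sketch of the behaviour described (column names are arbitrary):

import pandas as pd
import h2o

h2o.init()

# One integer column next to one float column in pandas.
df = pd.DataFrame({"ints": [1, 2, 3], "floats": [0.5, 1.5, 2.5]})
print(df.dtypes)    # ints -> int64, floats -> float64

# After the round trip through df.values.tolist(), the integer column may come
# back typed as a real (float) column on the H2O side.
hf = h2o.H2OFrame(df)
print(hf.types)     # column name -> H2O type for the uploaded frame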

Progress bar (Python, R?): add a timer, next to progress percentage

For long-running jobs like AutoML runs, showing only a percentage in the progress bar is insufficient (especially if the user has not provided a time limit), because the percentage estimate is often inaccurate: progress is non-linear, so the bar can stay stuck at the same percentage for a long time.

To provide better feedback to the user, I suggest adding a timer (in seconds) next to this percentage.
For example:

AutoML progress: |███████▎ | 19% | 3:54

On top of this, the percentage doesn't seem to be rendered in Jupyter notebooks; let's fix that at the same time.

Write data to snowflake table

I am using H2O version 3.34.0.7 with Python. I tried to find an H2O-defined method to write data to a Snowflake table but was not able to find one. Is there an H2O way to do it? If not, can I expect that functionality to be added in a future release?

Categorical features encoding

I train GBM models with H2O and want to use them in my backend (not Java). To do so, I download the MOJOs, convert them to ONNX, and run them in my apps.

In order to run inference, I need to know how categorical columns are transformed into their one-hot encoded versions. I was able to find it in the POJO:

static final void fill(String[] sa) {
sa[0] = "Age";
sa[1] = "Fare";
sa[2] = "Pclass.1";
sa[3] = "Pclass.2";
sa[4] = "Pclass.3";
sa[5] = "Pclass.missing(NA)";
sa[6] = "Sex.female";
sa[7] = "Sex.male";
sa[8] = "Sex.missing(NA)";
}

So, here is the workflow for non-Java backend as I see it:

  1. Encode categorical features with OneHotExplicit.
  2. Train the GBM model.
  3. Download the MOJO and convert it to ONNX.
  4. Download the POJO and find the feature alignment in the source code.
  5. Implement the inference in your backend.

Is it the most straightforward and correct way?
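For reference, step 1 of the workflow above corresponds to the categorical_encoding training parameter; a minimal, hedged sketch (the Titanic-style column names are taken from the POJO snippet above and are illustrative):

from h2o.estimators import H2OGradientBoostingEstimator

# Ask H2O to one-hot encode categorical predictors explicitly, so the exported
# POJO/MOJO exposes the expanded feature names (Pclass.1, Sex.female, ...).
gbm = H2OGradientBoostingEstimator(categorical_encoding="one_hot_explicit", ntrees=50)
# gbm.train(x=["Age", "Fare", "Pclass", "Sex"], y="survived", training_frame=titanic)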

makeGLMModel. Saved mojo can't be imported back

h2o version '3.32.1.3'

I created a GLM model using Test_Model = H2OGeneralizedLinearEstimator.makeGLMModel(model=GLM_model, coefs=coeff), and this model works fine while predicting: pred_valid = Test_Model.predict(valid).
I was able to save this model in MOJO format as well:

modelfile = Test_Model.download_mojo(path=r"temp/Model")
print("Model saved to " + modelfile)
Model saved to temp/aa6519371d527ad213332499b9654416.zip

But when I tried to load the model back and make predictions,

imported_model = h2o.import_mojo('temp/aa6519371d527ad213332499b9654416.zip')

an error happened:

generic Model Build progress: | (failed)

OSError Traceback (most recent call last)
in
1 #uploaded_model = h2o.upload_model(modelfile)
----> 2 imported_model = h2o.import_mojo('temp/aa6519371d527ad213332499b9654416.zip')

~/.local/lib/python3.7/site-packages/h2o/h2o.py in import_mojo(mojo_path)
2250 if mojo_path == None:
2251 raise TypeError("MOJO path may not be None")
-> 2252 mojo_estimator = H2OGenericEstimator.from_file(mojo_path)
2253 print(mojo_estimator)
2254 return mojo_estimator

~/.local/lib/python3.7/site-packages/h2o/estimators/generic.py in from_file(file)
122 """
123 model = H2OGenericEstimator(path = file)
--> 124 model.train()
125
126 return model

~/.local/lib/python3.7/site-packages/h2o/estimators/estimator_base.py in train(self, x, y, training_frame, offset_column, fold_column, weights_column, validation_frame, max_runtime_secs, ignored_columns, model_id, verbose)
113 validation_frame=validation_frame, max_runtime_secs=max_runtime_secs,
114 ignored_columns=ignored_columns, model_id=model_id, verbose=verbose)
--> 115 self._train(parms, verbose=verbose)
116
117 def train_segments(self, x=None, y=None, training_frame=None, offset_column=None, fold_column=None,

~/.local/lib/python3.7/site-packages/h2o/estimators/estimator_base.py in _train(self, parms, verbose)
205 return
206
--> 207 job.poll(poll_updates=self._print_model_scoring_history if verbose else None)
208 model_json = h2o.api("GET /%d/Models/%s" % (rest_ver, job.dest_key))["models"][0]
209 self._resolve_model(job.dest_key, model_json)

~/.local/lib/python3.7/site-packages/h2o/job.py in poll(self, poll_updates)
78 if (isinstance(self.job, dict)) and ("stacktrace" in list(self.job)):
79 raise EnvironmentError("Job with key {} failed with an exception: {}\nstacktrace: "
---> 80 "\n{}".format(self.job_key, self.exception, self.job["stacktrace"]))
81 else:
82 raise EnvironmentError("Job with key %s failed with an exception: %s" % (self.job_key, self.exception))

OSError: Job with key $03017f00000138d4ffffffff$_9d4bdc44f080e644f672efad9a654323 failed with an exception: java.lang.ClassCastException: com.google.gson.JsonNull cannot be cast to com.google.gson.JsonArray
stacktrace:
java.lang.ClassCastException: com.google.gson.JsonNull cannot be cast to com.google.gson.JsonArray
at hex.genmodel.attributes.ModelJsonReader.readTableArray(ModelJsonReader.java:53)
at hex.genmodel.ModelMojoReader.readReproducibilityInformation(ModelMojoReader.java:251)
at hex.genmodel.ModelMojoReader.readAll(ModelMojoReader.java:245)
at hex.genmodel.ModelMojoReader.readFrom(ModelMojoReader.java:65)
at hex.generic.Generic$MojoDelegatingModelDriver.computeImpl(Generic.java:94)
at hex.ModelBuilder$Driver.compute2(ModelBuilder.java:246)
at hex.generic.Generic$MojoDelegatingModelDriver.compute2(Generic.java:79)
at water.H2O$H2OCountedCompleter.compute(H2O.java:1637)
at jsr166y.CountedCompleter.exec(CountedCompleter.java:468)
at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)

Purging and embargoing to deal with unintended data leaks in cross validation.

These approaches are often used in financial ML. Can benefit a wide variety of ML tasks though.

In short: Adding a safety gap between the k-folds or train-, test- and validation splits.

These articles explain it in detail:

https://medium.com/mlearning-ai/why-k-fold-cross-validation-is-failing-in-finance-65c895e83fdf

https://blog.quantinsti.com/cross-validation-embargo-purging-combinatorial/

The Combinatorial Purged Cross-Validation mentioned there (explained a little better here: https://towardsai.net/p/l/the-combinatorial-purged-cross-validation-method) helps create more walk-forward paths that are purely out-of-sample, for increased statistical significance. It was proposed by Marcos Lopez de Prado in “Advances in Financial Machine Learning”.

It would be great to have this out of the box, or to be able to pass cross-validation folds/indices with gaps.
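As a rough sketch of the idea itself (plain index arithmetic, not an existing H2O API), purged/embargoed folds for time-ordered rows could be generated like this:

import numpy as np

def purged_kfold_indices(n_rows, n_folds=5, embargo=10):
    """Yield (train_idx, test_idx) pairs for time-ordered rows, dropping
    `embargo` rows on each side of every test block so that training rows
    adjacent to the test window are excluded (purging/embargoing)."""
    for test_idx in np.array_split(np.arange(n_rows), n_folds):
        lo, hi = test_idx[0], test_idx[-1]
        keep = np.ones(n_rows, dtype=bool)
        keep[max(0, lo - embargo): min(n_rows, hi + embargo + 1)] = False
        yield np.flatnonzero(keep), test_idx

# Example: 1000 time-ordered rows, 5 folds, a 10-row gap around each test fold.
for train_idx, test_idx in purged_kfold_indices(1000):
    print(len(train_idx), test_idx[0], test_idx[-1])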


Importing R data.table's gzipped csv files stops early

The following example shows that if you try to import a .csv.gz file created by data.table, h2o does not import the full file, whereas it will if you do the gzip as a separate step. I'm guessing there's a difference in the header which messes up the import logic. I also reproduced this issue on a Linux machine.

library(data.table)
library(h2o)

sessionInfo()
R version 4.2.2 (2022-10-31 ucrt)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 22621)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.utf8 LC_CTYPE=English_United States.utf8 LC_MONETARY=English_United States.utf8
[4] LC_NUMERIC=C LC_TIME=English_United States.utf8

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] h2o_3.38.0.1 data.table_1.14.6

loaded via a namespace (and not attached):
[1] Rcpp_1.0.9 compiler_4.2.2 later_1.3.0 urlchecker_1.0.1 bitops_1.0-7 prettyunits_1.1.1 profvis_0.3.7
[8] remotes_2.4.2 tools_4.2.2 digest_0.6.30 pkgbuild_1.4.0 pkgload_1.3.2 jsonlite_1.8.4 memoise_2.0.1
[15] lifecycle_1.0.3 rlang_1.0.6 shiny_1.7.3 cli_3.4.1 rstudioapi_0.14 fastmap_1.1.0 stringr_1.5.0
[22] fs_1.5.2 htmlwidgets_1.5.4 devtools_2.4.5 glue_1.6.2 R6_2.5.1 processx_3.8.0 sessioninfo_1.2.2
[29] callr_3.7.3 purrr_0.3.5 magrittr_2.0.3 ps_1.7.2 promises_1.2.0.1 ellipsis_0.3.2 htmltools_0.5.4
[36] usethis_2.1.6 mime_0.12 xtable_1.8-4 httpuv_1.6.6 stringi_1.7.8 miniUI_0.1.1.1 RCurl_1.98-1.9
[43] cachem_1.0.6 crayon_1.5.2

h2o.init()

set.seed(87)
dt <- data.table(a = rnorm(1e6),
b = sample(x = 0:1, size = 1e6, replace = TRUE))

# write a .csv using data.table's gzip
# (uses zlib, I believe, due to SystemRequirements in DESCRIPTION)
fwrite(x = dt, file = "fake_data1.csv.gz")
# same as
fwrite(x = dt, file = "fake_data1.csv", compress = "gzip")

# export a normal .csv, then use builtin gzip
fwrite(x = dt, file = "fake_data2.csv")
system2(command = "gzip", args = "fake_data2.csv")

# no "error" but only imports ~6k rows
h2oframe <- h2o.importFile(normalizePath("~/fake_data1.csv.gz"))
nrow(h2oframe)
# [1] 6197

# imports full file correctly
h2oframe <- h2o.importFile(normalizePath("~/fake_data2.csv.gz"))
nrow(h2oframe)
# [1] 1000000
[1] 1000000{code}

Enhancement: GBM monotone constraints for Poisson, Gamma families

Currently, GBM monotone constraints exist only for the gaussian, bernoulli, and tweedie (for power strictly between 1 and 2) families. Extending this functionality to include Poisson and Gamma would be useful in financial services applications where regulation and/or client expectations dictate monotonic behaviour in response to certain factors.

Previously raised by another user on h2ostream: Poisson/Gamma GBM with monotonicity (https://groups.google.com/g/h2ostream/c/BEwC2iVZvgY?pli=1)
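For context, this is how monotone constraints are specified today for one of the supported distributions; the column names and data below are hypothetical:

from h2o.estimators import H2OGradientBoostingEstimator

# +1 requests a non-decreasing relationship with the response, -1 non-increasing.
gbm = H2OGradientBoostingEstimator(distribution="gaussian",
                                   monotone_constraints={"age": 1, "vehicle_power": -1},
                                   ntrees=100)
# gbm.train(x=["age", "vehicle_power"], y="claim_cost", training_frame=train)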

Support for weighted hyperparameters in random grids

When running a random grid search (RGS), we may want to privilege some parameter values over others.
This can be partly done today by duplicating a parameter value, but the result is not 100% as expected:

  • when trying to generate new random hyper-parameters, duplicates from previous parameter permutations are checked based on index, not on value. Therefore, if we use the hyper-param param_dummy = ['A', 'A', 'B'], we effectively double the probability of using value 'A', but as 'A' (idx 0) is different from 'A' (idx 1), the walker considers that we have 2 different parameters, and GridSearch may then try to train 2 models with exactly the same hyper-parameters.
  • to avoid training duplicates (and also to resume an existing grid), GridSearch first tries to find an existing model by checksum. However, even when found this way, the model is added to the grid and counted as an additional model (also impacting the max_models behaviour).

To avoid the issues above, I suggest offering the possibility to provide explicit weights for some parameters through a meta parameter:

param_dummy = ['A', 'B']
param_dummy$weights = [2, 1]

the walkers supporting weights (currently only RandomDiscreteValueWalker) will then be able to extract those meta-params, validate them (ensure ints, same size as the corresponding param, ...), and use them to tweak the random hyper-param selector.

Benefits of this syntax (meta-param) over an additional method parameter:

  • doesn’t require any API change.
  • hyper-params are always passed as strings so we can use the $ separator without risk of conflict.
  • weights can be easily declared right below the corresponding param on any client (including when creating a Java HashMap) for clarity/visibility.
  • can also be extracted easily from sub-groups (hyper-parameters support parameters grouping for related params).

Drawbacks of this syntax:

  • mixes up true params with meta params.

H2O will terminate when XGBoost training runs on a machine with old GPU

H2O XGBoost needs CUDA capability 3.5 or higher. I noticed that H2O will crash if it is run on a card that doesn't support the minimal version:

[10:10:55] WARNING: /dot/src/gbm/gbtree.cc:73: DANGER AHEAD: You have manually specified updater parameter. The tree_method parameter will be ignored. Incorrect sequence of updaters will produce undefined behavior. For common uses, we recommend usingtree_method parameter instead.
[10:10:55] WARNING: /dot/src/tree/../common/device_helpers.cuh:178: CUDA Capability Major/Minor version number: 3.0 is insufficient. Need >=3.5 for device: 0
terminate called after throwing an instance of 'thrust::system::system_error'
what(): parallel_for failed: no kernel image is available for execution on the device

Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

This happens with the default settings: the GPU is autodetected and preferred over the CPU; however, at actual training time (the first actual iteration) the training crashes and takes down the whole JVM.

The real-life impact of this issue is small because it affects cards released 10+ years ago. The solution is to manually specify the CPU backend for XGBoost training (this might not be easy to do in all cases, e.g. in AutoML).
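For example, assuming the backend parameter of H2OXGBoostEstimator in the Python client, forcing the CPU backend would look roughly like this:

from h2o.estimators import H2OXGBoostEstimator

# Skip GPU autodetection entirely on machines whose card is below CUDA capability 3.5.
xgb = H2OXGBoostEstimator(backend="cpu", ntrees=100)
# xgb.train(x=predictors, y=response, training_frame=train)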

Another option is to set the environment variable CUDA_VISIBLE_DEVICES to make the GPU invisible to CUDA and to H2O.

CUDA_VISIBLE_DEVICES=-1

Perhaps this could be mentioned in the H2O documentation?
