correlationfunnel's Introduction

correlationfunnel

by Business Science

Badges: Lifecycle (maturing), Travis build status, coverage status, CRAN status.

Speed Up Exploratory Data Analysis (EDA)

The goal of correlationfunnel is to speed up Exploratory Data Analysis (EDA). Here’s how to use it.

Installation

You can install the latest stable (CRAN) version of correlationfunnel with:

install.packages("correlationfunnel")

You can install the development version of correlationfunnel from GitHub with:

devtools::install_github("business-science/correlationfunnel")

Correlation Funnel in 2-Minutes

Problem: Exploratory data analysis (EDA) involves looking at feature-target relationships independently. This process is very time-consuming, even for small data sets. Rather than search for relationships, what if we could let the relationships come to us?

Solution: Enter correlationfunnel. The package provides a succinct workflow and interactive visualization tools for understanding which features have relationships to the target (response).

Main Benefits:

  1. Speeds Up Exploratory Data Analysis

  2. Improves Feature Selection

  3. Gets You To Business Insights Faster

Example - Bank Marketing Campaign

The following example showcases the power of fast exploratory correlation analysis. The goal of the analysis is to determine which features relate to the bank’s marketing campaign goal of having customers opt into a TERM DEPOSIT (financial product).

We will see that using 3 functions, we can quickly:

  1. Transform the data into a binary format with binarize()

  2. Perform correlation analysis using correlate()

  3. Visualize the highest correlation features using plot_correlation_funnel()

Result: Rather than spend hours looking at individual plots of campaign features and comparing them to which customers opted into the TERM DEPOSIT product, in seconds we can discover which groups of customers have enrolled, drastically speeding up EDA.
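For orientation, the three functions chain together into a single pipe. Here is a condensed sketch of the full workflow, using the same arguments as the step-by-step walkthrough below:

library(correlationfunnel)
library(dplyr)

marketing_campaign_tbl %>%
    select(-ID) %>%                                   # drop the non-predictive ID column
    binarize(n_bins = 4, thresh_infreq = 0.01) %>%    # 1. convert to binary format
    correlate(target = TERM_DEPOSIT__yes) %>%         # 2. correlate each bin to the target
    plot_correlation_funnel(interactive = FALSE)      # 3. visualize the correlation funnel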

Getting Started

First, load the libraries.

library(correlationfunnel)
library(dplyr)

Next, collect data to analyze. We’ll use Marketing Campaign Data for a Bank that was popularized by the UCI Machine Learning Repository. We can load the data with data("marketing_campaign_tbl").

# Use ?marketing_campaign_tbl to get a description of the marketing campaign features
data("marketing_campaign_tbl")

marketing_campaign_tbl %>% glimpse()
#> Observations: 45,211
#> Variables: 18
#> $ ID           <chr> "2836", "2837", "2838", "2839", "2840", "2841", "28…
#> $ AGE          <dbl> 58, 44, 33, 47, 33, 35, 28, 42, 58, 43, 41, 29, 53,…
#> $ JOB          <chr> "management", "technician", "entrepreneur", "blue-c…
#> $ MARITAL      <chr> "married", "single", "married", "married", "single"…
#> $ EDUCATION    <chr> "tertiary", "secondary", "secondary", "unknown", "u…
#> $ DEFAULT      <chr> "no", "no", "no", "no", "no", "no", "no", "yes", "n…
#> $ BALANCE      <dbl> 2143, 29, 2, 1506, 1, 231, 447, 2, 121, 593, 270, 3…
#> $ HOUSING      <chr> "yes", "yes", "yes", "yes", "no", "yes", "yes", "ye…
#> $ LOAN         <chr> "no", "no", "yes", "no", "no", "no", "yes", "no", "…
#> $ CONTACT      <chr> "unknown", "unknown", "unknown", "unknown", "unknow…
#> $ DAY          <dbl> 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, …
#> $ MONTH        <chr> "may", "may", "may", "may", "may", "may", "may", "m…
#> $ DURATION     <dbl> 261, 151, 76, 92, 198, 139, 217, 380, 50, 55, 222, …
#> $ CAMPAIGN     <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
#> $ PDAYS        <dbl> -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,…
#> $ PREVIOUS     <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …
#> $ POUTCOME     <chr> "unknown", "unknown", "unknown", "unknown", "unknow…
#> $ TERM_DEPOSIT <chr> "no", "no", "no", "no", "no", "no", "no", "no", "no…

Response & Predictor Relationships

Modeling and Machine Learning problems often involve a response (Enrolled in TERM_DEPOSIT, yes/no) and many predictors (AGE, JOB, MARITAL, etc). Our job is to determine which predictors are related to the response. We can do this through Binary Correlation Analysis.

Binary Correlation Analysis

Binary Correlation Analysis is the process of converting continuous (numeric) and categorical (character/factor) data to binary features. We can then perform a correlation analysis to see if there is predictive value between the features and the response (target).
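As a minimal illustration (a toy example, not part of the package workflow), the correlation of a binary feature bin with a binary target is just the Pearson correlation of two 0/1 vectors. It is large and positive when the bin and the target tend to co-occur:

# Toy 0/1 vectors: 1 means the observation falls in the bin / responded "yes"
feature_bin <- c(1, 1, 0, 0, 1, 0, 0, 0)
target      <- c(1, 1, 0, 0, 0, 0, 0, 1)

cor(feature_bin, target)
#> [1] 0.4666667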

Step 1: Convert to Binary Format

The first step is converting the continuous and categorical data into binary (0/1) format. We de-select any non-predictive features. The binarize() function then converts the features into binary features.

  • Numeric Features: Binned into ranges (or, if they have only a few unique values, binned by value), then converted to binary features via one-hot encoding

  • Categorical Features: Converted directly to binary features via one-hot encoding

The result is a data frame that has only binary data, with columns representing the bins that the observations fall into. Note that the output is shown in the glimpse() format. There are now 74 columns that are binary (0/1).

marketing_campaign_binarized_tbl <- marketing_campaign_tbl %>%
    select(-ID) %>%
    binarize(n_bins = 4, thresh_infreq = 0.01)

marketing_campaign_binarized_tbl %>% glimpse()
#> Observations: 45,211
#> Variables: 74
#> $ `AGE__-Inf_33`       <dbl> 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0…
#> $ AGE__33_39           <dbl> 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ AGE__39_48           <dbl> 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0…
#> $ AGE__48_Inf          <dbl> 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1…
#> $ JOB__admin.          <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0…
#> $ `JOB__blue-collar`   <dbl> 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ JOB__entrepreneur    <dbl> 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0…
#> $ JOB__housemaid       <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ JOB__management      <dbl> 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ JOB__retired         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0…
#> $ `JOB__self-employed` <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ JOB__services        <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1…
#> $ JOB__student         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ JOB__technician      <dbl> 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0…
#> $ JOB__unemployed      <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ `JOB__-OTHER`        <dbl> 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ MARITAL__divorced    <dbl> 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0…
#> $ MARITAL__married     <dbl> 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1…
#> $ MARITAL__single      <dbl> 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0…
#> $ EDUCATION__primary   <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0…
#> $ EDUCATION__secondary <dbl> 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1…
#> $ EDUCATION__tertiary  <dbl> 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0…
#> $ EDUCATION__unknown   <dbl> 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0…
#> $ DEFAULT__no          <dbl> 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1…
#> $ DEFAULT__yes         <dbl> 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0…
#> $ `BALANCE__-Inf_72`   <dbl> 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0…
#> $ BALANCE__72_448      <dbl> 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1…
#> $ BALANCE__448_1428    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0…
#> $ BALANCE__1428_Inf    <dbl> 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ HOUSING__no          <dbl> 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ HOUSING__yes         <dbl> 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
#> $ LOAN__no             <dbl> 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1…
#> $ LOAN__yes            <dbl> 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ CONTACT__cellular    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ CONTACT__telephone   <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ CONTACT__unknown     <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
#> $ `DAY__-Inf_8`        <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
#> $ DAY__8_16            <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ DAY__16_21           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ DAY__21_Inf          <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ MONTH__apr           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ MONTH__aug           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ MONTH__feb           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ MONTH__jan           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ MONTH__jul           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ MONTH__jun           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ MONTH__mar           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ MONTH__may           <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
#> $ MONTH__nov           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ MONTH__oct           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ MONTH__sep           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ `MONTH__-OTHER`      <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ `DURATION__-Inf_103` <dbl> 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0…
#> $ DURATION__103_180    <dbl> 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1…
#> $ DURATION__180_319    <dbl> 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0…
#> $ DURATION__319_Inf    <dbl> 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0…
#> $ `CAMPAIGN__-Inf_2`   <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
#> $ CAMPAIGN__2_3        <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ CAMPAIGN__3_Inf      <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ `PDAYS__-1`          <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
#> $ `PDAYS__-OTHER`      <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ PREVIOUS__0          <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
#> $ PREVIOUS__1          <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ PREVIOUS__2          <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ PREVIOUS__3          <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ PREVIOUS__4          <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ PREVIOUS__5          <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ `PREVIOUS__-OTHER`   <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ POUTCOME__failure    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ POUTCOME__other      <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ POUTCOME__success    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
#> $ POUTCOME__unknown    <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
#> $ TERM_DEPOSIT__no     <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
#> $ TERM_DEPOSIT__yes    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…

Step 2: Perform Correlation Analysis

The second step is to perform a correlation analysis between the response (target = TERM_DEPOSIT__yes) and the rest of the features. This returns a specially formatted tibble with the feature, the bin, and the bin’s correlation to the target. The format is exactly what we need for the next step: producing the Correlation Funnel.

marketing_campaign_correlated_tbl <- marketing_campaign_binarized_tbl %>%
    correlate(target = TERM_DEPOSIT__yes)

marketing_campaign_correlated_tbl
#> # A tibble: 74 x 3
#>    feature      bin      correlation
#>    <fct>        <chr>          <dbl>
#>  1 TERM_DEPOSIT no            -1.000
#>  2 TERM_DEPOSIT yes            1.000
#>  3 DURATION     319_Inf        0.318
#>  4 POUTCOME     success        0.307
#>  5 DURATION     -Inf_103      -0.191
#>  6 PDAYS        -OTHER         0.167
#>  7 PDAYS        -1            -0.167
#>  8 PREVIOUS     0             -0.167
#>  9 POUTCOME     unknown       -0.167
#> 10 CONTACT      unknown       -0.151
#> # … with 64 more rows

Step 3: Visualize the Correlation Funnel

A Correlation Funnel is a tornado plot that lists the highest-correlation features (by absolute magnitude) at the top of the plot and the lowest-correlation features at the bottom. The resulting visualization looks like a funnel.

To produce the Correlation Funnel, use plot_correlation_funnel(). Try setting interactive = TRUE to get an interactive plot that can be zoomed in on.

marketing_campaign_correlated_tbl %>%
    plot_correlation_funnel(interactive = FALSE)

Examining the Results

The most important features are towards the top. We can investigate these.

marketing_campaign_correlated_tbl %>%
    filter(feature %in% c("DURATION", "POUTCOME", "PDAYS", 
                          "PREVIOUS", "CONTACT", "HOUSING")) %>%
    plot_correlation_funnel(interactive = FALSE, limits = c(-0.4, 0.4))

We can see that the following prospect groups have a much greater correlation with enrollment in the TERM DEPOSIT product:

  • When the DURATION, the amount of time a prospect is engaged in marketing campaign material, is 319 seconds or longer.

  • When POUTCOME, whether or not a prospect has previously enrolled in a product, is “success”.

  • When CONTACT, the medium used to contact the person, is “cellular”.

  • When HOUSING, whether or not the contact has a HOME LOAN, is “no”.
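Rather than hard-coding the feature names as in the filter() call above, a small dplyr sketch (not a package function; slice_max() requires dplyr >= 1.0.0) can pull the top features by absolute correlation programmatically:

# Hedged sketch: select the 6 features with the largest absolute correlation
# to the target (excluding the target itself), then plot only those.
top_features <- marketing_campaign_correlated_tbl %>%
    filter(feature != "TERM_DEPOSIT") %>%
    group_by(feature) %>%
    summarise(max_abs_correlation = max(abs(correlation))) %>%
    slice_max(max_abs_correlation, n = 6) %>%
    pull(feature)

marketing_campaign_correlated_tbl %>%
    filter(feature %in% top_features) %>%
    plot_correlation_funnel(interactive = FALSE, limits = c(-0.4, 0.4))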

Other Great EDA Packages in R

The main addition of correlationfunnel is quickly exposing feature relationships in semi-processed data, meaning missing (NA) values have been treated, date and date-time features have been feature-engineered, and the data is in a “clean” format (numeric and categorical data are ready to be correlated to a Yes/No response). A short sketch of one way to treat NAs beforehand appears after the package list below.

Here are several great EDA packages that can help you understand data issues (cleanliness) and get data prepared for Correlation Analysis!

  • Data Explorer - Automates exploration and data treatment. Amazing for investigating features quickly and efficiently, including data types, missing data, feature engineering, and relationship identification.

  • naniar - For understanding missing data.

  • UpSetR - For generating upset plots

  • GGally - The ggpairs() function is one of my all-time favorites for visualizing many features quickly.
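As noted above, correlationfunnel expects missing (NA) values to have been treated beforehand. Here is a minimal, hedged sketch of one possible pre-processing step, assuming a hypothetical input tibble raw_data_tbl and using dplyr and tidyr:

library(dplyr)
library(tidyr)

# Hypothetical example: treat NAs before calling binarize().
# Character columns get an explicit "missing" level; numeric columns get the median.
data_prepared_tbl <- raw_data_tbl %>%
    mutate(across(where(is.character), ~ replace_na(.x, "missing"))) %>%
    mutate(across(where(is.numeric),   ~ replace_na(.x, median(.x, na.rm = TRUE))))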

Using Correlation Funnel? You Might Be Interested in Applied Business Education

Business Science teaches students how to apply data science for business. The entire curriculum is crafted around business consulting with data science. Correlation Analysis is one of the many techniques that we teach in our curriculum. Learn from our data science application experience with real-world business projects.

Learn from Real-World Business Projects

Students learn by solving real-world projects using our repeatable project-management framework along with cutting-edge tools like Correlation Analysis, Automated Machine Learning, and Feature Explanation as part of our ROI-Driven Data Science Curriculum.

correlationfunnel's People

Contributors

mdancho84, olivroy


correlationfunnel's Issues

Error: plot_correlation_funnel(): [Unnacceptable Data] Acceptable data is generated from the output of correlate().

Hello - When I attempt to run the "Plot a Correlation Funnel" example using the data("marketing_campaign_tbl") dataset, I get the following error:

"Error: plot_correlation_funnel(): [Unnacceptable Data] Acceptable data is generated from the output of correlate()."

I would love to get this code working as it looks to be an invaluable resource.

Here's the reprex:

library(dplyr)
library(correlationfunnel)

marketing_campaign_tbl %>%
    select(-ID) %>%
    binarize() %>%
    correlate(TERM_DEPOSIT__yes) %>%
    plot_correlation_funnel()

Session Info:
R version 4.1.2 (2021-11-01)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19044)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C LC_TIME=English_United States.1252

attached base packages:
[1] tools grid stats graphics grDevices datasets utils methods base

other attached packages:
[1] reactable_0.2.3 kableExtra_1.3.4.9000 rlang_1.0.6.9000 log4r_0.4.2 zipcodeR_0.3.3
[6] yaml_2.3.5 xtable_1.8-4 xray_0.2.900 xfun_0.31 visdat_0.5.3
[11] validate_1.1.1 UpSetR_1.4.0 tryCatchLog_1.3.1 treemap_2.4-3 timetk_2.8.0
[16] forcats_0.5.2 tidyr_1.2.1 tidyverse_1.3.1 tidytable_0.9.0 taskscheduleR_1.6
[21] summarytools_1.0.0 styler_1.7.0 stringr_1.4.1 splot_0.5.2 snakecase_0.11.0
[26] SmartEDA_0.3.8 SixSigma_0.10.3 skimr_2.1.4 shinyWidgets_0.6.4 shinydashboard_0.7.2
[31] shinyAce_0.4.1 shiny_1.7.1 RVerbalExpressions_0.1.0 rstudio.prefs_0.1.8 rstudioapi_0.14
[36] rmarkdown_2.16 rio_0.5.29 rgdal_1.5-32 sp_1.5-0 reshape2_1.4.4
[41] reprex_2.0.1 ReDaMoR_0.6.3 visNetwork_2.1.0 readxl_1.3.1 readr_2.1.3
[46] raincloudplots_0.2.0 quantmod_0.4.18 TTR_0.24.3 xts_0.12.1 zoo_1.8-9
[51] purrr_0.3.5 processR_0.2.6 ppsr_0.0.2 plotly_4.10.0.9001 plotluck_1.1.1
[56] pivottabler_1.5.3 pdftools_3.1.1 patchwork_1.1.1 pasteAsComment_0.2.0 pak_0.3.1
[61] packagefinder_0.3.2 pacman_0.5.1 orca_1.1-1 openxlsx_4.2.5 naniar_0.6.1.9000
[66] modelsummary_1.0.2.9000 Microsoft365R_2.3.4 magrittr_2.0.3 lubridate_1.8.0 logger_0.2.2
[71] lobstr_1.1.1 lintr_2.0.1 leaflet_2.1.1.9000 knitr_1.40 janitor_2.1.0
[76] inspectdf_0.0.11 inexact_0.0.3 IEDA_0.1.0 htmltools_0.5.3 highcharter_0.9.4.9000
[81] here_1.0.1 gt_0.5.0 grkstyle_0.0.3 googlesheets4_1.0.1.9000 googledrive_2.0.0.9000
[86] ggthemr_1.1.0 ggstatsplot_0.9.1 GGally_2.1.2.9000 ggplot2_3.3.6 ggdogs_1.0
[91] ggblanket_1.0.0 gluedown_1.0.4 fuzzyjoin_0.1.6 futile.logger_1.4.3 fortunes_1.5-4
[96] formattable_0.2.1 formatR_1.12 flow_0.1.0 flextable_0.7.0 flexdashboard_0.5.2
[101] ezknitr_0.6 explore_0.8.0 report_0.5.5 see_0.7.2 correlation_0.8.2
[106] modelbased_0.8.5 effectsize_0.7.0.5 parameters_0.18.2.11 performance_0.9.2 bayestestR_0.13.0
[111] datawizard_0.6.1 insight_0.18.4.6 easystats_0.5.2 dygraphs_1.1.1.7 DT_0.23
[116] dtplyr_1.2.1 dplyr_1.0.9 plyr_1.8.7 downlit_0.4.2 dm_0.2.8.9002
[121] dlookr_0.5.6 diffobj_0.3.5 DiagrammeR_1.0.9 devtools_2.4.3 usethis_2.1.5.9000
[126] deepdep_0.4.1 datapasta_3.1.1 dataReporter_1.0.0 datamodelr_0.2.2.9002 dataMeta_0.1.1
[131] DataExplorer_0.8.2 DataEditR_0.1.5 data.validator_0.1.6 data.tree_1.0.0 DALEX_2.4.2
[136] data.table_1.14.2 daff_0.3.5 d3heatmap_0.9.0 correlationfunnel_0.2.0 compareDF_2.3.3
[141] compare_0.2-6 clock_0.6.0 circlize_0.4.15 checkpoint_1.0.2 cheatsheet_0.1.0
[146] blogsnip_0.0.0.9004 bookdownplus_1.5.8 beepr_1.3 autoEDA_1.0 assertr_2.9
[151] ARTofR_0.4.1 arsenal_3.6.3 archivist_2.3.6 ProjectTemplate_0.10.2 tibble_3.1.7
[156] digest_0.6.29

loaded via a namespace (and not attached):
[1] sjlabelled_1.1.8 mycor_0.1.1 remotes_2.4.2 shinyjs_2.1.0 lattice_0.20-45 paletteer_1.4.0
[7] vctrs_0.4.2.9000 stats4_4.1.2 utf8_1.2.2 blob_1.2.3 R.oo_1.24.0 reactR_0.4.4
[13] withr_2.5.0 foreign_0.8-82 gdtools_0.2.4 uuid_1.1-0 matrixStats_0.61.0 audio_0.1-10
[19] lifecycle_1.0.3.9000 emmeans_1.7.3 cellranger_1.1.0 munsell_0.5.0 ragg_1.2.2 fontawesome_0.2.2
[25] AzureGraph_1.3.2 devEMF_4.0-2 codetools_0.2-18 gghalves_0.1.1 furrr_0.2.3 ppcor_1.1
[31] magick_2.7.3 parallelly_1.32.1 fs_1.5.2 stringi_1.7.6 rlist_0.4.6.2 pbivnorm_0.6.0
[37] pkgconfig_2.0.3 prettyunits_1.1.1 cyclocomp_1.1.0 rvg_0.2.5 estimability_1.3 httr_1.4.4
[43] ggiraphExtra_0.3.0 igraph_1.2.11 progress_1.2.2 hrbrthemes_0.8.0 qpdf_1.1 terra_1.6-17
[49] diagram_1.6.5 haven_2.5.0 mc2d_0.1-21 V8_4.0.0 rsample_0.1.1 miniUI_0.1.1.1
[55] viridisLite_0.4.1 prodlim_2019.11.13 pillar_1.8.1 pkgdown_2.0.3 jquerylib_0.1.4 later_1.3.0
[61] glue_1.6.2 DBI_1.1.2 foreach_1.5.2 ISLR_1.4 robustbase_0.95-0 gtable_0.3.1
[67] raster_3.6-3 tigris_1.6 GlobalOptions_0.1.2 fastmap_1.1.0 extrafont_0.18 sampling_2.9
[73] crosstalk_1.2.0 broom_1.0.1 checkmate_2.1.0 promises_1.2.0.1 webshot_0.5.4 tmvnsim_1.0-2
[79] textshaping_0.3.6 rapportools_1.1 brio_1.1.3 mnormt_2.0.2 hms_1.1.2 askpass_1.1
[85] png_0.1-7 lazyeval_0.2.2 Formula_1.2-4 crayon_1.5.2 extrafontdb_1.0 gridBase_0.4-7
[91] predict3d_0.1.3.3 svglite_2.1.0 flock_0.7 tidyselect_1.1.2 pander_0.6.5 splines_4.1.2
[97] editData_0.1.8 rintrojs_0.3.0 survival_3.2-13 bannerCommenter_1.0.0 rappdirs_0.3.3 WRS2_1.1-3
[103] bit64_4.0.5 lambda.r_1.2.4 modelr_0.1.8 networkD3_0.4 sjmisc_2.8.9 pagedown_0.17
[109] ggsignif_0.6.3 R.methodsS3_1.8.1 rex_1.2.1 markdown_1.1 ggiraph_0.8.2 renv_0.15.4
[115] cachem_1.0.6 ipred_0.9-13 statsExpressions_1.3.0 abind_1.4-5 systemfonts_1.0.4 mime_0.12
[121] ztable_0.2.3 ggrepel_0.9.1 rstatix_0.7.0 processx_3.7.0 xaringan_0.24 interactions_1.1.5
[127] cli_3.4.1 rgl_0.108.3 proxy_0.4-26 future.apply_1.9.1 Matrix_1.3-4 libcoin_1.0-9
[133] shinyBS_0.61 assertthat_0.2.1 officer_0.4.2 repr_1.1.4 lpSolve_5.6.15 mgcv_1.8-38
[139] ggpubr_0.4.0 R.utils_2.12.0 rhandsontable_0.3.8 moonBook_0.3.1 zip_2.2.0 prediction_0.3.14
[145] colourpicker_1.1.1.9001 tzdb_0.3.0 maptools_1.1-2 ps_1.7.1 fansi_1.0.3 KernSmooth_2.23-20
[151] clipr_0.8.0 backports_1.4.1 sysfonts_0.8.5 farver_2.1.1 bit_4.0.4 hardhat_1.2.0
[157] sass_0.4.2.9000 futile.options_1.0.1 partykit_1.2-15 iterators_1.0.14 tables_0.9.6 nlme_3.1-155
[163] lavaan_0.6-11 shape_1.4.6 bslib_0.4.0 inum_1.0-4 sf_1.0-5 rematch2_2.1.2
[169] listenv_0.8.0 gargle_1.2.1.9000 generics_0.1.3 colorspace_2.0-3 base64enc_0.1-3 pkgbuild_1.3.1
[175] e1071_1.7-9 jtools_2.1.4 dbplyr_2.1.1 pryr_0.1.5 RColorBrewer_1.1-3 R.cache_0.15.0
[181] timeDate_4021.106 evaluate_0.16 memoise_2.0.1 coda_0.19-4 semTools_0.5-5 httpuv_1.6.5
[187] class_7.3-20 Rttf2pt1_1.3.10 Rcpp_1.0.8.3 openssl_2.0.3 classInt_0.4-3 pkgload_1.2.4
[193] jsonlite_1.8.2 tidycensus_1.2.1 showtextdb_3.0 bookdown_0.26 rprojroot_2.0.3 bitops_1.0-7
[199] RSQLite_2.2.14 globals_0.16.1 compiler_4.1.2 nnet_7.3-17 settings_0.2.7 tcltk_4.1.2
[205] carData_3.0-5 testthat_3.1.4 rrtable_0.2.1 sessioninfo_1.2.2 lava_1.6.10 ggfittext_0.9.1
[211] rvest_1.0.3 recipes_1.0.1 future_1.28.0 mvtnorm_1.1-3 htmlwidgets_1.5.4 psych_2.2.3
[217] labeling_0.4.2 callr_3.7.2 curl_4.3.3 parallel_4.1.2 highr_0.9 DEoptimR_1.0-11
[223] scales_1.2.1 showtext_0.9-4 desc_1.4.1 gridExtra_2.3 AzureAuth_1.3.3 RCurl_1.98-1.6
[229] car_3.0-12 zeallot_0.1.0 MASS_7.3-55 ellipsis_0.3.2 xml2_1.3.3 gower_1.0.0
[235] reshape_0.8.9 rpart_4.1.16 R6_2.5.1 units_0.7-2

Thank you for your time and consideration.

Feature Request for CorrelationFunnel: automate with new explain() Fx

Great and very clear step-by-step package tutorial, Matt!

A time-saving suggestion (if I may): in the "Examining the Results" step (after Step 3), where you have:

marketing_campaign_correlated_tbl %>%
    filter(feature %in% c("DURATION", "POUTCOME", "PDAYS",
                          "PREVIOUS", "CONTACT", "HOUSING")) %>%
    plot_correlation_funnel(interactive = FALSE, limits = c(-0.4, 0.4))

Why not automatically generate the results for these top 6 variables? It would be an easy and useful shortcut! :-)

The new, suggested function: explain(), where the default is to show only the top 6 variables.

The user could specify any other number of top variables to show, e.g.:

explain(3) to show the top 3 variables, or
explain(+3) to show the top 3 positively correlated variables, or
explain(-3) to show the top 3 negatively correlated variables.

I would love to see this explain() function in correlationfunnel! :-) Thanks, Matt - great job!

Sfd99
San Francisco
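A rough, user-level prototype of this idea could be built on the output of correlate(); it is not an existing correlationfunnel function. Since R cannot distinguish explain(+3) from explain(3), a separate sign argument stands in for that convention (assumes dplyr is loaded for the pipe):

explain <- function(correlated_tbl, n = 6, sign = c("both", "positive", "negative")) {
    sign <- match.arg(sign)

    # Drop the target's own bins, which correlate at +/- 1 with themselves
    ranked_tbl <- dplyr::filter(correlated_tbl, abs(correlation) < 1)

    ranked_tbl <- switch(
        sign,
        both     = dplyr::slice_max(ranked_tbl, abs(correlation), n = n),
        positive = dplyr::slice_max(ranked_tbl, correlation, n = n),
        negative = dplyr::slice_min(ranked_tbl, correlation, n = n)
    )

    correlated_tbl %>%
        dplyr::filter(feature %in% ranked_tbl$feature) %>%
        plot_correlation_funnel(interactive = FALSE)
}

# e.g. marketing_campaign_correlated_tbl %>% explain(3, sign = "negative")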

Option for NAs

It would be nice to have something similar to what the standard cor() function offers, where users can specify how they would like to deal with NAs. I agree with the comment "Missing values and cleaning data are critical to getting great correlations", but an option like this would be very convenient when there are a few NAs in some columns.

Round bin label lengths

I have found this extremely useful as part of an initial look at data.

Would it be possible to reduce the length of the bin labels generated, especially where they become recursive? It would make the charts much less cluttered.

Regards

correlationfunnel

library(correlationfunnel)

This error occurs while loading the package. Do any arguments need to be passed when loading the package?

Error: package or namespace load failed for ‘correlationfunnel’:
.onAttach failed in attachNamespace() for 'correlationfunnel', details:
call: if (theme$dark) {
error: missing value where TRUE/FALSE needed

Is it valid to use a continuous target variable?

Let's say I have a continuous response and I'm interested in correlations all along its range, not just the top or bottom quartile.

Instead of running a separate correlation_funnel for each bin, is there any reason I cannot add the continuous column back in between running binarize() and correlate()?

Reprex:

library(dplyr)
library(correlationfunnel)

foo <- select(survival::veteran, -'time') %>%
    binarize() %>%
    cbind(time = survival::veteran$time) %>%
    correlate(target = time)

foo$bin[1] <- 'time'

plot_correlation_funnel(foo)

The above runs with one warning and no errors, producing a plot where, presumably, all correlations are relative to an outcome variable that is not binned, just like I want.

My question: Is it valid to use correlationfunnel this way?

Thanks.
