
Project 4: Web Scraping Job Postings

Business Case Overview

You're working as a data scientist for a contracting firm that's rapidly expanding. Now that they have their most valuable employee (you!), they need to leverage data to win more contracts. Your firm offers technology and scientific solutions and wants to be competitive in the hiring market. Your principal has two main objectives:

  1. Determine the industry factors that are most important in predicting the salary amounts for these data.
  2. Determine the factors that distinguish job categories and titles from each other. For example, can required skills accurately predict job title?

To limit the scope, your principal has suggested that you focus on data-related job postings, e.g. data scientist, data analyst, research scientist, business intelligence, and any others you might think of. You may also want to decrease the scope by limiting your search to a single region.

Hint: Aggregators like Indeed.com regularly pool job postings from a variety of markets and industries.

Goal: Scrape your own data from a job aggregation tool like Indeed.com in order to collect the data to best answer these two questions.
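As a starting point, a results page can be parsed with nothing beyond the standard library. The sketch below is illustrative only: the HTML snippet, class names, and field values are all made up, and the real markup on Indeed.com (or any aggregator) will differ and change over time, so inspect the live page source before writing your selectors.

```python
from html.parser import HTMLParser

# Hypothetical snippet mimicking one job card on an aggregator's results page.
SAMPLE_HTML = """
<div class="job-card">
  <h2 class="title">Data Scientist</h2>
  <span class="location">New York, NY</span>
  <span class="salary">$120,000 a year</span>
</div>
"""

class JobCardParser(HTMLParser):
    """Collect the text of elements whose class matches a target field."""
    FIELDS = {"title", "location", "salary"}

    def __init__(self):
        super().__init__()
        self.current = None   # field name we are currently inside, if any
        self.record = {}      # extracted field -> text

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls in self.FIELDS:
            self.current = cls

    def handle_data(self, data):
        if self.current:
            self.record[self.current] = data.strip()
            self.current = None

parser = JobCardParser()
parser.feed(SAMPLE_HTML)
print(parser.record)
```

In practice you would fetch each results page (respecting the site's terms and rate limits), loop over every job card, and accumulate the records into a dataframe; libraries like BeautifulSoup make the selector logic less verbose than raw `HTMLParser`.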


Directions

In this project you will be leveraging a variety of skills. The first will be to use the web-scraping and/or API techniques you've learned to collect data on data jobs from Indeed.com or another aggregator. Once you have collected and cleaned the data, you will use it to answer the two questions described above.

QUESTION 1: Factors that impact salary

To predict salary you will be building either a classification or regression model, using features like the location, title, and summary of the job. If framing this as a regression problem, you will be estimating the listed salary amounts. You may instead choose to frame this as a classification problem, in which case you will create labels from these salaries (high versus low salary, for example) according to thresholds (such as median salary).

Use all the skills you have learned so far to build a predictive model.

Whatever you decide to use, the most important thing is to justify your choices and interpret your results. Communication of your process is key. Note that most listings DO NOT come with salary information. You'll need to be able to extrapolate or predict the expected salaries for these listings.

QUESTION 2: Factors that distinguish job category

Using the job postings you scraped for part 1 (or potentially new job postings from a second round of scraping), identify features in the data related to job postings that can distinguish job titles from each other. There are a variety of interesting ways you can frame the target variable, for example:

  • What components of a job posting distinguish data scientists from other data jobs?
  • What features are important for distinguishing junior versus senior positions?
  • Do the requirements for titles vary significantly with industry (e.g. healthcare versus government)?

You may end up making multiple classification models to tackle different questions. Be sure to clearly explain your hypotheses and framing, any feature engineering, and what your target variables are. The type of classification model you choose is up to you. Be sure to interpret your results and evaluate your model's performance.

We want to predict a binary variable: whether the salary is low or high. Compute the median salary and create a new binary variable that is true when the salary is high (above the median).

We could also perform linear regression (or any regression) to predict the salary value directly. Instead, we are going to convert this into a binary classification problem by predicting two classes, HIGH versus LOW salary.

While regression preserves more information, classification can help remove some of the noise from extreme salaries. We don't have to choose the median as the splitting point; we could also split on the 75th percentile or any other reasonable breaking point.

In fact, the ideal scenario may be to predict several levels of salary rather than just two.
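A minimal pandas sketch of the median split (the salary values below are toy numbers standing in for your scraped data):

```python
import pandas as pd

# Toy salary column standing in for the scraped postings.
df = pd.DataFrame({"salary": [65000, 80000, 95000, 120000, 150000]})

# Median split; swap in df["salary"].quantile(0.75) for a 75th-percentile split.
threshold = df["salary"].median()
df["high_salary"] = (df["salary"] > threshold).astype(int)

print(df)
```

Note that values exactly at the threshold land in the "low" class here; document whichever convention you choose.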

Create a classification model to predict High/Low salary.

Model based on location:

  • Start by ONLY using the location as a feature.
  • Use logistic regression with both statsmodels and sklearn.
  • Also try another classifier you find suitable.
  • Remember that scaling your features might be necessary.
  • Display the coefficients/feature importances and write a short summary of what they mean.
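The location-only model might be sketched as follows with scikit-learn; the locations and labels here are invented toy data, and with a single one-hot-encoded categorical feature, scaling is not strictly required:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy data: location is the only feature.
df = pd.DataFrame({
    "location": ["New York", "New York", "Austin", "Austin", "Remote", "Remote"],
    "high_salary": [1, 1, 0, 0, 1, 0],
})

X = pd.get_dummies(df["location"])  # one column per location
y = df["high_salary"]

model = LogisticRegression().fit(X, y)

# One coefficient per location dummy: positive values push toward "high".
for loc, coef in zip(X.columns, model.coef_[0]):
    print(f"{loc:>10}: {coef:+.3f}")
```

The statsmodels version (`sm.Logit`) additionally gives you p-values and confidence intervals for each coefficient, which is useful for the written summary.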

Model taking into account job levels and categories:

  • Create a few new variables in your dataframe to represent interesting features of a job title.
  • For example, create a feature that represents whether 'Senior' is in the title or whether 'Manager' is in the title.
  • Incorporate other text features from the title or summary that you believe will predict the salary.
  • Then build new classification models including also those features. Do they add any value?
  • Tune your models by testing parameter ranges, regularization strengths, etc. Discuss how that affects your models.
  • Discuss model coefficients or feature importances as applicable.
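A sketch of the title-based feature engineering, using hypothetical titles; `str.contains` with a word-boundary regex avoids false matches inside longer words:

```python
import pandas as pd

titles = pd.Series([
    "Senior Data Scientist",
    "Data Analyst",
    "Business Intelligence Manager",
    "Junior Data Engineer",
])

features = pd.DataFrame({
    "is_senior":    titles.str.contains(r"\bsenior\b", case=False).astype(int),
    "is_manager":   titles.str.contains(r"\bmanager\b", case=False).astype(int),
    "is_scientist": titles.str.contains(r"\bscientist\b", case=False).astype(int),
})
print(features)
```

For richer text features from the summary field, a `CountVectorizer` or `TfidfVectorizer` over the posting text is the natural next step.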

Model evaluation:

Your boss would rather incorrectly tell a client that they would get a lower-salary job than incorrectly tell a client that they would get a high-salary job. Adjust one of your models to ease their mind, and explain what the adjustment does and any tradeoffs.

  • Use cross-validation to evaluate your models.
  • Evaluate the accuracy, AUC, precision and recall of the models.
  • Plot the ROC and precision-recall curves for at least one of your models.
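One way to address your boss's preference is to raise the decision threshold for predicting "high": the model then claims a high salary only when it is confident, trading recall for precision on the high class. A sketch on synthetic data (standing in for your salary dataset, and using a simple hold-out split rather than full cross-validation for brevity):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

# Synthetic stand-in for the salary dataset; 1 = "high salary".
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]

# Default 0.5 threshold vs. a stricter one: predicting "high" only when the
# model is confident sacrifices recall to boost precision on the "high" class.
for threshold in (0.5, 0.8):
    preds = (probs >= threshold).astype(int)
    print(threshold,
          round(precision_score(y_te, preds), 3),
          round(recall_score(y_te, preds), 3))
```

Sweeping the threshold over a grid is exactly what the precision-recall curve visualizes, so the plot you produce above doubles as a tool for picking the operating point.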

Bonus:

  • Extend the salary discussion by using your model to explain the tradeoffs between detecting high- versus low-salary positions.
  • Discuss the differences and explain when you want a high-recall or a high-precision model in this scenario.
  • Obtain the ROC/precision-recall curves for the different models you studied (at least the tuned model of each category) and compare.

Summarize your results in an executive summary written for a non-technical audience.

  • Writeups should be 500-1000 words, defining any technical terms, explaining your approach, and noting any risks and limitations.

BONUS

Convert your executive summary into a public blog post of at least 500 words, in which you document your approach in a tutorial for other aspiring data scientists. Link to this in your notebook.


Suggestions for Getting Started

  1. Collect data from Indeed.com (or another aggregator) on data-related jobs to use in predicting salary trends for your analysis.
  • Select and parse data from at least 1,000 job postings, potentially from multiple location searches.
  2. Find out what factors most directly impact salaries (e.g. title, location, department, etc.).
  • Test, validate, and describe your models. What factors predict salary category? How do your models perform?
  3. Discover which features have the greatest importance when determining a low- versus high-paying job.
  • Your boss is interested in which overall features hold the greatest significance.
  • HR is interested in which SKILLS and KEY WORDS hold the greatest significance.
  4. Author an executive summary that details the highlights of your analysis for a non-technical audience.
  5. If tackling the bonus question, try framing the salary problem as a classification problem detecting low- versus high-salary positions.

Useful Resources

