

Fortune 500 company lists (1955-2019)

Usage

The dataset is under the csv/ directory.
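Each year's list is a plain CSV file, so it can be loaded with the standard library alone. The column names below (`rank`, `company`) are assumptions for illustration; check the header row of the actual files in csv/.

```python
import csv
import io

def load_list(f):
    """Read one year's list from a file object; returns rows as dicts keyed by the header."""
    return list(csv.DictReader(f))

# In the repo you would do: with open('csv/<year file>.csv') as f: rows = load_list(f)
# Here we demonstrate with an inline sample (column names assumed):
sample = "rank,company\n1,Walmart\n2,Exxon Mobil\n"
rows = load_list(io.StringIO(sample))
```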

The Fortune 500 is an annual list compiled and published by Fortune magazine that ranks 500 of the largest United States corporations by total revenue for their respective fiscal years.

How is this dataset collected?

The data come from a variety of sources, because I could not find a single complete dataset containing every list from 1955 to 2018.

2019-

I'll be manually updating them.

2015-2018

http://fortune.com/fortune500/2015/list initially loads only the top 20 companies; more rows are loaded as you scroll toward the bottom of the page.

  1. On the webpage, open Developer Tools.
  2. Scroll to the bottom of the page to load the next 30 companies (ranked 21 through 50).
  3. In the Network panel, find the request whose type is Fetch.
  4. Right-click the request to reveal the link http://fortune.com/api/v2/list/1141696/expand/item/ranking/asc/20/30
  5. Inspection shows that the trailing /20/30 means "skip 20, take 30", i.e. rows 21 through 50.
  6. The API appears to return at most 100 rows per call, so http://fortune.com/api/v2/list/1141696/expand/item/ranking/asc/0/100 returns the first 100 companies, http://fortune.com/api/v2/list/1141696/expand/item/ranking/asc/100/100 the next 100, and so on.
  7. Finally, parse the JSON responses with Python's json package and build the CSV files.
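Steps 5-7 can be sketched as below. The list ID (1141696) is the one from the 2015 example above, and the JSON field names (`items`, `rank`, `title`) are assumptions for illustration; confirm them against the actual response in the Network panel before relying on them.

```python
import csv
import json

API = 'http://fortune.com/api/v2/list/1141696/expand/item/ranking/asc/{}/{}'

def page_urls(total=500, page_size=100):
    """Skip/take URLs covering `total` rows, `page_size` rows per request."""
    return [API.format(skip, page_size) for skip in range(0, total, page_size)]

def rows_from_payload(payload):
    """Extract (rank, company) pairs from one response body.

    The field names here are assumed; inspect the real JSON to confirm them.
    """
    return [(item['rank'], item['title']) for item in json.loads(payload)['items']]

def write_csv(rows, path):
    """Save the accumulated rows as a CSV file."""
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['rank', 'company'])
        writer.writerows(rows)
```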

Data source: the API endpoints described above.

2013-2014

The data are from FortuneChina.com, the official website of Fortune magazine for China.

Data source:

url_2013 = 'http://www.fortunechina.com/fortune500/c/2013-05/06/content_154796.htm'
url_2014 = 'http://www.fortunechina.com/fortune500/c/2014-06/02/content_207496.htm'

2006-2012

The data are scraped manually from the sources below, because the HTML pages containing the 2006-2012 data do not follow a uniform structure.

Data source:

base = 'https://money.cnn.com/magazines/fortune/fortune500/{}/full_list/{}.html'
pages = ('index', '101_200', '201_300', '301_400', '401_500')
urls = [base.format(year, page) for year in range(2006, 2013) for page in pages]
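As a quick sanity check, the comprehension above should yield 7 years x 5 pages = 35 URLs, ordered year by year:

```python
base = 'https://money.cnn.com/magazines/fortune/fortune500/{}/full_list/{}.html'
pages = ('index', '101_200', '201_300', '301_400', '401_500')
urls = [base.format(year, page) for year in range(2006, 2013) for page in pages]

assert len(urls) == 35  # 7 years x 5 pages per year
assert urls[0] == 'https://money.cnn.com/magazines/fortune/fortune500/2006/full_list/index.html'
```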

1955-2005

HTML sources are downloaded using urllib, parsed using Beautiful Soup, and saved as CSV.

Data source:

base = 'https://money.cnn.com/magazines/fortune/fortune500_archive/full/{}/{}.html'
urls = [base.format(year, page) for year in range(1955, 2006) for page in (1, 101, 201, 301, 401)]
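The parsing step can be sketched as follows. The original pipeline used Beautiful Soup; this dependency-free version uses the stdlib `html.parser` instead, and assumes the rankings sit in a plain HTML table, which is an assumption to verify against the actual archive pages.

```python
from html.parser import HTMLParser

class TableRows(HTMLParser):
    """Collect the text of each <tr> as a list of cell strings.

    Stdlib-only stand-in for the Beautiful Soup parsing described above;
    assumes the data of interest is laid out as table rows and cells.
    """
    def __init__(self):
        super().__init__()
        self.rows = []        # completed rows
        self._row = None      # cells of the row being built
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == 'tr':
            self._row = []
        elif tag in ('td', 'th') and self._row is not None:
            self._in_cell = True
            self._row.append('')  # start a new, empty cell

    def handle_endtag(self, tag):
        if tag == 'tr' and self._row:
            self.rows.append(self._row)
            self._row = None
        elif tag in ('td', 'th'):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and self._row:
            self._row[-1] += data.strip()  # accumulate cell text

def parse_table(html):
    """Return all table rows in `html` as lists of cell strings."""
    parser = TableRows()
    parser.feed(html)
    return parser.rows
```

The resulting rows can then be written out with `csv.writer`, as in the later eras.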
