HealthTools Kenya Scraper

This is a suite of scrapers that retrieve actionable information for citizens to use. All the data scraped by this is accessible through our HealthTools API.

They retrieve data from a number of public websites. The scrapers currently run on morph.io, but you can also set them up on your own server.

Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

How the Scrapers Work

To get the data we follow three steps:

1. Scrape website: In most cases this is done using Beautiful Soup.
2. Elasticsearch index update: Replace the data on Elasticsearch with the newly scraped data. We only delete the old documents after successful completion of the scraping, not before. In the doctors' case, because we pull together foreign and local doctors, we won't update Elasticsearch until both have been scraped successfully.
3. Archive data: We archive the data in a latest.json file so that the URL doesn't have to change to get the latest version in a "dump" format. A date-stamped archive is also stored, as we intend to later analyse the changes over time.

Should the scraper fail at any of these points, we log the error and, if configured, send a Slack notification.
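
For illustration, here is a minimal sketch of this scrape-index-archive flow. The function names, index name, CSS selector, and file paths are hypothetical and do not reflect the project's actual module layout:

import json
from datetime import datetime

import requests
from bs4 import BeautifulSoup
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()  # assumes a local Elasticsearch instance

def scrape_page(url):
    """Scrape one listing page with Beautiful Soup (selector is a placeholder)."""
    soup = BeautifulSoup(requests.get(url).content, "html.parser")
    return [{"name": row.get_text(strip=True)} for row in soup.select("table tr")]

def update_index(docs, index="doctors"):
    """Bulk-index the new documents; stale documents are only removed after this succeeds."""
    helpers.bulk(es, ({"_index": index, "_source": doc} for doc in docs))

def archive(docs, name="doctors"):
    """Write latest.json plus a date-stamped dump for later analysis."""
    with open("data/{}/latest.json".format(name), "w") as f:
        json.dump(docs, f)
    stamp = datetime.now().strftime("%Y-%m-%d")
    with open("data/{}/{}.json".format(name, stamp), "w") as f:
        json.dump(docs, f)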


Development

Clone the repo and install the requirements:

$ git clone git@github.com:CodeForAfrica-SCRAPERS/healthtools_ke.git
$ cd healthtools_ke
$ mkvirtualenv healthtools-ke
(healthtools-ke)$ pip install -r requirements.txt

Other requirements include:

  • Elasticsearch: indexes the scraped data.
  • Slack Webhook (Optional): for error logging.
  • S3 Bucket (Optional): used to archive data.

Elastic Setup

All the data scraped is uploaded to Elastic for access by the HealthTools API.

  • For Mac users, run $ brew install elasticsearch in your terminal.
  • For Linux and Windows users, follow the official Elasticsearch installation instructions.

NB: If you run Elasticsearch locally, make sure it is running before you start the scrapers.
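
For a quick sanity check that the scraper can reach your Elasticsearch instance, something like this should do (the host and port match the defaults shown in the configuration section below):

from elasticsearch import Elasticsearch

es = Elasticsearch([{"host": "127.0.0.1", "port": 9200}])
print(es.ping())  # True if the cluster is reachable
print(es.info())  # basic cluster metadata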

Error Handling

Since the websites we scrape are beyond our control, we try to catch all errors and display useful, actionable information about them.

As such, we capture the following details:

  • Timestamp
  • Machine name
  • Module / Scraper name + function name
  • Error message

This data is printed to the terminal in the following format:

[ Timestamp ] { Module / Scraper Name }
[ Timestamp ] Scraper has started.
[ Timestamp ] ERROR: { Module / Scraper Name } / { function name }
[ Timestamp ] ERROR: { Error message }
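
A small sketch of a helper that produces lines in this format (the function name is hypothetical, not necessarily what the project uses):

from datetime import datetime

def log_error(scraper, function, message):
    """Print a timestamped error in the format shown above."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    print("[ {} ] ERROR: {} / {}".format(stamp, scraper, function))
    print("[ {} ] ERROR: {}".format(stamp, message))

log_error("doctors_scraper", "scrape_page", "Connection timed out")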

We also provide a Slack notification option detailed below.

Slack Notifications (Optional)

To set up Slack notifications for when the scrapers run into an error, create an Incoming Webhook (following Slack's documentation) and set the MORPH_WEBHOOK_URL environment variable.
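
Posting to an Incoming Webhook is a single HTTP request; a minimal sketch using requests might look like this:

import os
import requests

def notify_slack(message):
    """Send an error notification to Slack if MORPH_WEBHOOK_URL is set."""
    webhook_url = os.getenv("MORPH_WEBHOOK_URL")
    if webhook_url:
        requests.post(webhook_url, json={"text": message})

notify_slack("ERROR: doctors scraper failed")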

Configuration

The following configurations are available for the scraper via environment variables:

# Elastic host and port
$ export MORPH_ES_HOST="127.0.0.1"
$ export MORPH_ES_PORT=9200

# AWS Keys for ES Service (optional) and S3 (optional)
$ export MORPH_AWS_ACCESS_KEY_ID=""
$ export MORPH_AWS_SECRET_KEY=""

# AWS Region for S3 (optional)
$ export MORPH_AWS_REGION=""

# AWS S3 Bucket (optional)
$ export MORPH_S3_BUCKET=""

# Slack Webhook (optional)
$ export MORPH_WEBHOOK_URL=""
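
These variables would typically be read once in the config module; a sketch of how that might look (the defaults shown are assumptions):

import os

ES_HOST = os.getenv("MORPH_ES_HOST", "127.0.0.1")
ES_PORT = int(os.getenv("MORPH_ES_PORT", 9200))

AWS_ACCESS_KEY_ID = os.getenv("MORPH_AWS_ACCESS_KEY_ID")
AWS_SECRET_KEY = os.getenv("MORPH_AWS_SECRET_KEY")
AWS_REGION = os.getenv("MORPH_AWS_REGION")
S3_BUCKET = os.getenv("MORPH_S3_BUCKET")

WEBHOOK_URL = os.getenv("MORPH_WEBHOOK_URL")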

Usage

In development, instead of scraping entire websites, you can scrape only a small batch (a few pages) to ensure your scrapers are working as expected.

Set SMALL_BATCH, SMALL_BATCH_HF (for the health facilities scrapers), and SMALL_BATCH_NHIF (for the NHIF scrapers) in the config file to limit each scraper to that number of pages instead of scraping the entire site.

Usage:

$ python scraper.py --help

Example:

$ python scraper.py --small-batch --scraper doctors

Tests

Use nosetests to run tests (with stdout) like this:

$ nosetests --nocapture
$ # Or
$ nosetests -s

Deployment

TODO


License

MIT License

Copyright (c) 2018 Code for Africa

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

healthtools_ke's People

Contributors

andela-mabdussalam, andela-mmakinde, andela-ookoro, celelstine, davidlemayian, gathondu, ryansept, tinamurimi


healthtools_ke's Issues

Create S3 folders if don't exist

We should create the S3 folders if they don't exist, similar to how we do it for local file storage.

This should probably be done in a Python module instead of in config.

NB: Include in tests as mentioned here
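
A minimal sketch of how this could be done with boto3 (the bucket name and prefix list are placeholders):

import boto3

s3 = boto3.client("s3")
BUCKET = "healthtools-ke"  # placeholder bucket name
FOLDERS = ["data/doctors/", "data/clinical-officers/"]  # placeholder prefixes

def ensure_s3_folders():
    """Create the expected 'folder' keys if they are missing."""
    for prefix in FOLDERS:
        existing = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix, MaxKeys=1)
        if existing.get("KeyCount", 0) == 0:
            # S3 has no real folders; an empty key with a trailing slash acts as one.
            s3.put_object(Bucket=BUCKET, Key=prefix)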

Store data in SQLite as suggested by Morph

Morph considers the scraper to have failed if no SQLite database is created:

Scraper didn't create an SQLite database in your current working directory called
data.sqlite. If you've just created your first scraper and not edited the code yet
this is to be expected.

Besides solving that error, it would be nice to make the data we scrape available there too.

https://morph.io/documentation
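
Morph.io only looks for a data.sqlite file in the working directory; a minimal sketch of writing scraped records to it with the standard sqlite3 module (table and column names are illustrative):

import sqlite3

def save_to_sqlite(records, table="data"):
    """Write scraped records to data.sqlite so morph.io detects a run."""
    conn = sqlite3.connect("data.sqlite")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS {} (name TEXT, source TEXT)".format(table)
    )
    conn.executemany(
        "INSERT INTO {} (name, source) VALUES (?, ?)".format(table),
        [(r["name"], r["source"]) for r in records],
    )
    conn.commit()
    conn.close()

save_to_sqlite([{"name": "Dr. Example", "source": "doctors"}])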

Handling of AWS S3 Data Directories and Keys

Currently, with regard to the AWS S3 storage where we archive data, the running assumption is that anyone installing the project already has the S3 directory structure the scraper expects. This should not be the case. The scraper should check that the AWS S3 bucket exists and, if it does, that it has the expected structure; if it doesn't, the scraper should create the expected structure.

DISCLAIMER: The AWS S3 bucket itself must have been created beforehand; the structure inside it is what the scraper should create.
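
For the existence check, a head_bucket call is one way to verify the bucket is there without creating it; a sketch:

import boto3
from botocore.exceptions import ClientError

def bucket_exists(bucket):
    """Return True if the configured bucket exists and is accessible."""
    try:
        boto3.client("s3").head_bucket(Bucket=bucket)
        return True
    except ClientError:
        return False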

Allow selection of what to scrape

We should allow selecting which scrapers to run by passing a --scraper argument with the different "doc types", e.g. --scraper doctors (see the sketch after this list):

  • Scrape all (default): python scraper.py
  • Single: python scraper.py --scraper doctors
  • Multiple: python scraper.py --scraper doctors,clinical_officers
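
A sketch of the corresponding argument parsing with argparse (a comma-separated --scraper value defaulting to all):

import argparse

parser = argparse.ArgumentParser(description="Run the HealthTools scrapers.")
parser.add_argument("--scraper", default="all",
                    help="comma-separated doc types, e.g. doctors,clinical_officers")
parser.add_argument("--small-batch", action="store_true",
                    help="scrape only a few pages per site")
args = parser.parse_args()

scrapers = args.scraper.split(",") if args.scraper != "all" else ["all"]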

Improve debug logs

Currently we're outputting too many logs. We should only log successful scrapes as a whole (doctors, clinical officers, etc.) instead of per page.

The other logs would be failed scrapes, which should be reported as errors instead of normal print output.
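
One way to cut the noise is the standard logging module, with one INFO line per completed scraper and ERROR reserved for failures (a sketch, not the project's current setup):

import logging

logging.basicConfig(level=logging.INFO, format="[ %(asctime)s ] %(message)s")
log = logging.getLogger("healthtools")

log.info("doctors scraper completed: %d records", 10200)       # one line per scraper
log.error("clinical officers scraper failed: %s", "timeout")   # failures only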

Limit Elasticsearch upload size

Currently we upload all the data to Elasticsearch at once, but this fails when there is a lot of data. For example, in the case of Health Facilities we get the following error:

TransportError(413, u'{"Message":"Request size exceeded 10485760 bytes"}')
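
One way to stay under the request-size limit is the bulk helper in elasticsearch-py, which splits the payload into chunks; the chunk sizes below are illustrative:

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()

def bulk_upload(docs, index):
    """Upload documents in chunks small enough for the 10 MB request limit."""
    actions = ({"_index": index, "_source": doc} for doc in docs)
    helpers.bulk(es, actions,
                 chunk_size=500,                    # docs per request (illustrative)
                 max_chunk_bytes=5 * 1024 * 1024)   # stay well under 10 MB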

Store stats of scraping

After scraping, we should store stats in S3 in a stats.json file that will be used for display on an HTML page. This should include:

  1. The count of records scraped for each dataset.
  2. The date of the last successful scrape for each dataset and for the run as a whole.
  3. How long each scrape took, individually and for all the scrapers combined.

For Debate: This info should also be pushed to Google Analytics at a later stage.
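
A sketch of assembling and uploading such a stats.json file to S3 (the bucket name, key, and field names are illustrative):

import json
from datetime import datetime

import boto3

def upload_stats(counts, durations, bucket="healthtools-ke"):
    """Write a stats.json summary of the latest run to S3."""
    stats = {
        "counts": counts,                    # e.g. {"doctors": 10200}
        "durations_seconds": durations,      # e.g. {"doctors": 312}
        "last_successful_run": datetime.now().isoformat(),
    }
    boto3.client("s3").put_object(
        Bucket=bucket,
        Key="data/stats.json",
        Body=json.dumps(stats),
        ContentType="application/json",
    )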
