
Deep-Deep: Adaptive Crawler


Deep-Deep is a Scrapy-based crawler which uses Reinforcement Learning methods to learn which links to follow.

It is called Deep-Deep, but it doesn't use Deep Learning, and it is not only for Deep web. Weird.

Running

To run the spider, you need some seed URLs and a relevancy function that provides a reward value for each crawled page. The ./scripts folder contains scripts for common use cases:

  • crawl-forms.py learns to find password recovery forms (classified with Formasaurus). This is a good benchmark task, because the spider must learn to plan several steps ahead (such forms are often best reachable via login links).
  • crawl-keywords.py starts a crawl where the relevance function is defined by a keywords file (keywords starting with "-" are treated as negative).
  • crawl-relevant.py starts a crawl where the reward is given by a relevance classifier that returns a score via its .predict_proba method (see the sketch after this list).
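
The relevance classifier for crawl-relevant.py only needs to expose .predict_proba; how it is built and handed to the script is up to your project. Below is a minimal sketch of such a classifier with hypothetical training texts and output file name, saved with joblib:

# Minimal sketch of a relevance classifier for crawl-relevant.py.
# The training texts and the output file name are hypothetical; the only
# interface relied on above is .predict_proba.
import joblib
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["how to recover your password", "latest sports news"]  # hypothetical
labels = [1, 0]                                                  # 1 = relevant

clf = make_pipeline(HashingVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# clf.predict_proba(["some page text"])[0, 1] is the relevance score.
joblib.dump(clf, "relevancy-clf.joblib")  # hypothetical path to point the crawl at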

There is also an extraction spider deepdeep.spiders.extraction.ExtractionSpider that learns to extract unique items from a single domain given an item extractor.

For keywords and relevancy crawlers, the following files will be created in the result folder:

  • items.jl.gz - depending on the export_cdr argument, contains either items in CDR format (the default) or spider stats, including learning statistics (pass -a export_cdr=0); see the reading sketch after this list
  • meta.json - arguments of the spider
  • params.json - full spider parameters
  • Q-*.joblib - Q-model snapshots
  • queue-*.csv.gz - queue snapshots
  • events.out.tfevents.* - a log in TensorBoard format. Install TensorFlow to view it with the tensorboard --logdir <result folder parent> command.
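
items.jl.gz follows the usual .jl.gz convention of one gzip-compressed JSON object per line, so it can be inspected without any deep-deep code. A minimal sketch (the keys of each record depend on whether CDR items or stats were exported, so none are assumed here):

import gzip
import json

# Read the gzipped JSON-lines output produced by the crawl.
with gzip.open("items.jl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)      # one item (or stats record) per line
        print(sorted(record.keys()))   # inspect the schema of the first record
        break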

Using a trained model

You can use deep-deep just to run adaptive crawls, updating the link model and collecting crawled data at the same time. In some cases, though, it is more efficient to first train a link model with deep-deep and then use this model in another crawler: deep-deep uses a lot of memory to store page and link features, and extra CPU to update the link model, so if the link model is general enough to be frozen, you can run a more efficient crawl. You might also want to use a deep-deep link model in an existing project.

This is all possible with deepdeep.predictor.LinkClassifier: load it from a Q-*.joblib checkpoint and use the .extract_urls_from_response or .extract_urls methods to get a list of URLs with scores. An example of using this classifier in a simple Scrapy spider is given in examples/standalone.py. Note that in order to use the default Scrapy queue, the float link score is converted to an integer priority value.
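
For orientation, here is a minimal sketch of such a spider, in the spirit of examples/standalone.py. The exact loader name and the (score, url) ordering returned by extract_urls_from_response should be checked against the deep-deep source; the seed URL and the priority scaling factor below are arbitrary:

# Sketch of a plain Scrapy spider driven by a frozen deep-deep link model.
# Assumptions: LinkClassifier.load() restores the checkpoint and
# extract_urls_from_response() yields (score, url) pairs.
import scrapy
from deepdeep.predictor import LinkClassifier

class FrozenModelSpider(scrapy.Spider):
    name = 'frozen-model'
    start_urls = ['http://example.com']  # hypothetical seed

    def __init__(self, checkpoint='Q-0.joblib', **kwargs):
        super().__init__(**kwargs)
        self.link_clf = LinkClassifier.load(checkpoint)

    def parse(self, response):
        for score, url in self.link_clf.extract_urls_from_response(response):
            # The default Scrapy queue needs integer priorities, so the
            # float score is scaled (100 is an arbitrary factor).
            yield scrapy.Request(url, priority=int(score * 100))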

Note that in some rare cases the model might fail to generalize from the crawl it was trained on to the new crawl.

Model explanation

It's possible to explain model weights and predictions using the eli5 library. For that you'll need to crawl with model checkpointing enabled and with items stored in CDR format. Crawled items are used to invert the hashing-vectorizer features, and also for prediction explanation.
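
The scripts below handle this for you; purely to illustrate the inversion step, here is a generic eli5 sketch with a stand-in linear model and stand-in documents (it does not load an actual deep-deep Q-model):

import eli5
from eli5.sklearn import InvertableHashingVectorizer
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDRegressor

vec = HashingVectorizer(analyzer='char', ngram_range=(3, 5))   # char ngrams, as noted below
docs = ["recover your password", "latest sports news"]         # stand-ins for crawled items
model = SGDRegressor().fit(vec.transform(docs), [1.0, 0.0])    # stand-in for the link model

# InvertableHashingVectorizer maps hashed feature indices back to ngrams by
# re-hashing sample documents - this is why stored CDR items are needed.
ivec = InvertableHashingVectorizer(vec)
ivec.fit(docs)
print(eli5.format_as_text(eli5.explain_weights(model, vec=ivec)))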

./scripts/explain-model.py can save a model explanation to pickle or HTML, or print it in the terminal. The explanation is hard to analyze, though, because character ngram features are used.

./scripts/explain-predictions.py produces an HTML file for each crawled page showing explanations for all link scores.

Testing

To run tests, execute the following command from the deep-deep folder:

./check.sh

It requires Python 3.5+, pytest, pytest-cov, pytest-twisted and mypy.

Alternatively, run tox from the deep-deep folder.



deep-deep's Issues

URL canonicalization is inconsistent in training and prediction

As far as I can tell from the source code, URLs are canonicalized when training QSpider, but not in predictor.LinkClassifier - this can lead to slightly different predictions when URL features are used.

I would rather disable canonicalization in QSpider than enable it during prediction: I think it's more convenient to have non-canonicalized URLs in prediction (it's always possible to canonicalize them later, and Scrapy now defaults to doing no canonicalization).
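
For context, canonicalization here presumably means the normalization performed by w3lib's canonicalize_url (which Scrapy uses): query arguments are sorted, fragments dropped, and so on, so URL-based features can differ between training and prediction:

from w3lib.url import canonicalize_url

# Sorted query arguments and no fragment - a model trained on this form
# sees different URL features than a predictor fed the raw URL.
print(canonicalize_url("http://example.com/a?b=2&a=1#frag"))
# http://example.com/a?a=1&b=2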

USING THE PROJECT

@kmike Can the README be made more descriptive? It's difficult for students like us to set the project up and use it.

Add integration tests

I think some simple tests for the relevancy spider could already cover a lot of code.
