
scrapy-crawlera-fetch's People

Contributors

elacuesta, ionut-ciubotariu, starrify, verz1lka

scrapy-crawlera-fetch's Issues

Visibility for Original URLs in Requests Log

Hi everyone,

Do you think it would be feasible to tweak the request object so that the requests log shows a request to the original URL instead of to the Fetch API endpoint?

[Screenshot from 2021-07-26 showing the requests log with the Fetch API endpoint instead of the original URLs]

My understanding of the framework is limited, hence the question, but if you think it is feasible I would definitely be interested in contributing. (One possible approach is sketched below.)

Thanks!
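
A hedged sketch of one possible approach: a custom Scrapy LogFormatter that swaps the logged URL for the original one. It assumes the middleware keeps the pre-conversion request in request.meta under a "crawlera_fetch" key with an "original_request" dict containing a "url" entry; that key layout is an assumption, not the middleware's documented API.

# Sketch only; the "crawlera_fetch" / "original_request" meta layout is assumed.
from scrapy import logformatter


class OriginalUrlLogFormatter(logformatter.LogFormatter):
    def crawled(self, request, response, spider):
        entry = super().crawled(request, response, spider)
        original = request.meta.get("crawlera_fetch", {}).get("original_request") or {}
        if original.get("url"):
            # Replace the Request object in the log args with a plain string
            # that shows the original URL instead of the Fetch API endpoint.
            entry["args"]["request"] = f"<{request.method} {original['url']}>"
        return entry

# settings.py (module path is hypothetical):
# LOG_FORMATTER = "myproject.logformatters.OriginalUrlLogFormatter"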

Add retry mechanism into middleware

Hi guys,
I recently ran into a case where retrying requests to fetch.crawlera.com a few times was enough for the spider to recover. As I understand from the discussion here https://zytegroup.slack.com/archives/C014HA686ES/p1612975265044000, uncork does 3 retries, but not for all failures.
I've implemented a temporary fix by retrying requests right in the spider (a sketch follows the list below). We could of course do this with a custom retry middleware, but then we would need to add it to every spider/project.
To make things simpler: would it be possible to add retries right into CrawleraFetchMiddleware, with meta parameters for retry reasons/retry times alongside the existing "skip" parameter?

The failure reasons I've encountered so far:
"crawlera_status":"fail"
"crawlera_status":"ban"
"crawlera_error":"timeout"

Thanks.

`crawlera_fetch` middleware doesn't work with `@inline_requests`

The crawlera_fetch middleware doesn't work on spider methods decorated with scrapy-inline-requests' @inline_requests (a minimal reproduction sketch follows the log output).

Log output
2021-12-03 20:03:07 [scrapy.utils.log] INFO: Scrapy 2.5.0 started (bot: bvbot)
2021-12-03 20:03:08 [scrapy.utils.log] INFO: Versions: lxml 4.6.3.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 21.7.0, Python 3.8.11 (default, Aug  6 2021, 09:57:55) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 21.0.0 (OpenSSL 1.1.1l  24 Aug 2021), cryptography 3.4.7, Platform Windows-10-10.0.19043-SP0
2021-12-03 20:03:08 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2021-12-03 20:03:10 [scrapy.crawler] INFO: Overridden settings:
...
2021-12-03 20:03:19 [scrapy.core.engine] INFO: Spider opened
2021-12-03 20:03:20 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-12-03 20:03:20 [crawlera-fetch-middleware] INFO: Using Crawlera Fetch API at http://cm-58.scrapinghub.com:8010/fetch/v2/ with apikey *****
2021-12-03 20:03:20 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
....
2021-12-03 20:04:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://website... > (referer: None latency: 0.00)
2021-12-03 20:04:14 [scrapy.core.scraper] ERROR: Error downloading <GET https://website... >
Traceback (most recent call last):
  File "C:\Users\...\envs\...\lib\site-packages\twisted\internet\defer.py", line 1661, in _inlineCallbacks
    result = current_context.run(gen.send, result)
  File "C:\Users\...\envs\...\lib\site-packages\scrapy\core\downloader\middleware.py", line 36, in process_request
    response = yield deferred_from_coro(method(request=request, spider=spider))
  File "C:\Users\...\envs\...\lib\site-packages\crawlera_fetch\middleware.py", line 142, in process_request
    "original_request": request_to_dict(request, spider=spider),
  File "C:\Users\...\envs\...\lib\site-packages\scrapy\utils\reqser.py", line 19, in request_to_dict
    cb = _find_method(spider, cb)
  File "C:\Users\...\envs\...\lib\site-packages\scrapy\utils\reqser.py", line 87, in _find_method
    raise ValueError(f"Function {func} is not an instance method in: {obj}")
ValueError: Function functools.partial(<bound method RequestGenerator._handleSuccess of <inline_requests.generator.RequestGenerator object at 0x000001525EFCF4F0>>, generator=<generator object TestSpider.parse_product at 0x000001525EF63CF0>) is not an instance method in: <TestSpider 'test_spider' at 0x1525d8b1490>
2021-12-03 20:04:14 [scrapy.core.scraper] ERROR: Spider error processing <GET https://website... > (referer: https://website...  )
Traceback (most recent call last):
  File "C:\Users\...\envs\...\lib\site-packages\twisted\internet\defer.py", line 858, in _runCallbacks
    current.result = callback(  # type: ignore[misc]
  File "C:\Users\...\envs\...\lib\site-packages\inline_requests\generator.py", line 107, in _handleFailure
    ret = failure.throwExceptionIntoGenerator(generator)
  File "C:\Users\...\proj\spiders\test.py", line 280, in parse
    stock_response = yield Request(
  File "C:\Users\...\envs\...\lib\site-packages\twisted\internet\defer.py", line 1661, in _inlineCallbacks
    result = current_context.run(gen.send, result)
  File "C:\Users\...\envs\...\lib\site-packages\scrapy\core\downloader\middleware.py", line 36, in process_request
    response = yield deferred_from_coro(method(request=request, spider=spider))
  File "C:\Users\...\envs\...\lib\site-packages\crawlera_fetch\middleware.py", line 142, in process_request
    "original_request": request_to_dict(request, spider=spider),
  File "C:\Users\...\envs\...\lib\site-packages\scrapy\utils\reqser.py", line 19, in request_to_dict
    cb = _find_method(spider, cb)
  File "C:\Users\...\envs\...\lib\site-packages\scrapy\utils\reqser.py", line 87, in _find_method
    raise ValueError(f"Function {func} is not an instance method in: {obj}")
ValueError: Function functools.partial(<bound method RequestGenerator._handleSuccess of <inline_requests.generator.RequestGenerator object at 0x000001525EFCF4F0>>, generator=<generator object TestSpider.parse at 0x000001525EF63CF0>) is not an instance method in: <TestSpider 'test_spider' at 0x1525d8b1490>
2021-12-03 20:04:14 [scrapy.core.engine] INFO: Closing spider (finished)
2021-12-03 20:04:14 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
...
2021-12-03 20:04:14 [scrapy.core.engine] INFO: Spider closed (finished)
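
A minimal reproduction sketch of the incompatibility; the spider name and URLs are placeholders. Yielding a Request from an @inline_requests-decorated callback while the middleware is enabled makes request_to_dict() fail, because the request's callback is a functools.partial created by inline_requests rather than an instance method of the spider.

import scrapy
from inline_requests import inline_requests


class TestSpider(scrapy.Spider):
    name = "test_spider"
    start_urls = ["https://example.com/product"]

    @inline_requests
    def parse(self, response):
        # The middleware tries to serialize this request; its callback is not a
        # spider method, so scrapy.utils.reqser raises ValueError.
        stock_response = yield scrapy.Request("https://example.com/stock")
        yield {"stock_status": stock_response.status}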

Missing spider in request_to_dict

I got an error when I started using the latest version:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python3.8/site-packages/scrapy/core/downloader/middleware.py", line 36, in process_request
    response = yield deferred_from_coro(method(request=request, spider=spider))
  File "/app/python/lib/python3.8/site-packages/crawlera_fetch/middleware.py", line 106, in process_request
    "original_request": request_to_dict(request),
  File "/usr/local/lib/python3.8/site-packages/scrapy/utils/reqser.py", line 19, in request_to_dict
    cb = _find_method(spider, cb)
  File "/usr/local/lib/python3.8/site-packages/scrapy/utils/reqser.py", line 92, in _find_method
    raise ValueError("Function %s is not a method of: %s" % (func, obj))
ValueError: Function <bound method UniversalParserSpider.parse_item of <AmazonExtractionSpider 'amazon_extraction' at 0x7f56023bf670>> is not a method of: None

Should we pass the spider here?

"original_request": request_to_dict(request),

https://github.com/scrapy/scrapy/blob/master/scrapy/utils/reqser.py#L11
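
For reference, a sketch of the change being asked about (which the traceback in the @inline_requests issue above suggests later middleware versions already apply): pass the spider through so that scrapy.utils.reqser can resolve callbacks that are bound spider methods.

# In crawlera_fetch/middleware.py, process_request (sketch of the suggested fix)
"original_request": request_to_dict(request, spider=spider),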
