scrapy-plugins / scrapy-crawlera-fetch
Scrapy Downloader Middleware for Crawlera Fetch API
License: BSD 3-Clause "New" or "Revised" License
Hi everyone,
do you think it would be feasible to tweak the request object so that the Requests Log shows a request to the original URL, instead of to the fetch API endpoint?
My understanding of the framework is not complete, hence the question, but if you think it is feasible I would definitely be interested in contributing.
Thanks!
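One feasible angle, since Scrapy's "Crawled (...)" log line is built from the Response object's url: have the middleware's process_response swap the fetch-API endpoint back for the original target URL via response.replace(url=...). The sketch below illustrates the idea with a stand-in Response class rather than scrapy.http.Response, and the original_url meta key is an assumption for illustration (the middleware's actual key may differ):

```python
from dataclasses import dataclass, field, replace as dc_replace


@dataclass(frozen=True)
class Response:
    # Stand-in for scrapy.http.Response; real Scrapy code would call
    # response.replace(url=...), which behaves the same way (returns a
    # copy with the given attribute changed).
    url: str
    meta: dict = field(default_factory=dict)


def restore_original_url(response):
    """Mimic a process_response hook: swap the fetch-API endpoint URL
    back to the target URL so downstream log lines show the real site."""
    original = response.meta.get("original_url")  # assumed meta key
    if original:
        return dc_replace(response, url=original)
    return response
```

With this in place, engine logging (which happens after downloader middlewares run) would see the restored URL rather than the endpoint.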
I think if we set raise_on_error to False, we don't want to see these problems in the error log. We are probably trusting the retry middleware, so maybe it makes sense to move this message to the warning level?
The X-Crawlera-JobId header is used to troubleshoot specific requests in the underlying stack, as well as for stats generation. This needs to be supported by the middleware.
https://doc.scrapinghub.com/crawlera-proxy-api.html#x-crawlera-jobid
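One way the middleware could support this is by reading the running job's ID from the environment and attaching it to outgoing headers. The sketch below assumes the SHUB_JOBKEY environment variable (the Scrapy Cloud convention for exposing the job ID); treat that name as an assumption when running elsewhere:

```python
import os


def crawlera_jobid_headers(existing_headers=None, env_var="SHUB_JOBKEY"):
    """Return a headers dict with X-Crawlera-JobId set from the environment.

    SHUB_JOBKEY is assumed to hold the running job's ID (a Scrapy Cloud
    convention); if it is unset, the headers are returned unchanged.
    """
    headers = dict(existing_headers or {})
    job_id = os.environ.get(env_var)
    if job_id:
        headers["X-Crawlera-JobId"] = job_id
    return headers
```

The middleware's process_request could merge these headers into each request before forwarding it to the fetch endpoint.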
Hi guys,
I recently faced a case where several retries of requests to fetch.crawlera.com helped the spider work well. As I understood from the discussion here https://zytegroup.slack.com/archives/C014HA686ES/p1612975265044000, uncork does 3 retries, but not for all failures.
I've implemented this as a temporary fix by retrying requests right in the spider. We could do this with a custom retry middleware, sure, but we would need to add it to every spider/project.
To make things simpler: is it possible to add this right into the CrawleraFetchMiddleware, and add meta parameters for retry reasons/retry times along with the existing "skip" parameter?
The failure reasons I've encountered:
- "crawlera_status": "fail"
- "crawlera_status": "ban"
- "crawlera_error": "timeout"
Thanks.
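The retry decision above could be sketched as a small helper that the middleware (or, for now, a spider) consults before re-issuing a request. The key names mirror the response fields quoted in this issue; treat them as assumptions about the API payload:

```python
# Which fetch-API outcomes are worth retrying, per the failure reasons
# listed above. Status/error key names are assumptions based on the
# quoted response fields.
RETRYABLE_STATUSES = {"fail", "ban"}
RETRYABLE_ERRORS = {"timeout"}


def should_retry(payload, retry_times, max_retry_times=3):
    """Decide whether a Crawlera Fetch response warrants a retry.

    payload      -- the decoded JSON body of the fetch-API response
    retry_times  -- how many times this request has already been retried
    """
    if retry_times >= max_retry_times:
        return False
    if payload.get("crawlera_status") in RETRYABLE_STATUSES:
        return True
    if payload.get("crawlera_error") in RETRYABLE_ERRORS:
        return True
    return False
```

In a middleware, a True result would translate into returning a copy of the original request with an incremented retry counter in its meta, much like Scrapy's built-in RetryMiddleware does.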
Example:
https://app.zyte.com/p/515885/7/13/log?line=173
Right now it is not possible to know which URL triggered the error in the spider logic.
crawlera_fetch
The middleware doesn't work on methods decorated with the scrapy-inline-requests decorator.
2021-12-03 20:03:07 [scrapy.utils.log] INFO: Scrapy 2.5.0 started (bot: bvbot)
2021-12-03 20:03:08 [scrapy.utils.log] INFO: Versions: lxml 4.6.3.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 21.7.0, Python 3.8.11 (default, Aug 6 2021, 09:57:55) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 21.0.0 (OpenSSL 1.1.1l 24 Aug 2021), cryptography 3.4.7, Platform Windows-10-10.0.19043-SP0
2021-12-03 20:03:08 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2021-12-03 20:03:10 [scrapy.crawler] INFO: Overridden settings:
...
2021-12-03 20:03:19 [scrapy.core.engine] INFO: Spider opened
2021-12-03 20:03:20 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-12-03 20:03:20 [crawlera-fetch-middleware] INFO: Using Crawlera Fetch API at http://cm-58.scrapinghub.com:8010/fetch/v2/ with apikey *****
2021-12-03 20:03:20 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
....
2021-12-03 20:04:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://website... > (referer: None latency: 0.00)
2021-12-03 20:04:14 [scrapy.core.scraper] ERROR: Error downloading <GET https://website... >
Traceback (most recent call last):
File "C:\Users\...\envs\...\lib\site-packages\twisted\internet\defer.py", line 1661, in _inlineCallbacks
result = current_context.run(gen.send, result)
File "C:\Users\...\envs\...\lib\site-packages\scrapy\core\downloader\middleware.py", line 36, in process_request
response = yield deferred_from_coro(method(request=request, spider=spider))
File "C:\Users\...\envs\...\lib\site-packages\crawlera_fetch\middleware.py", line 142, in process_request
"original_request": request_to_dict(request, spider=spider),
File "C:\Users\...\envs\...\lib\site-packages\scrapy\utils\reqser.py", line 19, in request_to_dict
cb = _find_method(spider, cb)
File "C:\Users\...\envs\...\lib\site-packages\scrapy\utils\reqser.py", line 87, in _find_method
raise ValueError(f"Function {func} is not an instance method in: {obj}")
ValueError: Function functools.partial(<bound method RequestGenerator._handleSuccess of <inline_requests.generator.RequestGenerator object at 0x000001525EFCF4F0>>, generator=<generator object TestSpider.parse_product at 0x000001525EF63CF0>) is not an instance method in: <TestSpider 'test_spider' at 0x1525d8b1490>
2021-12-03 20:04:14 [scrapy.core.scraper] ERROR: Spider error processing <GET https://website... > (referer: https://website... )
Traceback (most recent call last):
File "C:\Users\...\envs\...\lib\site-packages\twisted\internet\defer.py", line 858, in _runCallbacks
current.result = callback( # type: ignore[misc]
File "C:\Users\...\envs\...\lib\site-packages\inline_requests\generator.py", line 107, in _handleFailure
ret = failure.throwExceptionIntoGenerator(generator)
File "C:\Users\...\proj\spiders\test.py", line 280, in parse
stock_response = yield Request(
File "C:\Users\...\envs\...\lib\site-packages\twisted\internet\defer.py", line 1661, in _inlineCallbacks
result = current_context.run(gen.send, result)
File "C:\Users\...\envs\...\lib\site-packages\scrapy\core\downloader\middleware.py", line 36, in process_request
response = yield deferred_from_coro(method(request=request, spider=spider))
File "C:\Users\...\envs\...\lib\site-packages\crawlera_fetch\middleware.py", line 142, in process_request
"original_request": request_to_dict(request, spider=spider),
File "C:\Users\...\envs\...\lib\site-packages\scrapy\utils\reqser.py", line 19, in request_to_dict
cb = _find_method(spider, cb)
File "C:\Users\...\envs\...\lib\site-packages\scrapy\utils\reqser.py", line 87, in _find_method
raise ValueError(f"Function {func} is not an instance method in: {obj}")
ValueError: Function functools.partial(<bound method RequestGenerator._handleSuccess of <inline_requests.generator.RequestGenerator object at 0x000001525EFCF4F0>>, generator=<generator object TestSpider.parse at 0x000001525EF63CF0>) is not an instance method in: <TestSpider 'test_spider' at 0x1525d8b1490>
2021-12-03 20:04:14 [scrapy.core.engine] INFO: Closing spider (finished)
2021-12-03 20:04:14 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
...
2021-12-03 20:04:14 [scrapy.core.engine] INFO: Spider closed (finished)
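The traceback shows why this fails: inline_requests replaces the request callback with a functools.partial wrapping a bound method of its own RequestGenerator, so when the middleware serializes the request via request_to_dict, Scrapy's _find_method cannot resolve the callback as a method of the spider and raises ValueError. A minimal reproduction of that mechanism, with stand-in classes and a simplified version of the lookup (not Scrapy's exact implementation):

```python
import functools


class TestSpider:
    def parse(self, response):
        return response


class RequestGenerator:
    # Stand-in for inline_requests' RequestGenerator: it substitutes the
    # spider callback with a partial around its own bound method.
    def _handleSuccess(self, response, generator=None):
        return response


def find_method(spider, func):
    """Simplified sketch of scrapy.utils.reqser._find_method: it only
    accepts callbacks that are bound instance methods of the spider."""
    if hasattr(func, "__func__") and getattr(func, "__self__", None) is spider:
        return func.__func__.__name__
    raise ValueError(f"Function {func} is not an instance method in: {spider}")
```

A plain bound method like spider.parse resolves fine, but the partial produced by inline_requests has no __func__/__self__ pointing at the spider, so serialization fails exactly as in the log above.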
I got an error when I started using the latest version:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
result = g.send(result)
File "/usr/local/lib/python3.8/site-packages/scrapy/core/downloader/middleware.py", line 36, in process_request
response = yield deferred_from_coro(method(request=request, spider=spider))
File "/app/python/lib/python3.8/site-packages/crawlera_fetch/middleware.py", line 106, in process_request
"original_request": request_to_dict(request),
File "/usr/local/lib/python3.8/site-packages/scrapy/utils/reqser.py", line 19, in request_to_dict
cb = _find_method(spider, cb)
File "/usr/local/lib/python3.8/site-packages/scrapy/utils/reqser.py", line 92, in _find_method
raise ValueError("Function %s is not a method of: %s" % (func, obj))
ValueError: Function <bound method UniversalParserSpider.parse_item of <AmazonExtractionSpider 'amazon_extraction' at 0x7f56023bf670>> is not a method of: None
Should we pass the spider here?
https://github.com/scrapy/scrapy/blob/master/scrapy/utils/reqser.py#L11
Module scrapy.utils.reqser is deprecated, please use request.to_dict method and/or scrapy.utils.request.request_from_dict instead
from scrapy.utils.reqser import request_from_dict, request_to_dict
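Per the deprecation message, newer Scrapy exposes Request.to_dict and scrapy.utils.request.request_from_dict instead of scrapy.utils.reqser. A compatibility shim along these lines could let the middleware support both; the final None fallback exists only so the sketch also loads where Scrapy is absent:

```python
# Compatibility sketch: prefer the new serialization API named in the
# deprecation message, falling back to scrapy.utils.reqser on older
# Scrapy versions.
try:
    from scrapy.utils.request import request_from_dict  # newer Scrapy

    def request_to_dict(request, spider=None):
        # Request.to_dict() replaces the deprecated module-level helper;
        # passing the spider lets callbacks be resolved by name.
        return request.to_dict(spider=spider)

except ImportError:
    try:
        from scrapy.utils.reqser import request_from_dict, request_to_dict
    except ImportError:  # Scrapy not installed; sketch-only fallback
        request_from_dict = request_to_dict = None
```

Note that the traceback above shows middleware.py calling request_to_dict(request) without the spider argument, which is what produces the "is not a method of: None" error; any version of this helper should forward the spider.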