Comments (7)
Request limiting is achieved by the number of workers atm. Could you explain why it does not meet your needs?
from crawly.
Yes, currently I drop the worker count to one, but it still averages 60–70 requests per minute, which is still a tad too high for my liking. That is over 3,600 requests per hour, which would likely be flagged by anomaly-based firewall systems.
So far I haven't had much of an issue, but it would be nice to have granular control over requests/min.
Perhaps tag this as a "nice-to-have"?
Wow. I can't get more than 50 rpm from two workers on the CrawlyUI demo, for example.
I was not expecting it to be a problem, but indeed, it should be fixed.
A worker makes a request once per 300 milliseconds (https://github.com/oltarasenko/crawly/blob/master/lib/crawly/worker.ex#L11). Usually the HTTP part is the bottleneck here. However, we can make it configurable. Let me implement this quickly so you can have it sooner.
Do you need it quickly? I can cut a 0.10.1 release for this tomorrow.
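A configurable backoff might be exposed through the app config; as a sketch, assuming a hypothetical `request_backoff` option (not an existing Crawly setting):

```elixir
# Hypothetical config sketch: `request_backoff` is an assumed option name,
# not part of Crawly's documented configuration.
config :crawly,
  # pause each worker takes between requests, in milliseconds;
  # 12_000 ms per request ≈ 5 requests per minute per worker
  request_backoff: 12_000
```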
It isn't urgent, no need to rush it. It's just something I noticed, and it has been on my mind since our discussion in Jan about how requests are fetched #39 (comment), as this type of throttling customization could be achieved through a "pipeline" module between the fetching and data-storage portions in the diagram.
For example, I could do things like randomize the throttle rate, or base the throttle rate on some calculation.
But yea, for some reason my machine makes the requests quite quickly.
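As a sketch of that pipeline idea, a throttling step could be a module implementing the `Crawly.Pipeline` behaviour that sleeps a random interval before passing each request on; the module name and the `:max_delay_ms` option here are illustrative, not part of Crawly:

```elixir
defmodule MyApp.Middlewares.RandomDelay do
  @moduledoc """
  Hypothetical throttling middleware sketch: sleeps a random interval
  before letting each request through. Illustrative only.
  """
  @behaviour Crawly.Pipeline

  @impl Crawly.Pipeline
  def run(request, state, opts \\ []) do
    # Upper bound for the random delay, in milliseconds (assumed option name).
    max_ms = Keyword.get(opts, :max_delay_ms, 5_000)
    Process.sleep(:rand.uniform(max_ms))
    # Pass the request through unchanged.
    {request, state}
  end
end
```

Because the sleep happens inside the pipeline, the delay could just as easily be computed from load, time of day, or any other calculation instead of `:rand.uniform/1`.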
@Ziinc Actually, I don't want the flexibility of assigning a given speed to a given worker. It would produce unpredictable results when different workers run at different speeds, so it would be hard to reason about why something is faster and something else is slower.
Currently, my inclination is to hardcode the workers' speed at some value, the same for all workers. E.g. we could cap each worker at 5 requests per minute, or even 1 request per minute. What do you think?
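Under that scheme the overall rate is simply the per-worker cap times the worker count; a quick illustration with assumed numbers:

```elixir
# Total throughput under a fixed per-worker cap (illustrative arithmetic):
workers = 2
per_worker_rpm = 5
total_rpm = workers * per_worker_rpm
# 2 workers * 5 req/min = 10 requests per minute overall,
# so the crawl rate is tuned by changing the worker count.
```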
Hopefully, it could improve your case.
Many thanks! So adjusting the request rate will be based on the number of workers? An interesting approach. Will review the PR.
Related Issues (20)
- custom parsar callback sample HOT 7
- Could not compile dependency :epipe HOT 5
- Demo page not loading HOT 3
- Setting up a parametric spider (dynamic base_url and start_urls) HOT 1
- Use a more reliable website to crawl in tutorial HOT 1
- Any working examples? HOT 4
- jl files not found probably not writing HOT 1
- Crawly.fetch giving 301 response instead of 200 HOT 1
- My Spider's code is never invoked, weird behavior with `Crawly.RequestsStorage.pop` in library code HOT 5
- This is actually a question, Nested scraping HOT 2
- Genserver time out crash in long-running pipeline HOT 1
- Stop and resume the spider where it stopped HOT 2
- Protocol error HOT 7
- `Crawly.Fetchers.Fetcher` implementation for Playwright HOT 4
- robots.txt matching is pretty buggy HOT 10
- Running many instances of one spider HOT 3
- Make the management tool opt-in by default HOT 5
- Q: Can the spider "fan out" on a website? (multiple next items) HOT 1
- Error: Could not load spiders. HOT 5
- [error] Pipeline crash by call: Crawly.Middlewares.UniqueRequest.run