
crawlergo's People

Contributors

1oca1h0st, chushuai, danielintruder, dependabot[bot], heisenbergv, moond4rk, pengdacn, pigfaces, qianlitp, tuuunya, zjj


crawlergo's Issues

Crawler tasks do not fully release Chrome processes on termination

Environment: Windows 10, crawlergo 0.1.2, Chrome
A single test run worked well, so I prepared to crawl in batches.
After a single process had run for a day, the machine froze; CPU was exhausted, and a pile of Chrome processes remained in the background.
It appears that after a crawlergo task ends, some Chrome processes are not closed properly.

ERRO[0005] navigate timeout context deadline exceeded

I tried several sites and they all report a timeout; the network itself is fine.

2052 ◯ ./crawlergo -c /opt/bugbounty/chrome-linux/chrome -t 20 http://testphp.vulnweb.com/
Crawling GET https://testphp.vulnweb.com/
Crawling GET http://testphp.vulnweb.com/
ERRO[0005] navigate timeout context deadline exceeded
ERRO[0005] http://testphp.vulnweb.com/
--[Mission Complete]--
GET http://testphp.vulnweb.com/ HTTP/1.1
Spider-Name: crawlergo-0KeeTeam
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.0 Safari/537.36

GET https://testphp.vulnweb.com/ HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.0 Safari/537.36
Spider-Name: crawlergo-0KeeTeam

Could setting default parameter values be supported?

From my testing so far, if I set the POST data to 'username=admin&password=password', it is tried only once, the other parameters that appear on the same page are ignored, and subsequent requests keep using the defaults such as KeeTeam for username and password. Once username=admin is set, could every place where username appears use admin instead of KeeTeam, and likewise for password?
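
Until such an option exists, one caller-side approximation is to rewrite the crawler's discovered URLs with your own defaults after the crawl. A minimal sketch; the parameter names and URL are illustrative:

from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# User-supplied defaults: wherever these parameter names occur,
# substitute our values for the crawler's built-in filler.
DEFAULTS = {"username": "admin", "password": "password"}

def fill_defaults(url):
    # Replace the values of known parameters in a URL's query string.
    parts = urlsplit(url)
    query = [(k, DEFAULTS.get(k, v)) for k, v in parse_qsl(parts.query)]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(fill_defaults("http://example.com/login?username=KeeTeam&password=KeeTeam"))
# -> http://example.com/login?username=admin&password=password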

Crawling a certain site takes too long

Target site: https://www.che168.com/
It has been crawling for two days and still has not finished, so I hope the author can help look into why.
crawlergo is chained into a program of my own; the program keeps crawling and can never finish.
Going forward, how should I constrain the maximum crawl time, or the depth? (A caller-side sketch follows the URL sample below.)

A sample of the crawled URLs:

http://www.che168.com/suihua/suva0-suva-suvb-suvc-suvd/0_8/a0_0msdgscncgpi1ltocsp1exa16/#pvareaid=100943
https://www.che168.com/china/baoma/baoma5xi/0_5/a3_8msdgscncgpi1ltocspexx0a1/#pvareaid=108403%23seriesZong
http://www.che168.com/jiangsu/suva0-suva-suvb-suvc-suvd/0_8/a0_0msdgscncgpi1ltocsp1exa16/
http://www.che168.com/nanjing/suva0-suva-suvb-suvc-suvd/0_8/a0_0msdgscncgpi1ltocsp1exa16/#pvareaid=100943
https://www.che168.com/china/aodi/aodia6l/0_5/a3_8msdgscncgpi1ltocspexx0a1/#pvareaid=108403%23seriesZong
http://www.che168.com/xuzhou/suva0-suva-suvb-suvc-suvd/0_8/a0_0msdgscncgpi1ltocsp1exa16/#pvareaid=100943
http://www.che168.com/wuxi/suva0-suva-suvb-suvc-suvd/0_8/a0_0msdgscncgpi1ltocsp1exa16/#pvareaid=100943
https://www.che168.com/china/baoma/baoma3xi/0_5/a3_8msdgscncgpi1ltocspexx0a1/#pvareaid=108403%23seriesZong
http://www.che168.com/changzhou/suva0-suva-suvb-suvc-suvd/0_8/a0_0msdgscncgpi1ltocsp1exa16/#pvareaid=100943
http://www.che168.com/suzhou/suva0-suva-suvb-suvc-suvd/0_8/a0_0msdgscncgpi1ltocsp1exa16/#pvareaid=1009
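
One possible caller-side bound, combining flags that appear in other reports here (-m for the max crawl count, --tab-run-timeout for the per-tab budget) with a hard timeout around the subprocess. A minimal sketch; the chrome path and limit values are placeholders:

import subprocess

cmd = [
    "./crawlergo",
    "-c", "/path/to/chrome",       # placeholder path
    "-m", "200",                   # max crawl count, as seen in other issues here
    "--tab-run-timeout", "120s",   # per-tab time budget
    "https://www.che168.com/",
]

try:
    # subprocess.run kills the child once the timeout expires,
    # giving a hard upper bound regardless of the crawler's state.
    subprocess.run(cmd, timeout=3600)
except subprocess.TimeoutExpired:
    print("crawl exceeded one hour and was aborted")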

Running on Mac reports a "navigate timeout fork/exec /Applications/Chrome.app: permission denied" error

The exact error output:
$ ./crawlergo -c /Applications/Chrome.app -t 20 https://www.baidu.com
INFO[0000] Init crawler task, host: www.baidu.com, max tab count: 20, max crawl count: 200.
INFO[0000] filter mode: smart
INFO[0000] Start crawling.
INFO[0000] filter repeat, target count: 2
INFO[0000] Crawling GET http://www.baidu.com/
INFO[0000] Crawling GET https://www.baidu.com/
WARN[0000] navigate timeout fork/exec /Applications/Chrome.app: permission deniedhttp://www.baidu.com/
WARN[0000] navigate timeout fork/exec /Applications/Chrome.app: permission deniedhttps://www.baidu.com/
INFO[0000] closing browser.

Both crawlergo and Chrome.app have been given execute permission; Chrome.app was renamed from "Google Chrome.app".
macOS 10.14.5, Chrome 80.0.3987.87 (official build, 64-bit), Python 3.7.6
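
One likely cause, judging from the Chromium command in the WaitGroup issue further down: a macOS .app bundle is a directory, and fork/exec on a directory fails with exactly this permission error, so the -c path should point at the executable inside the bundle. A minimal sketch, assuming a standard Chrome install:

import subprocess

# Pass the executable inside the bundle, not the bundle root.
chrome_bin = "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"

subprocess.run(["./crawlergo", "-c", chrome_bin, "-t", "20", "https://www.baidu.com"])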

ERRO[0000] navigate timeout 'Fetch.enable' wasn't found (-32601)

CentOS Linux release 7.6.1810 (Core)

[root@VM_0_17_centos data]# ./crawlergo -c /root/.local/share/pyppeteer/local-chromium/575458/chrome-linux/chrome -t 10 http://testphp.vulnweb.com
Crawling GET https://testphp.vulnweb.com/
Crawling GET http://testphp.vulnweb.com/
ERRO[0000] navigate timeout 'Fetch.enable' wasn't found (-32601)
ERRO[0000] https://testphp.vulnweb.com/
ERRO[0000] navigate timeout 'Fetch.enable' wasn't found (-32601)
ERRO[0000] http://testphp.vulnweb.com/
--[Mission Complete]--
GET http://testphp.vulnweb.com/ HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.0 Safari/537.36
Spider-Name: crawlergo-0KeeTeam


GET https://testphp.vulnweb.com/ HTTP/1.1
Spider-Name: crawlergo-0KeeTeam
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.0 Safari/537.36

Calls from Python occasionally hang

Partial log:

alling _exit(1). Core file will not be generated.
http:///components
WARN[0006] navigate timeout context deadline exceededhttp://A
WARN[0006] navigate timeout context deadline exceededhttp://A
INFO[0009] Crawling GET http://A/api.php
INFO[0009] Crawling GET http://A/uc_client/
WARN[0021] navigate timeout unable to execute *log.EnableParams: context deadline exceededhttp://A/connect.php
WARN[0021] navigate timeout unable to execute *log.EnableParams: context deadline exceededhttp://A/*?mod=misc*
INFO[0021] closing browser.
> This time it hung: no output, no response to pressing Enter, stuck for 24 hours with no reaction.

This has now happened 3 times, and I cannot pin down the cause, because the very same Python code sometimes runs without any problem and sometimes hangs.
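
A defensive pattern for the calling side, not from the original thread: run crawlergo in its own process group, bound the wait, and kill the whole group on expiry so any orphaned Chrome children die with it. A minimal sketch (POSIX only; the path and timeout are placeholders):

import os
import signal
import subprocess

cmd = ["./crawlergo", "-c", "/path/to/chrome", "-o", "json", "http://example.com/"]

# start_new_session puts the crawler, and the Chrome processes it spawns,
# into a fresh process group that can be killed as a unit.
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        start_new_session=True)
try:
    output, errs = proc.communicate(timeout=600)     # ten-minute hard cap
except subprocess.TimeoutExpired:
    os.killpg(os.getpgid(proc.pid), signal.SIGKILL)  # reap the whole group
    output, errs = proc.communicate()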

[Feature request] Limits on machine resources and crawler behavior

Some baseline crawler requirements, mainly with the problems that can arise when crawling large sites in mind:
memory caps; rate limiting; a cap on the number of Chrome processes; CPU limits.

  1. Resource usage limits, such as a memory cap and a CPU limit (three tabs are already too much for my machine). I don't know whether the headless Chrome driven here can be controlled this way; the better the machine, the more resources it consumes (a rough caller-side approximation is sketched after this list).
  2. Crawl speed limits: perhaps low and medium speed presets? Some sites need low-frequency scanning. The relationship between tab count and QPS is still fuzzy; on a fast machine one tab maps to many Chrome processes and crawls very quickly.
  3. A limit on crawl depth.
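
On the memory point: on Linux, rlimits set before exec are inherited across fork/exec, so a per-process address-space cap can be approximated from the caller even if the crawler itself offers no knob. A minimal sketch under that assumption (the 2 GiB figure is arbitrary, and Chrome may simply crash when it hits the cap):

import resource
import subprocess

def limit_memory():
    # Cap the address space of crawlergo and of every Chrome process it
    # spawns at 2 GiB per process; rlimits are inherited by children.
    two_gib = 2 * 1024 ** 3
    resource.setrlimit(resource.RLIMIT_AS, (two_gib, two_gib))

subprocess.run(["./crawlergo", "-c", "/path/to/chrome", "http://example.com/"],
               preexec_fn=limit_memory)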

New feature request: screenshots + load-balanced crawling mode

  1. The ability to screenshot certain pages for manual verification, especially when keywords such as "后台" (admin area) or "管理" (management) are matched.
  2. Support for the multi-subdomain + multi-fuzz-path scenario, with requests distributed evenly across targets.

For the second point, my current approach is to take the paths fuzzed out by dirsearch, filter them into a list, join the list into one string, and call crawlergo via subprocess; the drawbacks of this are obvious.

I wonder whether the author has any plans in this direction. Thanks!

My current approach is concatenation. For example, for http://www.A.com with two known paths, /path_a and /path_b, the command becomes:
crawlergo -c chrome http://www.A.com/ http://www.A.com/path_a http://www.A.com/path_b

Two questions:

  1. If there are many known paths, concatenating them by hand is tedious.
  2. Does passing targets this way give the same results as running them one by one, or is there a difference? I have not verified this.

Of course, a flag that takes multiple paths as entry points would be ideal in a future release (a scripted version of the concatenation is sketched below).

Originally posted by @djerrystyle in #31 (comment)
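
The manual concatenation above is easy to script. A minimal sketch, assuming the known paths sit in a file named paths.txt, one per line (e.g. filtered dirsearch output):

import subprocess

base = "http://www.A.com"

# One entry point per known path, plus the site root itself.
with open("paths.txt") as f:
    targets = [base + "/"] + [base + p.strip() for p in f if p.strip()]

subprocess.run(["./crawlergo", "-c", "chrome"] + targets)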

WaitGroup reuse panic when running on Mac

Environment:
Darwin ZBMAC-C02VQ02-5.local 17.2.0 Darwin Kernel Version 17.2.0: Fri Sep 29 18:27:05 PDT 2017; root:xnu-4570.20.62~3/RELEASE_X86_64 x86_64

Command:
./crawlergo -c /Applications/Chromium.app/Contents/MacOS/Chromium -f smart -o json -t 5 http://www.baidu.com

Error:
panic: sync: WaitGroup is reused before previous Wait has returned

goroutine 93421 [running]:
sync.(*WaitGroup).Wait(0xc0093f59a0)
C:/Go/src/sync/waitgroup.go:132 +0xae
ioscan-ng/src/tasks/crawlergo/engine.(*Tab).Start.func3(0xc0093f5800)
D:/go_projects/ioscan-ng/src/tasks/crawlergo/engine/tab.go:229 +0x34
created by ioscan-ng/src/tasks/crawlergo/engine.(*Tab).Start
D:/go_projects/ioscan-ng/src/tasks/crawlergo/engine/tab.go:227 +0x4f1

Apart from that, choosing -o json produced no output, and every request timed out.

A URL with an explicit port returns results instantly

./crawlergo_linux -c chrome-linux/chrome -output-mode json http://A.B.com:80/
After running, it instantly returns the following:

--[Mission Complete]--
{"req_list":null,"all_domain_list":[xxxxx],"all_req_list":[xxxxx]}

But:
./crawlergo_linux -c chrome-linux/chrome -output-mode json http://A.B.com/

Crawling GET http://A.B.com/
DEBU[0000] 
DEBU[0006] context deadline exceeded
--[Mission Complete]--
{"req_list":[xxxxx],"all_domain_list":[xxxxx],"sub_domain_list":[xxxxx]}

Support proxy configuration

I hope proxy configuration can be supported, to make testing under different network environments convenient. It can be achieved with proxychains and similar methods, but native support would be more convenient :)
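
Until native support exists, the proxychains route mentioned above can at least be scripted; whether its hooks fully cover Chrome's multi-process networking is worth verifying first. A minimal sketch, assuming proxychains is installed and its config points at your proxy:

import subprocess

# proxychains intercepts the wrapped process tree's TCP connections
# according to its own config file; no crawlergo flag is involved.
subprocess.run(["proxychains", "./crawlergo", "-c", "/path/to/chrome",
                "http://testphp.vulnweb.com/"])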

Errors out immediately

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0x16b9e8f]

goroutine 975 [running]:
ioscan-ng/src/tasks/crawlergo/engine.(*Tab).InterceptRequest(0xc00062c1c0, 0xc0005e5d80)
D:/go_projects/ioscan-ng/src/tasks/crawlergo/engine/intercept_request.go:42 +0x25f
created by ioscan-ng/src/tasks/crawlergo/engine.NewTab.func1
D:/go_projects/ioscan-ng/src/tasks/crawlergo/engine/tab.go:90 +0x2e8

Adding multiple targets at once

If there are a lot of pages to crawl, i.e. many targets, spawning a child process per target with ./crawlergo target carries noticeable overhead. Could targets be added to the crawl list all at once? (A batching sketch follows.)
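
Since crawlergo already accepts several URLs on one command line (see the path-concatenation issue above), a caller can at least amortize process startup by batching targets. A minimal sketch; the batch size is arbitrary:

import subprocess

def crawl_all(targets, chrome="/path/to/chrome", batch_size=20):
    # One crawlergo process per batch of targets instead of one per target.
    for i in range(0, len(targets), batch_size):
        subprocess.run(["./crawlergo", "-c", chrome] + targets[i:i + batch_size])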

Browser path problems when running crawlergo on macOS

Hello,
When using crawlergo, pointing it at the Chrome path /Applications/Google\ Chrome.app seems to have no effect.
What is the correct form of the Chrome browser path on macOS?
I also downloaded the Chromium build you provide; its path is /Users/mac/Downloads/chrome-mac/Chromium.app
Neither of the two seems to work.
Finally, I have also given crawlergo its permission: chmod +x crawlergo
It still reports permission denied.

Navigation timeout error

navigate timeout context deadline exceeded
I wanted to run a local crawl test against DedeCMS and it threw this error straight away. Is something wrong with how I am running it?

A strange link

cmd = ["E:/exploit/spider/crawlergo/crawlergo", "-c", "E:/exploit/spider/crawlergo/chrome-win/chrome.exe","-t", "5","-f","smart", "-m", "1", "--output-mode", "json", 'https://www.baidu.com']

rsp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

result = simplejson.loads(output.decode().split("--[Mission Complete]--")[1])

result["all_req_list"][9]['url']

'https://,Wn=/'

How is this link, 'https://,Wn=/', being triggered?

Incorrectly crawled link

yH5BAEAAAAALAAAAAABAAEAAAIBRAA7, which looks like it came from inside an image.

crawlergo.exe -c "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --tab-run-timeout 120s   -f strict   -m 1  --wait-dom-content-loaded-timeout 30s --output-mode console   https://cn.vuejs.org/

crawlergo exits immediately

The 0.12 release is only 5.1 MB after download; I wonder whether it was stripped too aggressively. It exits as soon as it is executed.
➜ crawlergo mv ~/Downloads/crawlergo ./
➜ crawlergo chmod +x crawlergo
➜ crawlergo ./crawlergo
[1] 9838 killed ./crawlergo
➜ crawlergo ./crawlergo -h
[1] 9845 killed ./crawlergo -h
➜ crawlergo ./crawlergo
[1] 9852 killed ./crawlergo
➜ crawlergo

Form support is not great

The auto-filled 0kee value does not necessarily match the input's type, so some forms are never triggered and the pages behind them are never captured.

Errors when running on Ubuntu

root@ubuntu:~/Desktop/crawlergo# ./crawlergo -c /Desktop/crawlergo/chrome-linux/chrome -t 20  http://testphp.vulnweb.com/
INFO[0000] Init crawler task, host: testphp.vulnweb.com, max tab count: 20, max crawl count: 200. 
INFO[0000] filter mode: smart                           
INFO[0000] Start crawling.                              
INFO[0000] filter repeat, target count: 2               
INFO[0000] Crawling GET https://testphp.vulnweb.com/    
WARN[0000] navigate timeout fork/exec /Desktop/crawlergo/chrome-linux/chrome: no such file or directoryhttps://testphp.vulnweb.com/ 
INFO[0000] Crawling GET http://testphp.vulnweb.com/     
WARN[0000] navigate timeout fork/exec /Desktop/crawlergo/chrome-linux/chrome: no such file or directoryhttp://testphp.vulnweb.com/ 
INFO[0000] closing browser.                             

The invocation is shown above, and the crawlergo binary has already been given +x permission. What is going on here?

navigate timeout context deadline exceeded

Running:
./crawlergo -c /usr/bin/google-chrome-stable -t 20 http://testphp.vulnweb.com/

Only one of the URLs passed in was crawled:
GET http://testphp.vulnweb.com/search.php?test=query

OS release:

Distributor ID:	CentOS
Description:	CentOS Linux release 7.6.1810 (Core) 
Release:	7.6.1810
Codename:	Core

Is crawling with cookies supported?

Is it the case that this flag cannot be used to add a cookie?
--custom-headers    Custom HTTP headers, passed as JSON-serialized data; defined globally and applied to every request.
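
Going by the flag description above, a cookie should be expressible as just another custom header. A minimal sketch of building the JSON from Python; the cookie value is illustrative:

import json
import subprocess

headers = json.dumps({
    "Cookie": "PHPSESSID=abcd1234; token=xyz",  # illustrative credential
})

subprocess.run(["./crawlergo", "-c", "/path/to/chrome",
                "--custom-headers", headers, "http://example.com/"])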

Crawling a specific site always ends with Mission Complete

crawlergo.exe -c "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --tab-run-timeout 120s   -f strict   -m 1  --wait-dom-content-loaded-timeout 30s --output-mode console   https://cn.vuejs.org/

Thanks for sharing such a useful crawler; please consider supporting Chromium's localStorage and extra attached data!

Thanks for sharing such a useful crawler.
While using it I found that crawling pages that require authentication currently feels somewhat underpowered. The header customization on offer only covers scenarios where a cookie serves as the credential; in many SPA scenarios, the token used as the credential is kept in the browser's localStorage, or attached to the submitted body as a fixed piece of data. I hope customization of these two places can be provided in a later update, because a great many pages now sit behind authentication, and crawling only unauthenticated pages yields limited information.
Thanks again for sharing 🙏 🙏 🙏 🙏 !

Running on Windows, every site reports a timeout

$ crawlergo.exe -c .\GoogleChromePortable64\GoogleChromePortable.exe http://www.baidu.com
Crawling GET https://www.baidu.com/
Crawling GET http://www.baidu.com/
time="2019-12-31T10:56:43+08:00" level=error msg="navigate timeout chrome failed to start:\n"
time="2019-12-31T10:56:43+08:00" level=error msg="https://www.baidu.com/"
time="2019-12-31T10:56:43+08:00" level=debug msg="all navigation tasks done."
time="2019-12-31T10:56:43+08:00" level=error msg="navigate timeout chrome failed to start:\n"
time="2019-12-31T10:56:43+08:00" level=error msg="http://www.baidu.com/"
time="2019-12-31T10:56:43+08:00" level=debug msg="get comment nodes err"
time="2019-12-31T10:56:43+08:00" level=debug msg="all navigation tasks done."
time="2019-12-31T10:56:43+08:00" level=debug msg="invalid target"
time="2019-12-31T10:56:43+08:00" level=debug msg="get comment nodes err"
time="2019-12-31T10:56:43+08:00" level=debug msg="invalid target"
--[Mission Complete]--
GET http://www.baidu.com/ HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.0 Safari/537.36
Spider-Name: crawlergo-0KeeTeam

GET https://www.baidu.com/ HTTP/1.1
Spider-Name: crawlergo-0KeeTeam
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.0 Safari/537.36

Can crawlergo be configured to use a proxy?

Configuring a proxy in the googlemini browser has no effect, and launching the browser with a proxy directly does not work either (chromium --proxy-server="socks5://127.0.0.1:1080").
