jordanpotti / AWSBucketDump
Security Tool to Look For Interesting Files in S3 Buckets
License: MIT License
Hi! First of all, thanks for the work!
Secondly, when launching the script without the -t argument:
python AWSBucketDump.py -D -l BucketNames.txt -g interesting_Keywords.txt
It seems that the script hangs right before downloading the files:
Downloads enabled (-D), will be saved to current directory.
Starting thread...
Queuing http://####.s3.amazonaws.com...
Fetching http://####.s3.amazonaws.com...
Pilfering http://####.s3.amazonaws.com...
Collectable: http://####.s3.amazonaws.com/####
Collectable: http://####.s3.amazonaws.com/####
...
...
I think I have spotted the problem.
Due to "for i in range(1, arguments.threads)" in:
AWSBucketDump/AWSBucketDump.py
Lines 217 to 221 in f8a6301
if the user sets "arguments.threads" = 1, no download worker
will be spawned, so the queue will never be emptied, causing "download_q.join()" to hang:
AWSBucketDump/AWSBucketDump.py
Lines 230 to 231 in f8a6301
A possible solution could be to change
"for i in range(1, arguments.threads)"
to
"for i in range(0, arguments.threads)"
Is this possible?
I hope I was helpful.
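The off-by-one can be reproduced in isolation. A minimal sketch of the worker pattern (the function and variable names here are ours, not the tool's):

```python
from queue import Queue
from threading import Thread

def run_workers(num_threads, items):
    """Start num_threads workers and block until the queue drains."""
    q = Queue()
    for item in items:
        q.put(item)

    def worker():
        while True:
            q.get()
            # ... fetch/pilfer the bucket here ...
            q.task_done()

    # The buggy version used range(1, num_threads): since
    # list(range(1, 1)) == [], no worker starts when num_threads == 1
    # and q.join() below blocks forever. range(num_threads) starts
    # exactly num_threads workers.
    for _ in range(num_threads):
        Thread(target=worker, daemon=True).start()

    q.join()  # returns once every item has been marked done
    return True
```

With the corrected range, `run_workers(1, [...])` completes even with a single thread.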
Good evening, so the tool worked fine but I have a question.
I'm doing some tests with my own bucket, but for some reason the tool only dumps the first 1,000 files.
How can I set it to dump everything from a single bucket?
I have tried increasing the thread count and the max file size, but neither worked; they stop at the same point as the default settings.
I also tried to dump without the wordlist, because with the wordlist I can't find any files.
So what is my best way to work on this?
Is there any option available to dump the entire bucket?
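The 1,000-file cap comes from S3 itself: a single listing request returns at most 1,000 keys, and continuing past that requires re-requesting with the "marker" parameter set to the last key seen. A hedged sketch of what pagination could look like (function names are ours, not the tool's; the fetch callable is injectable so the paging logic is testable offline):

```python
import xml.etree.ElementTree as ET

def list_all_keys(bucket, fetch=None):
    """Page through a bucket listing past S3's 1000-key-per-response cap.

    While IsTruncated is "true", re-request with marker set to the last
    key returned. The default fetch mirrors the tool's unauthenticated
    HTTP GET against http://<bucket>.s3.amazonaws.com.
    """
    if fetch is None:
        def fetch(b, marker):
            import requests  # only needed for the default HTTP fetch
            params = {"marker": marker} if marker else {}
            return requests.get("http://%s.s3.amazonaws.com" % b,
                                params=params).text

    keys, marker = [], None
    while True:
        root = ET.fromstring(fetch(bucket, marker))
        page = [k.text for k in root.findall(".//{*}Contents/{*}Key")]
        keys.extend(page)
        if (root.findtext("{*}IsTruncated") or "").lower() != "true" or not page:
            return keys
        marker = page[-1]  # resume listing after the last key returned
```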
AWSBucketDump.py -h
Traceback (most recent call last):
File "C:\Python3\AWSBucketDump-master\AWSBucketDump.py", line 15, in
import xmltodict
ImportError: No module named xmltodict
###################################################################
pip3 install xmltodict
Requirement already satisfied: xmltodict in c:\python3\lib\site-packages (0.11.0)
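The mismatch above usually means pip3 installed the package for a different interpreter than the one running the script. A quick way to check which Python you are actually using, and whether it can see the module (the helper name is ours):

```python
import importlib.util
import sys

def module_available(name):
    """True if `name` is importable by *this* interpreter."""
    return importlib.util.find_spec(name) is not None

# If this prints False, install into the same interpreter that runs the
# script, e.g. `python -m pip install xmltodict` -- the `pip3` on PATH
# may belong to a different Python installation.
print("interpreter:", sys.executable)
print("xmltodict importable:", module_available("xmltodict"))
```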
Add an option to supply an API key and sign the request, in order to check for buckets exposed to authenticated Amazon users.
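This is not implemented in the tool as far as this thread shows. One way to sketch it would be boto3, which signs requests with whatever credentials are configured (an assumption on our part; the helper name and behavior are ours, not the tool's):

```python
def bucket_allows_authenticated_list(bucket):
    """Probe a bucket with a *signed* request.

    Anonymous checks miss buckets granted only to "Any authenticated
    AWS user"; a signed ListObjects call tells the two cases apart.
    Returns True/False, or None when boto3 is not installed.
    """
    try:
        import boto3  # assumption: boto3 installed, credentials configured
    except ImportError:
        return None
    try:
        boto3.client("s3").list_objects_v2(Bucket=bucket, MaxKeys=1)
        return True
    except Exception:  # AccessDenied, NoSuchBucket, missing credentials, ...
        return False
```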
Did some research on this specific error, and I simply did as people were saying to do:
Change:
from queue import Queue
To:
from multiprocessing import Queue
Apparently it gets confused. Works now!
UPDATE: Scratch that! It got done queuing and said 'Queue' object has no attribute 'join'. Must be because I am using python 2.7.11!
UPDATE2: Scratch it again. I read that on Python 2.7 the stdlib module is named "Queue", not "queue". Changed that. Got some encoding errors during discovery, then just used this section of code below to force the default encoding to utf-8, and it works great!
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
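Rather than editing the import per interpreter, a version-agnostic import avoids the problem entirely; a small sketch:

```python
# The stdlib module was renamed "Queue" -> "queue" in Python 3, but the
# class API (put/get/task_done/join) is identical across versions.
# multiprocessing.Queue, by contrast, has no join() method at all,
# which is why swapping it in fails with "'Queue' object has no
# attribute 'join'".
try:
    from queue import Queue  # Python 3
except ImportError:
    from Queue import Queue  # Python 2

q = Queue()
q.put("item")
q.get()
q.task_done()
q.join()  # returns immediately: every fetched item is marked done
print("queue drained")
```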
I have an S3 bucket at the following URL.
Your code turns it into:
http://http://XXXXX.s3-website-us-east-1.amazonaws.com.s3.amazonaws.com
How do I fix this?
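The doubled scheme and suffix suggest the wordlist entry is a full URL rather than a bare bucket name: AWSBucketDump builds "http://<entry>.s3.amazonaws.com" itself. A hedged sketch of a normalizer that reduces entries to the bare name the tool expects (the helper name is ours, not the tool's):

```python
import re

def normalize_bucket_entry(entry):
    """Reduce a BucketNames.txt line to a bare bucket name.

    Strips any scheme(s), any path, and a trailing s3 endpoint
    (region and website variants included), so the tool's own URL
    construction does not double them up.
    """
    entry = re.sub(r"^(https?://)+", "", entry.strip())  # drop scheme(s)
    host = entry.split("/")[0]                           # drop any path
    return re.sub(r"\.s3[.-][^/]*amazonaws\.com$", "", host)
```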
I use the normal command "python AWSBucketDump.py -l BucketNames.txt -g interesting_Keywords.txt -D -m 500000 -d 1" and start brute-forcing, but it suddenly gets stuck, and there are many errors like this:
Traceback (most recent call last):
  File "AWSBucketDump.py", line 45, in bucket_worker
    fetch(item)
  File "AWSBucketDump.py", line 33, in fetch
    response = requests.get(url)
  File "/usr/lib/python2.7/dist-packages/requests/api.py", line 72, in get
    return request('get', url, params=params, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 502, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 612, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 504, in send
    raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='ample.s3.amazonaws.com', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5c3735e550>: Failed to establish a new connection: [Errno 111] Connection refused',))
How could I fix this?
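Those tracebacks come from requests raising ConnectionError inside the worker thread when a candidate bucket host refuses connections or does not resolve; the tool calls requests.get(url) with no error handling. A minimal sketch of a guard (the function name is ours, not the tool's):

```python
def safe_fetch(url, timeout=10):
    """Fetch a bucket URL, returning None instead of crashing the worker.

    Candidate hostnames from a wordlist routinely fail to resolve or
    refuse connections; swallowing network errors keeps the queue
    draining so one dead host does not stall the run.
    """
    try:
        import requests
        return requests.get(url, timeout=timeout)
    except Exception:  # connection refused, DNS failure, timeout, ...
        return None
```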
Hi, users are unable to run AWSBucketDump due to dependency conflict with urllib3 package.
As shown in the following full dependency graph of AWSBucketDump, AWSBucketDump requires urllib3 (any version), while requests==2.20.0 requires urllib3>=1.21.1,<1.25.
According to pip’s “first found wins” installation strategy, urllib3==1.25.3 is the actually installed version. However, urllib3==1.25.3 does not satisfy urllib3>=1.21.1,<1.25.
AWSBucketDump-master
| +-certifi(version range:==2017.7.27.1)
| +-chardet(version range:==3.0.4)
| +-idna(version range:==2.6)
| +-requests(version range:==2.20.0)
| | +-certifi(version range:>=2017.4.17)
| | +-chardet(version range:<3.1.0,>=3.0.2)
| | +-idna(version range:>=2.5,<2.8)
| | +-urllib3(version range:>=1.21.1,<1.25)
| +-urllib3(version range:*)
| +-xmltodict(version range:==0.11.0)
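One possible fix (a suggestion on our part, not an upstream change) is to pin urllib3 in requirements.txt to the range that requests 2.20.0 accepts, so "first found wins" cannot select an incompatible version:

```
certifi==2017.7.27.1
chardet==3.0.4
idna==2.6
requests==2.20.0
urllib3>=1.21.1,<1.25
xmltodict==0.11.0
```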
Thanks for your help.
Best,
Neolith