AWSBucketDump

AWSBucketDump is a tool to quickly enumerate AWS S3 buckets to look for loot. It's similar to a subdomain bruteforcer but is made specifically for S3 buckets and also has some extra features that allow you to grep for delicious files as well as download interesting files if you're not afraid to quickly fill up your hard drive.

Pre-Requisites

Non-Standard Python Libraries:

  • xmltodict
  • requests
  • argparse (part of the standard library since Python 3.2, so it normally needs no separate install)

Created with Python 3.6

Install with virtualenv

virtualenv venv
source venv/bin/activate
pip install -r requirements.txt

General

This is a tool that enumerates Amazon S3 buckets and looks for interesting files.

I have example wordlists but I haven't put much time into refining them.

https://github.com/danielmiessler/SecLists will have all the word lists you need. If you are targeting a specific company, you will likely want to use jhaddix's enumall tool which leverages recon-ng and Alt-DNS.

https://github.com/jhaddix/domain && https://github.com/infosec-au/altdns

As far as word lists for grepping interesting files, that is completely up to you. The one I provided has some basics and yes, those word lists are based on files that I personally have found with this tool.

Using the download feature can fill up your hard drive, so you can provide a maximum file size for each download at the command line when you run the tool. Keep in mind that it is in bytes (for example, -m 500000 caps each download at 500,000 bytes, roughly 488 KB).

I honestly don't know if Amazon rate limits this; I am guessing they do at some point, but I haven't gotten around to figuring out what that limit is. By default there are two threads for checking buckets and two threads for downloading.

After building this tool, I did find an interesting article from Rapid7 regarding this research.

Usage:

usage: AWSBucketDump.py [-h] [-D] [-t THREADS] -l HOSTLIST [-g GREPWORDS] [-m MAXSIZE]

optional arguments:
  -h, --help    show this help message and exit
  -D            Download files. This requires significant disk space
  -d            If set to 1 or True, create directories for each host with results
  -t THREADS    Number of threads
  -l HOSTLIST   List of bucket names to check
  -g GREPWORDS  Provide a wordlist to grep for
  -m MAXSIZE    Maximum file size to download, in bytes

 python AWSBucketDump.py -l BucketNames.txt -g interesting_Keywords.txt -D -m 500000 -d 1

Contributors

jordanpotti

grogsaxle

codingo

aarongorka

BHaFSec

paralax

fzzo

rypb


Issues

ImportError: No module named queue

Did some research on this specific error, and I simply did as people were saying to do:

Change:
from queue import Queue
To:
from multiprocessing import Queue

Apparently it gets confused. Works now!

UPDATE: Scratch that! It got done queuing and said 'Queue' object has no attribute 'join'. Must be because I am using Python 2.7.11!

UPDATE2: Scratch it again. Read that Python 2.x names the module "Queue", not "queue". Changed that. Got some encoding errors during discovery, then just used the section of code below to force the default encoding to UTF-8, and it works great!

# Python 2 only: reload(sys) restores the setdefaultencoding() method
# that site.py removes at startup; neither call exists on Python 3.
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
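
A cleaner fix than switching to multiprocessing is a version-tolerant import, since the standard-library module is named "Queue" on Python 2 and "queue" on Python 3. A minimal sketch:

try:
    from queue import Queue  # Python 3 module name
except ImportError:
    from Queue import Queue  # Python 2 module name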

Number of files to dump.

Good evening, so the tool worked fine but I have a question.

I'm doing some tests with my bucket but for some reason the tool only dumps the first 1k of files.

How can I set it to dump everything from a single bucket?

I have tried increasing the thread count and the max file size, but it did not work; they stop at the same point as with the default settings.

I also tried to dump without the wordlist, because with the wordlist I can't find any files.

So what is the best way to work on this?

Is there any option available to dump the entire bucket?
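
For background: S3's object-listing API returns at most 1,000 keys per response, which matches the cutoff described here, and AWSBucketDump only reads the first page. Listing an entire large bucket requires following the marker/IsTruncated pagination fields yourself. A rough sketch using the tool's own dependencies (list_all_keys and bucket_url are illustrative names, not part of the tool):

import requests
import xmltodict

def list_all_keys(bucket_url):
    # Walk a public bucket's V1 listing, 1,000 keys at a time.
    keys, marker = [], None
    while True:
        params = {'marker': marker} if marker else {}
        doc = xmltodict.parse(requests.get(bucket_url, params=params).text)
        result = doc['ListBucketResult']
        contents = result.get('Contents', [])
        if isinstance(contents, dict):  # xmltodict yields a dict for a single entry
            contents = [contents]
        keys.extend(c['Key'] for c in contents)
        if result.get('IsTruncated') != 'true':
            return keys
        marker = keys[-1]  # the next page starts after the last key returned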

Installation fails due to conflicting urllib3 version

Hi, users are unable to run AWSBucketDump due to a dependency conflict with the urllib3 package.
As shown in the following full dependency graph of AWSBucketDump, AWSBucketDump requires urllib3 (any version), while requests==2.20.0 requires urllib3>=1.21.1,<1.25.

According to pip's "first found wins" installation strategy, urllib3==1.25.3 is the version actually installed. However, urllib3==1.25.3 does not satisfy urllib3>=1.21.1,<1.25.

Dependency tree:

AWSBucketDump-master
| +-certifi(version range:==2017.7.27.1)
| +-chardet(version range:==3.0.4)
| +-idna(version range:==2.6)
| +-requests(version range:==2.20.0)
| | +-certifi(version range:>=2017.4.17)
| | +-chardet(version range:<3.1.0,>=3.0.2)
| | +-idna(version range:>=2.5,<2.8)
| | +-urllib3(version range:>=1.21.1,<1.25)
| +-urllib3(version range:*)
| +-xmltodict(version range:==0.11.0)

Thanks for your help.
Best,
Neolith
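
A straightforward resolution, assuming the project keeps pinning its dependencies in requirements.txt, would be to constrain urllib3 to the range that requests 2.20.0 accepts, for example:

urllib3>=1.21.1,<1.25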

No module named xmltodict

AWSBucketDump.py -h
Traceback (most recent call last):
  File "C:\Python3\AWSBucketDump-master\AWSBucketDump.py", line 15, in <module>
    import xmltodict
ImportError: No module named xmltodict

###################################################################

pip3 install xmltodict
Requirement already satisfied: xmltodict in c:\python3\lib\site-packages (0.11.0)
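
The unquoted module name in the traceback ("No module named xmltodict") suggests the script is being run under Python 2, while pip3 installed the package into the Python 3 site-packages. Running both pip and the script through the same interpreter avoids the mismatch, for example:

C:\Python3\python.exe -m pip install xmltodict
C:\Python3\python.exe AWSBucketDump.py -h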

The script hangs when "arguments.threads" is set to 1.

Hi! First of all, thanks for the work!
Secondly... when launching the script without the -t argument:

python AWSBucketDump.py -D -l BucketNames.txt -g interesting_Keywords.txt

It seems that the script hangs right before downloading the files:

Downloads enabled (-D), will be saved to current directory.
Starting thread...
Queuing http://####.s3.amazonaws.com...
Fetching http://####.s3.amazonaws.com...
Pilfering http://####.s3.amazonaws.com...
Collectable: http://####.s3.amazonaws.com/####
Collectable: http://####.s3.amazonaws.com/####
...
...


I think I have spotted the problem.

Due to "for i in range(1, arguments.threads)" in:

# start download workers
for i in range(1, arguments.threads):
    t = Thread(target=downloadWorker)
    t.daemon = True
    t.start()

if the user sets "arguments.threads" to 1, no download worker will be spawned, so the queue will never be emptied, causing "download_q.join()" to hang:

if arguments.download:
    download_q.join()

A possible solution could be to change

"for i in range(1, arguments.threads)"

to

"for i in range(0, arguments.threads)"

Is this possible?
I hope I was helpful.
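
For reference, the corrected spawn loop described in this report would read as follows (range() starts at 0 by default, so range(arguments.threads) is equivalent):

# start download workers: spawn one per requested thread
for i in range(arguments.threads):
    t = Thread(target=downloadWorker)
    t.daemon = True
    t.start()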

The brute force is stuck

I use the normal command "python AWSBucketDump.py -l BucketNames.txt -g interesting_Keywords.txt -D -m 500000 -d 1" to start the brute force, but it suddenly gets stuck, and there are many errors like this:

Traceback (most recent call last):
  File "AWSBucketDump.py", line 45, in bucket_worker
    fetch(item)
  File "AWSBucketDump.py", line 33, in fetch
    response = requests.get(url)
  File "/usr/lib/python2.7/dist-packages/requests/api.py", line 72, in get
    return request('get', url, params=params, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 502, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 612, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 504, in send
    raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='ample.s3.amazonaws.com', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5c3735e550>: Failed to establish a new connection: [Errno 111] Connection refused',))

How could I fix this?
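
The ConnectionError means that particular bucket hostname refused the connection, and as the traceback shows, fetch() lets the exception propagate and kill the worker. A minimal sketch of a more forgiving fetch(), reconstructed from the traceback rather than taken from the actual source:

import requests

def fetch(url):
    # Skip unreachable hosts instead of letting the exception kill the worker
    try:
        response = requests.get(url, timeout=10)
    except requests.exceptions.ConnectionError as e:
        print("Skipping {}: {}".format(url, e))
        return None
    # ... process "response" as the original fetch() does ...
    return response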
