domainExtractor
Extract domains/subdomains/FQDNs from files and URLs
Installation:
git clone https://github.com/intrudir/domainExtractor.git
Run the script without args to see usage.

Usage Examples:
python3 domainExtractor.py
usage: domainExtractor.py [-h] [-f INPUTFILE] [-u URL] [-t TARGET] [-v]
This script will extract domains from the file you specify and add it to a final file
optional arguments:
-h, --help show this help message and exit
-f INPUTFILE, --file INPUTFILE
Specify the file to extract domains from
-u URL, --url URL Specify the web page to extract domains from. One at a time for now
-t TARGET, --target TARGET
Specify the target top-level domain you'd like to find and extract e.g. uber.com
-v, --verbose Enable slightly more verbose console output
Matching a specified target domain:
Specify your source and a target domain to search for and extract.

Extracting from files:
Using any file with text in it, extract all domains with yahoo.com as the TLD:
python3 domainExtractor.py -f ~/Desktop/yahoo/test/test.html -t yahoo.com
It will extract, sort and dedup all domains that are found.
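Conceptually, the extraction is a regex scan followed by a sort-and-dedup pass. A minimal Python sketch of the idea (the pattern and helper name here are illustrative, not the script's actual internals):

import re

def extract_domains(text: str, target: str) -> list[str]:
    # illustrative pattern: dot-separated labels ending in the target domain
    pattern = re.compile(r"[\w.-]+\." + re.escape(target), re.IGNORECASE)
    # sorted(set(...)) covers the sort + dedup step
    return sorted({m.group(0).lower() for m in pattern.finditer(text)})

with open("test.html") as f:  # any file with text in it
    print("\n".join(extract_domains(f.read(), "yahoo.com")))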
You can specify multiple files using commas (no spaces)
python3 domainExtractor.py -f amass.playstation.net.txt,subfinder.playstation.net.txt --target playstation.net
Example output:
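Multiple files are presumably handled with a simple comma split over the -f value; a sketch, reusing the extract_domains helper above and assuming the parsed argument is named inputfile:

found = set()
for path in inputfile.split(","):  # "no spaces" because each chunk is used as-is
    with open(path) as f:
        found.update(extract_domains(f.read(), "playstation.net"))
print("\n".join(sorted(found)))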
Extracting from a web page:
Pull data directly from Yahoo.com's homepage, extracting all domains with 'yahoo.com' as the TLD:
python3 domainExtractor.py -u "https://yahoo.com" -t yahoo.com
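Fetching the page is the only extra step compared to the file case; a minimal sketch, assuming the standard-library urllib and the extract_domains helper above:

import urllib.request

# download the page body and run the same extraction over it
html = urllib.request.urlopen("https://yahoo.com").read().decode("utf-8", errors="ignore")
print("\n".join(extract_domains(html, "yahoo.com")))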
Specifying all domains:
You can either omit the --target flag completely, or specify 'all', and it will extract all domains it finds (at the moment: .com, .net, .org, .tv, .io).
# pulling from a file, extract all domains
python3 domainExtractor.py -f test.html --target all
# pull from the yahoo.com home page, extract all domains; omitting the target defaults to 'all'
python3 domainExtractor.py -u "https://yahoo.com"
Example output:
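In 'all' mode the match runs against that fixed suffix list instead of a single target; a rough sketch of the idea (the names are illustrative, not the script's actual internals):

import re

TLDS = (".com", ".net", ".org", ".tv", ".io")  # the suffixes listed above

def extract_all(text: str) -> list[str]:
    candidates = re.findall(r"[\w.-]+\.[a-z]{2,}", text.lower())
    # str.endswith accepts a tuple, so one pass filters on every suffix
    return sorted({c for c in candidates if c.endswith(TLDS)})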
Domains not previously found:
If you run the script again while checking for the same target, a few things occur:
1) If you already have a final file for the target, it will notify you of domains you didn't have before
2) It will append them to the final file
3) It will log the new domain to logs/newdomains.{target}.txt with date & time found
This allows you to check the same target across multiple files and be notified of any new domains found!
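The new-domain check boils down to a set difference against the existing final file; a minimal sketch, assuming hypothetical file names (the script's real naming may differ):

import os
from datetime import datetime

def update_final(found: set, target: str) -> None:
    final_path = f"final.{target}.txt"  # hypothetical name for the final file
    existing = set()
    if os.path.exists(final_path):
        with open(final_path) as f:
            existing = set(f.read().split())
    new = sorted(found - existing)  # domains you didn't have before
    if new:
        with open(final_path, "a") as f:
            f.write("\n".join(new) + "\n")
        os.makedirs("logs", exist_ok=True)
        with open(f"logs/newdomains.{target}.txt", "a") as log:
            stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
            log.writelines(f"{stamp} {d}\n" for d in new)
        print(f"{len(new)} new domains for {target}")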
I first use it against my Amass results, then against my Subfinder results.
The script will sort and dedup, as well as notify me of how many new, unique domains came from Subfinder's results.
It will add them to the final file and log just the new ones to logs/newdomains.{target}.txt
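In practice that workflow is just two runs against the same target, e.g. with the file names from the example above:

python3 domainExtractor.py -f amass.playstation.net.txt --target playstation.net
python3 domainExtractor.py -f subfinder.playstation.net.txt --target playstation.net

The second run reports and logs only the domains Subfinder found that Amass did not.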