
sonarsearch's Introduction

SonarSearch v2

Attention!

Over a year ago, Rapid7 revoked public access to their datasets, and thus the data hosted on the omnisint API became extremely out of date. In addition, due to the licensing changes around the data, our wonderful sponsor ZeroGuard was no longer able to support the project. As a result, it has been taken offline. However, I have released full instructions for running your own instance of the API, provided you can obtain a dataset. The instructions can be found at the bottom of the README.


This repo contains all the tools needed to create a blazing fast API for Rapid7's Project Sonar dataset. It employs a custom indexing method in order to achieve fast lookups of both subdomains for a given domain, and domains which resolve to a given IP address.


An instance of this API (Crobat) was previously online at the following URL:

https://sonar.omnisint.io

Crobat

Crobat is a command line utility designed to allow easy querying of the Crobat API. To install the client, run the following command:

$ go get github.com/cgboal/sonarsearch/cmd/crobat
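On newer Go toolchains (1.17 and later), installing binaries with go get is deprecated; assuming the module path and cmd/crobat layout shown above, the equivalent go install command should be:

$ go install github.com/cgboal/sonarsearch/cmd/crobat@latest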

Below is a full list of command line flags:

$ crobat -h                                                                                                                                                                      
Usage of crobat:
  -r string
    	Perform reverse lookup on IP address or CIDR range. Supports files and quoted lists
  -s string
    	Get subdomains for this value. Supports files and quoted lists
  -t string
    	Get tlds for this value. Supports files and quoted lists
  -u	Ensures results are unique, may cause instability on large queries due to RAM requirements

Additionally, it is now possible to pass either file names or quoted lists ('example.com example.co.uk') as the value for each flag in order to specify multiple domains/ranges.
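For illustration, a few example invocations combining these flags (example.com, domains.txt, and 8.8.8.0/24 are placeholder values, not part of the official documentation):

$ crobat -s example.com            # subdomains of example.com
$ crobat -t example                # TLDs observed for example
$ crobat -r 8.8.8.0/24 -u          # reverse DNS over a CIDR range, deduplicated
$ crobat -s domains.txt            # read multiple domains from a file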

Crobat API

Currently, Project Crobat offers two APIs. The first of these is a REST API, with the following endpoints (example queries are shown after the list):

/subdomains/{domain} - All subdomains for a given domain
/tlds/{domain} - All tlds found for a given domain
/all/{domain} - All results across all tlds for a given domain
/reverse/{ip} - Reverse DNS lookup on IP address
/reverse/{ip}/{mask} - Reverse DNS lookup of a CIDR range
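As a rough sketch of how these endpoints were queried while the public instance was up (example.com and 8.8.8.0/24 are placeholders, and the exact CIDR syntax for the mask variant is an assumption):

curl -s https://sonar.omnisint.io/subdomains/example.com
curl -s https://sonar.omnisint.io/tlds/example
curl -s https://sonar.omnisint.io/reverse/8.8.8.8
curl -s https://sonar.omnisint.io/reverse/8.8.8.0/24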

Additionally, Project Crobat offers a gRPC API which is used by the client to stream results over HTTP/2. It is therefore recommended that the client be used for large queries, as it reduces both query execution times and server load. Also, unlike the REST API, there is no limit on the size of the CIDR range specified when performing reverse DNS lookups.

No authentication is required to use the API, nor special headers, so go nuts.

Third-Party SDKs

Contributing

If you wish to contribute an SDK written in another language, shoot me a DM on Twitter (@CalumBoal), or open an issue on this repository, and I will provide a link to your repository in the Third-Party SDKs section of this readme.

SonarSearch Setup Instructions

Setting up an instance of SonarSearch is reasonably straightforward. You will require a host to run the server on; this can be a VPS or your own personal device. Regardless of the hosting option you choose, you will require 150-200GB of disk space in order to store the datasets and indexes.

There are two options for hosting the indexes: Redis or Postgres. Redis requires ~20GB of RAM to hold the index, but it is quick to load the index as well as query it. Postgres, on the other hand, does not hold the index in RAM and thus has a much lower memory footprint. However, it will take longer to load the data into Postgres, and looking up index values will take longer. If you are expecting an extremely high volume of lookups, use Redis; otherwise, Postgres should suffice.

I am not sure how much memory is required to run SonarSearch with Postgres, but it should not be a lot (2-4GB?).

Installation of tools

Clone the SonarSearch git repository, and run the following commands:

make
make install

This will compile the various binaries used to set up the server and copy them to your path. You may wish to alter the install location specified in the Makefile. Alternatively, you can omit the make install step and simply use the binaries from the bin directory after running make.

Additionally, you will require either Postgres or Redis. You can use a Docker container for either of these, or run them locally. Consult Google for setup instructions.

The following command will spin up a Postgres container which can be used for the index:

docker run --name sonarsearch_postgres --expose 5432 -p 5432:5432 -v /var/lib/sonar_search:/var/lib/postgresql/data -e POSTGRES_PASSWORD=postgres -d postgres
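If you opt for Redis instead, a roughly equivalent container would be the following (a sketch; the volume path is illustrative, and you may want to tune Redis memory settings separately):

docker run --name sonarsearch_redis -p 6379:6379 -v /var/lib/sonar_search_redis:/data -d redis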

Set up Postgres

Before you build the index, you must create the table in Postgres. This can be done with the following command:

psql -U postgres -h 127.0.0.1 -d postgres -c "CREATE TABLE crobat_index (id serial PRIMARY KEY, key text, value text)"

Acquiring the datasets

Dunno, good luck :)

Building the indexes

To optimize searching these large datasets, a custom indexing strategy is used. Three steps are required in order to set this up:

Step 1

First, you need to convert the Project Sonar dataset into the format used by SonarSearch. This can be done using the following command:

gunzip < 2021-12-31-1640909088-fdns_a.json.gz | sonar2crobat -i - -o crobat_unsorted

Step 2

In order to build the index, we need to sort the files obtained from the previous step. If you are running low on disk space, you can discard the raw gzip dataset.

I recommend running these commands one at a time, as they are resource intensive:

sort -k1,1 -k2,2 -t, crobat_unsorted_domains > crobat_sorted_domains
sort -k1,1 -t, -n crobat_unsorted_reverse > crobat_sorted_reverse
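If sort struggles on your hardware, GNU sort can be given more resources explicitly; for example (the buffer size, parallelism, and temporary directory below are illustrative values, not project recommendations):

sort -S 50% --parallel=4 -T /mnt/bigdisk/tmp -k1,1 -k2,2 -t, crobat_unsorted_domains > crobat_sorted_domains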

If you are happy, you can now discard the unsorted files.

Step 3

Once the files have been sorted, you need to generate indexes for both the subdomain and reverse DNS searches.

To do so, you run the crobat2index binary, passing the input file, the format you wish to output (domain or reverse), and the storage backend (postgres or redis).

crobat2index will output data to stdout which can be piped to either redis-cli or psql to import it quickly and efficiently. Below is an example of importing the domain index into Postgres.

crobat2index -i crobat_sorted_domains -f domain -backend postgres | psql -U postgres -h 127.0.0.1 -d postgres -c "COPY crobat_index(key, value) from stdin (Delimiter ',')"

Whereas inserting the reverse index would be done as follows:

crobat2index -i crobat_sorted_reverse -f reverse -backend postgres | psql -U postgres -h 127.0.0.1 -d postgres -c "COPY crobat_index(key, value) from stdin (Delimiter ',')"
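If you are using the Redis backend instead, the same outputs can be piped into redis-cli; whether plain redis-cli or redis-cli --pipe is the right mode depends on the format crobat2index emits for Redis, so treat this as a sketch:

crobat2index -i crobat_sorted_domains -f domain -backend redis | redis-cli --pipe
crobat2index -i crobat_sorted_reverse -f reverse -backend redis | redis-cli --pipe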

If something goes wrong and you need to try again, run this command:

psql -U postgres -h 127.0.0.1 -d postgres -c "DROP TABLE crobat_index; CREATE TABLE crobat_index (id serial PRIMARY KEY, key text, value text)"
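For the Redis backend, the rough equivalent is to flush the database holding the index (assuming the index has a dedicated Redis database, as this wipes everything in it):

redis-cli FLUSHDB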

Running crobat-server

Once you have completed all the previous steps, you are ready to run your crobat server. You will need to set a few configuration environment variables, as listed below:

CROBAT_POSTGRES_URL=postgres://postgres:postgres@localhost:5432/postgres CROBAT_CACHE_BACKEND=postgres CROBAT_DOMAIN_FILE=~/Code/SonarSearch/testdata/crobat_sorted_domains CROBAT_REVERSE_FILE=~/Code/SonarSearch/testdata/crobat_sorted_reverse crobat-server

To make this easier to run, you can save these env variables to a file and source them.
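For example, you could put the variables from above into a file such as crobat.env (the name is arbitrary) and source it before launching the server:

export CROBAT_POSTGRES_URL=postgres://postgres:postgres@localhost:5432/postgres
export CROBAT_CACHE_BACKEND=postgres
export CROBAT_DOMAIN_FILE=~/Code/SonarSearch/testdata/crobat_sorted_domains
export CROBAT_REVERSE_FILE=~/Code/SonarSearch/testdata/crobat_sorted_reverse

source crobat.env && crobat-server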

By default, crobat-server listens on ports 1997 (gRPC) and 1998 (HTTP).
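A quick smoke test against a local instance, assuming the HTTP listener serves the same REST routes described earlier (example.com is a placeholder):

curl -s http://localhost:1998/subdomains/example.com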

The end?

You should now have a local working version of SonarSearch. Please note that Postgres support is experimental and may have some unexpected issues. If you encounter any problems, or have any questions regarding setup, feel free to open an issue on this repo.

sonarsearch's People

Contributors

0xflotus, cgboal, gy741, reconsec, tannart


sonarsearch's Issues

Throwing error

@Cgboal, these days crobat is throwing an error:

rpc error: code = Unknown desc = : HTTP status code 525; transport: missing content-type field

Raw Output Endpoint

Thank you for your work on this project. Once I found your API I was hooked. This really is a fantastic tool.

There is one limitation in SonarSearch that keeps me reliant upon my raw copy of the Sonar dataset: the inability to query the raw data and get both halves of each record. For example, let's say I want to return a list of every S3 bucket and see how many are CNAMEs for (sub)domains. I can simply grep my raw dataset and in a little while have the full JSON returned:

{"timestamp":"1622162183","name":"10gen-gvf.s3.amazonaws.com","type":"cname","value":"s3-1-w.amazonaws.com"}
{"timestamp":"1622162106","name":"10in30.s3.amazonaws.com","type":"cname","value":"s3-us-west-2-w.amazonaws.com"}
{"timestamp":"1622162164","name":"10judzoqo6kzfffm183231.covideos.s3.amazonaws.com","type":"cname","value":"s3-1-w.amazonaws.com"}

Admittedly, there will be a lot of duplicates where the S3 bucket points to AWS, but I can filter that myself fairly easily. Based on what I read in the source code, you are saving both the record name and value, so could we get an endpoint that returns something with both halves of the record?

Perhaps something like /raw/{domain}? For example, if we queried /raw/zendesk.com we would get:

{
"1125.zendesk.com":"1125.diversified-capital.com",
"15below.ssl.zendesk.com":"15below.zendesk.com",
"18emaint.zendesk.com":"18emaint.motorwerks.com",
"1d-color.zendesk.com":"1d-color.support.ec-force.com",
"1stadm.zendesk.com":"1stadm.ssl.zendesk.com"
}

This could also be helpful for target-specific searches. For example, if we queried /raw/hackerone.com we would get back more useful information for research:

{
"mta-sts.dev.wearehackerone.com":"hacker0x01.github.io",
"mta-sts.forwarding.hackerone.com":"hacker0x01.github.io",
"mta-sts.hackerone.com":"hacker0x01.github.io"
}

Server times out after 120 seconds for large responses.

I noticed that the 120 second timeout has been removed from the client, but the server still seems to time out after 120 seconds regardless.
For example, searching for "amazonaws.com" cuts off almost exactly at 120 seconds with 1491770 domains returned.
It would be nice if the server didn't time out for large responses; for example, timing out after 5-10 seconds of no data being sent, rather than the current unconditional 120 second timeout.

Error while trying to fetch tlds

Hey!

I've been using the tool for a while and find the CLI really helpful during my recon workflow. Thanks a lot for an amazing tool.

Lately, while trying to look up TLDs using the -t option, the command returns the below error:

2021/10/25 20:59:02 rpc error: code = Unavailable desc = unexpected HTTP status code received from server: 502 (Bad Gateway); transport: received unexpected content-type "text/html"

I'm however able to fetch the information using the curl command:
curl -s https://sonar.omnisint.io/tlds/yahoo

The other options, for subdomains and reverse DNS, work fine.

Please help

SonarSearch Down

The Crobat application and the https://sonar.omnisint.io site are not working. The following error comes up:

2020/08/26 11:32:56 rpc error: code = Unavailable desc = Bad Gateway: HTTP status code 502; transport: received the unexpected content-type "text/html"

Configurable limit for HTTP APIs and pagination indicator

Thank you for the valuable tool!

A few suggestions on improvements for HTTP API:

  1. Allow the user to specify the limit, for when we don't need the whole 10k (the currently hardcoded limit). It could still be capped at 10k to prevent abuse.
  2. Provide an indication in the response of whether a "next page" exists.

Both of these would allow the service to be queried in a more considerate manner, which should also help reduce the load.

If possible, it would also be helpful if a total count could be included in any paginated response.

invalid IPv4 address

Since updating to V2, I've been able to get one search to run successfully. Everything since then results in this:

[09:25:13] user@hostname:~$ crobat -r 8.8.8.8 2021/09/16 09:25:21 rpc error: code = Unknown desc = invalid IPv4 address

This happens no matter the IP address. I tried deleting the binary and reinstalling but same story.

ENV: WSL1 Debian instance running on Windows 10 19042.1165

Thanks for all your efforts.

crobat is not working

crobat is not working and is showing an error like "rpc error: code = Unknown desc = unexpected HTTP status code received from server: 522 (); malformed header: missing HTTP content-type"

rpc error

rpc error: code = Unknown desc = unexpected HTTP status code received from server: 523 (); malformed header: missing HTTP content-type

FDNS Rapid Data update

I tried to look in the code for how often data gets updated from the Rapid7 database but couldn't find anything.
Can anyone please advise on this?

Sonar website down

Hi,

When going to the site, there is a 525 error.

Is it also possible to set the MIME type to JSON instead of text/html, as text/html is the wrong MIME type?

Feature request

Thanks a lot for this tool, but it is very difficult to save the subdomains it finds. Can you add a feature to save subdomains to a file?

Deduplication - improvement request

Hi, Calum! Nice tool! The output would look cleaner if you added duplicate removal. I'm using it with sort -u (crobat -s domain.com | sort -u >> file.txt) now, but it'd be great if you added deduplication to your tool.

gRPC Credentials Error on Installation

When installing on Ubuntu 18.04 I get the following error.

test@test:~/tools/SonarSearch$ go get github.com/cgboal/sonarsearch/crobat
# google.golang.org/grpc/credentials
../../go/src/google.golang.org/grpc/credentials/tls.go:245:2: undefined: tls.TLS_AES_128_GCM_SHA256
../../go/src/google.golang.org/grpc/credentials/tls.go:246:2: undefined: tls.TLS_AES_256_GCM_SHA384
../../go/src/google.golang.org/grpc/credentials/tls.go:247:2: undefined: tls.TLS_CHACHA20_POLY1305_SHA256
I believe it's related to this.
grpc/grpc#23702

Error 522 - unexpected HTTP status code received from server

I just migrated WSL to my new work PC and now Crobat just does nothing when I try to launch it. Not sure if it has anything to do with the migration or not. I completely removed and reinstalled it, yet it just sits there and does nothing after I try to run it. Not sure how to provide any useful info, as there don't appear to be any error messages. This is on a Debian 10 WSL 1 instance. Currently trying it out with PS7 and will see what it does.

Bad Gateway: HTTP status code 502

I just tried firing off an rDNS query with crobat like I do every single day at work, but today all of my queries are returning this error:
2021/01/04 14:53:25 rpc error: code = Unavailable desc = Bad Gateway: HTTP status code 502; transport: received the unexpected content-type "text/html"
I tried pulling the latest version but it's all up-to-date. Not sure what's changed.

EDIT - spelling

Wildcard searches

Any chance of allowing wildcard searches? Currently I'm downloading the data from R7 and just searching for our specific brand. I think it would be awesome if I could just use your project to do it!

Thanks!

API not working

The API is returning null on all endpoints and the Go tool fails:

$ curl -s https://sonar.omnisint.io/subdomains/tesla.com   
null

Feature Request

Scanning domain names from a txt file.
Example: crobat -sl domain.txt

question?????

Why is your project not working?
Can you say where you were getting this information for TLDs?

internal server error 500

hello
I use this tool and every time I get this error message:
" rpc error: code = Unknown desc = unexpected HTTP status code received from server: 500 (Internal Server Error); transport: received unexpected content-type "text/html" "
what should I do???

New Domains Filter

Would it be possible to pull all the domains that have been added in the past week, for example?

SonarSearch on Debian

Trying to get it working on Debian, but with no success. Could you please give some detailed instructions? Are there any other ways to get access to your API?

Rate limited??? Why do you say go nuts? You mean eat my nuts?

[Pasted Cloudflare rate-limit page: Error 1200, "This website has been temporarily rate limited" - "Too many requests for sonar.omnisint.io. Try again later." - with a link to https://support.cloudflare.com/hc/en-us/articles/360029779472#h_302a97f3-eba3-4c0a-a589-76ba95f60dcf]
I got rate limited after making like 100k requests in under 30 seconds :pepesad:

When searching for subdomains, I'd like the input to be used as is

Hello!

When hitting the /subdomains/{domain} endpoint, if the domain is abc.domain.com, the subdomain search will run for domain.com because of:

domain := s.dp.GetDomain(vars["domain"])

I wish it ran on abc.domain.com as provided. For example, if I want sports.yahoo.com subdomains, searching for yahoo.com instead returns many uninteresting results and makes the execution much longer.

I'd be happy to contribute this feature but I'm wondering how you'd prefer to see this implemented. A new route? Or a query parameter on the existing route?

Thanks!

Possibility of Regex Searches

Is there a way to add grep-style regex support to the tool? This would be really useful for finding lateral subdomains like:
clientname1-example.com <-> example.com <-> clientname2-example.com

I'm aware that the data you're dealing with is quite large so I'm not sure how feasible it'd be for you. I'm happy to learn along and help.

Great work! Thanks

Got an issue when trying to compile the Sonar server!

When I try to build the crobat-server:

go build -tags=go_json -o bin/crobat-server ./cmd/crobat-server

I got this error: "cmd/crobat-server/grpc/server.go:77:25: unknown field 'IPv4' in struct literal of type "github.com/cgboal/sonarsearch/proto".Domain (but does have Ipv4)"

Any help? Thanks.

Error when building SonarSearch

Hi, I'm trying to compile SonarSearch on Ubuntu 20.04.4 LTS x86_64 and get an error like the one below:
tanio@vmi798622:~/tools/SonarSearch$ make
go build -o bin/sonar2crobat ./cmd/sonar2crobat
go: downloading github.com/Cgboal/DomainParser v0.0.0-20210827145802-99068439e39f
go: downloading github.com/json-iterator/go v1.1.11
go: downloading github.com/brotherpowers/ipsubnet v0.0.0-20170914094241-30bc98f0a5b1
go: downloading github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd
go: downloading github.com/modern-go/reflect2 v1.0.1
go build -o bin/crobat2index ./cmd/crobat2index
go build -tags=go_json -o bin/crobat-server ./cmd/crobat-server
go: downloading github.com/spf13/viper v1.8.1
go: downloading google.golang.org/grpc v1.40.0
go: downloading github.com/golang/protobuf v1.5.2
go: downloading google.golang.org/protobuf v1.27.1
go: downloading github.com/gin-gonic/gin v1.7.4
go: downloading github.com/go-redis/redis/v8 v8.11.3
go: downloading github.com/fsnotify/fsnotify v1.4.9
go: downloading github.com/hashicorp/hcl v1.0.0
go: downloading github.com/magiconair/properties v1.8.5
go: downloading github.com/mitchellh/mapstructure v1.4.1
go: downloading github.com/pelletier/go-toml v1.9.3
go: downloading github.com/spf13/afero v1.6.0
go: downloading github.com/spf13/jwalterweatherman v1.1.0
go: downloading github.com/spf13/cast v1.3.1
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/subosito/gotenv v1.2.0
go: downloading gopkg.in/ini.v1 v1.62.0
go: downloading gopkg.in/yaml.v2 v2.4.0
go: downloading golang.org/x/net v0.0.0-20210428140749-89ef3d95e781
go: downloading google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c
go: downloading golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf
go: downloading github.com/gin-contrib/sse v0.1.0
go: downloading github.com/mattn/go-isatty v0.0.13
go: downloading github.com/cespare/xxhash/v2 v2.1.1
go: downloading github.com/cespare/xxhash v1.1.0
go: downloading github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f
go: downloading golang.org/x/text v0.3.7
go: downloading github.com/go-playground/validator/v10 v10.9.0
go: downloading github.com/ugorji/go/codec v1.2.6
go: downloading github.com/ugorji/go v1.2.6
go: downloading github.com/go-playground/universal-translator v0.18.0
go: downloading github.com/leodido/go-urn v1.2.1
go: downloading golang.org/x/crypto v0.0.0-20210817164053-32db794688a5
go: downloading github.com/go-playground/locales v0.14.0
github.com/cgboal/sonarsearch/cmd/crobat-server/grpc
cmd/crobat-server/grpc/server.go:77:25: unknown field 'IPv4' in struct literal of type "github.com/cgboal/sonarsearch/proto".Domain
cmd/crobat-server/grpc/server.go:100:25: unknown field 'IPv4' in struct literal of type "github.com/cgboal/sonarsearch/proto".Domain
make: *** [Makefile:4: build] Error 2

any fix??

Documentation

In the future, do you plan to provide documentation on indexing the datasets, what kind of NoSQL database you use, and how many resources are used?

Regards

install Error

go install: github.com/Cgboal/SonarSearch/crobat@latest: module github.com/Cgboal/SonarSearch@latest found (v0.0.0-20220110222754-ddd8c134e2e4), but does not contain package github.com/Cgboal/SonarSearch/crobat

server down

{"Message": "server selection error: server selection timeout, current topology: { Type: Unknown, Servers: [{ Addr: localhost:27017, Type: Unknown, Average RTT: 0, Last error: connection() error occured during connection handshake: dial tcp 127.0.0.1:27017: connect: connection refused }, ] }"}

Every time I go to reverse any IP it gives me this,
so please fix it :)

Allow insecure communication on crobat client

Hello,

It would be nice to be able to specify an additional certificate to be used in environments that pass through TLS interception. This would require either an insecure mode or an explicit load of the root.pem file of the CA in the middle.

The current error is:

2021/07/17 22:51:52 No certs appended, using system certs only
INFO: 2021/07/17 22:51:52 [core] parsed scheme: ""
INFO: 2021/07/17 22:51:52 [core] scheme "" not registered, fallback to default scheme
INFO: 2021/07/17 22:51:52 [core] ccResolverWrapper: sending update to cc: {[{crobat-rpc.omnisint.io:443  <nil> 0 <nil>}] <nil> <nil>}
INFO: 2021/07/17 22:51:52 [core] ClientConn switching balancer to "pick_first"
INFO: 2021/07/17 22:51:52 [core] Channel switches to new LB policy "pick_first"
INFO: 2021/07/17 22:51:52 [core] Subchannel Connectivity change to CONNECTING
INFO: 2021/07/17 22:51:52 [core] blockingPicker: the picked transport is not ready, loop back to repick
INFO: 2021/07/17 22:51:52 [core] pickfirstBalancer: UpdateSubConnState: 0xc000317600, {CONNECTING <nil>}
INFO: 2021/07/17 22:51:52 [core] Channel Connectivity change to CONNECTING
INFO: 2021/07/17 22:51:52 [core] Subchannel picks a new address "crobat-rpc.omnisint.io:443" to connect
INFO: 2021/07/17 22:51:52 [transport] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
INFO: 2021/07/17 22:51:52 [core] Subchannel Connectivity change to TRANSIENT_FAILURE
INFO: 2021/07/17 22:51:52 [core] pickfirstBalancer: UpdateSubConnState: 0xc000317600, {TRANSIENT_FAILURE connection closed}
INFO: 2021/07/17 22:51:52 [core] Channel Connectivity change to TRANSIENT_FAILURE
2021/07/17 22:51:52 rpc error: code = Unavailable desc = connection closed

Thank you,
Nicolas

Typo in struct crobat.Domain

I tried to run this code locally (SonarSearch/cmd/crobat-server), but:

$go run main.go

# github.com/cgboal/sonarsearch/cmd/crobat-server/grpc
grpc/server.go:77:25: unknown field 'IPv4' in struct literal of type "github.com/cgboal/sonarsearch/proto".Domain (but does have Ipv4)
grpc/server.go:100:25: unknown field 'IPv4' in struct literal of type "github.com/cgboal/sonarsearch/proto".Domain (but does have Ipv4) 

So I fixed it by changing the field name from IPv4 to Ipv4:

		reply := &crobat.Domain{
			Domain: result.Domain,
			Ipv4:   result.IPv4,
		}

I want to help contribute but can't create a PR, so please check, man.

Error occurred while running the script

Hello,

When I run this script, I get the following error message. I tested this in different environments but still got the same error:
rpc error: code = Unknown desc = server selection error: server selection timeout, current topology: { Type: Unknown, Servers: [{ Addr: localhost:27017, Type: Unknown, Average RTT: 0, Last error: connection() error occured during connection handshake: dial tcp 127.0.0.1:27017: connect: connection refused }, ] }
