
sgt's Introduction

UPDATE: We have no plans to continue updating SGT moving forward. Please feel free to clone/fork this project, or try Kolide Fleet as an alternative (https://www.kolide.com/fleet/).

SGT: OSQuery Management Server Built Entirely on AWS!


SGT (Simple Go TLS) is an osquery management server written in Go and backed entirely by AWS services, making its infrastructure requirements extremely simple, robust, and scalable.

SGT is managed entirely through Terraform.

NOTE: SGT is under active development. Please help us improve by submitting issues!

Getting started.

NOTE If you are upgrading from a previous version, please see the release notes for 0.2.0

Getting started with SGT is designed to be very simple, with minimal setup required. To get started, however, you will need a FEW things first.

⚠️ There is currently an issue with Terraform versions 11.5-11.10 that will cause terraform to error out during destroy. The current workaround is to set the environment variable TF_WARN_OUTPUT_ERRORS=1 ⚠️

Prerequisites:
  1. An AWS account with admin access to DynamoDB, EC2, ES (Elasticsearch Service), Kinesis/Firehose, and IAM. (Note: this must be programmatic access, so you have an access key and secret to use.)
  2. Golang 1.9.0+
  3. Terraform 11.9+
  4. A domain with DNS managed via Route53. (Note: this does not mean you need to buy a domain; you can use an existing domain and just manage its DNS in Route53.)
  5. An SSL cert with public and private keypair. This will be used to terminate TLS connections to our server. See "Obtaining a free ssl cert for SGT with Letsencrypt" for one method of acquiring a certificate.
  6. An AWS profile configured.
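If you have never set up a named AWS profile, it lives in `~/.aws/credentials`. A minimal sketch is below; the profile name and key values are placeholders for your own:

```ini
[sgt-demo]
aws_access_key_id     = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKeyValue
```

You can also create this interactively with `aws configure --profile sgt-demo`.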

Installation

  1. Clone the repo

    git clone git@github.com:OktaSecurityLabs/sgt.git $GOPATH/src/github.com/oktasecuritylabs/sgt
    
  2. Change into the downloaded directory

    cd $GOPATH/src/github.com/oktasecuritylabs/sgt
    
  3. Build the project

    go build
    
  4. Copy your SSL certs to the proper directory. For this example, I'm using a subdomain of example.com with a Let's Encrypt certificate: sgt-demo.example.com. Let's Encrypt certs live in /etc/letsencrypt/live/<site>, so I'm copying them from there into the certs directory for SGT.

    sudo cp /etc/letsencrypt/live/sgt-demo.example.com/fullchain.pem certs/fullchain.pem
    sudo cp /etc/letsencrypt/live/sgt-demo.example.com/privkey.pem certs/privkey.pem
    
  5. Rename your certs to reflect which site they belong to. I recommend following the example format of

    example.domain.com.fullchain.pem
    

    Moving them:

     cd certs
     mv fullchain.pem sgt-demo.example.com.fullchain.pem
     mv privkey.pem sgt-demo.example.com.privkey.pem
     cd ..
    
  6. Create a new environment by following the prompts

    ./sgt wizard
    

    6a. Enter a name for your environment (I'm calling my demo one sgt-demo)

    Enter new environment name.  This is typically something like 'Dev' or 'Prod' or 'Testing', but can be anything you want it to be: sgt-demo
    

    6b. Choose the AWS profile to use (Mine is again called sgt-demo)

    Enter the name for the aws profile you'd like to use to deploy this environment
    if you've never created a profile before, you can read more about how to do this here
    http://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html
    a 'default' profile is created if you've installed and configured the aws cli:
    sgt-demo
    

    6c. Enter the IP address that you are currently deploying from.

    Enter an IP address or CIDR block for access to your elasticsearch cluster.
    Note:  This should probably be your current IP address, as you will need to be able to access
    elasticsearch via API to create the proper indices and mappings when deploying: xxx.xxx.xxx.xxx/24
    

    6d. Name your log bucket. I recommend something easily identified for your domain.

    Enter a name for the s3 bucket that will hold your osquery logs.
    Remember, S3 bucket names must be globally unique: sgt-demo.log.bucket
    

    6e. And your config bucket...

    Enter a name for the s3 bucket that will hold your server configuration
    Remember, S3 bucket names must be globally unique:
    sgt-demo.configuration.bucket
    

    6f. Enter your root domain

    Enter the domain you will be using for your SGT server.
    Note:  This MUST be a domain which you have previously registered or are managing through AWS.
    This will be used to create a subdomain for the SGT TLS endpoint
    example.com
    

    6g. Enter the subdomain (sgt-demo in my case)

    Enter a subdomain to use as the endpoint.  This will be prepended to the
    domain you provided as a subdomain
    sgt-demo
    

    6h. Enter your aws keypair name

    Enter the name of your aws keypair.  This is used to access ec2 instances if the need
    should ever arise (it shouldn't).
    NOTE:  This is the name of the keypair EXCLUDING the .pem file name and it must already exist in aws
    my-secret-key-name
    

    6i. Enter the names of your certificate chain and private key, as you named them above.

    Enter the name of the full ssl certificate chain bundle you will be using for
    your SGT server.  EG - full_chain.pem :
    sgt-demo.example.com.fullchain.pem
    Enter the name of the private key for your ssl certificate.  Eg - privkey.pem:
    sgt-demo.example.com.privkey.pem
    

    6j. Enter the node secret

    Enter the node secret you will use to enroll your endpoints with the SGT server
    This secret will be used by each endpoint to authenticate to your server:
    my-super-secret-node-secret
    

    6k. Enter the app secret

    Enter the app secret key which will be used to generate session tokens when
    interacting with the API as an authenticated end-user.  Make this long, random and complex:
    diu3piqeujr302348u33rqwu934r1@#)(*@3
    

    Select N when prompted to continue. Because this is a demo environment, we're going to make a small change to our configuration.

  7. Edit the environment config file found in terraform/<environment>/environment.json with your favorite editor and change the value for create_elasticsearch to 0. This disables the creation of Elasticsearch, which we will not be using for this demo. In a production environment, Elasticsearch would be a large part of your process, but it adds significant cost and is not needed for this demo.

    {
      "environment": "example_environment",
      "aws_profile": "default",
      "user_ip_address": "127.0.0.1",
      "sgt_osquery_results_bucket_name": "example_log_bucket_name",
      "sgt_config_bucket_name": "example_config_bucket_name",
      "domain": "somedomain.com",
      "subdomain": "mysubdomain",
      "aws_keypair": "my_aws_ec2_keypair_name",
      "full_ssl_certchain": "full_cert_chain.pem",
      "ssl_private_key": "privkey.pem",
      "sgt_node_secret": "super_sekret_node_enrollment_key",
      "sgt_app_secret": "ultra_mega_sekret_key_you'll_never_give_to_anyone_not_even_your_mother",
      "create_elasticsearch": 0
    }

Deploy!!

It's finally time to deploy, although hopefully that wasn't too painful. Deployment is by far the easiest part.

./sgt deploy -env <your environment name> -all

This will stand up the entire environment, including endpoint configuration scripts which we will use to set up some osquery nodes later. The entire process should take about 5-10 minutes depending on your internet connection, at which point you should be ready to install osquery on an endpoint and start receiving logs!
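As a sketch of what endpoint setup involves, an osquery TLS deployment is driven by a flags file like the one below. The exact endpoint paths and file locations for SGT come from the generated endpoint configuration scripts, so treat every value here as a placeholder:

```
--tls_hostname=sgt-demo.example.com
--tls_server_certs=/etc/osquery/sgt-demo.example.com.fullchain.pem
--enroll_secret_path=/etc/osquery/enroll_secret
--enroll_tls_endpoint=/enroll
--config_plugin=tls
--config_tls_endpoint=/config
--logger_plugin=tls
--logger_tls_endpoint=/log
```

The enroll secret file would contain the node secret you entered in the wizard.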

Note: This getting started guide originally appeared on blog.securelyinsecure.com, but I'm appropriating it for the docs as well, since it's better than the last readme I wrote.

Once you've installed Go and Terraform, and built your SGT binary, it's time to run your deployment!

The wizard will walk you through everything you need to configure a new environment, create the proper directory structure and the environment-specific configuration files, and stand up the environment if you choose to do so.

./sgt wizard

Among other things, the wizard will ask you to provide the "mail domain for the users of your Kibana dashboard". This should be the domain name used for the email addresses of the people who will be using the Kibana dashboard (example: company.com).

It will also ask for a comma-delimited list of users for the Kibana dashboard. The users in the list must correspond to the email addresses of the users. For example, if you wanted to initialize Kibana with 2 users (Some Guy, sguy@company.com; Someone Else, selse@company.com), your input at this prompt would be sguy,selse

When you are done with the wizard, you will be prompted to either continue and deploy the actual resources, or exit. If you choose to exit, you will need to deploy manually later.

Manual deployment

SGT can be deployed as a full environment or as individual components. (Note that components still require their dependencies to be built; they may just be updated individually to save time.)

To deploy SGT...

./sgt deploy -env <environment> -all

To deploy/update an individual component..

./sgt deploy -env <environment> -components elasticsearch,firehose

For a full list of commands, issue the -h flag.

If Terraform fails at any point during this process, cancel the installation (Ctrl+C) and review your errors. SGT depends on all previous deploy steps completing successfully, so it is important to make sure this occurs before moving on to the next steps.

Creating your first user.

To create a user to interact with SGT, run the create-user command with the requisite options

./sgt create-user -credentials-file <cred_file> -profile <profile> -username <username> -role <"Admin"|"User"|"Read-only">

Getting an Authentication Token

Using any portion of the end-user facing API requires an Authorization token. To get an auth token, send a POST request to /api/v1/get-token, supplying your username and password in the post body

{"username": "my_username", "password": "my_password"}

If your credentials are valid, you will receive a JSON response back

{"Authorization": "<long jwt>"}

Provide this token in any subsequent requests in the Authorization header
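For example, with curl; the host and credentials below are placeholders for your own deployment, so this exact call will not succeed as written:

```shell
# Request a session token from the get-token endpoint (placeholder host/credentials):
SGT_HOST="sgt-demo.example.com"
curl -s -X POST "https://${SGT_HOST}/api/v1/get-token" \
     -H "Content-Type: application/json" \
     -d '{"username": "my_username", "password": "my_password"}' || true

# A successful call returns {"Authorization": "<long jwt>"}; include that value
# in the Authorization header on subsequent API requests, e.g.:
#   curl -s "https://${SGT_HOST}/..." -H "Authorization: <long jwt>"
```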

Creating Additional Kibana Users Post-Deployment

  1. Log into the AWS account where you deployed sgt, and go to the cognito service page
  2. Click Manage User Pools
  3. Click the User Pool you created during the sgt deployment
  4. Click Users and Groups
  5. Click Create User
  6. In the Username text box, type the username portion of the user's email address
  7. Leave the box "Send an invitation to this new user?" checked
  8. Check the "Email" box
  9. Un-check the "Mark phone number as verified?" box
  10. In the Email text box, type the user's email address
  11. Click Create User

Documentation notes:

Documentation is lacking right now due to a rather un-fun flu season. However, updates to documentation should be expected in the coming week or so. (This note marked: 1/17/18)

(architecture diagram)

sgt's People

Contributors

chaimsanders-okta, johnrichards-okta, mattjane-okta, ryandeivert, securityclippy


sgt's Issues

[bug] deploy.ErrorCheck usage is unclear

Background

The deploy package contains an ErrorCheck function that is used a handful of times within the package and within the main package. The usage of this function is unclear, since it both logs the err using the Error level, and then also logs the err as Fatal level, which will both print the error and call Exit(1). The main issue is that there are multiple locations that the calls to this expect the error to be returned if one exists. For instance, this call would never actually have an effect since the program would have exited at the point of entering this if block. The offending code is here.

Requested Changes

I think the idea of this function is to be a helper to make error checking/logging/returning easier. However, it should be made clear if this function should be exiting or just logging the error and returning it. If I get some clarification on the desired functionality, I'd be happy to put up a PR to address this.

[improvement] optimize config loading and validation process

Background

The ParseDeploymentConfig function is called many times throughout the deploy process, for each individual service being deployed. Validation to ensure the loaded config file also matches the deployment environment name (in CheckEnvironMatchConfig) also occurs within each of these functions.

Requested Changes

Load the config at the start of the deploy and pass it to functions that require it. Since an invalid deploy config should stop the deploy process, there is no harm in doing this operation at the very start of the process.

Elasticsearch Documentation

Related to the other issue I opened, would you be able to provide documentation on leveraging Elasticsearch as well?

[Feature Request] Redo CLI to support chained flag and better naming

Is your feature request related to a problem? Please describe.
Deploying could be much clearer if flags were dependent on each other. Depending on usage of elasticsearch, it can often be difficult to know which component to pick.

Describe the solution you'd like
Chain a set of flags to allow only viable options to be selected, with useful help documentation for each flag

Move config files back to config bucket

At some point it appears there was a regression to storing the config files for the ec2 instances in the log bucket.

These files should be stored in the s3 config bucket.

Unsupported URL when downloading

Describe the bug
A clear and concise description of what the bug is.

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Expected behavior
A clear and concise description of what you expected to happen.

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • OS: [e.g. iOS]
  • Browser [e.g. chrome, safari]
  • Version [e.g. 22]

Smartphone (please complete the following information):

  • Device: [e.g. iPhone6]
  • OS: [e.g. iOS8.1]
  • Browser [e.g. stock browser, safari]
  • Version [e.g. 22]

Additional context
Add any other context about the problem here.

Cannot deploy SGT to specified region

Describe the bug
Cannot deploy SGT anywhere other than us-east-1

To Reproduce
Steps to reproduce the behavior:

  1. go through ./sgt wizard
  2. put in any other region than us-east-1
  3. deploy

Expected behavior
Deployment of sgt into region specified in wizard

Screenshots

* module.vpc.aws_subnet.sgt-PublicSubnet_us_east-1b: 1 error(s) occurred:

* aws_subnet.sgt-PublicSubnet_us_east-1b: Error creating subnet: InvalidParameterValue: Value (us-east-1b) for parameter availabilityZone is invalid. Subnets can currently only be created in the following availability zones: us-east-2a, us-east-2c, us-east-2b.

Desktop (please complete the following information):

  • OS: Mac
  • Browser: Chrome
  • Version: 80.0.3987.149

Smartphone (please complete the following information):

N/A

Additional context
Why ask for a region if you're just going to hard-code `us-east-1` into the terraform modules?
https://github.com/OktaSecurityLabs/sgt/blob/a2c0b8739e38e013468a07d0cc9b571d3231ec6e/terraform/modules/vpc/vpc.tf

[Improvement] Migrate TFstate to s3

Currently all states are kept locally. This is fine for single deployments, but this should be moved to s3 to allow teams to iterate on deployments.

[improvement] vendor dependencies using `govendor`

Background

Ever-changing external dependencies are difficult to rely on, so I'd suggest vendoring all dependencies so you do not have to worry about breaking changes from third party packages.

Requested Change(s)

Possible solution is to use govendor or similar to vendor all third party packages. See the repo and whitepaper for more info on how to accomplish this and the various other benefits not mentioned here.

There may be other/better ways to solve this also.

[improvement] add unit tests to be run with `go test`

Background

There are currently no unit tests for any packages. Unit tests should be added to provide better insight into regressions, identify areas for improvement, etc.

Requested Changes

Add unit tests for packages. It might be helpful to identify which packages are of the most importance and focus on them first.

Post-deployment documentation

Hello! I've successfully deployed SGT following the deployment documentation, but am a little confused on what the next steps are/how to proceed from here. Would you be able to provide some documentation on that front? Thanks!
