
velociraptor-to-timesketch's Introduction

velociraptor-to-timesketch


Watch our DFIR Summit talk: Breaches Be Crazy

We will be working on making this a pre-baked AMI, but here are the deployment steps in the meantime <3

Note: You may need to add/modify fs.inotify.max_user_watches in /etc/sysctl.conf. The default is 8192, and you may need to increase this number. Run sysctl -p after modifying.
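For example, to raise the limit (524288 below is just a common choice, not a project requirement):

    echo "fs.inotify.max_user_watches=524288" >> /etc/sysctl.conf
    sysctl -p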

Deployment

  • Deploy Timesketch instance - Deployment Directions
  • python3/pip3, awscli, unzip, and inotify-tools are required
    apt install python3 python3-pip unzip inotify-tools -y
    pip3 install --upgrade awscli
    
  • Configure AWS CLI
    aws configure 
    
  • Modify bucket_name in watch-s3-to-timesketch.py with your S3 bucket name
  • Modify BUCKET_NAME in watch-plaso-to-s3.sh with the same S3 bucket name (a sed sketch for both edits follows this list)
  • Modify $username and $password in watch-to-timesketch.sh
  • Add the Velociraptor artifact in Velociraptor and configure it with your AWS S3 bucket, region, and IAM credentials
  • Run deploy.sh
    ./deploy.sh
    
  • Kick off Windows.KapeFiles.Targets collection on one or more clients in Velociraptor
    • Wait for triage zip to upload to S3
    • Wait for zip to download to Timesketch instance from S3
    • log2timeline will begin processing data into a Plaso file
    • timesketch_importer will then bring it into Timesketch
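The two bucket-name edits above can also be scripted. A minimal sketch, assuming the values are assigned as bucket_name = "..." in the Python script and BUCKET_NAME="..." in the shell script; check the files first, since the exact assignment may differ:

    BUCKET="your-triage-bucket"   # example bucket name
    sed -i "s/^bucket_name = .*/bucket_name = \"$BUCKET\"/" watch-s3-to-timesketch.py
    sed -i "s/^BUCKET_NAME=.*/BUCKET_NAME=\"$BUCKET\"/" watch-plaso-to-s3.sh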
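To confirm each stage of that pipeline, you can watch the bucket and the service logs. A rough sketch; the unit name assumes the data-to-timesketch service that deploy.sh installs, and the bucket name is a placeholder:

    aws s3 ls "s3://your-triage-bucket/" --recursive   # has the triage zip landed?
    journalctl -u data-to-timesketch -f                # follow download/log2timeline/importer progress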


velociraptor-to-timesketch's Issues

Add LICENSE File

Hey, it would be great if you could add a license file so people who want to contribute to or use the project know the conditions under which they can.

Thanks much!

Parsing TimeSketch

Good morning,
once the logs have been imported into Timesketch from AWS, the output is as follows. If I perform the operation manually, the result is the same.

[screenshot: Timesketch output]

To get digestible output from Timesketch, do you need to modify the artifacts "Kape.Extract" or "Kape.Targets" before running the hunt? In this case I used both without modifying them.

With both S3 and the manual route (log2timeline), I cannot import data generated by Velociraptor into Timesketch. Timesketch has mandatory fields for correctly ingesting logs, and log2timeline has preset formats for converting files to Plaso. I haven't seen code in the repo that prepares the .json/.csv with these fields.

Thanks for the support, and happy holidays!

Feature Request for Convenience and a Bug

Just got done installing your lovely solution for timelining, and I love this workflow! However, I see some points for improvement:

  • Installer
    • Make uploading the complete .plaso file back to the S3 Bucket optional through a switch in deploy.sh that just does not activate the script watch-plaso-to-s3.sh
    • Docker continuously throws an error that it should not run as root. Can be checked using the command sudo docker logs --tail 50 --follow --timestamps timesketch_timesketch-worker_1 | less
    • Create an option for deleting all raw data after plaso processing for cases when you are strapped for storage.
    • Create an option for the Timesketch instance not being run in AWS. I have it running on an on-prem hypervisor with Velociraptor running in AWS. Further down the road, one might also consider shipping the data over SFTP instead of AWS to allow for a fully on-prem solution
  • watch-s3-to-timesketch.py
    • If the same hunt is executed twice, the filename in the S3 bucket will remain the same. It might be worth adding a unique ID to each item in the S3 bucket to identify it; these IDs could then be kept in a list/database on the Timesketch instance and checked prior to downloading. I have no good solution for this yet; maybe AWS has something built in. (A naming sketch follows this list.)
    • Currently there is a while True loop that sends requests at a very high frequency, which can quickly inflate your AWS bill: after running my pipeline for roughly 30 hours I had a €30 bill despite almost no data being transferred. Having the script poll every 10 seconds or so would drastically decrease the number of requests without slowing the pipeline significantly. (See the polling sketch after this list.)
    • The AWS credentials need to be put directly into the source code. Following the AWS best practice of a dedicated file at ~/.aws/credentials might be better. See this for reference
  • watch-to-timesketch.sh
    • The name of the service being installed (data-to-timesketch) differs from the name of the script, which is not the case for the Python downloader or the other bash script. It confused me for a moment; I would align the two as watch-data-to-timesketch
    • There is a bug that causes all data from the unzipped Kape .zip to be deleted instead of only the unimportant bits. This is due to the file path: at least in my installation it is [...]$SYSTEM/fs/fs/clients/[...] instead of [...]$SYSTEM/fs/clients/[...]. Check the following code for reference (a possible fix is sketched after the snippet):
# Remove from subdir
mv $PARENT_DATA_DIR/$SYSTEM/fs/clients/*/collections/*/uploads/* $PARENT_DATA_DIR/$SYSTEM/
# Delete unnecessary collection data
rm -r $PARENT_DATA_DIR/$SYSTEM/fs $PARENT_DATA_DIR/$SYSTEM/UploadFlow.json $PARENT_DATA_DIR/$SYSTEM/UploadFlow 
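A possible fix, as a sketch: try both layouts before deleting anything, so the fs/fs nesting I observed cannot cause data loss (untested beyond my own install):

# move uploads out, whether the layout is fs/clients or fs/fs/clients
for base in "$PARENT_DATA_DIR/$SYSTEM/fs" "$PARENT_DATA_DIR/$SYSTEM/fs/fs"; do
    mv "$base"/clients/*/collections/*/uploads/* "$PARENT_DATA_DIR/$SYSTEM/" 2>/dev/null
done
# delete the collection scaffolding only after the uploads have been moved out
rm -r "$PARENT_DATA_DIR/$SYSTEM/fs" "$PARENT_DATA_DIR/$SYSTEM/UploadFlow.json" "$PARENT_DATA_DIR/$SYSTEM/UploadFlow"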
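On the polling point: a minimal sketch of the idea in shell (bucket and directory are placeholders; in the Python script the equivalent is a time.sleep(10) inside the loop):

# poll the bucket every 10 seconds instead of in a tight loop
while true; do
    aws s3 sync "s3://your-triage-bucket/" /opt/triage-downloads/   # only fetches new/changed objects
    sleep 10
done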
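On the duplicate-name point: since the upload itself is done by the Velociraptor artifact, one workaround might be a post-upload rename so repeated hunts never collide. A sketch, with placeholder names:

# rename the object to a unique key before the watcher picks it up
aws s3 mv "s3://your-triage-bucket/$SYSTEM.zip" "s3://your-triage-bucket/$SYSTEM-$(date +%s)-$(uuidgen).zip"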

Ideas

I think a central config file might solve some of the issues I faced, but I am not sure whether this is the best way to go about it. I will try to create a pull request that offers a solution for the topics I mentioned. Furthermore, creating an SFTP-based alternative to the AWS-based pipeline would allow one to host the setup fully locally. I will see if I get around to that as well.
