
autoidsserver's People

Contributors

travisbgreen, vampjaz


autoidsserver's Issues

More robust background thread

If the background thread crashes at any point, pcaps cannot be processed and the entire server needs to be restarted. We need to think of a better way to do this.

Perhaps spawning a new thread for every pcap would work. Each thread could acquire a global resource lock while it does the processing, so that nothing gets corrupted when multiple files are being processed.
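
The per-pcap-thread idea can be sketched like this (a minimal sketch; `process_pcap` is a hypothetical stand-in for the real IDS processing code):

```python
import threading

# Global lock so only one pcap is processed at a time, even though
# each upload gets its own worker thread.
processing_lock = threading.Lock()

def process_pcap(path, results):
    """Placeholder for the real processing step (runs the IDS engines)."""
    results.append(path)

def handle_upload(path, results):
    # If this thread crashes, only this pcap's job dies; the server
    # keeps running and other uploads are unaffected.
    with processing_lock:
        process_pcap(path, results)

def spawn_worker(path, results):
    t = threading.Thread(target=handle_upload, args=(path, results), daemon=True)
    t.start()
    return t
```

The lock serializes the actual processing, so resource usage stays the same as the current single background thread; the difference is that a crash is isolated to one upload.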

Manual rule input

Add a textbox to the main page that overrides the default rule sets, letting the user paste in rules to analyze with. Maybe it would only become visible when a dropdown ruleset selection is set to "custom".

update engines

Update engines to match engines available in IDSDeathBlossom

Only display the files that we specify

Not all of the logfiles are useful for the user to see, and some of them can even be a little too revealing. I'm thinking we whitelist a specific set of filenames, like eve.json, fast.log, and whatever else we think of.
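
The whitelist could be as simple as this (filenames beyond eve.json and fast.log are still to be decided):

```python
# Logfiles that are safe to expose to users; everything else is hidden.
VISIBLE_LOGS = {"eve.json", "fast.log"}

def visible_logfiles(filenames):
    """Filter a directory listing down to the whitelisted logfiles."""
    return [name for name in filenames if name in VISIBLE_LOGS]
```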

add shell scripts

make a setup script to

  • create user
  • create /var/pcap
  • set directory permissions
    • /var/pcap
    • running dir
  • option to run @reboot time in cron

other scripts

  • run_dev.sh
  • run_prod.sh

DDoS protection

If this is a public webapp with no account control, there needs to be a way to prevent people from uploading tons of junk to the server, taking up disk space and processing power.
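
One lightweight option is a sliding-window rate limit per client IP; the limit and window below are placeholder values, not decided numbers:

```python
import time
from collections import defaultdict, deque

# Placeholder policy: at most MAX_UPLOADS uploads per IP per WINDOW seconds.
MAX_UPLOADS = 5
WINDOW = 3600.0

_recent = defaultdict(deque)  # ip -> timestamps of recent uploads

def allow_upload(ip, now=None):
    """Return True if this IP is under its upload quota for the window."""
    now = time.time() if now is None else now
    q = _recent[ip]
    # Drop timestamps that have aged out of the window.
    while q and now - q[0] > WINDOW:
        q.popleft()
    if len(q) >= MAX_UPLOADS:
        return False
    q.append(now)
    return True
```

This only addresses junk uploads from a single source; a per-file size cap and a disk-usage quota would be needed alongside it.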

Pagination navigation for pcap listing

Currently the pagination on the pcap list allows you to go to the next page even if there is no next page. Perhaps we could make the next button disappear unless more than 40 pcaps come back for the current query:

c.execute('SELECT * FROM pcaps ORDER BY uploaded DESC LIMIT ? OFFSET ?', (40, 40 * (page - 1))) # get 40 pcaps, skipping the first 40*(page-1)

If we instead fetched 41 pcaps and displayed only the first 40, the presence of a 41st row would indicate that at least one more page exists, and only then would the next-page navigation be shown.
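
The fetch-one-extra-row trick, sketched against the existing query (`fetch_page` is a hypothetical helper name):

```python
import sqlite3

PAGE_SIZE = 40

def fetch_page(conn, page):
    """Return (rows, has_next) for one page of the pcap list."""
    c = conn.cursor()
    # Ask for one extra row; if it comes back, a next page exists.
    c.execute('SELECT * FROM pcaps ORDER BY uploaded DESC LIMIT ? OFFSET ?',
              (PAGE_SIZE + 1, PAGE_SIZE * (page - 1)))
    rows = c.fetchall()
    has_next = len(rows) > PAGE_SIZE
    return rows[:PAGE_SIZE], has_next
```

The template would then render the next button only when `has_next` is true.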

Logfile page navigation

Because of the length of logfiles and the difficulty in scrolling down the page to find them, it would be nice to have a page navigation sidebar in the logfile display. It would just link to an anchor on the page at each logfile header.

Re-run a file with different settings

Currently the system prevents you from running the same file through the system twice because files are identified by a hash. Even if the settings are different, the system will prevent it from running again.

Going along with the refactoring that #2 will entail, we should add the ability to re-run a file with different settings without having to reupload. Perhaps a link from the page for that pcap, combined with a redirect from the main page if you upload a file with the same hash as another in the database.

Large log files can hang the web server

The files that snort generates, especially, can reach tens of megabytes. Parsing those with the syntax highlighter and transferring them over the network causes a lot of lag and can make the server hang completely.

I propose that files up to a certain size are displayed in full. Beyond that size, they are truncated and a download link is provided for the raw file, which should be hosted on a static server like Apache to improve performance.

Search

It would be nice to be able to search the file archive by hash and keywords, maybe even to the point of searching through the logfile output to see whether a certain event happened in any of the logs.

Mark a file as private

A checkbox at upload time that keeps the file off the main list. Only people with access to the link (which is an MD5 of the file) would be able to view it. If they still have the file but have forgotten the link, they could re-upload it and be taken to that file's page, since it is already in the system.
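
Since the link is derived from the file contents, re-uploading always reproduces it; the `/pcap/` URL prefix below is a hypothetical route name:

```python
import hashlib

def file_link(data):
    """Build the (unlisted) page URL from the MD5 of the file contents."""
    return '/pcap/' + hashlib.md5(data).hexdigest()
```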

Log display page refresh

Because processing takes long enough that we would not want to hang the webserver by running IDSdeathblossom during a web request, it runs in the background. Usually a user needs to refresh the page manually a couple of seconds after submitting to get the results. This could be handled either by a simple HTML refresh every 5 seconds until the status is marked complete, or by having the server send the client a websocket message when the background thread finishes.
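
The HTML-refresh variant is the simpler of the two; a sketch of the template helper (assuming the status value 'complete' from the issue text):

```python
def refresh_snippet(status, seconds=5):
    """Emit a <meta> refresh tag until processing is marked complete."""
    if status == 'complete':
        return ''
    return '<meta http-equiv="refresh" content="%d">' % seconds
```

The page template would include this in its `<head>`, so finished pages stop reloading on their own.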

File representation

Current issue: files are identified by their hash. If we upload the same file but want to run it through a different engine than the one originally selected, or with a different rule set, the system simply says it is already in the database. We need a way to differentiate files by the engine and rules used on them, so that a second upload with different settings is not rejected.

Alternatively, each file can have links on the page to reprocess it with another engine or ruleset.
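
For the first option, one way is to key runs on the combination of file hash, engine, and ruleset rather than the file hash alone (`run_id` and the example engine/ruleset names are hypothetical):

```python
import hashlib

def run_id(file_hash, engine, ruleset):
    """Identify a run by (file, engine, ruleset) instead of file hash alone."""
    key = '%s|%s|%s' % (file_hash, engine, ruleset)
    return hashlib.sha1(key.encode()).hexdigest()
```

The same file then gets a distinct database row per engine/ruleset combination, while re-running identical settings is still deduplicated.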
